\section{Introduction}
The \emph{ordinary} representation theory of finite groups over the complex
numbers was developed by Frobenius, Burnside, Schur and others around the
beginning of the 20th century, and the study of \emph{modular} representation
theory, dealing with
representations over fields of positive characteristic, was initiated by
Brauer around the middle of the 20th century. Still, amazingly enough, many
fundamental and easily formulated questions remain open in the ordinary as
well as in the modular representation theory of finite groups. For example,
in 1963 Brauer \cite{Br63} formulated a list of deep conjectures about
ordinary and modular representations of finite groups, most of which are yet
unsolved, and further important conjectures were subsequently put forward by
McKay, Alperin, Brou\'e and Dade. It is the purpose of this survey to expound
some of
the recent considerable advances on several of the major open conjectures in
this area. In this sense, this article can be seen as a continuation of the
1991 survey by Michler \cite{Mi91}. In particular, the opening question of
the introduction to that paper, namely whether central conjectures in (modular)
representation theory might be provable as a consequence of the classification
of finite simple groups, has recently received at least a partial positive
answer.
\par
We will concentrate here on so-called local-global conjectures which propose
to relate the representation theory of a finite group $G$ to that of its
\emph{local subgroups} for some prime $p$, that is, of subgroups ${\mathrm{N}}_G(P)$
where $P\le G$ is a non-trivial $p$-subgroup. The charm of these conjectures,
as so often, lies in the stunning simplicity of their formulation as opposed
to their seeming intractability. More specifically, we will discuss the McKay
conjecture, its block-wise version known as Alperin--McKay conjecture,
Brauer's height zero conjecture and Dade's conjecture, all of which concern
character degrees of finite groups, as well as the Alperin weight conjecture,
which postulates a formula for the number of modular irreducible characters
in terms of local data. A possible structural explanation of some instances of
these conjectures is offered by Brou\'e's abelian defect group conjecture.
All of these conjectures and observations point towards some hidden theory
explaining these phenomena, but so far we have been unable to find it.
\par
The approach of using local data to obtain global information on a finite group
had already proved very successful in the classification of finite simple
groups. Now, conversely, the classification seems to provide a way for proving
local-global conjectures in representation theory. The basic idea is to reduce
the conjectures to possibly more complicated statements about finite simple
groups, which we then hope to verify by using our detailed knowledge of these
groups. This approach has already proved successful in two
important cases, see Theorems~\ref{thm:MS} and~\ref{thm:H0}.
\par
Not unexpectedly, the attempt to apply the classification has on the one hand
led to the development of new, purely representation theoretic notions,
tools and results, and on the other hand it has made apparent that our
knowledge even of the ordinary representation theory of the finite simple
groups is far from sufficient for many purposes. In this way, the reduction
approach to the local-global conjectures has already spawned powerful new
methods, interesting questions and challenging research topics even outside
its immediate range.
\vskip 1pc
\noindent
{\bf Acknowledgement:} The author thanks Olivier Dudas, Eugenio Giannelli,
Radha Kessar, Caroline Lassueur, Gabriel Navarro and Britta Sp\"ath for
various helpful comments on an earlier version.
\section{The McKay Conjecture, the Alperin--McKay Conjecture and refinements}
The McKay conjecture \cite{Mc72} is the easiest local-global conjecture to
state. It could already have been formulated by Frobenius or Burnside, but
was first noticed only in 1972. It is also the origin, together with Alperin's
Weight Conjecture~\ref{conj:BAW}, of the more general Dade
Conjecture~\ref{conj:Dade} as well as of Brou\'e's Conjecture~\ref{conj:Br}.
\subsection{Characters of $p'$-degree}
For a finite group $G$ and a prime $p$ we denote by
$$\operatorname{Irr}_{p'}(G):=\{\chi\in\operatorname{Irr}(G)\mid p\text{ does not divide }\chi(1)\}$$
the set of irreducible complex characters of $G$ of degree prime to $p$.
\begin{conjecture}[McKay (1972)] \label{conj:McK}
Let $G$ be a finite group and $p$ be a prime. Then
$$|\operatorname{Irr}_{p'}(G)|=|\operatorname{Irr}_{p'}({\mathrm{N}}_G(P))| \, ,$$
where $P$ is a Sylow $p$-subgroup of $G$ and ${\mathrm{N}}_G(P)$ its normaliser.
\end{conjecture}
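For example, let $G={\mathfrak{S}}_4$ and $p=2$. The irreducible characters of $G$ have
degrees $1,1,2,3,3$, so $|\operatorname{Irr}_{2'}(G)|=4$; a Sylow $2$-subgroup $P$ of $G$ is
dihedral of order~$8$ and self-normalising, with character degrees $1,1,1,1,2$,
so indeed $|\operatorname{Irr}_{2'}({\mathrm{N}}_G(P))|=4$ as well.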
That is to say, certain fundamental information on the representation theory
of $G$ is encoded in local subgroups of $G$, namely in the Sylow normalisers.
In fact, McKay \cite{Mc72} made his conjecture only for $G$ a simple group
and for the prime $p=2$. It was Isaacs, in his landmark paper \cite{Is73},
who proved the conjecture for all groups of odd order and any prime $p$.
Soon afterwards, Alperin \cite{Al76} refined and extended the statement of
Conjecture~\ref{conj:McK} to include Brauer blocks, now known as the
Alperin--McKay conjecture. To formulate it let us fix a $p$-modular system
$(K,{\mathcal{O}},k)$, where ${\mathcal{O}}$ is a discrete valuation ring with field of fractions
$K$ of characteristic~0 and with finite residue field $k$ of characteristic~$p$,
large enough for the given finite group $G$. Then the group ring ${\mathcal{O}} G$
decomposes as a direct sum of minimal 2-sided ideals, the \emph{$p$-blocks} of
$G$, and every irreducible character of $G$ is non-zero on exactly one of these
blocks. This induces a partition $\operatorname{Irr}(G)=\coprod_B \operatorname{Irr}(B)$ of the irreducible
characters of $G$, where $B$ runs
over the $p$-blocks of $G$. To each block $B$ is attached a $p$-subgroup
$D\le G$, uniquely determined up to conjugacy, a so-called \emph{defect group}
of $B$. For a block $B$ with defect group $D$ we then write
$$\operatorname{Irr}_0(B):=\{\chi\in\operatorname{Irr}(B)\mid \operatorname{ht}(\chi)=0\}$$
for the set of \emph{height zero characters} in $B$; here the \emph{height}
$\operatorname{ht}(\chi)$ of the irreducible character $\chi$ is defined by the formula
$\chi(1)_p|D|_p=p^{\operatorname{ht}(\chi)}|G|_p$. Thus $\operatorname{Irr}_0(B)=\operatorname{Irr}_{p'}(B)$ if $D$ is
a Sylow $p$-subgroup of $G$.
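Explicitly, writing $|G|_p=p^a$ and $|D|=p^d$, the defining formula reads
$$\operatorname{ht}(\chi)=\nu_p(\chi(1))-(a-d),$$
where $\nu_p$ denotes the $p$-adic valuation, so the height zero characters are
exactly those $\chi\in\operatorname{Irr}(B)$ whose degree has the smallest possible $p$-part,
namely $\chi(1)_p=p^{a-d}$.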
Brauer has shown how to construct a $p$-block $b$ of ${\mathrm{N}}_G(D)$, closely
related to $B$, called the \emph{Brauer correspondent} of $B$. We then also
say that $B=b^G$ is the Brauer induced block from $b$.
\begin{conjecture}[Alperin (1976)] \label{conj:AM}
Let $G$ be a finite group, $p$ be a prime and $B$ a $p$-block of $G$ with
defect group $D$. Then
$$|\operatorname{Irr}_0(B)|=|\operatorname{Irr}_0(b)| \, ,$$
where $b$ is the Brauer correspondent of $B$ in ${\mathrm{N}}_G(D)$.
\end{conjecture}
Clearly, by summing over all blocks of maximal defect, that is, blocks whose
defect groups are Sylow $p$-subgroups of $G$, the Alperin--McKay
Conjecture~\ref{conj:AM} implies the McKay Conjecture~\ref{conj:McK}.
Soon after its formulation the Alperin--McKay Conjecture~\ref{conj:AM} was
proved for $p$-solvable groups by Okuyama and Wajima \cite{OW80} and
independently by Dade \cite{D80}. It has also been verified for symmetric
groups ${\mathfrak{S}}_n$ and alternating groups ${\mathfrak{A}}_n$ by Olsson \cite{Ol76}, and for
their covering groups and for the general linear groups $\operatorname{GL}_n(q)$ by Michler
and Olsson \cite{MO83,MO90}.
Subsequently, several refinements of this conjecture were proposed. The first
one by Isaacs and Navarro \cite{IN02} predicts additional congruences; here
$n_{p'}$ denotes the part prime to $p$ of an integer~$n$:
\begin{conjecture}[Isaacs--Navarro (2002)] \label{conj:IN}
In the situation of Conjecture~\ref{conj:AM} there exists a bijection
$\Omega:\operatorname{Irr}_0(B)\rightarrow\operatorname{Irr}_0(b)$ and a collection of signs
$(\eps_\chi|\chi\in\operatorname{Irr}_0(B))$ such that
$$\Omega(\chi)(1)_{p'}\equiv\eps_\chi\chi(1)_{p'}\pmod{p}.$$
\end{conjecture}
(Note that this is a true refinement of Conjecture~\ref{conj:AM} whenever
$p\ge5$: for $p\le3$ every integer prime to~$p$ is congruent to $\pm1$
modulo~$p$, so the congruence condition then holds automatically.)
This has been shown to hold for example for ${\mathfrak{S}}_n$, ${\mathfrak{A}}_n$ and their double
covers by Fong \cite{F03}, Nath \cite{N09} and Gramain \cite{Gr11}
respectively. Two further
refinements on the properties of the required bijection concerning the action
of those Galois automorphisms fixing a prime ideal above $p$ were put forward
in the same paper \cite{IN02}, and by Navarro \cite{Na04} respectively.
Yet another refinement due to Turull \cite{Tu07} includes $p$-adic fields and
Schur indices.
\subsection{A reduction theorem}
While Conjecture~\ref{conj:McK} was subsequently checked for several
further families of finite groups, the first significant breakthrough in the
case of general groups was achieved by Isaacs, Navarro and the author
\cite{IMN} in 2007, where the McKay conjecture was reduced to a question on
simple groups:
\begin{theorem}[Isaacs--Malle--Navarro (2007)] \label{thm:IMN}
The McKay Conjecture~\ref{conj:McK} holds for all finite groups at the prime
$p$, if all finite \emph{non-abelian simple} groups satisfy the so-called
\emph{inductive McKay condition} at the prime $p$.
\end{theorem}
This inductive condition for a simple group $S$ is stronger than just the
validity of McKay's conjecture for $S$, and in particular also involves the
covering groups and the automorphism group of the simple group in question:
If $S$ is simple, and $G$ is its universal covering group (the largest perfect
central extension of $S$), then the \emph{inductive McKay condition} on $S$ is
satisfied if, for some proper $\operatorname{Aut}(G)_P$-invariant subgroup $M<G$ containing
the normaliser ${\mathrm{N}}_G(P)$ of a Sylow $p$-subgroup $P$ of $G$,
\begin{enumerate}
\item[(1)] there exists an $\operatorname{Aut}(G)_M$-equivariant bijection
$\Omega:\operatorname{Irr}_{p'}(G)\rightarrow\operatorname{Irr}_{p'}(M)$, respecting central characters,
\item[(2)] such that the extendibility obstructions of $\chi$ and
$\Omega(\chi)$ to their respective inertia groups in $G\rtimes\operatorname{Aut}(G)$,
considered as 2-cocycles, coincide for all $\chi\in\operatorname{Irr}_{p'}(G)$.
\end{enumerate}
Here, for $X\le G$, $\operatorname{Aut}(G)_X$ denotes the stabiliser of $X$ in $\operatorname{Aut}(G)$.
Note that due to the inductive nature of the reduction argument we need not
descend all the way to ${\mathrm{N}}_G(P)$, but only to some intermediary subgroup~$M$
of our choice. As will be seen below this is very useful in the case of finite
groups of Lie type. In fact, the condition stated in \cite[\S10]{IMN} (where
this notion is called \emph{being good for the prime $p$}) even allows for a
slightly bigger group to be considered in place of $G$, which is particularly
useful in dealing with the finite groups of Lie type.
\par
The inductive McKay condition has been shown for the alternating and sporadic
groups by the author \cite{Ma08}, and for groups of Lie type in their defining
characteristic by Sp\"ath \cite{Sp12}, extending work of Brunat \cite{Br09}
and building on a result of Maslowski \cite{Ms10}. Thus only the simple groups
of Lie type at primes $p$ different from their defining characteristic remain
to be considered. These are best studied as finite reductive groups.
\subsection{The inductive condition for groups of Lie type}
If $G$ is the universal covering group of a simple group of Lie type, then up
to finitely many known exceptions
(see e.g.~\cite[Tab.~24.3]{MT}) there exists a simple linear algebraic group
${\mathbf{G}}$ of simply connected type over the algebraic closure of a finite field
${\mathbb{F}}_q$ and a Steinberg endomorphism $F:{\mathbf{G}}\rightarrow{\mathbf{G}}$ such that
$G={\mathbf{G}}^F$ is the finite group of fixed points in ${\mathbf{G}}$ under $F$, a finite
reductive group. Lusztig has obtained a parametrisation of the irreducible
complex characters of the groups ${\mathbf{G}}^F$ and in particular has determined their
degrees. To describe this, let us assume for simplicity that $F$ is the
Frobenius map with respect to some ${\mathbb{F}}_q$-structure of ${\mathbf{G}}$. Let ${\mathbf{G}}^*$ be
a \emph{dual group to} ${\mathbf{G}}$ (with root datum dual to the one of ${\mathbf{G}}$)
and with compatible Frobenius map on ${\mathbf{G}}^*$ also denoted by $F$. Then Lusztig
\cite{LuB} has constructed a partition
$$\operatorname{Irr}(G)=\coprod_{s\in G_{\operatorname{ss}}^*/\sim}{\mathcal{E}}(G,s)$$
of the irreducible characters of $G$ into \emph{Lusztig series} ${\mathcal{E}}(G,s)$
parametrised
by semisimple elements $s\in G^*:={\mathbf{G}}^{*F}$ up to $G^*$-conjugacy. Further for
any semisimple element $s\in G^*$ he obtained a \emph{Jordan decomposition}
$$\Psi_s:{\mathcal{E}}(G,s)\buildrel{1 - 1}\over\longrightarrow{\mathcal{E}}({\mathrm{C}}_{G^*}(s),1)$$
relating the Lusztig series ${\mathcal{E}}(G,s)$ to the so-called \emph{unipotent
characters} of the (possibly disconnected) group ${\mathrm{C}}_{G^*}(s)$, such that
the degrees satisfy
\begin{align}
\chi(1)& = |G^*:{\mathrm{C}}_{G^*}(s)|_{q'}\,\Psi_s(\chi)(1)\qquad
\text{ for all }\chi\in{\mathcal{E}}(G,s).\label{eq:JD}
\end{align}
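As an illustration, let $G=\operatorname{GL}_2(q)$, so that $G^*\cong G$, and let $s\in G^*$
be semisimple with two distinct eigenvalues in ${\mathbb{F}}_q^\times$. Then
${\mathrm{C}}_{G^*}(s)$ is a maximal torus of order $(q-1)^2$ with the trivial character
as its only unipotent character, so ${\mathcal{E}}(G,s)$ consists of a single character,
whose degree by~(\ref{eq:JD}) equals $|G^*:{\mathrm{C}}_{G^*}(s)|_{q'}=q+1$: a principal
series character.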
The unipotent characters of finite reductive groups have been classified by
Lusztig \cite{LuB} and he has given combinatorial formulas for their degrees.
It is thus in principle possible to determine the irreducible characters of
$G$ of $p'$-degree. For example, if $p$ is a prime not dividing $q$,
Equation~(\ref{eq:JD}) shows that $\chi\in{\mathcal{E}}(G,s)$ lies in $\operatorname{Irr}_{p'}(G)$ if
and only if $s$ centralises a Sylow $p$-subgroup of $G^*$ and the Jordan
correspondent $\Psi_s(\chi)$ lies in $\operatorname{Irr}_{p'}({\mathrm{C}}_{G^*}(s))$, thus yielding
a reduction to unipotent characters.
\par
The proper tool for discussing unipotent characters is provided by
\emph{$d$-Harish-Chandra theory}, introduced by Brou\'e--Malle--Michel
\cite{BMM} and further developed by the author \cite{MaU,Ma00,Ma07}. For this,
let
$$d=d_p(q):=\text{multiplicative order of $q$ }
\begin{cases} \text{modulo $p$}& \text{ if $p$ is odd},\\
\text{modulo $4$}& \text{ if $p=2$}.\end{cases}$$
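For example, $d_5(2)=4$, since $2$ has multiplicative order~$4$ modulo~$5$,
while for $p=2$ we have $d_2(q)=1$ if $q\equiv1\pmod4$ and $d_2(q)=2$ if
$q\equiv3\pmod4$.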
In \cite{Ma07} we give a parametrisation of $\operatorname{Irr}_{p'}(G)$ in terms of
combinatorial data related to the \emph{relative Weyl group}
${\mathrm{N}}_G({\mathbf{T}}_d)/{\mathrm{C}}_G({\mathbf{T}}_d)$ of a Sylow $d$-torus ${\mathbf{T}}_d$ of $G$. This is
always a finite complex reflection group. Here an $F$-stable torus ${\mathbf{T}}\le{\mathbf{G}}$
is called a \emph{$d$-torus} if it splits over ${\mathbb{F}}_{q^d}$ and no $F$-stable
subtorus of ${\mathbf{T}}$ splits over any smaller field. A $d$-torus of maximal
possible dimension in ${\mathbf{G}}$ is called a \emph{Sylow $d$-torus}. Such Sylow
$d$-tori are unique up to $G$-conjugacy~\cite{BM92}.
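For example, for $G=\operatorname{GL}_n(q)$ a Sylow $d$-torus ${\mathbf{T}}_d$ satisfies
$|{\mathbf{T}}_d^F|=\Phi_d(q)^a$ with $a=\lfloor n/d\rfloor$, where $\Phi_d$ denotes the
$d$-th cyclotomic polynomial, and the relative Weyl group
${\mathrm{N}}_G({\mathbf{T}}_d)/{\mathrm{C}}_G({\mathbf{T}}_d)$ is the complex reflection group $G(d,1,a)$, the
wreath product of a cyclic group of order~$d$ with ${\mathfrak{S}}_a$.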
On the other hand, the following result \cite[Thms.~5.14 and~5.19]{Ma07} shows
that in most cases we may choose $M:={\mathrm{N}}_G({\mathbf{T}}_d)$ as the intermediary
subgroup occurring in the inductive McKay condition:
\begin{theorem}[Malle (2007)] \label{thm:normSyl}
Let ${\mathbf{G}}$ be simple, defined over ${\mathbb{F}}_q$ with corresponding Frobenius map
$F:{\mathbf{G}}\rightarrow{\mathbf{G}}$ and let $G:={\mathbf{G}}^F$. Let $p{\not|}q$ be a prime divisor
of $|G|$, and set $d=d_p(q)$. Then the normaliser ${\mathrm{N}}_G({\mathbf{T}}_d)$ of a Sylow
$d$-torus ${\mathbf{T}}_d$ of $G$ contains a Sylow $p$-subgroup of $G$ unless one of
the following holds:
\begin{itemize}
\item[(a)] $p=3$, and $G=\operatorname{SL}_3(q)$ with $q\equiv4,7\pmod9$, $G=\operatorname{SU}_3(q)$
with $q\equiv2,5\pmod9$, or $G=G_2(q)$ with $q\equiv2,4,5,7\pmod9$; or
\item[(b)] $p=2$, and $G=\operatorname{Sp}_{2n}(q)$ with $n\ge1$ and $q\equiv3,5\pmod8$.
\end{itemize}
\end{theorem}
In particular, with this choice $M$ only depends on $d$, but not on the precise
structure of a Sylow $p$-subgroup or the Sylow $p$-normaliser, which makes a
uniform
argument feasible. The four exceptional series in Theorem~\ref{thm:normSyl}
can be dealt with separately (see \cite{Ma08b}). For example, part~(b) includes
the case that $G=\operatorname{SL}_2(q)$ where $q\equiv3,5\pmod8$ and the Sylow 2-normaliser
is isomorphic to $\operatorname{SL}_2(3)$, while torus normalisers are dihedral groups.
For the general case, in a delicate
Clifford theoretic analysis Sp\"ath \cite{Sp09,Sp10} has shown that
$\operatorname{Irr}_{p'}({\mathrm{N}}_G({\mathbf{T}}_d))$ can be parametrised by the same combinatorial
objects as for $\operatorname{Irr}_{p'}(G)$, thus completing the proof of:
\begin{theorem}[Malle (2007) and Sp\"ath (2010)]
Let ${\mathbf{G}}$ be simple, defined over ${\mathbb{F}}_q$ with corresponding Frobenius map
$F:{\mathbf{G}}\rightarrow{\mathbf{G}}$ and let $G:={\mathbf{G}}^F$. Let $p{\not|}q$ be a prime divisor
of $|G|$, $d=d_p(q)$, and assume that we are not in one of the exceptions (a)
or~(b) of Theorem~\ref{thm:normSyl}. Then there is a bijection
$$\Omega:\operatorname{Irr}_{p'}(G)\rightarrow\operatorname{Irr}_{p'}({\mathrm{N}}_G({\mathbf{T}}_d))\ \text{with }
\Omega(\chi)(1)\equiv\pm\chi(1)\!\!\!\!\pmod p\text{ for }
\chi\in \operatorname{Irr}_{p'}(G).$$
\end{theorem}
So in particular we obtain degree congruences as predicted by
Conjecture~\ref{conj:IN}.
The equivariance and cohomology properties of such a bijection
$\Omega$ required by the inductive McKay condition have at present
been shown by Cabanes--Sp\"ath \cite{CS13,CS15b} and the author \cite{Ma08b}
for all series of groups of Lie type except types $B_n$, $C_n$, $D_n$,
$\tw2D_n$, $E_6$, $\tw2E_6$
and $E_7$. The most difficult and complicated part was certainly the proof by
Cabanes and Sp\"ath \cite{CS15b} that the linear and unitary groups do
satisfy the inductive McKay condition. It relies on a powerful criterion
of Sp\"ath which allows one to descend a bijection for the much easier case
of $\operatorname{GL}_n(q)$, for example, to its quasi-simple subgroup $\operatorname{SL}_n(q)$ if the
inertia groups of $p'$-characters of $\operatorname{SL}_n(q)$ and of the intermediary
subgroup $M$ have a certain semidirect product decomposition, see
\cite[Thm.~2.12]{Sp12} for details. This criterion is shown to hold for linear
and unitary groups using Kawanaka's generalised Gelfand--Graev characters.
\par
The treatment of the remaining seven series of groups seems to require further
knowledge on their ordinary representation theory, not immediate from Lusztig's
results. Specifically, a solution will need to address the following:
\begin{problem}
For $G$ quasi-simple of Lie type, determine the action of $\operatorname{Aut}(G)$ on
$\operatorname{Irr}(G)$.
\end{problem}
More precisely, it is not known in general how outer automorphisms act on
irreducible complex characters of $G$ lying in series ${\mathcal{E}}(G,s)$ with
${\mathrm{C}}_{{\mathbf{G}}^*}(s)$ not connected. In particular, the ordinary character degrees
of extensions of $G$ by outer automorphisms are unknown.
The most recent and most far-reaching result in this area has been obtained by
Sp\"ath and the author \cite{MS15}, showing that McKay's original question has
an affirmative answer:
\begin{theorem}[Malle--Sp\"ath (2015)] \label{thm:MS}
The McKay conjecture holds for all finite groups at the prime $p=2$.
\end{theorem}
For the proof we show that the groups in the remaining seven families also
satisfy the inductive McKay condition at the prime $2$ and then apply
Theorem~\ref{thm:IMN}. This relies on an equivariant extension of the
Howlett--Lehrer theory of endomorphism algebras of induced cuspidal modules
for finite groups with a BN-pair, and on special properties of the prime~2
as a divisor of character degrees of groups of Lie type. Namely, except for the
characters of degree $(q-1)/2$ of $\operatorname{SL}_2(q)$ for $q\equiv3\pmod4$, non-linear
cuspidal characters are always of even degree. The latter statement fails
drastically for odd primes. An immediate extension to other primes thus seems
very hard.
The result of Theorem~\ref{thm:MS} shows that the approach to the local-global
conjectures via the reduction to finite simple groups is indeed successful.
\subsection{The block-wise reduction}
The strategy and proof of Theorem~\ref{thm:IMN} have become the blueprint
for all later reductions of other local-global conjectures. So Sp\"ath
\cite{Sp13a} saw how this reduction could be (simplified and then) extended
to the block-wise setting:
\begin{theorem}[Sp\"ath (2013)]
The Alperin--McKay Conjecture~\ref{conj:AM} holds for all finite groups at
the prime $p$, if all finite \emph{non-abelian simple} groups satisfy the
so-called \emph{inductive Alperin--McKay condition} at $p$.
\end{theorem}
In fact, her reduction also applies to the more precise Isaacs--Navarro
Conjecture~\ref{conj:IN} involving degree congruences.
The \emph{inductive Alperin--McKay condition} on a simple group $S$ is quite
similar to the inductive McKay condition as outlined above: Let $G$ denote the
universal covering group of $S$. Then for each isomorphism class of defect
group $D\le G$ we need a subgroup ${\mathrm{N}}_G(D)\le M_D\le G$, proper unless $D$ is
central, such that for each block $B$ with defect group $D$ and Brauer
corresponding block $b$ of $M_D$ there exists an $\operatorname{Aut}(G)_b$-equivariant
bijection $\Omega:\operatorname{Irr}_0(B)\rightarrow\operatorname{Irr}_0(b)$, respecting central
characters and having further rather technical properties phrased in terms of
projective representations of automorphism groups of $G$, see
\cite[Def.~7.2]{Sp13a} for details, as well as the article by Sp\"ath in this
volume \cite{Sp16}. Here again $\operatorname{Aut}(G)_b$ denotes the
stabiliser of the block $b$ in $\operatorname{Aut}(G)$. This condition has been verified by
Koshitani and Sp\"ath \cite{KS15a,KS15b} for all blocks with cyclic defect
groups, as well as for groups of Lie type in their defining characteristic and
alternating groups at odd primes by Sp\"ath \cite{Sp13a}, while Denoncin
\cite{De14} proved it for alternating groups at $p=2$. Cabanes and Sp\"ath
\cite{CS15a} show it for blocks of $\operatorname{SU}_n(q)$ and $\operatorname{SL}_n(q)$ of maximal
defect. For the sporadic groups see the
website by Breuer \cite{BrWeb}.
In this context we mention the following open question:
\begin{problem}
Find a reduction for the Alperin--McKay conjecture including the action of
certain Galois automorphisms as predicted by Isaacs and Navarro
\cite{IN02,Na04}.
\end{problem}
A recent result of Ladisch \cite{La15} can be seen as a first step towards such
a reduction. This might also give a hint for even more natural bijections in
the verification of the inductive conditions for groups of Lie type.
\section{Brauer's Height Zero Conjecture}
The Alperin--McKay Conjecture~\ref{conj:AM} predicts the number of characters
of height zero by local data. When are these all the irreducible characters
in a given block?
\subsection{Characters of height zero}
An answer is postulated in Brauer's Height Zero Conjecture \cite{Br55}
from~1955:
\begin{conjecture}[Brauer (1955)] \label{conj:H0}
Let $B$ be a $p$-block of a finite group with defect group $D$. Then
all irreducible characters in $B$ have height zero if and only if $D$ is
abelian.
\end{conjecture}
A positive solution would provide, for example, an extremely
simple method to detect from a group's ordinary character table whether its
Sylow $p$-subgroups are abelian: indeed, the Sylow $p$-subgroups are defect
groups of the principal block, and the characters (degrees) in the latter can
be read off from the character table.
The $p$-solvable case of Conjecture~\ref{conj:H0} is an impressive theorem
by Gluck and Wolf \cite{GW84}. All further substantial progress on this
question was made using the classification of finite simple groups. The most
far reaching general result so far concerns 2-blocks whose defect groups are
Sylow 2-subgroups \cite{NT12}:
\begin{theorem}[Navarro--Tiep (2012)] \label{thm:maxdef}
Let $B$ be a 2-block of a finite group of maximal defect. Then Brauer's
Height Zero Conjecture~\ref{conj:H0} holds for $B$.
\end{theorem}
In particular, the above criterion for detection of abelian Sylow $p$-subgroups
from the ordinary character table holds when $p=2$.
The proof of Theorem~\ref{thm:maxdef} relies on Walter's determination of
finite groups with abelian Sylow 2-subgroups as well as on Lusztig's
previously described classification of irreducible characters of finite
reductive groups.
\subsection{The ``if'' direction}
For the case of arbitrary blocks and primes, Berger and Kn\"orr \cite{BK}
derived the following optimal reduction to the same statement for blocks of
quasi-simple groups (recall that a finite group $G$ is \emph{quasi-simple}
if $G$ is perfect and $G/Z(G)$ is simple):
\begin{theorem}[Berger--Kn\"orr (1988)] \label{thm:BK}
The ``if''-direction of Brauer's Height Zero Conjecture~\ref{conj:H0} holds
for the $p$-blocks of all finite groups, if it holds for the $p$-blocks of all
quasi-simple groups.
\end{theorem}
First significant steps in the verification of the assumption of this reduction
theorem were subsequently obtained by Olsson \cite{Ol90} for the covering
groups of alternating groups. The case of groups of Lie type in their
defining characteristic is easy for this question, as defect groups are
either Sylow $p$-subgroups or trivial, and Sylow $p$-subgroups are non-abelian
unless we are in the case of $\operatorname{PSL}_2(q)$. For non-defining characteristic,
Blau and Ellers \cite{BE} obtained the following important result:
\begin{theorem}[Blau--Ellers (1999)] \label{thm:BE}
Brauer's Height Zero Conjecture~\ref{conj:H0} holds for all blocks of
quasi-simple central factor groups of $\operatorname{SL}_n(q)$ and $\operatorname{SU}_n(q)$.
\end{theorem}
\subsection{Blocks of groups of Lie type}
The case of the other quasi-simple groups of Lie type could only be settled
after having obtained a full parametrisation of their $p$-blocks. This
classification is very closely related to Lusztig induction and
can again be most elegantly phrased in terms of $d$-Harish-Chandra theory.
It was achieved over a period of more than 30 years by work of many authors.
As before let ${\mathbf{G}}$ be a connected reductive algebraic group defined over
${\mathbb{F}}_q$ with corresponding
Frobenius endomorphism $F:{\mathbf{G}}\rightarrow{\mathbf{G}}$, and let ${\mathbf{G}}^*$ be dual to ${\mathbf{G}}$.
The first general reduction step was given by Brou\'e and Michel \cite{BM89}
who showed a remarkable compatibility between Brauer blocks and Lusztig series:
for any semisimple $p'$-element $s\in {\mathbf{G}}^{*F}$ the set
${\mathcal{E}}_p({\mathbf{G}}^F,s):=\coprod_t {\mathcal{E}}({\mathbf{G}}^F,st)$ is a union of $p$-blocks, where
$t$ runs over $p$-elements in the centraliser ${\mathrm{C}}_{{\mathbf{G}}^*}(s)^F$. All further
progress is linked to Lusztig induction. For an $F$-stable Levi subgroup
${\mathbf{L}}\le{\mathbf{G}}$, using $\ell$-adic cohomology of suitable varieties attached to
${\mathbf{L}}$ and ${\mathbf{G}}$, Lusztig has defined an induction map
$$R_\bL^\bG:{\mathbb{Z}}\operatorname{Irr}({\mathbf{L}}^F)\rightarrow{\mathbb{Z}}\operatorname{Irr}({\mathbf{G}}^F).$$
Proving a conjecture of Brou\'e, Bonnaf\'e and Rouquier \cite{BR03}
showed that most of the series ${\mathcal{E}}_p({\mathbf{G}}^F,s)$ ``come from below'' (see
also the recent extension of this result by Bonnaf\'e, Dat and Rouquier
\cite[Thm.~7.7]{BDR}):
\begin{theorem}[Bonnaf\'e--Rouquier (2003)] \label{thm:BR}
Let $s\in{\mathbf{G}}^{*F}$ be a semisimple $p'$-element, and let ${\mathbf{L}}\le{\mathbf{G}}$ be an
$F$-stable Levi subgroup such that ${\mathrm{C}}_{{\mathbf{G}}^*}(s)\le{\mathbf{L}}^*$. Then $R_\bL^\bG$
lifts to Morita equivalences between the blocks in ${\mathcal{E}}_p({\mathbf{L}}^F,s)$ and in
${\mathcal{E}}_p({\mathbf{G}}^F,s)$.
\end{theorem}
This reduces the determination of blocks to the so-called \emph{quasi-isolated}
situation, that is to series ${\mathcal{E}}_p({\mathbf{G}}^F,s)$ where ${\mathrm{C}}_{{\mathbf{G}}^*}(s)$ is not
contained in any proper $F$-stable Levi subgroup of ${\mathbf{G}}^*$. Here crucial
steps were provided by Fong--Srinivasan \cite{FS86} for groups of
classical type, Brou\'e--Malle--Michel \cite{BMM} for unipotent blocks and
large primes, Cabanes--Enguehard \cite{CE99} for general blocks and primes
$p\ge5$, Enguehard \cite{E00} for unipotent blocks of exceptional type groups,
and Kessar--Malle \cite{KM13,KM15a} for the remaining quasi-isolated cases.
To describe the result, let
$${\mathcal{E}}({\mathbf{G}}^F,p'):=\{\chi\in{\mathcal{E}}({\mathbf{G}}^F,s)\mid
s\in{\mathbf{G}}_{\operatorname{ss}}^{*F}\text{ of $p'$-order}\},$$
the set of irreducible characters lying in Lusztig series labelled by
$p'$-elements. Then $R_\bL^\bG$ restricts to a map
${\mathbb{Z}}{\mathcal{E}}({\mathbf{L}}^F,p')\rightarrow{\mathbb{Z}}{\mathcal{E}}({\mathbf{G}}^F,p')$. Levi subgroups of the form
${\mathrm{C}}_{\mathbf{G}}({\mathbf{T}})$, where ${\mathbf{T}}$ is a $d$-torus of ${\mathbf{G}}$, are called \emph{$d$-split},
and $\chi\in\operatorname{Irr}({\mathbf{G}}^F)$ is called \emph{$d$-cuspidal} if it does not occur as
a constituent of $R_\bL^\bG(\la)$ for any proper $d$-split Levi subgroup ${\mathbf{L}}<{\mathbf{G}}$
and any $\la\in\operatorname{Irr}({\mathbf{L}}^F)$. More generally $\chi\in{\mathcal{E}}({\mathbf{G}}^F,s)$ is called
\emph{$d$-Jordan cuspidal} if its Jordan correspondent
$\Psi_s(\chi)\in{\mathcal{E}}({\mathrm{C}}_{{\mathbf{G}}^*}(s)^F,1)$ is $d$-cuspidal. With this, the
classification of $p$-blocks (in the smoothest case) can be formulated as
follows in terms of Lusztig induction:
\begin{theorem} \label{thm:p-blocks}
Let ${\mathbf{H}}$ be a simple algebraic group of simply connected type defined over
${\mathbb{F}}_q$ with corresponding Frobenius endomorphism $F:{\mathbf{H}}\rightarrow{\mathbf{H}}$. Let
${\mathbf{G}}\le {\mathbf{H}}$ be an $F$-stable Levi subgroup. Let $p{\not|}q$ be a prime
and set $d=d_p(q)$.
\begin{enumerate}
\item[\rm(a)] For any $d$-split Levi subgroup ${\mathbf{L}}\le{\mathbf{G}}$ and any
$d$-Jordan-cuspidal character $\la\in{\mathcal{E}}({\mathbf{L}}^F,p')$, there exists a unique
$p$-block $b({\mathbf{L}},\la)$ of ${\mathbf{G}}^F$ such that all irreducible constituents
of $R_\bL^\bG(\la)$ lie in $b({\mathbf{L}},\la)$.
\item[\rm(b)] The induced map $({\mathbf{L}},\la)\mapsto b({\mathbf{L}},\la)$ on
$G$-conjugacy classes of pairs as in~(a) is bijective if $p\geq 3$ is good
for ${\mathbf{G}}$, and if moreover $p\ne 3$ if ${\mathbf{G}}^F$ has a factor $\tw3D_4(q)$.
\end{enumerate}
\end{theorem}
A statement in full generality can be found in \cite[Thm.~A]{KM15a}. Kessar
and the author \cite{KM13} used this classification to complete the proof of
the ``if'' direction of Brauer's Height Zero Conjecture~\ref{conj:H0}, relying
on the Berger--Kn\"orr reduction (Theorem~\ref{thm:BK}) and on the Blau--Ellers
result (Theorem~\ref{thm:BE}), thus offering further proof of the viability
of the reduction approach to local-global conjectures:
\begin{theorem}[Kessar--Malle (2013)] \label{thm:H0}
Let $B$ be a $p$-block of a finite group. If $B$ has abelian defect groups,
then all irreducible characters in $B$ have height zero.
\end{theorem}
As an important step in the proof we show that the Bonnaf\'e--Rouquier Morita
equivalences in Theorem~\ref{thm:BR} preserve abelianity of defect groups (this
has now been reproved more conceptually in \cite{BDR}).
Navarro, Solomon and Tiep \cite{NST15} use Theorem~\ref{thm:H0} to derive an
effective criterion to decide the abelianity of Sylow subgroups from the
character table.
\subsection{The ``only if'' direction}
A crucial ingredient of Navarro and Tiep's proof of Theorem~\ref{thm:maxdef}
was a theorem of Gluck and Wolf for the prime~2. The missing odd-$p$ analogue
of this seemed to constitute a major obstacle towards establishing the
remaining, ``only if'' direction of the Height Zero Conjecture. Using the
classification of finite simple groups Navarro and Tiep \cite{NT13} have now
obtained a proof of this result:
\begin{theorem}[Navarro--Tiep (2013)]
Let $N\unlhd G$ be finite groups, $p$ a prime, and $\theta\in\operatorname{Irr}(N)$ a
$G$-invariant character. If $\chi(1)/\theta(1)$ is prime to $p$ for all
$\chi\in\operatorname{Irr}(G)$ lying above $\theta$ then $G/N$ has abelian Sylow
$p$-subgroups.
\end{theorem}
Building on this, Navarro and Sp\"ath \cite{NS14} succeeded in proving the
following reduction theorem for this direction of the conjecture:
\begin{theorem}[Navarro--Sp\"ath (2014)] \label{thm:NS}
The ``only if''-direction of Brauer's Height Zero Conjecture~\ref{conj:H0}
holds for all finite groups at the prime $p$, if
\begin{enumerate}
\item[(1)] it holds for all $p$-blocks of all quasi-simple groups, and
\item[(2)] all simple groups satisfy the inductive Alperin--McKay condition
at $p$.
\end{enumerate}
\end{theorem}
For their proof, they introduce and study the new notion of central block
isomorphic character triples.
The first assumption of Theorem~\ref{thm:NS} was recently shown to hold
\cite{KM15b}, again building on the classification of blocks of finite
reductive groups described before:
\begin{theorem}[Kessar--Malle (2015)]
The ``only if''-direction of Brauer's Height Zero Conjecture~\ref{conj:H0}
holds for all $p$-blocks of all quasi-simple groups.
\end{theorem}
Thus, Brauer's height zero conjecture will follow once the inductive
Alperin--McKay condition has been verified for all simple groups. This
again underlines the central importance of the Alperin--McKay
Conjecture~\ref{conj:AM} in the representation theory of finite groups.
\subsection{Characters of positive height}
Conjecture~\ref{conj:H0} only considers characters of height zero. It is
natural to ask what can be said about the heights of other characters
in a given block. There are two conjectural answers to this question. To state
the first one, for $B$ a $p$-block we define
$$\operatorname{mh}(B):=\min\{\operatorname{ht}(\chi)\mid \chi\in\operatorname{Irr}(B)\setminus\operatorname{Irr}_0(B)\},$$
the minimal positive height of a character in $\operatorname{Irr}(B)$, and we formally set
$\operatorname{mh}(B)=\infty$ if all characters in $B$ are of height~0. Eaton and Moret\'o
\cite{EM14} have put forward the following:
\begin{conjecture}[Eaton--Moret\'o (2014)] \label{conj:EM}
Let $B$ be a $p$-block of a finite group with defect group $D$. Then
$\operatorname{mh}(B)=\operatorname{mh}(D)$.
\end{conjecture}
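(Here $\operatorname{mh}(D)$ is computed in the unique $p$-block of $D$, which has defect
group $D$, so that $\operatorname{ht}(\chi)=\nu_p(\chi(1))$ for $\chi\in\operatorname{Irr}(D)$; for
instance $\operatorname{mh}(D)=1$ for the quaternion group $D=Q_8$ at $p=2$.)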
The case when $\operatorname{mh}(B)=\infty$ is Brauer's height zero conjecture, since
clearly all characters of the defect group $D$ are of height zero if and only
if $D$ is abelian.
Eaton and Moret\'o \cite{EM14} proved their conjecture for all blocks
of symmetric and sporadic groups, and for $\operatorname{GL}_n(q)$
for the defining prime. They also showed that for $p$-solvable groups we
always have $\operatorname{mh}(D)\le\operatorname{mh}(B)$, and that this inequality is true for all groups
if Dade's projective conjecture (see Section~\ref{sec:Dade}) holds. Brunat
and the author \cite{BM15} then checked that the Eaton--Moret\'o
Conjecture~\ref{conj:EM} holds for all principal blocks of quasi-simple groups,
for all $p$-blocks of quasi-simple groups of Lie type in characteristic~$p$,
all unipotent blocks of quasi-simple exceptional groups of Lie type, and all
$p$-blocks of covering groups of an alternating or symmetric group. No
reduction of this conjecture to simple groups is known, though.
A different approach to characters of positive height is given by Dade's
Conjecture, which we review in Section~\ref{sec:Dade} below.
\section{The Alperin Weight Conjecture} \label{sec:AWC}
While the McKay Conjecture counts characters of $p'$-degree, the Alperin
Weight Conjecture concerns characters whose degree has maximal possible
$p$-part, the so-called defect zero characters.
\subsection{Weights and chains} An irreducible character $\chi$ of a finite
group $G$ has \emph{defect zero} if $\chi(1)_p=|G|_p$. A \emph{$p$-weight}
of $G$ is a pair $(Q,\psi)$ where $Q\le G$ is a \emph{radical $p$-subgroup},
that is, $Q=O_p({\mathrm{N}}_G(Q))$, and $\psi\in\operatorname{Irr}({\mathrm{N}}_G(Q)/Q)$ is a defect zero
character. If $\psi$ lies in the
block $b$ of ${\mathrm{N}}_G(Q)$, then the weight $(Q,\psi)$ is said to \emph{belong to
the block $b^G$} of $G$. Alperin's original formulation of the weight
conjecture \cite{Al87} now proposes to count the $p$-modular irreducible
Brauer characters $\operatorname{IBr}(B)$ in a $p$-block $B$ in terms of weights:
\begin{conjecture}[Alperin (1986)] \label{conj:BAW}
Let $G$ be a finite group, $p$ be a prime and $B$ a $p$-block of $G$. Then
$$|\operatorname{IBr}(B)|=|\{[Q,\psi]\mid (Q,\psi)\text{ a $p$-weight belonging to }B\}|,$$
where $[Q,\psi]$ denotes the $G$-conjugacy class of the $p$-weight
$(Q,\psi)$.
\end{conjecture}
The name ``weights'' was apparently chosen since for groups of Lie type in
defining characteristic the irreducible Brauer characters are indeed labelled
by (restricted) weights of the corresponding linear algebraic groups.
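As a sanity check, let $G$ itself be a $p$-group. Every proper subgroup of $G$
is then properly contained in its normaliser, so the only radical $p$-subgroup
is $Q=G$, the quotient ${\mathrm{N}}_G(Q)/Q$ is trivial, and its trivial character is
the unique defect zero character. Hence there is exactly one weight, in
accordance with $|\operatorname{IBr}(G)|=1$.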
Alperin \cite{Al87} notes the following nice consequence of his conjecture:
\begin{theorem}[Alperin]
Assume that Conjecture~\ref{conj:BAW} holds. Let $B$ be a block with abelian
defect groups and $b$ its Brauer correspondent. Then $|\operatorname{Irr}(B)|=|\operatorname{Irr}(b)|$ and
$|\operatorname{IBr}(B)|=|\operatorname{IBr}(b)|$.
\end{theorem}
Kn\"orr and Robinson have given a reformulation of the weight conjecture in
terms of certain simplicial complexes related to the $p$-local structure of
the group $G$. For this, let ${\mathcal{P}}(G)$ denote
the set of chains $1<P_1<\ldots<P_l$ of $p$-subgroups of $G$. This induces a
structure of a simplicial complex on the set of non-trivial $p$-subgroups of
$G$. For such a chain $C=(1<P_1<\ldots<P_l)$, set $|C|=l$, the length of $C$,
and for
$B$ a $p$-block of $G$ let $B_C$ denote the union of all blocks $b$ of the
normaliser ${\mathrm{N}}_G(C)$ with $b^G=B$. With this notation, Kn\"orr and Robinson
\cite[Thm.~3.8]{KR89} obtain the following reformulation:
\begin{theorem}[Kn\"orr--Robinson (1989)] \label{thm:KR}
The following two assertions are equivalent for a prime $p$:
\begin{enumerate}
\item[\rm(i)] Conjecture~\ref{conj:BAW} holds for all $p$-blocks of all
finite groups;
\item[\rm(ii)] for all $p$-blocks $B$ of all finite groups $G$ we have
$$\sum_{C\in{\mathcal{P}}(G)/\sim}(-1)^{|C|}|\operatorname{IBr}(B_C)| =0,$$
where the sum runs over the chains in ${\mathcal{P}}(G)$ up to $G$-conjugacy.
\end{enumerate}
\end{theorem}
Here the set ${\mathcal{P}}$ can in fact also be replaced by the homotopy equivalent
sets of all chains of elementary abelian $p$-subgroups, or of all radical
$p$-subgroups, or by the set of chains in which all members are normal in the
larger ones.
By using M\"obius inversion it is possible from Theorem~\ref{thm:KR} to
describe the number of $p$-defect zero characters of $G$ in terms of local
subgroup information.
In the case of abelian defect groups, there is a strong relation between
Alperin's Weight Conjecture and the two previously introduced conjectures:
\begin{theorem}[Kn\"orr--Robinson (1989)]
The following two assertions are equivalent for a prime $p$:
\begin{enumerate}
\item[\rm(i)] the Alperin--McKay Conjecture~\ref{conj:AM} holds for every
$p$-block with abelian defect;
\item[\rm(ii)] Alperin's Weight Conjecture~\ref{conj:BAW} holds for every
$p$-block with abelian defect.
\end{enumerate}
\end{theorem}
In fact, Kn\"orr--Robinson \cite[Prop.~5.6]{KR89} had to assume the
validity of the ``if''-direction of Conjecture~\ref{conj:H0} which is now
Theorem~\ref{thm:H0}.
The Alperin Weight Conjecture~\ref{conj:BAW} was proved by Isaacs and Navarro
\cite{IN95}
for $p$-solvable groups. It holds for all blocks with cyclic or non-abelian
metacyclic defect group by work of Brauer, Dade, Olsson and Sambale, see the
lecture notes \cite{Sam14}. It was shown to hold for groups of Lie type
in defining characteristic by Cabanes \cite{Ca88}, for ${\mathfrak{S}}_n$
and for $\operatorname{GL}_n(q)$ by Alperin and Fong \cite{AF90}, and by J.~An for certain
groups of classical type, see \cite{An94} and the references
therein. The latter proofs rely on an explicit determination of all weights
in the groups under consideration.
\subsection{Reductions}
As in the case of the Alperin--McKay conjecture, the Alperin weight
conjecture was first reduced in a non-block-wise form to some stronger
inductive statement (AWC) about finite simple groups by Navarro and Tiep
\cite{NT11} in 2011. In the same paper, they verified their inductive AWC
condition for example for groups of Lie type in their defining characteristic,
as well as for all simple groups with abelian Sylow 2-subgroups, while An and
Dietrich \cite{AD12} show it for sporadic groups. This reduction was then
refined by Sp\"ath \cite{Sp13b} to treat the block-wise version:
\begin{theorem}[Sp\"ath (2013)]
The Alperin Weight Conjecture~\ref{conj:BAW} holds for all finite groups at
the prime $p$ if all finite \emph{non-abelian simple} groups satisfy the
so-called \emph{inductive block-wise Alperin weight condition (BAW)} at $p$.
\end{theorem}
Puig \cite{Pu11,Pu12} has announced another reduction of
Conjecture~\ref{conj:BAW} to nearly simple groups.
As in the case of the other inductive conditions, the \emph{inductive BAW
condition} for a simple group $S$ requires the existence of suitable equivariant
bijections at the level of the universal $p'$-covering group $G$ of $S$, this
time between $\operatorname{IBr}(B)$ and the weights attached to the block $B$ of $G$,
see \cite[Def.~4.1]{Sp13b} and also \cite{Sp16}.
In the same paper Sp\"ath shows that her inductive BAW condition holds for
various classes of simple groups, including the groups of Lie type in their
defining characteristic and all simple groups with abelian Sylow
2-subgroups.
The inductive BAW condition has meanwhile been established by Breuer
\cite{BrWeb} for most sporadic simple groups, by the author \cite{Ma14} for
alternating groups, the Suzuki and the Ree groups, and by Schulte
\cite{Sch15} for the families of exceptional groups $G_2(q)$ and $\tw3D_4(q)$.
Koshitani and Sp\"ath \cite{KS15a} show that it holds for all blocks with
cyclic defect groups when $p$ is odd.
For blocks $B$ with abelian defect groups Cabanes--Sp\"ath
\cite[Thm.~7.4]{CS13} and the author \cite[Thm.~3.8]{Ma14} have observed a
strong relation between the inductive BAW condition and the inductive
Alperin--McKay condition; we give here an even more general version from
\cite[Thm.~1.2]{KS15a}:
\begin{theorem}[Koshitani--Sp\"ath (2015)] \label{thm:KS}
Let $S$ be non-abelian simple with universal covering group $G$, $B$ a
$p$-block of $G$ with abelian defect group $D$ and Brauer correspondent $b$
in ${\mathrm{N}}_G(D)$. Assume that the following hold:
\begin{enumerate}
\item[\rm(1)] The inductive Alperin--McKay condition holds for $B$ with
respect to $M:={\mathrm{N}}_G(D)$ with a bijection $\Omega:\operatorname{Irr}_0(B)\rightarrow
\operatorname{Irr}_0(b)$; and
\item[\rm(2)] the decomposition matrix associated to
$\Omega^{-1}(\{\chi\in\operatorname{Irr}(b)\mid D\le\ker(\chi)\})$ is lower unitriangular
with respect to some ordering of the characters.
\end{enumerate}
Then the inductive BAW condition holds for $B$ (considered as a block of
$G/O_p(G)$).
\end{theorem}
This result highlights the importance of the existence of basic sets. Recall
that $X\subseteq\operatorname{Irr}(B)$ is a \emph{basic set for $B$} if the restrictions
to $p'$-elements of the $\chi\in X$ are linearly independent and span the
lattice ${\mathbb{Z}}\operatorname{IBr}(B)$ of Brauer characters. Such basic sets are known to exist
for groups of Lie type ${\mathbf{G}}^F$ whenever the prime $p$ is good and does not
divide $|Z({\mathbf{G}}^F)|$: by a result of Geck and Hiss \cite{GH91}, ${\mathcal{E}}({\mathbf{G}}^F,p')$
is a basic set for ${\mathbf{G}}^F$, which moreover by definition is $\operatorname{Aut}(G)$-invariant.
Denoncin \cite{De15} has recently constructed such basic sets for the special
linear and unitary groups for all non-defining primes building on work
of Geck \cite{Ge91}.
It is an open question, formulated in \cite[(1.6)]{GH91}, whether basic sets
exist for the blocks of all finite groups.
\begin{problem}
Construct natural $\operatorname{Aut}(G)_B$-invariant basic sets for blocks $B$ of finite
groups of Lie type $G$.
\end{problem}
Given an $\operatorname{Aut}(G)$-invariant basic set, condition~(2) of Theorem~\ref{thm:KS}
would be satisfied for example if the $p$-modular decomposition matrix of $G$
is lower unitriangular with respect to this basic set. This property is widely
believed to hold for groups of Lie type, and has been shown in a number
of important situations, for example by Gruber and Hiss \cite{GH97} if $G$ is
of classical type and the prime $p$ is \emph{linear} for $G$, as well as for
$\operatorname{SL}_n(q)$ and $\operatorname{SU}_n(q)$ by
Kleshchev and Tiep \cite{KT09} and Denoncin \cite{De15}, respectively.
\begin{problem}
Show that decomposition matrices of finite reductive groups in non-defining
characteristic have unitriangular shape.
\end{problem}
In the case of arbitrary defect it then still remains to determine the weights.
The weights of certain classical groups as well as of several series of
exceptional groups of Lie type of small rank have been determined by An and
collaborators, see e.g.~\cite{An94,ADH14}, but this has not resulted in a
general, type-independent approach.
\begin{problem}
Give a generic description of weights of finite reductive groups, possibly
in the spirit of $d$-Harish-Chandra theory. Is there an analogue of Jordan
decomposition for weights?
\end{problem}
\section{Dade's Conjecture} \label{sec:Dade}
Dade's Conjecture \cite{D92} extends the Kn\"orr--Robinson formulation in
Theorem~\ref{thm:KR} of the Alperin Weight Conjecture, and suggests a way to
count the characters of any defect in terms of the local subgroup structure.
It thus generalises both the Alperin--McKay Conjecture~\ref{conj:AM} and the Alperin
Weight Conjecture~\ref{conj:BAW}. For this let us write
$$\operatorname{Irr}_d(B):=\{\chi\in\operatorname{Irr}(B)\mid \operatorname{ht}(\chi)=d\}$$
for the irreducible characters in a block $B$ of height~$d$. Recall the set
${\mathcal{P}}(G)$ of chains of $p$-subgroups of $G$ from the previous section. The
so-called \emph{projective form} of Dade's conjecture claims:
\begin{conjecture}[Dade (1992)] \label{conj:Dade}
Let $B$ be a $p$-block of a finite group $G$. Then
$$\sum_{C\in{\mathcal{P}}(G)/\sim}(-1)^{|C|}|\operatorname{Irr}_d(B_C|\nu)| =0\qquad
\text{for every }\nu\in\operatorname{Irr}(O_p(G))\text{ and }d\ge0,$$
where the sum runs over chains in ${\mathcal{P}}(G)$ up to $G$-conjugacy.
\end{conjecture}
As for the Kn\"orr--Robinson formulation of Alperin's Weight Conjecture, the
set ${\mathcal{P}}$ of chains may be replaced by chains involving only elementary
abelian $p$-subgroups, or only radical $p$-subgroups.
Dade's Conjecture was proved for $p$-solvable groups by Robinson \cite{Ro00}.
An has shown Dade's conjecture for general linear and unitary groups in
non-defining characteristic, and for various further groups of Lie type of
small rank, see e.g.~\cite{An01}.
Recently, in a tour de force Sp\"ath \cite{Sp15} managed to reduce a suitable
form of Dade's conjecture to a statement on simple groups:
\begin{theorem}[Sp\"ath (2015)]
Dade's projective Conjecture~\ref{conj:Dade} holds for all finite groups at
the prime $p$, if all finite \emph{non-abelian simple} groups satisfy the
so-called \emph{character triple conjecture} at $p$.
\end{theorem}
The character triple conjecture (see \cite[Conj.~1.2]{Sp15}) is a statement
about chains in ${\mathcal{P}}$ similar to Dade's projective conjecture, but as in the
previous inductive conditions it also involves the covering groups and the
action of automorphisms. It has been proved for blocks with cyclic defect,
the blocks of sporadic quasi-simple groups except for the baby monster $B$
and the monster $M$ at $p=2$, and for $\operatorname{PSL}_2(q)$ \cite[Thm.~9.2]{Sp15}.
\begin{problem}
Find a generic way to describe $p$-chains in finite reductive groups.
\end{problem}
\section{Brou\'e's Abelian Defect Group Conjecture}
An attempt to give a structural explanation for all of the ``numerical''
local-global conjectures mentioned so far is made by Brou\'e's conjecture, at
least in the case of blocks with abelian defect groups. Recall that the
Alperin--McKay Conjecture~\ref{conj:AM} relates character degrees of a
$p$-block $B$ of a finite group $G$ with defect group $D$ to those of a
Brauer corresponding block $b$ of ${\mathrm{N}}_G(D)$. Brou\'e \cite{Br90} realised
that this numerical relation would be a consequence of a (conjectural) intimate
structural relation between the module categories of the ${\mathcal{O}}$-algebras $B$
and $b$:
\begin{conjecture}[Brou\'e (1990)] \label{conj:Br}
Let $B$ be a block of a finite group with defect group $D$ and $b$ its Brauer
corresponding block of ${\mathrm{N}}_G(D)$. Then the bounded derived module categories
of $B$ and of $b$ are equivalent.
\end{conjecture}
Brou\'e shows that the validity of his conjecture for a block $B$ would imply
the Alperin--McKay conjecture as well as the Alperin weight conjecture for $B$.
Brou\'e's conjecture holds for all blocks of $p$-solvable groups, since in
this case by Dade \cite{D80} and Harris--Linckelmann \cite{HL00}, the two
blocks in question are in fact Morita equivalent. It also holds for blocks with
cyclic or Klein four defect groups by work of Rickard \cite{Ri89,Ri96}. Using
the classification of finite simple groups it has been shown for principal
blocks with defect group $C_3\times C_3$ by Koshitani and Kunugi \cite{KK02},
and it has been verified for many blocks with abelian defect of sporadic
simple groups.
In their landmark paper \cite{CR1} Chuang and Rouquier have given a proof of
Brou\'e's conjecture for ${\mathfrak{S}}_n$ and for $\operatorname{GL}_n(q)$, building on
previous results of Chuang and Kessar \cite{CK02} and Turner \cite{Tu02} who
obtained derived equivalences for a very particular class of blocks, the
so-called Rouquier blocks. Dudas, Varagnolo and Vasserot \cite{DVV} have
constructed derived equivalences between blocks of various finite unitary
groups which together with a result of Livesey \cite{Liv15} provides a
verification of Brou\'e's conjecture for $\operatorname{GU}_n(q)$ for linear primes.
Brou\'e and the author \cite{BMM} proposed a more precise form of
Conjecture~\ref{conj:Br} in the case of unipotent blocks of finite groups of
Lie type in terms of the $\ell$-adic cohomology of Deligne--Lusztig varieties.
This version has recently been explored by Craven and Rouquier \cite{CR2}
using the concept of perverse equivalences.
Despite these partial successes, in contrast to the situation for the other
conjectures stated earlier, there is no general reduction theorem for Brou\'e's
conjecture to a condition on simple groups. A further challenge lies in the
fact that currently Brou\'e's conjecture is only formulated for blocks with
abelian defect groups, and it remains unclear how a generalisation to blocks
of arbitrary defect might look.
\section*{Results}
\subsection*{Uniqueness of human behavior}
To evaluate the likelihood of identifying individuals within smartphone usage data we use a dataset that spans 12 months (Feb. 1st 2016 to Jan. 31st 2017) and encompasses 3.5 million people using in total 1.1 million unique apps.
We have chosen to disregard phone-vendor-specific apps, such as alarm clock apps, built-in phone dialer apps, etc. and only focus on apps that are downloadable from Google Play. From this we form app \emph{fingerprints} for each user, i.e. a binary vector containing information about which apps the user has used for every month. We only consider apps actually used by a user in a month, not apps that were installed but never used.
Figure 1 illustrates the typical patterns of app usage, with individuals continuously changing their app-fingerprint over the course of a year by trying out new apps and ceasing to use others.
As such, app-fingerprints slowly drift over time, with the average rate of change being roughly constant between consecutive months (Figure S1).
While fingerprints drift, the number of apps people use on their smartphones remains constant over time, suggesting that humans have a limited capacity for interacting with, navigating, and managing the plethora of services and social networks offered by smartphones (Figure S2).
This limiting effect has been observed in other aspects of life such as interactions among people~\cite{dunbar1992neocortex} or geo-spatial exploration~\cite{alessandretti2016evidence}.
\begin{figure}[!hptb]
\centering
\includegraphics[width=0.49\textwidth]{figures/version_1/illustration.pdf}
\label{fig:illustration}
\caption{Smartphone usage patterns change over time, with users continuously changing which apps they use.
This study is based on smartphone app-fingerprints of 3,556,083 individuals.
For each month between February 2016 and January 2017, we retrieve the list of apps a person has used during the period ($n_{\text{month}} = 23$ apps per person per month on average, or $n_{\text{year}} = 76$ apps on average during the full 12-month period).
App-fingerprints are represented as a sparse \textit{user} $\times$ \textit{app} $\times$ \textit{month} tensor, with $1$ indicating a person has used an app during a specific month, $0$ otherwise.
To look at longer time-windows, we aggregate entries according to a maximum value heuristic and retain entries if they are greater than zero.
}
\end{figure}
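To make the aggregation in the caption of Figure~1 concrete, the following toy sketch (with made-up dimensions and random usage data, not the pipeline used in the study) illustrates the maximum value heuristic:
\begin{verbatim}
import numpy as np

# Hypothetical toy tensor: tensor[u, a, m] = 1 if user u used app a in
# month m, 0 otherwise (filled here with sparse random usage).
rng = np.random.default_rng(0)
tensor = (rng.random((1000, 200, 12)) < 0.05).astype(np.int8)

# Maximum over the month axis: an app enters the aggregated fingerprint
# as soon as it was used in at least one month of the window.
yearly = tensor.max(axis=2)   # shape: users x apps
\end{verbatim}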
The risk of re-identifying individuals is estimated by means of unicity~\cite{de2013unique,de2015unique}.
Here, re-identification corresponds to successful assignment of an app-fingerprint to a single unique user in our dataset.
This does not entail that we can directly get the \textit{real} identity of a person, such as name, address, e-mail, social security number, etc.
This, however, would become possible if this knowledge were cross-referenced with other data sources, of which there unfortunately have been countless examples~\cite{narayanan2008robust,barbaro2006face,barth2012re,sweeney2013identifying,tockar2014riding}.
Given an individual's app-fingerprint, unicity quantifies the number of apps needed to uniquely re-identify that person; the fewer apps we need the more unique a person is and vice versa.
Given a dataset of app-fingerprints and a set of apps $i$, $j$ and $k$, a user $u$ is uniquely identifiable if that user, and only that user, in the dataset has used apps $i$, $j$ and $k$, i.e. if these apps match the fingerprint of user $u$ alone.
In our dataset we evaluate uniqueness as the percentage of users we can re-identify using $n$ apps.
To attack the dataset without any prior knowledge of the system itself, the most realistic strategy is to pick apps at random.
Figure 2A shows the efficiency of this type of random sampling of apps, with $21.8\%$ of users being re-identified from using 4 apps.
Although this value means only 1 in every 5 individuals can be re-identified, it is surprisingly high given that we only use binary features (that is, has the user used the app or not) and have no information regarding \emph{when} an app was used or for \emph{how long}---features which would only make fingerprints more unique.
In case of a real attack, however, the above results might give the general public a false sense of security as it is possible to use free, publicly available information to formulate an attack strategy that greatly outperforms the random strategy.
The popularity of apps follows a heavy-tailed distribution~\cite{olmstead2016apps} (and see Figure S3); a few apps are used by millions or even billions of individuals, while an overwhelming majority of apps only have a couple of users.
All this information is available on \emph{Google Play}, from where it can be retrieved by automatic means, or it can be purchased from vendors such as AppMonsta.
Because this information is so easily attainable, we formulate a strategy that takes the user base of apps (popularity of apps) into account, starting with the least used apps: the \textit{popularity strategy}.
Rather than using the popularity in terms of downloads on Google Play, we use the popularity counted as the number of users that use an app in our dataset (see Methods for details).
A real-world re-identification attack strategy could use the Google Play download numbers for each app to reduce the amount of computation required.
Figure 2B shows that just using 2 apps with the popularity strategy greatly outperforms the random strategy, and using 4 apps, we are able to re-identify $91.2\%$ of users.
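Under the same illustrative data layout as the sketch above, the popularity strategy simply replaces the random selection by picking the user's least-used apps first; the per-app user counts would be computed once from the dataset (or taken from Google Play):
\begin{verbatim}
def app_counts(fingerprints):
    # Number of users of each app in the dataset.
    counts = {}
    for f in fingerprints.values():
        for app in f:
            counts[app] = counts.get(app, 0) + 1
    return counts

def least_popular_apps(fingerprints, user, n, counts):
    # Pick the n apps of this user with the smallest user base.
    return sorted(fingerprints[user], key=lambda a: counts[a])[:n]
\end{verbatim}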
\begin{figure}[!hptb]
\centering
\includegraphics[width=0.49\textwidth]{figures/version_1/unicity_5.pdf}
\caption{Uniqueness of smartphone app-fingerprints given $n$ number of apps. (A) Selecting apps at random is not an efficient way of identifying individuals and achieves a modest re-identification rate of $21.8\%$ when using 4 apps. (B) Using freely available outside information from Google Play to attack the problem yields significantly higher rates of re-identification, $91.2\%$ when using 4 apps. Error bars denote one standard deviation.
App-fingerprints are constructed from the full 12 months of data, and $99.7\%$ of individuals within our dataset have a unique fingerprint.}
\label{fig:unicity}
\end{figure}
\subsection*{Seasonal variability of anonymity}
Human lives, routines and behaviors evolve over time~\cite{kossinets2006empirical,sekara2016fundamental,alessandretti2016evidence}, and therefore individual app-fingerprints might become harder (or easier) to identify.
To quantify the seasonal variability of uniqueness, we construct monthly fingerprints for all individuals and evaluate anonymity using the unicity framework.
Figure 3 shows the fraction of individuals that are re-identifiable per month, and reveals an increased fraction of identifications for June, July, and August---months which are typically considered vacation months.
The increase in uniqueness is independent of how we select apps (random, or by popularity).
In fact, during these three months the process of identifying individuals from randomly selected apps is respectively $14.8\%$ and $18.4\%$ more effective when using $5$ and $10$ apps.
For the popularity scheme, we note $6.8\%$ and $8.0\%$ higher rates of identifications when using $5$ and $10$ apps.
The increase in identifiability stems from a combination of related behavioral changes (Figure S4).
Apps related to categories such as travel, weather, sports, and health \& fitness gain popularity during the summer months (June, July, August), related to people traveling and downloading apps that help them navigate new cities, using fitness apps to motivate them to exercise more, and using apps that enable them to follow global sports events such as the 2016 UEFA European Championship in football (soccer).
Simultaneously, apps related to categories such as education and business become less popular.
This suggests an interplay between our physical behavior and our app-fingerprint, indicating that when we change our geo-spatial routines by traveling and exploring new places, we also change our app usage. This change in phone behavior makes our app-fingerprints more unique and easier to identify.
\begin{figure}[!hptb]
\centering
\includegraphics[width=0.49\textwidth]{figures/version_1/unicity_time_norm.pdf}
\caption{Seasonal variations of re-identifiable app-fingerprints over 12 months.
The fraction of individuals which we can re-identify by using $n$ apps (1-10) changes from month to month, revealing that uniqueness has a temporal component, and that people are more unique during summer.
This is independent of whether apps are selected using: (A) a random heuristic or (B) an attack scheme.
Compared to Figure 2, the fraction of re-identified individuals per month is lower because we have segmented behavior into monthly fingerprints as compared to constructing fingerprints from 12 months of data.
Uniqueness is rescaled according to the set size of apps present within each month (see Figure S5).
}
\label{fig:unicity_time}
\end{figure}
\subsection*{Hiding in the crowd}
Our dataset is limited to 3.5 million users, similar in size to a small country, but how will uniqueness change as more users are added (increased sample-size)?
Will it become possible to hide in the crowd?
More precisely, how does the population size affect the extent to which a specific app-fingerprint remains unique?
That is, as more and more users are added to our sample, does the likelihood of observing multiple individuals with identical fingerprints also increase?
This corresponds to an inverse k-anonymity problem~\cite{sweeney2002k}, where one needs to estimate the number of users that should be added in order to increase the overall anonymity of the dataset.
(Bearing in mind that overall anonymity is not a good measure for the sensitivity of individual traces.)
To understand the effect of sample-size on unicity, we first slice our dataset into smaller subsamples and use them to estimate the uniqueness for sample sizes ranging from 100,000 to 3.5 million individuals.
Figure 4A reveals that sample size has a large effect on the re-identification rate when selecting apps using a random heuristic.
Considering $n_{\text{apps}} = 5$, the average re-identification rate decreases from $45.89\%$ for a sample size of 1 million individuals to $37.33\%$ for 2 million individuals and $32.09\%$ for the full sample of 3.5 million people.
The attack scheme is considerably less affected (Figure 4B).
For $n_{\text{apps}} = 5$ we find that the re-identification rates are respectively $96.60\%$, $94.23\%$ and $92.72\%$ for sample sizes of 1, 2 and 3.5 million individuals.
As such, increasing the sample size by $250\%$ (from 1 to 3.5 million individuals) only reduces uniqueness by approximately 4 percentage points.
In order to estimate uniqueness for sample sizes larger than the study population we extrapolate results from Figure~4B for $n_{\text{apps}} = 5$.
We express uniqueness of fingerprints using multiple functional forms including: power-laws ($\sim x^{\gamma} $), exponentials ($\sim \exp(\gamma x)$), stretched exponentials ($\sim \exp(x^\gamma)$), and linear functions ($\sim x$), where $x$ denotes the sample size and $\gamma$ is a scaling factor.
The stretched exponential and power-law show the highest agreement with the data (Figure S6), and roughly suggest that 5 apps are enough to re-identify 75\%--80\% of individuals for 10 times larger samples (35 million individuals).
Although the applied analysis displays high uncertainty with regard to extrapolations, it illustrates the observation that increasing the population size does not help us hide in the crowd (that is, uniqueness is not a characteristic of small sample sizes).
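As an illustration of the fitting step, the sketch below fits a parameterized stretched-exponential form ($a\exp(-b x^{\gamma})$, a slight generalization of the form quoted above) to the three re-identification rates reported earlier for $n_{\text{apps}} = 5$; the initial guess is an assumption:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

x = np.array([1.0, 2.0, 3.5])           # sample size, millions of users
y = np.array([0.9660, 0.9423, 0.9272])  # re-identification rate, 5 apps

def stretched_exp(x, a, b, gamma):
    return a * np.exp(-b * x**gamma)

params, _ = curve_fit(stretched_exp, x, y, p0=(1.0, 0.03, 0.8))
print(stretched_exp(35.0, *params))     # extrapolate to 35 million users
\end{verbatim}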
\begin{figure}[!hptb]
\centering
\includegraphics[width=0.49\textwidth]{figures/version_1/unicity_sample-size_5.pdf}
\caption{Identifying fingerprints across data-samples with varying population sizes.
Fingerprints are constructed from 12 months of data.
The uniqueness of individual fingerprints is reduced (lower re-identification rates) as we increase the sample-size, independently of whether apps are selected: (A) randomly or (B) according to the attack heuristic.
The magnitude of the change, however, differs greatly between the two heuristics.
Results shown in both panels are calculated from multiple realizations of the data (see Materials and Methods section).
}
\label{fig:unicity_sample-size}
\end{figure}
\section*{Discussion}
Phone behavior differs from credit card traces and mobile phone mobility data in the ease with which the data can be collected: any Android app can request permission to access your app history. We reviewed free apps (no price or in-app purchases) with more than 100,000 downloads which request the `retrieve running apps' permission on Android. Out of these 40 apps, 31 contain ads. There are 15 apps that belong to the Personalization or Tools category, mostly anti-virus or launcher apps, which may need the permission to provide their features. For the other 25 apps, we found no features in the app that would motivate requesting this permission. Some of these apps are from major phone vendors whose privacy policies say they may share data with third parties.
The economic incentives and the ease and global scale of collecting and trading this data without users' knowledge create serious concerns, especially since this practice is in violation of users' expectations~\cite{martin2018penalty, posner1981economics}.
The EU General Data Protection Regulation~(GDPR) may be a first step towards addressing these concerns through regulation, since it does mention unicity~\cite{gdpr} and applies globally to data about any EU citizen.
Our conclusion from this study is that application usage data should be considered personal information, since it is a unique fingerprint.
This study was performed using app usage data collected from Android phones from a single vendor only.
As phone-vendor-specific apps were disregarded in the analysis, we expect the results to generalize across all Android devices.
Further, we have no reason to believe that app usage behaviour and uniqueness is fundamentally different for individuals using iOS devices compared to Android users.
\matmethods{
\subsection*{The dataset}
We use a dataset that spans 12 months, from Feb. 1st 2016 to Feb. 1st 2017, and contains monthly app-fingerprints for 3,556,083 individuals with pseudonymized app and user identifiers.
Each fingerprint is a binary vector composed of the apps a person has used during a month. We do not consider apps that are installed but unused.
We further disregard phone vendor specific apps such as the alarm clock, phone dialer, and settings, and only focus on apps that are downloadable from Google Play.
This removes vendor bias, and makes re-identification harder. The users are selected from major markets in the Americas, Europe and Asia. Thus, the impact of regional variations on uniqueness due to local applications is smaller than if we had sampled users from anywhere in the world.
In total, the number of unique apps in the dataset is 1,129,110, and each individual in the dataset uses at least 3 apps per month.
Data collection is approved by the Sony Mobile Logging Board and written consent in electronic form has been obtained for all study participants according to the Sony Mobile Application Terms of Service and the Sony Mobile Privacy Policy.
Raw data cannot be shared publicly on the web, but we offer the possibility to reproduce our results starting from raw records by spending a research visit at Sony Mobile Communications.
\subsection*{Estimating uniqueness}
To estimate the uniqueness of app-fingerprints, we apply the unicity framework~\cite{de2013unique} on $s$ samples of 10,000 randomly selected individuals.
For each individual we select $n$ apps (without replacement) from the person's app-fingerprint.
With the popularity-based attack, apps with a low user base are selected first to increase the uniqueness of the sampled app set.
The person is then said to be unique if they are the only individual in the dataset whose app-fingerprint contains those apps.
In cases where $n$ is larger than the total length of a person's app-fingerprint we instead select $\min(n,|\text{fingerprint}|)$ apps.
Uniqueness for a sample $s_i$ is then estimated as the fraction of the users that have unique traces.
Overall uniqueness is the average of the $s$ samples, and error-bars are given by the standard deviation.
We use $s=20$.
\subsection*{Subsampling the dataset}
To quantify the relation between sample size and uniqueness, we subsample the dataset by selecting a fraction of the original dataset.
For each sample $s_i$ we estimate uniqueness using the above methodology.
To account for selection bias we estimate uniqueness as the average of multiple realizations of a sample size.
We use 20 realizations for sample sizes between 100,000 - 500,000, 10 realizations for samples between 600,000 - 900,000, and 5 realizations for sample sizes above 1,000,000 individuals.
}
\showmatmethods
\acknow{V.S. and H.J would like to thank Sune Lehmann for useful discussions and feedback.}
\showacknow
\pnasbreak
\clearpage
\section{Introduction}
Without loss of generality, we consider a domain $\Omega$ that is a union of rectangular domains in $\mathbb{R}^2$, and we assume that $\Omega$ is formed by two different materials separated by a curve $\Gamma$. In particular, this means $\Gamma$ separates $\Omega$ into two sub-domains $\Omega^-$ and $\Omega^+$ such that $\overline{\Omega}= \overline{\Omega^- \cup \Omega^+ \cup \Gamma}$. Consequently, the diffusion coefficient $\beta$ on $\Omega$ is assumed to be a piecewise constant function:
\begin{eqnarray*}
\beta(x,y) = \left\{ \begin{array} {ll}
\beta^-, ~~(x,y) \in \Omega^-, \\
\beta^+, ~~(x,y) \in \Omega^+,
\end{array} \right.
\end{eqnarray*}
such that $\min\{\beta^-, \beta^+\} > 0$.
The main purpose of this article is to present a group of partially penalized immersed finite element (IFE) methods using Cartesian meshes to solve elliptic interface problems, which appear in many applications, of the following form:
\begin{eqnarray}
- \nabla \cdot \big( \beta \nabla u(x,y) \big) &=& f(x,y),~~(x,y) \in \Omega^- \cup \Omega^+,
\label{bvp_pde} \\
u(x,y)&=&0,~~(x,y) \in \partial \Omega, \label{eq:bvp_bc}
\end{eqnarray}
together with the jump conditions on the interface $\Gamma \subset \Omega$:
\begin{eqnarray}
\left[u \right]|_\Gamma &=& 0, \label{eq:bvp_int_1} \\
\left[\beta \pderiv{u}{n}\right]|_{\Gamma} &=& 0. \label{eq:bvp_int_2}
\end{eqnarray}
The homogeneous boundary condition \eqref{eq:bvp_bc} is discussed here for simplicity's sake; the method and related analysis can be readily extended to interface problems with a non-homogeneous boundary condition.
A large number of numerical methods based on Cartesian meshes have been introduced for elliptic interface problems. Since Peskin's pioneering work of the immersed boundary method \cite{CPeskin_Blood_Flow}, a variety of methods have been developed in finite difference formulation, such as the immersed interface method \cite{RLeveque_ZLLi_IIM}, the matched interface and boundary method \cite{YZhou_SZhao_MFeig_GWei_MIB_Elliptic}, and the ghost fluid method \cite{Osher_Ghost_Fluid_1999}. We refer to the book \cite{ZLi_KIto_IIM} for an overview of different numerical methods in finite difference framework.
In finite element formulation, certain types of modifications need to be executed for elements around the interface. One way is to modify the weak formulation of finite element equations near the interface. We refer to some representative methods such as the penalty finite element method
\cite{Babuska_Elliptic_Discontinuous,
JBarrett_CElliott_Fitted_Unfitted},
the unfitted finite element method
\cite{Hansbo_Hansbo_FEM_Elliptic},
the discontinuous Galerkin formulation methods
\cite{PBastin_CEngwer_Unfitted_DG, GGuyomarch_CLee_KJeon_DG_Elliptic}.
An alternative approach is to modify the approximating functions around the interface, for instance, the general finite element method \cite{Babuska_Banerjee_Osborn_Survey_GFEM, IBabuska_CCaloz_JOsborn_FEM_Elliptic_Rough_Cofficients}, the multi-scale finite element method
\cite{CChu_IGraham_THou_Multiscale_FE_Elliptic_Interface, Efendiev_Hou_Multiscale_FEM}, the extended finite element method \cite{Dolbow_Moes_Belytschko_XFEM}, the partition of unity method \cite{Babuska_Melenk_PU_FEM, Babuska_Zhang_Partition_of_Unity}, to name just a few.
Immersed finite element (IFE) methods are a particular class of finite element (FE) methods belonging to the second approach mentioned above, and they can solve interface problems with meshes independent of the interface
\cite{SAdjerid_TLin_1D_IDG,
SAdjerid_TLin_1D_IFE,
XHe_Thesis_Bilinear_IFE,
XHe_TLin_YLin_Bilinear_Approximation,
XHe_TLin_YLin_XZhang_Moving_CNIFE,
RKafafy_TLin_YLin_JWang_3D_IFE_Electric,
ZLi_IIM_FE,
ZLi_TLin_XWu_Linear_IFE,
TLin_YLin_WSun_ZWang_IFE_4TH,
TLin_XZhang_Elasticity,
SSauter_RWarnke_Composite_FE,
SVallaghe_TPapadopoulo_TriLinear_IFE}. If desired, an IFE method can use a Cartesian mesh to solve a boundary value problem (BVP) whose coefficient is discontinuous
across a curve $\Gamma$ with a non-trivial geometry. The basic idea of an IFE method is to employ standard FE functions in non-interface elements not intersecting with the interface $\Gamma$, but on each interface element, it uses IFE functions constructed with piecewise polynomials based on the natural partition of this element formed by the interface and the jump conditions required by the interface problem. The IFE functions are macro-elements \cite{DBraess_Finite_Element,Clough_Tocher_FE}, and each IFE function partially solves the related interface problem because it satisfies the interface jump conditions in a certain sense. Also, the IFE space on an interface element is consistent with
the corresponding FE space based on the same polynomial space in the sense that the IFE space becomes the FE space if the discontinuity in the coefficient
$\beta$ disappears in that element, see \cite{XHe_Thesis_Bilinear_IFE, XHe_TLin_YLin_Bilinear_Approximation} for more details.
IFE methods have been developed for solving interface problems involving several important types of partial differential equations, such as the second order elliptic equation
\cite{SAdjerid_TLin_1D_IDG,
SAdjerid_TLin_1D_IFE,
YGong_BLi_ZLi_Nonhomo_IFE,
XHe_Thesis_Bilinear_IFE,
XHe_TLin_YLin_Bilinear_Approximation,
RKafafy_TLin_YLin_JWang_3D_IFE_Electric,
Kwak_Wee_Chang_Broken_P1_IFE,
ZLi_IIM_FE,
ZLi_TLin_XWu_Linear_IFE,
TLin_YLin_RRogers_MRyan_Rectangle,
SSauter_RWarnke_Composite_FE,
SVallaghe_TPapadopoulo_TriLinear_IFE,
Wu_Li_Lai_Adaptive_IFE,
XZhang_PHDThesis}, the bi-harmonic and beam equations \cite{TLin_YLin_WSun_ZWang_IFE_4TH}, the planar elasticity system
\cite{YGong_ZLLi_Elas_IFE,
ZLLi_XZYang_IFE_Elasticity,
TLin_DSheen_XZhang_RQ1_IFE_Elasiticity,
TLin_XZhang_Elasticity},
the parabolic equation with fixed interfaces
\cite{Attanayake_Senaratne_Convergence_IFE_Parabolic,
TLin_DSheen_IFE_Laplace,
Wang_Wang_Yu_Immersed_EL_Interfaces}, and the parabolic equation with a moving interface
\cite{XHe_TLin_YLin_XZhang_Moving_CNIFE,
TLin_YLin_XZhang_MoL_Nonhomo,
TLin_YLin_XZhang_IFE_MoL}.
When jump conditions are suitably employed in the construction of IFE functions for an interface problem, the resulting IFE space usually has the optimal approximation capability from the point view of polynomials used in this IFE space
\cite{SAdjerid_TLin_1D_IDG,
BCamp_TLin_YLin_WSun_Quadratic_IFE,
BCamp_Thesis,
XHe_TLin_YLin_Bilinear_Approximation,
ZLi_TLin_YLin_RRogers_linear_IFE,
MBenRomdhane_Thesis_Quadratic_IFE,
XZhang_PHDThesis}. Numerical examples
\cite{SAdjerid_TLin_1D_IFE,
ZLi_IIM_FE,
ZLi_TLin_YLin_RRogers_linear_IFE,
ZLi_TLin_XWu_Linear_IFE} demonstrate that methods based on IFE spaces can converge optimally for second order elliptic interface problems. However, the proof for their optimal error bounds is {\em still elusive} except for the one dimensional case
\cite{SAdjerid_TLin_1D_IFE}, even though there have been a few attempts \cite{Chou_Kwak_Wee_IFE_Triangle_Analysis,XHe_TLin_YLin_Convergence_IFE,Kwak_Wee_Chang_Broken_P1_IFE,Wang_Chen_IFE_Analysis}. For two dimensional elliptic interface problems, only a {\em suboptimal} convergence in the $H^1$-norm has been rigorously proven \cite{XHe_TLin_YLin_Convergence_IFE}.
One of the major obstacles is the error estimation on edges between two interface elements where IFE functions have discontinuity.
Certain types of trace inequalities are needed and can be established, but it is not clear whether the generic constant factor in these inequalities is actually independent of the interface location. The scaling argument in the standard finite element error estimation is not applicable here because the local IFE spaces on two different interface elements are not affine equivalent in general. Besides, numerical experiments have demonstrated that the classic IFE methods in the literature
often have a much larger point-wise error over interface elements which, we believe, is caused by the inter-element discontinuity of IFE functions. In some cases, the convergence rates can even deteriorate when the mesh becomes finer. These observations motivate us to apply a certain penalty over interface edges for controlling negative impacts from this discontinuity. Natural candidates are those well known penalty strategies for handling inter-element discontinuity in interior penalty Galerkin methods and discontinuous Galerkin methods
\cite{Babuska_penalty,
Babuska_Zlamal_Nonconform_FEM_Penalty,
Brezzi_Cockburn_Marini_Suli_DG,
Douglas_Dupont_Penalty_Elliptic_Parabolic,
OdenBabuskaBaumann_DG_hp,
RiviereWheelerGiraut_DG,
RustenVassilevskiWinther_interior_penalty,
M.F.Wheeler_colloc_interior_penalty}. These considerations lead to the partially penalized IFE methods in this article. Theoretically, thanks to the enhanced stability by the penalty terms, we are able to prove that these new IFE methods do converge optimally in an energy norm. In addition, we have observed through abundant numerical experiments that these partially penalized IFE methods maintain their expected convergence rate in both $H^1$-norm and $L^2$-norm when their mesh becomes finer and finer while the classic IFE methods cannot maintain in some situations.
The partial penalty idea has also been used in the unfitted finite element method \cite{Hansbo_Hansbo_FEM_Elliptic}.
In this method, penalty terms are introduced on the interface instead of on interface edges because approximating functions are allowed to be discontinuous inside interface elements but they are continuous on element boundaries within each subdomain. IFE methods reverse this idea by imposing continuity of approximating functions inside each element but allowing discontinuity possibly only across interface edges. In addition, on the same mesh, the unfitted finite element method has a slightly larger number of degrees of freedom than IFE methods. On the other hand, the unfitted finite element method has been proven to have the optimal convergence rate under the usual piecewise $H^2$ regularity
\cite{Hansbo_Hansbo_FEM_Elliptic} while the analysis in the present article needs to assume piecewise $H^3$ or $W^{2,\infty}$ regularity
in order to establish the optimal convergence for the partially penalized IFE methods.
Also, we note that these partially penalized IFE methods and their related error analysis can be readily modified to obtain IFE methods based on the discontinuous Galerkin formulation with advantages such as adaptivity even with Cartesian meshes. However, on the same mesh, the DG IFE methods generally have far more global degrees of freedom. For instance, on a Cartesian triangular mesh, a DG IFE method has about 6 times more unknowns than the classic IFE method. The partially penalized IFE methods presented here have the same global degrees of freedom as their classic counterparts; hence they can be more competitive in applications where advantages of DG IFE methods are not needed.
The rest of this article is organized as follows. In Section 2, we derive partially penalized IFE methods based on either linear or bilinear IFE functions for the interface problem. In Section 3, we show that the well-known trace inequalities on an element are also valid for linear and bilinear IFE functions even though they are not $H^2$ functions locally in an interface element. In Section 4, we show that these IFE schemes do have the optimal convergence rate in an energy norm.
In Section 5, we will present numerical examples to demonstrate features of these IFE methods.
\section{Partially penalized IFE methods}
Let $\cT_h$, $0 < h < 1$, be a family of Cartesian triangular or rectangular meshes on $\Omega$. For each mesh $\cT_h$, we let $\cN_h$ be the set of vertices of its elements, and let $\cE_h$ be the set of its edges and ${\mathring \cE}_h$ be the set of interior edges.
In addition, we let $\cT_h^i$ be the set of interface elements of $\cT_h$ and $\cT_h^n$ be the set of non-interface elements. Similarly, we let ${\mathring \cE}_h^i$ be the set of interior interface edges and let ${\mathring \cE}_h^n$ be the set of interior non-interface edges. For every interior edge $B \in {\mathring \cE}_h$, we denote
two elements that share the common edge $B$ by $T_{B,1}$ and $T_{B,2}$. For a function $u$ defined on $T_{B,1}\cup T_{B,2}$, we denote its average and
jump on $B$ by
\begin{align*}
\{u\}_B = \frac{1}{2}\big((u|_{T_{B,1}})|_{B} + (u|_{T_{B,2}})|_{B}\big),~~[u]_B = (u|_{T_{B,1}})|_{B} - (u|_{T_{B,2}})|_{B}.
\end{align*}
For simplicity's sake, we will often drop the subscript $B$ from these notations if there is no danger of confusion. We will also use the following function spaces:
\begin{equation*}
\tW^{r, p}(\Omega) = \{v \in W^{1,p}(\Omega)~|~v|_{\Omega^s} \in W^{r,p}(\Omega^s),~s = + \text{~or~} -\}
\text{~~for $r \geq 1$ and $1 \leq p \leq \infty$},
\end{equation*}
equipped with the norm
\begin{equation*}
\norm{v}_{\tW^{r,p}(\Omega)}^p = \norm{v}_{W^{r,p}(\Omega^-)}^p + \norm{v}_{W^{r,p}(\Omega^+)}^p,~~\forall v \in \tW^{r, p}(\Omega).
\end{equation*}
As usual, for $p=2$, we use $\tH^r(\Omega)= \tW^{r,2}(\Omega)$ and denote its corresponding norm by
\begin{equation*}
\norm{v}_{r}^2 = \norm{v}_{\tH^r(\Omega)}^2 = \norm{v}_{H^r(\Omega^-)}^2 + \norm{v}_{H^r(\Omega^+)}^2,~~\forall v \in \tH^r(\Omega).
\end{equation*}
With a suitable assumption about the regularity of $\Gamma$ and $f$ (e.g. \cite{Babuska_Elliptic_Discontinuous}), we can assume that the exact solution $u$ to the interface problem is in $\tH^2(\Omega)$.
To derive a weak form of interface problem described by \eqref{bvp_pde}-\eqref{eq:bvp_int_2} for an IFE method, we will use the following space:
\begin{equation*}
V_h = \{v ~|~v \text{~satisfies conditions (HV1)-(HV4) described as follows}\}
\end{equation*}
\begin{description}
\item[(HV1)] $v|_K \in H^1(K),~\forall K \in \cT_h$.
\item[(HV2)] $v$ is continuous at every $X \in \cN_h$.
\item[(HV3)] $v$ is continuous across each $B \in {\mathring \cE}_h^n$.
\item[(HV4)] $v|_{\partial \Omega} = 0$.
\end{description}
We multiply equation \eqref{bvp_pde} by a test function $v \in V_h$, integrate both sides on each element $K \in \cT_h$, and apply Green's formula to have
\begin{equation*}
\int_K \beta \nabla v \cdot \nabla u dX - \int_{\partial K} \beta \nabla u \cdot \bfn vds = \int_K vf dX.
\end{equation*}
Summing over all elements leads to
\begin{equation}\label{eq:weak_1}
\sum_{K \in \cT_h} \int_K \beta \nabla v \cdot \nabla u dX - \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla u \cdot \bfn_B\right\}[v] ds = \int_\Omega v f dX.
\end{equation}
Here we have used the fact that
\begin{equation*}
\left\{\beta \nabla u \cdot \bfn_B\right\}_B = (\beta \nabla u \cdot \bfn_B)|_B, ~~\forall B \in {\mathring \cE}_h^n.
\end{equation*}
Because of the regularity of $u$ (in particular, $[u]_B = 0$ on every interior edge $B$), for arbitrary parameters $\epsilon$, $\alpha > 0$, and $\sigma_B^0\geq 0$, we have
\begin{equation}\label{eq:weak_2}
\epsilon \sum_{B \in {\mathring \cE}_h^i}\int_B \left\{\beta \nabla v \cdot \bfn_B\right\} [u] ds = 0,
~\sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [v][u] ds = 0.
\end{equation}
Therefore, adding \eqref{eq:weak_2} to \eqref{eq:weak_1} leads to the following weak form of the interface problem \eqref{bvp_pde}-\eqref{eq:bvp_int_2}:
\begin{eqnarray}
\sum_{K \in \cT_h} \int_K \beta \nabla v \cdot \nabla u dX - \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla u \cdot \bfn_B\right\}[v] ds \label{eq:weak_form} && \\
+\epsilon \sum_{B \in {\mathring \cE}_h^i}\int_B \left\{\beta \nabla v \cdot \bfn_B\right\} [u] ds
+ \sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [v][u] ds &=& \int_\Omega v f dX,~~\forall v \in V_h.\nonumber
\end{eqnarray}
We now recall the linear and bilinear IFE spaces to be used in our partially penalized IFE methods based on the weak form \eqref{eq:weak_form}. On each element $K \in \cT_h$, we let
\begin{eqnarray*}
S_h(K) = span\{\phi_j(X), 1 \leq j \leq d_K\}, ~~d_K = \begin{cases}
3,&\text{if $K$ is a triangular element}, \\
4,&\text{if $K$ is a rectangular element},
\end{cases}
\end{eqnarray*}
where $\phi_j, 1 \leq j \leq d_K$ are the standard linear or bilinear nodal basis functions for $K \in \cT_h^n$; otherwise, for $K \in \cT_h^i$,
$\phi_j, 1 \leq j \leq d_K$ are the linear or bilinear IFE basis functions discussed in \cite{ZLi_TLin_YLin_RRogers_linear_IFE,ZLi_TLin_XWu_Linear_IFE} and \cite{XHe_TLin_YLin_Bilinear_Approximation,TLin_YLin_RRogers_MRyan_Rectangle}, respectively. Then, we define the IFE space over the whole solution domain $\Omega$ as follows:
\begin{equation*}
S_h(\Omega) = \{v ~|~\text{$v$ satisfies conditions (IFE1) - (IFE3) given below}\}
\end{equation*}
\begin{description}
\item[(IFE1)] $v|_K \in S_h(K),~\forall K \in \cT_h$.
\item[(IFE2)] $v$ is continuous at every $X \in \cN_h$.
\item[(IFE3)] $v|_{\partial \Omega} = 0$.
\end{description}
It is easy to see that $S_h(\Omega) \subset V_h(\Omega)$. Now, we describe the partially penalized IFE methods for the interface problem \eqref{bvp_pde}-\eqref{eq:bvp_int_2}: find $u_h \in S_h(\Omega)$ such that
\begin{equation}\label{eq:IFE_eq}
a_h(v_h, u_h) = (v_h, f),~~\forall v_h \in S_h(\Omega),
\end{equation}
where the bilinear form $a_h(\cdot, \cdot)$ is defined on $S_h(\Omega)$ by
\begin{eqnarray}
a_h(v_h, w_h) &=& \sum_{K \in \cT_h} \int_K \beta \nabla v_h \cdot \nabla w_h dX - \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla w_h \cdot \bfn_B\right\}[v_h] ds \nonumber \\
&+& \epsilon \sum_{B \in {\mathring \cE}_h^i}\int_B \left\{\beta \nabla v_h \cdot \bfn_B\right\} [w_h] ds
+ \sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [v_h][w_h] ds,~~\forall v_h, w_h \in S_h(\Omega).
\label{eq:IFE_BF}
\end{eqnarray}
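Following the standard terminology of interior penalty Galerkin methods, the choice $\epsilon = -1$ makes the bilinear form $a_h(\cdot, \cdot)$ symmetric, while $\epsilon = 0$ and $\epsilon = 1$ lead to its incomplete and nonsymmetric variants, respectively; as shown in Lemma \ref{lem:coercivity} below, only the nonsymmetric choice is coercive without a lower bound on the penalty parameter $\sigma_B^0$. With $\alpha = 1$, the penalty term scales like $\abs{B}^{-1}$, the standard interior penalty scaling.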
\section{Trace inequalities for IFE functions}
Using the standard scaling argument, we can obtain the following well known trace inequalities \cite{Riviere_DG_book}: there exists a constant $C$ such that
\begin{eqnarray}
\quad\norm{v}_{L^2(B)} &\leq& C \abs{B}^{1/2}\abs{K}^{-1/2}\left(\norm{v}_{L^2(K)} + h \norm{\nabla v}_{L^2(K)}\right),~\forall v \in H^1(K), \label{eq:trace_inq_1}\\
\quad\norm{\nabla v}_{L^2(B)} &\leq& C \abs{B}^{1/2}\abs{K}^{-1/2}\left(\norm{\nabla v}_{L^2(K)} + h \norm{\nabla^2 v}_{L^2(K)}\right),~\forall v \in H^2(K). \label{eq:trace_inq_2}
\end{eqnarray}
where $B$ is an edge of $K$.
Our goal in this section is to extend these trace inequalities to IFE functions in $S_h(K)$ for $K \in \cT_h^i$. First, we recall that $S_h(K) \subset C(K)\cap H^1(K)$ for all $K \in \cT_h$ \cite{XHe_TLin_YLin_Bilinear_Approximation, ZLi_TLin_YLin_RRogers_linear_IFE}. This implies that inequality \eqref{eq:trace_inq_1} is also valid for
$v \in S_h(K)$ even if $K \in \cT_h^i$. However, the second trace inequality \eqref{eq:trace_inq_2}
cannot be applied to $v \in S_h(K)$ with $K \in \cT_h^i$ because $v \not \in H^2(K)$ in general.
\subsection{Trace inequalities for linear IFE functions}
It is relatively easy to prove the trace inequality for a linear IFE function on a triangular interface element because its gradient is a piecewise constant function. Without loss of generality, we consider the following triangular interface element
\begin{equation*}
K = \bigtriangleup A_1A_2A_3, ~~~A_1 = (0,0),~A_2 = (h,0), ~A_3 = (0,h).
\end{equation*}
Assume that the interface $\Gamma$ intersects the boundary of $K$ at points $D$ and $E$
and the straight line $\overline{DE}$ separates $K$ into $K^-$ and $K^+$, see the illustration on the left in Fig. \ref{fig:tri_rec_IFE_elements}. Consider a linear IFE function on $K$ in the following form
\begin{equation}
v(x,y) = \begin{cases}
v^-(x,y) = c_1^- + c_2^-x + c_3^-y,& \text{if~} (x, y) \in K^-, \\
v^+(x,y) = c_1^+ + c_2^+x + c_3^+y,& \text{if~} (x, y) \in K^+,
\end{cases} \label{eq:linear_IFE_c1c2c3_format}
\end{equation}
which satisfies the following jump conditions \cite{ZLi_TLin_YLin_RRogers_linear_IFE}:
\begin{align} \label{eq:linear_IFE_jump_cond_coef_one_piece_bnd_by_another}
v^-(D) = v^+(D), ~~v^-(E) = v^+(E), ~~\beta^- \pderiv{v^-}{\bfn_{\overline{DE}}} = \beta^+ \pderiv{v^+}{\bfn_{\overline{DE}}}.
\end{align}
\begin{figure}[hbt]
\centerline{
\hbox{\includegraphics[height=1.5in]{tri_IFE_element_DE_on_A1A3_A1A2}}~~~~
\hbox{\includegraphics[height=1.5in]{rect_IFE_element_DE_on_A1A3_A1A2_type_I}}
}
\caption{A triangular interface element (left) and a rectangular interface element (right).}
\label{fig:tri_rec_IFE_elements}
\end{figure}
\begin{lemma}\label{lem:linear_IFE_coef_one_piece_bnd_by_another}
There exists a constant $C>1$ independent of the interface location such that for every linear IFE function $v$ on the interface element $K$ defined in \eqref{eq:linear_IFE_c1c2c3_format} the following inequalities hold:
\begin{equation}\label{eq:linear_IFE_coef_one_piece_bnd_by_another}
\frac{1}{C} \norm{(c_1^+, c_2^+, c_3^+)}\leq \norm{(c_1^-, c_2^-, c_3^-)} \leq C \norm{(c_1^+, c_2^+, c_3^+)}.
\end{equation}
\end{lemma}
\begin{proof}
We prove the second inequality in \eqref{eq:linear_IFE_coef_one_piece_bnd_by_another}, and similar arguments
can be used to show the first one. Applying the jump conditions
\eqref{eq:linear_IFE_jump_cond_coef_one_piece_bnd_by_another} we can show that coefficients of
$v(x,y)$ must satisfy the following equality:
\begin{align*}
M^- \begin{pmatrix}
c_1^- \\
c_2^- \\
c_3^-
\end{pmatrix} = M^+ \begin{pmatrix}
c_1^+ \\
c_2^+ \\
c_3^+
\end{pmatrix},
\end{align*}
where $M^s, ~s = -, +$ are two matrices.
Without loss of generality, we further assume
\begin{align*}
D = (0,dh), E = (eh, 0), \text{~with~} 0 \leq d \leq 1, 0 \leq e \leq 1.
\end{align*}
Then
\begin{eqnarray*}
M^s = \begin{pmatrix}
1&0&dh \\
1&eh&0 \\
0&-\beta^s dh & -\beta^s eh
\end{pmatrix},~~s = - \text{~or~} +
\end{eqnarray*}
whose determinant is
\begin{equation*}
\det(M^s) = -\beta^s(d^2+e^2)h^2,~~s = - \text{~or~} +
\end{equation*}
which is nonzero because $(d,e) \not = (0,0)$. Hence we can solve for the $c_i^+$ in terms of the $c_i^-$ to obtain
\begin{align}
&c_i^+ = f_{i1}c_1^- + f_{i2}c_2^- + f_{i3}c_3^-,~~i = 1, 2, 3,
\label{eq:linear_IFE_coef_one_piece_bnd_by_another_3} \\
&f_{ij} = \Frac{g_{ij}^-\beta^-}{\beta^+(d^2+e^2)} + \frac{g_{ij}^+\beta^+}{\beta^+(d^2+e^2)}, ~~1 \leq i, j \leq 3, \label{eq:linear_IFE_coef_one_piece_bnd_by_another_4}
\end{align}
with
\begin{align*}
&g_{11}^- = 0, g_{11}^+ = d^2+e^2, ~g_{12}^- = -d^2eh, g_{12}^+ = d^2eh, ~g_{13}^- = -de^2h, g_{13}^+ = de^2h, \\
&g_{21}^- = 0, g_{21}^+ = 0, ~g_{22}^- = d^2, g_{22}^+ = e^2, ~g_{23}^- = de, g_{23}^+ = -de, \\
&g_{31}^- = 0, g_{31}^+ = 0, ~g_{32}^- = de, g_{32}^+ = -de, ~g_{33}^- = e^2, g_{33}^+ = d^2.
\end{align*}
Therefore, there exists a constant $C$ that depends on $\beta^-$ and $\beta^+$, but is independent of $d, e$, such that
\begin{align*}
\abs{f_{ij}} \leq C, ~1 \leq i,j \leq 3.
\end{align*}
Then, the second inequality in \eqref{eq:linear_IFE_coef_one_piece_bnd_by_another} follows from
\eqref{eq:linear_IFE_coef_one_piece_bnd_by_another_3} and the above bounds for $\abs{f_{ij}},~1 \leq i,j \leq 3$.
\end{proof}
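The relation between the two coefficient vectors can also be checked numerically. The following is a minimal sketch (in Python; the values of $d$, $e$, $h$, $\beta^{\pm}$, and $c^-$ are illustrative assumptions, not taken from any particular example in this article) that solves $M^+ c^+ = M^- c^-$ directly:
\begin{verbatim}
import numpy as np

def M(beta, d, e, h):
    # The matrix M^s from the proof above: continuity at D = (0, dh)
    # and E = (eh, 0), plus the flux jump condition across the line DE.
    return np.array([[1.0,  0.0,         d*h],
                     [1.0,  e*h,         0.0],
                     [0.0, -beta*d*h, -beta*e*h]])

d, e, h = 0.3, 0.6, 0.1                 # illustrative interface data
beta_m, beta_p = 1.0, 10.0              # illustrative coefficients
c_minus = np.array([1.0, 2.0, -1.0])    # (c_1^-, c_2^-, c_3^-)
c_plus = np.linalg.solve(M(beta_p, d, e, h),
                         M(beta_m, d, e, h) @ c_minus)
\end{verbatim}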
\begin{remark}\label{rem:linear_IFE_coef_one_piece_bnd_by_another}
In the proof of Lemma \ref{lem:linear_IFE_coef_one_piece_bnd_by_another}, we have shown
$f_{i1} = 0, i = 2, 3$. Consequently, we can show that there exists a constant $C>1$ such that
\begin{equation}\label{eq:linear_IFE_coef_one_piece_bnd_by_another_1_1}
\frac{1}{C}\norm{(c_2^+, c_3^+)}\leq \norm{(c_2^-, c_3^-)} \leq C\norm{(c_2^+, c_3^+)}.
\end{equation}
\end{remark}
Now, we establish the trace inequality on a triangular interface element $K = \bigtriangleup A_1A_2A_3$.
\begin{lemma}\label{lem:linear_IFE_trace_ineq}
There exists a constant $C$ independent of the interface location such that for every linear IFE function $v$ on $K$ the following
inequalities hold
\begin{align}
&\norm{\beta v_p}_{L^2(B)} \leq C h^{1/2} \abs{K}^{-1/2}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}, ~~p = x,y, \label{eq:linear_IFE_trace_ineq_1}\\
&\norm{\beta \nabla v \cdot \bfn_B}_{L^2(B)} \leq C h^{1/2} \abs{K}^{-1/2}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}. \label{eq:linear_IFE_trace_ineq_2}
\end{align}
\end{lemma}
\begin{proof}
Without loss of generality, we assume again that the interface $\Gamma$ intersects with the boundary of $K$ at
\begin{align*}
D = (0,dh), E = (eh, 0), \text{~with~} 0 \leq d \leq 1, 0 \leq e \leq 1,
\end{align*}
and the line $\overline{DE}$ separates $K$ into two subelements $K^-$ and $K^+$ with $A_3 \in K^+$ and
$\abs{K^+} \geq \frac{1}{2}\abs{K}$. Furthermore, we assume $B = \overline{A_1A_3}$ is an interface edge with
$B = B^- \cup B^+$. Similar arguments can be applied to establish the trace inequality in other cases.
By direct calculations, we have
\begin{eqnarray*}
\norm{\beta^+ v_x}_{L^2(B^+)}^2&=& (c_2^+)^2 \abs{B^+}\big(\abs{\beta^+}\big)^2 \leq \big((c_2^+)^2 + (c_3^+)^2\big)\abs{K^+} \frac{\abs{B^+}}{\abs{K^+}}\big(\abs{\beta^+}\big)^2 \nonumber \\
&=&\beta^+ \frac{\abs{B^+}}{\abs{K^+}} \norm{\sqrt{\beta^+} \nabla v}_{L^2(K^+)}^2 \leq C \frac{\abs{B^+}}{\abs{K^+}} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}^2, \nonumber
\end{eqnarray*}
\emph{i.e.},
\begin{equation}\label{eq:linear_IFE_trace_ineq_3}
\norm{\beta^+ v_x}_{L^2(B^+)}\leq 2Ch^{1/2}\abs{K}^{-1/2} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}.
\end{equation}
Similarly, we can show that
\begin{equation}\label{eq:linear_IFE_trace_ineq_4}
\norm{\beta^+ v_y}_{L^2(B^+)}
\leq 2Ch^{1/2}\abs{K}^{-1/2} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}.
\end{equation}
On $B^-$, applying the estimates in {Remark} \ref{rem:linear_IFE_coef_one_piece_bnd_by_another}, we have
\begin{align}
\norm{\beta^- v_x}_{L^2(B^-)}^2 &= (c_2^-)^2 \abs{B^-}\big(\abs{\beta^-}\big)^2 \leq C\big((c_2^+)^2 + (c_3^+)^2\big)\abs{K^+} \frac{\abs{B^-}}{\abs{K^+}}\big(\abs{\beta^-}\big)^2 \nonumber \\
&=C\beta^- \frac{\abs{B^-}}{\abs{K^+}} \norm{\sqrt{\beta^+} \nabla v}_{L^2(K^+)}^2 \leq C \frac{\abs{B^-}}{\abs{K^+}} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}^2. \nonumber
\end{align}
Hence,
\begin{equation}\label{eq:linear_IFE_trace_ineq_5}
\norm{\beta^- v_x}_{L^2(B^-)}
\leq 2Ch^{1/2}\abs{K}^{-1/2} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}.
\end{equation}
Similarly,
\begin{equation}
\norm{\beta^- v_y}_{L^2(B^-)}
\leq 2Ch^{1/2}\abs{K}^{-1/2} \norm{\sqrt{\beta} \nabla v}_{L^2(K)}. \label{eq:linear_IFE_trace_ineq_6}
\end{equation}
Then, the combination of \eqref{eq:linear_IFE_trace_ineq_3} and \eqref{eq:linear_IFE_trace_ineq_5}
yields the inequality \eqref{eq:linear_IFE_trace_ineq_1} for $p = x$. Similarly,
the inequality \eqref{eq:linear_IFE_trace_ineq_1} for $p = y$ follows from combining
\eqref{eq:linear_IFE_trace_ineq_4} and \eqref{eq:linear_IFE_trace_ineq_6}. Finally,
\eqref{eq:linear_IFE_trace_ineq_2} follows directly from \eqref{eq:linear_IFE_trace_ineq_1}.
\end{proof}
\subsection{Trace inequalities for bilinear IFE functions}
Without loss of generality, we consider a rectangular interface element $K$ with the following vertices:
\begin{align*}
A_1 = (0,0), ~~A_2 = (h, 0),~~ A_3 = (0,h),~~ A_4 = (h,h).
\end{align*}
Again, assume that the interface $\Gamma$ intersects with $\partial K$ at points
$D$ and $E$ and the line $\overline{DE}$ separates $K$ into two subelements
$K^-$ and $K^+$, see the illustration on the right in Fig. \ref{fig:tri_rec_IFE_elements}. We assume that
$K$ is one of the two types of rectangular interface elements \cite{XHe_Thesis_Bilinear_IFE, XHe_TLin_YLin_Bilinear_Approximation}:
\noindent
{\bf Type I interface element}: The interface $\Gamma$ intersects $\partial K$ at
\begin{align*}
D = (0, dh), E=(eh, 0), ~~0 \leq d \leq 1, 0 \leq e \leq 1.
\end{align*}
{\bf Type II interface element}: The interface $\Gamma$ intersects $\partial K$ at
\begin{align*}
D = (dh, h), E=(eh, 0), ~~0 \leq d \leq 1, 0 \leq e \leq 1.
\end{align*}
On this interface element $K$, let $v$ be a bilinear IFE function in the following form:
\begin{align}
v(x,y) = \begin{cases}
v^-(x,y) = c_1^- + c_2^-x + c_3^-y + c_4xy, & \text{if~} (x, y) \in K^-, \\
v^+(x,y) = c_1^+ + c_2^+x + c_3^+y + c_4xy, & \text{if~} (x, y) \in K^+,
\end{cases}\label{eq:bilinear_IFE_c1c2c3c4_format}
\end{align}
which satisfies pertinent interface jump conditions \cite{XHe_Thesis_Bilinear_IFE, XHe_TLin_YLin_Convergence_IFE}.
First, using similar arguments, we can show that the coefficients of a bilinear IFE function
$v$ satisfy inequalities similar to those in Lemma \ref{lem:linear_IFE_coef_one_piece_bnd_by_another}.
\begin{lemma}\label{lem:bilinear_IFE_coef_one_piece_bnd_by_another}
There exists a constant $C>1$ independent of the interface location such that for every bilinear IFE function $v$ on the interface element $K$ defined in \eqref{eq:bilinear_IFE_c1c2c3c4_format} the following inequalities hold:
\begin{equation}\label{eq:bilinear_IFE_coef_one_piece_bnd_by_another}
\frac{1}{C} \norm{(c_1^+, c_2^+, c_3^+, c_4)}\leq\norm{(c_1^-, c_2^-, c_3^-, c_4)} \leq C \norm{(c_1^+, c_2^+, c_3^+, c_4)}.
\end{equation}
\end{lemma}
The proof of trace inequalities for a bilinear IFE function is a little more complicated because its gradient is
not a constant. The following lemma provides an aid.
\begin{lemma}\label{lem:bilinear_IFE_gradient_lower_bnd}
Assume $K$ is an interface element such that
\begin{align*}
\abs{K^s} \geq \frac{1}{2}\abs{K},
\end{align*}
with $s = -$ or $+$. Then there exists a polygon $\tK \subset K^s$ and two positive constants
$C_1$ and $C_2$ independent of the interface location such that
\begin{align}
&\abs{\tK} \geq C_1 \abs{K}, \label{eq:bilinear_IFE_gradient_lower_bnd_1}\\
&\frac{h}{\abs{\tK}} \norm{\sqrt{\beta^s}\nabla v}_{L^2(\tK)}^2 \geq C_2\beta^s
\big(h(c_2^s)^2 + h(c_3^s)^2 + h^3 (c_4)^2\big). \label{eq:bilinear_IFE_gradient_lower_bnd_2}
\end{align}
\end{lemma}
\begin{proof} Let us partition $K$ into $4$ congruent squares $K_i, i = 1, 2, 3, 4$ by the lines connecting the two pairs of opposite mid points of edges of $K$ such that $A_i$ is a vertex of $K_i$. Since $\abs{K^s} \geq \frac{1}{2}\abs{K}$, one of these $4$ small squares must be inside $K^s$. Without loss of generality, we assume that $K_4 \subset K^s$. By direct calculations we have
\begin{align*}
&\norm{v_x}_{L^2(K_4)}^2 = \frac{h^2}{48}(12(c_2^s)^2 + 18c_2^sc_4h + 7 c_4^2h^2) \geq \frac{h^2}{48}
\left[\left(12 - \frac{9}{\sigma_1}\right)(c_2^s)^2 + (7-9\sigma_1)c_4^2h^2\right], \\
&\norm{v_y}_{L^2(K_4)}^2 = \frac{h^2}{48}(12(c_3^s)^2 + 18c_3^sc_4h + 7 c_4^2h^2)\geq \frac{h^2}{48}
\left[\left(12 - \frac{9}{\sigma_2}\right)(c_3^s)^2 + (7-9\sigma_2)c_4^2h^2\right],
\end{align*}
where $\sigma_1$ and $\sigma_2$ are arbitrary positive constants. Letting $\sigma_i = \sigma \in (9/12, 7/9), i = 1, 2$ in the above inequalities leads to
\begin{align*}
\norm{\nabla v}_{L^2(K_4)}^2 \geq Ch^2((c_2^s)^2 + (c_3^s)^2 + c_4^2h^2)
\end{align*}
where
\begin{align*}
C = \min\{12 - \frac{9}{\sigma}, 2(7-9\sigma)\} > 0.
\end{align*}
Then, \eqref{eq:bilinear_IFE_gradient_lower_bnd_1} and \eqref{eq:bilinear_IFE_gradient_lower_bnd_2}
follow by letting $\tK = K_4$.
\end{proof}
Now, we are ready to establish the trace inequality for bilinear IFE functions on an interface element $K = \square A_1A_2A_3A_4$.
\begin{lemma}\label{lem:blinear_IFE_trace_ineq}
There exists a constant $C$ independent of the interface location such that for every bilinear IFE function $v(x,y)$ on $K$ the following
inequalities hold
\begin{align}
&\norm{\beta v_p}_{L^2(B)} \leq C h^{1/2} \abs{K}^{-1/2}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}, ~~p = x, y, \label{eq:blinear_IFE_trace_ineq_1}\\
&\norm{\beta \nabla v \cdot \bfn_B}_{L^2(B)} \leq C h^{1/2} \abs{K}^{-1/2}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}. \label{eq:blinear_IFE_trace_ineq_2}
\end{align}
\end{lemma}
\begin{proof}
Without loss of generality, we assume that $K$ is a Type I interface element and
$B = \overline{A_1A_3}$ is an interface edge. To be more specific, we also assume that
$A_4 \in K^+$ and $\abs{K^+} \geq \frac{1}{2}\abs{K}$. Then
\begin{align*}
B = B^- \cup B^+, ~~B^- = \overline{A_1D}, ~B^+ = \overline{DA_3}.
\end{align*}
Direct calculations lead to
\begin{eqnarray}
\qquad\quad\norm{\beta^- v_x}_{L^2(B^-)}^2 &=& (\beta^-)^2\big(dh(c_2^-)^2 + d^2h^2c_2^-c_4 + \frac{1}{3}d^3h^3 c_4^2\big),
\label{eq:trace_inq_bilinear_vx_B-} \\
\norm{\beta^- v_y}_{L^2(B^-)}^2 &=& (\beta^-)^2dh (c_3^-)^2, \label{eq:trace_inq_bilinear_vy_B-} \\
\norm{\beta^+ v_x}_{L^2(B^+)}^2 &=& \big(\beta^+\big)^2\left[(1-d)h(c_2^+)^2 + (1-d^2)h^2(c_2^+)c_4 + \frac{1}{3}(1-d^3)h^3 c_4^2\right], \label{eq:trace_inq_bilinear_vx_B+} \\
\norm{\beta^+ v_y}_{L^2(B^+)}^2 &=& \big(\beta^+\big)^2(1-d)h(c_3^+)^2. \label{eq:trace_inq_bilinear_vy_B+}
\end{eqnarray}
Applying \eqref{eq:bilinear_IFE_gradient_lower_bnd_1} and \eqref{eq:bilinear_IFE_gradient_lower_bnd_2}
to \eqref{eq:trace_inq_bilinear_vx_B+} and \eqref{eq:trace_inq_bilinear_vy_B+} yields
\begin{equation}\label{eq:trace_inq_bilinear_vs_B+}
\quad\norm{\beta^+ v_p}_{L^2(B^+)}^2 \leq C \frac{h}{\abs{K}}\norm{\sqrt{\beta} \nabla v}_{L^2(K^+)}^2
\leq C \frac{h}{\abs{K}}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}^2,~~p = x, y.
\end{equation}
Moreover, applying \eqref{eq:bilinear_IFE_coef_one_piece_bnd_by_another},
\eqref{eq:bilinear_IFE_gradient_lower_bnd_1}, and \eqref{eq:bilinear_IFE_gradient_lower_bnd_2}
to \eqref{eq:trace_inq_bilinear_vx_B-} and \eqref{eq:trace_inq_bilinear_vy_B-} leads to
\begin{equation}\label{eq:trace_inq_bilinear_vs_B-}
\norm{\beta^- v_p}_{L^2(B^-)}^2 \leq C \frac{h}{\abs{K}}\norm{\sqrt{\beta} \nabla v}_{L^2(K^+)}^2
\leq C \frac{h}{\abs{K}}\norm{\sqrt{\beta} \nabla v}_{L^2(K)}^2,~~p = x, y.
\end{equation}
Then, \eqref{eq:blinear_IFE_trace_ineq_1} follows from combining
\eqref{eq:trace_inq_bilinear_vs_B+} and \eqref{eq:trace_inq_bilinear_vs_B-}. Finally,
Finally, \eqref{eq:blinear_IFE_trace_ineq_2} obviously follows from \eqref{eq:blinear_IFE_trace_ineq_1}.
\end{proof}
\section{Error Estimation for Partially Penalized IFE Methods}
We show that the IFE solution to the interface problem obtained from \eqref{eq:IFE_eq} has optimal convergence from
the point of view of the polynomials used in the involved IFE spaces. Unless otherwise specified, we always
assume that $\cT_h, 0<h<1$ is a family of regular Cartesian triangular or rectangular meshes \cite{PCiarlet_FEM}.
We start from proving the coercivity of the bilinear form $a_h(\cdot, \cdot)$ defined in
\eqref{eq:IFE_BF} on the IFE space $S_h(\Omega)$ with respect to the following energy norm:
\begin{equation} \label{eq:IP-IFE_method_dis_H1_norm}
\norm{v_h}_h = \left(\sum_{K \in \cT_h} \int_K \beta \nabla v_h \cdot \nabla v_h dX + \sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [v_h][v_h] ds\right)^{1/2}.
\end{equation}
\begin{lemma} \label{lem:coercivity}
There exists a constant $\kappa>0$ such that
\begin{equation}
\kappa \norm{v_h}_h^2 \leq a_h(v_h, v_h),~~\forall v_h \in S_h(\Omega) \label{eq:coercivity}
\end{equation}
is true for $\epsilon = 1$ unconditionally and is true for $\epsilon = 0$ or $\epsilon = -1$ under the condition that the stabilization parameter $\sigma_B^0$ in $a_h(\cdot, \cdot)$ is large enough.
\end{lemma}
\begin{proof}
First, for $\epsilon = 1$, we note that the coercivity follows directly from the definitions of
$a_h(\cdot, \cdot)$ and $\norm{\cdot}_h$.
For $\epsilon = -1, 0$, note that
\begin{align}
a_h(v_h, v_h) &= \sum_{K \in \cT_h} \int_K \beta \nabla v_h \cdot \nabla v_h dX + (\epsilon - 1) \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla v_h \cdot \bfn_B\right\}[v_h] ds \label{eq:coercivity_2} \\
& + \sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [v_h][v_h] ds, \nonumber
\end{align}
and the main concern is the second term on the right hand side. For each interface
edge $B \in {\mathring \cE}_h^i$ we let $K_{B,i}\in \cT_h, i = 1, 2$ be the two elements sharing $B$ as their common edge. Then, by the trace inequality \eqref{eq:linear_IFE_trace_ineq_2} or \eqref{eq:blinear_IFE_trace_ineq_2} and using $\alpha \geq 1$, we have,
\begin{eqnarray*}
& & \int_B \left\{\beta \nabla v_h \cdot \bfn_B\right\}[v_h] ds \leq \norm{\left\{\beta \nabla v_h \cdot \bfn_B\right\}}_{L^2(B)} \norm{[v_h]}_{L^2(B)} \\
&\leq& \left(\frac{1}{2}\norm{(\beta \nabla v_h \cdot \bfn_B)|_{K_{B,1}}}_{L^2(B)}
+\frac{1}{2}\norm{(\beta \nabla v_h \cdot \bfn_B)|_{K_{B,2}}}_{L^2(B)}\right)\norm{[v_h]}_{L^2(B)} \\
&\leq& \left(\frac{C}{2}h_{K_{B,1}}^{-1/2} \norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,1})} + \frac{C}{2}h_{K_{B,2}}^{-1/2} \norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,2})}\right)\norm{[v_h]}_{L^2(B)} \\
&=& \Frac{C}{2}\abs{B}^{\alpha/2}\left(h_{K_{B,1}}^{-1/2} \norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,1})} + h_{K_{B,2}}^{-1/2} \norm{\sqrt{\beta}\nabla v_h}_{L^2(K_{B,2})}\right)\frac{1}{\abs{B}^{\alpha/2}}\norm{[v_h]}_{L^2(B)} \\
&\leq& C\left(\norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,1})}^2 + \norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,2})}^2\right)^{1/2} \frac{1}{\abs{B}^{\alpha/2}}\norm{[v_h]}_{L^2(B)}.
\end{eqnarray*}
Therefore, for any $\delta > 0$, we have
\begin{align}
& \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla v_h \cdot \bfn_B\right\}[v_h] ds \label{eq:coercivity_3} \\
\leq &\sum_{B \in {\mathring \cE}_h^i} C\left(\norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,1})}^2 + \norm{\sqrt{\beta} \nabla v_h}_{L^2(K_{B,2})}^2\right)^{1/2} \frac{1}{\abs{B}^{\alpha/2}}\norm{[v_h]}_{L^2(B)} \nonumber \\
\leq & \frac{\delta}{2}\sum_{K\in \cT_h}\norm{\sqrt{\beta} \nabla v_h}_{L^2(K)}^2 + \frac{C}{2\delta}\sum_{B \in {\mathring \cE}_h^i}\frac{1}{\abs{B}^{\alpha}}\norm{[v_h]}_{L^2(B)}^2.\nonumber
\end{align}
Then for $\epsilon = 0$ we let $\delta = 1$ and $\sigma_B^0 = C$, and for $\epsilon = -1$ we let $\delta = 1/2$ and $\sigma_B^0 = 5C/2$, where $C$ is the constant in the above inequality.
The coercivity result \eqref{eq:coercivity} follows from using these parameters in \eqref{eq:coercivity_3} and
putting it in \eqref{eq:coercivity_2}.
\end{proof}
In the error estimation for the IFE solution, we need to use the fact that both linear and bilinear
IFE spaces have the optimal approximation capability
\cite{XHe_Thesis_Bilinear_IFE,XHe_TLin_YLin_Bilinear_Approximation,ZLi_TLin_YLin_RRogers_linear_IFE}. In particular, for every
$u \in \tH_0^2(\Omega)$ satisfying the interface jump conditions \eqref{eq:bvp_int_1} and \eqref{eq:bvp_int_2},
there exists a constant $C$ such that the interpolation $I_hu$ in the (either linear or bilinear) IFE space
$S_h(\Omega)$ has the following error bound:
\begin{align}
\norm{u - I_hu}_{L^2(\Omega)} + h\left(\sum_{T\in\mathcal{T}_h}\norm{u - I_hu}^2_{H^1(T)}\right)^{\frac{1}{2}} \leq Ch^2\norm{u}_{\tilde H^2(\Omega)}. \label{eq:intp_error_bnd}
\end{align}
In addition, we also need the error bound for $I_hu$ on interface edges which is given in the following
lemma.
\begin{lemma}\label{lem:interp_error_bnd_edge}
For every
$u \in \tH^{3}(\Omega)$ satisfying the interface jump conditions \eqref{eq:bvp_int_1} and \eqref{eq:bvp_int_2}, there exists a constant $C$ independent of the
interface such that its interpolation $I_hu$ in the IFE space
$S_h(\Omega)$ has the following error bound:
\begin{align}\label{eq:interp_error_bnd_edge}
\norm{\beta (\nabla(u-I_hu))|_{K} \cdot \bfn_B}_{L^2(B)}^2 \leq
C\big(h^2\norm{u}_{\tilde H^3(\Omega)}^2 + h \norm{u}_{\tilde H^2(K)}^2\big)
\end{align}
where $K$ is an interface element and $B$ is one of its interface edges.
\end{lemma}
\begin{proof}
We give a proof for linear IFEs, and the arguments can be used to establish this error bound for
bilinear IFEs.
Without loss of generality, let $K = \bigtriangleup A_1A_2A_3$ be an interface triangle such that
\begin{align}\label{eq: triangle vertices}
A_1 = (0,h), A_2 = (0,0), A_3 = (h,0)
\end{align}
and assume that the interface points on the edge of $K$ are
\begin{align}\label{eq: triangle DE}
D = (0,d), E = (e,h-e)
\end{align}
with $A_1 \in K^+$. Also, we only discuss $B = \overline{A_1A_2}$; the estimate on the other
interface edge can be established similarly.
By Lemma 3.3 and Lemma 3.4 in \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, for every $X \in \overline{DA_2}$, we have
\begin{align}
(I_{h}u(X) - u(X))_p &= \big(N^-(D) - N_{\overline{DE}}\big)\nabla u^-(X)(A_1-D) \pderiv{\phi_1(X)}{p}
\nonumber \\
& +I_1(X) \pderiv{\phi_1(X)}{p} + I_2(X)\pderiv{\phi_2(X)}{p} + I_3(X) \pderiv{\phi_3(X)}{p},~~p = x, y,
\label{eq:interp_error_bnd_edge_1}
\end{align}
where
\begin{align*}
N^-(D) = \begin{pmatrix}
n_y(D)^2 + \rho n_x(D)^2 & (\rho - 1)n_x(D)n_y(D) \\
(\rho - 1)n_x(D)n_y(D) & n_x(D)^2 + \rho n_y(D)^2
\end{pmatrix},~N_{\overline{DE}} = \begin{pmatrix}
{\bar n}_y^2 + \rho {\bar n}_x^2 & (\rho - 1){\bar n}_x {\bar n}_y \\
(\rho - 1){\bar n}_x {\bar n}_y & {\bar n}_x^2 + \rho {\bar n}_y^2
\end{pmatrix},
\end{align*}
where $\rho = \beta^- /\beta^+$, ${\bf n}(X) = (n_x(X), n_y(X))^T$ is the normal to $\Gamma$ at $X$,
${\bf n}(\overline{DE}) = ({\bar n}_x, {\bar n}_y)^T$ is the normal of $\overline{DE}$,
and
\begin{align}\label{eq: I1}
I_1(X) & = (1-t_d)\big(N^-(D) - I)\int_0^1 \oderiv{\nabla u^-}{t}(tD + (1-t)X)\cdot(A_1-X) dt \nonumber\\
& ~+ \int_0^{t_d}(1-t) \oderivm{u^-}{t}{2}(tA_1 + (1-t)X) dt + \int_{t_d}^1(1-t) \oderivm{u^+}{t}{2}(tA_1 + (1-t)X) dt,
\end{align}
\begin{align}\label{eq: I2I3}
I_i(X) &= \int_0^1(1-t) \oderivm{u^-}{t}{2}(tA_2 + (1-t)X) dt, ~~~~~i = 2,3,
\end{align}
where $D = t_dA_1 + (1-t_d)X = X + t_d(A_1-X)$.
By Lemma 3.1 and Theorem 2.4 of \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, we have
\begin{align}
& \int_{\overline{DA_2}}\left(\big(N^-(D) - N_{\overline{DE}}\big)\nabla u^-(X)(A_1-D) \pderiv{\phi_1(X)}{p}\right)^2dX \leq C h^3 \norm{u}_{H^3(\Omega^-)}^2,
\label{eq:interp_error_bnd_edge_2}
\end{align}
for $p=x,y$. By direct calculations we have
\begin{align*}
&\abs{\oderiv{\nabla u^-}{t}(tD + (1-t)X)\cdot(A_1-X)} \leq \big(\abs{u_{xx}^-(tD + (1-t)X)(x_d - x)(x_1-x)} \\
& \quad + \abs{u_{xy}^-(tD + (1-t)X)(y_d - y)(x_1-x)} + \abs{u_{yx}^-(tD + (1-t)X)(x_d - x)(y_1-y)} \\
& \quad+ \abs{u_{yy}^-(tD + (1-t)X)(y_d-y)(y_1-y)}\big),
\end{align*}
and
\begin{align*}
&\abs{\oderivm{u^s}{t}{2}(tA_i + (1-t)X)} \\
\leq & \big(\abs{u_{xx}^s(tA_i + (1-t)X)(x_i - x)(x_i-x)}
+ \abs{u_{xy}^s(tA_i + (1-t)X)(y_i - y)(x_i-x)} \\
& ~~+ \abs{u_{yx}^s(tA_i + (1-t)X)(x_i-x)(y_i-y)}
+ \abs{u_{yy}^s(tA_i + (1-t)X)(y_i - y)(y_i-y)}\big)
\end{align*}
where $s = \pm, i = 1, 2, 3$. Let $I_{1,i}(X), i = 1, 2, 3$ be three integrals in $I_1(X)$, respectively. Then,
by Theorem 2.4 of \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, we have
\begin{eqnarray}
\int_{\overline{DA_2}} \left(I_{1,1}(X)\pderiv{\phi_1(X)}{p}\right)^2 dX
&\leq& \frac{C}{h^2} \int_0^d (1-t_d)^2 \int_0^1 \abs{u_{yy}^-(0, ty_d + (1-t)y)(y_d - y)(h-y)}^2 dt dy \nonumber \\
&\leq& Ch^2 \int_0^d\abs{u_{yy}^-(0,z)}^2 dz \leq C h^2 \norm{u}_{H^3(\Omega^-)}^2,\label{eq:interp_error_bnd_edge_3}
\end{eqnarray}
\begin{eqnarray}
\int_{\overline{DA_2}} \left(I_{1,2}(X)\pderiv{\phi_1(X)}{p}\right)^2 dX
&\leq& C \int_0^d \int_0^{t_d}\abs{u_{yy}^-(0,y + t(h-y))}^2(h-y)^2(1-t)^2 dt dy \nonumber \\
&\leq& Ch^2 \int_0^d\abs{u_{yy}^-(0,z)}^2 dz \leq C h^2 \norm{u}_{H^3(\Omega^-)}^2,\label{eq:interp_error_bnd_edge_4}
\end{eqnarray}
\begin{eqnarray}
\int_{\overline{DA_2}} \left(I_{1,3}(X)\pderiv{\phi_1(X)}{p}\right)^2 dX &\leq& C \int_0^d \int_{t_d}^1\abs{u_{yy}^+(0,y + t(h-y))}^2(h-y)^2(1-t)^2 dt dy \nonumber \\
&\leq& C h^2 \int_{d}^h\abs{u_{yy}^+(0,z)}^2 dz \leq C h^2 \norm{u}_{H^3(\Omega^+)}^2.\label{eq:interp_error_bnd_edge_5}
\end{eqnarray}
Similarly, we can show that
\begin{eqnarray}
\int_{\overline{DA_2}} \left(I_2(X)\pderiv{\phi_2(X)}{p}\right)^2 dX
&\leq& C \int_0^d \int_0^1 \abs{u_{yy}^-(0,(1-t)y)}^2 y^2(1-t)^2 dt dy \nonumber \\
&\leq& Ch^2 \int_0^d \abs{u_{yy}^-(0,z)}^2dz \leq Ch^2 \norm{u}_{H^3(\Omega^-)}^2. \label{eq:interp_error_bnd_edge_6}
\end{eqnarray}
For the term involving $I_3(X)$, we have
\begin{eqnarray}\label{eq:interp_error_bnd_edge_7}
&& \qquad\qquad\qquad\int_{\overline{DA_2}} \left(I_3(X)\pderiv{\phi_3(X)}{p}\right)^2 dX\\
&\leq& C \left(h^2\int_0^d \int_0^1 \abs{u_{xx}^-(th,(1-t)y)}^2 (1-t)^2 dt dy
+ \int_0^d \int_0^1 \abs{u_{xy}^-(th,(1-t)y)}^2y^2(1-t)^2 dt dy \right. \nonumber \\
&+& \left.\int_0^d \int_0^1 \abs{u_{yx}^-(th,(1-t)y)}^2y^2(1-t)^2 dt dy + \int_0^d \int_0^1 \abs{u_{yy}^-(th,(1-t)y)}^2 y^2(1-t)^2 dt dy \right).\nonumber
\end{eqnarray}
Let $th = p, (1-t)y = q$, then we have
\begin{align*}
t = \frac{p}{h}, y = \frac{q}{1-t} = \frac{q}{1- p/h},
\end{align*}
and,
\begin{align*}
&\frac{q^2}{h-p} = \frac{(1-t)^2y^2}{h - th} = (1-t)y\,\frac{y}{h}, \qquad \abs{(1-t)y} \leq h, \qquad \frac{y}{h} \leq 1.
\end{align*}
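The factor $1/(h-p)$ appearing below comes from the Jacobian of this substitution: since $p = th$ and $q = (1-t)y$,
\begin{align*}
\frac{\partial(p,q)}{\partial(t,y)} =
\begin{vmatrix}
h & 0 \\
-y & 1-t
\end{vmatrix}
= h(1-t) = h - p,
\qquad \text{so} \qquad
dt\,dy = \frac{dp\,dq}{h-p},
\end{align*}
and the substitution maps the region $(t,y) \in (0,1)\times(0,d)$ onto the triangle $\bigtriangleup_{DA_2A_3}$.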
Hence,
\begin{align*}
&h^2\int_0^d \int_0^1 \abs{u_{xx}^-(th,(1-t)y)}^2 (1-t)^2 dt dy = \iint \limits_{\bigtriangleup_{DA_2A_3}} \abs{u_{xx}^-(p,q)}^2 \frac{h^2(1-p/h)^2}{h-p} dp dq \\
& \leq \iint \limits_{\bigtriangleup_{DA_2A_3}} \abs{u_{xx}^-(p,q)}^2 (h-p) dp dq \leq C h \norm{u}_{\tilde H^2(K)}^2, \\
&\int_0^d \int_0^1 \abs{u_{xy}^-(th,(1-t)y)}^2 y^2(1-t)^2 dt dy = \iint \limits_{\bigtriangleup_{DA_2A_3}} \abs{u_{xy}^-(p,q)}^2 \frac{q^2}{h-p} dp dq
\leq C h \norm{u}_{\tilde H^2(K)}^2, \\
&\int_0^d \int_0^1 \abs{u_{yy}^-(th,(1-t)y)}^2 y^2(1-t)^2 dt dy =\iint \limits_{\bigtriangleup_{DA_2A_3}} \abs{u_{yy}^-(p,q)}^2 \frac{q^2}{h-p} dp dq
\leq C h \norm{u}_{\tilde H^2(K)}^2.
\end{align*}
Using these estimates, together with the identical bound for the $u_{yx}^-$ term, in \eqref{eq:interp_error_bnd_edge_7}, we have
\begin{align}
&\int_{\overline{DA_2}} \left(I_3(X)\pderiv{\phi_3(X)}{p}\right)^2 dX \leq C h \norm{u}_{\tilde H^2(K)}^2.
\label{eq:interp_error_bnd_edge_8}
\end{align}
Finally, the inequality \eqref{eq:interp_error_bnd_edge} follows from putting the estimates
\eqref{eq:interp_error_bnd_edge_2}--\eqref{eq:interp_error_bnd_edge_6} and \eqref{eq:interp_error_bnd_edge_8}
into \eqref{eq:interp_error_bnd_edge_1}.
\end{proof}
\begin{remark}\label{lem:interp_error_bnd_edge_inf}
For every
$u \in \tW^{2,\infty}(\Omega)$ satisfying the interface jump conditions \eqref{eq:bvp_int_1} and \eqref{eq:bvp_int_2},
we can also show that there exists a constant $C$ independent of the interface such that the interpolation $I_hu$ in the IFE space
$S_h(\Omega)$ fulfills
\begin{align}\label{eq:interp_error_bnd_edge_inf}
\norm{\beta (\nabla(u-I_hu))|_{K} \cdot \bfn_B}_{L^2(B)}^2 \leq Ch^3\norm{u}_{\tilde W^{2,\infty}(\Omega)}^2,
\end{align}
where $K$ is an interface element and $B$ is one of its interface edges. A proof of
\eqref{eq:interp_error_bnd_edge_inf} is given in Appendix \ref{remark_1}; it uses arguments similar to
those for proving Lemma \ref{lem:interp_error_bnd_edge}.
\end{remark}
Now, we are ready to derive the error bound for IFE solutions generated by the partially penalized IFE method
\eqref{eq:IFE_eq}.
\begin{theorem}\label{th:error_bnd_H1}
Assume that the exact solution $u$ to the interface problem \eqref{bvp_pde}-\eqref{eq:bvp_int_2} is
in $\tH^3(\Omega)$ and $u_h$ is its IFE solution generated with
$\alpha = 1$ on a Cartesian (either triangular or rectangular) mesh $\mathcal{T}_h$. Then there exists
a constant $C$ such that
\begin{align}
\norm{u - u_h}_h \leq C h \norm{u}_{\tilde H^3(\Omega)}. \label{eq:error_bnd_H1}
\end{align}
\end{theorem}
\begin{proof}
From the weak form \eqref{eq:weak_form} and the IFE equation \eqref{eq:IFE_eq} we have
\begin{align}
&a_h(v_h, u_h - w_h) = a_h(v_h, u-w_h), ~~\forall v_h, w_h \in S_h(\Omega). \label{eq:error_bnd_H1_1}
\end{align}
Letting $v_h = u_h - w_h$ in \eqref{eq:error_bnd_H1_1} and using the coercivity of $a_h(\cdot, \cdot)$, we have
\begin{eqnarray}
&& \qquad\qquad\kappa \norm{u_h - w_h}_h^2 \leq \abs{a_h(u_h - w_h, u_h - w_h)} = \abs{a_h(u_h - w_h, u-w_h)} \label{eq:error_bnd_H1_2}\\
&& \leq \left|\sum_{K \in \cT_h} \int_K \beta \nabla(u_h - w_h) \cdot \nabla(u - w_h) dX\right| +
\left|\sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla(u-w_h) \cdot \bfn_B\right\}[u_h - w_h] ds \right| \nonumber\\
&& + \left| \epsilon \sum_{B \in {\mathring \cE}_h^i} \int_B \left\{\beta \nabla(u_h-w_h) \cdot \bfn_B\right\}[u - w_h] ds\right|
+ \left|\sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [u_h-w_h][u-w_h] ds \right|.
\nonumber
\end{eqnarray}
We denote the four terms on the right of \eqref{eq:error_bnd_H1_2} by $Q_i, i = 1, 2, 3, 4$. Then,
\begin{eqnarray}
Q_1 &\leq& \left(\sum_{K \in \cT_h} \norm{\beta^{1/2} \nabla(u - w_h)}_{L^2(K)}^2\right)^{1/2}\left(\sum_{K \in \cT_h} \norm{\beta^{1/2} \nabla(u_h - w_h)}_{L^2(K)}^2\right)^{1/2} \nonumber \\
&\leq& \frac{3}{2\kappa}\max(\beta^-, \beta^+) \norm{\nabla(u-w_h)}_{L^2(\Omega)}^2 + \frac{\kappa}{6}\sum_{K \in \cT_h} \norm{\beta^{1/2} \nabla(u_h - w_h)}_{L^2(K)}^2 \nonumber \\
&\leq& C\norm{\nabla(u-w_h)}_{L^2(\Omega)}^2 + \frac{\kappa}{6}\norm{u_h - w_h}_h^2. \label{eq:error_bnd_H1_Q1}
\end{eqnarray}
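Here, and repeatedly below, we use Young's inequality in the weighted form
\begin{align*}
ab \leq \frac{3}{2\kappa}a^2 + \frac{\kappa}{6}b^2, \qquad a, b \geq 0,
\end{align*}
which follows from expanding $0 \leq \big(\sqrt{3/\kappa}\,a - \sqrt{\kappa/3}\,b\big)^2$; the weights are chosen so that the terms involving $\norm{u_h - w_h}_h^2$ can later be absorbed into the left-hand side of \eqref{eq:error_bnd_H1_2}.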
\begin{eqnarray}
Q_2 &\leq& \frac{\kappa}{6}\sum_{B \in {\mathring \cE}_h^i}\frac{\sigma_B^0}{\abs{B}^\alpha}\norm{[u_h - w_h]}_{L^2(B)}^2 + C\sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}\norm{\left\{\beta \nabla(u-w_h) \cdot \bfn_B\right\}}_{L^2(B)}^2 \nonumber \\
&\leq& \frac{\kappa}{6}\norm{u_h - w_h}_h^2 + C\sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}\norm{\left\{\beta \nabla(u-w_h) \cdot \bfn_B\right\}}_{L^2(B)}^2. \label{eq:error_bnd_H1_Q2}
\end{eqnarray}
To bound $Q_3$, for each $B \in {\mathring \cE}_h^i$, we let $K_{B,i} \in \cT_h$, $i = 1, 2$, be such that $B = K_{B,1}\cap K_{B,2}$. First,
by the standard trace inequality on elements for $H^1$ functions, we have
\begin{align*}
\norm{[u - w_h]}_{L^2(B)} &\leq \norm{(u-w_h)|_{K_{B,1}}}_{L^2(B)} + \norm{(u-w_h)|_{K_{B,2}}}_{L^2(B)} \\
&\leq Ch^{-1/2}\left(\norm{u - w_h}_{L^2(K_{B,1})} + h\norm{\nabla(u - w_h)}_{L^2(K_{B,1})}\right) \\
&\hspace{0.2in} + Ch^{-1/2}\left(\norm{u - w_h}_{L^2(K_{B,2})} + h\norm{\nabla(u - w_h)}_{L^2(K_{B,2})}\right).
\end{align*}
Then, applying the trace inequalities established in
Lemma \ref{lem:linear_IFE_trace_ineq} or Lemma \ref{lem:blinear_IFE_trace_ineq}, depending on
whether linear or bilinear IFEs are considered, we have
\begin{eqnarray*}
&& \norm{\left\{\beta \nabla(u_h-w_h) \cdot \bfn_B\right\}}_{L^2(B)} \\
&\leq& \frac{C}{2}h^{-1/2}\left(\norm{\sqrt{\beta} \nabla (u_h-w_h)}_{L^2(K_{B,1})} + \norm{\sqrt{\beta} \nabla (u_h-w_h)}_{L^2(K_{B,2})}\right).
\end{eqnarray*}
Hence
\begin{align}
Q_3 &\leq \abs{\epsilon} \sum_{B \in {\mathring \cE}_h^i} \norm{\left\{\beta \nabla(u_h-w_h) \cdot \bfn_B\right\}}_{L^2(B)} \norm{[u - w_h]}_{L^2(B)} \nonumber \\
&\leq Ch^{-2}\left(\norm{u - w_h}_{L^2(\Omega)}^2 + h^2\norm{\nabla(u - w_h)}_{L^2(\Omega)}^2 \right)
+ \frac{\kappa}{6}\norm{u_h-w_h}_h^2.
\label{eq:error_bnd_H1_Q3}
\end{align}
To bound $Q_4$, by the standard trace inequality, we have
\begin{align}
&\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [u-w_h][u-w_h] ds= \frac{\sigma_B^0}{\abs{B}^\alpha}\norm{[u-w_h]}_{L^2(B)}^2 \nonumber \\
&\leq \frac{\sigma_B^0}{\abs{B}^\alpha} \left(\norm{(u-w_h)|_{K_{B,1}}}_{L^2(B)} + \norm{(u-w_h)|_{K_{B,2}}}_{L^2(B)}\right)^2 \nonumber \\
&\leq \frac{\sigma_B^0}{\abs{B}^\alpha}C\abs{B}\abs{K_{B,1}}^{-1}\big(\norm{u-w_h}_{L^2(K_{B,1})} +
h \norm{\nabla (u -w_h)}_{L^2(K_{B,1})}\big)^2 \nonumber \\
&\hspace{0.1in} + \frac{\sigma_B^0}{\abs{B}^\alpha}C\abs{B}\abs{K_{B,2}}^{-1}\big(\norm{u-w_h}_{L^2(K_{B,2})} +h \norm{\nabla (u -w_h)}_{L^2(K_{B,2})}\big)^2 \nonumber \\
& \leq Ch^{-(\alpha+1)}\big(\norm{u-w_h}_{L^2(K_{B,1})} +
h \norm{\nabla (u -w_h)}_{L^2(K_{B,1})}\big)^2 \nonumber \\
&\hspace{0.1in} + Ch^{-(\alpha+1)}\big(\norm{u-w_h}_{L^2(K_{B,2})} +h \norm{\nabla (u -w_h)}_{L^2(K_{B,2})}\big)^2, \label{eq:error_bnd_H1_7}
\end{align}
where we have used the facts that
\begin{align*}
&\abs{B} = h \text{~~or~~} \abs{B} = \sqrt{2}h, \qquad \abs{K_{B,i}} = \frac{h^2}{2} \text{~~or~~} \abs{K_{B,i}} = h^2, \qquad i = 1, 2.
\end{align*}
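Indeed, with these mesh facts, for both the triangular and rectangular cases we have
\begin{align*}
\frac{\sigma_B^0}{\abs{B}^\alpha}\,\abs{B}\,\abs{K_{B,i}}^{-1}
\leq C\,\frac{h}{h^{\alpha}h^{2}} = C h^{-(\alpha+1)}, \qquad i = 1, 2,
\end{align*}
which is the scaling used in \eqref{eq:error_bnd_H1_7}.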
Then
\begin{align}
Q_4 &\leq \sum_{B \in {\mathring \cE}_h^i}\left(\frac{\kappa}{6} \int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [u_h-w_h][u_h-w_h] ds
+ \frac{3}{2\kappa} \int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [u-w_h][u-w_h] ds \right) \nonumber \\
&\leq \frac{\kappa}{6}\norm{u_h-w_h}_h^2 + \frac{3}{2\kappa}\sum_{B \in {\mathring \cE}_h^i}\int_B \frac{\sigma_B^0}{\abs{B}^\alpha} [u-w_h][u-w_h] ds \nonumber \\
&\leq \frac{\kappa}{6}\norm{u_h-w_h}_h^2 + Ch^{-(\alpha+1)}\big(\norm{u-w_h}_{L^2(\Omega)}^2 +
h^2 \norm{\nabla (u -w_h)}_{L^2(\Omega)}^2\big). \label{eq:error_bnd_H1_Q4}
\end{align}
Then, we put all these bounds for $Q_i, i = 1, 2, 3, 4$ in \eqref{eq:error_bnd_H1_2} to have
\begin{eqnarray*}
\norm{u_h - w_h}_h^2 &\leq& C\norm{\nabla(u-w_h)}_{L^2(\Omega)}^2 + C\sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}\norm{\left\{\beta \nabla(u-w_h) \cdot \bfn_B\right\}}_{L^2(B)}^2 \\
&& +~ Ch^{-2}\left(\norm{u - w_h}_{L^2(\Omega)}^2 + h^2\norm{\nabla(u - w_h)}_{L^2(\Omega)}^2 \right)\\
&& +~ Ch^{-(\alpha+1)}\big(\norm{u-w_h}_{L^2(\Omega)}^2 +
h^2 \norm{\nabla (u -w_h)}_{L^2(\Omega)}^2\big).
\end{eqnarray*}
Hence, letting $w_h = I_hu$ in the above and using the optimal approximation capability of the linear and bilinear
IFE spaces \eqref{eq:intp_error_bnd},
we obtain
\begin{align}
&\norm{u_h - I_hu}_h^2 \leq C\big(h^2+h^{4-(\alpha+1)}\big)\norm{u}_{\tilde H^2(\Omega)}^2
+C\sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}\norm{\left\{\beta \nabla(u-I_hu) \cdot \bfn_B\right\}}_{L^2(B)}^2. \label{eq:error_bnd_H1_3}
\end{align}
For the second term on the right in \eqref{eq:error_bnd_H1_3}, we use
Lemma \ref{lem:interp_error_bnd_edge} to bound it:
\begin{align}
&\sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}\norm{\left\{\beta \nabla(u-I_hu) \cdot \bfn_B\right\}}_{L^2(B)}^2 \nonumber \\
\leq & \sum_{B \in {\mathring \cE}_h^i}\frac{\abs{B}^\alpha}{\sigma_B^0}
C\big(h^2 \norm{u}_{3,\Omega}^2 + h\norm{u}_{2, K_{B,1}}^2 + h\norm{u}_{2, K_{B,2}}^2\big)
\leq Ch^{1+\alpha}\norm{u}_{\tilde H^3(\Omega)}^2.
\label{eq:error_bnd_H1_4}
\end{align}
Here we have used the fact that the number of interface elements is $O(h^{-1})$. Hence, for
$\alpha = 1$, we can combine \eqref{eq:error_bnd_H1_3} and \eqref{eq:error_bnd_H1_4} to obtain
\begin{align}
&\norm{u_h - I_hu}_h \leq Ch\norm{u}_{\tilde H^3(\Omega)}.
\label{eq:error_bnd_H1_5}
\end{align}
In addition, using the optimal approximation capability of linear and
bilinear IFE spaces \eqref{eq:intp_error_bnd}
and \eqref{eq:error_bnd_H1_7}, we can show that
\begin{align}
\norm{u - I_hu}_h\leq Ch\norm{u}_{\tilde H^2(\Omega)}.
\label{eq:error_bnd_H1_6}
\end{align}
Finally, the error estimate \eqref{eq:error_bnd_H1} follows from applying
\eqref{eq:error_bnd_H1_5} and \eqref{eq:error_bnd_H1_6} to the following standard inequality:
\begin{equation*}
\norm{u - u_h}_h \leq \norm{u-I_hu}_h + \norm{u_h - I_hu}_h.
\end{equation*}
\end{proof}
\begin{remark}\label{th:error_bnd_H1_inf}
If the exact solution $u$ to the interface problem \eqref{bvp_pde}-\eqref{eq:bvp_int_2} is
in the function space $\tW^{2,\infty}(\Omega)$, then, using Remark \ref{lem:interp_error_bnd_edge_inf} and arguments similar to those for the proof of Theorem \ref{th:error_bnd_H1}, we can show that
the IFE solution $u_h$ generated with
$\alpha = 1$ on a Cartesian (either triangular or rectangular) mesh $\mathcal{T}_h$ has the following
error estimate:
\begin{align}
\norm{u - u_h}_h \leq C\big(h \norm{u}_{\tilde H^2(\Omega)} + h^{3/2}\norm{u}_{\tW^{2,\infty}(\Omega)}\big). \label{eq:error_bnd_H1_inf}
\end{align}
\end{remark}
\section{Numerical Examples}
In this section, we present a couple of numerical examples to demonstrate features of the
partially penalized IFE methods for elliptic interface problems.
For comparison, the interface problem to be solved in the numerical examples is the same as the one in \cite{XHe_TLin_YLin_Bilinear_Approximation}. Specifically, we consider the interface problem \eqref{bvp_pde}-\eqref{eq:bvp_int_2}
except that \eqref{eq:bvp_bc} is replaced by the non-homogeneous boundary condition $u|_{\partial \Omega} = g$. Let the solution domain $\Omega$ be the open square $(-1,1)\times(-1,1)$ and the interface $\Gamma$ be the circle centered at the origin with radius $r_0 = \pi/6.28$, which separates $\Omega$ into two sub-domains, denoted by $\Omega^-$ and $\Omega^+$, \emph{i.e.},
\begin{equation*}
\Omega^- = \{(x,y): x^2+y^2 < r_0^2\}, ~~~\text{and}~~~
\Omega^+ = \{(x,y): x^2+y^2 > r_0^2\}.
\end{equation*}
The exact solution $u$ to the interface problem is chosen as follows
\begin{equation}\label{eq: true solution}
u(x,y) =
\left\{
\begin{array}{ll}
\frac{r^\alpha}{\beta^-}, & \text{if~} r \leq r_0, \\
\frac{r^\alpha}{\beta^+} + r_0^\alpha\left(\frac{1}{\beta^-} - \frac{1}{\beta^+}\right), & \text{otherwise},
\end{array}
\right.
\end{equation}
where $\alpha = 5$ and $r = \sqrt{x^2+y^2}$. The functions $f$ and $g$ in this interface problem are consequently
determined by $u$. The Cartesian meshes $\mathcal{T}_h$, $h>0$, are formed by
partitioning $\Omega$ into $N\times N$ congruent squares of size $h = 2/N$ for a set of integer values of $N$.
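One can verify directly that this $u$ satisfies both interface jump conditions: at $r = r_0$,
\begin{align*}
u^-(r_0) = \frac{r_0^\alpha}{\beta^-}
= \frac{r_0^\alpha}{\beta^+} + r_0^\alpha\left(\frac{1}{\beta^-} - \frac{1}{\beta^+}\right) = u^+(r_0),
\qquad
\beta^- \frac{\partial u^-}{\partial r}\bigg|_{r_0} = \alpha r_0^{\alpha-1}
= \beta^+ \frac{\partial u^+}{\partial r}\bigg|_{r_0}.
\end{align*}
Moreover, since $\Delta r^\alpha = \alpha^2 r^{\alpha-2}$ in two dimensions, the source term is $f = -\alpha^2 r^{\alpha-2} = -25\,r^{3}$ in both sub-domains.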
To describe our numerical results, we rewrite the bilinear form in the partially penalized IFE methods \eqref{eq:IFE_eq} as
follows
\begin{eqnarray}\label{eq: bilinear form}
a_{h} (u_h, v_h) &=& \sum_{T\in \mathcal{T}_h} \int_T\beta \nabla u_h \cdot \nabla v_h dX + \delta
\sum_{B \in {\mathring \cE}_h^i} \int_B \{\beta\nabla u_h\cdot\mathbf{n}_B\}[v_h]ds \nonumber \\
&&~~~~ + \epsilon
\sum_{B \in {\mathring \cE}_h^i} \int_B \{\beta\nabla v_h\cdot\mathbf{n}_B\}[u_h]ds +
\sum_{B \in {\mathring \cE}_h^i} \frac{\sigma_B^0}{|B|}\int_{B}[u_h][v_h]ds.
\end{eqnarray}
When $\delta = 0$, $\epsilon = 0$, and $\sigma^0_B = 0$ for all $B\in {\mathring \cE}_h^i$, the partially penalized bilinear IFE method becomes the \emph{classic} bilinear IFE method proposed in \cite{XHe_Thesis_Bilinear_IFE, XHe_TLin_YLin_Bilinear_Approximation}. When $\delta = -1$ and $\sigma^0_B > 0$, we call the partially penalized IFE methods corresponding to $\epsilon = -1, 0, 1$, respectively, the symmetric, incomplete, and nonsymmetric PPIFE methods because of their similarity to the corresponding DG methods \cite{Riviere_DG_book}.
In our numerical experiment, we use the parameter $\alpha = 1$ as suggested by
our error estimation. Also, we use $\sigma_B^0 = 10\max(\beta^-,\beta^+)$ in the SPPIFE and IPPIFE schemes and $\sigma_B^0 = 1$ in the NPPIFE scheme.
In the first example, we test these IFE methods on the above interface problem whose diffusion coefficient has a large jump $(\beta^-,\beta^+) = (1,10000)$. Because of the large value of $\beta^+$, the exact solution $u(x,y)$ varies little in $\Omega^+$, which might be one reason
this example is not overly difficult for any of the IFE methods; indeed, they all perform comparably well, see Tables \ref{table: IFE solution H1 error 1 10000}, \ref{table: IFE solution L2 error 1 10000}, and \ref{table: IFE solution infinity error 1 10000}. Specifically, all the partially penalized IFE methods converge optimally in the $H^1$-norm, as predicted by the
error analysis in the previous section. The classic IFE method also converges optimally in the $H^1$-norm, even though the related error bound has not been rigorously established yet. The data in Table \ref{table: IFE solution L2 error 1 10000} demonstrate that all the IFE methods converge in the $L^2$-norm at the expected optimal rate. Even though they all seem to converge in the $L^\infty$-norm, the data
in Table \ref{table: IFE solution infinity error 1 10000} do not reveal any definite rates.
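The convergence rates reported in all tables are computed from the errors on consecutive meshes; since $h$ halves as $N$ doubles,
\begin{align*}
\text{rate} = \log_2\left(\frac{e_N}{e_{2N}}\right),
\end{align*}
where $e_N$ denotes the error on the $N\times N$ mesh. For instance, $\log_2(1.9495\mathrm{E}{-2}/1.0539\mathrm{E}{-2}) \approx 0.8873$, which matches the first rate entry in Table \ref{table: IFE solution H1 error 1 10000}.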
\begin{table}[th]
\caption{Comparison of $|u_h - u|_{H^1(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10000$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate \\
\hline
$20$ &$1.9495E{-2}$ & &$1.9538E{-2}$ & &$1.9538E{-2}$ & &$1.9482E{-2}$ & \\
$40$ &$1.0539E{-2}$ & 0.8873 &$1.0601E{-2}$ & 0.8821 &$1.0602E{-2}$ & 0.8820 &$1.0562E{-2}$ & 0.8832 \\
$80$ &$5.4219E{-3}$ & 0.9590 &$5.5838E{-3}$ & 0.9249 &$5.5844E{-3}$ & 0.9248 &$5.4945E{-3}$ & 0.9428 \\
$160$ &$2.7157E{-3}$ & 0.9975 &$2.7720E{-3}$ & 1.0103 &$2.7724E{-3}$ & 1.0103 &$2.7291E{-3}$ & 1.0096 \\
$320$ &$1.3535E{-3}$ & 1.0046 &$1.3887E{-3}$ & 0.9973 &$1.3890E{-3}$ & 0.9970 &$1.3613E{-3}$ & 1.0035 \\
$640$ &$6.7876E{-4}$ & 0.9957 &$6.9915E{-4}$ & 0.9900 &$6.9923E{-4}$ & 0.9902 &$6.7915E{-4}$ & 1.0032 \\
$1280$ &$3.4023E{-4}$ & 0.9964 &$3.4439E{-4}$ & 1.0216 &$3.4447E{-4}$ & 1.0214 &$3.3979E{-4}$ & 0.9991 \\
$2560$ &$1.7079E{-4}$ & 0.9942 &$1.7036E{-4}$ & 1.0155 &$1.7038E{-4}$ & 1.0156 &$1.6994E{-4}$ & 0.9996 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution H1 error 1 10000}
\end{table}
\begin{table}[th]
\caption{Comparison of $\|u_h - u\|_{L^2(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10000$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate \\
\hline
$20$ &$1.1175E{-3}$ & &$1.1273E{-3}$ & &$1.1276E{-3}$ & &$1.1179E{-3}$ & \\
$40$ &$2.8572E{-4}$ & 1.9676 &$2.9171E{-4}$ & 1.9503 &$2.9181E{-4}$ & 1.9501 &$2.8567E{-4}$ & 1.9683 \\
$80$ &$7.5990E{-5}$ & 1.9107 &$8.5150E{-5}$ & 1.7764 &$8.5223E{-5}$ & 1.7757 &$7.6342E{-5}$ & 1.9038 \\
$160$ &$1.8116E{-5}$ & 2.0685 &$2.2589E{-5}$ & 1.9144 &$2.2645E{-5}$ & 1.9120 &$1.8098E{-5}$ & 2.0766 \\
$320$ &$4.4753E{-6}$ & 2.0172 &$6.1332E{-6}$ & 1.8809 &$6.1629E{-6}$ & 1.8775 &$4.5193E{-6}$ & 2.0017 \\
$640$ &$1.0969E{-6}$ & 2.0286 &$1.6502E{-6}$ & 1.8940 &$1.6502E{-6}$ & 1.9010 &$1.1235E{-6}$ & 2.0081 \\
$1280$ &$2.6689E{-7}$ & 2.0391 &$3.7104E{-7}$ & 2.1530 &$3.7275E{-7}$ & 2.1464 &$2.7680E{-7}$ & 2.0211 \\
$2560$ &$6.3940E{-8}$ & 2.0615 &$7.3251E{-8}$ & 2.3407 &$7.3657E{-8}$ & 2.3393 &$6.8809E{-8}$ & 2.0082 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution L2 error 1 10000}
\end{table}
\begin{table}[h]
\caption{Comparison of $\|u_h - u\|_{L^\infty(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10000$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate \\
\hline
$20$ &$8.8830E{-4}$ & &$9.9803E{-4}$ & &$9.9820E{-4}$ & &$8.4663E{-4}$ & \\
$40$ &$4.3525E{-4}$ & 1.0292 &$5.2256E{-4}$ & 0.9335 &$5.2185E{-4}$ & 0.9357 &$4.2059E{-4}$ & 1.0093 \\
$80$ &$1.6536E{-4}$ & 1.3962 &$2.2692E{-4}$ & 1.2034 &$2.2745E{-4}$ & 1.1981 &$1.6816E{-4}$ & 1.3225 \\
$160$ &$7.4603E{-5}$ & 1.1483 &$9.8441E{-5}$ & 1.2049 &$9.8619E{-5}$ & 1.2056 &$7.2067E{-5}$ & 1.2225 \\
$320$ &$1.2972E{-5}$ & 2.5238 &$3.6045E{-5}$ & 1.4494 &$3.6421E{-5}$ & 1.4371 &$1.4362E{-5}$ & 2.3271 \\
$640$ &$6.1332E{-6}$ & 1.0807 &$1.7607E{-5}$ & 1.0337 &$1.7531E{-5}$ & 1.0548 &$6.4702E{-6}$ & 1.1504 \\
$1280$ &$2.5136E{-6}$ & 1.2869 &$5.1961E{-6}$ & 1.7606 &$5.1799E{-6}$ & 1.7589 &$1.6993E{-6}$ & 1.9289 \\
$2560$ &$1.2810E{-6}$ & 0.9725 &$1.0242E{-6}$ & 2.3430 &$1.0274E{-6}$ & 2.3339 &$9.1534E{-7}$ & 0.8926 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution infinity error 1 10000}
\end{table}
However, we have observed that the partially penalized IFE methods outperform the classic IFE method in many situations. We demonstrate this by numerical results generated by these IFE methods for the interface problem whose diffusion coefficient has a typical moderate jump $(\beta^-,\beta^+) = (1,10)$. IFE solution errors are listed in Tables \ref{table: IFE solution H1 error 1 10}, \ref{table: IFE solution L2 error 1 10}, and \ref{table: IFE solution infinity error 1 10}. From the data in Table \ref{table: IFE solution H1 error 1 10}, we can see that all the partially penalized IFE methods maintain their predicted $O(h)$ convergence rate
in the $H^1$-norm over all the meshes up to the finest one, while the classic IFE method slightly loses its convergence rate in the $H^1$-norm when the mesh becomes very fine. The effects of the penalization are more prominent when the errors are gauged in the $L^2$-norm and $L^\infty$-norm. According to the data
in Table \ref{table: IFE solution L2 error 1 10}, all the partially penalized IFE methods converge at the optimal rate in the $L^2$-norm, but the $L^2$-norm rate of
the classic IFE method clearly degenerates when the mesh becomes finer. A similar phenomenon for the $L^\infty$-norm convergence can be observed from the data
in Table \ref{table: IFE solution infinity error 1 10}.
\begin{table}[ht]
\caption{Comparison of $|u_h - u|_{H^1(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate
& $|\cdot|_{H^1}$ & rate \\
\hline
$20$ &$6.4758E{-2}$ & &$6.4751E{-2}$ & &$6.4753E{-2}$ & &$6.4719E{-2}$ & \\
$40$ &$3.2779E{-2}$ & 0.9823 &$3.2650E{-2}$ & 0.9878 &$3.2651E{-2}$ & 0.9878 &$3.2637E{-2}$ & 0.9877 \\
$80$ &$1.6596E{-2}$ & 0.9723 &$1.6386E{-2}$ & 0.9947 &$1.6386E{-2}$ & 0.9947 &$1.6382E{-2}$ & 0.9944 \\
$160$ &$8.4072E{-3}$ & 0.9811 &$8.2081E{-3}$ & 0.9974 &$8.2082E{-3}$ & 0.9973 &$8.2063E{-3}$ & 0.9973 \\
$320$ &$4.2566E{-3}$ & 0.9819 &$4.1071E{-3}$ & 0.9988 &$4.1071E{-3}$ & 0.9989 &$4.1068E{-3}$ & 0.9987 \\
$640$ &$2.2267E{-3}$ & 0.9348 &$2.0544E{-3}$ & 0.9994 &$2.0545E{-3}$ & 0.9994 &$2.0544E{-3}$ & 0.9994 \\
$1280$ &$1.1795E{-3}$ & 0.9167 &$1.0274E{-3}$ & 0.9997 &$1.0274E{-3}$ & 0.9997 &$1.0274E{-3}$ & 0.9996 \\
$2560$ &$6.6893E{-4}$ & 0.8182 &$5.1377E{-4}$ & 0.9998 &$5.1377E{-4}$ & 0.9998 &$5.1377E{-4}$ & 0.9998 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution H1 error 1 10}
\end{table}
\begin{table}[ht]
\caption{Comparison of $\|u_h - u\|_{L^2(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate
& $\|\cdot\|_{L^2}$ & rate \\
\hline
$20$ &$4.3003E{-3}$ & &$4.2945E{-3}$ & &$4.2989E{-3}$ & &$4.2869E{-3}$ & \\
$40$ &$1.0622E{-3}$ & 2.0174 &$1.0749E{-3}$ & 1.9983 &$1.0745E{-3}$ & 2.0003 &$1.0626E{-3}$ & 2.0124 \\
$80$ &$2.6196E{-4}$ & 2.0196 &$2.6833E{-4}$ & 2.0021 &$2.6797E{-4}$ & 2.0036 &$2.6440E{-4}$ & 2.0067 \\
$160$ &$6.4952E{-5}$ & 2.0119 &$6.7047E{-5}$ & 2.0008 &$6.6872E{-5}$ & 2.0026 &$6.5876E{-5}$ & 2.0049 \\
$320$ &$1.6311E{-5}$ & 1.9935 &$1.6829E{-5}$ & 1.9942 &$1.6794E{-5}$ & 1.9935 &$1.6594E{-5}$ & 1.9891 \\
$640$ &$4.4482E{-6}$ & 1.8746 &$4.2038E{-6}$ & 2.0012 &$4.1934E{-6}$ & 2.0017 &$4.1383E{-6}$ & 2.0035 \\
$1280$ &$1.4445E{-6}$ & 1.6226 &$1.0501E{-6}$ & 2.0012 &$1.0472E{-6}$ & 2.0015 &$1.0336E{-6}$ & 2.0014 \\
$2560$ &$6.7593E{-7}$ & 1.0956 &$2.6254E{-7}$ & 1.9999 &$2.6149E{-7}$ & 2.0017 &$2.5821E{-7}$ & 2.0357 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution L2 error 1 10}
\end{table}
\begin{table}[ht]
\caption{Comparison of $\|u_h - u\|_{L^\infty(\Omega)}$ for different IFE methods with $\beta^- = 1$, $\beta^+ = 10$.}
\vspace{-5mm}
\begin{center}
\begin{footnotesize}
\begin{tabular}{|c|cc|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{Classic IFE}
& \multicolumn{2}{c|} {SPP IFE}
& \multicolumn{2}{c|} {IPP IFE}
& \multicolumn{2}{c|} {NPP IFE}\\
\hline
$N$
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate
& $\|\cdot\|_{L^\infty}$ & rate \\
\hline
$20$ &$1.0969E{-3}$ & &$1.3680E{-3}$ & &$1.3785E{-3}$ & &$1.0082E{-3}$ & \\
$40$ &$5.4748E{-4}$ & 1.0026 &$3.9775E{-4}$ & 1.7822 &$3.9769E{-4}$ & 1.7934 &$1.9172E{-4}$ & 2.3947 \\
$80$ &$5.0812E{-4}$ & 0.1077 &$1.0601E{-4}$ & 1.9077 &$1.0582E{-4}$ & 1.9100 &$5.4491E{-5}$ & 1.8149 \\
$160$ &$2.2635E{-4}$ & 1.1667 &$3.1598E{-5}$ & 1.7463 &$3.1217E{-5}$ & 1.7612 &$1.4045E{-5}$ & 1.9559 \\
$320$ &$1.2290E{-4}$ & 0.8811 &$7.0324E{-6}$ & 2.1677 &$6.9364E{-6}$ & 2.1701 &$3.5092E{-6}$ & 2.0009 \\
$640$ &$7.0810E{-5}$ & 0.7954 &$1.9288E{-6}$ & 1.8664 &$1.9213E{-6}$ & 1.8521 &$9.1942E{-7}$ & 1.9324 \\
$1280$ &$3.4111E{-5}$ & 1.0537 &$5.0505E{-7}$ & 1.9332 &$4.9869E{-7}$ & 1.9459 &$2.2932E{-7}$ & 2.0034 \\
$2560$ &$1.7815E{-5}$ & 0.9371 &$1.0199E{-7}$ & 2.3080 &$1.0075E{-7}$ & 2.3074 &$5.9784E{-8}$ & 1.9395 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\label{table: IFE solution infinity error 1 10}
\end{table}
It is known that the point-wise accuracy of the classic IFE methods in the literature is usually quite poor around the interface, and we suspect that the discontinuity of the IFE functions across interface edges is the main cause of this shortcoming. With the penalty controlling the discontinuity in IFE functions, a partially penalized IFE method has the potential to produce better point-wise approximations. To demonstrate this, we plot the errors of a classic bilinear IFE solution and an NPP IFE solution in Figure \ref{fig: infinity norm error comparison}. The IFE solutions in these plots are generated on the mesh with $80\times 80$ rectangles. From the plot on the left, we can easily see that the classic IFE solution has much larger errors in the vicinity of the interface. The plot on the right shows that the magnitude of the error in the NPP IFE solution is much smaller uniformly over the whole solution domain. This advantage is also observed for the other partially penalized IFE solutions.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{bilinear_IFE_infinity_norm_error}\quad
\includegraphics[width=0.45\textwidth]{bilinear_NPP_IFE_infinity_norm_error}
\end{center}
\caption{A comparison of point-wise errors of a classic bilinear IFE solution and a nonsymmetric partially penalized IFE solution.
The coefficient in the interface problem has a moderate jump: $(\beta^-,\beta^+) = (1,10)$.}
\label{fig: infinity norm error comparison}
\end{figure}
\begin{appendices}
\section{A proof of Remark \ref{lem:interp_error_bnd_edge_inf}}\label{remark_1}
We give a proof for the linear IFEs, and the same arguments can be used to show this error bound for
bilinear IFEs.
Without loss of generality, let $K = \bigtriangleup A_1A_2A_3$ be an interface triangle whose vertices and interface intersection points are given in \eqref{eq: triangle vertices} and \eqref{eq: triangle DE}
with $A_1 \in K^+$. Also, we only discuss $B = \overline{A_1A_2}$; the estimate on the other
interface edge can be established similarly.
By Lemma 3.3 and Lemma 3.4 in \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, for every $X \in \overline{DA_2}$, we have
\begin{align}
(I_{h}u(X) - u(X))_p &= \big(N^-(D) - N_{\overline{DE}}\big)\nabla u^-(X)(A_1-D) \pderiv{\phi_1(X)}{p}
\nonumber \\
& +I_1(X) \pderiv{\phi_1(X)}{p} + I_2(X)\pderiv{\phi_2(X)}{p} + I_3(X) \pderiv{\phi_3(X)}{p},~~p = x, y,
\label{eq:interp_error_bnd_edge_inf_1}
\end{align}
where $N^-(D)$, $N_{\overline{DE}}$, $\rho$, ${\bf n}(X)$, and ${\bf n}(\overline{DE})$ are defined as in the proof of Lemma \ref{lem:interp_error_bnd_edge}.
The quantities $I_i(X)$, $i=1,2,3,$ are given in \eqref{eq: I1} and \eqref{eq: I2I3}.
By Lemma 3.1 and Theorem 2.4 of \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, we have
\begin{align}
& \int_{\overline{DA_2}}\left(\big(N^-(D) - N_{\overline{DE}}\big)\nabla u^-(X)(A_1-D) \pderiv{\phi_1(X)}{p}\right)^2dX \leq C h^3 \norm{u}_{W^{2,\infty}(\Omega^-)}^2
\label{eq:interp_error_bnd_edge_inf_2}
\end{align}
for $p=x,y$. By direct calculations, we have, for $s = \pm$ and $i = 1, 2, 3$,
\begin{align*}
&\abs{\oderiv{\nabla u^-}{t}(tD + (1-t)X)\cdot(A_1-X)} \leq \big(\abs{u_{xx}^-(tD + (1-t)X)}
+ \abs{u_{xy}^-(tD + (1-t)X)} \\
& \hspace{1.7in} + \abs{u_{yx}^-(tD + (1-t)X)} + \abs{u_{yy}^-(tD + (1-t)X)}\big)h^2, \\
&\abs{\oderivm{u^s}{t}{2}(tA_i + (1-t)X)} \leq \big(\abs{u_{xx}^s(tA_i + (1-t)X)}
+ \abs{u_{xy}^s(tA_i + (1-t)X)} \\
& \hspace{1.7in} + \abs{u_{yx}^s(tA_i + (1-t)X)} + \abs{u_{yy}^s(tA_i + (1-t)X)}\big)h^2.
\end{align*}
By these estimates and Theorem 2.4 of \cite{ZLi_TLin_YLin_RRogers_linear_IFE}, we have
\begin{align}
\int_{\overline{DA_2}}\left(I_i(X)\pderiv{\phi_i(X)}{p}\right)^2 dX \leq Ch^3
\norm{u}_{\tW^{2,\infty}(\Omega)}^2, ~~i = 1, 2, 3. \label{eq:interp_error_bnd_edge_inf_3}
\end{align}
Then, applying \eqref{eq:interp_error_bnd_edge_inf_2} and \eqref{eq:interp_error_bnd_edge_inf_3} to \eqref{eq:interp_error_bnd_edge_inf_1}, we have
\begin{align}
\int_{\overline{DA_2}}\big((I_{h}u(X) - u(X))_p\big)^2 dX \leq Ch^3\norm{u}_{\tW^{2,\infty}(\Omega)}^2,
~~p = x, y. \label{eq:interp_error_bnd_edge_inf_4}
\end{align}
Similar arguments can be used to show
\begin{align}
\int_{\overline{A_1D}}\big((I_{h}u(X) - u(X))_p\big)^2 dX \leq Ch^3\norm{u}_{\tW^{2,\infty}(\Omega)}^2,
~~p = x, y. \label{eq:interp_error_bnd_edge_inf_5}
\end{align}
Finally, the estimate \eqref{eq:interp_error_bnd_edge_inf} on the interface edge
$B = \overline{A_1A_2}$ follows from
\eqref{eq:interp_error_bnd_edge_inf_4} and \eqref{eq:interp_error_bnd_edge_inf_5}.
\end{appendices}
\bibliographystyle{abbrv}
\section{Introduction}
Mesh networking, which is a key enabler for Internet-of-Things (IoT), is a promising solution for range extension and improved resilience. Bluetooth is a prominent short-range connectivity technology which has evolved tremendously over the last decade. Bluetooth mesh (released in 2017) \cite{BT_mesh} is the latest addition to the IoT connectivity landscape. The Bluetooth mesh specification provides a full-stack mesh networking solution for IoT. Bluetooth mesh provides a simple and efficient wireless networking solution potentially providing mesh connectivity to thousands\footnote{Bluetooth mesh can support up to 32767 nodes in a network with a maximum of 126 hops.} of nodes without any coordination. In a concurrent development, the Bluetooth 5.0 standard (released in 2016) \cite{BT_5} introduces various new features for IoT including improved data transmission modes, enhanced frequency hopping, and new Physical (PHY) layers.
Bluetooth mesh has been adopted for various IoT applications including home/building automation, smart lighting, and industrial wireless sensor networks. In the smart lighting community, Bluetooth mesh is recognized as the \emph{killer of the lighting switch} due to its distributed control capabilities. Recently, Silvair\footnote{\url{https://silvair-media.s3.amazonaws.com/media/filer_public/c8/cf/c8cfa7fb-278c-450b-89c6-6c5137ca4fc4/25_02_silvair_emc_casestudy.pdf}} has deployed one of the largest Bluetooth mesh networks comprising more than 3500 nodes spread across 17 floors of an office building.
Despite the growing success of Bluetooth mesh in industry, academic studies on its performance aspects portray a mixed picture. One simulation-based study \cite{und_perf_BT_mesh} identifies scalability as its main limitation. Another study, following an analytic approach \cite{BT_mesh_analysis}, identifies the configuration of a wide range of parameters as the main challenge. Yet another study \cite{exp_BT_mesh_1}, conducting an experimental evaluation, suggests that Bluetooth mesh is better suited to applications with sporadic traffic patterns. On the other hand, various aspects of Bluetooth mesh performance remain unexplored. One example is the latency performance of Bluetooth mesh, as most existing studies are heavily focused on reliability. The Bluetooth mesh stack offers a number of configuration options at different layers that provide the capability of perfect reliability. Hence, the latency associated with perfect reliability requires deeper investigation, particularly under different communication patterns. The role of Bluetooth 5.0 for Bluetooth mesh, as well as performance enhancement techniques, is not well-investigated either. This necessitates further experimental evaluation of Bluetooth mesh and deeper performance insights. Investigating performance trade-offs for Bluetooth mesh is crucial to assess its suitability for new near-real-time and non-real-time IoT applications. This is also important as Bluetooth-based proprietary industrial solutions (e.g., \cite{enc_TII} and \cite{IO_Link_W}) are not compatible with the Bluetooth mesh specification.
\begin{figure}
\centering
\includegraphics[scale=0.27]{Mesh_arch_stack}
\caption{System architecture and protocol stack of Bluetooth mesh. }
\label{arch_stack}
\vspace{-1.5em}
\end{figure}
\subsection{Related Work}
Yin \emph{et al.} \cite{survey_BT_mesh_5} conducted a comprehensive survey of Bluetooth mesh and Bluetooth 5.0.
Rondon \emph{et al.} \cite{und_perf_BT_mesh} investigated the performance of Bluetooth mesh, in terms of reliability, scalability, and delay, through simulations on a grid topology. Hernandez-Solana \emph{et al.} \cite{BT_mesh_analysis} adopted an analytic approach for investigating the interaction of different Bluetooth mesh parameters. Di Marco \emph{et al.} \cite{eval_BT_5_modes} conducted a simulation-based evaluation of the different data transfer modes of Bluetooth 5.0.
Some studies have conducted experimental evaluations of Bluetooth mesh. Leon and Nabi \cite{exp_BT_mesh_1} evaluated packet delivery performance under a convergecast traffic pattern where all nodes periodically generate data packets for a common destination. Baert \emph{et al.} \cite{exp_BT_mesh_2} investigated latency bounds with a varying number of hops and relays. Jürgens \emph{et al.} \cite{mesh_ILS} investigated the application of Bluetooth mesh to indoor localization through an experimental setup.
\subsection{Contributions}
Against this background and the existing literature, this paper has two main objectives. The first is to conduct an experimental evaluation of Bluetooth mesh with an emphasis on performance aspects which are not well-investigated in the literature. The second is to investigate potential enhancement/optimization techniques while providing insights into system-level performance. Our experimental evaluation aims to address the following important questions.
\begin{enumerate}
\item {What type of communication/traffic patterns are efficiently supported by Bluetooth mesh?}
\item {What is the latency performance of Bluetooth mesh for perfect reliability, under different modes of operation?}
\item {Can the performance of Bluetooth mesh be enhanced through simple techniques and parametric adjustments?}
\item {What kind of IoT applications can be supported by Bluetooth mesh?}
\end{enumerate}
We conduct experimental evaluation of Bluetooth mesh based on a testbed of Nordic nRF52840\footnote{https://www.nordicsemi.com/Software-and-Tools/Development-Kits/nRF52840-DK} devices. The key distinguishing aspects of our evaluation include different communication patterns, different modes of operation, varying traffic loads, impact of message segmentation, and latency performance for 100\% reliability.
In terms of performance optimization of Bluetooth mesh, we investigate the role of advertising/scanning parameter adjustment, extended advertisements, dynamic power control techniques, and a varying number of relays, from a system-level perspective.
\section{Overview of Bluetooth Mesh Technology}
\subsection{Protocol Stack and Architecture}
Fig. \ref{arch_stack} shows the protocol stack of Bluetooth mesh technology. Bluetooth mesh is built over the Bluetooth low energy (BLE) standard, sharing the same Physical (PHY) and Link layers. Other layers are described as follows.
The \emph{bearer layer} handles delivery of mesh PDUs at the link layer. The default bearer is the \emph{advertising} bearer that exploits BLE advertising/scanning features to transmit/receive mesh PDUs. There is a \emph{generic attribute profile (GATT) bearer} for devices not supporting Bluetooth mesh to communicate with mesh nodes. The \emph{network layer} determines how transport messages are addressed and dealt with, and decides whether to relay, process, or reject a message. The \emph{lower transport layer} defines segmentation and reassembly process for mesh PDUs. It also provides acknowledged/unacknowledged message delivery.
The \emph{upper transport layer} handles encryption, decryption, and authentication functions. It also handles control messages which are exchanged between peer nodes. The \emph{access layer} acts as an intermediary between the model layers and upper transport layer by providing the upper transport layer with a standard format.
The \emph{foundation model layer} implements models related to management of the Bluetooth mesh network.
The \emph{model layer} defines models that provide a standardized operation for typical use-cases. Models can also be custom-defined. There are three types of models: server model, client model, and control model.
\begin{figure}
\centering
\includegraphics[scale=0.28]{BT_Mesh_Adv_Scan}
\caption{Illustration of advertising and scanning procedures in Bluetooth mesh. }
\label{adv_scan}
\vspace{-1.5em}
\end{figure}
Fig. \ref{arch_stack} also shows the architecture of Bluetooth mesh. The term \emph{node} is used for any device which is part of the Bluetooth mesh network. A device becomes part of the mesh network through the provisioning process. All nodes in the mesh network are capable of transmitting/receiving messages; however, some optional features provide additional capabilities.
The \emph{relay nodes} are able to retransmit received messages over the advertising bearers. Through relaying, a message can traverse the whole multi-hop mesh network.
A \emph{low power node} (LPN) is power-constrained and must use its energy resources as efficiently as possible. LPNs operate in a mesh network with significantly reduced duty cycle.
A \emph{friend node} assists the operation of LPNs in the mesh network by storing messages for these nodes and forwarding upon request.
\subsection{Managed Flooding}
Bluetooth mesh utilizes a \emph{managed flooding} approach for communication. The managed flooding mechanism is completely asynchronous and provides a simple approach to propagate messages in the mesh network using broadcast. A message transmitted in the mesh network is potentially forwarded by multiple relay nodes.
Message transmissions primarily take place over the advertising bearer. Advertising is the process by which a source node injects its message in the mesh network. An advertising event is a cycle of advertising operations where mesh protocol data units (PDUs) are transmitted in sequence over each of the three (primary) advertising channels (i.e., channels 37, 38 and 39). At the network layer, multiple advertising events can be configured to improve the reliability of message injection. The time between two advertising events is dictated by the advertising interval (\emph{advInterval}) and a random advertising delay (\emph{advDelay}).
The relay nodes scan the advertising channels and listen to the advertising information of their neighbors. The scanning procedure is performed in scanning events that repeat after a scanning interval (\emph{scanInterval}). The probability of message propagation in the mesh network increases with multiple relay nodes scanning on different advertising channels at different times.
Managed flooding provides multi-path diversity that improves reliability. However, it can also result in increased collisions on the advertising channels. Bluetooth mesh offers various configuration options to overcome this issue, for example time-to-live (TTL) limitations, a message cache to restrict forwarding, and random back-off periods between different advertising events and between different transmissions within an advertising event. Bluetooth mesh implements a publish/subscribe messaging paradigm. Publishing refers to the act of sending a message. Typically, messages are sent to unicast, group, or virtual addresses.
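To make the relay behavior concrete, the following C sketch outlines the forwarding decision performed on every received network PDU; the function and field names are illustrative placeholders rather than the Nordic SDK API.
\begin{verbatim}
/* Illustrative relay decision for managed flooding; all names are
 * hypothetical and do not correspond to the Nordic nRF5 SDK API. */
#include <stdbool.h>
#include <stdint.h>

#define CACHE_SIZE 32u

void deliver_to_upper_layers(uint32_t pdu_hash);
void schedule_relay_advertisements(uint32_t pdu_hash, uint8_t ttl);

static uint32_t msg_cache[CACHE_SIZE];  /* recently seen PDU hashes
                                           (assumes nonzero hashes) */
static uint32_t cache_head;

static bool cache_contains(uint32_t hash)
{
    for (uint32_t i = 0; i < CACHE_SIZE; i++) {
        if (msg_cache[i] == hash) return true;
    }
    return false;
}

void on_network_pdu(uint32_t pdu_hash, uint8_t ttl, bool relay_enabled)
{
    if (cache_contains(pdu_hash)) return;  /* already seen: stop flood */
    msg_cache[cache_head] = pdu_hash;
    cache_head = (cache_head + 1u) % CACHE_SIZE;

    deliver_to_upper_layers(pdu_hash);

    /* A PDU is relayed only if TTL >= 2; the copy carries TTL - 1. */
    if (relay_enabled && ttl >= 2u) {
        schedule_relay_advertisements(pdu_hash, (uint8_t)(ttl - 1u));
    }
}
\end{verbatim}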
\subsection{Extended Advertisements}
Bluetooth 5.0 introduces extended advertising that exploits additional data channels for transmission. A source node transmits short advertising indication PDUs (on primary advertising channels) that include a pointer to a secondary advertising channel (randomly selected from the other 37 BLE channels) over which data transmission takes place. Extended advertisements enable transmission of more data than that allowed on legacy advertisements. Data transmissions on secondary advertising channels can use any of the Bluetooth 5.0 PHY layers i.e., coded (500/125 kbps) and uncoded (1/2 Mbps).
\begin{figure}
\centering
\includegraphics[scale=0.33]{Mesh_testbed1}
\caption{Testbed for experimental evaluation of Bluetooth mesh.}
\label{testbed1}
\vspace{-1.5em}
\end{figure}
\section{Experimental Evaluation and Optimization}
\subsection{Testbed Setup and Evaluation Scenarios}
Fig. \ref{testbed1} shows our testbed for experimental evaluation. We have deployed 20 nRF52840 development boards over two floors of our office building, covering approximately 600 square meters. The testbed setup depicts a challenging multi-hop mesh scenario due to the weak link between the two floors. The multi-hop network stretches over a maximum of 4 hops. The testbed nodes experience shadowing (from humans) and external interference (from Wi-Fi access points and other Bluetooth devices operating in the office environment). We use the Nordic nRF5 software development kit (SDK) for Bluetooth mesh evaluation. We have implemented the precision time protocol (PTP)\footnote{http://linuxptp.sourceforge.net/} with software timestamping to time-synchronize the testbed nodes for one-way latency measurements. The hex files required to flash the boards were built using SEGGER Embedded Studio for ARM v4.12. We use bash scripts to set up and program the boards, as well as a PHP script to capture latency/reliability results.
We consider different evaluation scenarios for Bluetooth mesh in terms of traffic patterns (one-to-many, many-to-one, and many-to-many), varying number of concurrent senders, varying message sizes with and without segmentation, and unicast and group communication modes of operation.
The default settings of key parameters in our evaluation are given in \tablename~\ref{def_params} and are used unless stated otherwise. Experiments are repeated over 100 iterations, with a new message generated every 1000 ms. In the case of many-to-many communication, the source and destination nodes are at least two hops apart.
\begin{center}
\begin{table}
\caption{Default parameters and configuration for evaluation}
\begin{center}
\begin{tabular}{ll}
\hline
Parameter & Value \\
\hline
Transmit power & 0 dBm \\
Advertising interval (\emph{advInterval}) & 20 ms \\
Scanning interval (\emph{scanInterval}) & 2000 ms \\
Relay configuration & All nodes are relays \\
No. of advertising events & 3 (source node); 2 (relay node)\\
Message size & 11 bytes (unsegmented) \\
Mode of operation & Unicast (acknowledged) \\
\hline
\end{tabular}
\end{center}
\label{def_params}
\vspace{-1.5em}
\end{table}
\end{center}
\subsection{Evaluation of Unicast and Group Communication Modes}
First, we evaluate the latency and reliability performance of the unicast and group communication modes. We consider one-to-many and many-to-one communication scenarios where one of the nodes is selected as a controller while the others act as slaves. The controller sends a command message to each slave node, which creates a one-to-many traffic pattern. Moreover, we consider the acknowledged mode for mesh messages. Hence, slave nodes acknowledge command messages from the controller, which creates a many-to-one traffic pattern. We define two different groups of nodes. The first group consists of 15 nodes (1 controller and 14 slaves) in a multi-hop mesh network spanning both floors. The second group comprises 8 nodes (1 controller and 7 slaves) as a single-hop network (left part of the first floor). In the unicast mode, the controller sends a message to each node individually. The message is acknowledged; hence, the controller keeps sending until an acknowledgement is received. In the group mode, the controller sends a group message to all slave nodes. This message is also acknowledged; however, it is only sent twice (i.e., over two advertising events). For both communication modes and groups, we measure the round-trip latency and the reliability (in terms of packet delivery) under the default set of parameters. The performance results, in terms of round-trip latency and reliability for each node, are shown in Fig. \ref{UG1} and Fig. \ref{UG2} and summarized in \tablename~\ref{t_ug}. As shown by the results, the unicast mode achieves 100\% reliability; however, it incurs higher latency. The group mode provides significantly lower latency; however, its reliability is limited by the fixed number of message transmissions.
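The controller logic in the unicast mode can be summarized by the following C sketch; \texttt{send\_unicast()}, \texttt{ack\_received()}, and \texttt{sleep\_ms()} are hypothetical helpers standing in for the corresponding SDK calls.
\begin{verbatim}
/* Acknowledged unicast: resend until the status message arrives.
 * All helper functions and the timeout are illustrative assumptions. */
#include <stdbool.h>
#include <stdint.h>

void send_unicast(uint16_t dst, const uint8_t *msg, uint8_t len);
bool ack_received(uint16_t dst);
void sleep_ms(uint32_t ms);

#define ACK_WAIT_MS 30u  /* roughly one advertising cycle */

void send_command_reliably(uint16_t dst, const uint8_t *msg, uint8_t len)
{
    do {
        send_unicast(dst, msg, len);  /* one advertising event */
        sleep_ms(ACK_WAIT_MS);        /* give the ack time to return */
    } while (!ack_received(dst));     /* repeat until acknowledged */
}

/* The group mode, by contrast, sends the group message over just two
 * advertising events with no retry loop, which explains its lower
 * latency and sub-100% reliability. */
\end{verbatim}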
\begin{figure}
\centering
\includegraphics[scale=0.28]{BT_Mesh_C_UG1}
\caption{Performance of unicast and group modes (multi-hop scenario).}
\label{UG1}
\vspace{-1 em}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.28]{BT_Mesh_C_UG3}
\caption{Performance of unicast and group modes (single-hop scenario)}
\label{UG2}
\vspace{-1 em}
\end{figure}
\begin{table}[]
\vspace{-1em}
\caption{Summary of performance in unicast and group modes}
\begin{tabular}{ccccc}
\hline
\textbf{Scenario} & \textbf{Mode} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Mean \\ Reliability\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Mean Latency \\ (round-trip)\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Max. Latency \\ (round-trip)\end{tabular}}} \\ \hline \hline
\multirow{2}{*}{Multi-hop} & Unicast & \multicolumn{1}{c}{100\%} & \multicolumn{1}{c}{152.27 ms} & \multicolumn{1}{c}{253.62 ms} \\
& \multicolumn{1}{l}{Group} & 89.6\% & 51.62 ms & 94.37 ms \\ \hline
\multicolumn{1}{l}{\multirow{2}{*}{Single-hop}} & Unicast & 100\% & 133.77 ms & 192.57 ms \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{Group} & 99.07\% & 38.57 ms & 43.45 ms \\ \hline
\end{tabular}
\label{t_ug}
\vspace{-1.5em}
\end{table}
\subsection{Latency Performance for 100\% Reliability}
Next, we evaluate the latency performance for 100\% reliability (which is given by the unicast mode) under the default set of parameters. We consider a many-to-many multi-hop communication scenario wherein either 3 concurrent senders transmit to a group of 3 distinct destinations or 7 concurrent senders transmit to a group of 7 distinct destinations. The 3 concurrent senders' scenario represents a low-to-medium traffic scenario whereas the 7 concurrent senders' scenario represents a high traffic scenario. The senders and receivers are randomly selected from the entire multi-hop mesh network in each iteration. The results of the one-way latency measurements are shown in Fig. \ref{lat_def}.
The mean latency for the many-to-many scenario with 3 concurrent senders is 24.94 ms and the 90th percentile is 50.41 ms. With 7 concurrent senders, the mean latency increases to 37.89 ms and the 90th percentile is 79.59 ms. The higher latency is due to more traffic on the advertising channels, which leads to more retransmissions.
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Lat_Def}
\caption{Latency performance for 100\% reliability (default parameters).}
\label{lat_def}
\vspace{-1 em}
\end{figure}
\subsection{Impact of Message Segmentation}
The lower transport layer applies a segmentation and reassembly procedure to messages above 11 bytes. Hence, it is important to evaluate the impact of message segmentation. Fig. \ref{seg} shows the one-way latency performance for segmented and unsegmented messages in the case of many-to-many communication with 3 concurrent senders. We have used 11-byte and 19-byte messages for the unsegmented and segmented scenarios, respectively. The mean latency for unsegmented messages is 24.94 ms with a 90th percentile of 50.41 ms. The mean latency for segmented messages increases roughly threefold to 76.32 ms, with a 90th percentile of 101.67 ms. The results reveal that the segmentation process incurs higher latency as more messages are injected into the mesh network.
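These message sizes map directly onto the lower transport layer limits: per the mesh specification, an unsegmented access message can carry an 11-byte access payload (plus a 4-byte transport MIC) in a single network PDU, whereas segmented messages carry 12 bytes of upper transport PDU per segment. The 19-byte payload used here therefore occupies $\lceil (19+4)/12 \rceil = 2$ segments, so every application message injects two network PDUs, plus a segment acknowledgement, into the advertising channels, which is consistent with the sharp latency increase observed in Fig. \ref{seg}.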
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Seg}
\caption{Latency performance of segmented versus unsegmented messages.}
\label{seg}
\vspace{-1 em}
\end{figure}
\subsection{Performance Optimization with Parameter Adjustments}
Next, we investigate performance optimization of Bluetooth mesh by simple adjustment of two key parameters: \emph{advInterval} and \emph{scanInterval}. Both parameters must be adjusted in tandem so that the advertising and scanning procedures are affected simultaneously. We consider three different combinations of (\emph{scanInterval})-(\emph{advInterval}): 2000-20 ms (default), 1000-10 ms, and 500-10 ms. The one-way latency performance for many-to-many communication with 3 and 7 concurrent senders is shown in Fig. \ref{lat_param} and further summarized in \tablename~\ref{t_param_adj}. The results show that the default advertising/scanning parameters do not provide the best latency performance. Compared to 2000-20 ms, the 1000-10 ms combination reduces the mean latency by nearly 31.5\% and 38.4\% for the cases of 3 and 7 senders, respectively. The reduction in latency is due to relatively faster advertising of messages. The 500-10 ms combination reduces latency compared to the default case; however, it increases latency compared to the 1000-10 ms combination, as the relays spend relatively less time scanning the advertising channels.
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Params}
\caption{Latency performance with different scanning/advertising parameters.}
\label{lat_param}
\vspace{-1 em}
\end{figure}
\begin{table}[]
\caption{Summary of performance with parameter adjustments.}
\begin{tabular}{cccc}
\hline
\textbf{Scenario} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Parameter \\ Combination \end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Latency \\ (Mean)\end{tabular}}} & \multicolumn{1}{c}{\textbf{\begin{tabular}[c]{@{}c@{}}Latency \\ (90th percentile)\end{tabular}}} \\ \hline \hline
\multirow{3}{*}{Many-to-Many (3)} & 2000-20 (ms) & 24.94 ms & 50.41 ms \\
& 1000-10 (ms) & 17.09 ms & 31.18 ms\\
& 500-10 (ms) & 23.22 ms & 50.69 ms \\ \hline
\multicolumn{1}{l}{\multirow{3}{*}{Many-to-Many (7)}} & 2000-20 (ms) & 37.89 ms & 79.59 ms \\
\multicolumn{1}{l}{} & 1000-10 (ms) & 23.22 ms & 50.7 ms \\
\multicolumn{1}{l}{} &500-10 (ms) &26.69 ms &56.5 ms \\
\hline
\end{tabular}
\label{t_param_adj}
\end{table}
\subsection{Performance Optimization with Extended Advertisements}
Extended advertisements are promising for latency reduction due to reduced contention on the primary advertising channels. Moreover, extended advertisements can exploit the 2 Mbps PHY layer on the secondary advertising channels, which reduces transmission time compared to the 1 Mbps PHY layer used on the primary advertising channels.
Since the next version of the Bluetooth mesh specification with extended advertisements is still in progress, we use the Nordic proprietary Instaburst (https://infocenter.nordicsemi.com/index.jsp) feature for evaluation. Instaburst uses a subset of Bluetooth 5.0 extended advertisements with the 2 Mbps PHY layer. When Instaburst is enabled, all communication in the Bluetooth mesh network takes place via extended advertisements.
We evaluate the latency of legacy and extended advertisements for 100\% reliability under the default parameters, but with larger 50-byte messages. First, we evaluate the round-trip latency of one-to-many communication with 15 nodes. The results (shown in Fig. \ref{ext1}) reveal that the mean latency of legacy advertisements is 204.88 ms with a 90th percentile of 343.5 ms. With extended advertisements, the mean latency is 76.35 ms with a 90th percentile of 184 ms. Next, we evaluate the one-way latency of many-to-many communication with 3 concurrent senders. The results (shown in Fig. \ref{ext2}) reveal that the mean latency of legacy advertisements is 120.8 ms with a 90th percentile of 135.72 ms. With extended advertisements, the mean latency is 27.28 ms with a 90th percentile of 45.67 ms. The results clearly highlight the effectiveness of extended advertisements for latency reduction in Bluetooth mesh. Specifically, the mean round-trip latency for one-to-many communication as well as the mean one-way latency for many-to-many communication reduces by up to 62\%.
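Part of this gain is structural rather than purely a PHY-rate effect: a legacy advertising PDU carries at most 31 bytes of payload, so a 50-byte access message must be segmented into multiple network PDUs on the legacy bearer, whereas a single extended advertisement can carry up to 255 bytes in one AUX PDU on a secondary channel. In addition, the 2 Mbps PHY halves the on-air time of the payload; ignoring preamble and header overhead, 50 bytes take $50 \times 8\,/\,1\,\text{Mbps} = 400\,\mu s$ at 1 Mbps but only $200\,\mu s$ at 2 Mbps, per transmission and per channel.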
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Ext_O}
\caption{Latency performance with extended advertisements (one-to-many).}
\label{ext1}
\vspace{-1 em}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Ext_M}
\caption{Latency performance with extended advertisements (many-to-many).}
\label{ext2}
\vspace{-1 em}
\end{figure}
\subsection{Performance Optimization with Power Control}
Power control is a well-known concept in wireless networks. Implementing power control in a Bluetooth mesh network provides two key benefits. The first is reduced interference to other source nodes which enables multiple transmissions at the same time. The second is higher availability of relay nodes, particularly in dense deployments, for forwarding messages from different source nodes due to reduced transmit power.
We implement a simple and widely used power control strategy wherein a source/relay node computes an optimized power level based on the received signal strength such that
\begin{equation}
\label{eq_pc}
P_{ctl}=P_{max}\cdot(P_r)^{-1}\cdot\zeta^{th}\cdot {c},
\end{equation}
where $P_{ctl}$ is the optimized transmit power, $P_{max}$ the maximum transmit power, $P_r$ is the minimum received power, $\zeta^{th}$ denotes the minimum required received
signal strength, and $c$ is a constant \cite{pwr_control1, pwr_control2}. The minimum received power is computed based on passive listening of the advertising channels. The power control strategy is implemented only if a node has overheard messages over all advertising channels.
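Since transmit power and RSSI are reported in dBm, Eq. \eqref{eq_pc} is most naturally implemented in the logarithmic domain, where the products become sums. The following C sketch shows the rule in this form; the margin $c$ and the clamping limits are illustrative assumptions rather than measured values.
\begin{verbatim}
/* Eq. (1) in the dBm domain: P_ctl = P_max - P_r + zeta_th + c.
 * The constants below are assumptions for illustration only. */
#include <stdint.h>

#define P_MAX_DBM   0    /* maximum transmit power used in the testbed */
#define P_MIN_DBM (-20)  /* assumed lowest power level of the radio    */
#define C_DB        3    /* safety margin, the constant c in Eq. (1)   */

int8_t power_control_dbm(int8_t p_r_min_dbm, int8_t zeta_th_dbm)
{
    int16_t p_ctl = (int16_t)P_MAX_DBM - p_r_min_dbm
                    + zeta_th_dbm + C_DB;

    if (p_ctl > P_MAX_DBM) p_ctl = P_MAX_DBM;  /* cap at the maximum */
    if (p_ctl < P_MIN_DBM) p_ctl = P_MIN_DBM;  /* radio lower bound  */
    return (int8_t)p_ctl;
}
\end{verbatim}
For example, with $P_r = -50$ dBm (a strong neighbor link) and $\zeta^{th} = -70$ dBm, a node would transmit at $0 + 50 - 70 + 3 = -17$ dBm, while a weak link with $P_r = -75$ dBm keeps the transmit power at the 0 dBm maximum.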
We have evaluated the impact of dynamic power control under default parameters for many-to-many communication with 3 concurrent senders and with two different values of \(\zeta^{th}\): -80 dBm and -70 dBm. The results are shown in Fig. \ref{pwc}. The mean one-way latency for fixed power (i.e., no power control) is 24.94 ms with 90th percentile of 50.41 ms. For \(\zeta^{th}\) of -80 dBm, the mean latency increases by 10\% to 27.43 ms and the 90th percentile is 54.26 ms. For \(\zeta^{th}\) of -70 dBm, the mean latency decreases by 10.5\% and the 90th percentile is 41.67 ms. The increased latency for \(\zeta^{th}\) = -80 dBm is due to more aggressive power control which potentially increases link level failures leading to more message retransmissions. The latency reduction for \(\zeta^{th}\) = -70 dBm is due to more optimized power control which not only reduces interference but also maximizes availability of relays for message forwarding from multiple source nodes. The results reveal that \(\zeta^{th}\) must be carefully selected depending on network density.
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_PC}
\caption{Latency performance with power control (many-to-many with 3 senders).}
\label{pwc}
\end{figure}
\subsection{Performance Optimization with Customized Relaying}
Finally, we evaluate the impact of the number of relay nodes in a Bluetooth mesh network. By default, all nodes in our network are configured as relays. For comparison, we define two additional scenarios: (a) half of the network is configured as relays, and (b) a quarter of the network is configured as relays. In both scenarios, the relays are randomly selected. We consider many-to-many communication with 7 concurrent senders to evaluate the impact of relaying. The one-way latency performance for 100\% reliability is shown in Fig. \ref{relay}. The mean latency for the default case with the whole network acting as relays is 37.89 ms with a 90th percentile of 79.59 ms. With half of the network configured as relays, the mean latency reduces by 25\% to 28.37 ms and the 90th percentile is 55.13 ms. With a quarter of the network configured as relays, the mean latency reduces by 22.5\% to 29.35 ms and the 90th percentile is 58.83 ms. The higher latency with quarter relays compared to half relays is due to the relatively lower availability of relays for source nodes. Nevertheless, the results reveal that an appropriate number of relays for a given network density plays an important role in latency reduction.
\begin{figure}
\centering
\includegraphics[scale=0.3]{BT_Mesh_Relay}
\caption{Latency performance of customized relaying (many-to-many with 7 senders).}
\label{relay}
\end{figure}
\section{Key Insights}
Our experimental evaluation of Bluetooth mesh and its performance optimization provides the following key insights.
\begin{itemize}
\item Bluetooth mesh can efficiently support different communication patterns in unicast as well as in group modes.
\item Bluetooth mesh can achieve perfect reliability even under high and frequent traffic; however, default parameters do not provide optimized latency performance.
\item The performance of Bluetooth mesh can be optimized through (a) simple adjustment of parameters related to advertising/scanning, (b) simple power control techniques, and (c) selecting an appropriate number of relay nodes for a given network density.
\item Extended advertisements are promising for latency reduction, particularly for transmitting larger messages.
\item The capability of Bluetooth mesh to support concurrent multicast and potential latency optimizations make it highly scalable.
\item The design flexibility of Bluetooth mesh and its ability to support bi-directional information exchange in unicast as well as group communication modes under diverse traffic patterns make it suitable for versatile control-centric and monitoring-centric IoT applications.
\end{itemize}
The observed performance enhancement through adjustment of advertising/scanning parameters builds a strong case for adaptive advertising/scanning techniques (such as those proposed in \cite{immune_patent}). Moreover, dynamic switching between extended and legacy advertisements for mixed message sizes is promising for further latency optimization. Evaluation of such aspects will be the focus of our future work.
\section{Concluding Remarks}
The addition of mesh networking capabilities to the ubiquitous Bluetooth technology is promising for IoT applications. We have conducted an experimental evaluation of Bluetooth mesh based on a testbed of Nordic nRF52840 devices in a real-world environment. Our performance evaluation not only fills gaps in state-of-the-art studies but also clarifies various issues highlighted in the literature. Moreover, it provides a number of insights into the system-level performance of Bluetooth mesh. In particular, it reveals that Bluetooth mesh can efficiently handle different communication patterns (one-to-many, many-to-one, and many-to-many) and varying traffic loads in unicast as well as group modes. The latency performance for perfect reliability can be optimized through adjustment of advertising/scanning parameters (reducing latency by more than 30\%), extended advertisements (reducing latency by more than 60\%), simple power control techniques (providing up to 10\% latency reduction), and customized relaying (providing more than 20\% latency reduction). Bluetooth mesh provides a flexible solution which can be applied to various monitoring and control applications.
The managed flooding approach also ensures transparency to the underlying network topology; hence Bluetooth mesh can also be applied to mobility-centric applications.
\bibliographystyle{IEEEtran}
\section{Introduction}
Self-trapped beams in nonlinear media, alias optical spatial solitons, have
long been the subject of intensive studies, promising new physical effects
and potential application to optic-communication systems and photonic
technologies \cite{yuribook,chen}. Common theoretical models for producing
optical solitons rely on the use of generalized nonlinear Schr\"{o}dinger
equations (GNLSEs), which usually do not admit exact solutions. Analytical
techniques, such as the inverse scattering method \cite{inverse} and B\"{a}%
cklund transform \cite{back}, apply only to exceptional integrable systems.
Numerical methods for generating soliton solutions, including the standard
relaxation algorithm and conjugate gradient method \cite{yang} are very
useful, but they do not always provide sufficient insight.
In particular, an iterative method was developed for finding numerical eigenfunctions and eigenvalues
corresponding to soliton solutions of the nonlinear Schr\"{o}dinger equation,
and applied in a variety of cases (see \cite{Matusevich2008, Matusevich2009}). As with the standard relaxation algorithm, the success of the iterative method depends on an appropriate choice of the initial guess \cite{Trofimov2010}.
To provide deeper understanding of nonlinear systems, approximate semi-analytical methods have
been developed, the most useful one being, arguably, the variational
approximation (VA) based on the Rayleigh-Ritz optimization of a trial
function, alias \textit{ansatz}, that was introduced in the context of pulse
propagation in nonlinear fiber optics in the seminal paper by Anderson \cite%
{anderson} (see also works \cite{bondeson} and \cite{reichel}). The VA makes
it possible to predict nonlinear modes with different degrees of accuracy,
depending on the choice of the underlying \textit{ansatz} \cite{boris}, in
diverse nonlinear systems. These include, in particular, photonic lattices
in nonlinear media, induced by non-diffracting beams \cite{kartashov},
dissipative media \cite{skarka,Sahoo2017}, Bose-Einstein condensates \cite%
{bec,Mihalache2005}, parity-time-symmetric lattices \cite{Hu2014}, dynamics of spatio-temporal solitons in a periodic medium \cite{Aceves1993}, and even
the prediction of complex \textit{hopfion} modes with two topological
charges (twisted vortex ring) \cite{hopfion}. The VA was also applied to
nonlocal GNLSEs \cite{konotop,Dai2017}. Furthermore, the approximate solutions predicted by the VA can be used, in the context of the above-mentioned numerical methods, as an appropriate initial distribution which accelerates the convergence to numerically exact solutions.
Coming along with assets of the VA are its technical limitations, as in many
cases it may be problematic to perform analytical integration of the
underlying Lagrangian with the substitution of the chosen ansatz, and
subsequent differentiation with respect to free parameters, aiming to derive
the Euler-Lagrange (EL)\ equations. In this work, we present an algorithm
that permits a full numerical treatment of the VA based on the Rayleigh-Ritz
optimization, overcoming its intrinsic limitations. In fact, the development
of such fully numerical approach is natural, as, in most cases, the EL
equations are solved numerically, anyway. In particular, we use the
numerically implemented VA to obtain approximate solutions for rotary multipole modes, as
well as vorticity-carrying ring-shaped solitons, in the context of the GNLSEs with various nonlinear terms, cf. Ref. \cite{hopfion}.
\section{Theoretical framework}
\subsection{Lagrangian}
Evolution equations for the complex amplitude $\Psi $ of the optical field are
derived from the action
\begin{equation}
S=\int \int \mathcal{L}{d}t{d}\mathbf{r}=\int L{d}t,
\end{equation}%
where $t$ is the evolution variable, $\mathbf{r}$ a set of transverse
coordinates, $\mathcal{L}$ the Lagrangian density and $L=\int \mathcal{L}d%
\mathbf{r}$ the full Lagrangian. As we search for soliton solutions, the field and its derivatives must vanish at the boundaries of the integration domain, which emulate spatial infinity in the numerical solution. The respective EL equation follows from the
action-minimum principle, $\delta S/\delta \Psi ^{\ast }=0$, i.e.,%
\begin{equation}
\frac{d}{dt}\left( \frac{\partial \mathcal{L}}{\partial \Psi _{t}^{\ast }}%
\right) +\sum_{r_{i}=x,y,z}\frac{d}{dr_{i}}\left( \frac{\partial \mathcal{L}%
}{\partial \Psi _{r_{i}}^{\ast }}\right) -\frac{\partial \mathcal{L}}{%
\partial \Psi ^{\ast }}=0, \label{eq:func_deriv}
\end{equation}%
where $\Psi ^{\ast }$ is the complex conjugate of $\Psi $.
In particular, the GNLSE, in the form of
\begin{equation}
i\partial _{t}\Psi +\nabla ^{2}\Psi +N(|\Psi |^{2},\mathbf{r})\Psi =0,
\label{eq:GNLSE}
\end{equation}%
with the nonlinear part $N(|\Psi |^{2},\mathbf{r})\Psi $ \cite{Anderson2001}%
, is generated by
\begin{equation}
\mathcal{L}=\frac{i}{2}(\Psi \partial _{t}\Psi ^{\ast }-\Psi ^{\ast
}\partial _{t}\Psi )+|\nabla \Psi |^{2}+\mathcal{NL}(|\Psi |^{2},\mathbf{r}),
\label{eq:L_GNLSE}
\end{equation}%
where $\mathcal{NL}(|\Psi |^{2},\mathbf{r})$ is the \textit{anharmonic} term
in the Lagrangian density which gives rise to $N(|\Psi |^{2},\mathbf{r})\Psi $
in the GNLSE.
\subsection{Numerical variational method}
We start by defining vectors $\mathbf{A}=(A_{1},A_{2},...,A_{n})$ and $%
\nabla _{\mathbf{A}}L_{\mathbf{(A)}}=(\partial _{A_{1}}L_{\mathbf{(A)}%
},\partial _{A_{2}}L_{\mathbf{(A)}},...,\partial _{A_{n}}L_{\mathbf{(A)}})$,
where $A_{1}$, $A_{2}$,..., $A_{n}$ are variational parameters, such as an
amplitude, beam's width, frequency chirp, etc. We adopt ansatz $\Psi _{(%
\mathbf{r},\mathbf{A})}$, followed by the integration of $\mathcal{L}$ to
produce the respective effective Lagrangian, $L_{\mathbf{(A)}}$, as a
function of variables $A_{n}$. As we aim to find solutions for self-trapped beams, we adopt zero boundary conditions at the boundaries of the numerical-integration domain.
The variation of $L_{\mathbf{(A)}}$ gives
rise to the system of the EL equations:
\begin{equation}
\frac{d}{dt}\left( \frac{\partial L_{\mathbf{(A)}}}{\partial \left(
dA_{n}/dt\right) }\right) -\frac{\partial L_{\mathbf{(A)}}}{\partial A_{n}}%
=0\ ,
\label{eq:EL_equation}
\end{equation}%
whose stationary version can be written as $\nabla _{\mathbf{A}}L_{(\mathbf{A%
})}=0$, with $\nabla _{\mathbf{A}}$ standing for the set of the derivatives
with respect to $A_{n}$. Generally, the stationary EL\ equations cannot be
solved analytically, therefore we apply the standard Newton-Raphson
multi-variable method, with the Jacobian replaced by the Hessian matrix,
\begin{equation}
HL_{(\mathbf{A})}=%
\begin{bmatrix}
\partial^2_{A_1} L_{\mathbf{(A)}} & \partial_{A_2} \partial_{A_1} L_{\mathbf{%
(A)}} & \dots & \partial_{A_n} \partial_{A_1} L_{\mathbf{(A)}} \\
\partial_{A_1} \partial_{A_2} L_{\mathbf{(A)}} & \partial^2_{A_2} L_{\mathbf{%
(A)}} & \dots & \partial_{A_n} \partial_{A_2} L_{\mathbf{(A)}} \\
\dots & \dots & \dots & \dots \\
\partial_{A_1} \partial_{A_n} L_{\mathbf{(A)}} & \partial_{A_2}
\partial_{A_n} L_{\mathbf{(A)}} & \dots & \partial^2_{A_n} L_{\mathbf{(A)}}%
\end{bmatrix}%
.
\end{equation}%
In the framework of this method, the iterative search for the solution is
carried out as
\begin{equation}
\mathbf{A}^{(i+1)}=\mathbf{A}^{(i)}-HL_{\mathbf{(A)}}^{-1}\cdot \nabla _{%
\mathbf{A}}L_{(\mathbf{A})}, \label{eq:iterate}
\end{equation}%
starting with an initial guess.
However, the substitution of necessarily complex \textit{ans\"{a}tze }in
Lagrangians of many nonlinear models leads to analytically intractable
integrals. Thus, neither $L_{\mathbf{(A)}}$ nor $\nabla _{\mathbf{A}}L_{%
\mathbf{(A)}}$ or $HL_{\mathbf{(A)}}$ may be known in an explicit analytical
form. This difficulty is exacerbated by working with multidimensional
settings and using, if necessary, involved coordinate systems. In this work,
we develop a way to overcome this limitation, noting that, to produce $%
\nabla _{\mathbf{A}}L_{\mathbf{(A)}}$ and $HL_{\mathbf{(A)}}$, which are
needed to apply the Newton-Raphson method, one can numerically calculate $%
L_{(\mathbf{A})}$ at multiple points in the space of variables $A_{n}$,
separated by small finite values $\Delta A_{n}$. In particular, the
derivatives can be computed as
\begin{equation}
\frac{\partial L_{\mathbf{(A)}}}{\partial A_{n}}=\int \frac{\partial
\mathcal{L_{(\mathbf{A})}}}{\partial A_{n}}d\mathbf{r}\approx \int \frac{%
\mathcal{L}_{(\mathbf{A}+\Delta A_{n})}-\mathcal{L}_{(\mathbf{A}-\Delta
A_{n})}}{2\Delta A_{n}}d\mathbf{r}=\frac{L_{(\mathbf{A}+\Delta A_{n})}-L_{(%
\mathbf{A}-\Delta A_{n})}}{2\Delta A_{n}}, \label{First_derivative}
\end{equation}%
and, similarly,%
\begin{equation}
\frac{\partial ^{2}L_{\mathbf{(A)}}}{\partial A_{n}\partial A_{m}}\approx
\frac{L_{(\mathbf{A}+\Delta A_{n}/2+\Delta A_{m}/2)}-L_{(\mathbf{A}-\Delta
A_{n}/2+\Delta A_{m}/2)}-L_{(\mathbf{A}+\Delta A_{n}/2-\Delta A_{m}/2)}+L_{(%
\mathbf{A}-\Delta A_{n}/2-\Delta A_{m}/2)}}{\Delta A_{n}\Delta A_{m}}.
\label{Double_derivative}
\end{equation}%
Note that each step in this procedure can be done numerically. While there
is freedom in the choice of $\Delta A_{n}$, it is reasonable to select their
size smaller than an estimated solution for $A_{n}$ by at least three orders
of magnitude. Thus, the algorithm can be summarized as follows:
\begin{itemize}
\item Calculate $\nabla _{\mathbf{A}}L_{\mathbf{(A)}}$ and $HL_{\mathbf{(A)}%
} $ numerically with the help of Eqs. (\ref{First_derivative}) and (\ref%
{Double_derivative}), respectively. Because $HL_{\mathbf{(A)}}$ is an $n\times n$ symmetric matrix, only $n(n+1)/2$ distinct elements actually need to be
calculated.
\item Compute new $\mathbf{A}$ according to Eq. (\ref{eq:iterate}).
\item Iterate the two previous steps until achieving specified tolerance for
the convergence of $\mathbf{A}$.
\end{itemize}
A disadvantage of this algorithm is that the trivial zero solution also
satisfies the optimization procedure, hence the iterations may converge to
zero in some cases. A simple but effective way to avoid this is to select a
new starting point with larger values of $A_{n}$.
It is also worth noting that, in the case of multistability, the algorithm can find different coexisting solutions, with the convergence to a specific one depending on the choice of the initial guess.
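As an illustration, the following minimal Python sketch applies the above steps to the two-parameter isotropic Gaussian ansatz in the Kerr medium considered below, $\psi =\sqrt{A_{1}/2}\,e^{-r^{2}/A_{2}^{2}}$; the grid, the increment $h$, and the initial guess are illustrative choices.
\begin{verbatim}
import numpy as np

lam = 0.5                      # propagation constant
x = np.linspace(-12, 12, 256)  # grid emulating zero boundary conditions
X, Y = np.meshgrid(x, x)
dxy = (x[1] - x[0])**2
R2 = X**2 + Y**2

def lagrangian(A):
    """Effective Lagrangian L(A) evaluated by quadrature."""
    A1, A2 = A
    psi = np.sqrt(A1 / 2.0) * np.exp(-R2 / A2**2)
    gx, gy = np.gradient(psi, x, x)
    dens = lam * psi**2 + gx**2 + gy**2 - 0.5 * psi**4
    return dens.sum() * dxy

def grad_hess(L, A, h=1e-4):
    """Central finite differences for the gradient and the Hessian."""
    n = len(A); g = np.zeros(n); H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        g[i] = (L(A + e) - L(A - e)) / (2 * h)
        for j in range(i, n):
            f = np.zeros(n); f[j] = h
            H[i, j] = H[j, i] = (L(A + (e + f)/2) - L(A + (f - e)/2)
                                 - L(A + (e - f)/2) + L(A - (e + f)/2)) / h**2
    return g, H

A = np.array([3.0, 1.5])       # nonzero initial guess (avoids psi = 0)
for _ in range(50):
    g, H = grad_hess(lagrangian, A)
    step = np.linalg.solve(H, g)
    A -= step
    if np.max(np.abs(step)) < 1e-10:
        break
print(A)   # expect approximately [8*lam, sqrt(2/lam)] = [4.0, 2.0]
\end{verbatim}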
\section{Numerical results for self-trapped modes}
Here we report results obtained for the GNLSEs generated by the Lagrangian
with anharmonic terms displayed in Table \ref{tab:Nonlinear_potentials}. We
start by looking for solutions for spatial solitons, in the form of
\begin{equation}
\Psi (x,y,z)=\psi (\mathbf{r})e^{i\lambda z}, \label{eq:lambda}
\end{equation}%
where $\mathbf{r}=\left\{ x,y\right\} $ are the transverse spatial
coordinates, longitudinal coordinate $z$ is the evolution variable, which
replaces $t$ in the outline of the variational method presented above, and $%
\lambda $ is a real propagation constant. The substitution of this waveform
simplifies GNLSE (\ref{eq:GNLSE}) and Lagrangian (\ref{eq:L_GNLSE}),
\begin{equation}
-\lambda \psi (\mathbf{r})+\nabla _{\perp }^{2}\psi (\mathbf{r})+N(|\psi
|^{2},\mathbf{r}) \psi (\textbf{r}) =0, \label{eq:red_GNLSE}
\end{equation}%
\begin{equation}
\mathcal{L}=\lambda |\psi |^{2}+|\nabla _{\perp }\psi |^{2}+\mathcal{NL}%
(|\psi |^{2},\mathbf{r}). \label{eq:Lagrangian_density}
\end{equation}
The iterative procedure for the solution of the optimization problem does not ensure the
conservation of the dynamical invariants, \textit{viz}., the norm and the Hamiltonian. However, this fact does
not invalidate the applicability of the procedure, similarly to the well-known imaginary-time
integration method, which provides correct stationary solutions, although the Hamiltonian is not
conserved in the course of the solution \cite{imaginary1,imaginary2}.
\begin{figure}[!htb]
\centering
\includegraphics[trim={0cm 5cm 0cm
5cm},clip,width=1\textwidth]{Images/Kerr_Vortex_m0_m1_fit_v2.pdf}
\caption{The amplitude (a) and beam's width (b), predicted by the numerical
variational procedure for the fundamental soliton ($m=0$), compared to the
results of the analytical variational approximation (see the text). The
maximum relative differences for parameters $A_{1}$ and $A_{2}$ are $%
e_{A_{1}}=1.72\times 10^{-8}\ \%$ and $e_{A_{2}}=2.40\times 10^{-8}\ \%$.
The same comparison for the amplitude (c) and width (d) of the vortex beam
with $m=1$ in the Kerr medium, with maximum relative differences $%
e_{A_{1}}=2.17\times 10^{-7}\ \%$ and $e_{A_{2}}=1.65\times 10^{-7}\ \%$. Comparison between the exact soliton shape and the variationally obtained one, using Eq. (\ref{eq:iterate}), for the fundamental (e) and the single vortex soliton (f); in both cases $\lambda=0.5$.}%
\label{fig:Kerr_Vortex_m0_m1_fit}
\end{figure}
In this work, we focus on the optimization of the following generalized
vortex ansatz with integer topological charge $m$ in two-dimensional GNLSEs
including anharmonic terms listed in Table \ref{tab:Nonlinear_potentials}:
\begin{gather}
\psi (\mathbf{r})=A_{1}^{(m+1)/2}\exp \left[ -\left( x/A_{2}\right)
^{2}-\left( y/A_{3}\right) ^{2}\right] \notag \\
\times\left[ (\cos ^{2}\epsilon )~x^{2}+(\sin ^{2}\epsilon )~y^{2}\right]%
^{m/2}\left[(\cos \delta )\cos (m\theta )+i~\left( \sin \delta \right) \sin
(m\theta )\right] , \label{Gen_Trial_function}
\end{gather}%
$\theta \equiv \arctan (y/x)$ being the angular coordinate. The ansatz may
be construed as either an asymmetric \textit{azimuthon} \cite{azimut} or
\textit{ellipticon} \cite{servando}. Here, $A_{1}$ represents the amplitude
of the anisotropic beam, while $A_{2}$ and $A_{3}$ determine its width, $%
A_{2}=A_{3}$ corresponding to the isotropic one. Parameters $\epsilon $ and $%
\delta $ additionally control the beam's asymmetry and its phase structure.
In particular, for $\epsilon =\delta =\pi /4$ the ansatz reduces to a
standard vortex beam, while at $\delta =0$ it is reduced to a multi-pole
beam.
Generic results can be adequately presented for propagation constant $%
\lambda =1/2$ in Eq. (\ref{eq:lambda}) and values of parameters $s=0.05$ and $%
k=5$, $p=30$ in Lagrangian terms (\ref{eq:Saturation}) and (\ref{eq:Bessel}%
), respectively, unless specified otherwise.
First, we reproduce known results produced by the VA for the fundamental and
first-order vortex solitons in the Kerr medium, corresponding to Eq. (\ref%
{eq:Kerr}), based on normalized ansatz $\psi
=2^{-(m+1)/2}A_{1}^{(m+1)/2}\exp \left[ -(r/A_{2})^{2}\right] r^{m}\exp
(im\theta )$ \cite{Dimitrevski}. With this ansatz, the analytical VA yields $%
A_{1}=8\lambda $ and $A_{2}=\sqrt{2/\lambda }$ for $m=0$, and $%
A_{1}=4\lambda $ and $A_{2}=2/\sqrt{\lambda }$ for $m=1$. Using the present
algorithm leads to complete agreement with these results, as shown in Fig. %
\ref{fig:Kerr_Vortex_m0_m1_fit}, with relative errors $<3\times 10^{-7}\%$.
Each of these sets of the results can be generated by a standard computer in
less than a minute.
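For the reader's convenience, this case is easy to trace by hand: substituting the normalized $m=0$ ansatz into the Lagrangian density (\ref{eq:Lagrangian_density}) with the Kerr term (\ref{eq:Kerr}) and performing the Gaussian integrals yields
\[
L_{(\mathbf{A})}=\pi \left( \frac{\lambda }{4}A_{1}A_{2}^{2}+\frac{A_{1}}{2}-
\frac{A_{1}^{2}A_{2}^{2}}{32}\right) ,
\]
and the stationary conditions $\partial L_{(\mathbf{A})}/\partial A_{1}=
\partial L_{(\mathbf{A})}/\partial A_{2}=0$ indeed reproduce $A_{1}=8\lambda $
and $A_{2}=\sqrt{2/\lambda }$.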
The general agreement between the exact soliton shape and the solution obtained by using the numerical variational approach is good, as is evident from Fig. \ref{fig:Kerr_Vortex_m0_m1_fit}(e) and Fig. \ref{fig:Kerr_Vortex_m0_m1_fit}(f) for the fundamental and single vortex soliton, respectively. In principle, more sophisticated ans\"{a}tze may provide closer agreement with numerically exact results, but in this work we focus on the basic results produced by the relatively simple ansatz.
Next, we proceed to demonstrate the utility of the algorithm for more
complex nonlinear media. We use the combination of the Kerr and first-order
Bessel-lattice terms, that is, the sum of terms given by Eqs. (\ref%
{eq:Kerr}) and (\ref{eq:Bessel}). We test the utility of the algorithm by
comparing its predictions with simulations, using the standard split-step
Fourier-transform algorithm and presenting the evolution of the intensity
profile, $|\psi |^{2}$, for a certain size of the transverse display area
(TDA).
\begin{figure}[!htb]
\centering
\includegraphics[trim={5.5cm 12.5cm 4.25cm
9cm},clip,width=0.8\textwidth]{Images/Kerr_Bessel.pdf}
\caption{Simulated propagation in the medium with the combined nonlinearity
corresponding to Eq. (\protect\ref{eq:Kerr}) + Eq. (\protect\ref{eq:Bessel}%
), initiated by inputs produced by the numerically implemented variational
approximation. (a) The vortex soliton with topological charge $m=1$ with $%
\mathbf{A_{1,2,3}}=(0.7830,2.7903,2.7903)$ and TDA $15\times 15$. (b) The
propagation of the quadrupole soliton in the medium with the combined
nonlinearity corresponding to Eq. (\protect\ref{eq:Kerr}) + Eq. (\protect\ref%
{eq:Bessel}), with $\mathbf{A_{1,2,3}}=(0.5767,3.4560,3.4560)$ and TDA $20\times 20$%
. The peak-intensity evolution is shown in the right-hand column.}
\label{fig:Saturation_KB1}
\end{figure}
Figure \ref{fig:Saturation_KB1} (a) displays the numerically simulated
propagation of the isotropic vortex soliton with $m=1$, starting from the
input predicted by the numerically implemented VA. Note that, while the
direct simulation does not preserve a completely invariant shape of the
vortex beam, the VA provides a reasonable prediction. It is relevant to
mention that the simulations do not include initially imposed azimuthal
perturbations, which may lead to breakup of the vortex ring due to the
azimuthal instability \cite{review}.
\begin{figure}[!htb]
\centering
\includegraphics[trim={5.5cm 9.5cm 4.25cm
9cm},clip,width=0.8\textwidth]{Images/Saturation_Bessel1.pdf}
\caption{The same as in Fig. \protect\ref{fig:Saturation_KB1}, but under the
action of the combined nonlinearity given by Eq. (\protect\ref{eq:Saturation}%
) + Eq. (\protect\ref{eq:Bessel}). (a) The vortex with $m=1$, $%
\mathbf{A_{1,2,3}}=(0.7860,2.7915,2.7915)$, and TDA $15\times 15$. (b) The azimuthon
with $m=2$, $\protect\delta =\protect\pi /3$, $%
\mathbf{A_{1,2,3}}=(0.63574,3.4760,3.4760)$, and TDA $20\times 20$. (c) The
asymmetric quadrupole with $s=0.2$, $\protect\lambda =0.8$, $\protect%
\epsilon =5\protect\pi /16$, $\mathbf{A_{1,2,3}}=(1.0334,3.4235,2.4701)$, and TDA $%
20\times 20$.}
\label{fig:Saturation_Bessel1}
\end{figure}
Then, we optimize a quadrupole soliton beam. The simulated propagation is
displayed in Fig. \ref{fig:Saturation_KB1}(b), where the persistence of the
VA-predicted shape is obvious. Note that the usual form of the VA cannot be
applied in this case, because of its complexity. Further, we proceed to
self-trapped beams supported by a combination of the saturable nonlinearity
and the first-order Bessel-lattice term, i.e., Eq. (\ref{eq:Saturation}) +
Eq. (\ref{eq:Bessel}), for three different \textit{ans\"{a}tze}: the vortex
beam with $m=1$, the azimuthon with $m=2$, and, finally, an elliptic
quadrupole. The two former \textit{ans\"{a}tze} carry the orbital angular
momentum due to their phase structure, which drives their clockwise rotation
in the course of the propagation, while preserving the overall intensity
shape, as shown in Fig. \ref{fig:Saturation_Bessel1}(a) and Fig. \ref{fig:Saturation_Bessel1}(b). It is
worthy to note that, in the pure saturable medium, an attempt to produce
asymmetric quadrupoles by means of the VA\ optimization does not produce any
robust mode, but, if the Bessel term (\ref{eq:Bessel}) is added, the
quadrupole develops into a noteworthy stable dynamical regime of periodic
transitions between vertical and horizontal dipole modes, as shown in Fig. %
\ref{fig:Saturation_Bessel1}(c).
\begin{figure}[th]
\centering
\includegraphics[trim={5.5cm 12.5cm 4.25cm
9cm},clip,width=0.8\textwidth]{Images/Saturation_Bessel0.pdf}
\caption{The same as in Fig. \protect\ref{fig:Saturation_KB1}, but in
the medium with the combination of the nonlinear terms corresponding to the
combination of Eqs. (\protect\ref{eq:Saturation}) + (\protect\ref{eq:Bessel}%
), with $s=0.05$, $p=15$ and $k=1$. (a) The fourth-order azimuthon with $\protect%
\delta =\protect\pi /3$, $\mathbf{A_{1,2,3}}=(0.4216,2.8725,2.8725)$, and TDA $%
20\times 20$. (b) The asymmetric quadrupole with $\protect\epsilon =5\protect%
\pi /16$, $\mathbf{A_{1,2,3}}=(1.2722,2.2057,1.3310)$, and TDA $15\times 15$.}
\label{fig:Saturation_Bessel0}
\end{figure}
Finally, we address the propagation of an azimuthon with $m=4$ and
asymmetric quadrupole, under the combined action of the saturable
nonlinearity and zeroth-order Bessel lattice, as shown in Fig. \ref%
{fig:Saturation_Bessel0}(a) and Fig. \ref%
{fig:Saturation_Bessel0}(b), respectively. The orbital angular
momentum of the modes drives their rotation in the course of the
propagation.\ In particular, the former pattern is predicted in quite a
robust form, while the latter one is transformed into the above-mentioned
regime of periodic transitions between vertical and horizontal dipole modes.
Additional results produced by the general ansatz given by Eq. (\ref%
{Gen_Trial_function}) and the VA outlined above, in the combination with
direct simulations, will be reported elsewhere.
\begin{table}[th]
\caption{Anharmonic terms in the Lagrangian}
\label{tab:Nonlinear_potentials}\centering
\begin{tabular}{p{0.20\textwidth}p{0.75\textwidth}}
\vspace{-0.5cm} Kerr &
\vbox{\begin{equation}
\mathcal{NL}(|\psi|^2,\textbf{r})= -1/2 |\psi|^4,
\label{eq:Kerr}
\end{equation}} \\
\vspace{-0.5cm} Saturation & \vspace{-0.7cm}
\vbox{\begin{equation}
\mathcal{NL}(|\psi|^2,\textbf{r})=(\ln(s|\psi|^2+1)-s|\psi|^2)/s^2,
\label{eq:Saturation}
\end{equation}} \\
\vspace{-0.45cm} $n$-th order Bessel lattice & \vspace{-0.7cm}
\vbox{\begin{equation}
\mathcal{NL}(|\psi|^2,\textbf{r}) = - p \left[ J_n{\ (k\sqrt{x^2+y^2} )}\right]^2|\psi|^2.
\label{eq:Bessel}\vspace{-2cm}
\end{equation}}%
\end{tabular}%
\end{table}
\section{Conclusion}
In this work we report an efficient algorithm for full numerical treatment
of the variational method predicting two-dimensional solitons of GNLSE
(generalized nonlinear Schr\"{o}dinger equation) with various nonlinear
terms, which arise in nonlinear optics. A general class of the solitons is
considered, including vortices, multipoles, and azimuthons. The method
predicts solutions for the self-trapped beams which demonstrate robust
propagation in direct simulations. Further work with the use of the
algorithm is expected, making use of more complex flexible \textit{ans\"{a}%
tze}, which should improve the accuracy (making the calculations more
complex, which is not a critical problem for the numerically implemented
algorithm). In particular, it is planned to apply the VA to models of
nonlocal nonlinear media, where the Lagrangian integrals become quite
difficult for analytical treatment. Another promising direction is the
application of the algorithm to three-dimensional settings, where the
analytical work is also hard.
\section*{Funding}
Consejo Nacional de Ciencia y Tecnolog\'{i}a (CONACYT) (243284); National Science Foundation (NSF) and Binational (US-Israel) Science Foundation (2015616); Israel Science Foundation (1286/17).
\end{document}
\section{Computational model}
As part of a general approach to the modelling of AGN spectra, including
the self-consistent acceleration of a hadronic component, we have
developed a numerical code capable of following the time evolution not
only of the proton, but also of the photon and electron distributions in
the source (Mastichiadis \& Kirk \cite{mastichiadiskirk95}, henceforth MK95).
To accomplish this, we assumed
each component can be represented by a spatially averaged distribution,
treating escape from the source by a simple \lq catastrophic
loss\rq\ term. The system of three time-dependent kinetic equations for
the distributions as functions of energy is then amenable to numerical
solution. The relevant physical processes taken into account for the
electron and photon distributions include synchrotron radiation, inverse
Compton scattering both in the Thomson and Klein Nishina regimes, as
well as photon-photon pair production and synchrotron self-absorption.
Other processes included in the original code, such as pair
annihilation and photon downscattering on cold electrons, turn out
to be irrelevant for the case we are considering here.
Two minor modifications must be made before this code can describe the
SSC model for blazars. Firstly, since the acceleration mechanism is not
directly addressed, it is not necessary to follow the proton
distribution. Instead, an arbitrary dependence of the electron injection
function on energy and time can be implemented. Secondly, the extremely
high flux of gamma-rays observed from blazars indicates that the source
must be Doppler-boosted. Thus, quantities such as the photon spectrum,
which are computed in the rest frame of the source, require a Lorentz
transformation into the observer's frame.
With these changes, the parameters which specify the source are as
follows:
\begin{itemize}
\item
the Doppler factor $\delta=1/[\Gamma(1-
\beta\cos\theta)]$, where $\Gamma$ and $c\beta$ are the Lorentz factor
and speed of the blob, and $\theta$ is the angle between its direction
of motion and the line of sight to the observer,
\item
the radius $R$ of
the source (in its rest frame, in which it is assumed spherical)
from which the crossing
time $t_{\rm cr}=R/c$ can be defined; the variation timescale
in the galaxy frame is then given by $t_{\rm var}=R/(\delta c)$,
\item
the magnetic
field strength $B$, specified in units of the critical magnetic
field: $b=B/(4.414\times10^{13}\,{\rm G})$.
When combined with $R$, this determines the magnetic \lq\lq compactness\rq\rq\
which can be defined in analogy with the photon compactness (MK95) as
$\ell_{\rm B}=\sigma_{\rm T} R B^2/(8\pi m_{\rm e} c^2)$,
where $\sigma_{\rm T}$ is the Thomson cross section and $m_{\rm e}$ the
electron rest mass,
\item
the electron injection spectrum, for which we take
$Q_{\rm e}=q_{\rm e} \gamma^{-s}e^{-\gamma/\gamma_{\rm max}}$ where $\gamma$ is the
electron Lorentz factor. The three scalar parameters used to specify
this distribution in the following are $s$, $\gamma_{\rm max}$ and the electron
injection compactness $\ell_{\rm e}={{1\over 3}m_{\rm e} c\sigma_{\rm T} R^2}
\int_1^\infty {\rm d}\gamma (\gamma-1) Q_{\rm e}$ (see MK95)
\item
the effective escape time $t_{\rm esc}$
of relativistic electrons, which can be identified as the timescale over
which adiabatic expansion losses limit the accumulation of relativistic
electrons within the source.
\end{itemize}
In terms of these parameters, the equation governing the evolution of
the electron distribution $n_{\rm e}$ is:
\begin{eqnarray}
{\partial n_{\rm e}(\gamma,t)\over \partial t}
+{n_{\rm e}(\gamma,t)\over t_{\rm esc}}&=&
Q_{\rm e}(n_{\rm e},n_{\gamma},\gamma,t)+L_{\rm e}(n_{\rm e},n_{\gamma},\gamma,t)
\label{ekinetic}
\end{eqnarray}
Here $L_{\rm e}$ denotes the electron loss terms (i.e., synchrotron
losses and inverse Compton scattering), while $Q_{\rm e}$
is the
injection term. In all cases discussed in this paper the contribution
of photon--photon pair production to both photon absorption and
electron injection is negligible, so that $Q_{\rm e}$ is essentially a
function of only $\gamma$ and $t$.
The normalisation used is one in which time is
measured in units of the light crossing time of the source (of size $R$)
and the particle density refers to the number contained in a volume element of
size $\sigma_{\rm T} R$. All
quantities are measured in the rest frame of the blob.
The corresponding equation for the photon distribution $n_{\gamma}$ reads:
\begin{eqnarray}
{\partial n_{\gamma}(x,t)\over \partial t}
+n_{\gamma}(x,t)
&=&Q_{\gamma}(n_{\gamma},n_{\rm e},x,t)+L_{\gamma}(n_{\gamma},n_{\rm e},x,t)
\label{gkinetic}
\end{eqnarray}
where here $Q_{\gamma}(n_{\gamma},n_{\rm e},x,t)$
represents the source terms for photons
of frequency $\nu=x m_{\rm e} c^2/h$ (i.e., synchrotron radiation and
inverse Compton scattering) and
$L_{\gamma}(n_{\gamma},n_{\rm e},x,t)$ the loss term, which arises from
photon-photon pair production and is
negligibly small in the current application.
The source and loss terms are discussed
in detail in MK95.
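Although the actual code treats all the processes listed above, the structure of the solver can be conveyed by a stripped-down version of Eq. (\ref{ekinetic}) that retains only injection, escape and synchrotron cooling. The following Python sketch (with purely illustrative grid, time step and parameter values) time-steps it to a steady state:
\begin{verbatim}
import numpy as np

ng = 200
gamma = np.logspace(0.0, 6.0, ng)       # electron Lorentz factors
t_esc = 3.3                             # escape time in units of t_cr
lB = 1e-4                               # magnetic compactness
s, gmax, qe = 1.7, 2e5, 1e-5            # injection parameters
Q = qe * gamma**(-s) * np.exp(-gamma / gmax)

n = np.zeros(ng)
dt = 2e-4                               # small enough for stability
for _ in range(int(10 / dt)):           # evolve for 10 crossing times
    gdot = (4.0 / 3.0) * lB * gamma**2  # synchrotron loss rate
    flux = gdot * n
    div = np.zeros(ng)                  # upwind d(gdot*n)/dgamma
    div[:-1] = (flux[1:] - flux[:-1]) / (gamma[1:] - gamma[:-1])
    n += dt * (Q - n / t_esc + div)
# n now approximates the steady state, with the injected power law
# steepened by one power of gamma above the cooling break.
\end{verbatim}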
Using the above parameters in the code it is possible
to fit to the multiwavelength spectrum as reported
in Macomb et al.~(\cite{macomb95}, \cite{macomb96}) during both the quiescent and the
flaring stages.
\begin{figure}[t]
\epsfxsize=10.2 cm
\epsffile{quiet.ps}
\vspace{-2.5 cm}
\caption{\protect\label{quiet}
The predicted and observed emission from Mkn~421 during its quiescent state.
Squares indicate observations during the quiescent phase. Triangles are
observations during the flaring state
(Macomb et al.~\protect\cite{macomb96}).
For the adopted parameters see text.}
\end{figure}
\section{Fits to the multiwavelength spectrum of Mkn 421}
\subsection{The quiescent state}
The seven parameters $\ell_{\rm e}$, $s$, $\gamma_{\rm max}$, $\delta$, $R$, $b$,
and $t_{\rm esc}$ are strongly constrained by the observations. In the case of
Mkn~421, both the X-ray emission and the TeV emission are rapidly
variable (see Macomb et al.~\cite{macomb95},
Takahashi et al.~\cite{takahashietal96},
Schubnell et al. \cite{schubnelletal96}).
Radio frequency photons, however, exhibit a quiescent
spectrum. Since the time required for electrons to cool by synchrotron
emission to energies at which they emit substantial numbers of
infra-red photons is excessively long, we
assume that either adiabatic expansion
of the blob or escape from the source intervenes to limit the time over
which a given electron radiates in this frequency range. Then the
observed infra-red to radio spectral index $\alpha\approx 0.35$ is directly
related to $s$ by the well-known formula $\alpha=(s-1)/2$.
Although variable, the general form of both the X-ray and gamma-ray
spectra indicates that in the SSC model synchrotron photons are emitted
up to a maximum frequency of approximately $10^{18}\,$Hz while inverse Compton
photons are present up to at least $10^{27}\,$Hz. Denoting these
frequencies by $\nu_{{\rm s,}18}\times10^{18}\,$Hz and
$\nu_{{\rm c,}27}\times10^{27}\,$Hz, respectively, we have
\begin{eqnarray}
\delta b\gamma_{\rm max}^2\approx 10^{-3}\nu_{{\rm s,}18}
\label{sbreak}
\\
\delta\gamma_{\rm max}\approx 3\times 10^{6} \nu_{{\rm c,}27}
\label{cbreak}
\end{eqnarray}
(MK95).
Eq.~(\ref{cbreak}) assumes that photons of frequency $\nu_{{\rm c,}27}$
are produced by inverse Compton scatterings in the Klein-Nishina regime.
From these expressions
we can immediately deduce $\gamma_{\rm max}$ and
the magnetic field strength in terms of the (as yet undetermined)
Doppler factor $\delta$:
\begin{eqnarray}
\gamma_{\rm max}&=& 3\times 10^6 \nu_{{\rm c,}27} \delta^{-1}
\\
B&=& 5\times 10^{-3} \delta\nu_{{\rm s,}18}\nu_{{\rm c,}27}^{-2} \ {\rm gauss}
\label{bequation}
\enspace.
\end{eqnarray}
The next step is to relate the observed
bolometric nonthermal flux $F$ of the
source to the power injected in relativistic electrons.
Using the normalisation of MK95
we require the injected electron compactness $\ell_{\rm e}$
in the rest frame of the source to be
\begin{eqnarray}
\ell_{\rm e}
&=&
{{3\sigma_{\rm T}FD_{\rm L}^2}\over{\delta^4 R m_{\rm e} c^3}}
\label{elcomp}
\end{eqnarray}
where $D_{\rm L}$ is the luminosity distance to the source
given by $D_{\rm L}=2c[z+1-(z+1)^{1/2}]/H_0$ for a $q_0=1/2$,
$\Lambda=0$ cosmology; for the Hubble constant we assumed
$H_0=75~h~{\rm{km}~s^{-1}~Mpc^{-1}}$ and the redshift of Mkn~421
is $z=0.0308$.
Observations of Mkn~421 indicate that in the SSC model, the flux
of synchrotron photons (i.e., those of frequency $<\nu_{{\rm s,}18}$) is
comparable to the flux of inverse Compton photons (those with
frequency $>\nu_{{\rm s,}18}$).
In the rest frame of the blob, this implies
approximate equality of the two compactnesses $\ell_{\rm e}$ and $\ell_{\rm B}$ since
these quantities are proportional to the internal photon and magnetic
energy density respectively.
Writing $\eta=\ell_{\rm e}/\ell_{\rm B}$ we have
\begin{eqnarray}
R&=&3 \times 10^{19} \ell_{\rm e}\eta^{-1} B^{-2}\ {\rm cm}
\label{rle}
\end{eqnarray}
with $B$ in gauss. Using Eq.~(\ref{elcomp}) and the observationally derived
relation $FD^2_{\rm L}\approx 6\times 10^{44}$ erg/sec,
we obtain from Eq.~(\ref{rle})
\begin{eqnarray}
R\simeq1.2\times 10^{18}\left(
{{FD^2_{\rm L}}\over {6\times 10^{44}~\rm
{erg/sec}}}\right)^{1/2}
\eta^{-1/2}B^{-1}\delta^{-2}~\rm {cm}.
\label{radius}
\end{eqnarray}
The observation that
the synchrotron spectrum steepens at a frequency
somewhere between millimeter and optical wavelengths
(Brodie, Bowyer \& Tennant~\cite {brodieetal87})
enables one to estimate the escape time
first by calculating an effective Lorentz factor $\gamma_{\rm br}$ below
which escape is faster than cooling
\begin{eqnarray}
\gamma_{\rm br}\simeq {3\over{4\ell_{\rm B}t_{\rm esc}}}
\end{eqnarray}
and then relating this to the turnover frequency by
\begin{eqnarray}
\nu_{\rm b}\simeq 1.3\times 10^{20}\delta b\gamma_{\rm br}^2.
\end{eqnarray}
This approach gives an escape time
\begin{eqnarray}
t_{\rm esc}&=&8.9\times 10^{14}\delta^{1/2}B^{-3/2}R^{-1}\nu_{{\rm b,}15.3}^{-1/2}
\label{break}
\end{eqnarray}
expressed in units of $t_{\rm cr}$. Here $\nu_{{\rm b,}15.3}=\nu_{\rm
b}/10^{15.3}\,{\rm Hz}$ is the
frequency of the spectral break.
Finally, the rapid variability of the
X-ray and TeV emission
on timescales $t_{\rm var}$ of $10^5\,$secs constrains the overall size of the
source and the Doppler boosting factor. Thus using $t_{\rm var}=R/(c\delta)$
and eliminating $B$ between Eqs.~(\ref{bequation}) and (\ref{radius})
to find the value of $R$, we deduce
\begin{eqnarray}
\delta= 15t_5^{-1/4}
\label{dopplimit}
\end{eqnarray}
where $t_5=t_{\rm var}/(10^5\,{\rm s})$ and we have inserted canonical
values for the observed quantities.
Following the above guidelines one can
allow the code to reach a stationary state and
find detailed fits to the
\lq quiescent\rq\ multiwavelength spectrum of Mkn~421 (Macomb et
al.~\cite{macomb95}, \cite{macomb96}). With the exception of the slope of injected
electrons $s$, all other parameters can be expressed in terms of the
Doppler factor $\delta$. Thus, we first use the
approximate observed quantities $\nu_{{\rm s,}18}$, $\nu_{{\rm c,}27}$, $FD^2_{\rm
L}$, $\alpha$, $\nu_{{\rm b,}15.3}$, and the observed approximate
equality of $\ell_{\rm e}$ and $\ell_{\rm B}$, to fit the observed
quiescent spectrum and subsequently adjust $\delta$ to fit the observed
variation time $t_5$.
We find a satisfactory fit using
the parameters $R=4.7\times 10^{16}\delta_{15}^{-3}$ cm,
$B=0.07\delta_{15}$ gauss, $\gamma_{\rm max}=2\times 10^5\delta_{15}^{-1}$, $s=1.7$,
$\ell_{\rm e}=1.93\times 10^{-5}\delta_{15}^{-1}$ and
$t_{\rm esc}=3.3t_{\rm cr}\delta_{15}^2$. Here $\delta_{15}=\delta/15$.
Fig.~\ref{quiet} shows the calculated quiescent spectrum for
$\delta_{15}=1$.
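The above chain of estimates is easily scripted; the following Python fragment (assuming $\eta =1$, $\nu _{{\rm s,}18}=\nu _{{\rm c,}27}=\nu _{{\rm b,}15.3}=t_{5}=1$, and ignoring the order-unity factors absorbed in the final fit) reproduces these scalings:
\begin{verbatim}
import math

nu_s18 = nu_c27 = t5 = eta = 1.0   # canonical observed quantities
FD2 = 6e44                         # F * D_L^2 in erg/s

delta = 15.0 * t5**-0.25                      # variability constraint
gmax = 3e6 * nu_c27 / delta                   # Compton cutoff
B = 5e-3 * delta * nu_s18 / nu_c27**2         # synchrotron cutoff, gauss
R = 1.2e18 * math.sqrt(FD2/6e44/eta) / (B * delta**2)   # cm
t_esc = 8.9e14 * math.sqrt(delta) * B**-1.5 / R         # units of t_cr
print(delta, gmax, B, R, t_esc)
# -> 15, 2e5, 0.075 G, ~7e16 cm, ~2.4 t_cr; the fitted values quoted
#    above (R = 4.7e16 cm, t_esc = 3.3 t_cr) are of the same order.
\end{verbatim}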
\begin{figure}[t]
\epsfxsize=10.2 cm
\epsffile{flare1.ps}
\vspace{-2.5 cm}
\caption{\protect\label{qinjplot}
Evolution of the multiwavelength spectrum of Mkn 421 in the case where the
amplitude of the electron injection changes impulsively by a factor of 3.
The solid line corresponds to the quiescent spectrum of Fig.~1. The short
and long dashed
lines show the spectrum at 1 and $2t_{\rm var}$ after the change in the electron
injection (corresponding to 1.2 and 2.4 days respectively).
The dotted line shows the new steady state which, however, is
achieved many \protect$t_{\rm var}$ later.}
\end{figure}
\begin{figure}
\epsfxsize=10. cm
\epsffile{lightc1.ps}
\caption{\protect\label{lightqinj}
Plot of the flux at various frequencies (normalised to its quiescent value)
for the flare that corresponds to a change in \protect$q_{\rm e}$ by a factor of 3
over the quiescent state. The dotted line corresponds to a wavelength of 0.4 cm,
the small dash line to optical wavelengths, the large dash line to 2-10 keV X-rays,
the small dot-dash line to .1-30 GeV \protect$\gamma$-rays while the large dot-dash line
to > 500 GeV $\gamma$-rays.}
\end{figure}
\begin{figure}[t]
\epsfxsize=10.2 cm
\epsffile{flare2.ps}
\vspace{-2.5 cm}
\caption{\protect\label{gmaxplot}
Evolution of the multiwavelength spectrum of Mkn~421 in the case where the
maximum energy of the electrons \protect$\gamma_{\rm max}$ changes impulsively by a
factor of 5.
The solid line corresponds to the quiescent spectrum of Fig.~1. The short
and long dashed
lines show the spectrum at 1 and 2 \protect$t_{\rm var}$ after the change in the electron
injection (corresponding to 1.2 and 2.4 days respectively).
The dotted line shows the new steady state.}
\end{figure}
\begin{figure}
\epsfxsize=10. cm
\epsffile{lightc2.ps}
\caption{\protect\label{lightgmax}
Plot of the flux at various frequencies (normalised to its quiescent value)
for the flare that corresponds to a change in \protect$\gamma_{\rm max}$ by a factor of 5.
The dotted line corresponds to a wavelength of 0.4 cm,
the small dash line to optical wavelengths, the large dash line to 2-10 keV X-rays,
the small dot-dash line to .1-30 GeV \protect$\gamma$-rays while the large dot-dash line
to > 500 GeV $\gamma$-rays.}
\end{figure}
\begin{figure}[t]
\epsfxsize=10. cm
\epsffile{slopeint.ps}
\vspace{-1 cm}
\caption{\protect\label{slopeint}
Plot of the 2-10 keV photon index as a function of the flux at the same
energy band that corresponds to a change in \protect$\gamma_{\rm max}$ by a factor of 5.
The dotted line corresponds to an impulsive change in \protect$\gamma_{\rm max}$,
while the full and dashed lines correspond to a more gradual change
with the change completed in the first case in one
\protect$t_{\rm var}$ and in $2t_{\rm var}$ in the second.
The triangles and squares indicate the values at 1 and
$2t_{\rm var}$ respectively.}
\end{figure}
\begin{figure}
\epsfxsize=10. cm
\epsffile{slopetev.ps}
\vspace{-1 cm}
\caption{\protect\label{slopetev}
Plot of the >500 GeV photon index as a function of the flux at the same
energy band that corresponds to a change in \protect$\gamma_{\rm max}$ by a factor of 5.
The dotted line corresponds to an impulsive change in $\protect\gamma_{\rm max}$,
while the full and dashed lines correspond to a more gradual change
with the change completed in the first case in one
$t_{\rm var}$ and in $2t_{\rm var}$ in the second.
The triangles and squares indicate the values at 1 and
$2t_{\rm var}$ respectively.}
\end{figure}
\subsection{The flaring state}
In the present model, the simplest way to explain the flaring activity
of Mkn~421 reported by Macomb et al. (\cite{macomb95})
is to change the electron
parameters $q_{\rm e}$ and/or $\gamma_{\rm max}$ or to change the magnetic field.
The first two can be understood as sudden variations in the acceleration
mechanism while the third corresponds to the blob entering a region of
higher field strength.
\subsubsection{Flares due to changes in $q_{\rm e}$}
Fig.~\ref{qinjplot} shows how a flare evolves in the case in which
the amplitude of the electron injection spectrum
$q_{\rm e}$ is suddenly increased by a factor of 3 above the quiescent
value found in the previous section and then left constant.
The short and long dashed lines correspond to snapshots of the
spectrum after 1 and $2t_{\rm var}$, respectively, measured from
the time at which $q_{\rm e}$ changed ($t_{\rm var}\simeq 1.2$ days).
The dotted line corresponds to the new steady state, which is
achieved many days later. It is evident that a change in $q_{\rm e}$
produces prompt flaring activity in the UV/X-ray and TeV $\gamma-$ray
bands.
This can be attributed to the fact that high energy electrons have shorter
cooling times than low energy ones. Therefore the newly injected
high amplitude
electron distribution cools first at the high frequency end,
producing
synchrotron X-rays and inverse Compton TeV gamma-rays.
The lower frequency photons (optical synchrotron and GeV gamma-rays)
lag behind somewhat. This can be better seen in
Fig.~\ref{lightqinj}, which shows the lightcurves of various frequency
bands as a function of time (the light travel time across the emitting
region has not been taken into account).
The X-rays have the fastest
response while the optical shows a slower response.
Similarly TeV radiation has a faster response than GeV radiation.
Note also that the amount by which the synchro-Compton component
increases is larger than that by which the
synchrotron component rises, since the former varies approximately
as $q_{\rm e}^2$ whereas
the latter is roughly proportional to $q_{\rm e}$.
\footnote{In the present
model self-consistent
electron cooling is included by both synchrotron and inverse Compton losses.
This breaks the simple linear dependence of synchrotron flux on $q_{\rm e}$.}
However, this holds only if the
change in $q_{\rm e}$ lasts long enough for a new quasi steady state to be
established. If, for example, the electron injection is turned off after
one crossing time, this effect will not be observed.
Finally, it can be seen from
Fig.~\ref{qinjplot}, that although the predicted flare
matches the observed increase in amplitude as observed by Whipple,
it does not reproduce the hard X-ray spectral shape.
In addition, the model underestimates the high X-ray
flux by at least one order of magnitude.
\begin{figure}[t]
\epsfxsize=10.2 cm
\epsffile{flare3.ps}
\vspace{-2.5 cm}
\caption{\protect\label{bfplot}
Evolution of the multiwavelength spectrum of Mkn~421 in the case where the
magnetic field strength changes impulsively by a factor of 3 and is then
left constant.
The solid line corresponds to the quiescent spectrum of Fig.~1. The short
and long dashed
lines show the spectrum at 1 and 2 \protect$t_{\rm var}$
after the change in the magnetic
field (corresponding to 1.2 and 2.4 days respectively).
The dotted line shows the new steady state.}
\end{figure}
\begin{figure}
\epsfxsize=10. cm
\epsffile{lightc3.ps}
\caption{\protect\label{lightbf}
Plot of the flux at various frequencies (normalised to its quiescent value)
for the flare that corresponds to a change in magnetic field strength by a factor of 3.
The dotted line corresponds to a wavelength of 0.4 cm,
the small dash line to optical wavelengths, the large dash line to 2-10 keV X-rays,
the small dot-dash line to .1-30 GeV \protect$\gamma$-rays while the large dot-dash line
to > 500 GeV $\gamma$-rays.}
\end{figure}
\subsubsection{Flares due to changes in $\gamma_{\rm max}$}
The evolution of a flare which corresponds to an impulsive increase
of $\gamma_{\rm max}$ by a factor of 5
is shown in Fig.~\ref{gmaxplot}. Once again
the short and long dashed lines correspond to snapshots of the
spectrum after 1 and 2~$t_{\rm var}$, respectively, as measured from the
time of the sudden change
in $\gamma_{\rm max}$. The final steady state is shown as a dotted line
and it is reached after roughly $3t_{\rm var}$. In contrast to the
previous case, the predicted
flaring activity due to an increase in $\gamma_{\rm max}$
gives a good fit both in X-rays and TeV $\gamma-$rays.
Figure~\ref{lightgmax} displays the light curves predicted in
frequencies from infra-red to hard gamma-rays.
The evolution of the flare is very different from the previous case
as here the source exhibits
an outburst only in X-rays and TeV $\gamma-$rays while
the other frequencies remain practically unaffected.
This behaviour was also found by Marscher \& Travis
(\cite{marschertravis96}) and it was suggested by
Macomb et al.~(\cite{macomb95})
as a possible interpretation of their data.
Note that the variability in the GeV range is of the same
order as the size of the error bars.
Furthermore we find that, contrary to the previous case, the X-ray flare
is of higher amplitude than the TeV one.
From Fig.~\ref{lightgmax} it is also evident that in this case
the outburst is strongest in the X-ray regime.
In fact, there are marked changes in the spectral index of the X-rays
during a flare. This is displayed in Fig.~\ref{slopeint}, which plots
the $2$--$10\,$keV spectral index as a function of flux in this energy
range. Three different flares are plotted, which differ in the time
scale over which $\gamma_{\rm max}$ is increased: first of all impulsively
(dotted line), then on a time scale of $t_{\rm var} $ (solid line)
and then $2t_{\rm var}$ (dashed line). In each case
a spectral hardening in X-rays with increasing flux is
predicted, in qualitative agreement
with recent ASCA observations (Takahashi et al.~\cite{takahashietal96}).
A similar effect can be observed in the hard gamma-ray
($>500\,$GeV) range. This is
shown in Fig.~\ref{slopetev}.
\subsubsection{Flares due to changes in $B$}
As a final example, we present a flare caused by a sudden increase
in the strength of the magnetic field.
Fig.~\ref{bfplot} depicts the spectral
evolution that corresponds to an impulsive increase
of the magnetic field strength by a factor of 3
and then left there. In this case,
the whole synchrotron spectrum is shifted to the right by the same factor and
is also boosted in intensity, as might be expected.
The synchro-Compton component, on the
other hand, is reduced compared to its original
steady state value. The reason is that the ratio
$\ell_{\rm e}/\ell_{\rm B}$ is reduced when the field increases,
so that the total luminosity (which is held constant)
is redistributed towards the synchrotron component. This behaviour can also
be seen in Fig.~\ref{lightbf} where one can observe a fast increase in the
flux of low frequency bands along with a decrease of the flux in the
$\gamma$-ray bands. This result contrasts strongly with
similar investigations by Bloom \& Marscher~(\cite{bloommarscher96}).
There is, however, no discrepancy, since these authors consider
a change in the magnetic field whilst holding the electron
{\em distribution} constant. In our case, we keep the electron
injection (and hence the total luminosity) constant.
\section{Discussion}
In the present paper we obtained fits to the X-ray/TeV flares
observed during the 1994 multiwavelength campaign of Mkn~421
in the context of the homogeneous synchro-Compton model.
This model assumes that the most important photon targets for
inverse Compton scattering by relativistic electrons are the
synchrotron photons they themselves produce. In the case of BL~Lac
objects such as Mkn~421, this assumption may be justified, since there
is no evidence of a more important photon component. However, this may
not be the case in other sources, where photons from an accretion disk
(Dermer, Schlickeiser \& Mastichiadis~\cite{dermeretal92}) or photons
scattered from broad line clouds (Sikora, Begelman \& Rees~\cite{sikoraetal94})
may dominate.
Nevertheless, a combination of synchrotron
radiation and inverse Compton scattering on external photons
(Dermer, Sturner \& Schlickeiser
~\cite{dermeretal96})
may be useful in modelling results such as those of the multi-wavelength
campaign on 3C273 (Lichti et al.~\cite{lichtietal95}).
Whether a homogeneous synchro-Compton model such as presented here
can explain these or similar observations (e.g., those of 3C279: Maraschi et al.~\cite{maraschietal94},
Hartman et al.~\cite{hartmanetal96}) is currently under
investigation.
We obtain the full time dependent behaviour of flares by fitting the
`quiescent' spectrum of the source and then varying one of the free
parameters. Three simple ways of producing flares were investigated:
(i) by changing the amplitude of the electron injection spectrum, (ii)
by changing the maximum energy of the electron injection spectrum and
(iii) by changing the magnetic field strength. We found a good fit to
the observations using a flare of type (ii). This produces changes only
in the X-ray and TeV bands, leaving all the other bands essentially
unaffected. It also reproduces qualitatively the observed hardening of
the X-rays with increasing intensity (Takahashi et al.~\cite{takahashietal96}).
We also found that X-rays are first to react to any change
in the electron injection. This is particularly pronounced for flares of
type (i) and (ii) (see Figs.~\ref{lightqinj} and \ref{lightgmax}).
A change in the acceleration parameters is tracked more closely by
X-ray photons than by photons in other wave-bands, since
the X-ray producing electrons
have the highest energy and, therefore, the
fastest cooling timescale.
The parameters we find for the fits are similar to those found
by other authors (see, for example, Marscher \&
Travis~(\cite{marschertravis96}),
Sambruna, Maraschi \& Urry~(\cite {sambrunaetal96})).
The fast time variability of Mkn~421 ($\simeq$1 day) implies
a Doppler factor of $\simeq$15. However,
a faster variation can easily be accommodated in our model, since
new fits with a larger Doppler factor can be obtained by simply scaling
the parameters as indicated in Sect.~3. One potentially independent
constraint on the model is provided by the effects of synchrotron
self-absorption. At the lower radio frequencies, the absorption
turnover can be seen in Figs.~\ref{quiet}, \ref{qinjplot}, \ref{gmaxplot}
and \ref{bfplot}. For higher Doppler factors,
this effect should disappear, since the required luminosity is then
provided by a lower value of $\ell_{\rm e}$. As a result, the electron
column density of the source is reduced. However, the homogeneous SSC
model presented here is so compact that the effects of induced Compton
scattering can also be expected to manifest themselves in the radio
range (Coppi, Blandford \& Rees~\cite{coppietal93}, Sincell \&
Krolik~\cite{sincellkrolik94}).
A simple estimate of the importance of this effect can be obtained
by evaluating the parameter $\tau_{\rm ind}=
N_{\rm e}\sigma_{\rm T} R T_{\rm br}/(m_{\rm e} c^2)$,
where $T_{\rm br}$ is the
brightness temperature of the radiation in
energy units and $N_{\rm e}$ is the total electron density in the source.
Even if we assume that no thermal electrons are present, this parameter exceeds
unity for frequencies less than roughly $700\,$MHz, given the
parameters of our quiescent model. Thermal electrons, however,
accumulate within the source,
although we have not calculated their density explicitly. Consequently,
a modification of the simple synchrotron spectrum at gigaherz frequencies
may be possible.
Our calculations predict the time-dependent flaring activity to be
expected given certain idealised fluctuations in the injection spectrum
of high energy electrons into a relativistically moving blob.
Although we do not address a specific acceleration mechanism, it is possible to
interpret the overall picture as one in which a shock front rapidly
accelerates electrons and leaves a trail of them (i.e., injects them)
in the plasma streaming away from the shock on its downstream side. In
our model, we assume the typical dimension of the radiating part of the
downstream plasma in its rest frame is $R$. The photon escape time is
$t_{\rm cr}$, which we find to be roughly one third of the time taken for
electrons to cross the emitting region. Thus, assuming electrons are
carried along by the plasma, our picture would indicate rapid ($\sim
c/3$) movement of the downstream plasma away from the shock front. In a
more realistic model, the spectrum of injected electrons should also
be calculated in a time-dependent manner. In fact, the value of
$\gamma_{\rm max}$ itself should probably be determined by a balance between
acceleration and losses in a self-consistent model. This is possible in
a hadronic model (e.g., MK95), however, self-saturation of accelerated
protons requires a very high photon density, which would render the
source opaque to TeV gamma-rays. On the other hand, it is possible to
imagine a model in which protons saturate at extremely high energy
(e.g., Mannheim~\cite{mannheim93}), and inject electrons which
produce the entire emitted spectrum by the synchrotron mechanism. The
time-dependent properties of flares from such a model would, however,
differ strongly from those found above. A detailed investigation of
electron acceleration in the presence of losses (Heavens \&
Meisenheimer~\cite{heavensmeisenheimer87}) has so far been performed
only in the steady state, and without considering either bulk
relativistic motion or inverse Compton scattering.
Another simplification we have introduced comes from the fact that we
have used a spherical homogeneous model (for a discussion of the
problems inherent with this model see, for example, Bloom \& Marscher
\cite{bloommarscher96}). However, this allows us to understand better
the significance of the various physical quantities we are using and,
at least for the case of Mkn~421, it proved adequate to fit the
multiwavelength spectrum. On the other hand, inhomogeneous
models might be superior to homogeneous ones in the sense that they give
better overall fits to AGN spectra; however, they introduce a number of
extra parameters, making a simple understanding of the results difficult.
\acknowledgements
We would like to thank an anonymous referee who helped us
clarify many of the issues presented here.
AM thanks the Deutsche Forschungsgemeinschaft for support under
Sonderforschungsbereich 328.
\section{Introduction}
For two sets $A,B\subseteq \Z_n$, we let $A\cdot B = \set{ab \pmod n \mid a\in A,b\in B}$. For a set $A$ and a relation $\circ$, $A^{\circ b}$ is defined as the set $\{a \in A \mid a\circ b\}$.
A subset $B$ of ring $\Z_n$ is called a \emph{$\ell$-covering set} if $\Z_n^{\leq \ell}\cdot B=\Z_n$. Let $f(n,\ell)$ be the size of the smallest $\ell$-covering set of $\Z_n$. Equivalently, we can define a \emph{segment} of slope $i$ and length $\ell$ to be $\{ ix \pmod n \mid x\in \Z_n^{\leq \ell} \}$, and we are interested in finding a set of segments that covers $\Z_n$.
$\ell$-covering sets are useful in flash-storage-related code design, including correcting limited-magnitude errors \cite{6151153, Klove.S2014, Klove.2016} and designing memory applications for flash storage \cite{6476065}. Since we can \emph{compress} a segment by dividing everything by its slope, algorithms whose running time depends on the size of the numbers in the input can be sped up. The first significant improvement to modular subset sum was through partitioning by $\ell$-covering \cite{Koili.X2019}. There are also generalizations to $\Z^d_n$ \cite{Klove.2016}.
The major question is finding the right bound for $f(n,\ell)$. The trivial lower bound is $f(n,\ell) \geq \frac{n}{\ell}$. On the upper bound side, there are multiple studies where $\ell$ is a small constant, or $n$ has a lot of structure, like being a prime number or satisfying certain divisibility conditions \cite{6151153, Klove.S2014, Klove.2016}. A fully general non-trivial upper bound for all $\ell$ and $n$ was first established by Chen et al., who gave an explicit construction of an $\ell$-covering set of size $O(\frac{n (\log n)^{\omega(n)}}{\ell^{1/2}})$. They also showed $f(n,\ell) = \frac{n^{1+o(1)}}{\ell^{1/2}}$ using the fourth moment of character sums, but without providing a construction \cite{Chen.S.W2013}. In the same article, the authors show $f(p,\ell) = O(\frac{p}{\ell})$ for prime $p$ with a simple explicit construction. Koiliaris and Xu improved the result for general $n$ and $\ell$ using basic number theory, showing $f(n,\ell) = \frac{n^{1+o(1)}}{\ell}$ \cite{Koili.X2019}. An $\ell$-covering set of the same size can also be found in $O(n\ell)$ time. The value hidden in $o(1)$ could be as large as $\Omega(\frac{1}{\log \log n})$. A closer inspection of their result shows $f(n,\ell) = O(\frac{n}{\ell}\log n\log \log n)$ if $\ell$ is neither too large nor too small, that is, if $t \leq \ell \leq n/t$, where $t=n^{\Omega(\frac{1}{\log \log n})}$. See \cref{fig:comp} for a comparison of the results.
The covering problem can be considered in a more general context. For any \emph{semigroup} $(M, \diamond)$, define $A \diamond B = \{a \diamond b \mid a\in A, b\in B\}$. For $A\subset M$, we are interested in finding a small $B$ such that $A \diamond B = M$. Here $B$ is called an $A$-covering. The $\ell$-covering problem is the special case where the semigroup is $(\Z_n, \cdot)$, and $A=\Z_n^{\leq \ell}$. When $M$ is a group, it was studied in \cite{Bollobas2011}. In particular, they showed for a finite group $(G,\diamond)$ and any $A\subseteq G$, there exists an $A$-covering of size no larger than $\frac{|G|}{|A|}(\log |A|+1)$. We emphasize that our problem is over the \emph{semigroup} $(\Z_n, \cdot)$, which is \emph{not a group}, and can behave very differently. For example, if $A$ consists of only elements divisible by $2$ and $n$ is divisible by $2$, then no $A$-covering of $(\Z_n,\cdot)$ exists. It was shown that there exists a set $A$ of $\ell$ consecutive integers such that any $A$-covering of $(\Z_n,\cdot)$ has size $\Omega(\frac{n}{\ell}\log n)$ \cite{Roche.S.W2018}. This shows the choice of the set $\Z_n^{\leq \ell}$ is very special, as there are examples where the $\ell$-covering has size $O(\frac{n}{\ell})$ \cite{Chen.S.W2013}. In the pursuit of our main theorem, another instance of the covering problem arises and might be of independent interest. Let the semigroup be $(\mathbb{D}_n,\odot)$, where $\mathbb{D}_n$ is the set of divisors of $n$, and $a \odot b = \gcd(ab, n)$, where $\gcd$ is the greatest common divisor function. We are interested in finding a $\mathbb{D}_n^{\leq s}$-covering set.
\begin{figure}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
& Size of $\ell$-covering & Construction Time \\
\hline
& & \\[-1em]
Chen et al.\ \cite{Chen.S.W2013} & $O\left(\frac{n (\log n)^{\omega(n)}}{\ell^{1/2}}\right)$ & $\tilde{O}\left(\frac{n (\log n)^{\omega(n)}}{\ell^{1/2}}\right)$ \\
& & \\[-1em]
\hline
& & \\[-1em]
Chen et al.\ \cite{Chen.S.W2013} & $\frac{n^{1+o(1)}}{\ell^{1/2}}$ & Non-constructive \\
& & \\[-1em]
\hline
& & \\[-1em]
Koiliaris and Xu \cite{Koili.X2019} & $\frac{n^{1+o(1)}}{\ell}$ & $O(n\ell)$ \\
& & \\[-1em]
\hline
& & \\[-1em]
\cref{thm:main} & $O(\frac{n}{\ell}\log n)$ & $O(n\ell)$\\
\hline
& & \\[-1em]
\cref{thm:randconstruction} & $O(\frac{n}{\ell}\log n\log\log n)$ & $\tilde{O}(\frac{n}{\ell}) + n^{o(1)}$ randomized\\[+0.2em]
\hline
\end{tabular}
\end{center}
\caption{Comparison of results for $\ell$-covering for arbitrary $n$ and $\ell$. $\omega(n)$ is the number of distinct prime factors of $n$.}
\label{fig:comp}
\end{figure}
\subsection{Our Contributions}
\begin{enumerate}
\item We show $f(n,\ell) = O(\frac{n}{\ell}\log n)$ for all $\ell<n$.
\item We show that there exist a constant $c$ and infinitely many pairs $(n,\ell)$ such that $f(n,\ell) \geq c \frac{n}{\ell} \frac{\log n}{\log \log n}$.
\end{enumerate}
We also show some interesting number theoretical side results. One is a sharper bound for the relative totient function; the other is the existence of a large divisor with linear divisor sum.
\subsection{Technical overview}
Our approach is similar to the one of Koiliaris and Xu \cite{Koili.X2019}. We briefly describe their approach.
Recall $\Z_n$ is the set of integers modulo $n$. We further define $\Z_{n,d} =\set{x \mid \gcd(x,n)=d, x\in \Z_n}$ and $\Z^*_n = \Z_{n,1}$. Let $\mathcal{S}_\ell(X)$ be the set of segments of length $\ell$ with slope in $X$.
Their main idea is to convert the covering problem over the \emph{semigroup} $(\Z_n,\cdot)$ to covering problems over the \emph{group} $(\Z^*_{n/d},\cdot)$ for all $d\in \mathbb{D}_n$.
Since the sets $\Z_{n,d}$ form a partition of $\Z_n$, one can reason about covering them individually, that is, covering $\Z_{n,d}$ by $\mathcal{S}_\ell(\Z_{n,d})$.
This is equivalent to covering $\Z^*_{n/d}$ with $\mathcal{S}_\ell(\Z^*_{n/d})$, and then lifting to a cover of $\Z_{n,d}$ by multiplying everything by $d$. Hence, we only have to work with covering problems over $(\Z^*_{n/d},\cdot)$ for all $d$, all of which are \emph{groups}. The covering results for groups can be readily applied \cite{Bollobas2011}. Once we find the covering for each individual $(\Z^*_{n/d},\cdot)$, we take their union and obtain an $\ell$-covering.
The approach is sufficient to obtain $f(n,\ell) = O(\frac{n}{\ell}\log n\log \log n)$ if $\ell$ is neither \emph{too small} nor \emph{too large}. However, their result suffers when $\ell$ is extreme in two ways.
\begin{enumerate}
\item $\ell=n^{1-o(\frac{1}{\log \log n})}$: Any covering obtained would have size at least the number of divisors of $n$, which in the worst case can be $n^{\Omega(\frac{1}{\log \log n})}$ and thus dominates $\frac{n}{\ell}$.
\item $\ell=n^{o(\frac{1}{\log \log n})}$: If we are working on covering $\Z^*_n$, we need to know $|\Z^{*\leq \ell}_n|$, also known as $\phi(n,\ell)$. Previously, the estimate for $\phi(n,\ell)$ was insufficient when $\ell$ is small.
\end{enumerate}
Our approach removes the deficiency, and also eliminates the extra $\log \log n$ factor.
First, we improve the estimate for $\phi(n,\ell)$. This value is tightly connected with how many times an element is covered by segments, which is in turn connected with how large an $\ell$-covering has to be.
Second, we use $\mathcal{S}_\ell(\Z^*_n)$ to cover more than just $\Z^*_n$. It might be the case that a small number of segments in $\mathcal{S}_\ell(\Z^*_n)$ can cover $\Z_{n,d}$ for many $d$ simultaneously, which decreases the number of segments required for the cover. This change shaves off a $\log \log n$ factor.
Finally, we need to handle the case when $\ell$ is large. Clever choices are required to make sure we can shave off the $\log \log n$ factor while keeping the set of divisors involved in the segments small.
\paragraph{Organization}
The paper is organized as follows. \Cref{sec:prelim} contains the preliminaries, including all the necessary number theory background. \Cref{sec:numbertheory} describes some number-theoretic results on bounding $\phi(n,\ell)$ and on finding a large divisor of $n$ with linear divisor sum. \Cref{sec:bounds} proves the main theorem that $f(n,\ell) = O(\frac{n}{\ell}\log n)$, discusses its construction, and also provides a lower bound.
\section{Preliminaries}\label{sec:prelim}
The paper has a few simple algorithmic ideas, but our methods are mainly analytical. Hence, we reserve a large amount of space in the preliminaries to set up the scene.
Let $\mathcal{X}$ be a collection of sets in the universe $U$. A \emph{set cover} of $U$ is a collection of subsets in $\mathcal{X}$ which together covers $U$. Formally, $\mathcal{X}'\subseteq \mathcal{X}$ such that $U= \bigcup_{X\in \mathcal{X}'} X$. The \emph{set cover problem} is the computational problem of finding a minimum cardinality set cover.
All multiplications in $\Z_n$ are modulo $n$, hence we will omit $\pmod n$ from now on.
Recall a set of the form $\set{ ix \mid x \in \Z_n^{\leq \ell}}$ is called a \emph{segment} of length $\ell$ with slope $i$. Note that a segment of length $\ell$ might have fewer than $\ell$ elements. Recall $\mathcal{S}_\ell(X)$ is the set of segments of length $\ell$ with slope in $X$, namely $\set{ \set{ ix \mid x\in \Z_n^{\leq \ell}} \mid i\in X}$. Hence, finding an $\ell$-covering is equivalent to set cover with segments in $\mathcal{S}_\ell(\Z_n)$, where the universe is $\Z_n$.
The set cover problem has some well-known bounds relating the size of a set cover to the frequency with which elements are covered \cite{Lovas.1975,Stein.1974}.
\begin{theorem}[\cite{Lovas.1975,Stein.1974}]\label{thm:bettergreedy}
Let there be a collection of $t$ sets each with size at most $a$, and each element of the universe is covered by at least $b$ of the sets, then there exists a subcollection of $O(\frac{t}{b}\log a)$ sets that covers the universe.
\end{theorem}
The above theorem is the main combinatorial tool for bounding the size of a set cover. To obtain a cover of the specified size, the greedy algorithm is sufficient.
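For concreteness, here is a minimal Python sketch of the greedy algorithm (our own illustration; it attains the bound of \cref{thm:bettergreedy}, with no attempt at the running time optimizations discussed later).
\begin{verbatim}
def greedy_cover(universe, sets):
    # repeatedly pick the set covering the most uncovered elements
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("the sets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover
\end{verbatim}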
The base of the $\log$ is $e$. To avoid getting into the negatives, we take $\log(x)$ to mean $\max(\log(x),1)$. $\tilde{O}(f(n))$, the soft $O$, is shorthand for $O(f(n)\polylog n)$.
\subsection{Number theory}
We refer to some standard notation and bounds, which can be found in various analytic number theory textbooks, for example \cite{Davenport2000-nd}.
Recall $\Z_n$ is the set of integers modulo $n$, $\Z_{n,d} =\set{x |\gcd(x,n)=d, x\in \Z_n}$, and $\Z^*_n = \Z_{n,1}$. $\Z^*_n$ is the set of numbers in $\Z_n$ that are relatively prime to $n$. The notation $m|n$ means $m$ is a divisor of $n$.
\begin{enumerate}
\item $\pi(n)$, the \emph{prime counting function}, is the number of primes no larger than $n$, and $\pi(n)=\Theta(\frac{n}{\log n})$.
\item $\phi(n)$, the \emph{Euler totient function}, defined as $\phi(n) = |\Z^*_n| = n\prod_{p|n}\left(1-\frac{1}{p}\right)$; it satisfies $\phi(n)=\Omega(\frac{n}{\log \log n})$.
\item $\omega(n)$, the \emph{number of distinct prime factors} of $n$, has the relation $\omega(n) = O(\frac{\log n}{\log \log n})$.
\item $d(n)$, the \emph{divisor function}, is the number of divisors of $n$, and $d(n) = n^{O(\frac{1}{\log \log n})}=n^{o(1)}$.
\item $\sigma(n)$, the \emph{divisor sum function}, is the sum of divisors of $n$, and $\sigma(n) \leq \frac{n^2}{\phi(n)}$. This also implies $\sigma(n) = O(n\log \log n)$.
\item The sum of reciprocal of primes no larger than $n$ is $\sum_{p\leq n, p \text{ prime}}\frac{1}{p} = O(\log \log n)$.
\end{enumerate}
The center of our argument lies in the \emph{relative totient function}, denoted as $\phi(n,\ell) = |\Z^{*\leq \ell}_n|$.
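By its definition, $\phi(n,\ell)$ can be computed by brute force; a sketch for small inputs only:
\begin{verbatim}
from math import gcd

def phi_rel(n, ell):
    # phi(n, ell) = |{x : 1 <= x <= ell, gcd(x, n) = 1}|
    return sum(1 for x in range(1, ell + 1) if gcd(x, n) == 1)

print(phi_rel(210, 7))   # 1: only x = 1 is coprime to 2*3*5*7
\end{verbatim}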
We present a simple number-theoretic lemma; it is undoubtedly known, but it is easier to prove it directly.
\begin{lemma}\label{lem:tau}
Let $y\in\Z^*_n$, and $B\subset \Z_{n}^*$.
The number of $x\in \Z_{dn}^*$ such that $xb\equiv y \pmod n$ for some $b\in B$ is $|B|\frac{\phi(dn)}{\phi(n)}$.
\end{lemma}
\begin{proof}
Indeed, the theorem is the same as finding the number of solutions to $x\equiv yb^{-1} \pmod n$ where $b\in B$. For a fixed $b$, let $z=yb^{-1}$. We are asking the number of $x\in \Z^*_{dn}$ such that $x\equiv z \pmod n$.
Consider the set $A=\{z+kn \mid 0\leq k\leq d-1\}$. Let $P_n$ be the set of distinct prime factors of $n$. Note $\gcd(z,n)=1$, thus no $p\in P_n$ can divide any element of $A$. Let $P_{dn}\setminus P_{n}=P_d'\subseteq P_{d}$, and let $q$ be a product of distinct elements of $P_d'$; then $q\mid d$ and $\gcd(q,n)=1$. Let $A_q = \{a \mid a\in A, q|a\}$. Since $q\mid z+kn \Leftrightarrow k\equiv -zn^{-1} \pmod q$, and $0\leq k\leq d-1$ with $q|d$, we get $|A_q|=\frac{d}{q}$.\\
We can use the principle of inclusion-exclusion to count the elements $a\in A$ such that $\gcd(a,dn)=1$
$$\sum_{i=0}^{|P_d'|}(-1)^{i}\sum_{S\subseteq P_d',|S|=i}|A_{\prod_{p\in S}p}|=\sum_{i=0}^{|P_d'|}(-1)^{i}\sum_{S\subseteq P_d',|S|=i}\frac{d}{\prod_{p\in S}p}=d\prod_{p\in P_d'}\left(1-\frac{1}{p}\right)=\frac{\phi(dn)}{\phi(n)}.$$
Because the solution sets of $x$ for different $b\in B$ are disjoint, we obtain that the total number of solutions over all $b\in B$ is $|B|\frac{\phi(dn)}{\phi(n)}$.
\end{proof}
\begin{corollary}\label{cor:count}
Consider integers $0\leq \ell<n$, $y\in \Z_{n,d}$. The number of solutions $x\in \Z^*_n$ such that $xb\equiv y \pmod n$ for some $b\leq \ell$ is
\[
\frac{\phi(\frac{n}{d}, \floor{\frac{\ell}{d}})}{\phi(\frac{n}{d})} \phi(n).
\]
\end{corollary}
\begin{proof}
Since $x\in \Z_n^*$, we see that $xb\equiv y \pmod n$ if and only if $d|b$, $x\frac{b}{d}\equiv \frac{y}{d} \pmod{\frac{n}{d}}$, and $\frac{b}{d} \leq \floor{\frac{\ell}{d}}$. We can then apply \cref{lem:tau} and obtain the number of solutions is $\phi(n/d,\floor{\ell/d})\phi(n)/\phi(n/d)$.
\end{proof}
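\cref{cor:count} is easy to sanity-check numerically; the following brute-force sketch (illustration only) compares both sides on a small instance.
\begin{verbatim}
from math import gcd

def phi(n):
    return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)

def phi_rel(n, ell):
    return sum(1 for x in range(1, ell + 1) if gcd(x, n) == 1)

def count_lhs(n, ell, y):
    # x in Z_n^* with x*b = y (mod n) for some 0 <= b <= ell
    return sum(1 for x in range(1, n) if gcd(x, n) == 1
               and any((x * b) % n == y for b in range(ell + 1)))

n, ell, y = 12, 4, 2
d = gcd(y, n)
rhs = phi_rel(n // d, ell // d) * phi(n) // phi(n // d)
print(count_lhs(n, ell, y), rhs)   # 2 2
\end{verbatim}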
The following is a highly technical theorem from sieve theory.
\begin{theorem}[Brun's sieve \textup{\cite[p.93]{cojocaru2006introduction}} ]\label{thm:sieve}
Let \(\mathcal{A}\) be any set of natural numbers \(\leq x\) (i.e.,
\(\mathcal{A}\) is a finite set) and let \(\mathcal{P}\) be a set of primes. For each prime \(p\in\mathcal{P}\), let \(\mathcal{A}_p\) be the set of elements of \(\mathcal{A}\) which are divisible by \(p\). Let \(\mathcal{A}_1:=\mathcal{A}\) and for any squarefree positive integer \(d\) composed of primes of \(\mathcal{P}\) let \(\mathcal{A}_d:=\bigcap_{p|d}\mathcal{A}_p\). Let \(z\) be a positive real number and let \(P(z):=\prod_{p\in\mathcal{P},p<z}p\).\\
We assume that there exists a multiplicative function \(\gamma(\cdot)\) such that, for any \(d\) as above,
\[|\mathcal{A}_d|=\frac{\gamma(d)}{d}X+R_d\]
for some \(R_d\), where
\[X:=|\mathcal{A}|.\]
We set \[S(\mathcal{A},\mathcal{P},z):=|\mathcal{A}\backslash\cup_{p|P(z)}\mathcal{A}_p|=|\{a:a\in\mathcal{A},\gcd(a,P(z))=1\}|\] and
\[W(z):=\prod_{p|P(z)}(1-\frac{\gamma(p)}{p}).\]
Suppose that\\
1. \(|R_d|\leq\gamma(d)\) for any squarefree \(d\) composed of primes of \(\mathcal{P}\);\\
2. there exists a constant \(A_1\geq1\) such that
\[0\leq\frac{\gamma(p)}{p}\leq 1-\frac{1}{A_1};\]\\
3. there exist constants \(\kappa\geq0\) and \(A_2\geq1\) such that
\[\sum_{w\leq p<z}\frac{\gamma(p)\log p}{p}\leq\kappa\log\frac{z}{w}+A_2\quad\text{if}\quad 2\leq w \leq z.\]
4. Let \(b\) be a positive integer and let \(\lambda\) be a real number satisfying
\[0\leq\lambda e^{1+\lambda}\leq 1.\]
Then
\[\begin{aligned}
S(\mathcal{A},\mathcal{P},z)\geq &XW(z)\{1-\frac{2\lambda^{2b}e^{2\lambda}}{1-\lambda^2 e^{2+2\lambda}}\exp((2b+2)\frac{c_1}{\lambda\log z})\}\\
&+O(z^{2b-1+\frac{2.01}{e^{2\lambda/\kappa}-1}}),
\end{aligned}\]
where \[c_1:=\frac{A_2}{2}\{1+A_1(\kappa+\frac{A_2}{\log 2})\}.\]\\
\end{theorem}
\section{Number theoretical results}\label{sec:numbertheory}
In this section we show some number-theoretic bounds. The results are technical, and the reader can skip the proofs of this section on first reading.
\subsection{Estimate for relative totient function}
This section proves a good estimate of $\phi(n,\ell)$ using sieve theory; this direction was hinted at in \cite{252852}.
\begin{theorem}\label{thm:betterestimate}
There exists positive constant $c$, such that
\[
\phi(n,\ell) = \begin{cases}
\Omega(\frac{\ell}{n} \phi(n))& \text{ if } \ell > c \log^5 n\\
\Omega(\frac{\ell}{\log \ell}) & \text{ if } \ell > c \log n\\
\end{cases}
\]
\end{theorem}
\begin{proof}
\noindent \textit{Case 1.} $\ell > c \log^5 n$.
Let \(z\) be a value we will define later.
Let \(n_0 = \prod_{p|n,p< z}p\); we can see that \(\phi(n,\ell)\) and \(\phi(n_0,\ell)\) are close.
\[
\begin{aligned}
|\phi(n,\ell)-\phi(n_0,\ell)|&=\abs{\sum_{0\leq{m}\leq{\ell},(m,n_0)=1}1-\sum_{0\leq{m}\leq{\ell},(m,n)=1}1}\\
&\leq \sum_{1\leq{m}\leq{\ell}:p|n,p\geq z,p|m}1\\
&\leq \sum_{p|n,p\geq z}\frac{\ell}{p}\\
&\leq \frac{\ell\omega(n)}{z}\\
&\leq \frac{c_1\ell\log{n}}{z\log \log n}
\end{aligned}
\]
Now, we want to estimate \(\phi(n_0,\ell)\) using Brun's sieve (\cref{thm:sieve}); the notation is as in that theorem.
Let \(\mathcal{A}=\{1,2,\ldots,\ell\}, \mathcal{P}=\{p:p|n\}, X=|\mathcal{A}|=\ell\), and let the multiplicative function be \(\gamma(p)=1\) if \(p\in\mathcal{P}\) and \(0\) otherwise.
\begin{itemize}
\item \textit{Condition (1).} For any squarefree \(d\) composed of primes of \(\mathcal{P}\),
\begin{equation*}
\begin{aligned}
|R_d| &=\abs{\floor{\frac{\ell}{d}} -\frac{\ell}{d}} \leq 1 = \gamma(d).
\end{aligned}
\end{equation*}
\item \textit{Condition (2).} We choose \(A_1\) = 2, therefore \(0\leq \frac{\gamma(p)}{p}=\frac{1}{p}\leq \frac{1}{2} = 1-\frac{1}{A_1}\).
\item \textit{Condition (3).} Because \(R(x):=\sum_{p<x}\frac{\log p}{p}=\log x + O(1)\) \cite{cojocaru2006introduction}, we have
\[\sum_{w\leq p<z}\frac{\gamma(p)\log{p}}{p} \leq \sum_{w\leq p<z}\frac{\log{p}}{p} = R(z)-R(w)=\log{\frac{z}{w}}+O(1).\]
We can choose \(\kappa=1\) and some \(A_2\) large enough to satisfy Condition (3).
\item \textit{Condition (4).} By picking \(b=1,\lambda=\frac{2}{9}\), \(b\) is a positive integer and \(0<\frac{2}{9}e^{11/9}\approx 0.75<1\).
\end{itemize}
We are ready to bound \(\phi(n_0,\ell)\). Brun's sieve shows
\[
\begin{aligned}
\phi(n_0,\ell)=S(\mathcal{A},\mathcal{P},z)\geq &\ell \frac{\varphi(n_0)}{n_0}\left(1-\frac{2\lambda^{2b}e^{2\lambda}}{1-\lambda^2 e^{2+2\lambda}}\exp((2b+2)\frac{c_1}{\lambda\log z})\right)
\\
&+O(z^{2b-1+\frac{2.01}{e^{2\lambda/\kappa}-1}})\\
\geq &\ell \frac{\varphi(n_0)}{n_0}\left(1-0.3574719\exp(\frac{18 c_1}{\log z})\right)+O(z^{4.59170})
\end{aligned}
\]
This means that there exists a positive constant \(c_2\) such that for some small $\e>0$,
\[
\phi(n_0,\ell)\geq \ell \frac{\varphi(n_0)}{n_0}\left(1-\frac{2}{5}\exp(\frac{18c_1}{\log z})\right)-c_2z^{5-\e}.
\]
We choose some constant \(z_0\) such that \(\frac{2}{5}\exp(\frac{18c_1}{\log z_0}) \leq \frac12\); if \(z>z_0\) (we will later make sure \(z>z_0\)), then
\[
\phi(n_0,\ell)\geq \frac12 \ell \frac{\varphi(n_0)}{n_0}-c_2z^{5-\e}.
\]
Note that if \(n_1|n_2\), then \(\varphi(n_1)/n_1\geq \varphi(n_2)/n_2\), since \(\varphi(n)/n=\prod_{p|n}(1-1/p)\) and every prime factor of $n_1$ is also a prime factor of \(n_2\). Therefore,
\[
\phi(n_0,\ell)\geq \frac12 \ell \frac{\varphi(n)}{n}-c_2z^{5-\e}.
\]
Recall there exists a \(c_3\) such that \(\frac{\phi(n)}{n}\geq\frac{c_3}{\log\log n}\),
\[
\begin{aligned}
\phi(n,\ell)&\geq \phi(n_0,\ell)-c_1\frac{\ell\log n}{z\log \log n}\\
&\geq \frac{1}{2} \ell \frac{\phi(n)}{n}-c_2z^{5-\e} -c_1\frac{\ell\log n}{z\log \log n}\\
&= \frac{1}{4}\ell\frac{\phi(n)}{n}+(\frac{1}{8}\ell\frac{\phi(n)}{n}-c_2z^{5-\e}) +( \frac{1}{8}\ell\frac{\phi(n)}{n}-c_1\frac{\ell\log n}{z\log \log n})\\
&\geq \frac{1}{4}\ell\frac{\phi(n)}{n}+(\frac{c_3}{8}\frac{\ell}{\log\log n}-c_2z^{5-\e}) +( \frac{c_3}{8}\frac{\ell}{\log\log n}-c_1\frac{\ell\log n}{z\log \log n}).
\end{aligned}
\]
By picking
\[z = \frac{8c_1}{c_3}\log n = C\log n,\]
we obtain
\[c_1\frac{\ell\log n}{z\log \log n} \leq \frac{c_3}{8}\frac{\ell}{\log\log n}.\]
By picking $c=\frac{8c_2}{c_3}C^5$ and
\[\ell\geq\frac{8c_2}{c_3}C^5\log^{5-\e}n\log\log n=c\log^{5-\e}n\log\log n,\]
we obtain
\[c_2 z^{5-\e}\leq \frac{c_3}{8}\frac{\ell}{\log\log n}.\]
Recall that for the above to be true we require \(z>z_0\); since \(z=C\log n\), taking \(n\) sufficiently large is enough.
We obtain that if \(n\) is sufficiently large and \(\ell \geq c\log^5n \geq c\log^{5-\e}n\log\log n\), then \(\phi(n,\ell) \geq \frac{\ell}{4n}\varphi(n)\).
Thus for all \(n\) and \(\ell \geq c\log^5 n\), \(\phi(n,\ell) = \Omega(\frac{\ell}{n}\varphi(n))\).
\noindent \textit{Case 2.} $\ell > c\log n$.
Observe that for all $\ell\leq n$, $\varphi(n,\ell)\geq 1+\pi(\ell) - \omega(n)$. This is because each prime no larger than $\ell$ is relatively prime to $n$ unless it is a factor of $n$, and $1$ is also relatively prime to $n$.
We show there exists a constant $c$ such that $\varphi(n,\ell)=\Omega(\frac{\ell}{\log \ell})$ for $\ell\geq c \log n$, by showing $\frac{1}{2}\pi(\ell)\geq \omega(n)$. There exist constants $c_1,c_2$ such that $\pi(\ell) \geq c_1\frac{\ell}{\log \ell}$ and $\omega(n) \leq c_2\frac{\log n}{\log \log n}$. Therefore, we want some $\ell$ such that $\frac{c_1}{2}\frac{\ell}{\log \ell} \geq c_2 \frac{\log n}{\log \log n}$. This holds as long as $\ell \geq c \log n$ for some sufficiently large $c$.
Noting that the $c$ in the two parts of the proof might be different, we pick the larger of the two to be the one in the theorem.
\end{proof}
As a corollary, we prove \cref{thm:betterunitcover}.
\begin{theorem}
\label{thm:betterunitcover}
There exists a constant $c$, such that for any $n$ and any divisor $d$ of $n$, if $\frac{\ell}{c \log^5 n} \geq d$, then each element in $\Z_{n,d}$ is covered $\Omega(\frac{\ell}{n}\phi(n))$ times by $\mathcal{S}_\ell(\Z^*_n)$.
\end{theorem}
\begin{proof}
By \cref{cor:count}, the number of segments in $\mathcal{S}_\ell(\Z^*_n)$ covering some fixed element in $\Z_{n,d}$ is $\frac{\phi(n/d,\floor{\ell/d})}{\phi(n/d)}\phi(n)$. As long as $\ell$ is not too small, $\phi(n,\ell) = \Omega(\frac{\ell}{n}\phi(n))$. In particular, by \cref{thm:betterestimate}, if $\lfloor \ell/d\rfloor \geq c \log^5(n/d)$, we have $\phi(n/d,\floor{\ell/d})/\phi(n/d)=\Omega(\frac{\ell}{n})$. Therefore, each element in $\Z_{n,d}$ is covered $\Omega(\frac{\ell}{n}\phi(n))$ times.
\end{proof}
\subsection{Large divisor with small divisor sum}
\begin{theorem}\label{thm:largel}
If $r = n^{O(\frac{1}{\log \log \log n})}$, then there exists $m|n$, such that $m\geq r$,
$d(m)=r^{O(\frac{1}{\log \log r})}$ and $\sigma(m) = O(m)$.
\end{theorem}
\begin{proof}
If there is a single prime $p$ such that $p^e|n$ and $p^e\geq r$, then we pick $m = p^{e'}$, where $e'$ is the smallest integer such that $p^{e'}\geq r$. One can see $d(m) = e'+1 = O(\log r) = r^{O(\frac{1}{\log\log r})}$; also $\sigma(m) = \frac{p^{e'+1}-1}{p-1} \leq \frac{p}{p-1}\,m \leq 2m$, and we are done.
Otherwise, we write $n=\prod_{i=1}^k p_i^{e_i}$, where the $p_i$ are distinct primes, ordered by the weight $w_i=e_ip_i\log p_i$ in decreasing order, that is, $w_i\geq w_{i+1}$ for all $i$. Let $j$ be the smallest number such that $\prod_{i=1}^j p_i^{e_i}\geq r$, and let $m=\prod_{i=1}^j p_i^{e_i}$.
First, we show $d(m)$ is small.
Let $m' = m/p_j^{e_j}$.
One can see that $m'<r$.
\[
d(m) \leq (e_j+1)\,d(m') = O(\log r)\cdot r^{O(\frac{1}{\log \log r})} = r^{O(\frac{1}{\log \log r})},
\]
since $m'<r$ and, in this case, $p_j^{e_j}<r$.
To show that $\sigma(m) = O(m)$, we show $\phi(m) = \Theta(m)$. Indeed, by $\sigma(m)\leq \frac{m^2}{\phi(m)}$, we obtain $\sigma(m)=O(m)$.
For simplicity, it is easier to work with sums instead of products, so we take the logarithm of everything and define $t=\log n$.
By our definition, $\log r \leq \frac{t}{\log \log t}$ and $\sum_{i=1}^k e_i \log p_i = t$.
Recall that $j$ is the smallest number such that $\sum_{i=1}^j e_i \log p_i \geq \log r$. Since no single prime power $p_i^{e_i}$ reaches $r$, this also implies $\sum_{i=1}^j e_i \log p_i< 2\log r \leq \frac{2t}{\log \log t}$.
Now, consider $e'_1,\ldots,e'_k$, such that the following holds.
\begin{itemize}
\item $\sum_{i=1}^j e_i\log p_i = \sum_{i=1}^j e_i' \log p_i$, and $e_i'p_i \log p_i = c_1$ for some $c_1$, when $1\leq i \leq j$,
\item $\sum_{i=j+1}^k e_i\log p_i = \sum_{i=j+1}^k e_i' \log p_i$, and $e_i'p_i \log p_i = c_2$ for some $c_2$, when $j+1\leq i \leq k$.
\end{itemize}
Note $c_1$ and $c_2$ can be interpreted as weighted averages. Indeed, consider sequences $x_1,\ldots,x_n$ and $y_1,\ldots,y_n$ such that $\sum_{i}x_i = \sum_{i}y_i$. If for some non-negative $a_1,\ldots,a_n$ we have $a_iy_i=c$ for all $i$, then $c \leq \max_{i}a_ix_i$. Indeed, there exists an index $i_0$ with $x_{i_0}\geq y_{i_0}$, so $\max_{i}a_ix_i \geq a_{i_0}x_{i_0}\geq a_{i_0}y_{i_0}=c$. Similarly, $c\geq \min_{i}a_ix_i$. This shows $c_1\geq c_2$, because $c_2\leq \max_{i=j+1}^k w_i = w_{j+1} \leq w_j = \min_{i=1}^j w_i\leq c_1$.
We first give a lower bound of $c_2$.
$\sum_{i=j+1}^k \frac{c_2}{p_i} = \sum_{i=j+1}^k e'_i\log p_i \geq t(1-\frac{2}{\log \log t}) \geq \frac{t}{2}$.
$\sum_{i=j+1}^k \frac{c_2}{p_i}\leq c_2 \sum_{i=1}^k \frac{1}{p_i}\leq c_2 \sum_{p\text{ prime}, p=O(t)} \frac{1}{p} \leq c_2 \log \log t$.
This shows $c_2 \log \log t \geq \frac{t}{2}$, or $c_2\geq \frac{t}{2\log \log t}$.
Since $c_1\geq c_2$,
$\sum_{i=1}^j \frac{1}{p_i} = \sum_{i=1}^j \frac{e_i'\log p_i}{c_1} \leq \frac{\frac{2t}{\log \log t}}{c_1} \leq \frac{\frac{2t}{\log \log t}}{\frac{t}{2\log\log t}} = 4$.
Note $\phi(m) = m \prod_{i=1}^j (1-\frac{1}{p_i})$. Because $-2x < \log(1-x) < -x$ for $0\leq x\leq 1/2$, so $\sum_{i=1}^j \log(1-\frac{1}{p_i})\geq -2\sum_{i=1}^j \frac{1}{p_i} = -\Theta(1)$. Hence $\prod_{i=1}^j (1-\frac{1}{p_i}) = \Theta(1)$, and $\phi(m) = \Theta(m)$.
\end{proof}
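The construction in the proof is straightforward to carry out once the factorization of $n$ is known. A rough Python sketch (assuming $r\leq n$; the trial-division factoring is for illustration only):
\begin{verbatim}
from math import log

def factor(n):
    # trial division, for illustration only; returns {prime: exponent}
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def large_divisor(n, r):
    f = factor(n)
    # case 1: a single prime power p^e | n with p^e >= r
    for p, e in f.items():
        if p ** e >= r:
            m = p
            while m < r:
                m *= p
            return m
    # case 2: multiply whole prime powers, heaviest weight first
    m = 1
    for p in sorted(f, key=lambda p: -f[p] * p * log(p)):
        if m >= r:
            break
        m *= p ** f[p]
    return m

print(large_divisor(2520, 10))   # 35, and sigma(35) = 48 <= 2 * 35
\end{verbatim}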
Some interesting number theoretical results are direct corollaries of \cref{thm:largel}.
\begin{corollary}\label{cor:independentinterest}
For positive integer $n$, there exists a $m|n$ such that $m = n^{\Omega(\frac{1}{\log \log \log n})}$ and $\sigma(m)=O(m)$.
\end{corollary}
It would be interesting to know if the above corollary is tight.
\begin{lemma}\label{lem:anothercover}
Let $m|n$ and $m\geq \frac{n}{s}$, then $\mathbb{D}_n^{\leq s} \odot \mathbb{D}_m = \mathbb{D}_n$.
\end{lemma}
\begin{proof}
Consider a divisor $d$ of $n$, and let $d_1 = \gcd(m,d) \in \mathbb{D}_m$ and $d_2 = d/d_1$. Then $d_2 \mid \frac{n}{m}$ and $\frac{n}{m} \leq s$, so $d_2\in \mathbb{D}_n^{\leq s}$ and $d = d_2 \odot d_1$.
\end{proof}
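\cref{lem:anothercover} can likewise be checked directly (illustration only):
\begin{verbatim}
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def check_cover(n, m, s):
    # verify D_n^{<=s} (.) D_m = D_n, where a (.) b = gcd(a*b, n)
    Ds = [d for d in divisors(n) if d <= s]
    prods = {gcd(a * b, n) for a in Ds for b in divisors(m)}
    return prods == set(divisors(n))

print(check_cover(60, 12, 5))   # True: m = 12 divides 60, m >= 60/5
\end{verbatim}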
\begin{corollary}\label{cor:independentinterest2}
For $s\leq n$, there exists a $B$ such that $\mathbb{D}_n^{\leq s}\odot B = \mathbb{D}_n$ and $|B| = (\frac{n}{s})^{O(\frac{1}{\log \log \frac{n}{s}})}$.
\end{corollary}
\begin{proof}
Let $r = \frac{n}{s}$, and let $m$ be the one in the construction in \cref{thm:largel}. Let $B=\mathbb{D}_m$.
Note that in the proof of \cref{thm:largel}, we showed $|B| = d(m) = r^{O(\frac{1}{\log \log r})}$ without requiring any information on how large $r$ has to be. Also, $\mathbb{D}_n^{\leq s} \odot B = \mathbb{D}_n$ by \cref{lem:anothercover}.
\end{proof}
\section{$\ell$-covering}\label{sec:bounds}
In this section, we prove our bounds on $f(n,\ell)$ and also provide a quick randomized construction.
\subsection{Upper bound}
The high-level idea is to split the problem into sub-problems of covering multiple $\Z_{n,d}$.
Can we cover $\Z_{n,d}$ for many distinct $d$, using only a small number of segments in $\mathcal{S}_\ell(\Z^*_n)$? We answer the question affirmatively.
For the rest of this section, $s = \frac{\ell}{c\log^5 n}$, where $c$ is the constant in \cref{thm:betterestimate}.
Define $g(n,\ell)$ to be the size of the smallest set cover of $\bigcup_{d|n,d\leq s} \Z_{n,d}$ using $\mathcal{S}_\ell(\Z^*_n)$.
We bound $g(n,\ell)$ using the fact that each element is covered many times, and the combinatorial set cover upper bound \cref{thm:bettergreedy}.
\begin{theorem}
\[g(n,\ell) \leq \begin{cases}
O(\frac{n}{\ell}\log \ell) & \text{ if } \ell\geq c \log^5 n\\
O(\frac{\phi(n)}{\ell}\log^2 \ell) & \text{ if } \ell\geq c \log n\\
\phi(n) & \text{ for all } \ell.
\end{cases}\]
\end{theorem}
\begin{proof}
We consider $3$ cases.
Case 1. If $\ell>c\log^5 n$, then $\phi(n,\ell) = \Omega(\frac{\ell}{n}\phi(n))$ by \cref{thm:betterestimate}. By \cref{thm:bettergreedy}, there exists a cover of size
\[
g(n,\ell) = O\left(\frac{\phi(n)\log \ell}{\frac{\ell}{n}\phi(n)}\right) = O\left(\frac{n}{\ell}\log \ell\right).
\]
Case 2. If $\ell\geq c\log n$, then we obtain
\[
g(n,\ell) = O\left(\frac{\phi(n)\log \ell}{\frac{\ell}{\log \ell}}\right) = O\left(\frac{\phi(n)}{\ell}\log^2 \ell\right).\]
Note that this uses $\log^2 \ell = \Omega((\log \log n)^2)$, which dominates $\frac{n}{\phi(n)}=O(\log \log n)$.
Case 3. The last case is trivial: $g(n,\ell)\leq |\Z^*_n| = \phi(n)$.
\end{proof}
Our approach is to find some set $B\subseteq \mathbb{D}_n$ and, for each $b\in B$, to generate a cover of $\bigcup_{d\leq s} \Z_{n,b \odot d}$ using $\mathcal{S}_\ell(\Z_{n,b})$, by \cref{thm:betterunitcover}. Certainly, $B$ has to be chosen so that $\mathbb{D}_n^{\leq s}\odot B = \mathbb{D}_n$.
Each sub-problem is then precisely using $\mathcal{S}_\ell(\Z^*_{\frac{n}{b}})$ to cover $\bigcup_{d\leq s,d|\frac{n}{b}}\Z_{\frac{n}{b},d}$ for $b\in B$, followed by lifting via multiplication by $b$. Hence we obtain
\[
f(n,\ell) \leq \sum_{b\in B} g(\frac{n}{b},\ell).
\]
There can be many choices of $B$, but we need to make sure $|B|$ is not too large when $s$ is large. Also, we need to make sure the number of segments chosen over $\mathcal{S}_\ell(\Z^*_{n/b})$ for all $b\in B$ is also small. Here are the two possible choices we use in our construction.
\begin{enumerate}
\item Let $B=\mathbb{D}_n^{>s} \cup \{1\}$. If $d\leq s$, then $d=d\odot 1$; if $d > s$, then $d=1\odot d$. Hence $\mathbb{D}_n^{\leq s} \odot B = \mathbb{D}_n$.
\item Let $m|n$, $m\geq \frac{n}{s}$ and $B=\mathbb{D}_m$. \cref{lem:anothercover} showed that $\mathbb{D}_n^{\leq s} \odot B = \mathbb{D}_n$.
\end{enumerate}
The first choice works well when $\ell$ is not too large.
\begin{lemma}\label{lem:small}
There is a constant $c$, such that $f(n,\ell) = O(\frac{n}{\ell}\log n)$ if $\ell\leq n^{1-\frac{c}{\log \log n}}$.
\end{lemma}
\begin{proof}
Let $B=\{d \mid d\in \mathbb{D}_n, d> s \}\cup \{1\}$. Observe that $|B| \leq d(n) = n^{O(\frac{1}{\log \log n})} \leq \frac{n}{\ell}$; thus $|B|$ is dominated by our desired bound of $O(\frac{n}{\ell}\log n)$ and hence irrelevant.
\paragraph{Case 1}
If $\ell<c\log n$, then we are done, since $f(n,\ell)\leq n = O(\frac{n}{\ell}\log n)$.
\paragraph{Case 2}
Consider $\ell > c\log^5 n$.
\[
\begin{aligned}
f(n,\ell) &\leq \sum_{d\in B}g(\frac{n}{d},\ell)\\
&\leq \sum_{d\in B} \left(\frac{n}{d} \frac{\log \ell}{\ell} + 1\right)\\
&=|B|+ \frac{n\log \ell}{\ell} + \frac{\log\ell}{\ell}\sum_{d\in B\setminus \{1\}} \frac{n}{d}\\
&=O\left(\frac{n\log n}{\ell}\right) + \frac{\log\ell}{\ell}\sum_{d\in B\setminus \{1\}} \frac{n}{d}\\
\end{aligned}
\]
Hence, we are concerned with the last term.
We further separate into $2$ cases. If $\ell < n^{\frac{c}{\log \log n}}$,
\[
\begin{aligned}
\frac{\log\ell}{\ell}\sum_{d\in B\setminus \{1\}} \frac{n}{d} &\leq \frac{\sigma(n)\log\ell}{\ell}\\
&\leq \frac{n\log \log n\log\ell}{\ell}\\
&= O\left(\frac{n\log \log n\frac{\log n}{\log \log n}}{\ell}\right)\\
&= O\left(\frac{n\log n}{\ell}\right)
\end{aligned}
\]
Otherwise $\ell \geq n^{\frac{c}{\log \log n}}$. Since all values in $B\setminus \{1\}$ are larger than $s$, we know that $\sum_{d\in B\setminus \{1\}} \frac{n}{d} \leq |B|\frac{n}{s}$. In particular, $|B| \leq d(n) = n^{O(\frac{1}{\log\log n})} \leq \frac{n^{\frac{c}{\log \log n}}}{c\log^5 n} \leq s$ for a sufficiently large universal constant $c$.
\[
\begin{aligned}
\frac{\log\ell}{\ell}\sum_{d\in B\setminus \{1\}} \frac{n}{d}
&\leq |B|\frac{n}{s} \frac{\log \ell}{\ell}\\
&= O\left(\frac{n\log \ell}{\ell}\right)\\
&= O\left(\frac{n\log n}{\ell}\right)
\end{aligned}
\]
\paragraph{Case 3} Finally, consider $c\log n\leq \ell \leq c\log^5 n$.
\[
\begin{aligned}
f(n,\ell) &\leq \sum_{d\in B}g(\frac{n}{d},\ell)\\
&\leq \sum_{d\in B} \left(\phi(n/d) \frac{(\log \ell)^2}{\ell} + 1\right)\\
&\leq O(\frac{n}{\ell} \log^2 \ell)\\
&= O\left(\frac{n}{\ell} (\log \log n)^2\right)\\
&=O\left(\frac{n\log n}{\ell}\right)
\end{aligned}
\]
\end{proof}
We are ready to prove the main theorem by handling the case when $\ell$ is large using the second choice.
\begin{theorem}[Main]\label{thm:main}
There exists an $\ell$-covering of size $O(\frac{n\log n}{\ell})$ for all $n, \ell$ with $\ell<n$.
\end{theorem}
\begin{proof}
When $\ell\leq n^{1-\frac{c}{\log \log n}}$, \cref{lem:small} handles it. Otherwise, $\ell> n^{1-\frac{c}{\log \log n}}$. By \cref{thm:largel}, there exists $m|n$ such that $m\geq \frac{n}{s}$, $d(m) = r^{O(\frac{1}{\log \log r})}$ for $r=\frac{n}{s}$, and $\sigma(m)=O(m)$.
Note $\mathbb{D}_n^{\leq s}\odot \mathbb{D}_m = \mathbb{D}_n$.
Therefore we can obtain the following.
\[
\begin{aligned}
f(n,\ell) &\leq \sum_{d|m}g\left(\frac{n}{d},\ell\right)\\
&\leq \sum_{d|m} \left(\frac{n}{d}\frac{\log n}{\ell} + 1\right)\\
&\leq \frac{n}{m} \sum_{d|m} \frac{m}{d} \frac{\log n}{\ell} + d(m)\\
&= \frac{n}{m} \sigma(m) \frac{\log n}{\ell} + O\left(\left(\frac{cn\log^5 n}{\ell}\right)^{\frac{1}{\log \log \frac{n}{s}}}\right)\\
&= \frac{n}{m} m \frac{\log n}{\ell} + O\left(\left(\frac{cn\log^5 n}{\ell}\right)^{\frac{1}{\log \log n}}\right)\\
&=O\left(\frac{n\log n}{\ell}\right)
\end{aligned}
\]
\end{proof}
The upper bound automatically leads to a construction algorithm. First find the prime factorization of $n$ in $n^{o(1)}$ time, then compute the desired $B$ in $n^{o(1)}$ time, and then cover each $\bigcup_{d|n/b, d\leq s}\Z_{n/b,d}$ using $\mathcal{S}_\ell(\Z_{n,b})$ for $b\in B$. If we use the linear-time greedy algorithm for set cover, the running time becomes $O(n\ell)$ \cite{Koili.X2019}.
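An end-to-end toy implementation of this pipeline could look as follows (brute force, illustration only; the stated $O(n\ell)$ bound requires the linear-time greedy of \cite{Koili.X2019} and the careful choices of $B$ described above, whereas here $B$ may simply be all divisors of $n$).
\begin{verbatim}
from math import gcd

def ell_covering(n, ell, B):
    # B: divisors of n with D_n^{<=s} (.) B = D_n; all divisors always work
    cover, covered = [], {0}
    for b in sorted(B):
        slopes = [b * u for u in range(1, n // b) if gcd(u, n // b) == 1]
        segs = {i: {(i * x) % n for x in range(ell + 1)} for i in slopes}
        target = set().union(*segs.values()) - covered if segs else set()
        while target:
            best = max(segs, key=lambda i: len(segs[i] & target))
            cover.append(best)
            covered |= segs[best]
            target -= segs[best]
    assert covered == set(range(n))
    return cover

B = [d for d in range(1, 61) if 60 % d == 0]
print(len(ell_covering(60, 6, B)))   # size of the cover found
\end{verbatim}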
One can use a randomized constructive version of \cref{thm:bettergreedy}. The following result can be proven easily through set cover LP and randomized rounding.
\begin{theorem}\label{thm:randomizedrounding}
Let there be $t$ sets, each of size at most $a$, such that each element of the size-$n$ universe is covered by at least $b$ of the sets. Then there exists a subcollection of $O(\frac{t}{b}\log n)$ sets that covers the universe, and it can be found with high probability by a Monte Carlo algorithm that runs in $\tilde{O}(\frac{t}{b})$ time.
\end{theorem}
\begin{proof}[Sketch]
The condition shows that the set cover LP has a feasible solution where the indicator variable of each set has value $\frac{1}{b}$. The standard randomized rounding algorithm, picking each set independently with probability $\frac{1}{b}$ in each of $\Theta(\log n)$ rounds, covers the universe with high probability \cite{Vazir.2001}. It can be simulated by sampling $\frac{t}{b}$ sets uniformly in each of the $\Theta(\log n)$ rounds instead, which can be done in $\tilde{O}(\frac{t}{b})$ time.
\end{proof}
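A sketch of this Monte Carlo procedure (our illustration; \texttt{sets} is a list of Python sets, and the final assertion holds with high probability only):
\begin{verbatim}
import math, random

def sampled_cover(universe, sets, b):
    # in each of Theta(log n) rounds, sample t/b of the sets uniformly
    t, n = len(sets), len(universe)
    rounds = 3 * max(1, math.ceil(math.log(n)))
    picked = []
    for _ in range(rounds):
        picked.extend(random.sample(sets, max(1, t // b)))
    assert set().union(*picked) >= set(universe)   # w.h.p. only
    return picked
\end{verbatim}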
The main difference between \cref{thm:randomizedrounding} and \cref{thm:bettergreedy} is the size of the cover: the randomized algorithm incurs a factor of $\log n$ instead of $\log a$. If we use more sophisticated rounding techniques, we can again obtain $\log a$ \cite{Srini.1999}, but the algorithm will not be as fast. The change to $\log n$ has a consequence for the output size: following the proof of \cref{lem:small}, there will be an extra $\log \log n$ factor in the size of the cover.
The analysis is similar to before, and we obtain the following theorem.
\begin{theorem}\label{thm:randconstruction}
An $\ell$-covering of $\Z_n$ of size $O(\frac{n}{\ell}\log n\log \log n)$ can be found in $\tilde{O}(\frac{n}{\ell})+n^{o(1)}$ time with high probability.
\end{theorem}
\subsection{Lower bound}
We remark that our upper bound is the best result obtainable through the combinatorial set covering property (\cref{thm:bettergreedy}): the $\log n$ factor cannot be avoided when $\ell = n^{\Omega(1)}$. In order to obtain a better bound, stronger \emph{number theoretical properties} have to be exploited, as was done for the case when $n$ is a prime \cite{Chen.S.W2013}.
We show that it is unlikely we can get much stronger bounds when $\ell$ is small. For infinitely many $(n,\ell)$ pairs, our bound is only a $\log \log n$ factor away from the lower bound.
\begin{theorem}
There exists a constant $c>0$ such that there are infinitely many pairs $(n,\ell)$ with
$f(n,\ell) \geq c \frac{n}{\ell} \frac{\log n}{\log \log n}$.
\end{theorem}
\begin{proof}
Let $n$ be the product of the $k$ smallest prime numbers; then $k=\Theta(\frac{\log n}{\log \log n})$. Let $\ell$ be the smallest number with $\pi(\ell) = k$. Because $\pi(\ell) = \Theta(\frac{\ell}{\log \ell})$, we know $\ell = \Theta(\log n)$.
Observe that $\phi(n,\ell)=1$: every number $\leq \ell$ except $1$ has a common factor with $n$. Hence each segment with slope in $\Z^*_n$ covers exactly one element of $\Z^*_n$, and segments with other slopes cover none. In order to cover all elements in $\Z^*_n\subset \Z_n$, the size of any $\ell$-covering is therefore at least $\phi(n) = \Omega(\frac{n}{\log \log n}) = \Omega(\frac{n}{\ell} \frac{\log n}{\log \log n})$.
\end{proof}
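As a concrete instance of this construction (illustration only), take $n=2\cdot3\cdot5\cdot7=210$ and $\ell=7$; a one-liner confirms $\phi(n,\ell)=1$:
\begin{verbatim}
from math import gcd

n, ell = 2 * 3 * 5 * 7, 7    # pi(7) = 4 = omega(n)
print([x for x in range(1, ell + 1) if gcd(x, n) == 1])   # [1]
# each segment hits Z_n^* at most once, so any ell-covering
# needs at least phi(210) = 48 segments
\end{verbatim}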
\bibliographystyle{plain}
\section{Introduction}\label{SS-Intro}
\setcounter{equation}{0}
Random network coding, introduced by Ahlswede et al.~in~\cite{ACLY00}, has proven to be a very effective tool for maximizing the information flow in
a non-coherent network with multiple sources and sinks.
The main feature of the network is that the nodes form random linear combinations of the incoming packets (vectors) and transmit the
resulting packets further to their neighboring nodes.
As a consequence, the receiver nodes (sinks) of the network will obtain linear combinations of the packets that have been injected
into the network at its sources.
While this method is very effective in disseminating the information throughout the network, it is at the same time also highly
sensitive to error propagation. Due to the linear combinations of packets that are formed and transmitted
further, a single packet that has been corrupted (through noise, erasures, or injection of wrong packets by adversaries)
may contaminate all further packets.
In order to overcome this deficiency, K{\"o}tter and Kschischang~\cite{KoKsch08} developed an algebraic approach to
random network coding by considering messages as subspaces of some fixed vector space~$\F^n$.
This nicely captures the main feature of the network flow, namely the linear combinations of the packets.
In other words, codewords are now simply subspaces of~$\F^n$, and a code is a collection of such subspaces.
Transmitting information through the network is thus reformulated in terms of transmitting subspaces.
The relevant distance measure for this setting depends on the particular type of problem to be studied, but in essence
the distance between two subspaces amounts to the codimension of their intersection: the larger the intersection, the
smaller the distance of the subspaces.
Their ground-breaking paper~\cite{KoKsch08} initiated intensive research efforts on subspace codes, see
\cite{EKW10,EtSi09,EtVa11,GPB10,KSK09,KoKu08,RoTr13,SiKsch09,TMBR13} and the references therein.
In~\cite{SiKsch09}, Silva and Kschischang derive further details on the appropriate metrics for the various networks models, while
in~\cite{KSK09}, Khaleghi et al.\ determine various bounds for the cardinality and distance of subspace codes.
Some of these bounds are improved upon by Etzion and Vardy in~\cite{EtVa11}, and the authors also present further constructions of
subspace codes.
In~\cite{EtSi09}, Etzion and Silberstein present a construction of subspace codes with large distance and cardinality which
is based on rank-metric codes as introduced and studied earlier by Gabidulin in~\cite{Gab85}.
Decoding of such rank-metric codes is investigated by Gabidulin et al.\ in~\cite{GPB10}, and the results are further applied to
subspace codes for various combinations of errors and erasures.
The paper at hand is most closely related to the references~\cite{EKW10,EtVa11,KoKu08,RoTr13,TMBR13}.
All these papers study, or touch upon, cyclic orbit codes.
These are subspace codes that arise as an orbit of a subspace in~$\F^n$ under a cyclic subgroup
of~$\GL(n,\F)$.
If the group is irreducible, the code is called an \emph{irreducible cyclic orbit code}.
In this case, the group is conjugate to a subgroup of~$\Fqns$, and by considering the subspaces in the~$\F$-vector space~$\Fqn$,
one can utilize properties of the field extension~$\Fqn$.
In finite geometry, an element in $\GL(n,\F)$ of order~$q^n-1$ is called a Singer cycle.
They thus generate irreducible cyclic subgroups of $\GL(n,\F)$, and so their corresponding subspace codes are irreducible cyclic orbit codes.
In~\cite{KoKu08}, Kohnert and Kurz make use of the field extension~$\Fqn$ along with solving linear inequalities under a
Diophantine restriction in order to find subspace codes of constant dimension with large distance.
The interesting fact is that they impose a prescribed automorphism group on the putative solutions in order to reduce the system
of inequalities significantly, thus making the search feasible.
Using the cyclic group~$\F_{q^n}^*$, their method results in the union of cyclic orbit codes.
In \cite[Sec.~5]{KoKu08} the authors present their results for length $n=6,\ldots,14$, dimension $k=3$, and distance $\ds=4$.
Some of these codes improve upon the lower bounds for the cardinality that were known at that time, and
most of these codes are still the best ones known for given length, dimension, and distance.
In~\cite{EKW10}, Elsenhans et al. present a decoding algorithm for cyclic orbit codes of dimension~$3$.
In~\cite{EtVa11}, Etzion and Vardy introduce the notion of a \emph{cyclic subspace code}.
In our terminology this is a collection of subspaces in $\F^n=\F_{q^n}$ that is invariant under multiplication by a primitive
element~$\alpha$ of~$\F_{q^n}$.
In other words, a cyclic subspace code is a union of cyclic orbit codes, potentially of different dimensions.
In~\cite[Sec.~III]{EtVa11} they present optimal cyclic codes of length~$8$ and~$9$ in the sense that there is no larger
cyclic code of the same length and distance.
Their example of length~$9$ improves upon the code of length~$9$, dimension~$3$, and distance~$4$
given by Kohnert and Kurz~\cite{KoKu08} because they were able to add a cyclic spread code
(hence cardinality $(2^9-1)/(2^3-1)=73$) to a collection of~$11$ cyclic orbit codes of cardinality~$2^9-1=511$.
This leads to an optimal cyclic constant dimension code of cardinality~$5694$.
This cardinality comes remarkably close to the bound $\cA_2(9,4,3)\leq 6205$, resulting from the Anticode bound, see~\cite[Thm.~1]{EtVa11}.
The above mentioned codes were all found by computer search based on
the deliberate choice of $\F_{q^n}^*$ as the automorphism group in order to make the search feasible.
Despite this restriction on the search, the codes found all come quite close to known bounds.
This indicates that cyclic orbit codes form a powerful class of constant dimension codes that needs to be investigated further.
Rosenthal and Trautmann~\cite{RoTr13} and Trautmann et al.~\cite{TMBR13} present an algebraic treatment of cyclic orbit codes
by combining the ideas of~\cite{KoKu08} with detailed methods from the theory of group actions.
They also extend their results to orbits under reducible cyclic groups.
We close this brief overview of the literature by mentioning that cyclic orbit codes also play a crucial role in the construction
of $q$-analogs of Steiner systems; for details we refer to~\cite{BEOVW13,EtVa11s} and the references therein.
In this paper we will study cyclic orbit codes generated by subspaces of~$\Fqn$ by
specifying the largest subfield of~$\Fqn$ over which the given subspace is a vector space.
This subfield, later called the \emph{best friend} of the code or subspace, is closely related to the stabilizer of the orbit.
Designing a subspace with a pre-specified best friend allows us to control cardinality and distance of the
orbit code.
In particular, we can give estimates on the distance in terms of the best friend.
Moreover, using the best friend we are able to compute the distance with the aid of multisets.
The computation improves upon earlier results in~\cite{KoKu08,RoTr13} by reducing the cardinality of the multiset.
Finally, we will present a construction that allows us to link cyclic orbit codes leading to longer and larger constant
dimension codes without compromising the distance.
\section{Preliminaries}\label{SS-Prelim}
We fix a finite field~$\F=\F_q$.
Recall that a \emph{subspace code of length~$n$} is simply a collection of subspaces in~$\F^n$.
The code is called a \emph{constant dimension code} if all subspaces have the same dimension.
The \emph{subspace distance} of a subspace code~$\cC$ is defined as
\[
\ds(\cC):=\min\{\ds(\cV,\cW)\mid \cV,\,\cW\in\cC,\,\cV\neq\cW\},
\]
where the distance between two subspaces is
\begin{equation}\label{e-dist}
\ds(\cV,\cW):=\dim\cV+\dim\cW-2\dim(\cV\cap\cW).
\end{equation}
This distance may be interpreted as the number of insertions and deletions of vectors that is needed in order to transform
a basis of~$\cV$ into a basis of~$\cW$.
It thus coincides with the corresponding graph distance (the length of the shortest path
from~$\cV$ to~$\cW$ in the graph with vertices being the subspaces of~$\F^n$ and where two subspaces are joined by an
edge if they differ by dimension one and the smaller one is contained in the larger one).
If $\dim\cV=\dim\cW=k$, then $\ds(\cV,\cW)=2(k-\dim(\cV\cap\cW))=2\big(\dim(\cV+\cW)-k\big)$.
As a consequence, if~$\cC$ is a constant dimension code of dimension~$k$ then
\begin{equation}\label{e-distC}
\ds(\cC)\leq\min\{2k,\,2(n-k)\}.
\end{equation}
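Over $\F_2$, for example, the subspace distance~\eqref{e-dist} can be computed from ranks alone, since $\dim(\cV\cap\cW)=\dim\cV+\dim\cW-\dim(\cV+\cW)$. A small Python sketch (our own illustration, encoding vectors of $\F_2^n$ as integer bitmasks):
\begin{verbatim}
def rank_gf2(vectors):
    # Gaussian elimination over F_2; vectors are int bitmasks
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)   # clear v's bit at b's leading position
        if v:
            basis.append(v)
    return len(basis)

def subspace_distance(V, W):
    # d_S(V, W) = 2 dim(V + W) - dim V - dim W
    return 2 * rank_gf2(V + W) - rank_gf2(V) - rank_gf2(W)

V = [0b100, 0b010]              # span{e1, e2} in F_2^3
W = [0b100, 0b001]              # span{e1, e3}
print(subspace_distance(V, W))  # 2
\end{verbatim}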
The \emph{dual} of a subspace code $\cC$ is defined as
\begin{equation}\label{e-Cdual}
\cC^{\perp}:=\{\cU^\perp\mid \cU\in\cC\}.
\end{equation}
It is easy to see that $\ds(\cV^\perp,\cW^\perp)=\ds(\cV,\cW)$, and therefore $\ds(\cC)=\ds(\cC^\perp)$.
Two subspace codes~$\cC,\,\cC'$ of length~$n$ are called \emph{linearly isometric} if there exists an $\F$-linear isomorphism
$\psi:\F^n\longrightarrow\F^n$ such that $\cC'=\{\psi(\cU)\mid \cU\in\cC\}$.
This terminology stems from the fact that isomorphisms preserve dimensions of subspaces and thus preserve the distance between
any two subspaces.
Hence linearly isometric codes have the same subspace distance and even the same distance distribution, i.e., the list of all
distances between any two distinct subspaces in~$\cC$ coincides up to order with the corresponding list of~$\cC'$.
In \cite[Def.~9]{TMBR13} linear isometries are denoted as $\GL_n(\F)$-isometries.
Consider now the field extension~$\F_{q^n}$.
Since the $\F$-vector spaces $\F^n$ and $\F_{q^n}$ are isomorphic, we may consider subspace codes as collections of
subspaces in~$\F_{q^n}$.
In this paper we will focus specifically on subspace codes that are given as orbits under a particular group action.
In order to make this precise, we fix the following terminology.
An element~$\beta$ of~$\Fqn$ is called \emph{irreducible} if the minimal polynomial of~$\beta$ in~$\F[x]$, denoted by $\mipo(\beta,\F)$,
has degree~$n$.
Hence $\Fqn=\F[\beta]$.
As usual, we call~$\beta$ (and its minimal polynomial) \emph{primitive} if the multiplicative cyclic group generated by~$\beta$, denoted by
$\ideal{\beta}$, equals $\Fqns$.
We will study subspace codes that are derived from the natural action of the group $\ideal{\beta}$ on~$\Fqn$.
This action induces an action on the set of subspaces of~$\Fqn$, and thus gives rise to the following
type of constant dimension codes.
These codes were introduced in a slightly different form in~\cite{RoTr13,TMBR13};
we will comment on the relation to~\cite{RoTr13,TMBR13} after the definition.
\begin{defi}\label{D-OrbU}
Fix an irreducible element~$\beta$ of~$\Fqn$.
Let~$\cU$ be a $k$-dimensional subspace of the $\F$-vector space~$\Fqn$.
The \emph{cyclic orbit code} generated by~$\cU$ with respect to the group~$\ideal{\beta}\subseteq\Fqns$
is defined as the set
\begin{equation}\label{e-orbbU}
\orbb(\cU):=\{\cU\beta^i\mid i=0,1,\ldots,|\beta|-1\}.
\end{equation}
The code $\orbb(\cU)$ is called \emph{primitive} if~$\beta$ is primitive.
\end{defi}
Obviously, a cyclic orbit code is a constant dimension code.
Let us briefly relate our approach to~\cite{EtVa11,RoTr13,TMBR13}.
Let $f=x^n+\sum_{i=0}^{n-1}f_i x^i\in\F[x]$ be the minimal polynomial of~$\beta$.
The~$\F$-vector spaces $\Fqn$ and $\F^n$ are isomorphic via the coordinate map with respect to the basis
$1,\,\beta,\,\ldots,\,\beta^{n-1}$.
In other words, we have the $\F$-isomorphism
\begin{equation}\label{e-FFn}
\varphi:\Fqn\longrightarrow \F^n,\quad \sum_{i=0}^{n-1}a_i\beta^i\longmapsto (a_0,\ldots,a_{n-1}).
\end{equation}
Let~$M_f\in\GL_n(\F)$ be the companion matrix of~$f$, thus\footnote{Due to row vector notation, our companion
matrix is the transpose of the classical companion matrix.}
\begin{equation}\label{e-Mf}
M_f=\begin{pmatrix} 0&1& & & \\ & &1& & \\ & & &\ddots & \\ & & & &1\\
-f_0&-f_1&-f_2& \ldots& -f_{n-1}\end{pmatrix}.
\end{equation}
Since~$f$ is the minimal polynomial of~$\beta$, multiplication by~$\beta$ in~$\Fqn$ corresponds to multiplication by~$M_f$
in~$\F^n$ under the isomorphism~$\varphi$, i.e.,
\begin{equation}\label{e-betaM}
\varphi(a\beta^i)=\varphi(a)M_f^i\text{ for all $a\in\Fqn$ and $i\in\N$.}
\end{equation}
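As a small numerical illustration of~\eqref{e-betaM} (our own sketch, not part of the theory): over $\F_2$ with $f=x^4+x+1$, multiplying a coordinate vector by~$M_f$ agrees with multiplication by~$\beta$ in~$\F_{2^4}$.
\begin{verbatim}
def companion(f_low):
    # companion matrix (row-vector convention) of a monic f over F_2,
    # from the low-order coefficients f_low = [f_0, ..., f_{n-1}]
    n = len(f_low)
    M = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        M[i][i + 1] = 1
    M[n - 1] = list(f_low)           # over F_2, -f_i = f_i
    return M

def vecmat(v, M):
    n = len(v)
    return [sum(v[i] * M[i][j] for i in range(n)) % 2 for j in range(n)]

def times_beta(a, f_low):
    # multiply the coefficient vector a by x, then reduce modulo f
    carry, a = a[-1], [0] + a[:-1]
    return [(x + carry * c) % 2 for x, c in zip(a, f_low)]

f_low = [1, 1, 0, 0]                 # f = x^4 + x + 1
a = [0, 0, 0, 1]                     # a = beta^3
print(times_beta(a, f_low))          # [1, 1, 0, 0]: beta^4 = 1 + beta
print(vecmat(a, companion(f_low)))   # the same vector
\end{verbatim}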
Using the isomorphism~\eqref{e-FFn} of~$\Fqn$ with~$\F^n$, the orbit code~$\cC:=\orbb(\cU)$ takes the following form.
We can write~$\varphi(\cU)$ as $\varphi(\cU)=\im U:=\{xU\mid x\in\F^k\}$, i.e., the rowspace of~$U$, for a suitable matrix
$U\in\F^{k\times n}$ of rank~$k$.
Then $\varphi(\cU\beta^i)=\im(U M_f^i)$, where~$M_f$ is as in~\eqref{e-Mf}.
Thus, under the isomorphism~\eqref{e-FFn} the orbit code~$\orbb(\cU)$ simply becomes
\begin{equation}\label{e-UMf}
\{\im(U M_f^i)\mid 0\leq i\leq|\beta|-1\}.
\end{equation}
In other words, the action of the cyclic group~$\ideal{\beta}\leq\Fqns$ on subspaces in~$\Fqn$ turns into the action of
the cyclic group $\ideal{M_f}\leq \GL_n(\F)$ on subspaces in~$\F^n$.
In~\cite{RoTr13,TMBR13} the authors introduce, more generally, orbit codes in~$\F^n$ with respect to a subgroup of~$\GL_n(\F)$.
These are subspace codes of the form $\{\im(U A)\mid A\in\cG\}$,
where~$U$ is any matrix of rank~$k$ in $\F^{k\times n}$ and $\cG$ a subgroup of~$\GL_n(\F)$.
The orbit code is called \emph{cyclic} if the group~$\cG$ is cyclic and \emph{irreducible} if~$\cG$ is irreducible, i.e.,
it does not have any nontrivial invariant subspaces in~$\F^n$.
It is easy to see that the cyclic group~$\ideal{M_f}\leq \GL_n(\F)$ is irreducible whenever~$f$ is an irreducible polynomial.
Hence the orbit codes in Definition~\ref{D-OrbU} are irreducible cyclic
orbit codes in the sense of~\cite{RoTr13,TMBR13}.
Furthermore, every irreducible matrix~$A\in\GL_n(\F)$ has an irreducible
characteristic polynomial, say~$g\in\F[x]$, and~$A$ is similar to the companion matrix~$M_g$; see also~\cite[Sec.~IV.A]{TMBR13}.
Thus $A=SM_gS^{-1}$ for some $S\in\GL_n(\F)$, and the irreducible cyclic subgroup~$\cG=\ideal{A}$ of~$\GL_n(\F)$
is conjugate to the cyclic subgroup~$\ideal{M_g}$.
As a consequence, the isomorphism of~$\F^n$ induced by~$S$ yields a linear isometry between
$\{\im(UA^i)\mid i\in\N\}$ and $\{\im(VM_g^i)\mid i\in\N\}$, where $V=U S$; see also~\cite[Thm.~9]{RoTr13}.
All of this shows that the study of irreducible cyclic orbit codes may be
restricted to orbit codes with respect to irreducible cyclic subgroups of~$\Fqns$, and these
are exactly the codes in Definition~\ref{D-OrbU}.
In this context, matrices of order~$q^n-1$ in the group~$\GL_n(\F)$ are also called \emph{Singer cycles}.
They thus correspond to the primitive elements of~$\Fqn$.
In~\cite[p.~1170]{EtVa11}, the authors introduce \emph{cyclic subspace codes} in~$\Fqn$.
These are codes that are invariant under multiplication by a primitive element, say~$\alpha$, of~$\Fqn$.
In other words, a cyclic subspace code is a union of primitive cyclic orbit codes, i.e.,
$\cC=\bigcup_{t=1}^T\hspace*{-2em}\raisebox{.4ex}{$\cdot$}\hspace*{2em} \orb_{\alpha}(\cU_t)$.
In~\cite{EtVa11} the authors do not require that~$\cC$ be a constant dimension code, hence $\cU_1,\ldots,\cU_T$ may have different
dimensions.
We close this section with the following simple fact.
\begin{rem}\label{R-Cperp}
The dual of an orbit code (in the sense of~\eqref{e-Cdual}) is an orbit code again.
Indeed, for any subspace $\cU\in\F^n$ and matrix $A\in\GL_n(\F)$ we have $(\cU A)^\perp=\cU^{\perp}(A^{\sf T})^{-1}$.
Moreover, $A^{\sf T}=SAS^{-1}$ for some $S\in\GL_n(\F)$, and therefore
$\orbb(\cU)^{\perp}$ is linearly isometric to $\orbb(\cU^{\perp})$; see also \cite[Thm.~18]{TMBR13}.
\end{rem}
As a consequence, we may and will restrict ourselves to cyclic orbit codes generated by a subspace~$\cU$ with $\dim\cU\leq n/2$.
\section{Stabilizer Subfield and Cardinality of Cyclic Orbit Codes}\label{SS-Basics}
Throughout this section we fix an irreducible element~$\beta$ of~$\Fqn$.
Consider a $k$-dimensional subspace~$\cU$ of~$\Fqn$ and its orbit code $\orbb(\cU)$.
In the following, we will mainly restrict ourselves to subspaces~$\cU$ that contain the identity~$1\in\Fqn$.
This will facilitate later considerations of the cardinality of the orbit code.
The restriction is not at all severe because if~$1\not\in\cU$ then for any nonzero element $u\in\cU$
the subspace $\tilde{\cU}:=\cU u^{-1}$ contains~$1$.
Since the $\F$-isomorphisms on~$\Fqn$ given by multiplication with nonzero constants commute,
$\tilde{\cU}\beta^i=\cU\beta^i u^{-1}$, and therefore multiplication by~$u^{-1}$ provides a linear isometry between
$\orbb(\cU)$ and~$\orbb(\tilde{\cU})$.
Note also that if $u^{-1}\in\ideal{\beta}$, e.g., if~$\beta$ is primitive, then $\orbb(\cU)$ and~$\orbb(\tilde{\cU})$ are equal.
Recall that the stabilizer of the subspace~$\cU$ under the action induced by~$\ideal{\beta}$ is defined as
\begin{equation}\label{e-stabU}
\stabb(\cU):=\{\gamma\in\ideal{\beta}\mid \cU\gamma=\cU\}=\{\gamma\in\ideal{\beta}\mid \cU\gamma\subseteq\cU\}.
\end{equation}
This is clearly a subgroup of~$\ideal{\beta}$.
Let~$N\in\N$ be the minimal integer such that $\stabb(\cU) = \ideal{\beta^N}$.
Then $N$ is a divisor of $|\beta|$ and with the aid of the orbit-stabilizer theorem from group actions we have
\begin{equation}\label{e-N}
|\stabb(\cU)|=\frac{|\beta|}{N},\quad
\orbb(\cU)=\{\cU\beta^i\mid i=0,\ldots,N-1\},\quad
|\orbb(\cU)|=N.
\end{equation}
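These quantities are easy to observe experimentally. The following Python sketch (an illustration under our own encoding of $\F_{2^4}=\F_2[x]/(x^4+x+1)$, with elements as integer bitmasks) computes the orbit length~$N$ of the subfield $\F_4\subset\F_{2^4}$ under a primitive element~$\alpha$, recovering $N=(2^4-1)/(2^2-1)=5$.
\begin{verbatim}
def gf_mul(a, b, f=0b10011, n=4):
    # multiplication in F_{2^n} = F_2[x]/(f); elements are bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= f
    return r

def orbit(U, beta):
    # the orbit {U, U*beta, U*beta^2, ...} of a subspace U
    orb, V = [], frozenset(U)
    while True:
        orb.append(V)
        V = frozenset(gf_mul(v, beta) for v in V)
        if V == orb[0]:
            return orb

alpha = 0b10                      # the class of x, primitive in F_16
F4 = {0b0, 0b1, 0b110, 0b111}     # the subfield F_4 inside F_16
print(len(orbit(F4, alpha)))      # 5 = (2^4 - 1)/(2^2 - 1)
\end{verbatim}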
We define the following subfield related to the stabilizer.
\begin{defi}\label{D-StabField}
Let $\stabbp(\cU)$ be the smallest subfield of~$\Fqn$ containing~$\F$ and the group $\stabb(\cU)$.
\end{defi}
It is clear that $\stabbp(\cU)$ is the field extension $\F[\beta^N]$, where $\ideal{\beta^N}=\stabb(\cU)$.
From this it follows immediately that~$\cU$ is a vector space over~$\stabbp(\cU)$.
We wish to briefly comment on the relation to representation theory.
\begin{rem}\label{R-ReprTh}
Let $G=\ideal{\gamma}=\stabb(\cU)$.
As we have already observed, $\stabbp(\cU)=\F[\gamma]$.
Via multiplication in~$\Fqn$, the group~$G$ has a natural $\F$-representation in $\GL(\cU)$, where $\GL(\cU)$
denotes the group of $\F$-automorphisms of~$\cU$.
As always for group representations, this naturally induces an $\F[G]$-module structure on~$\cU$.
In this case, this is the same as the canonical $\F[\gamma]$-vector space structure --
as we have already observed in the previous paragraph.
As a consequence, an $\F$-linear map~$f$ on~$\cU$ is $G$-linear (i.e., $f(gu)=gf(u)$ for all $g\in G$ and $u\in\cU$),
if and only if it is $\F[\gamma]$-linear.
In other words, the intertwining algebra of the above representation of~$G$ is given by
$\text{End}_{\F[\gamma]}(\cU)$.
Irreducibility of the representation (i.e.~$\cU$ does not have any non-trivial $G$-invariant subspaces)
is the same as saying that~$\cU$ does not have any non-trivial $\F[\gamma]$-subspaces.
Therefore the above representation is irreducible if and only if $\dim_{\F[\gamma]}(\cU)=1$.
\end{rem}
We fix the following notation.
For a primitive element~$\alpha$ (thus $\ideal{\alpha}=\Fqns$), we drop the subscript~$\alpha$ and
simply write $\orb(\cU),\,\stab(\cU)$ and $\stabp(\cU)$.
The identities in~\eqref{e-stabU} and~\eqref{e-N} then read as
\begin{equation}\label{e-primnota}
\stab(\cU)=\{\gamma\in\Fqns\mid \cU\gamma=\cU\},\ \orb(\cU)=\{\cU\alpha^i\mid i=0,\ldots,L-1\},
\text{ where }L=\frac{q^n-1}{|\stab(\cU)|}.
\end{equation}
In this particular case, we have the following result.
\begin{lemma}\label{L-Stabplus}
Let~$\cU$ be a subspace of~$\Fqn$ such that $1\in\cU$.
Then $\stabp(\cU)=\stab(\cU)\cup\{0\}$ and $\stabp(\cU)$ is contained in~$\cU$.
Moreover,~$\cU$ is a vector space over $\stabp(\cU)$ with scalar multiplication being the multiplication
of the field~$\Fqn$.
\end{lemma}
\begin{proof}
We know that $\stab(\cU)=\{\gamma\in\Fqns\mid \cU\gamma=\cU\}$ is a subgroup of $\Fqns$ and
contains~$\F^*$.
Thus it remains to show that $\stab(\cU)\cup\{0\}$ is closed under addition.
Let $\gamma,\,\gamma' \in \stab(\cU)$, i.e., $\cU\gamma=\cU=\cU\gamma'$.
If $\gamma+\gamma' = 0$, then $\gamma +\gamma' \in \stab(\cU)\cup\{0\}$, and we are done.
Now let $\gamma+\gamma'\neq0$.
Then $\cU(\gamma+\gamma') \subseteq \cU\gamma+\cU\gamma'=\cU+\cU = \cU$, so $\gamma + \gamma' \in \stab(\cU)$.
All of this shows that $\stab(\cU)\cup\{0\}\subset\Fqn$ is closed under multiplication and addition,
making it a subfield, and in fact the smallest subfield containing $\stab(\cU)$.
Since $1\in\cU$, the stabilizer property shows immediately that $\stabp(\cU)$ is contained in~$\cU$ and
that~$\cU$ is a vector space over $\stabp(\cU)$ (see also Remark~\ref{R-ReprTh}).
\end{proof}
In the case where~$n$ is prime, the only proper subfield of~$\Fqn$ is~$\F$.
Thus, we have the following result.
\begin{cor}\label{C-nprime}
If~$n$ is prime, then $\stab(\cU)=\F^*$, and thus $|\orb(\cU)|=\frac{q^n-1}{q-1}$ for every nonzero proper subspace $\cU\subset\Fqn$.
\end{cor}
Let us now return to the general case where~$\beta$ is irreducible.
The containments $\stabb(\cU)\subseteq\stab(\cU)\subseteq\stabp(\cU)$ lead immediately to the following
situation for the non-primitive case.
\begin{cor}\label{C-Stabbetaplus}
For any irreducible~$\beta$ and any subspace~$\cU$ of~$\Fqn$ with $1\in\cU$, the subfield $\stabbp(\cU)$ is contained in~$\stabp(\cU)$.
Hence it is contained in~$\cU$ and~$\cU$ is a vector space over $\stabbp(\cU)$.
\end{cor}
The following example shows that the containment $\stabbp(\cU)\subseteq\stabp(\cU)$ may be strict.
\begin{exa}\label{E-F81}
Consider $\F=\F_3$ and $\Fqn=\F_{3^4}$.
Fix the primitive element~$\alpha$ with minimal polynomial $x^4+x+2$.
Consider $\beta:=\alpha^{16}$, which has order~$5$.
Then~$\beta$ is an irreducible element of~$\F_{3^4}$ because its minimal polynomial is found to be
$\mipo(\beta,\F)=x^4+x^3+x^2+x+1$.
Let~$\cU$ be the subfield~$\F_{3^2}$ (considered as a subspace of~$\F_{3^4}$).
Then clearly $\stabp(\cU)=\F_{3^2}$.
Moreover, since $1\in\cU$, any~$\gamma$ satisfying $\cU\gamma=\cU$ is already in~$\cU$.
But then the relative primeness of the orders of the groups~$\ideal{\beta}$ and $\F_{3^2}^*$ shows that
$\stabb(\cU)=\{1\}$.
As a consequence, $\stabbp(\cU)=\F_3$.
Thus we see that $\stabbp(\cU)\subsetneq\stabp(\cU)$.
\end{exa}
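The computations in this example are easily machine-checked.
The following Python sketch is not part of the formal development: the realization of~$\F_{3^4}$ as $\F_3[x]/(x^4+x+2)$ and all names in the code are our own choices.
It confirms that $\beta=\alpha^{16}$ has order~$5$, that~$\beta$ is a root of $x^4+x^3+x^2+x+1$, and that no nontrivial power of~$\beta$ stabilizes $\cU=\F_{3^2}$.
\begin{verbatim}
# Verification of Example E-F81: F_{3^4} realized as F_3[x]/(x^4+x+2),
# elements stored as coefficient tuples (c0,c1,c2,c3).
MOD = 3

def mul(a, b):
    prod = [0]*7
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i+j] = (prod[i+j] + ai*bj) % MOD
    for k in range(6, 3, -1):            # reduce with x^4 = 2x + 1 over F_3
        c, prod[k] = prod[k], 0
        prod[k-4] = (prod[k-4] + c) % MOD
        prod[k-3] = (prod[k-3] + 2*c) % MOD
    return tuple(prod[:4])

ONE, alpha = (1, 0, 0, 0), (0, 1, 0, 0)

def power(a, e):
    r = ONE
    for _ in range(e):
        r = mul(r, a)
    return r

assert min(e for e in range(1, 81) if power(alpha, e) == ONE) == 80
beta = power(alpha, 16)
assert power(beta, 5) == ONE and beta != ONE     # |beta| = 5
s = ONE                                          # sum of 1, beta, ..., beta^4
for e in range(1, 5):
    b = power(beta, e)
    s = tuple((s[i] + b[i]) % MOD for i in range(4))
assert s == (0, 0, 0, 0)                # beta is a root of x^4+x^3+x^2+x+1
# U = F_9 = {0} u <alpha^10>; only beta^0 stabilizes U:
U = {(0, 0, 0, 0)} | {power(alpha, 10*i) for i in range(8)}
assert [i for i in range(5) if {mul(u, power(beta, i)) for u in U} == U] == [0]
\end{verbatim}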
Let us briefly consider an extreme case.
Since $\F_q^*$ is always contained in $\stab(\cU)$, we conclude that $|\stab(\cU)|\geq q-1$.
Using~\eqref{e-N} above, this leads to $|\orb(\cU)|=N\leq \frac{q^n-1}{q-1}$.
As a consequence, $N=q^n-1$ is possible only if $q=2$.
This also follows from Lemma~\ref{L-Stabplus}, because if $N=q^n-1$, then $\ideal{\alpha^N}=\{1\}$ and thus
$\stabp(\cU)=\{0,1\}=\F_2$.
\begin{prop}\label{P-Fqk}
Let~$\cU$ be a $k$-dimensional subspace of~$\Fqn$ such that $1\in\cU$.
Then
\[
\frac{|\beta|}{\gcd(|\beta|,q^k-1)}\ \text{ divides }\ |\orbb(\cU)|.
\]
Assume now that~$k$ divides~$n$, and thus~$\F_{q^k}$ is a subfield of~$\Fqn$.
If~$\F_{q^k}^*\subseteq\ideal{\beta}$ then $\frac{|\beta|}{q^k-1}$ divides $|\orbb(\cU)|$.
Furthermore, $|\orbb(\cU)|= \frac{|\beta|}{q^k-1}$ if and only if $\cU=\F_{q^k}$.
Finally we have $\ds\big(\orbb(\F_{q^k})\big)=2k$.
\end{prop}
\begin{proof}
From Corollary~\ref{C-Stabbetaplus} we know that $\stabbp(\cU)=\F_{q^r}$ for some~$r$ and that~$\cU$
is a vector space over~$\F_{q^r}$.
Thus~$r$ divides~$k$ and so $q^r-1$ divides $q^k-1$.
Moreover, since $\stabb(\cU)$ is a subgroup of~$\F_{q^r}^*\cap\ideal{\beta}$ its order divides
$q^r-1$ as well as~$|\beta|$.
All of this shows that $|\stabb(\cU)|$ divides $\gcd(|\beta|,q^k-1)$, and now the first statement follows from
the identities in~\eqref{e-N}.
For the second part, note first that by assumption, $q^k-1$ divides $|\beta|$.
Thus the first statement is just a special case of the previous part.
For the rest, set $D:= \frac{|\beta|}{q^k-1}$.
\\
\noindent"$\Rightarrow$"
With the notation as in~\eqref{e-N}, we have $D=N$.
Since $|\beta^N|=\frac{|\beta|}{\gcd(N,|\beta|)}=\frac{|\beta|}{N}=q^k-1$, the uniqueness of subgroups of a cyclic group gives us
$\ideal{\beta^N}=\F_{q^k}^*$.
Now the fact that $\ideal{\beta^N}=\stabb(\cU)$ together with Corollary~\ref{C-Stabbetaplus} implies $\stabbp(\cU)=\F_{q^k}\subseteq \cU$.
Thus $\F_{q^k}= \cU$ due to dimension.
\\
\noindent"$\Leftarrow$" Let $u\in\F_{q^k}^*$.
Then $(u\beta^D)^{q^k-1} = u^{q^k-1}\beta^{D\cdot (q^k-1)}=1\cdot 1=1$.
Since the nonzero elements of~$\F_{q^k}$ are exactly the roots of $x^{q^k-1}-1$ in~$\Fqn$,
we obtain $\F_{q^k}\beta^D=\F_{q^k}$.
Hence $|\orbb(\F_{q^k})|\leq D$.
Let $0\leq i< j< D$ and let $\gamma \in \F_{q^k}\beta^i \cap \F_{q^k}\beta^j$ with $\gamma \neq 0$.
Then $\gamma = \gamma_i\beta^i = \gamma_j\beta^j$, for some $\gamma_i, \gamma_j \in \F_{q^k}^*$.
But then $\beta^{j-i}=\gamma_i\gamma_j^{-1}\in\F_{q^k}^*$.
So $j-i\equiv 0$ mod $D$, which is impossible.
Thus $\F_{q^k}\beta^i \cap \F_{q^k}\beta^j = \{0\}$.
This shows that $|\orbb(\F_{q^k})| = D$, and using~\eqref{e-dist} we also see that $\ds\big(\orbb(\F_{q^k})\big)=2k$.
\end{proof}
We have the following special case of the previous result.
Recall the notation from~\eqref{e-primnota}.
\begin{cor}\label{C-Fqk}
Let~$\cU$ be a $k$-dimensional subspace of~$\Fqn$ such that $1\in\cU$. Then
\[
|\orb(\cU)| = \frac{q^n-1}{q^k-1} \Longleftrightarrow \cU = \F_{q^k}.
\]
Furthermore, $\ds\big(\orb(\F_{q^k})\big)=2k$.
\end{cor}
\begin{rem}\label{R-Spread}
A subspace code~$\cC$ is called a \emph{spread} of~$\Fqn$ if $\cup_{\cV\in\cC}\cV=\Fqn$ and
$\cV\cap\cW=\{0\}$ for all distinct $\cV,\,\cW\in\cC$.
If only $\cV\cap\cW=\{0\}$ is true for all distinct $\cV,\,\cW\in\cC$, then~$\cC$ is called a \emph{partial spread}.
The previous result shows that $\orb(\F_{q^k})$ is a $k$-dimensional spread, and
$\orbb(\F_{q^k})$ is a partial spread for any irreducible element~$\beta$.
This result is also found in \cite[Thm.~11, Cor.~12]{RoTr13}.
\end{rem}
Let us return to Lemma~\ref{L-Stabplus}.
The following notion will arise repeatedly, so we introduce terminology for convenience.\footnote{
The best friend as in Definition~\ref{D-friend} is what is called the stabilizer subfield in the title of this paper.
However, since the terminology `friend' will also be needed frequently, we prefer `best friend' over the
more technical `stabilizer subfield'.}
\begin{defi}\label{D-friend}
Let~$\cU$ be a subspace of~$\Fqn$.
A subfield~$\F_{q^r}$ of $\Fqn$ is called a \emph{friend} of~$\cU$ if~$\cU$ is a
vector space over~$\F_{q^r}$ with scalar multiplication being the multiplication in the field $\F_{q^n}$.
The largest friend of~$\cU$ (with respect to cardinality) is called the \emph{best friend} of~$\cU$.
\end{defi}
Note that since~$\cU$ is a subspace of the $\F$-vector space~$\Fqn$, the field~$\F$ is a friend of~$\cU$,
and thus~$\cU$ also has a best friend.
\begin{rem}\label{R-BF}
For any subspace~$\cU$ of~$\Fqn$ and any friend~$\F_{q^r}$ of~$\cU$ we have $1\in\cU\Longleftrightarrow \F_{q^r}\subseteq\cU$.
\end{rem}
\begin{prop}\label{T-StabBF}
Let~$\cU$ be a $k$-dimensional subspace of~$\Fqn$ with $1\in\cU$.
Then the subfield $\stabp(\cU)$ is the best friend of~$\cU$.
Furthermore, any friend of~$\cU$ is contained in the best friend.
\end{prop}
\begin{proof}
We know from Lemma~\ref{L-Stabplus} that $\stabp(\cU)$ is a friend of~$\cU$.
Moreover, if~$\F_{q^l}$ is a friend of~$\cU$, then $\cU\gamma=\cU$ for all $\gamma\in\F_{q^l}^*$ by closure
of the scalar multiplication.
This implies $\F_{q^l}^*\subseteq\stab(\cU)$, hence $\F_{q^l}\subseteq\stabp(\cU)$.
\end{proof}
It is a consequence of the last result that all subspaces in~$\orb(\cU)$ have the same best friend, say~$\F_{q^r}$, and we
may therefore call~$\F_{q^r}$ the \emph{best friend of the orbit code}.
Example~\ref{E-F81} shows that we do not have an analogous characterization for $\stabbp(\cU)$, when~$\beta$ is not primitive.
The identities in~\eqref{e-N} now read as follows.
\begin{cor}\label{C-cardinality}
Let $\F_{q^r}$ be the best friend of $\cU$. Then
\[
|\orb(\cU)|=\frac{q^n-1}{q^r-1}\ \text{ and }\ |\stab(\cU)|=q^r-1.
\]
\end{cor}
This result facilitates the design of orbit codes with a prescribed cardinality.
In the following we use $\dim_{\F_{q^l}}(\cU)$ for the dimension of a vector space~$\cU$ over the field~$\F_{q^l}$.
We also set $\dim\cU:=\dim_{\F}\cU$.
\begin{exa}\label{E-EtVa1}
The following subspace~$\cU$ is taken from \cite[Ex.~1]{EtVa11}, where the distance and cardinality of the resulting
orbit code have been determined by straightforward testing and enumeration.
Consider $\F=\F_2$ and the field $\F_{2^6}$ with primitive element $\alpha$ having minimal polynomial $x^6+x+1\in\F[x]$.
Let $\cU:=\{0,\alpha^0,\alpha^1,\alpha^4,\alpha^6,\,\alpha^{16},\alpha^{24},\alpha^{33}\}$.
It is straightforward to check that this is a vector space over~$\F$ (generated by, for instance, $\{1,\alpha,\alpha^4\}$).
Using the isomorphism $\varphi:\sum_{i=0}^5 a_i\alpha^i\longmapsto (a_0,\ldots,a_5)$ between the vector spaces~$\F_{2^6}$ and
$\F_2^6$, see~\eqref{e-FFn}, the subspace~$\varphi(\cU)$ is given by
\[
\varphi(\cU)=\im\begin{pmatrix}1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&0&0&1&0\end{pmatrix}.
\]
Since $\dim\cU=3$ it is clear that~$\cU$ is not a vector space over the subfield $\F_{2^2}$.
Furthermore,~$\cU$ does not coincide with the subfield $\F_{2^3}$ because the latter contains the element~$\alpha^9$.
All of this shows that~$\F$ is the best friend of~$\cU$ and thus $|\orb(\cU)|=2^6-1=63$ by the last corollary.
\end{exa}
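The claims of this example can likewise be verified computationally.
In the following sketch (our own encoding: elements of~$\F_{2^6}$ are $6$-bit integers, with bit~$i$ standing for the coefficient of~$\alpha^i$), the final line recovers the distance~$4$ reported in \cite[p.~1170]{EtVa11}; see also Example~\ref{E-optimal}(a) below.
\begin{verbatim}
# Verification of Example E-EtVa1: F_{2^6} = F_2[x]/(x^6+x+1) as 6-bit ints.
def mul(a, b):
    p = 0
    for i in range(6):
        if (b >> i) & 1:
            p ^= a << i
    for k in range(10, 5, -1):      # x^6 = x + 1, so x^k = x^{k-5} + x^{k-6}
        if (p >> k) & 1:
            p ^= (1 << k) ^ (1 << (k - 5)) ^ (1 << (k - 6))
    return p

pows = [1]                          # pows[e] = alpha^e
for _ in range(62):
    pows.append(mul(pows[-1], 0b10))
assert len(set(pows)) == 63         # alpha is primitive

U = frozenset(a ^ b ^ c for a in (0, 1)
              for b in (0, pows[1]) for c in (0, pows[4]))
assert U == frozenset({0} | {pows[e] for e in (0, 1, 4, 6, 16, 24, 33)})

orbit = {frozenset(mul(u, g) for u in U) for g in pows}
assert len(orbit) == 63             # best friend F_2, cf. Cor. C-cardinality
m = max(len(U & V) for V in orbit if V != U)   # largest intersection size
print(2 * (3 - (m.bit_length() - 1)))          # subspace distance: prints 4
\end{verbatim}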
\begin{exa}\label{E-EtVa2}
Consider the field $\F_{2^{12}}$ and the primitive element~$\alpha$ with minimal polynomial
$x^{12}+x^7+x^6+x^5+x^3+x+1\in\F_2[x]$.
\begin{alphalist}
\item Since $\deg(\mipo(\alpha,\F_{2^2}))=6$, it is clear that
$\cU:=\F_{2^2}+\alpha\F_{2^2}+\alpha^3\F_{2^2}$ is a direct sum, and thus~$\cU$
is a $6$-dimensional subspace of the $\F_2$-vector
space~$\F_{2^{12}}$.
Obviously $\F_{2^2}$ is a friend of~$\cU$.
Furthermore,~$\cU\neq\F_{2^6}$ because $\alpha\in\cU$, but~$\alpha\not\in\F_{2^6}$.
Along with Proposition~\ref{T-StabBF}, all of this shows that~$\F_{2^2}$ is the best friend of~$\cU$ and thus
$|\orb(\cU)|=(2^{12}-1)/(2^2-1)=1365$.
\item Similarly $\cW:=\F_{2^4}+\alpha\F_{2^2}$ is a $6$-dimensional subspace of~$\F_{2^{12}}$ with best friend $\F_{2^2}$.
Thus $|\orb(\cW)|=1365$.
\end{alphalist}
\end{exa}
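The same style of computation confirms part~(a); part~(b) is analogous.
The sketch below is again our own; it assumes, as stated above, that~$\alpha$ (the class of~$x$) is primitive for the given minimal polynomial, and the assertions check this.
\begin{verbatim}
# Verification of Example E-EtVa2(a): F_{2^12} as 12-bit integers,
# reduction polynomial x^12+x^7+x^6+x^5+x^3+x+1 (bitmask 0b1000011101011).
def mul(a, b):
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if (a >> 12) & 1:
            a ^= 0b1000011101011
    return p

pows = [1]
for _ in range(4094):
    pows.append(mul(pows[-1], 0b10))
assert len(set(pows)) == 4095           # alpha is primitive

F4 = {0, 1, pows[1365], pows[2730]}     # F_4 = {0} u <alpha^1365>
U = frozenset(x ^ mul(pows[1], y) ^ mul(pows[3], z)
              for x in F4 for y in F4 for z in F4)
assert len(U) == 64                     # the sum F_4 + aF_4 + a^3F_4 is direct
assert len({frozenset(mul(u, g) for u in U) for g in pows}) == 1365
\end{verbatim}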
\section{The Subspace Distance of Cyclic Orbit Codes}
In the previous section we determined the cardinality of an orbit code in terms of the best friend.
Now we turn to the minimum distance of these codes, again making use of the best friend.
Throughout this section, we restrict ourselves to orbit codes with respect to the cyclic group~$\Fqn^*$.
Thus we fix a primitive element~$\alpha\in\Fqn^*$.
Moreover, let~$\cU$ be a $k$-dimensional subspace of~$\Fqn$.
We usually assume $k\leq n/2$ (see Remark~\ref{R-Cperp}), but will not make explicit use of this assumption.
Recall that the orbit code~$\orb(\cU)=\orb_{\alpha}(\cU)$ contains a subspace $\cU'$ such that $1\in\cU'$.
Therefore, we may assume without loss of generality that $1\in\cU$.
Finally, let~$\F_{q^r}$ be the best friend of~$\cU$, and define
\[
t:=\frac{k}{r}=\dim_{\F_{q^r}}\cU.
\]
From Corollary~\ref{C-cardinality} we know that the cardinality of $\orb(\cU)$ is given by $N:=\frac{q^n-1}{q^r-1}$.
For the study of the subspace distance, we note that for orbit codes the minimum distance is given by
\[
\ds(\orb(\cU))=\min\{\ds(\cU,\,\cU\alpha^j)\mid 1\leq j<|\orb(\cU)|\}.
\]
This follows directly from the identity $\ds(\cU\alpha^l,\cU\alpha^m)=\ds(\cU,\cU\alpha^{m-l})$.
\subsection{Bounds on the Subspace Distance via the Stabilizer}\label{SS-DistStab}
The following lemma serves to show the simple fact that the subspace distance is a
multiple of~$2r$.
This is so because the intersection of any two subspaces in $\orb(\cU)$ is a vector space over~$\F_{q^r}$.
\begin{lemma}\label{L-distance}
Define $s:=\max_{1\leq j<N}\dim_{\F_{q^r}}(\cU \cap \cU\alpha^j)$.
Then
\begin{equation}\label{e-dists}
\ds(\orb(\cU)) = 2(k - sr)=2r(t-s).
\end{equation}
As a consequence,
\[
2r\leq \ds\big(\orb(\cU)\big)\leq 2k.
\]
\end{lemma}
Of course, the upper bound $\ds\big(\orb(\cU)\big)\leq 2k$ is true for all constant dimension codes of dimension~$k$
(which by assumption is at most $n/2$) as we saw already in~\eqref{e-distC}.
\begin{proof}
Let $1\leq j< N$.
Clearly, $\cU\alpha^j$ and thus $\cU \cap \cU\alpha^j$ are vector spaces over~$\F_{q^r}$.
Let $s_j:=\dim_{\F_{q^r}}(\cU \cap \cU\alpha^j)$.
Since $1\leq j< N$, we know $\cU \neq \cU\alpha^j$, and therefore $0\leq s_j<t$.
Thus, $\ds(\cU,\cU\alpha^j) = 2(k - \dim(\cU\cap\cU \alpha^j))=2(k-s_jr)\geq2r(t-s)\geq2r$.
\end{proof}
Comparing the lower bound~$2r$ with Corollary~\ref{C-cardinality}, we observe the usual trade-off between the cardinality
of an orbit code and its (potential) distance:
the larger the best friend, the smaller the code, but the better the lower bound for the distance.
\begin{cor}\label{C-Spread}
For any orbit code $\orb(\cU)$ we have
\[
\ds(\orb(\cU))=2k\Longleftrightarrow r=k\Longleftrightarrow\cU=\F_{q^k}.
\]
If any (hence all) of these properties are true, then~$\orb(\cU)$ is a spread code.
\end{cor}
\begin{proof}
Using the fact that $1\in\cU$ the second equivalence is obvious.
The implication ``$\Longleftarrow$'' of the first equivalence has been dealt with in Corollary~\ref{C-Fqk}.
As for ``$\Longrightarrow$'', note that Lemma~\ref{L-distance} implies that $\cU\alpha^j\cap\cU=\{0\}$ for all~$j$, hence
$\orb(\cU)$ is a partial spread.
Since $|\orb(\cU)|=(q^n-1)/(q^r-1)$, the union of all subspaces in the orbit consists of $(q^k-1)(q^n-1)/(q^r-1)$
distinct nonzero points in~$\Fqn$.
This number is at most $q^n-1$, which forces $q^k-1\leq q^r-1$, i.e.\ $k\leq r$; since also $r\leq k$, this implies~$r=k$.
\end{proof}
The previous results have shown that the best distance for a $k$-dimensional primitive orbit code is~$2k$, in which
case the code is a spread.
On the other hand, Proposition~\ref{P-Fqk} tells us that these codes have the smallest cardinality among all
$k$-dimensional primitive orbit codes.
The next lemma shows that the worst distance, namely $\ds(\orb(\cU)) = 2r$, is attained whenever the defining
subspace~$\cU$ has a particularly regular form.
\begin{lemma}\label{L-dist-bad}
Suppose $\cU$ is of the form $\cU = \bigoplus_{i = 0}^{t-1} \alpha^{li} \F_{q^r}$ for some $1\leq l <\frac{q^n-1}{q^r-1}$,
where~$\F_{q^r}$ is the best friend of $\cU$.
Then $\ds(\orb(\cU)) = 2r$.
\end{lemma}
\begin{proof}
Since $\alpha^l\cU = \bigoplus_{i = 1}^{t} \alpha^{li} \F_{q^r}$ we have
$\bigoplus_{i = 1}^{t-1} \alpha^{li} \F_{q^r}\subseteq \cU\cap\alpha^l\cU$.
Moreover, $l<|\orb(\cU)|$ yields
$\dim_{\F_{q^r}}(\cU\cap\alpha^l\cU)\leq t-1 = \dim_{\F_{q^r}}(\bigoplus_{i = 1}^{t-1} \alpha^{li} \F_{q^r})$.
So $\cU\cap\alpha^l\cU=\bigoplus_{i = 1}^{t-1} \alpha^{li} \F_{q^r}$, and
$\dim_{\F_{q^r}}(\cU\cap\alpha^l\cU) = t-1$, which is the maximum possible intersection between any two distinct
subspaces in the orbit code.
Hence in the notation of Lemma~\ref{L-distance} we have $s =t-1$, and $\ds(\orb(\cU))=2r$.
\end{proof}
Observe that in the previous lemma we added the requirement that~$\F_{q^r}$ be the best friend of~$\cU$ because this does
not follow from the form of~$\cU$.
Indeed, $\cU = \bigoplus_{i = 0}^{t-1} \alpha^{li} \F_{q^r}$ only implies that~$\F_{q^r}$ is a friend of~$\cU$, but it
may not be the best friend.
For instance, in~$\F_{2^6}$ with primitive element~$\alpha$ we have $\F_{2^2}=\F_2\oplus\alpha^{21}\F_2$, hence the
best friend is~$\F_{2^2}$.
The following result shows that this is essentially the only type of case where~$\F_{q^r}$ is not the best friend.
\begin{prop}\label{P-DirSumBF}
Let $\cU = \bigoplus_{i=0}^{t-1}\alpha^{il}\F_{q^r}$ for some~$l$, where $t>1$.
Then $\deg(\mipo(\alpha^l,\F_{q^r}))\geq t$.
Furthermore,
\begin{align*}
\cU=\F_{q^{rt}}&\Longleftrightarrow \deg(\mipo(\alpha^l,\F_{q^r}))=t\\
&\Longleftrightarrow \alpha^l\cU=\cU\\
&\Longleftrightarrow \text{$\F_{q^r}$ is not the best friend of~$\cU$.}
\end{align*}
In other words, $\F_{q^r}$ is the best friend of~$\cU$ if and only if~$\cU$ is not a field.
\end{prop}
\begin{proof}
First of all, the directness of the sum implies immediately that $\deg(\mipo(\alpha^l,\F_{q^r}))\geq t$.
As for the chain of equivalences we argue as follows.
\\
1) Assume $\cU=\F_{q^{rt}}$.
Then~$\cU$ is a field and the form of~$\cU$ shows that $\cU=\F_{q^r}[\alpha^l]$.
This implies $\deg(\mipo(\alpha^l,\F_{q^r}))=t$.
\\
2) $\deg(\mipo(\alpha^l,\F_{q^r}))=t$ yields $\dim_{\F_{q^r}}\F_{q^r}[\alpha^l]=t$, and
since~$\cU$ is contained in this field, we have $\cU=\F_{q^r}[\alpha^l]$.
This implies $\alpha^l\cU=\cU$.
\\
3) If $\alpha^l\cU=\cU$, then $\alpha^l\in\stab(\cU)$ and hence $\alpha^l$ is contained in the best friend.
Since due to the directness of the sum,~$\alpha^l$ is not in~$\F_{q^r}$, we conclude that~$\F_{q^r}$ is not the best friend of~$\cU$.
\\
4) Assume that the best friend of~$\cU$ is~$\F_{q^{r'}}$ for some $r'>r$.
Set $\dim_{\F_{q^{r'}}}\cU=t'$.
Then $rt=k=r't'$.
We show that $\alpha^l\cU=\cU$.
Assume to the contrary that $\alpha^l\cU\neq\cU$.
Then $\dim_{\F_{q^{r'}}}(\cU\cap\alpha^l\cU)\leq t'-1$.
On the other hand $\bigoplus_{i=1}^{t-1}\alpha^{il}\F_{q^r}\subseteq(\cU\cap\alpha^l\cU)$.
Considering dimensions over~$\F=\F_q$ we obtain the inequality $r(t-1)\leq r'(t'-1)$, and
using $rt=r't'$ this yields $r\geq r'$, a contradiction.
Thus~$\alpha^l\cU=\cU$, and this implies that $\alpha^{lt}=\sum_{i=0}^{t-1}a_i\alpha^{li}$ for some $a_i\in\F_{q^r}$.
But this means that $\deg(\mipo(\alpha^l,\F_{q^r}))=t$ and $\cU=\F_{q^r}[\alpha^l]=\F_{q^{rt}}$.
\end{proof}
Of course, there are also subspaces that are not of the form in Lemma~\ref{L-dist-bad} and yet generate orbit codes with
distance as low as~$2r$.
\begin{exa}\label{E-non-optimal}
Consider~$\F_{2^{12}}$ with primitive element~$\alpha$ as in Example~\ref{E-EtVa2}.
Let $\cW=\F_{2^4}+\alpha\F_{2^2}$. In Example~\ref{E-EtVa2}(b) we saw that the best friend is~$\F_{2^2}$.
One can check that $\ds(\orb(\cW))=4=2r$.
\end{exa}
Let us now return to the case where the distance is large.
According to Lemma~\ref{L-distance} the best distance a non-spread orbit code may achieve is $2(k-r)$.
\begin{exa}\label{E-optimal}
\begin{alphalist}
\item The code in Example~\ref{E-EtVa1} is optimal among all non-spread orbit codes:
in~\cite[p.~1170]{EtVa11} the distance has been found as~$4$, and this is $2(k-1)$.
\item Consider the code in Example~\ref{E-EtVa2}(a). In this case $k=6$ and $r=2$.
One can verify that $\dim_{\F_{2^2}}(\cU\cap\cU\alpha^j) \leq 1$ for all
$1\leq j< 1365=|\orb(\cU)|$.
Hence Lemma~\ref{L-distance} yields $\ds(\orb(\cU))=2(k-r)=8$, which means the code
is optimal among all non-spread orbit codes.
\end{alphalist}
\end{exa}
\begin{exa}\label{E-t=2}
Let $\dim_{\F_{q^r}}(\cU)=t=2$, hence $k=2r$. Then $2r=2(k-r)$, and thus
$\ds(\orb(\cU))=2(k-r)$ due to Lemma~\ref{L-distance}.
Thus any such code is optimal among all non-spread orbit codes with best friend~$\F_{q^r}$.
\end{exa}
Next we give a condition that leads to a distance less than $2(k-r)$.
Consider an intersection $\cV:=\cU\cap\cU\alpha^j$ for some~$j$.
Then~$\cV$ is an $\F_{q^r}$-subspace of~$\cU$, and thus~$\F_{q^r}$ is a friend of~$\cV$.
It is a consequence of Proposition~\ref{T-StabBF} that the best friend of $\cV$ is $\F_{q^{r'}}$
for some $r'$ such that $r\mid r'$.
\begin{prop}\label{P-dist-subspace}
Suppose there exists a subspace~$\cV$ of~$\cU$ with best friend~$\F_{q^{r'}}$ for some $r'>r$.
Then $\ds(\orb(\cU))\leq2(k-r')<2(k-r)$.
\end{prop}
\begin{proof}
Since $\F_{q^{r'}}$ is the best friend of $\cV$, Corollary~\ref{C-cardinality} yields
\[
|\orb(\cV)|=\dsfrac{q^n-1}{q^{r'}-1}<\dsfrac{q^n-1}{q^r-1} = |\orb(\cU)|.
\]
So there exists some~$j$ such that $\cV\alpha^j = \cV$, while $\cU\alpha^j\neq\cU$.
Then $\cV\subset\cU\cap\cU\alpha^j$, so $\dim_{\F_{q^r}}(\cU\cap\cU\alpha^j)\geq \dim_{\F_{q^r}}(\cV) \geq\frac{r'}{r}$.
Hence $s:=\max_{1\leq j<N}\dim_{\F_{q^r}}(\cU\cap\cU\alpha^j)\geq\frac{r'}{r}$, and
Lemma~\ref{L-distance} implies
$\ds(\orb(\cU))=2(k-sr)\leq2(k-r')<2(k-r)$.
\end{proof}
Example~\ref{E-non-optimal} illustrates this case.
The subspace~$\cW$ has best friend~$\F_{2^2}$, while also containing the field~$\F_{2^4}$.
Since the subfield~$\F_{2^4}$ is its own best friend, Lemma~\ref{L-distance} and Proposition~\ref{P-dist-subspace} guarantee
$2r=4\leq\ds(\orb(\cW))\leq 2(6-4)=4$, as we already saw.
We would like to stress that the condition in Proposition~\ref{P-dist-subspace} is not necessary for
the distance to be less than $2(k-r)$.
Lemma~\ref{L-dist-bad} provides examples of such codes.
For instance, the subspace $\cU$ of~$\F_{2^7}$ generated by $1,\,\alpha,\,\alpha^2$ (where~$\alpha$ is a primitive
element of~$\F_{2^7}$) has distance $\ds(\orb(\cU))=2r=2<2(k-r)$.
But since~$\F_2$ is the only subfield of~$\F_{2^7}$, every subspace of~$\cU$ has best friend~$\F_2$, and the assumption of
Proposition~\ref{P-dist-subspace} is not satisfied.
Unfortunately, we do not know any general construction of cyclic orbit codes with cardinality $(q^n-1)/(q^r-1)$ and distance $2(k-r)$, i.e.,
the best non-spread code case.
In~\cite[p.~7396]{TMBR13} it is conjectured that for any $n,k,q$ there exists a cyclic orbit code of cardinality
$(q^n-1)/(q-1)$ and distance $2(k-1)$.
In the same paper the conjecture is also verified for randomly chosen triples
$(n,k,q)\in\{4,\ldots,100\}\times\{1,\ldots,10\}\times\{2,3\}$.
However, by exhausting all possible $4$-dimensional subspaces in~$\F_2^8$ via their row echelon form we could
verify that no cyclic orbit code exists with parameters $(n,k,r,q)=(8,4,1,2)$, hence with cardinality~$255$ and distance~$6$.
While there exists such a code for $(n,k,r,q)=(6,3,1,2)$ and distance~$4$, it remains open whether there is a cyclic
orbit code with parameters $(2k,k,1,q)$ and distance $2(k-1)$ for any~$k>4$.
The usual bounds, see e.g.~\cite{XiFu09}, do not rule out the existence of such codes.
\begin{exa}\label{E-bounds}
Let us consider cyclic orbit codes in $\F_{2^{12}}$ of dimension~$k=6$ and with best friend~$\F_2$.
Due to Corollary~\ref{C-cardinality}, such a code has cardinality~$2^{12}-1=4095$.
Because of the above discussion, we have doubts that there exists such a code with distance $2(k-1)=10$, but
we did not perform an exhaustive search.
The best code we could find with a random search has distance~$8$ and is generated by
$\cU=\F_2+\alpha\F_2+\alpha^4\F_2+\alpha^{10}\F_2+\alpha^{10}\beta\F_2+\alpha^8\beta^2\F_2$, where~$\alpha$ and~$\beta$ are primitive elements
of~$\F_{2^{12}}$ and $\F_{2^6}$, respectively.
\end{exa}
We close this section with the following positive observation.
Note that the codes found below will be used again in Example~\ref{E-PatchSecondBest} to build larger codes of the same quality.
\begin{exa}\label{E-k3SB}
It can be verified that for $q=2$ and all $n\in\{6,\ldots,20\}$, the cyclic orbit code $\orb(\cU)$ of
dimension~$k=3$ and cardinality $2^n-1$ with
\[
\cU=\F_2+\alpha^2\F_2+\alpha^3\F_2\subseteq\F_{2^n}, \text{ where }\ideal{\alpha}=\F_{2^n}^*,
\]
has distance $4=2(k-1)$.
The same is true (maximal cardinality and distance~$4$) for $q=3,5,7$ and $n\in\{6,7,8\}$ and the analogous subspace~$\cU$.
We did not explore larger values of~$q$ and~$n$.
\end{exa}
\subsection{Computing the Subspace Distance via Multisets}
We now turn to a more explicit computation of the subspace distance of a cyclic orbit code.
The next result improves upon \cite[Thm.~15, Prop.~16]{RoTr13}, which in turn goes back to \cite[Lem.~1]{KoKu08}.
By taking the best friend into account, we will be able to work with a smaller multiset than in~\cite{RoTr13}, and we
do not have to distinguish between orbits of size $q^n-1$ (which can occur only if $q=2$) and those of smaller size.
As before let~$\cU$ have best friend~$\F_{q^r}$.
Lemma~\ref{L-Stabplus} yields
\begin{equation}\label{e-Ualpha}
\stab(\cU)=\ideal{\alpha^N}=\F_{q^r}^*, \text{ where }N=\frac{q^n-1}{q^r-1}.
\end{equation}
Consider the group action $\Fqn \times \ideal{\alpha^N} \longrightarrow \Fqn$ given by $(v, \gamma)\mapsto v\gamma$.
For each $v\in\Fqns$ the orbit of~$v$ is
\[
\cO(v):=\{v, v\alpha^N, v\alpha^{2N},\ldots, v\alpha^{(q^r-2)N}\},
\]
and $|\cO(v)|= |\ideal{\alpha^N}|=q^r-1$, since all elements of the orbit must be distinct.
Writing $v=\alpha^b$, we have
\[
\cO(v) =\{ \alpha^{b}, \alpha^{b+N}, \ldots, \alpha^{b+N(q^r-2)}\}.
\]
By modular arithmetic there is exactly one element of this orbit whose exponent is strictly less than $N$.
Hence
\[
\Fqns=\bigcup_{b=0}^{N-1}\hspace*{-1.1em}\raisebox{.5ex}{$\cdot$}\hspace*{1.1em}\cO(\alpha^b).
\]
Since $\cU$ is an~$\F_{q^r}$-vector space, the orbit $\cO(u)$ is in~$\cU$ for every $u\in\cU$.
This shows that
\begin{equation}\label{e-Uorbit}
\cU\backslash\{0\} =\bigcup_{i=1}^S\hspace*{-.9em}\raisebox{.5ex}{$\cdot$}\hspace*{.9em}\mathcal{O}(\alpha^{b_i})\text{ for $S=\frac{q^k-1}{q^r-1}$
and suitable non-negative integers }b_1,\ldots,b_S<N.
\end{equation}
One should note that $b_1,\ldots,b_S$ are uniquely determined by~$\cU$.
Moreover, if $\alpha^c\in \cU$ and $0\leq c<N$, then $c\in\{b_1,\ldots, b_S\}$.
For the following result, recall that a \emph{multiset} is a collection of elements where each element is allowed to appear more than once.
We will denote multisets by double braces $\{\!\{ \ldots\}\!\}$.
The number of times an element appears in the multiset is called its \emph{multiplicity}.
\begin{theo}\label{T-dist}
Let~$\cU$ be as above and $b_1,\ldots,b_S$ be as in~\eqref{e-Uorbit}.
Define the multiset
\[
\cD:=\{\!\{b_l-b_m~\mod N \mid 1\leq l,\,m\leq S,\,l\neq m\}\!\},
\]
and for $J\in\cD$ denote by $m(J)$ the multiplicity of~$J$ in~$\cD$; for $J\notin\cD$ we set $m(J):=0$.
Furthermore, set $M:= \max_{1\leq J<N} m(J)$, so that in particular $M=0$ if~$\cD=\emptyset$.
Then $\dim(\cU\cap\cU\alpha^J) = \log_{q}(m(J)(q^r-1) +1)$ and
\[
\ds(\orb(\cU)) = 2(k-L),\text{ where }L = \log_q(M(q^r-1)+1).
\]
\end{theo}
\begin{proof}
Let us first consider the case where~$\cD=\emptyset$.
This happens only if~$S=1$, hence $r=k$ and $\cU=\F_{q^k}$.
In this case $\ds(\orb(\cU))=2k$ as we know from Corollary~\ref{C-Spread}.
Let now $\cD\neq\emptyset$.
Fix $J\in\{1,\ldots,N-1\}$.
For all $l\in[S]:=\{1,\ldots,S\}$ we have $\alpha^{b_l+J}\in\cU\alpha^J$, and thus $\cO(\alpha^{b_l+J})\subset\cU\alpha^J$.
Hence $(\cU\alpha^J)\backslash\{0\} = \bigcup_{l\in[S]}\hspace*{-2.5em}\raisebox{.5ex}{$\cdot$}\hspace*{2.5em}\cO(\alpha^{b_l+J})$.
Since $\cU\cap\cU\alpha^J$ is an $\F_{q^r}$-vector space contained in~$\cU$, we have
\[
(\cU\cap\cU\alpha^J)\backslash\{0\}=\bigcup_{l\in\cL_J}\hspace*{-1.2em}\raisebox{.5ex}{$\cdot$}\hspace*{1.2em}\cO(\alpha^{b_l}),
\]
where
\begin{align*}
\cL_J&=\{l\in[S]\mid \cO(\alpha^{b_l})=\cO(\alpha^{b_m+J})\text{ for some }m\in[S]\}\\
&=\{l\in[S]\mid \alpha^{b_l}=\alpha^{b_m+J}\alpha^{\lambda N}\text{ for some }m\in[S]\text{ and }\lambda \in\Z\}.
\end{align*}
Note that $\alpha^{b_l}=\alpha^{b_m+J}\alpha^{\lambda N}$ is equivalent to $b_l\equiv b_m+J+\lambda N~\mod (q^n-1)$.
Since~$N$ is a divisor of $q^n-1$, we conclude
\[
\cL_J\subseteq\{l\in[S]\mid b_l-b_m\equiv J~\mod N\text{ for some }m\in[S]\}.
\]
By definition of the multiplicity, there are $m(J)$ pairs $(b_l,b_m)$ such that $b_l-b_m\equiv J~\mod N$.
Thus $(\cU\cap\cU\alpha^J)\backslash\{0\}$ is the union of at most~$m(J)$ orbits, and hence $|\cU\cap\cU\alpha^J|\leq m(J)(q^r-1)+1$.
To show equality, pick such a pair $(b_l,b_m)$ and write $b_l=b_m+J+\lambda N$ for some~$\lambda\in\Z$.
Then $\cO(\alpha^{b_l})=\cO(\alpha^{b_m+J+\lambda N})=\cO(\alpha^{b_m+J})$, and so this orbit is contained in $\cU\cap\cU\alpha^J$.
Since distinct pairs give rise to distinct indices~$l$ (the exponents $b_1,\ldots,b_S$ being distinct), the intersection contains $m(J)$ distinct orbits, and we conclude that
$|\cU\cap\cU\alpha^J|= m(J)(q^r-1)+1$.
Thus $\dim(\cU\cap\cU\alpha^J) = \log_{q}(m(J)(q^r-1) +1)$.
Finally, $\ds(\orb(\cU)) = 2(k-\max_{0<J<N}\{\dim(\cU\cap\cU\alpha^J)\})$, which leads to the desired result.
\end{proof}
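The procedure behind Theorem~\ref{T-dist} is straightforward to implement.
The following Python sketch (our own; it takes the exponents $b_1,\ldots,b_S$ of~\eqref{e-Uorbit} as input rather than computing them) recovers the distance of the code from Example~\ref{E-EtVa1}, where $q=2$, $k=3$, $r=1$, $N=63$, and the exponents are $0,1,4,6,16,24,33$.
\begin{verbatim}
from collections import Counter
from math import log

def orbit_distance(bs, N, q, r, k):
    D = Counter((bl - bm) % N for bl in bs for bm in bs if bl != bm)
    M = max(D.values(), default=0)     # largest multiplicity in the multiset
    L = log(M * (q**r - 1) + 1, q)     # dimension of a largest intersection
    return round(2 * (k - L))

print(orbit_distance([0, 1, 4, 6, 16, 24, 33], N=63, q=2, r=1, k=3))  # -> 4
\end{verbatim}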
\section{A Linkage Construction}\label{S-Link}
In this section we show how cyclic orbit codes can be used to construct subspace codes of longer length.
In order to do so, it will be convenient to present subspaces as rowspaces of suitable matrices.
In other words, we will now consider subspaces in~$\F_q^n$ rather than $\F_{q^n}$.
Hence the orbit code $\orb_{\alpha}(\cU)$ becomes $\{\im U M^i\mid i=0,\ldots,|\orb_{\alpha}(\cU)|-1\}$
as in~\eqref{e-UMf}, where $M$ is the companion matrix of $\mipo(\alpha,\F_q)$, and $U\in\F^{k\times n}$ is
a matrix such that its rowspace $\im U$ is $\varphi(\cU)$ with the isomorphism~$\varphi$ in~\eqref{e-FFn}.
A constant dimension code in~$\F_q^n$ of dimension~$k$, cardinality~$M$, and subspace distance~$d$
will be called an $(n,M,d,k)_q$ code.
Recall also the notation $[N]:=\{1,\ldots,N\}$.
The following construction resembles the one given in the proof of~\cite[Thm.~11]{EtVa11}.
The latter, however, is tailored specifically to the use of a spread code as~$\cC_1$ and a specific choice of~$\cC_2$.
This allows a larger code~$\tilde{\cC}_3$ than our construction below.
In Theorem~\ref{T-PatchTwoCyclic} below we will, however, generalize \cite[Thm.~11]{EtVa11} by making appropriate choices.
\begin{theo}\label{T-PatchTwo}
For $i=1,2$ let~$\cC_i=\{\im U_{i,l}\mid l\in[N_i]\}$ be $(n_i,\,N_i,\,d_i,\,k)_q$ codes.
Thus~$U_{i,l}$ are matrices of rank~$k$ in $\F^{k\times n_i}$ for all $i,l$.
Define the subspace code $\cC_1\circledast\cC_2$ of length $n:=n_1+n_2$ as
$\cC_1\circledast\cC_2:=\tilde{\cC}_1\cup\tilde{\cC}_2\cup\tilde{\cC}_3$,
where
\begin{align*}
&\tilde{\cC}_1=\{\im(U_{1,l},\,0_{k\times n_2})\mid l\in[N_1]\}, \\[.5ex]
&\tilde{\cC}_2=\{\im(0_{k\times n_1},\, U_{2,l})\mid l\in[N_2]\}, \\[.5ex]
&\tilde{\cC}_3=\{\im(U_{1,l},\,U_{2,m})\mid (l,m)\in[N_1]\times[N_2]\}.
\end{align*}
Then~$\cC_1\circledast\cC_2$ is an $(n,\,N,\,d,\,k)_q$ code, where $N=N_1+N_2+N_1N_2$ and
$d=\min\{d_1,\,d_2\}$.
\end{theo}
The reader should note that the code $\cC_1\circledast\cC_2$ depends not only on~$\cC_1$ and~$\cC_2$, but also on the actual
choice of the matrices representing the subspaces.
Therefore the notation~$\cC_1\circledast\cC_2$ is not entirely correct, but this should not lead to any confusion.
\begin{proof}
The cardinality of~$\cC_1\circledast\cC_2$ is clear because the three sets~$\tilde{\cC}_i$ are pairwise disjoint.
Furthermore, it is obvious that $\ds(\tilde{\cC}_i)=\ds(\cC_i)$ for $i=1,2$.
Moreover, since each subspace in~$\tilde{\cC}_1$ intersects trivially with each subspace
in~$\tilde{\cC}_2$ or $\tilde{\cC}_3$, we conclude that
$\ds(\cW_1,\cW_2)=2k$ for all $\cW_1\in\tilde{\cC}_1$ and $\cW_2\in\tilde{\cC}_2\cup\tilde{\cC}_3$.
The same is true for the distance between subspaces in~$\tilde{\cC}_2$ and those in $\tilde{\cC}_1\cup\tilde{\cC}_3$.
It remains to consider the subspace distance between any two subspaces in~$\tilde{\cC}_3$.
Let $\cX=\im(U_{1,l},U_{2,m})$ and $\cY=\im(U_{1,l'},U_{2,m'})$ be two distinct subspaces in~$\tilde{\cC}_3$,
thus $l\neq l'$ or $m\neq m'$.
Let $(x_1,x_2)\in\cX\cap\cY$.
Then $x_1\in\im U_{1,l}\cap\im U_{1,l'}$ and $x_2\in\im U_{2,m}\cap\im U_{2,m'}$.
Moreover, $(x_1,x_2)= z(U_{1,l},U_{2,m})$ for some $z\in\F^k$, and this shows that
if $x_1=0$ or $x_2=0$, then $z=0$, and thus $(x_1,x_2)=0$.
This means that the projection of $\cX\cap\cY$ onto either component is injective.
All of this shows that
\[
\dim(\cX\cap\cY)\leq\min\big\{\dim(\im U_{1,l}\cap\im U_{1,l'}),\,\dim(\im U_{2,m}\cap\im U_{2,m'})\big\}.
\]
Putting this together with the definition of the subspace distance in~\eqref{e-dist} implies $\ds(\cX,\,\cY)\geq\min\{d_1,\,d_2\}$, as desired.
\end{proof}
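On the level of representing matrices, the construction of Theorem~\ref{T-PatchTwo} is completely elementary.
The following Python sketch is our own illustration: matrices are encoded as tuples of row tuples over~$\F_2$, the input lists are assumed to consist of full-rank representatives, and distinctness of the resulting rowspaces is guaranteed by the theorem.
\begin{verbatim}
def zeros(k, n):
    return tuple((0,) * n for _ in range(k))

def hcat(A, B):                          # horizontal concatenation (A | B)
    return tuple(ra + rb for ra, rb in zip(A, B))

def linkage(C1, C2, n1, n2):
    k = len(C1[0])
    return ([hcat(U, zeros(k, n2)) for U in C1]         # tilde C_1
          + [hcat(zeros(k, n1), V) for V in C2]         # tilde C_2
          + [hcat(U, V) for U in C1 for V in C2])       # tilde C_3

# two one-element toy codes of 1-dimensional subspaces of F_2^2:
print(len(linkage([((1, 0),)], [((1, 1),)], 2, 2)))     # -> 1 + 1 + 1 = 3
\end{verbatim}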
Now we may iterate the above construction.
Choosing another subspace code~$\cC_3$ of dimension~$k$, the code $(\cC_1\circledast\cC_2)\circledast\cC_3$ consists of the
rowspaces of matrices of the form
\[
(U_{1,l},0,0),\ (0, U_{2,l},0),\ (U_{1,l},U_{2,m},0),\ (0,0,U_{3,l}),\ (U_{1,l},0,U_{3,m}),\ (0,U_{2,l},U_{3,m}),\
(U_{1,l},U_{2,m},U_{3,r}).
\]
This also shows that~$\circledast$ is associative, and we simply write $\cC_1\circledast\cC_2\circledast\cC_3$.
Using $U_{i,0}$ for the $(k\times n_i)$-zero matrix, the result for the $t$-fold $\circledast$-product can be presented
in the following form.
\begin{theo}\label{T-PatchMany}
For $i=1,\ldots,t$ let $\cC_i=\{\im U_{i,l}\mid l\in[N_i]\}$ be $(n_i,\,N_i,\,d_i,\,k)_q$ codes.
Thus the matrices $U_{i,l}\in\F^{k\times n_i}$ have rank~$k$ for all $i=1,\ldots,t$ and $l\in[N_i]$.
Define $U_{i,0}:=0_{k\times n_i}$ for all~$i$.
Then the subspace code of length $n:=\sum_{i=1}^t n_i$
\[
\cC_1\circledast\ldots\circledast\cC_t:=\big\{\im(U_{1,l_1},\,U_{2,l_2},\ldots,U_{t,l_t})\mid
l_i\in\{0,\ldots,N_i\}\text{ for all }i,\, (l_1,\ldots,l_t)\neq(0,\ldots,0)\big\}
\]
is an $(n,N,d,k)_q$ code, where $N=\Big(\!\!\prod_{i=1}^t (N_i+1)\!\Big)-1$ and
$d=\min\{d_1,\ldots,d_t\}$.
\end{theo}
One may now ask what type of structure the resulting code has if the constituent codes are cyclic orbit codes.
A partial answer is provided with the following result.
It restricts to the case of primitive cyclic orbit codes over~$\F_2$ that each have~$\F_2$ as the best friend.
\begin{prop}\label{P-COCPatch}
Let $q=2$. For $i=1,\ldots,t$ let~$\alpha_i$ be a primitive element of~$\F_{2^{n_i}}$, and let
$\cC_i=\orb_{\alpha_i}(\cU_i)$ be a cyclic orbit code of dimension~$k$, length~$n_i$, and cardinality $2^{n_i}-1$.
If $n_1,\ldots,n_t$ are pairwise coprime, then $\cC_1\circledast\ldots\circledast\cC_t$
is a union of cyclic orbit codes with respect to a fixed cyclic subgroup of $\GL_n(\F_2)$, where $n=\sum_{i=1}^t n_i$.
\end{prop}
Note that due to Theorem~\ref{T-PatchMany} the code $\cC:=\cC_1\circledast\ldots\circledast\cC_t$ has cardinality $2^n-1$.
But one should also observe that~$\cC$ is not necessarily cyclic with respect to the group $\F_{2^n}^*$.
As we will see in the proof, the cyclic subgroup of $\GL_n(\F_2)$ referred to in the proposition has order $\prod_{i=1}^t (2^{n_i}-1)$,
and this is less than $2^n-1$ if $t>1$.
\begin{proof}
Denote by $M_i\in\GL_{n_i}(\F)$ the companion matrix of the minimal polynomial of~$\alpha_i$ over~$\F_q$, see~\eqref{e-Mf}.
Then, as in~\eqref{e-UMf}, the codes~$\cC_i$ are given by
\[
\cC_i=\{\im U_iM_i^l\mid l=0,\ldots,2^{n_i}-2\} \text{ for some suitable $U_i\in\F^{k\times n_i}$}.
\]
The code~$\cC:=\cC_1\circledast\ldots\circledast\cC_t$ has the form
\[
\cC=\Big\{\im (V_1M_1^{l_1},\ldots,V_tM_t^{l_t})\mid V_i\in\{U_i,0_{k\times n_i}\},\,
l_i\in\{0,\ldots,2^{n_i}-2\},\,(V_1,\ldots,V_t)\neq(0,\ldots,0)\Big\}.
\]
Using the notation $W_j\in\F^{k\times n},\,j=1,\ldots,2^t-1$, for the $2^t-1$ choices for $(V_1,\ldots,V_t)$, the code~$\cC$ is
the union of $2^t-1$ codes of the form
\[
\cC^{(j)}=\Big\{\im \big(W_j\cdot\text{diag}(M_1^{l_1},\ldots,M_t^{l_t})\big)\,\Big|\, l_i\in\{0,\ldots,2^{n_i}-2\}\Big\}.
\]
Note that each $M_i$ generates a cyclic group of order~$2^{n_i}-1$.
Hence the direct product of these groups is a cyclic group if and only if the orders $2^{n_1}-1,\ldots,2^{n_t}-1$ are pairwise coprime,
which in turn is the case if and only if $n_1,\ldots,n_t$ are pairwise coprime, thanks to the identity $\gcd(2^a-1,2^b-1)=2^{\gcd(a,b)}-1$.
As a consequence, if $n_1,\ldots,n_t$ are pairwise coprime, then each set $\cC^{(j)}$ is an orbit code with respect to this
cyclic group, and thus~$\cC$ is the union of cyclic orbit codes.
\end{proof}
We remark that the above does not provide a necessary condition for~$\cC$ to be a union of cyclic orbit codes
because it may happen that~$\cC$ is such a code without the lengths being pairwise coprime.
\begin{exa}\label{E-PatchSecondBest}
Let us use the binary subspace codes found in Example~\ref{E-k3SB}.
These are~$15$ codes,~$\cC_i$, of lengths $6,7,\ldots,20$.
Thus $\cC_1\circledast\ldots\circledast\cC_{15}$ has length $\sum_{i=6}^{20}i=195$, cardinality~$2^{195}-1$
and distance~$4$.
It is the union of $2^{15}-1$ orbit codes; note, however, that the lengths $6,\ldots,20$ are not pairwise coprime, so Proposition~\ref{P-COCPatch} does not guarantee that these orbit codes are cyclic.
Note that the cardinality of this code equals the maximum cardinality of any binary cyclic orbit code of length~$195$.
\end{exa}
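The cardinality claim is immediate from Theorem~\ref{T-PatchMany}, as the following one-line check (our own) confirms.
\begin{verbatim}
from math import prod
ns = range(6, 21)                       # the lengths 6,7,...,20
assert sum(ns) == 195
print(prod((2**n - 1) + 1 for n in ns) - 1 == 2**195 - 1)   # -> True
\end{verbatim}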
As we show next, applying the linkage construction to cyclic orbit codes results in a code whose cardinality
is upper bounded by the maximum cardinality of a cyclic orbit code of the same length.
\begin{prop}\label{P-PatchCard}
For $i=1,\ldots,t$ let $\cC_i=\orb_{\alpha_i}(\cU_i)$ be a cyclic orbit code of length~$n_i$ and dimension~$k$, where~$\alpha_i$ is a primitive element of~$\F_{q^{n_i}}$, and let
$n:=\sum_{i=1}^t n_i$.
Then $\cC:=\cC_1\circledast\ldots\circledast\cC_t$ satisfies $|\cC|\leq \frac{q^n-1}{q-1}$ with equality if and only if
$q=2$ and~$\F_2$ is the best friend of~$\cC_i$ for all~$i$.
\end{prop}
\begin{proof}
Let~$\F_{q^{r_i}}$ be the best friend of~$\cC_i$.
Then $|\cC_i|=\frac{q^{n_i}-1}{q^{r_i}-1}$.
Using Theorem~\ref{T-PatchMany}
we compute
\begin{equation}\label{e-Ccard1}
|\cC|=\prod_{i=1}^t\Big(\frac{q^{n_i}-1}{q^{r_i}-1}+1\Big)-1\leq \prod_{i=1}^t\Big(\frac{q^{n_i}-1}{q-1}+1\Big)-1.
\end{equation}
We show that
\[
\prod_{i=1}^t\Big(\frac{q^{n_i}-1}{q-1}+1\Big)\leq \frac{q^{\sum_{i=1}^t n_i}-1}{q-1}+1.
\]
In order to do so, we induct on~$t$.
It is clear that it suffices to consider the case~$t=2$.
Using that $\frac{q^{n_1}-1}{q-1}+1\leq q^{n_1}$, we compute
\begin{equation}\label{e-Ccard2}
\Big(\frac{q^{n_1}-1}{q-1}+1\Big)\Big(\frac{q^{n_2}-1}{q-1}+1\Big)
\leq q^{n_1}\frac{q^{n_2}-1}{q-1}+\frac{q^{n_1}-1}{q-1}+1
=\frac{q^{n_1+n_2}-1}{q-1}+1,
\end{equation}
which is what we wanted.
This proves $|\cC|\leq \frac{q^n-1}{q-1}$.
Finally, we have equality in~\eqref{e-Ccard1} if and only if $r_i=1$ for all~$i$, and equality in~\eqref{e-Ccard2}
if and only if~$q=2$.
This concludes the proof.
\end{proof}
The cardinality of the linkage code in Theorem~\ref{T-PatchTwo} can be increased if one of the two
constituent codes is a subset of a cyclic orbit code.
In this case we can extend the code~$\tilde{\cC}_3$ by allowing all powers of the primitive element, regardless
of the stabilizer of the constituent orbit code.
As we will see in the proof, the linkage to the other constituent will guarantee that the distance is not compromised.
The construction in the proof of~\cite[Thm.~11]{EtVa11} by Etzion and Vardy may be regarded as the special case where
$N_2=1$ and~$\cC_1$ is a cyclic orbit spread code.
\begin{theo}\label{T-PatchTwoCyclic}
Let~$\cC_1=\{\im U_{1,l}\mid l\in[N_1]\}$ be an $(n_1,\,N_1,\,d_1,\,k)_q$ code,
thus $U_{1,l}\in\F^{k\times n_1}$ are matrices of rank~$k$.
Furthermore, let~$\alpha$ be a primitive element of~$\F_{q^{n_2}}$, and denote by~$M\in\GL_{n_2}(\F)$ the companion matrix of
the minimal polynomial of~$\alpha$.
Let $U_2\in\F^{k\times n_2}$ be a matrix of rank~$k$ and~$\cC_2$ be a subset of the cyclic orbit code $\orb(\im U_2)$ of length~$n_2$.
Hence there exists a set $\cL\subseteq\{0,1,\ldots,q^{n_2}-2\}$ such that $\cC_2=\{\im(U_2 M^l)\mid l\in\cL\}$ and
$\cC_2$ is an $(n_2,\,N_2,\,d_2,\,k)_q$ code, where $N_2=|\cL|$.
Define the subspace code $\widetilde{\cC}$ of length~$n:=n_1+n_2$ as
$\widetilde{\cC}=\tilde{\cC}_1\cup\tilde{\cC}_2\cup\tilde{\cC}_3$,
where
\begin{align*}
&\tilde{\cC}_1=\{\im(U_{1,l},\,0_{k\times n_2})\mid l\in[N_1]\}, \\[.5ex]
&\tilde{\cC}_2=\{\im(0_{k\times n_1},\, U_2 M^l)\mid l\in\cL\}, \\[.5ex]
&\tilde{\cC}_3=\{\im(U_{1,l},\, U_2 M^m)\mid (l,m)\in[N_1]\times\{0,\ldots,q^{n_2}-2\}\}.
\end{align*}
Then~$\widetilde{\cC}$ is an $(n,\,N,\,d,\,k)_q$ code, where $N=N_1+N_2+(q^{n_2}-1)N_1$ and
$d=\min\{d_1,\,d_2\}$.
\end{theo}
\begin{proof}
Comparing with Theorem~\ref{T-PatchTwo} and its proof, we see that the only case that remains to be considered is the distance between two
subspaces of the form $\cX=\im (U_{1,l},\,U_2M^m)$ and $\cY=\im(U_{1,l'},\,U_2M^{m'})$, where $l\neq l'$ or $m\neq m'$.
If $l\neq l'$, then $\im(U_{1,l})\neq\im(U_{1,l'})$, and as in the proof of Theorem~\ref{T-PatchTwo} we conclude
$\dim(\cX\cap\cY)\leq\dim\big(\im(U_{1,l})\cap\im(U_{1,l'})\big)$, and thus $\ds(\cX,\cY)\geq d_1\geq\min\{d_1,\,d_2\}$, as desired.
Let now $l=l'$. Then $m\neq m'$, but since $m,\,m'\in\{0,\ldots,q^{n_2}-2\}$, we may have that $\im U_2M^m=\im U_2M^{m'}$, and thus
we have to be more detailed.
Let $(x_1,x_2)\in\cX\cap\cY$, say $(x_1,x_2)=z(U_{1,l},\,U_2 M^m)=z'(U_{1,l},\,U_2 M^{m'})$.
Then the full row rank of~$U_{1,l}$ yields $z=z'$ and thus
$x_2=z(U_2 M^m)=z(U_2 M^{m'})$.
Using the isomorphism~$\varphi$ from \eqref{e-FFn}, this translates into
\[
\varphi^{-1}(x_2)=c\alpha^m=c\alpha^{m'},\ \text{ where }c=\varphi^{-1}(z U_2).
\]
This is an identity in the field~$\F_{q^{n_2}}$, and since $m,\,m'$ are less than the order of~$\alpha$, we have
$\alpha^m\neq\alpha^{m'}$, and so $c=0$.
Thus $z=0$ and $\cX\cap\cY=\{0\}$.
All of this shows that if~$l=l'$ then $\ds(\cX,\,\cY)=2k\geq\max\{d_1,d_2\}$, and this concludes the proof.
\end{proof}
We close the section with an example illustrating the construction.
\begin{exa}\label{E-PartSpread}
Let $k=3$.
In $\F_2^6$, choose the spread code $\cC_1=\orb(\F_{2^3})$.
This is a $(6,\,9,\,6,\,3)_2$ code, where the cardinality follows from Corollary~\ref{C-Fqk}.
In~$\F_2^7$ consider the cyclic orbit code $\cC'=\orb_{\alpha}(U)$, where~$\alpha$ is a primitive
element of~$\F_{2^7}$ and
\[
U=\begin{pmatrix}1&0&0&0&0&0&0\\0&1&0&0&1&0&1\\0&0&1&1&0&1&0\end{pmatrix}.
\]
Let~$M\in\GL_7(\F_2)$ be the companion matrix of the minimal polynomial of~$\alpha$.
Then one can check that the subset~$\cC_2$ of~$\cC'$ given by
\[
\cC_2=\big\{\im(U_2 M^j)\,\big|\, j\in\{0,2,5,10,20,23,57,72,75,91,95,109,113\}\big\}
\]
is a partial spread.
Thus~$\cC_2$ is a $(7,\,13,\,6,\,3)_2$ code.
Applying Theorem~\ref{T-PatchTwoCyclic} to~$\cC_1$ and~$\cC_2$ results in a
$(13,\,1165,\,6,\,3)_2$ code~$\widetilde{\cC}$, where the cardinality
stems from $9+13+9(2^7-1)=1165$.
This cardinality comes remarkably close to the upper bound $(2^{13}-2)/7-1=1169$ for
partial~$3$-spreads in~$\F_2^{13}$, see~\cite[Thm.~5]{EJSSS10}.
In fact, the difference of~$4$ is due to the fact that the partial spread~$\cC_2$ in~$\F_2^7$ is not maximal.
Again with~\cite{EJSSS10} it is known that one can find partial spreads in~$\F_2^7$ with~$17$ elements, as opposed to our
$13=|\cC_2|$.
While for $k=3$ and~$q=2$ maximal partial spreads are known for any~$n$ due to~\cite{EJSSS10}, we believe that
Theorem~\ref{T-PatchTwoCyclic} bears promising potential for larger~$k$.
\end{exa}
\section{Conclusion and Open Problems}
We have presented a detailed study of cyclic orbit codes based on the stabilizer subfield.
As has become clear, these codes have a rich structure which allows us to infer properties such as cardinality
and estimates on the distance.
While cyclic orbit codes themselves are in general not very large, taking unions of such codes
has resulted in constant dimension codes whose cardinalities come very close to known bounds.
Codes of this form have been found earlier in~\cite{EtVa11,KoKu08} via computer search; see the introduction of this paper
for further details.
Unfortunately, no systematic construction of unions of cyclic orbit codes or other orbit codes is known so far.
This and other observations in the previous sections lead to the following open problems.
\begin{arabiclist}
\item Find constructions of good cyclic subspace codes.
In other words, find systematic ways to take unions of cyclic orbit codes without decreasing the distance.
\item Cyclic orbit codes with maximum distance, that is, $2k$, are spread codes (Corollary~\ref{C-Spread}) and thus well understood.
The best distance a non-spread cyclic orbit code of dimension~$k$ can attain is thus $2(k-1)$, but a construction
of such codes is not yet known.
Therefore we formulate:
For given~$n$ and $k\leq n/2$ construct cyclic orbit codes of length~$n$, dimension~$k$ and distance
$2(k-1)$ or prove that no such code exists. See also Example~\ref{E-k3SB} and the paragraph right before Example~\ref{E-bounds}.
\item Make use of the algebraic structure of cyclic orbit codes in order to find efficient decoding algorithms.
This has been initiated already in~\cite{EKW10,TMBR13}, but as discussed in the conclusion of~\cite{TMBR13} further research is needed.
\item Use other groups for constructing orbit codes. For instance, in~\cite{BEOVW13} the authors discovered
non-trivial $q$-analogs of Steiner systems by testing unions of subspace orbits under the action of
the normalizer group of $\F_{q^n}^*$ in $\GL(n,\F_q)$
(thus under the combined action of~$\F_{q^n}^*$ and the Galois group $\text{Gal}(\F_{q^n}\mid\F_q)$).
\item In Section~\ref{S-Link} we have shown a linkage construction for general constant dimension codes and an improved version for
cyclic orbit codes.
We believe that this construction can be further enhanced by using suitable constituent codes.
\end{arabiclist}
\bibliographystyle{abbrv}
\section*{Introduction}
The study of the Iwasawa theory of symmetric powers of CM modular
forms at supersingular primes was begun by the first author and
Antonio Lei in \cite{HL}. They constructed two types of $p$-adic
$L$-functions: ``admissible'' ones in the sense of Panchishkin and
Dabrowski, and ``plus and minus'' ones in the sense of Pollack. They
also constructed ``plus and minus'' Selmer modules in the sense of
Kobayashi, and, using Kato's Euler system, they compared them to the
latter $p$-adic $L$-functions via one divisibility in a main
conjecture. The present paper performs the analogous comparison
between the admissible $p$-adic $L$-functions and the ``finite-slope''
Selmer modules in the sense of the second author. In order to get an
identity of characteristic ideals, rather than just a divisibility, we
improve the work of Rubin on the Main Conjecture of Iwasawa theory for
imaginary quadratic fields at inert primes \cite{R,R2} to give an
equality unconditionally. Rubin's work has since been used by various
authors to derive other divisibilities; an examination of these
derivations will show that our work upgrades most of these
divisibilities to identities.
The first section of this paper is written as a direct continuation of
\cite{HL}; all numbered references to equations, theorems, etc.\ in it
are to the two papers taken together, except for bibliographical citations,
which are to the references section here. In this section, we recall
the relevant setup from \cite{HL}, as well as the theory of
finite-slope Selmer groups from \cite{P}. Then we give our results
about finite-slope Selmer modules of CM modular forms and their
symmetric powers at supersingular primes. The second section is
written independently of \cite{HL} and the first section. In it we
recall the notations from \cite{R,R2} and then treat the Iwasawa
theory of imaginary quadratic fields at inert primes.
\subsection*{Acknowledgement}
The authors would like to thank Robert Pollack and Karl Rubin for
helpful conversations and correspondence.
\section{CM modular forms and their symmetric powers}
\subsection{Notations and hypotheses of \cite{HL}}
The prime $p$ is assumed odd. We fix algebraic closures and
embeddings $\iota_\infty \cn \ov\bbQ \to \bbC$ and $\iota_p \cn
\ov\bbQ \to \ov\bbQ_p$, and use these for the definition of Galois
groups and decomposition groups. In particular, we write $c \in
\Gal(\ov\bbQ/\bbQ)$ for the complex conjugation induced by
$\iota_\infty$.
We normalize reciprocity maps of class field theory to send
uniformizers to arithmetic Frobenius elements. If $E/\bbQ_p$ is a
finite extension, we normalize duals of $E$-linear Galois
representations by $V^* = \Hom_E(V,E(1))$, and Fontaine's functors by
$\bbD_\cris(V) = \Hom_{\bbQ_p[G_{\bbQ_p}]}(V,\bbB_\cris)$ and
$\wt\bbD_\cris(V) = (\bbB_\cris \otimes_{\bbQ_p} V)^{G_{\bbQ_p}}$.
For $n \leq \infty$ we write $k_n = \bbQ(\mu_{p^n})$ and $\bbQ_{p,n} =
\bbQ_p(\mu_{p^n})$. The cyclotomic character $\chi$ induces an
isomorphism $G_\infty := \Gal(\bbQ_{p,\infty}/\bbQ_p) \cong
\bbZ_p^\times$, and $G_\infty$ factors uniquely as $\Delta \times \Ga$
in such a way that $\chi$ induces isomorphisms $\Delta \cong
\mu_{p-1}$ and $\Ga \cong 1+p\bbZ_p$. We fix a topological generator
$\ga_0$ of $\Ga$.
For a finite extension $E$ of $\bbQ_p$ and $G=G_\infty,\Ga$, we write
$\La_{\calO_E}(G) = \calO_E[\![G]\!]$ for the Iwasawa algebra of $G$
with coefficients in $\calO_E$ and we write $\La_E(G) =
\La_{\calO_E}(G)\otimes_{\calO_E} E$. We let $\calH_{r,E}(G)$ be the
$E$-valued $r$-tempered distributions on $G$ for $r \in \bbR_{\geq 0}$,
and $\calH_{\infty,E}(G) = \bigcup_r \calH_{r,E}(G)$. These objects
are stable under the involution $\iota$ (resp.\ twisting operator
$\Tw_n$ for $n \in \bbZ$) obtained by continuity and $E$-linearity by
the rule $\sigma \mapsto \sigma^{-1}$ (resp.\ $\sigma \mapsto
\chi(\sigma)^n\sigma$) on group elements $\sigma \in G$. If $G$ acts
on $M$, then $M^\iota$ denotes $M$ with $G$ action composed through
$\iota$.
We fix an imaginary quadratic field $K \subset \ov\bbQ$, considered as
a subfield of $\bbC$ via $\iota_\infty$, with ring of integers $\calO$
and quadratic character $\ep_K \cn \Gal(K/\bbQ) \cong \{\pm1\}$. We
assume $p$ is inert in $K$, i.e.\ $\ep_K(p)=-1$, and write $\calO_p$
(resp.\ $K_p$) for the completion of $\calO$ (resp.\ $K$) at $p\calO$.
We fix a newform $f$ of weight $k \geq 2$, level $\Ga_1(N)$ with $p
\nmid N$, character $\ep$, and CM by $K$. We write $\psi$ and $\psi^c
= \psi \circ c$ for the algebraic Hecke characters of $K$ associated
to $f$, and order them to have types $(k-1,0)$ and $(0,k-1)$,
respectively. We write $E$ for a finite extension of $\bbQ_p$
containing $\iota_p\iota_\infty^{-1}\psi(\bbA_{K,f}^\times)$. Note
that $E$ contains $\iota_p(K)$ and the images of the coefficients of
$f$ under $\iota_p\iota_\infty^{-1}$. We write $V_\psi$ for the
one-dimensional $E$-linear Galois representation attached to $\psi$,
so that when $v \nmid p\cond(\psi)$ the action of $\Frob_v$ on
$V_\psi$ is by multiplication by $\psi(v)$. We write $V_f$ for the
$E$-linear dual of the two-dimensional Galois representation
associated to $f$ by Deligne, with structure map $\rho_f \cn G_\bbQ
\to \GL(V_f)$, satisfying $\det(\rho_f) = \ep\chi^{k-1}$. One has
$V_f \cong \Ind_K^\bbQ V_\psi$. Since $p$ is inert in $K$, the
comparison of $L$-factors between $f$ and $\psi$ gives
$x^2-a_p(f)x+\ep(p)p^{k-1} = x^2-\psi(p)$, and in particular
$a_p(f)=0$ so that $f$ is nonordinary at $p$. After perhaps enlarging
$E$, we fix a root $\alpha \in E$ of this polynomial, so that the
other root is $\ov\alpha=-\alpha$, and
\[
\psi(p) = \psi^c(p) = -\ep(p)p^{k-1} = -\alpha\ov\alpha = \alpha^2 =
\ov\alpha^2.
\]
Let $m \geq 1$ be an integer, and write $r = \lfloor m/2 \rfloor$ and
$\wt r = \lceil m/2 \rceil$. We define $V_m = \Sym^m(V_f) \otimes
\det(\rho_f)^{-r}$. There exist newforms $f_i$ for $0 \leq i \leq \wt
r-1$ (Proposition~3.4), of respective weights $k_i = (m-2i)(k-1)+1$,
levels $\Ga_1(N_i)$ with $p \nmid N_i$, characters $\ep_i$, and having
CM by $K$ (in particular, they are nonordinary at $p$), such that
\[
V_m \cong
\bigoplus_{i=0}^{\wt r-1}
\left( V_{f_i} \otimes \chi^{(i-r)(k-1)} \right)
\oplus
\begin{cases}
\ep_K^r & m\text{ even}, \\
0 & m\text{ odd}.
\end{cases}
\]
As a consequence, the complex $L$-function (Corollary~3.5), Hodge
structure (Lemma~3.6), critical twists (Lemma~3.7), and structure of
$\bbD_\cris$ as a filtered $\vphi$-module (Lemmas~3.9 and 3.10), for
$V_m$ are all computed explicitly. The same computations show that
the roots of $x^2+\ep_i(p)p^{k_i-1}$ are
$\alpha_i,\ov\alpha_i=-\alpha_i$, where
\[
\alpha_i = \begin{cases}
p^{(r-i)(k-1)} & m\text{ even}, \\
\alpha p^{(r-i)(k-1)} & m\text{ odd}.
\end{cases}
\]
For $\eta$ a Dirichlet character of prime-to-$p$ conductor, we denote
by $L_\eta$ its $p$-adic $L$-function (Theorem~4.1), considered as an
element of $\La_{\calO_E}(G_\infty)$ if $\eta$ is nontrivial and of
$[(\ga_0-1)(\ga_0-\chi(\ga_0))]^{-1}\La_{\calO_E}(G_\infty)$ if $\eta$
is the trivial character ${\bf1}$. Let $\wt L_\eta \in
\La_{\calO_E}(G_\infty)$ then denote the regularized $p$-adic
$L$-function: if $\eta = {\bf1}$ then it is defined in \S5.2 by
removing the poles of $L_{\bf1}$, and otherwise it is defined to be
$L_\eta$. Since the roots $\alpha_i,\ov\alpha_i$ of
$x^2+\ep_i(p)p^{k_i-1}$ have $p$-adic valuation $h_i := \frac{k_i-1}2
< k_i-1$, there are $p$-adic $L$-functions
$L_{f_i,\alpha_i},L_{f_i,\ov\alpha_i} \in \calH_{h_i,E}(G_\infty)$
(Theorem~4.2). We let $\fkT$ denote the collection of tuples $\fkt =
(\fkt_0,\ldots,\fkt_{\wt r-1})$, where each $\fkt_i \in
\{\alpha_i,\ov\alpha_i\}$. For each $\fkt \in \fkT$, we define the
\emph{admissible $p$-adic $L$-functions}
\[
L_{V_m,\fkt} =
\iota \left(\prod_{i=0}^{\wt r-1} \Tw_{(r-i)(k-1)} L_{f_i,\fkt_i}\right)
\cdot
\begin{cases}
L_{\ep_K^r} & m\text{ even}, \\
1 & m\text{ odd},
\end{cases}
\]
as well as their regularized variants $\wt L_{V_m,\fkt}$ where
$L_{\ep_K^r}$ is replaced by $\wt L_{\ep_K^r}$. (The twist $\iota$
and the indexing are our only changes in conventions from \cite{HL}.
There, the index set $\fkS = \{\pm\}^{\wt r}$ is used, and $\fks \in
\fkS$ corresponds to $\fkt \in \fkT$ where $\fkt_i = \fks_i
p^{(r-i)(k-1)}$ if $m$ is even, and $\fkt_i = \fks_i \alpha
p^{(r-i)(k-1)}$ if $m$ is odd.) Just as in the case $m=1$, these
functions can be decomposed in terms of appropriate products of twists
of ``plus and minus'' logarithms and ``plus and minus'' $p$-adic
$L$-functions (Corollary~6.9); their trivial zeroes and
$\calL$-invariants are known (Theorem~6.13), using work of Benois.
Finally, for $\theta = {\bf 1},\ep_K$, recall the Selmer groups
$\Sel_{k_\infty}(A_\theta^*)$ of equation (8), whose Pontryagin duals
$\Sel_{k_\infty}(A_\theta^*)^\vee$ are finitely generated, torsion
$\La_{\calO_E}(G_\infty)$-modules.
\subsection{Finite-slope Selmer complexes}\label{S:selmer}
For $\calG=G_\infty,\Ga$, we write $\calH_E(\calG)$ for the $E$-valued
locally analytic distributions on $\calG$; explicitly, one has
\[
\calH_E(\Ga) =
\left\{
\sum_{n \geq 0} c_n \cdot (\ga_0-1)^n \in E[\![\ga_0-1]\!]
\cn
\lim_{n \to \infty} |c_n|s^n = 0\ \text{ for all } 0 \leq s < 1
\right\},
\]
and $\calH_E(G_\infty) = \calH_E(\Ga) \otimes_E E[\Delta]$. This ring
contains $\calH_{\infty,E}(\calG)$, and the subalgebra $\La_E(\calG)$
(hence also $\calH_{\infty,E}(\calG)$) is dense for a Fr\'echet
topology. Although the ring is not Noetherian, it is a product of
B\'ezout domains if $\calG=G_\infty$ (resp.\ is a B\'ezout domain if
$\calG=\Ga$) as well as a Fr\'echet--Stein algebra, so that the
coadmissible $\calH_E(\calG)$-modules (in the sense of \cite{ST}) form
an abelian full subcategory of all $\calH_E(\calG)$-modules.
Coadmissible $\calH_E(\calG)$-modules include the finitely generated
ones, and have similar properties to finitely generated modules over a
product of PIDs if $\calG = G_\infty$ (resp.\ over a PID if $\calG =
\Ga$), including a structure theory and a notion of characteristic
ideal. The algebra map $\La_E(\calG) \to \calH_E(\calG)$ is
faithfully flat so that the operation $M \mapsto M
\otimes_{\La_E(\calG)} \calH_E(\calG)$ is exact and fully faithful.
If $M$ is a finitely generated, torsion $\La_E(\calG)$-module, then
the operation is especially simple: the natural map $M
\xrightarrow{\otimes1} M \otimes_{\La_E(\calG)} \calH_E(\calG)$ is an
isomorphism, $\chr_{\calH_E(\calG)} M = (\chr_{\La_E(\calG)}
M)\calH_E(\calG)$, and, since $\calH_E(\calG)^\times =
\La_E(\calG)^\times$, all generators of this ideal actually belong to
$\chr_{\La_E(\calG)} M$.
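As a concrete illustration of the difference between $\La_E(\Ga)$ and $\calH_E(\Ga)$, consider the standard element $\log(\ga_0) = \sum_{n\geq1} \frac{(-1)^{n+1}}{n}(\ga_0-1)^n$: since $|1/n| \leq n$, one has $|c_n|s^n \to 0$ for all $0 \leq s < 1$, so $\log(\ga_0)$ lies in $\calH_E(\Ga)$; on the other hand its coefficients are unbounded, so it does not lie in $\La_E(\Ga)$.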
Write $S$ for the set of primes dividing $Np$, write $\bbQ_S$ for the
maximal extension of $\bbQ$ inside $\ov\bbQ$ unramified outside $S
\cup \{\infty\}$, and let $G_{\bbQ,S} = \Gal(\bbQ_S/\bbQ)$ denote the
corresponding quotient of $G_\bbQ$. Recall that $k_\infty \subset
\bbQ_S$, and that the natural map from $G_\infty$ to the quotient
$\Gal(k_\infty/\bbQ)$ of $G_{\bbQ,S}$ is an isomorphism; we henceforth
identify $G_\infty$ with this quotient of $G_{\bbQ,S}$. The embedding
$\iota_p$ determines a decomposition group $G_p \subset G_{\bbQ,S}$,
and choosing additional algebraic closures and embeddings $\iota_\ell
\cn \ov\bbQ \hookrightarrow \ov\bbQ_\ell$ similarly determines
decomposition groups $G_\ell \subset G_{\bbQ,S}$ for each $\ell \mid
N$. If $X$ is a continuous representation of $G_{\bbQ,S}$ and $G$ is
one of $G_{\bbQ,S}$ or $G_v$ with $v \in S$, we write $\bfR\Ga(G,X)$
for the class in the derived category of the complex of continuous
cochains of $G$ with coefficients in $X$, and we write $H^*(G,X)$ for
its cohomology.
We write $\La_E(G_\infty)^\iota$ (resp. $\calH_E(G_\infty)^\iota$) for
$\La_E(G_\infty)$ (resp.\ $\calH_E(G_\infty)$) considered with
$G_\infty$-action, and hence also $G_{\bbQ,S}$-action, with $g \in
G_\infty$ acting by multiplication by $g^{-1} \in G_\infty \subset
\La_E(G_\infty)^\times \subset \calH_E(G_\infty)^\times$. If $V$ is a
continuous $E$-linear $G_{\bbQ,S}$-representation, then its classical
Iwasawa cohomology over $G=G_{\bbQ,S},G_v$ ($v \in S$) is defined by
choosing a $G_{\bbQ,S}$-stable $\calO_E$-lattice $T \subset V$ and
forming $[\llim_n H^*(G \cap \Gal(\bbQ_S/\bbQ(\mu_{p^n})),T)]
\otimes_{\calO_E} E$; a variant of Shapiro's lemma identifies it with
$H^*(G,V \otimes_E \La_E(G_\infty)^\iota)$, and in particular it is
canonically independent of the choice of lattice $T$. The natural map
\[
H^*(G,V \otimes_E \La_E(G_\infty)^\iota)
\otimes_{\La_E(G_\infty)} \calH_E(G_\infty)
\to
H^*(G,V \otimes_E \calH_E(G_\infty)^\iota)
\]
is an isomorphism. We define $\bfR\Ga_\Iw(G,V) = \bfR\Ga(G,V
\otimes_E \calH_E(G_\infty)^\iota)$ and $H^*_\Iw(G,V) = H^*(G,V
\otimes_E \calH_E(G_\infty)^\iota)$. We refer to $H^*_\Iw(G,V)$ as
the \emph{rigid analytic Iwasawa cohomology}, or, because we have no
use for classical Iwasawa cohomology in this paper, simply the
\emph{Iwasawa cohomology}. Iwasawa cohomology groups are coadmissible
$\calH_E(G_\infty)$-modules.
There is an equivalence of categories $V \mapsto \bbD_\rig(V)$ between
continuous $E$-linear $G_p$-representations and
$(\vphi,G_\infty)$-modules over $\calR_E = \calR \otimes_{\bbQ_p} E$,
where $\calR$ is the Robba ring. Given any $(\vphi,G_\infty)$-module
$D$ over $\calR$, we define $\bfR\Ga_\Iw(G_p,D)$ to be the class of
\[
[D \xrightarrow{1-\psi} D]
\]
in the derived category, where $\psi$ is the canonical left inverse to
$\vphi$ and the complex is concentrated in degrees $1,2$, and we
define $H^*_\Iw(G_p,D)$ to be its cohomology, referring to the latter
as the \emph{Iwasawa cohomology} of $D$. These Iwasawa cohomology
groups are also coadmissible $\calH_E(G_\infty)$-modules. Note the
comparison
\[
\bfR\Ga_\Iw(G_p,V) \cong \bfR\Ga_\Iw(G_p,\bbD_\rig(V)).
\]
We define $\wt\bbD_\cris(D) = D[1/t]^{G_\infty}$ and $\bbD_\cris(D) =
\wt\bbD_\cris(\Hom_{\calR_E}(D,\calR_E))$ (where $t \in \calR$ is
Fontaine's $2\pi i$), and we say that $D$ is crystalline if $\dim_E
\bbD_\cris(D) = \rank_{\calR_E} D$. Note the comparisons
\[
\bbD_\cris(V) \cong \bbD_\cris(\bbD_\rig(V)),
\qquad
\wt\bbD_\cris(V) \cong \wt\bbD_\cris(\bbD_\rig(V)).
\]
The functor $\wt\bbD_\cris$ provides an exact, rank-preserving
equivalence of exact $\otimes$-categories with Harder--Narasimhan
filtrations, from crystalline $(\vphi,G_\infty)$-modules over
$\calR_E$ to filtered $\vphi$-modules over $E$, under which those
$(\vphi,G_\infty)$-modules of the form $\bbD_\rig(V)$ correspond to
the weakly admissible filtered $\vphi$-modules. In particular, if we
tacitly equip any $E[\vphi]$-submodule of a filtered $\vphi$-module
with the induced filtration, then for $D$ crystalline $\wt\bbD_\cris$
induces a functorial, order-preserving bijection
\[
\{\text{$t$-saturated $(\vphi,G_\infty)$-submodules of }D\}
\leftrightarrow
\{\text{$E[\vphi]$-stable subspaces of }\wt\bbD_\cris(D)\}.
\]
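For example, in the situation of the next subsection,
$\wt\bbD_\cris(\bbD_\rig(V_f))$ is two-dimensional with distinct
$\vphi$-eigenvalues $\alpha\inv,\ov\alpha\inv$, and the bijection
matches its two eigenlines with the two rank-one, $t$-saturated
$(\vphi,G_\infty)$-submodules of $\bbD_\rig(V_f)$; the submodule
$D^+_F$ entering the local condition at $p$ below arises in exactly
this way.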
In the remainder of this subsection, we assume given a continuous
$E$-representation $V$ of $G_{\bbQ,S}$ that is crystalline at $p$, as
well as a fixed $E[\vphi]$-stable $F \subseteq \bbD_\cris(V|_{G_p})$,
and we associate to these data an Iwasawa-theoretic Selmer complex.
We begin by defining a local condition for each $v \in S$, by which we
mean an object $U_v$ in the derived category together with a morphism
$i_v \cn U_v \to \bfR\Ga_\Iw(G_v,V)$. If $v \neq p$, we denote by
$I_v \subset G_v$ the inertia subgroup, and we let $U_v =
\bfR\Ga_\Iw(G_v/I_v,V^{I_v})$ and $i_v$ be the inflation map. If $v =
p$, we write $F^\perp \subseteq \wt\bbD_\cris(V)$ for the orthogonal
complement of $F$, and then $D^+_F := \wt\bbD_\cris^{-1}(F^\perp)
\subseteq \bbD_\rig(V)$ and $D^-_F = \bbD_\rig(V)/D^+_F$. Then we let
$U_v = \bfR\Ga_\Iw(G_p,D^+_F)$, and we let $i_v$ be the functorial map
to $\bfR\Ga_\Iw(G_p,\bbD_\rig(V)) \cong \bfR\Ga_\Iw(G_p,V)$.
We now define the \emph{Selmer complex} $\bfR\wt\Ga_{F,\Iw}(\bbQ,V)$
to be the mapping fiber of the morphism
\[
\bfR\Ga_\Iw(G_{\bbQ,S},V)
\oplus
\bigoplus_{v \in S} U_v
\xrightarrow{\bigoplus_{v \in S} \res_v - \bigoplus_{v \in S} i_v}
\bigoplus_{v \in S} \bfR\Ga_\Iw(G_v,V),
\]
where $\res_v \cn \bfR\Ga_\Iw(G_{\bbQ,S},X) \to \bfR\Ga_\Iw(G_v,X)$
denotes restriction of cochains to the decomposition group. We write
$\wt H^*_{F,\Iw}(\bbQ,V)$ for its cohomology, referring to these as
the \emph{extended Selmer groups}.  Then $\bfR\wt\Ga_{F,\Iw}(\bbQ,V)$
is a perfect complex of $\calH_E(G_\infty)$-modules, with amplitude
contained in $[0,3]$.
We will have need for a version without imposing local conditions at
$p$. Namely, we write $\bfR\wt\Ga_{(p),\Iw}(\bbQ,V)$ for the mapping
fiber of
\[
\bfR\Ga_\Iw(G_{\bbQ,S},V)
\oplus
\bigoplus_{v \in S^{(p)}} U_v
\xrightarrow{\bigoplus_{v \in S^{(p)}} \res_v
- \bigoplus_{v \in S^{(p)}} i_v}
\bigoplus_{v \in S^{(p)}} \bfR\Ga_\Iw(G_v,V),
\]
where $S^{(p)} = S \bs \{p\}$, and we write $\wt
H^*_{(p),\Iw}(\bbQ,V)$ for its cohomology. Bearing in mind the exact
triangle
\[
\bfR\Ga_\Iw(G_p,D^+_F) \to \bfR\Ga_\Iw(G_p,V) \to
\bfR\Ga_\Iw(G_p,D^-_F) \to \bfR\Ga_\Iw(G_p,D^+_F)[1],
\]
we deduce from the definitions of the Selmer complexes an exact
triangle
\begin{equation}\label{E:remove-p}
\bfR\wt\Ga_{F,\Iw}(\bbQ,V) \to \bfR\wt\Ga_{(p),\Iw}(\bbQ,V) \to
\bfR\Ga_\Iw(G_p,D^-_F) \to \bfR\wt\Ga_{F,\Iw}(\bbQ,V)[1].
\end{equation}
\subsection{The Main Conjecture for $f$ and its symmetric powers}
We remind the reader of the fixed newform $f$ of weight $k$, level
$\Ga_1(N)$ with $p \nmid N$ and character $\ep$, with CM by $K$, and
the roots $\alpha,\ov\alpha$ of $x^2 + \ep(p)p^{k-1}$.
Since the elements $\alpha,\ov\alpha$ are distinct, the
$\vphi$-eigenspace with eigenvalue $\alpha$ determines an
$E[\vphi]$-stable subspace $F_\alpha \subseteq \bbD_\cris(V_f)$.  We
apply the constructions of Iwasawa-theoretic extended Selmer groups,
with their associated ranks and characteristic ideals, to the data of
$V_f$ equipped with $F_\alpha$.
The following is the ``finite-slope'' form of the Main Conjecture of
Iwasawa theory for $f$.
\begin{theorem}\label{t:MC}
Assume that $p$ does not divide the order of the nebentypus $\ep$.
The coadmissible $\calH_E(G_\infty)$-module $\wt
H^2_{F_\alpha,\Iw}(\bbQ,V_f)$ is torsion, and
\[
\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
=
(\Tw_{-1} L_{f,\alpha}).
\]
\end{theorem}
\begin{proof}
We reproduce the argument of \cite[\S5]{P2}, adapted to the
normalizations of this paper.
In the notation of \S\ref{S:selmer}, the object $D^-_{F_\alpha}$ is
crystalline, and $\wt\bbD_\cris(D^-_{F_\alpha})$ has
$\vphi$-eigenvalue $\alpha\inv$ and Hodge--Tate weight $0$. This
implies that $H^2_\Iw(G_p,D^-_{F_\alpha}) = 0$. (If $k$ is odd,
$\ep(p)=-1$, and $\alpha=+p^{(k-1)/2}$ then
$H^1_\Iw(G_p,D^-_{F_\alpha})_\tors \cong E(\chi^{(k-1)/2})$ is
nonzero, but this ``exceptional zero'' does not affect the present
proof.)
Write $f^c = f \otimes \ep\inv$ for the eigenform with Fourier
coefficients complex conjugate to those of $f$, and recall the duality
$\Hom_E(V_{f^c},E) \cong V_f(1-k)$. Let $z'_{f^c} \in \wt
H^1_{(p),\Iw}(\bbQ,\Hom_E(V_{f^c},E))$ denote Kato's zeta element
derived from elliptic units (denoted $z_\ga^{(p)}(f^*)$ for suitable
$\ga \in \Hom_E(V_{f^c},E)$ in \cite{K}), and let
\[
z_f = \Tw_{k-1} z'_{f^c} \in
\wt H^1_{(p),\Iw}(\bbQ,\Hom_E(V_{f^c},E)(k-1)) \cong
\wt H^1_{(p),\Iw}(\bbQ,V_f).
\]
For a crystalline $(\vphi,G_\infty)$-module $D$ satisfying
$\Fil^1\bbD_\dR(D) = 0$, recall the dual of the big exponential map
treated in \cite[\S3]{Nak}:
\[
\Exp^*_{D^*}
\cn
H^1_\Iw(G_p,D)
\to
\wt\bbD_\cris(D) \otimes_E \calH_E(G_\infty).
\]
By naturality in $D$, there is a commutative diagram
\[\begin{array}{r@{\ }c@{\ }c@{\ }c}
\wt H^1_{(p),\Iw}(\bbQ,V_f)
\xrightarrow{\loc_{V_f}} H^1_\Iw(G_p,V_f)
\cong
& H^1_\Iw(G_p,\bbD_\rig(V_f))
& \xrightarrow{\Col_\alpha} &
H^1_\Iw(G_p,D^-_{F_\alpha}) \\
& \Exp^*_{V_f^*}\downarrow\phantom{\Exp^*_{V_f^*}} &
& \phantom{\Exp^*_{D^{-,*}_{F_\alpha}}} \downarrow \Exp^*_{D^{-,*}_{F_\alpha}} \\
& \wt\bbD_\cris(V_f) \otimes_E \calH_E(G_\infty)
& \to &
\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty).
\end{array}\]
Write $\loc_\alpha = \Col_\alpha \circ \loc_{V_f}$, where the maps
$\loc_{V_f}$ and $\Col_\alpha$ are as in the preceding diagram.
Identifying $\wt\bbD_\cris(D_{F_\alpha}^-) = \Hom_E(Ee_\alpha,E)$,
\cite[Theorem~16.6(2)]{K} shows that
\begin{equation}\label{E:compute-kato}
(\Tw_1 \Exp^*_{D^{-,*}_{F_\alpha}} \loc_\alpha z_f)(e_\alpha)
=
(\Exp^*_{\Hom_E(V_f,E)} \loc_{V_f(1)} \Tw_1 z_f)(e_\alpha)
=
L_{f,\alpha},
\end{equation}
after perhaps rescaling $e_\alpha$. In particular, $\loc_\alpha$ is a
nontorsion morphism.
The exact triangle \eqref{E:remove-p} gives rise to an exact sequence
\begin{multline*}
0
\to
\wt H^1_{F_\alpha,\Iw}(\bbQ,V_f)
\to
\wt H^1_{(p),\Iw}(\bbQ,V_f)
\xrightarrow{\loc_\alpha}
H^1_\Iw(G_p,D^-_{F_\alpha}) \\
\to
\wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
\to
\wt H^2_{(p),\Iw}(\bbQ,V_f)
\to
0.
\end{multline*}
It follows from \cite[Theorem~12.4]{K} that the finitely generated
$\calH_E(G_\infty)$-module $\wt H^1_{(p),\Iw}(\bbQ,V_f)$ (resp.\ $\wt
H^2_{(p),\Iw}(\bbQ,V_f)$) is free of rank $1$ (resp.\ is torsion).
The local Euler--Poincar\'e formula gives $\rank_{\calH_E(G_\infty)}
H^1_\Iw(G_p,D^-_{F_\alpha}) = \rank_{\calR_E} D^-_{F_\alpha} = 1$, and
$\loc_\alpha$ is nontorsion, so its kernel $\wt
H^1_{F_\alpha,\Iw}(\bbQ,V_f)$ is a torsion submodule of a free module
and hence zero, while its cokernel is torsion.  We thus see from the
preceding exact sequence that $\wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)$ is
torsion, and
\begin{multline*}
\left(\chr_{\calH_E(G_\infty)}
\frac{\wt H^1_{(p),\Iw}(\bbQ,V_f)}{\calH_E(G_\infty)z_f}\right)
\left(\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)\right) \\
=
\left(\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}\right)
\left(\chr_{\calH_E(G_\infty)} \wt H^2_{(p),\Iw}(\bbQ,V_f)\right).
\end{multline*}
Applying $\Tw_{k-1}$ to the claim of \cite[Theorem~12.5(3)]{K} with
$f^*$ in place of $f$, we deduce that
\[
\chr_{\calH_E(G_\infty)}
\frac{\wt H^1_{(p),\Iw}(\bbQ,V_f)}{\calH_E(G_\infty)z_f}
=
\chr_{\calH_E(G_\infty)} \wt H^2_{(p),\Iw}(\bbQ,V_f),
\]
and therefore
\[
\chr_{\calH_E(G_\infty)} \wt H^2_{F_\alpha,\Iw}(\bbQ,V_f)
=
\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}.
\]
Although only a divisibility of characteristic ideals is claimed by
Kato, one easily checks that his proof, especially
\cite[Proposition~15.17]{K}, gives an equality whenever Rubin's method
gives an equality. Under the hypothesis that $\ep$ has order prime to
$p$, the required extension of Rubin's work is precisely
Theorem~\ref{T:rubin} below. It remains to compute the right hand
side of the last identity. In fact, one has the exact sequence
\begin{multline*}
0 \to
H^1_\Iw(G_p,D^-_{F_\alpha})_\tors
\to
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f} \\
\xrightarrow{\Exp^*_{D^{-,*}_{F_\alpha}}}
\frac{\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty)}
{\calH_E(G_\infty)\Exp^*_{D^{-,*}_{F_\alpha}}\loc_\alpha z_f}
\to
\coker \Exp^*_{D^{-,*}_{F_\alpha}}
\to 0,
\end{multline*}
and because $D^-_{F_\alpha}$ has Hodge--Tate weight zero and
$H^2_\Iw(G_p,D^-_{F_\alpha})=0$, \cite[Theorem~3.21]{Nak} shows that
\[
\chr_{\calH_E(G_\infty)} H^1_\Iw(G_p,D^-_{F_\alpha})_\tors
=
\chr_{\calH_E(G_\infty)} \coker \Exp^*_{D^{-,*}_{F_\alpha}},
\]
and hence
\[
\chr_{\calH_E(G_\infty)}
\frac{H^1_\Iw(G_p,D^-_{F_\alpha})}{\calH_E(G_\infty)\loc_\alpha z_f}
=
\chr_{\calH_E(G_\infty)}
\frac{\wt\bbD_\cris(D^-_{F_\alpha}) \otimes_E \calH_E(G_\infty)}
{\calH_E(G_\infty)\Exp^*_{D^{-,*}_{F_\alpha}}\loc_\alpha z_f}.
\]
Finally, \eqref{E:compute-kato} shows that the right hand side above
is generated by $\Tw_{-1} L_{f,\alpha}$.
\end{proof}
We now turn to the Main Conjecture of Iwasawa theory for $V_m$ in its
``finite-slope'' form, beginning with two remarks. First, we remind
the reader that since $\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee$ is a
finitely generated, torsion $\La_{\calO_E}(G_\infty)$-module, it
follows that
\[
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p]
=
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee
\otimes_{\La_{\calO_E}(G_\infty)} \La_E(G_\infty)
\stackrel\sim\to
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee
\otimes_{\La_{\calO_E}(G_\infty)} \calH_E(G_\infty),
\]
and therefore $\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p]$ is naturally
a finitely generated (hence coadmissible), torsion
$\calH_E(G_\infty)$-module. Second, for $\fks \in \fkS$ we note that
the ``plus and minus'' Iwasawa-theoretic Selmer groups satisfy the
arithmetic duality
\[
H^{1,\fks_i}_f(k_\infty,A_{f_i}^*((r-i)(k-1)))^\vee[1/p]
\cong
\wt H^2_{\fks_i,\Iw}(\bbQ,V_{f_i}((i-r)(k-1)))^\iota,
\]
where $\wt H^2_{\fks_i,\Iw}$ denotes the cohomology of an
Iwasawa-theoretic Selmer complex with local condition at $p$
appropriately built from the choice $\fks_i$. These isomorphic
modules are also finitely generated (hence coadmissible), torsion
$\calH_E(G_\infty)$-modules, by Theorem~5.6.
With the preceding remarks in mind, what follows is the finite-slope
analogue of Definition~5.3. Fix $\fkt = (\fkt_0,\ldots,\fkt_{\wt
r-1}) \in \fkT$. For each $i=0,\ldots,\wt r-1$, the elements
$\alpha_i,\ov\alpha_i$ are distinct, so the $\vphi$-eigenspace with
eigenvalue $\fkt_i p^{(r-i)(k-1)}$ determines an $E[\vphi]$-stable
subspace $F_i \subseteq \bbD_\cris(V_{f_i}((i-r)(k-1)))$. We may
apply the constructions of Iwasawa-theoretic extended Selmer groups,
with their associated ranks and characteristic ideals, to the data of
$V_{f_i}((i-r)(k-1))$ equipped with $F_i$.
\begin{definition}\label{d:Selmer}
For $\fkt \in \fkT$, we define the coadmissible
$\calH_E(G_\infty)$-module
\[
\Sel_{k_\infty}^\fkt(V_m^*)^\vee
:=
\left(\bigoplus_{i=0}^{\wt r-1}
\wt H^2_{F_i,\Iw}\left(\bbQ,V_{f_i}((i-r)(k-1))\right)^\iota\right)
\oplus
\begin{cases}
\Sel_{k_\infty}(A_{\ep_K^r}^*)^\vee[1/p] & m\text{ even}, \\
0 & m\text{ odd}.
\end{cases}
\]
\end{definition}
\begin{remark}
Although the notation $\Sel_{k_\infty}^\fkt(V_m^*)^\vee$ in the
finite-slope case was chosen for symmetry with
$\Sel_{k_\infty}^\fks(A_m^*)^\vee[1/p]$ in the ``plus and minus''
case, this notation is highly misleading: it is an essential feature
of the finite-slope theory that $\Sel_{k_\infty}^\fkt(V_m^*)^\vee$ is
coadmissible but typically \emph{not} finitely generated over
$\calH_E(G_\infty)$, and therefore does not arise as the Pontryagin
dual (with $p$ inverted) of direct limits of finite-layer objects, as
$\Sel_{k_\infty}^\fks(A_m^*)^\vee[1/p]$ does. This fact forces us to
work on the other side of arithmetic duality, as in the first summand
above.
\end{remark}
\begin{theorem}\label{t:MC2}
For all $\fkt \in \fkT$, the coadmissible $\calH_E(G_\infty)$-module
$\wt H^2_{\fkt,\Iw}(\bbQ,V_m)$ is torsion, and
\[
\chr_{\calH_E(G_\infty)} \Sel_{k_\infty}^\fkt(V_m^*)^\vee
=
(\Tw_1 \wt L_{V_m,\fkt}).
\]
\end{theorem}
\begin{proof}
Just as in the proof of Theorem~5.9, this theorem follows from
Theorem~5.5 and from Theorem~\ref{t:MC} applied to each $f_i$.
\end{proof}
\section{The Main Conjecture for imaginary quadratic fields at inert
primes}
In the fundamental works \cite{R,R2}, Rubin perfected the Euler system
method for elliptic units. From this he deduced a divisibility of
characteristic ideals as in the Main Conjecture of Iwasawa theory. In
most cases, he used the analytic class number formula to promote the
divisibilities to identities. In this section we extend the use of
the analytic class number formula to the remaining cases. The
obstruction in these problematic cases is that the control maps on
global/elliptic units and class groups are far from being
isomorphisms. Our approach is to use base change of Selmer complexes
to get a precise description of the failure of control, and then to
apply a characterization of $\mu$- and $\la$-invariants that is valid
even in the presence of zeroes of the characteristic ideal at
finite-order points. This section is written independently of the
preceding notations and hypotheses of this paper and \cite{HL}; we
employ notations as in \cite{R}, recalled as follows.
We take $K$ to be an imaginary quadratic field, and $p$ an odd prime
inert in $K$. Let $K_0$ be a finite abelian extension with $\Delta =
\Gal(K_0/K)$ and $\de = [K_0:K]$, and assume that $p \nmid \de$. Let
$K_\infty$ be an abelian extension of $K$ containing $K_0$, such that
$\Ga = \Gal(K_\infty/K_0)$ is isomorphic to $\bbZ_p$ or $\bbZ_p^2$.
One has $\scrG = \Gal(K_\infty/K) = \Delta \times \Ga$. Accordingly,
$K_\infty = K_0 \cdot K_\infty^\Delta$, where
$\Gal(K_\infty^\Delta/K)$ is identified with $\Ga$.
We write $\La = \La(\scrG) =\bbZ_p[\![\scrG]\!]$. The letter $\eta$
will always range over the irreducible $\bbZ_p$-representations of
$\Delta$. One has $\bbZ_p[\Delta] = \bigoplus_\eta
\bbZ_p[\Delta]^\eta$, where $\bbZ_p[\Delta]^\eta$ is isomorphic to the
ring of integers in the unramified extension of $\bbQ_p$ of degree
$\dim(\eta)$, and, accordingly, $\La = \bigoplus_\eta
\bbZ_p[\Delta]^\eta[\![\Ga]\!]$. The sum map $\summ \cn
\bbZ_p[\Delta] \to \bbZ_p$, $\sum_\sigma n_\sigma\sigma \mapsto
\sum_\sigma n_\sigma$, is identified with the projection onto the
component $\bbZ_p[\Delta]^{\bf1}$ indexed by the trivial character
${\bf1}$; write $\bbZ_p[\Delta]^!$ for the kernel of the sum map,
which is equal to $\bigoplus_{\eta\neq{\bf1}} \bbZ_p[\Delta]^\eta$,
and satisfies $\bbZ_p[\Delta] = \bbZ_p[\Delta]^{\bf1} \oplus
\bbZ_p[\Delta]^!$.
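To fix ideas with a hypothetical choice of parameters: if $\Delta$ is
cyclic of order $3$ and $p \equiv 2 \pmod 3$, then
\[
\bbZ_p[\Delta] \cong \bbZ_p[x]/(x^3-1)
\cong \bbZ_p \times \bbZ_p[x]/(x^2+x+1),
\]
the second factor being the ring of integers in the unramified
quadratic extension of $\bbQ_p$; here $\eta$ ranges over two
representations, of dimensions $1$ and $2$, and $\bbZ_p[\Delta]^!
\cong \bbZ_p[x]/(x^2+x+1)$.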
For $\{a_n\}$ a sequence of positive real numbers, if there exist real
numbers $\mu,\la$ such that $\log_p a_n = \mu p^n + \la n + O(1)$ as
$n \to +\infty$, then these numbers $\mu,\la$ are uniquely determined
by $\{a_n\}$, and we write $\mu = \mu(\{a_n\})$ and $\la =
\la(\{a_n\})$.
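For example, $a_n = p^n$ has $\mu(\{a_n\}) = 0$ and $\la(\{a_n\}) =
1$; the sequence $a_n = p^{p^n}$ has $\mu = 1$ and $\la = 0$; and a
bounded sequence has $\mu = \la = 0$.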
\begin{lemma}\label{L:numerics}
Assume that $\Ga$ is isomorphic to $\bbZ_p$, and let $M$ be a finitely
generated, torsion $\bbZ_p[\![\Ga]\!]$-module. Then for $n \gg 0$ the
quantity $\rank_{\bbZ_p} M_{\Ga^{p^n}}$ stabilizes to some integer $r
\geq 0$, so that $M_{\Ga^{p^n}} \approx \bbZ_p^{\oplus r} \oplus
M_{\Ga^{p^n}}[p^\infty]$, and Iwasawa's $\mu$- and $\la$-invariants of
$M$ satisfy $\mu(M) = \mu(\{\#M_{\Ga^{p^n}}[p^\infty]\})$ and $\la(M)
= r + \la(\{\#M_{\Ga^{p^n}}[p^\infty]\})$.
\end{lemma}
\begin{proof}
One easily sees that if $M \to M'$ is a pseudo-isomorphism, then both
sides of the desired identities are invariant under replacing $M$ by
$M'$. Using the structure theorem and additivity over direct sums, it
therefore suffices to check the case where $M = \bbZ_p[\![\Ga]\!]/(f)$
for prime $f \in \bbZ_p[\![\Ga]\!]$. The case where $f$ is relatively
prime to all the augmentation ideals $I(\Ga^{p^k}) = (f_k)$ of
$\Ga^{p^k}$ for $k \geq 0$, or equivalently where $r=0$, is
well-known. The remaining case is where $f = f_k/f_{k-1}$ for $k \geq
0$ (we set $f_{-1}=1$), whence one has
\[
(\bbZ_p[\![\Ga]\!]/(f))_{\Ga^{p^n}} = \bbZ_p[\![\Ga]\!]/(f,f_n) =
\bbZ_p[\![\Ga]\!]/(f) \approx \bbZ_p^{\oplus \deg f}
\]
for $n \geq k$, where $\deg f = (p-1)p^{k-1}$ for $k \geq 1$ and
$\deg f = 1$ for $k = 0$, agreeing with the Iwasawa invariants.
\end{proof}
Let $F$ be a subextension of $K_\infty/K_0$. If $F/K_0$ is finite, we
associate to it the following objects:
\begin{itemize}
\item $A(F) = \Pic(\calO_F) \otimes_\bbZ \bbZ_p$ is the $p$-part of
its ideal class group,
\item $X(F) = \Pic(\calO_F,p^\infty) = \llim_n (\Pic(\calO_F,p^n)
\otimes_\bbZ \bbZ_p)$ is the inverse limit of the $p$-parts of its
ray class groups of conductor $p^n$,
\item $U(F) = (\calO_F \otimes_\bbZ \bbZ_p)^\times_\prop$ is the
pro-$p$ part of its group of semilocal units,
\item $\scrE(F) = \calO_F^\times \otimes_\bbZ \bbZ_p$ is its group of
global units $\otimes \bbZ_p$, and
\item $\scrC(F)$ is its group of elliptic units $\otimes \bbZ_p$, as
defined in \cite[\S1]{R}.
\end{itemize}
If $F/K_0$ is infinite, and $? \in \{A,X,U,\scrE,\scrC\}$, we let
$?(F) = \llim_{F_0} ?(F_0)$, where $F_0$ ranges over the finite
subextensions of $F$, obtaining a finitely generated
$\bbZ_p[\![\Gal(F/K)]\!]$-module. Note that Leopoldt's conjecture is
known in this case, so by the definition of ray class groups one has a
short exact sequence
\[
0 \to \scrE(F) \to U(F) \to X(F) \to A(F) \to 0.
\]
Class field theory identifies $A(F)$ (resp.\ $X(F)$) with the Galois
group of the maximal $p$-abelian extension of $F$ which is everywhere
unramified (resp.\ unramified at primes not dividing $p$).
The following improvement of Rubin's work is the main result of this
section, and the remainder of this section consists of its proof.
\begin{theorem}\label{T:rubin}
One has the equality of characteristic ideals,
\[
\chr_\La A(K_\infty) = \chr_\La(\scrE(K_\infty)/\scrC(K_\infty)).
\]
\end{theorem}
In \cite[Theorem~4.1(ii)]{R} and \cite[Theorem~2(ii)]{R2} it is proved
that both sides are nonzero at each $\eta$-factor, that $\chr_\La
A(K_\infty)$ divides $\chr_\La(\scrE(K_\infty)/\scrC(K_\infty))$, and
that the $\eta$-factors are equal when $\eta$ is nontrivial on the
decomposition group of $p$ in $\Delta$. To get equality for the
remaining $\eta$, we may thus reduce to the case where $p$ is totally
split in $K_0/K$. We also specialize our notation to where
$K_\infty^\Delta$ is any $\bbZ_p^1$-extension. We index finite
subextensions $F$ of $K_\infty/K_0$ as $F = K_n =
K_\infty^{\Ga^{p^n}}$ for $n \geq 0$. Fix a topological generator
$\ga \in \Ga$, and for brevity write $\La_n = \bbZ_p[\scrG/\Ga^{p^n}]
= \La/(\ga^{p^n}-1)$.
There is a unique $\bbZ_p^2$-extension of $K$, and it contains all
$\bbZ_p$-extensions of $K$. This extension is unramified at all
primes not dividing $p$, and Lubin--Tate theory shows it is totally
ramified at $p$. The same ramification behavior is true of any
$\bbZ_p$-extension, as well as of $K_\infty/K_0$ because $p$ is
totally split in $K_0/K$. In particular, if $S_n$ denotes the set of
places of $K_n$ lying over $p$, then the restriction maps $S_{n+1} \to
S_n$ are bijections, and $S_n$ is a principal homogeneous
$\Delta$-set. Fixing once and for all $v_0 \in S_0$, with unique lift
$v_n \in S_n$, declaring $v_n$ to be a basepoint of $S_n$ gives an
identification $\bbZ_p[S_n] \cong \bbZ_p[\Delta]$ of
$\bbZ_p[\Delta]$-modules. We write $\invt$ for the composite of the
semilocal restriction map, the invariant maps of local class field
theory, and this identification:
\[
\invt \cn
H^2(G_{K_n,\{p\}},\bbZ_p(1))
\to
\bigoplus_{v \in S_n} H^2(G_{K_{n,v}},\bbZ_p(1))
\cong
\bbZ_p[S_n]
\cong
\bbZ_p[\Delta].
\]
Also, it follows that $p-1$ does not divide the ramification degree of
$K_\infty/\bbQ$ at $p$, so that $\mu_{p^\infty}(K_{\infty,v}) = 1$ for
any place $v$ of $K_\infty$ lying over $p$. Therefore, for $F/K_0$
finite the group $(\calO_F \otimes_\bbZ \bbZ_p)^\times$ is already
pro-$p$.
Since $\chr_\La A(K_\infty)$ divides
$\chr_\La(\scrE(K_\infty)/\scrC(K_\infty))$, their Iwasawa $\mu$- and
$\la$-invariants, considered as $\bbZ_p[\![\Ga]\!]$-modules, satisfy
\begin{equation}\label{E:rubin}
\mu(A(K_\infty)) \leq \mu(\scrE(K_\infty)/\scrC(K_\infty)),
\qquad
\la(A(K_\infty)) \leq \la(\scrE(K_\infty)/\scrC(K_\infty)).
\end{equation}
We shall improve these inequalities to the claim that for some $\ep
\in \{0,1\}$ one has
\[
\mu(A(K_\infty)) = \mu(\scrE(K_\infty)/\scrC(K_\infty)),
\qquad
\ep + \la(A(K_\infty)) = \la(\scrE(K_\infty)/\scrC(K_\infty)),
\]
and additionally
\[
\rank_{\bbZ_p} A(K_\infty)_\scrG = 0,
\qquad
\rank_{\bbZ_p} (\scrE(K_\infty)/\scrC(K_\infty))_\scrG = \ep.
\]
These computations are equivalent to the claim that
\begin{equation}\label{E:subtheorem}
(\chr_\La \bbZ_p)^\ep \cdot \chr_\La A(K_\infty)
= \chr_\La \scrE(K_\infty)/\scrC(K_\infty).
\end{equation}
Granted \eqref{E:subtheorem}, let us show how to deduce the theorem.
Let $K'_\infty$ denote the compositum of $K_0$ with the unique
$\bbZ_p^2$-extension of $K$, and write $\scrG' = \Gal(K'_\infty/K) =
\Delta \times \Ga'$, $\La' = \La(\scrG') = \bbZ_p[\![\scrG']\!]$, and
$\proj \cn \La' \twoheadrightarrow \La$. By Rubin's theorem, there
exist $f' \in \La(\scrG')$ and $f \in \La(\scrG)$ with
\[
f' \cdot \chr_{\La'} A(K'_\infty)
=
\chr_{\La'} \scrE(K'_\infty)/\scrC(K'_\infty),
\quad
f \cdot \chr_\La A(K_\infty)
=
\chr_\La \scrE(K_\infty)/\scrC(K_\infty).
\]
By \cite[Corollary 7.9(i)]{R} one has $\proj(f') = f$ up to a unit in
$\La$. Since $\proj$ is a homomorphism of semilocal rings that is a
bijection on local factors and restricts to a local homomorphism on
each local factor, it follows that $f'$ is a unit (resp.\ restricts to
a unit over a given local factor) in $\La'$ if and only if $f$ is a
unit (resp.\ restricts to a unit over the corresponding local factor)
in $\La$. On the other hand, \eqref{E:subtheorem} implies that $f$
divides $\chr_\La \bbZ_p$ in $\La$. Since $(\chr_\La \bbZ_p)^\eta =
\La^\eta$, the unit ideal, if $\eta \neq {\bf1}$, we deduce the
identity of the theorem for both $\bbZ_p^1$- and $\bbZ_p^2$-extensions
over each such $\eta$-factor. We only have left to consider the case
where $\eta={\bf1}$, or rather where $\Delta$ is trivial and $K_0=K$.
\begin{lemma}
Write $R = \bbZ_p[\![S,T]\!]$, and for $a,b \in \bbZ_p$ not both
divisible by $p$, write $R_{a,b} = R/((1+S)^a(1+T)^b-1)$ with
$\pi_{a,b} \cn R \twoheadrightarrow R_{a,b}$. We identify $R_{a,b}
\cong \bbZ_p[\![U]\!]$, where $U = \pi_{a,b}(S)$ if $p \nmid b$ and
$U = \pi_{a,b}(T)$ otherwise.
Suppose $g \in R$ is such that for all $a,b$ above, $\pi_{a,b}(g)$
divides $U$ in $R_{a,b}$. Then $g$ is a unit.
\end{lemma}
\begin{proof}
Write $g = x + yS + zT + O((S,T)^2)$ with $x,y,z \in \bbZ_p$; we are
to show that $p \nmid x$. Since $\pi_{0,1}(g)$ divides $U$ in
$R_{0,1}$, and $R_{0,1}$ is a UFD with $U$ a prime element, it follows
that $\pi_{0,1}(g)$ is either a unit or $U$ times a unit. As
$\pi_{0,1}(g) = x + yU + O(U^2)$, the first case is equivalent to $p
\nmid x$, and the second case is equivalent to $x=0$ and $p \nmid y$.
But in the second case the identity
\[
g = yS + zT + O((S,T)^2) = (1+S)^y(1+T)^z-1 + O((S,T)^2)
\]
would imply $\pi_{y,z}(g) = 0 + O(U^2)$, that is $U^2$ divides
$\pi_{y,z}(g)$ in $R_{y,z}$, contradicting that $\pi_{y,z}(g)$ divides
$U$.
\end{proof}
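To illustrate the hypothesis of the lemma: the non-unit $g = S$ fails
it already at $(a,b) = (1,p)$.  Indeed, there $p \mid b$, so $U =
\pi_{1,p}(T)$, while $\pi_{1,p}(S) = (1+T)^{-p} - 1 = T\,h(T)$ with
$h(0) = -p$; as $h$ is not a unit in $R_{1,p} \cong \bbZ_p[\![T]\!]$,
$\pi_{1,p}(S)$ does not divide $U$.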
Choose a $\bbZ_p$-basis $\ga_1,\ga_2 \in \Ga'$, so that $\ker(\Ga'
\twoheadrightarrow \Ga) = (\ga_1^a\ga_2^b)^{\bbZ_p}$ for some $a,b \in
\bbZ_p$ not both divisible by $p$. Set $S=\ga_1-1,T=\ga_2-1 \in
\La'$, and note that $\ker(\La' \twoheadrightarrow \La)$ is generated
by $(1+S)^a(1+T)^b-1$, so that the map $\La' \twoheadrightarrow \La$
is identified with the map $\pi_{a,b} \cn R \twoheadrightarrow
R_{a,b}$ of the preceding lemma. Under this identification, the
augmentation ideal $\chr_\La \bbZ_p$ is generated by $U \in R_{a,b}$,
so we have that $\pi_{a,b}(f') = f$ divides $U$. Since
$K_\infty^\Delta = K_\infty$ was allowed to be any $\bbZ_p$-extension
of $K$, and conversely every such pair of $a,b$ arises from some
choice of $K_\infty$, the preceding lemma shows that $f'$ is a unit,
and therefore so is $f$, proving the theorem at once for $\bbZ_p^1$-
and $\bbZ_p^2$-extensions.
We begin the proof of \eqref{E:subtheorem} (and no longer assume that
$K_0=K$). As mentioned at the beginning of this section, our approach
is to use base change of Selmer complexes to measure the failure of
the maps $(\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}} \to
\scrE(K_n)/\scrC(K_n)$ and $A(K_\infty)_{\Ga^{p^n}} \to A(K_n)$ to be
isomorphisms.
Since we use base change in the derived category, we give some
generalities on the operation $\Lotimes_\La \La_n$. We first compute
that $\La_n[0] \cong [\La \xrightarrow{\ga^{p^n}-1} \La]$ as objects
in the derived category of $\La$-modules, the latter concentrated in
degrees $-1,0$, so that for any $\La$-module (resp.\ complex of
$\La$-modules) $X$ one may compute $X \Lotimes_\La \La_n$ as $[X
\xrightarrow{\ga^{p^n}-1} X]$ (resp.\ as the mapping cone of
$\ga^{p^n}-1$ on $X$). The induced map $X \Lotimes_\La \La_{n+1} \to
X \Lotimes_\La \La_n$ corresponds to the morphism $[X
\xrightarrow{\ga^{p^{n+1}}-1} X] \to [X \xrightarrow{\ga^{p^n}-1}
X]$ given by multiplication by $1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$
in shift degree $-1$, and by the identity in shift degree $0$.
Alternatively, the $\Tor$ spectral sequence degenerates to short exact
sequences
\begin{equation}\label{E:generic-base-change}
0 \to
H^i(X)_{\Ga^{p^n}}
\to
H^i(X \Lotimes_\La \La_n)
\to
H^{i+1}(X)^{\Ga^{p^n}}
\to 0,
\end{equation}
and the natural morphism from the above sequence for $n+1$ to the
sequence for $n$ is given by the natural projection on the first term,
and by multiplication by $1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$ on the
last term. The Bockstein homomorphism $\beta = \beta_X$, defined as
the connecting homomorphism in the exact triangle
\begin{multline*}
X \Lotimes_\La
\left(\La_n \xrightarrow{\ga^{p^n}-1} \La/(\ga^{p^n}-1)^2
\to \La_n \to \La_n[1]\right) \\
\cong
\left(X \Lotimes_\La \La_n
\xrightarrow{\ga^{p^n}-1}
X \Lotimes_\La \La/(\ga^{p^n}-1)^2
\to
X \Lotimes_\La \La_n
\xrightarrow\beta
X \Lotimes_\La \La_n[1]\right),
\end{multline*}
is computed on cohomology as the composite
\begin{multline*}
H^i(\beta) \cn
H^i(X \Lotimes_\La \La_n)
\twoheadrightarrow
H^i(X \Lotimes_\La \La_n)/H^i(X)_{\Ga^{p^n}} \\
\cong
H^{i+1}(X)^{\Ga^{p^n}}
\hookrightarrow
H^{i+1}(X)
\twoheadrightarrow
H^{i+1}(X)_{\Ga^{p^n}}
\hookrightarrow
H^{i+1}(X \Lotimes_\La \La_n).
\end{multline*}
Note that if $Z$ is a finitely generated, torsion $\La$-module, then
$\rank_{\bbZ_p} Z^{\Ga^{p^n}} = \rank_{\bbZ_p} Z_{\Ga^{p^n}}$.
If $X$ satisfies $X = X^\Ga$, then the above computations reduce to $X
\Lotimes_\La \La_n \cong X[1] \oplus X$, in such a way that the
natural map $X \Lotimes_\La \La_{n+1} \to X \Lotimes_\La \La_n$ is
identified with multiplication by $p$ in shift degree $-1$, and with
the identity map in shift degree $0$. The Bockstein homomorphism
\[
\beta \cn
X[1] \oplus X = X \Lotimes_\La \La_n
\to
X \Lotimes_\La \La_n[1] = X[2] \oplus X[1]
\]
is the identity map on $X[1]$ and zero on the other factors. In this
scenario, we write $\beta\inv = \beta_X\inv \cn X[2] \oplus X[1] \to
X[1] \oplus X$ for the map that is inverse to this identity map on
$X[1]$ and zero on the other factors. Any morphism $f \cn Y \to X$
gives rise to a morphism $f \Lotimes_\La \La_n \cn Y \Lotimes_\La
\La_n \to X \Lotimes_\La \La_n = X[1] \oplus X$. Writing $f
\otimes_\La \La_n$ for the projection of $f \Lotimes_\La \La_n$ onto
the second component, $X$, the commutative diagram
\[\begin{array}{ccccc}
Y \Lotimes_\La \La_n
& \xrightarrow{f \Lotimes_\La \La_n} &
X \Lotimes_\La \La_n
& = &
X[1] \oplus X \\
\beta_Y\downarrow\phantom{\beta_Y}
& &
\beta_X\downarrow\phantom{\beta_X}
& &
\phantom\sim\searrow\sim \\
Y \Lotimes_\La \La_n[1]
& \xrightarrow{f \Lotimes_\La \La_n[1]} &
X \Lotimes_\La \La_n[1]
& = &
X[2] \oplus X[1]
\end{array}\]
shows that the projection of $f \Lotimes_\La \La_n$ onto the first
component, $X[1]$, is computed by $\beta_X\inv \circ (f \otimes_\La
\La_n)[1] \circ \beta_Y$.
We now return to the setting of the theorem, recalling \Nekovar's
constructions of the fundamental invariants of number fields in terms
of Selmer complexes in \cite[\S9.2,\S9.5]{N} (with notations adapted
to our situation). Throughout, $n \geq 0$ ranges over nonnegative
integers. For brevity we write
\[
\bfR\Ga_n = \bfR\Ga_\cont(G_{K_n,\{p\}},\bbZ_p(1)),
\qquad
\bfR\Ga_\Iw
= \bfR\Ga_\Iw(K_\infty/K_0,\bbZ_p(1))
= \bfR\!\llim_n \bfR\Ga_n,
\]
and $H^i_? = H^i(\bfR\Ga_?)$ for $? \in \{n,\Iw\}$. Then one
has the computations
\begin{gather*}
H^i_n = 0,\ i \neq 1,2, \qquad
H^1_n = \calO_{K_n,\{p\}}^\times \otimes_\bbZ \bbZ_p, \\
0 \to
\Pic(\calO_{K_n,\{p\}}) \otimes_\bbZ \bbZ_p
\to
H^2_n
\xrightarrow{\invt}
\bbZ_p[\Delta]
\xrightarrow{\summ}
\bbZ_p
\to 0,
\end{gather*}
and, passing to inverse limits (Mittag-Leffler holds by compactness),
\begin{gather*}
H^i_\Iw = 0,\ i \neq 1,2, \qquad
H^1_\Iw = \llim_n (\calO_{K_n,\{p\}}^\times \otimes_\bbZ \bbZ_p), \\
0 \to \llim_n (\Pic(\calO_{K_n,\{p\}}) \otimes_\bbZ \bbZ_p)
\to
H^2_\Iw
\xrightarrow{\invt}
\bbZ_p[\Delta]
\xrightarrow{\summ}
\bbZ_p \to 0.
\end{gather*}
Let $U^- = \bbZ_p[\Delta][-1] \oplus \bbZ_p[\Delta][-2]$, considered
as a perfect complex of $\La$-modules, or as a complex of
$\La_n$-modules. One constructs a map $i^-_n \cn \bfR\Ga_n \to U^-$
via the local valuation maps in degree one and the local invariant
maps in degree two, and obtains a map $i^-_\Iw \cn \bfR\Ga_\Iw \to
U^-$ from the $i^-_n$ by taking the inverse limit on $n$. By taking
mapping fibers of $i^-_n$ and $i^-_\Iw$, one obtains complexes
$\bfR\wt\Ga_{f,n}$ of $\La_n$-modules and a perfect complex
$\bfR\wt\Ga_{f,\Iw}$ of $\La$-modules sitting in exact triangles
\[
\bfR\wt\Ga_{f,n}
\to
\bfR\Ga_n
\xrightarrow{i^-_n}
U^-
\to
\bfR\wt\Ga_{f,n}[1]
\]
and
\[
\bfR\wt\Ga_{f,\Iw}
\to
\bfR\Ga_\Iw
\xrightarrow{i^-_\Iw}
U^-
\to
\bfR\wt\Ga_{f,\Iw}[1].
\]
Writing $\wt H^i_{f,?} = H^i(\bfR\wt\Ga_{f,?})$ for $? \in
\{n,\Iw\}$, one has the computations
\[
\wt H^i_{f,n} = \begin{cases}
0 & i \neq 1,2,3 \\
\scrE(K_n) & i=1 \\
A(K_n) & i=2 \\
\bbZ_p & i=3,
\end{cases}
\qquad \text{and} \qquad
\wt H^i_{f,\Iw} = \begin{cases}
0 & i \neq 1,2,3 \\
\scrE(K_\infty) & i=1 \\
A(K_\infty) & i=2 \\
\bbZ_p & i=3.
\end{cases}
\]
By control for Galois cohomology, the natural map $\bfR\Ga_\Iw
\Lotimes_\La \La_n \to \bfR\Ga_n$ is an isomorphism, compatible with
varying $n$. Since $U^- = (U^-)^\Ga$, one has the computation $U^-
\Lotimes_\La \La_n \cong U^-[1] \oplus U^-$. It follows from the
definition of $i^-_\Iw$ as an inverse limit that $i^-_\Iw \otimes_\La
\La_n = i^-_n$, so that $i^-_\Iw \Lotimes_\La \La_n = (\beta_{U^-}\inv
\circ i^-_n[1] \circ \beta_{\bfR\Ga_n}, i^-_n)$. Thus we have a
commutative diagram
\[\begin{array}{ccccccc}
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n
& \to &
\bfR\Ga_n
& \xrightarrow{i^-_\Iw \Lotimes_\La \La_n} &
U^-[1] \oplus U^-
& \to &
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n[1] \\
& & =\downarrow\phantom= & & \pr_2\downarrow\phantom{\pr_2} \\
\bfR\wt\Ga_{f,n}
& \to &
\bfR\Ga_n
& \xrightarrow{i^-_n} &
U^-
& \to &
\bfR\wt\Ga_{f,n}[1],
\end{array}\]
which we complete to a morphism of exact triangles via a morphism
$\BC_n \cn \bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n \to
\bfR\wt\Ga_{f,n}$. Taking mapping fibers of the resulting morphism of
triangles gives an exact triangle
\[
\Fib(\BC_n) \to 0 \to U^-[1] \to \Fib(\BC_n)[1],
\]
hence an isomorphism $\Fib(\BC_n) \cong U^-$ and an exact triangle
\begin{equation}\label{E:BC-cone}
U^-
\xrightarrow{j_n}
\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n
\xrightarrow{\BC_n}
\bfR\wt\Ga_{f,n}
\xrightarrow{k_n}
U^-[1].
\end{equation}
It is easy to compute that $j_n$ is the composite of the inclusion
$U^- \hookrightarrow U^- \oplus U^-[-1]$ and the shifted connecting
homomorphism $U^- \oplus U^-[-1] \to \bfR\wt\Ga_{f,\Iw} \Lotimes_\La
\La_n$. The construction of the snake lemma shows that $k_n$ is the
composite
\[
\bfR\wt\Ga_{f,n}
\to
\bfR\Ga_n
\xrightarrow{i^-_\Iw \Lotimes_\La \La_n}
U^-[1] \oplus U^-
\xrightarrow{\pr_1}
U^-[1],
\]
or in other words the composite of $\bfR\wt\Ga_{f,n} \to \bfR\Ga_n$
with $\beta_{U^-}\inv \circ i^-_n[1] \circ \beta_{\bfR\Ga_n}$. Of
course, the source or target of $H^i(k_n) \cn \wt H^i_{f,n} \to
H^{i+1}U^-$ is zero if $i \neq 1$, and if $i=1$ this computation
simplifies to
\[
\scrE(K_n)
\to
H^1_n
\xrightarrow\beta
H^2_n
\xrightarrow{\invt}
\bbZ_p[\Delta].
\]
The kernel of $\beta$ contains the universal norms in $\scrE(K_n)$ for
$K_\infty/K_n$, and in particular $\scrC(K_n)$ (see
\cite[Proposition~II.2.5]{D} for the norm relations), which itself is
of finite index in $\scrE(K_n)$. Since $\bbZ_p[\Delta]$ is torsion
free, it follows that $H^1(k_n) = 0$, too. Since $H^*(k_n)=0$, the
long exact sequence associated to the triangle \eqref{E:BC-cone}
breaks up into the short exact rows in the following diagrams, and
\eqref{E:generic-base-change} gives the short exact columns:
\begin{equation}\label{E:crosses}
\begin{array}{r@{\ }c@{\ }l}
& \scrE(K_\infty)_{\Ga^{p^n}} \\ & \downarrow \\
\bbZ_p[\Delta] \to & H^1(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n)
& \to \scrE(K_n), \\
& \downarrow \\ & A(K_\infty)^{\Ga^{p^n}}
\end{array}
\quad
\begin{array}{r@{\ }c@{\ }l}
& A(K_\infty)_{\Ga^{p^n}} \\ & \downarrow \\
\bbZ_p[\Delta] \to & H^2(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n)
& \to A(K_n). \\
& \downarrow \\ & \bbZ_p
\end{array}
\end{equation}
(The triangle \eqref{E:BC-cone} also gives the computation
$H^3(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n) \cong \bbZ_p$.) The
composite arrows from the top to the right points of the two diagrams
give the respective control maps $\scrE(K_\infty)_{\Ga^{p^n}} \to
\scrE(K_n)$ and $A(K_\infty)_{\Ga^{p^n}} \to A(K_n)$.
It is crucial to compute the transition morphisms from the diagrams
\eqref{E:crosses} associated to $n+1$ to those associated to $n$.
Explicitly, the transition maps on the upper (resp.\ lower, right)
points are the natural projections (resp.\ multiplication by
$1+\ga^{p^n}+\cdots+\ga^{(p-1)p^n}$, the norm maps), and the maps on
the left points are \emph{multiplication by $p$} because the term
$U^-[1]$ in the sequence \eqref{E:BC-cone} is identified with first
summand of $U^- \Lotimes_\La \La_n \cong U^-[1] \oplus U^-$.
We consider the second diagram in \eqref{E:crosses}. The computation
$\rank_{\bbZ_p} A(K_\infty)_{\Ga^{p^n}} = \de-1$ is immediate. A
diagram chase identifies the composite map $\bbZ_p[\Delta] \to \bbZ_p$
as the sum map. Since $\bbZ_p$ is uniquely a $\La$-direct summand of
$\bbZ_p[\Delta]$, we may canonically refine the diagram to the short
exact sequence
\begin{equation}\label{E:SES-A}
0 \to \bbZ_p[\Delta]^!
\to A(K_\infty)_{\Ga^{p^n}} \to
A(K_n) \to 0,
\end{equation}
and in particular there is an injection
$A(K_\infty)_{\Ga^{p^n}}[p^\infty] \hookrightarrow A(K_n)$ of finite
abelian groups. Applying the snake lemma to the commutative diagram
\[\begin{array}{rcccccl}
0 \to &
\bbZ_p[\Delta]^!
& \to &
\displaystyle
\frac{A(K_\infty)_{\Ga^{p^{n+1}}}}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
& \to &
\displaystyle
\frac{A(K_{n+1})}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
& \to 0 \\
& p \downarrow \phantom{p} & & \downarrow & & \downarrow \\
0 \to &
\bbZ_p[\Delta]^!
& \to &
\displaystyle
\frac{A(K_\infty)_{\Ga^{p^n}}}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
& \to &
\displaystyle
\frac{A(K_n)}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
& \to 0,
\end{array}\]
and examining the final column, we get the exact sequence
\[
0 \to \bbZ_p[\Delta]^!/p
\to
\frac{A(K_{n+1})}{A(K_\infty)_{\Ga^{p^{n+1}}}[p^\infty]}
\to
\frac{A(K_n)}{A(K_\infty)_{\Ga^{p^n}}[p^\infty]} \to 0.
\]
Since $\bbZ_p[\Delta]^!$ is $\bbZ_p$-free of rank $\de-1$, so that
$\#(\bbZ_p[\Delta]^!/p) = p^{\de-1}$, this implies that
\[
\frac{\#A(K_n)}{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]}
=
p^{(\de-1)n} \frac{\#A(K_0)}{\#A(K_\infty)_{\Ga^{p^0}}[p^\infty]},
\]
so that
\begin{gather*}
\mu(A(K_\infty))
= \mu(\{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]\})
= \mu(\{\#A(K_n)\}), \\
\la(A(K_\infty))
= \de-1 + \la(\{\#A(K_\infty)_{\Ga^{p^n}}[p^\infty]\})
= \la(\{\#A(K_n)\}).
\end{gather*}
We now consider the first diagram in \eqref{E:crosses}. Since
$\scrE(K_n)$ is a free $\bbZ_p$-module, we may choose a splitting
$H^1(\bfR\wt\Ga_{f,\Iw} \Lotimes_\La \La_n) \cong \bbZ_p[\Delta]
\oplus \scrE(K_n)$. It follows from the norm relations on elliptic
units that the map $\scrC(K_\infty)_{\Ga^{p^n}} \to \scrC(K_n)$ is
surjective; combining this fact with the proof of
\cite[Theorem~7.7]{R} shows that the kernel of this map is
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga$, that it is $\scrG$-isomorphic to
$\bbZ_p$, and that it is a $\bbZ_p$-direct summand of
$\scrC(K_\infty)_{\Ga^{p^n}}$.
Moreover, this map followed by the inclusion $\scrC(K_n) \subseteq
\scrE(K_n)$ is equal to the composite
\[
\scrC(K_\infty)_{\Ga^{p^n}} \to \scrE(K_\infty)_{\Ga^{p^n}} \to
\scrE(K_n),
\]
which shows that the subset of $\scrC(K_\infty)_{\Ga^{p^n}}$ mapping
into the summand $\bbZ_p[\Delta]$ is again
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga$.  Writing $\bbZ_p[\Delta] =
\bbZ_p[\Delta]^{\bf1} \oplus \bbZ_p[\Delta]^!$, the image $I_n$ of
$(\scrC(K_\infty)_{\Ga^{p^n}})^\Ga \to \bbZ_p[\Delta]$ is equal to
either $0$ or $p^{e_n}\bbZ_p[\Delta]^{\bf1}$ with $e_n \geq 0$. If
$I_n = 0$ we set $e_n = 0$, so that in all cases we have
$(\bbZ_p[\Delta]/I_n)[p^\infty] \cong \bbZ/p^{e_n}$. Write $\ep_n = 1
- \rank_{\bbZ_p} I_n$. Again considering the proof of
\cite[Theorem~7.7]{R} shows that, in the commutative diagram
\[\begin{array}{rcccccl}
0 \to &
\bbZ_p
& \to &
\scrC(K_\infty)_{\Ga^{p^{n+1}}}
& \to &
\scrC(K_{n+1})
& \to 0 \\
& f \downarrow \phantom{f} & & \downarrow & & \downarrow \\
0 \to &
\bbZ_p
& \to &
\scrC(K_\infty)_{\Ga^{p^n}}
& \to &
\scrC(K_n)
& \to 0,
\end{array}\]
the map $f$ is multiplication by $p$ (up to a unit). Let $v_n$ be a
basis vector for $I_n$ if $\ep_n=0$, and $v_n=0$ if $\ep_n=1$. The
commutativity of the square
\[\begin{array}{cccc}
\bbZ_p & \xrightarrow{\cdot v_{n+1}} & I_{n+1} \subseteq & \bbZ_p[\Delta] \\
f \downarrow \phantom{f} & & & \phantom{p} \downarrow p \\
\bbZ_p & \xrightarrow{\cdot v_n} & I_n \subseteq & \bbZ_p[\Delta] \\
\end{array}\]
implies that $v_n=0$ if and only if $v_{n+1}=0$, so that
$\ep_n=\ep_{n+1}$ is independent of $n$; denote it henceforth by
$\ep$. When $\ep=0$, it is also easy to deduce from this
commutativity that $e_n=e_{n+1}$ is independent of $n$; denote it
henceforth by $e$.
The definition of $I_n$ allows us to modify the first diagram in
\eqref{E:crosses} to a short exact sequence
\begin{equation}\label{E:SES-EC}
0 \to (\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}}
\to
\bbZ_p[\Delta]/I_n \oplus \scrE(K_n)/\scrC(K_n)
\to
A(K_\infty)^{\Ga^{p^n}} \to 0.
\end{equation}
One has $\rank_{\bbZ_p} A(K_\infty)^{\Ga^{p^n}} = \rank_{\bbZ_p}
A(K_\infty)_{\Ga^{p^n}} = \de-1$, and combining this with the above
sequence gives $\rank_{\bbZ_p}
(\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}} = \ep$. The above
sequence also gives an exact sequence of finite abelian groups
\[
0
\to (\scrE(K_\infty)/\scrC(K_\infty))_{\Ga^{p^n}}[p^\infty]
\to \bbZ/p^e \oplus \scrE(K_n)/\scrC(K_n)
\to A(K_\infty)^{\Ga^{p^n}}[p^\infty],
\]
where $\#A(K_\infty)^{\Ga^{p^n}}[p^\infty]$ is bounded independently
of $n$. It follows that
\begin{gather*}
\mu(\scrE(K_\infty)/\scrC(K_\infty))
= \mu(\{\#\scrE(K_n)/\scrC(K_n)\}), \\
\la(\scrE(K_\infty)/\scrC(K_\infty))
= \ep + \la(\{\#\scrE(K_n)/\scrC(K_n)\}).
\end{gather*}
The analytic class number formula gives
\[
\#A(K_n) = \#\scrE(K_n)/\scrC(K_n),
\]
and the computations $\rank_{\bbZ_p} A(K_\infty)_\scrG = 0$ and
$\rank_{\bbZ_p} (\scrE(K_\infty)/\scrC(K_\infty))_\scrG = \ep$ follow
from \eqref{E:SES-A} and \eqref{E:SES-EC}. This establishes
\eqref{E:subtheorem}, and completes the proof of the theorem.
\begin{thebibliography}{Nak}
\bibitem[dS]{D} de~Shalit, Ehud, \emph{Iwasawa theory of elliptic curves
with complex multiplication}. Perspectives in Math.\ {\bf 3},
Academic Press Inc., Boston, MA, 1987.
\bibitem[HL]{HL} Harron, Robert and Lei, Antonio, Iwasawa theory of
symmetric powers of CM modular forms at supersingular primes. To
appear in Journal de Th\'eorie des Nombres de Bordeaux.
\bibitem[Ka]{K} Kato, Kazuya, $p$-adic Hodge theory and values of zeta
functions of modular forms. Cohomologies $p$-adiques et
applications arithm\'etiques III. \emph{Ast\'erisque} {\bf 295}
(2004), 117--290.
\bibitem[Nak]{Nak} Nakamura, Kentaro, Iwasawa theory of de~Rham
$(\vphi,\Ga)$-modules over the Robba ring.
\emph{J.\ Inst.\ Math.\ Jussieu} {\bf 13} (2014), no.\ 1, 65--118.
\bibitem[Nek]{N} \Nekovar, Jan, \emph{Selmer complexes}.
\emph{Ast\'erisque} {\bf 310} (2006).
\bibitem[P]{P} Pottharst, Jonathan, Analytic families of finite-slope Selmer
groups. \emph{Algebra and Number Theory} {\bf 7} (2013), no.\ 7,
1571--1612.
\bibitem[P2]{P2} Pottharst, Jonathan, Cyclotomic Iwasawa theory of motives.
\emph{Preprint}, version 30 July 2012.
\bibitem[Ru]{R} Rubin, Karl, The ``main conjectures'' of Iwasawa theory for
imaginary quadratic fields. \emph{Invent.\ Math.} {\bf 103} (1991),
no.\ 1, 25--68.
\bibitem[Ru2]{R2} Rubin, Karl, More ``Main Conjectures'' for imaginary quadratic
fields. \emph{Elliptic curves and related topics}, CRM
Proc.\ Lecture Notes {\bf 4}, Amer.\ Math.\ Soc., Providence, RI,
1994, 23--28.
\bibitem[ST]{ST} Schneider, Peter and Teitelbaum, Jeremy, Algebras of
$p$-adic distributions and admissible representations.
\emph{Invent.\ Math.} {\bf 153} (2003), no.\ 1, 145--196.
\end{thebibliography}
\end{document}
The theory of matrix factorizations was introduced by Eisenbud \cite{Ei} in 1980.
It is useful to study hypersurfaces in commutative algebra.
In fact, Eisenbud proved the following famous theorem.
\begin{theorem}[{\cite[Section 6]{Ei} (cf. \cite[Theorem 7.4]{Y})}] \label{thm.mot}
Let $S$ be a regular local ring, $f \in S$ a nonzero non-unit element, and $A=S/(f)$.
Let $\MF_S(f)$ denote the category of matrix factorizations of $f$ over $S$.
\begin{enumerate}
\item If $M$ is a maximal Cohen-Macaulay module over $A$ with no free summand, then the minimal free resolution of $M$ is obtained from a matrix factorization of $f$ over $S$, and hence it is periodic of period 1 or 2.
\item The factor category $\MF_S(f)/\add\{(1,f)\}$ is equivalent to the category $\CM(A)$ of maximal Cohen-Macaulay $A$-modules.
\item The factor category $\MF_S(f)/\add\{(1,f), (f,1)\}$ is equivalent to the stable category $\uCM(A)$ of maximal Cohen-Macaulay $A$-modules.
\end{enumerate}
\end{theorem}
Nowadays, matrix factorizations are related to several areas of mathematics, including representation theory of Cohen-Macaulay modules, singularity category, Calabi-Yau category, Khovanov-Rozansky homology, and homological mirror symmetry.
In this paper, to investigate noncommutative hypersurfaces, which are important objects of study in noncommutative algebraic geometry (see \cite{SV}, \cite{KKZ}, \cite{MU}), we introduce a notion of noncommutative matrix factorization for an arbitrary nonzero non-unit element $f$ of a ring.
First, we will show that the category of noncommutative graded matrix factorizations $\NMF^{\ZZ}_S(f)$ is invariant under the operation called twist, which explains why the period of a noncommutative matrix factorization can be arbitrary (compare with Theorem \ref{thm.mot} (1)).
\begin{theorem}[{Theorem \ref{thm.nmft}}]
Let $S$ be a graded algebra, and $f\in S_d$ an element.
If $\theta = \{\theta _i\}_{i \in \ZZ}$ is a twisting system on $S$ such that $\theta_i(f)=\l^if$ for some $0\neq \l\in k$ and for every $i\in \ZZ$,
then $\operatorname {NMF}^{\ZZ}_S(f) \cong \operatorname {NMF}^{\ZZ}_{S^{\theta}}(f^{\theta})$.
\end{theorem}
Suppose that $f$ is a regular normal element.
In this case, Cassidy-Conner-Kirkman-Moore \cite {CCKM} defined the notion of twisted matrix factorization of $f$,
and we will show that the category of noncommutative graded matrix factorizations of $f$ is equivalent to the category of twisted graded matrix factorizations of $f$ (Proposition \ref{prop.ctmf}),
so the above result is a generalization of \cite[Theorem 3.6]{CCKM}.
Next, we will show noncommutative graded and non-hypersurface analogues of Theorem \ref{thm.mot} (2) and (3).
Let $S$ be a graded algebra, $f\in S_d$ a regular normal element and $A=S/(f)$.
There are two types of trivial noncommutative matrix factorizations, $\phi_F$ and $_F\phi$, as defined in Definition \ref{def.trivnmf}.
We define $\cF =\{\phi_F \in \operatorname {NMF}^{\ZZ}_S(f) \mid F\; \textnormal{is free} \}$ and $\cG =\{\phi_F\oplus {_G\phi} \in \operatorname {NMF}^{\ZZ}_S(f) \mid F, G \; \textnormal{are free} \}$.
Let $\TR_S^\ZZ(A)$ denote the category of finitely generated graded totally reflexive $A$-modules which have finite projective dimension over $S$.
We define $\cP =\{P \in \grmod A \mid P\; \textnormal{is free}\}$.
With these notations, our result is stated as follows.
\begin{theorem}[{Theorem \ref{thm.m3}, Theorem \ref{thm.m4}}]
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_d$ a regular normal element, and $A=S/(f)$.
Then the factor category $\NMF_S^{\ZZ}(f)/\cF$ is equivalent to $\TR^{\ZZ}_S(A)$.
Moreover, the factor category $\NMF_S^{\ZZ}(f)/\cG$ is equivalent to the factor category $\TR^{\ZZ}_S(A)/\cP$.
\end{theorem}
As an application of our theory of noncommutative matrix factorizations, we will describe indecomposable noncommutative matrix factorizations over skew exterior algebras (which are hardly noncommutative hypersurfaces).
In particular, using techniques of noncommutative algebraic geometry, we will show that, over a certain class of skew exterior algebras, every noncommutative graded matrix factorization comes from a direct sum of free modules and extensions of co-point modules (Theorem \ref{thm.last}).
An application to noncommutative hypersurfaces will be discussed in a subsequent paper.
\subsection{Basic Terminologies and Notations}
Throughout this paper, we fix a field $k$.
Unless otherwise stated, an algebra means an algebra over $k$, and a graded ring means an $\NN$-graded ring.
For a ring $S$, we denote by $\Mod S$ the category of right $S$-modules, and by $\mod S$ the full subcategory consisting of finitely generated modules.
We denote by $S^o$ the opposite ring of $S$.
We say that $S$ is regular of dimension $n$ if $\gldim S =n < \infty$.
For a graded ring $S =\bigoplus_{i\in\NN} S_i$,
we denote by $\GrMod S$ the category of graded right $S$-modules, and by $\grmod S$ the full subcategory consisting of finitely generated modules.
Morphisms in $\GrMod S$ are right $S$-module homomorphisms preserving degrees.
For $M \in \GrMod S$ and $n \in \ZZ$, we define $M_{\geq n} := \bigoplus_{i\geq n} M_i \in \GrMod S$, and the shift $M(n) \in \GrMod S$ by $M(n)_i = M_{n+i}$. For $M, N \in \GrMod S$, we write
$\Ext^i_S(M,N) :=\bigoplus_{n\in\ZZ}\Ext^i_{\GrMod S}(M,N(n))$ (by abuse of notation, we use the same symbol as in the ungraded case).
For a graded algebra $S =\bigoplus_{i\in\NN} S_i$, we say that $S$ is connected graded if $S_0 = k$,
and we say that $S$ is locally finite if $\dim_k S_i <\infty$ for all $i \in \NN$.
We denote by $\GrAut S$ the group of graded $k$-algebra automorphisms of $S$.
If $S$ is a locally finite graded algebra and $M \in \grmod S$, then we define the Hilbert series of $M$ by
$H_M (t) := \sum_{i \in \ZZ} (\dim_k M_i)t^i \in \ZZ[[t, t^{-1}]].$
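For instance, $H_{M(n)}(t) = t^{-n}H_M(t)$, and $H_S(t) = (1-t)^{-n}$
for the polynomial algebra $S = k[x_1,\dots,x_n]$ with $\deg x_i = 1$.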
A connected graded algebra $S$ is called an AS-regular (resp. AS-Gorenstein) algebra of dimension $n$ if
\begin{enumerate}
\item{} $\gldim S =n <\infty$ (resp. $\injdim_S S = \injdim_{S^o} S= n <\infty$), and
\item{} $\Ext^i_S(k ,S) \cong \Ext^i_{S^o}(k ,S) \cong
\begin{cases}
0 & \textnormal { if }\; i\neq n,\\
k(\ell) \; \textrm{for some}\; \ell \in \ZZ & \textnormal { if }\; i=n.
\end{cases}$
\end{enumerate}
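For example, the polynomial algebra $k[x_1,\dots,x_n]$ with $\deg x_i
= 1$ is AS-regular of dimension $n$ (with $\ell = n$, as one checks
from the Koszul resolution of $k$), and the skew polynomial algebra
$k\<x, y\>/(xy-qyx)$ with $0 \neq q \in k$ is AS-regular of dimension
$2$.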
Let $S$ be a noetherian AS-Gorenstein algebra of dimension $n$.
We define the local cohomology modules of $M \in \grmod S$ by
$\H^i_\fm(M):= \lim _{n \to \infty} \Ext^i_S(S/S_{\geq n}, M)$.
It is well-known that $\H_{\fm}^i(S)=0$ for all $i\neq n$.
We say that $M \in \grmod S$ is graded maximal Cohen-Macaulay if $\H_{\fm}^i(M)=0$ for all $i\neq n$.
We denote by $\CM^{\ZZ} (S)$ the full subcategory of $\grmod S$ consisting of graded maximal Cohen-Macaulay modules.
\section{Noncommutative Matrix Factorizations}
\begin{definition}
Let $S$ be a ring and $f\in S$ an element.
A noncommutative right matrix factorization of $f$ over $S$ is a sequence of right $S$-module homomorphisms $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$
where $F^i$ are free right $S$-modules of rank $r$ for some $r\in \NN$ such that there is a commutative diagram
\[\xymatrix@R=2pc@C=3pc{
F^{i+2} \ar[d]_{\cong} \ar[r]^{\phi^i\phi^{i+1}} &F^i \ar[d]^{\cong} \\
S^r \ar[r]^{f\cdot} &S^r
}\]
for every $i\in \ZZ$.
A morphism
$$\mu :\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\to \{\psi^i:G^{i+1}\to G^i\}_{i\in \ZZ}$$
of noncommutative right matrix factorizations is a sequence of right $S$-module homomorphisms $\{\mu ^i:F^i\to G^i\}_{i\in \ZZ}$ such that the diagram
\[\xymatrix@R=2pc@C=3pc{
F^{i+1} \ar[d]_{\mu ^{i+1}} \ar[r]^{\phi^i} &F^i \ar[d]^{\mu ^{i}} \\
G^{i+1} \ar[r]^{\psi^i} &G^{i}
}\]
commutes for every $i\in \ZZ$.
We denote by $\NMF_S(f)$ the category of noncommutative right matrix factorizations.
Let $S$ be a graded ring and $f\in S_d$ a homogeneous element.
A noncommutative graded right matrix factorization of $f$ over $S$ is a sequence of graded right $S$-module homomorphisms $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$
where $F^i$ are graded free right $S$-modules of rank $r$ for some $r\in \NN$ such that
there is a commutative diagram
\[\xymatrix@R=2pc@C=0.75pc{
F^{i+2} \ar[d]_{\cong} \ar[rrr]^{\phi^i\phi^{i+1}} &&&F^i \ar[d]^{\cong} \\
\bigoplus _{s=1}^rS(-m_{i+2,s})\ar@{=}[r] &\bigoplus _{s=1}^rS(-m_{is}-d) \ar[rr]^(0.54){f\cdot} &&\bigoplus _{s=1}^rS(-m_{is})
}\]
for every $i\in \ZZ$.
We can similarly define the category of noncommutative graded right matrix factorizations $\NMF^{\ZZ}_S(f)$.
We can also similarly define a noncommutative (graded) left matrix factorization of $f$ over $S$.
\end{definition}
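Before giving genuinely noncommutative examples, we record the
simplest commutative one.  Let $S = k[x,y]$ with $\deg x = \deg y =
1$, and $f = xy \in S_2$.  Taking $F^i = S(-i)$ with $\phi^{2i} =
x\cdot$ and $\phi^{2i+1} = y\cdot$, we have $\phi^i\phi^{i+1} = f\cdot
: S(-i-2) \to S(-i)$ for every $i \in \ZZ$, so $\phi =
\{\phi^i\}_{i\in \ZZ} \in \NMF^{\ZZ}_S(f)$ is a noncommutative graded
right matrix factorization of rank $1$ and period $2$, recovering the
matrix factorization $(x,y)$ of Eisenbud's theory.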
\begin{remark} \label{rem.lambda}
Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) element.
\begin{enumerate}
\item{} Let $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$ be a noncommutative right matrix factorization of $f$ over $S$ of rank $r$.
We often identify $F^i=S^r$. In this case, every $\phi^i$ is the left multiplication of a matrix $\Phi^i$ whose entries are elements in $S$, so that $\Phi^i\Phi^{i+1}=fE_r$ where $E_r$ is the identity matrix of size $r$.
\item{} Let $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$ be a noncommutative graded right matrix factorization of $f$ over $S$ of rank $r$ such that $F^i=\bigoplus _{s=1}^rS(-m_{is})$.
In this case, we may write $\phi^i=(\phi^i_{st})$ where $\phi^i_{st}:S(-m_{i+1, t})\to S(-m_{is})$ is the left multiplication of an element in $S_{m_{i+1, t}-m_{is}}$, so $\phi^i$ is the left multiplication of a matrix $\Phi^i$ whose entries are homogeneous elements in $S$, so that $\Phi^i\Phi^{i+1}=fE_r$ where $E_r$ is the identity matrix of size $r$.
\item{} Two noncommutative (graded) right matrix factorizations $\phi$ and $\psi$ are isomorphic if and only if there are invertible matrices $P^i$ whose entries are in $S$ such that $\Psi^{i}=P^i\Phi^{i}(P^{i+1})^{-1}$ for every $i\in \ZZ$.
$$\begin{CD}
\cdots & @>\Phi ^{i+2}\cdot >> F^{i+2} @>\Phi ^{i+1}\cdot >>
F^{i+1} @>\Phi ^i\cdot >> F^i @>\Phi ^{i-1}\cdot >>& \cdots \\
& & & @VP^{i+2}\cdot VV @VP^{i+1}\cdot VV @VP^i\cdot VV \\
\cdots & @>\Psi^{i+2}\cdot>>G^{i+2} @>\Psi^{i+1}\cdot>> G^{i+1} @>\Psi^i\cdot >> G^i @>\Psi^{i-1}\cdot>>& \cdots
\end{CD}$$
\item{} Let $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$ be a sequence of right $S$-module homomorphisms between free right $S$-modules $F^i$ such that $\phi^i\phi^{i+1}=\l_if\cdot :F^{i+2}\to F^i$ for some unit element $\l_i\in S$ for every $i\in \ZZ$. If $\{\psi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$ is a sequence of right $S$-module homomorphisms inductively defined by
\begin{align*}
\dots, &\psi^{-3}=\frac{\l_{-2}}{\l_{-1}\l_{-3}}\phi^{-3}, \psi^{-2}=\frac{\l_{-1}}{\l_{-2}}\phi^{-2}, \psi^{-1}=\frac{1}{\l_{-1}}\phi^{-1}, \\
&\psi^0=\phi ^0, \psi^1=\frac{1}{\l _0}\phi^1, \psi^2=\frac{\l_0}{\l_1}\phi^2, \psi ^3=\frac{\l_1}{\l_0\l_2}\phi^3, \psi ^4=\frac{\l_0\l_2}{\l_1\l_3}\phi^4, \dots,
\end{align*}
then $\psi^i\psi^{i+1}=f\cdot :F^{i+2}\to F^i$.
Since we have a commutative diagram
\[\xymatrix@R=2pc@C=6pc{
F^{i+2} \ar[d]_{\cong} \ar[r]^{\phi^i\phi^{i+1}=\l_if\cdot} &F^{i} \ar[d]^{\cong} \\
F^{i+2} \ar[r]^{\psi^i\psi^{i+1}=f\cdot} &F^{i},
}\]
it follows that $\phi=\{\phi^i\}_{i\in \ZZ} \in \NMF_S(f)$.
In particular, $\NMF_S(f)=\NMF_S(\l f)$ for every unit element $\l\in S$.
\end{enumerate}
\end{remark}
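The rescaling in Remark \ref{rem.lambda} (4) can be packaged as
follows: when the unit elements $\l_i$ are central (for instance
$\l_i \in k$, as in Example \ref{ex.2.3} below), setting $\psi^i =
c_i\phi^i$ with
\[
c_0 = 1, \qquad c_{i+1} = c_i^{-1}\l_i^{-1} \quad (i \in \ZZ),
\]
gives $\psi^i\psi^{i+1} = c_ic_{i+1}\l_if = f$, and one recovers the
formulas displayed in Remark \ref{rem.lambda} (4).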
\begin{example} \label{ex.2.3}
In practice, it is often easier to find $\phi\in \NMF_S(f)$ such that $\phi^i\phi^{i+1}=\l_if\cdot $ for some unit element $\l_i\in S$ than the one such that $\phi^i\phi^{i+1}=f\cdot $ for every $i\in \ZZ$. For example, let $S=k\<x, y\>/(x^2, y^2)$,
$f=\a xy+yx\in S$ where $0\neq \a\in k$, $A=S/(f)$, and
$M=A/(ax+by)A\in \mod A$ where $0\neq a, b\in k$. The ``natural'' choice of the differentials in the free resolution of $M$ is induced by $\phi^i=(ax+\a^iby)\cdot :S\to S$, but
$$\Phi^i\Phi^{i+1}=(ax+\a^iby)(ax+\a^{i+1}by)=a^2x^2+\a^iab(\a xy+yx)+\a^{2i+1}b^2y^2=\a^iabf.$$
If we define
\begin{align*}
\Psi^{2i} := \a^{-i}\Phi^{2i}=\a^{-i}ax+\a ^iby \quad \textrm{and}\quad
\Psi^{2i+1} := \a^{-i}a^{-1}b^{-1}\Phi^{2i+1}=\a^{-i}b^{-1}x+\a ^{i+1}a^{-1}y,
\end{align*}
then
\begin{align*}
\Psi ^{2i}\Psi ^{2i+1} & =(\a ^{-i}ax+\a^iby)(\a^{-i}b^{-1}x+\a^{i+1}a^{-1}y)\\
&=\a ^{-2i}ab^{-1}x^2+\a xy+yx+\a^{2i+1}ba^{-1}y^2=f, \\
\Psi ^{2i+1}\Psi ^{2i+2} & =(\a^{-i}b^{-1}x+\a^{i+1}a^{-1}y)(\a ^{-i-1}ax+\a^{i+1}by) \\
& =(\a ^{-2i-1}ab^{-1}x^2+\a xy+yx+\a^{2i+2}ba^{-1}y^2)=f,
\end{align*}
so $\psi=\{\psi^i\}_{i\in \ZZ}\in \NMF_S(f)$, hence $\phi=\{\phi^i\}_{i\in \ZZ}\in \NMF_S(f)$.
\end{example}
\begin{lemma} \label{lem.dmf}
Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) element. If $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}$ is a noncommutative (graded) right matrix factorization of $f$ over $S$, then
\begin{align*}
& \Hom_S(\phi , S):=\{\Hom_S(\phi^{-i-1}, S):\Hom_S(F^{-i-1}, S)\to \Hom_S(F^{-i}, S)\}_{i\in \ZZ}
\end{align*}
is a noncommutative (graded) left matrix factorization of $f$ over $S$.
\end{lemma}
\begin{proof}
We have $\Hom_S(\phi ^{-i-1}, S)\Hom_S(\phi ^{-i-2}, S)=\Hom_S(\phi^{-i-2}\phi ^{-i-1}, S)=\Hom_S(f\cdot , S)=\cdot f:\Hom_S(F^{-i-2}, S)\to \Hom_S(F^{-i}, S)$, so the result follows.
\end{proof}
Let $S$ be a (graded) ring, $f\in S$ a (homogeneous) element, and $A=S/(f)$.
{\bf We tacitly assume that $f$ is a nonzero non-unit element for the rest of the paper, i.e., $\deg f\geq 1$ in the graded case}.
For a noncommutative (graded) right matrix factorization $\phi$ of $f$ over $S$, we define the complex $C (\phi)$ of (graded) right $A$-modules by
$$\begin{CD} \cdots @>\overline {\phi^2} >> \overline {F^2} @>\overline {\phi^1} >> \overline {F^1} @>\overline {\phi^0} >> \overline {F^0} @>\overline {\phi^{-1}}>> \overline {F^{-1}} @>\overline {\phi^{-2}}>> \cdots \end{CD}$$
Since $\overline {\phi^i}\; \overline {\phi ^{i+1}}=\overline {\phi^i\phi^{i+1}}=\overline {f\cdot}=0$, $C (\phi)$ is in fact a complex of (graded) right $A$-modules.
If $\mu :\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\to \{\psi^i:G^{i+1}\to G^i\}_{i\in \ZZ}$ is a morphism of noncommutative (graded) right matrix factorizations,
then the commutative diagram
$$\begin{CD}
F^{i+1} @>\phi ^i>> F^i \\
@V\mu ^{i+1}VV @VV\mu ^iV \\
G^{i+1} @>\psi^i>> G^i
\end{CD}$$
in $\Mod S$ ($\GrMod S$) induces the commutative diagram
$$\begin{CD}
\overline {F^{i+1}} @>\overline {\phi ^i}>> \overline {F^i} \\
@V\overline{\mu ^{i+1}}VV @VV\overline {\mu ^i}V \\
\overline {G^{i+1}} @>\overline {\psi^i}>> \overline {G^i}
\end{CD}$$
in $\Mod A$ ($\GrMod A$) for every $i\in \ZZ$, so $C$ is a functor from the category of noncommutative (graded) right matrix factorizations to the category of cochain complexes of (graded) right $A$-modules.
\section{Twisting Systems}
Twisting is a useful operation that is available only for graded algebras. In this section, we deal exclusively with graded algebras.
\begin{definition} Let $S$ be a graded algebra. A twisting system on $S$ is a sequence $\theta=\{\theta_i\}_{i\in \ZZ}$ of graded $k$-linear automorphisms of $S$ such that $\theta_i(a\theta_j(b))=\theta_i(a)\theta_{i+j}(b)$ for every $i, j\in \ZZ$ and every $a\in S_j, b\in S$. The twisted graded algebra of $S$ by a twisting system $\theta $ is a graded algebra $S^{\theta}$ where $S^{\theta}=S$ as a graded $k$-vector space with the new multiplication $a^{\theta}b^{\theta}=(a\theta_{i}(b))^{\theta}$ for $a^{\theta}\in S^{\theta}_i, b^{\theta}\in S^{\theta}$. Here we write $a^{\theta}\in S^{\theta}$ for $a\in S$ when viewed as an element of $S^{\theta}$ and the product $a\theta_{i}(b)$ is computed in $S$.
\end{definition}
Let $S$ be a graded algebra. If $\s\in \GrAut S$, then $\{\s^i\}_{i\in \ZZ}$ is a twisting system on $S$. In this case, we simply write $S^{\s}:=S^{\{\s^i\}}$.
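For example (the standard first example of a twist): let $S=k[x, y]$ and $\s\in \GrAut S$ defined by $\s(x)=x, \s(y)=qy$ for some $0\neq q\in k$. Then in $S^{\s}$,
$$x^{\s}y^{\s}=(x\s(y))^{\s}=q(xy)^{\s} \quad \textrm{and}\quad y^{\s}x^{\s}=(y\s(x))^{\s}=(xy)^{\s},$$
so $x^{\s}y^{\s}=q\,y^{\s}x^{\s}$, that is, $S^{\s}$ is isomorphic to the quantum plane $k\<x, y\>/(xy-qyx)$.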
By \cite [Proposition 2.2, Proposition 2.4]{Zh}, {\bf we tacitly assume that $\theta_0=\id$ for every twisting system $\theta$ for the rest of the paper.} We recall some results from \cite {Zh} which are needed in this paper.
\begin{lemma}[{\cite [Proposition 2.5]{Zh}}] \label{lem.gra}
Let $S$ be a graded algebra.
If $\theta=\{\theta_i\}_{i\in \ZZ}$ is a twisting system on $S$, then $\theta^{-1}=\{\theta_i^{-1}\}_{i\in \ZZ}$ is a twisting system on $S^{\theta}$ such that $(S^{\theta})^{\theta^{-1}}=S$.
\end{lemma}
Let $S$ be a graded algebra. For $0\neq \l\in k$, the map $\e_{\l}:S\to S$ defined by $\e_{\l}(a)=\l^ia$ for $a\in S_i$ is a graded algebra automorphism of $S$. We leave it to the reader to check the following lemma.
\begin{lemma} \label{lem.el}
Let $S$ be a graded algebra and $\theta=\{\theta_i\}_{i\in \ZZ}$ a twisting system on $S$.
For $0\neq \l\in k$, $\e_{\l}\theta :=\{ \e_{\l}^ i \theta_i \}_{i \in \ZZ}$ is a twisting system on $S$ and the map $\phi:S^{\theta}\to S^{\e_{\l}\theta}$ defined by $\phi (a^{\theta})=\l^{i(i-1)/2}a^{\e_{\l}\theta}$ for $a\in S_i$ is an isomorphism of graded algebras.
\end{lemma}
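A sketch of the key computation (multiplicativity of $\phi$): for $a\in S_i$ and $b\in S_j$, we have $\e_{\l}^i\theta_i(b)=\l^{ij}\theta_i(b)$ since $\theta_i(b)\in S_j$, so
\begin{align*}
\phi(a^{\theta})\phi(b^{\theta}) &=\l^{i(i-1)/2+j(j-1)/2}(a\,\e_{\l}^i\theta_i(b))^{\e_{\l}\theta}
=\l^{i(i-1)/2+j(j-1)/2+ij}(a\theta_i(b))^{\e_{\l}\theta}\\
&=\l^{(i+j)((i+j)-1)/2}(a\theta_i(b))^{\e_{\l}\theta}
=\phi((a\theta_i(b))^{\theta})=\phi(a^{\theta}b^{\theta}).
\end{align*}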
\begin{remark}
\label{rem.imp}
Let $S$ be a graded algebra, $f\in S_d$, and $\theta$ a twisting system on $S$.
\begin{enumerate}
\item{} Suppose that $\theta_i(f)=\l_if$ for some $0\neq \l_i\in k$ for every $i\in \ZZ$. Since $\theta_i(1)=1$ for every $i\in \ZZ$ by \cite [Proposition 2.2]{Zh},
$$\l_j\l_if = \l_j\theta_i(f) = \theta _i(\l_jf) = \theta_i(\theta_j(f))= \theta_i(1)\theta_{i+j}(f)=\l_{i+j}f, $$
so $\l_i=\l^i$ for some $0\neq \l\in k$ for every $i\in \ZZ$.
\item{} Suppose that $\theta_i(f)=\l^if$ for some $0\neq \l\in k$ for every $i\in \ZZ$. If $\theta'=\e_{\l^{-1/d}}\theta$, then
$$\theta'_i(f)=\e_{\l^{-1/d}}^i\theta_i(f)=\l^{(-1/d)di}\l^if=f.$$
Moreover, there exists a graded algebra isomorphism $\phi:S^{\theta}\to S^{\theta'}$ such that $\phi(f^{\theta})=\l^{d(d-1)/2}f^{\theta'}$ by Lemma \ref{lem.el}, so
$$\NMF^{\ZZ}_{S^{\theta}}(f^{\theta})\cong \NMF^{\ZZ}_{S^{\theta'}}(\l ^{d(d-1)/2}f^{\theta'})=\NMF^{\ZZ}_{S^{\theta'}}(f^{\theta'})$$
by Remark \ref{rem.lambda} (4).
\item{}
By (1) and (2), we may replace the condition $\theta_i(f)=\l_if$ for some $0\neq \l_i\in k$ for every $i\in \ZZ$ by the simpler condition $\theta_i(f)=f$ for every $i\in \ZZ$ in some situations (see Theorem \ref{thm.cckm}).
\end{enumerate}
\end{remark}
Let $S$ be a graded algebra and $\theta $ a twisting system on $S$. For $M\in \GrMod S$, we define $M^{\theta}\in \GrMod S^{\theta}$ by $M^{\theta}=M$ as a graded $k$-vector space with the new action $m^{\theta}a^{\theta}=(m\theta_i(a))^{\theta}$ for $m^{\theta}\in M^{\theta}_i, a^{\theta}\in S^{\theta}$. For $\phi:M\to N$ in $\GrMod S$, we define $\phi ^{\theta}:M^{\theta}\to N^{\theta}$ in $\GrMod S^{\theta}$ by $\phi^{\theta}(m^{\theta})=\phi (m)^{\theta}$. In fact, for $m^{\theta}\in M_i^{\theta}, a^{\theta}\in S^{\theta}$,
\begin{align*}
\phi^{\theta} (m^{\theta}a^{\theta}) & = \phi ^{\theta}((m\theta_i(a))^{\theta})=\phi (m\theta_i(a))^{\theta} =(\phi (m)\theta_i(a))^{\theta}
=\phi (m)^{\theta}a^{\theta}=\phi^{\theta}(m^{\theta})a^{\theta},
\end{align*}
so $\phi^{\theta}$ is a graded right $S^{\theta}$-module homomorphism.
\begin{theorem}[{\cite [Theorem 3.1]{Zh}}] \label{thm.Z}
For a graded algebra $S$ and a twisting system $\theta$ on $S$, $(-)^{\theta}:\GrMod S\to \GrMod S^{\theta}$ is an equivalence functor.
\end{theorem}
\begin{lemma} \label{lem.si}
If $S$ is a graded algebra and $\theta$ is a twisting system on $S$, then the map $\theta_{m}:S^{\theta}(-m)\to S(-m)^{\theta}; \; a^{\theta}\mapsto \theta_{m}(a)^{\theta}$ is a graded right $S^{\theta}$-module isomorphism.
\end{lemma}
\begin{proof} This follows from the proof of \cite [Theorem 3.4]{Zh}.
\end{proof}
By Lemma \ref{lem.si}, a map $\phi:S(-m-m')\to S(-m)$ in $\GrMod S$ induces the map
$$\begin{CD} \widetilde {\phi^{\theta}}:S^{\theta}(-m-m') @>\theta_{m+m'}>\cong > S(-m-m')^{\theta} @>\phi^{\theta}>> S(-m)^{\theta} @>\theta_{m}^{-1}>\cong > S^{\theta}(-m),\end{CD}$$
in $\GrMod S^{\theta}$. The map $\widetilde {\phi^{\theta}}$ is defined by $\widetilde {\phi^{\theta}}(c^{\theta})=(\theta_{m}^{-1}\phi\theta_{m+m'}(c))^{\theta}$. If $\psi:S(-m-m'-m'')\to S(-m-m')$ is a map in $\GrMod S$, then
\begin{align*}
\widetilde {\phi^{\theta}}\widetilde {\psi^{\theta}}(c^{\theta})
& =\widetilde {\phi^{\theta}}((\theta_{m+m'}^{-1}\psi\theta_{m+m'+m''}(c))^{\theta})=(\theta_{m}^{-1}\phi\theta_{m+m'}\theta_{m+m'}^{-1}\psi\theta_{m+m'+m''}(c))^{\theta} \\
& = (\theta_{m}^{-1}\phi\psi\theta_{m+m'+m''}(c))^{\theta} = \widetilde {(\phi\psi)^{\theta}}(c^{\theta}),
\end{align*}
so $\widetilde {\phi^{\theta}}\widetilde {\psi^{\theta}}=\widetilde {(\phi\psi)^{\theta}}$.
The map $\widetilde {\phi^{\theta}}$ must be left multiplication by a certain element of $S^{\theta}_{m'}$. In fact, if $\phi=a\cdot :S(-m-m')\to S(-m)$ for $a\in S_{m'}$,
then
\begin{align*}
\widetilde {\phi^{\theta}}(c^{\theta}) &
= (\theta_{m}^{-1}\phi\theta_{m+m'}(c))^{\theta}
= (\theta_{m}^{-1}(a\theta_{m+m'}(c)))^{\theta}
= (\theta_{m}^{-1}(\theta_m(\theta_m^{-1}(a))\theta_{m+m'}(c)))^{\theta}\\
&=(\theta_{m}^{-1}\theta_m(\theta_m^{-1}(a)\theta_{m'}(c)))^{\theta}
= (\theta_m^{-1}(a)\theta_{m'}(c))^{\theta}=\theta_m^{-1}(a)^{\theta}c^{\theta},
\end{align*}
so $\widetilde {\phi^{\theta}}$ is left multiplication by $\theta_{m}^{-1}(a)^{\theta}\in S^{\theta}_{m'}$. That is, we have the following commutative diagram
\[\xymatrix@R=2pc@C=6pc{
S(-m-m')^{\theta} \ar[r]^(0.55){(a\cdot)^{\theta}} &S(-m)^{\theta} \\
S^{\theta}(-m-m') \ar[r]^(0.55){\theta_m^{-1}(a)^{\theta}\cdot} \ar[u]^{\theta_{m+m'}}_{\cong} &S^{\theta}(-m). \ar[u]_{\theta_{m}}^{\cong}
}
\]
If $\phi=(\phi_{ij})$ is a map of graded free right $S$-modules,
then we define the map $\widetilde {\phi^{\theta}}=(\widetilde {\phi^{\theta} _{ij}})$ of graded free right $S^{\theta}$-modules.
\begin{theorem} \label{thm.nmft} \label{thm.cckm}
Let $S$ be a graded algebra, and $f\in S_d$. If $\theta$ is a
twisting system on $S$ such that $\theta_i(f)=\l^if$ for some $0\neq \l\in k$ and for every $i\in \ZZ$,
then
$$\operatorname {NMF}^{\ZZ}_S(f)\to \operatorname {NMF}^{\ZZ}_{S^{\theta}}(f^{\theta}); \; \{\phi^i\}_{i \in \ZZ}\mapsto \{(\phi ^i)^{\theta}\}_{i \in \ZZ}$$
is an equivalence functor.
\end{theorem}
\begin{proof}
By Remark \ref{rem.imp} (3), we may assume that $\theta_i(f)=f$ for every $i\in \ZZ$.
Since $(-)^{\theta}:\GrMod S\to \GrMod S^{\theta}$ is a functor by Theorem \ref{thm.Z}, and $\theta_i^{-1}(f)^\theta = f^\theta$ for every $i\in \ZZ$,
if $\{\phi^i\}_{i \in \ZZ}\in \NMF^{\ZZ}_S(f)$, then the commutative diagram
$$
\xymatrix@R=2pc@C=2pc{
F^{i+2} \ar[d]_{\cong} \ar[r]^{\phi^i\phi^{i+1}} &F^i \ar[d]^{\cong} \\
\bigoplus _{s=1}^rS(-m_{is}-d) \ar[r]^(0.54){f\cdot} &\bigoplus _{s=1}^rS(-m_{is})
}
$$
induces the commutative diagram
$$
\xymatrix@R=2pc@C=8pc{
(F^{i+2})^{\theta} \ar[d]_{\cong}
\ar[r]^{(\phi^i)^\theta (\phi^{i+1})^\theta\ =\ (\phi^i \phi^{i+1})^\theta}
&(F^i)^{\theta} \ar[d]^{\cong} \\
\bigoplus _{s=1}^rS(-m_{is}-d)^\theta
\ar[r]^(0.54){(f\cdot)^\theta}
&\bigoplus _{s=1}^rS(-m_{is})^\theta\\
\bigoplus _{s=1}^rS^\theta(-m_{is}-d) \ar[u]^{\bigoplus_{s=1}^r \theta_{m_{is}+d}}_{\cong} \ar[r]^(0.54){
f^\theta \cdot} &\bigoplus _{s=1}^rS^\theta(-m_{is}), \ar[u]_{\bigoplus_{s=1}^r \theta_{m_{is}}}^{\cong}
}
$$
so $\{(\phi ^i)^{\theta}\}_{i \in \ZZ}\in \NMF^{\ZZ}_{S^{\theta}}(f^{\theta})$.
Since $(-)^{\theta}:\GrMod S\to \GrMod S^{\theta}$ is a functor, a commutative diagram
$$\begin{CD}
F^{i+1} @>\phi ^i>> F^i \\
@V\mu ^{i+1}VV @VV\mu ^iV \\
G^{i+1} @>\psi^i>> G^i
\end{CD}$$
in $\GrMod S$ induces a commutative diagram
$$\begin{CD}
(F^{i+1})^{\theta} @>(\phi ^i)^{\theta}>> (F^i)^{\theta} \\
@V(\mu ^{i+1})^{\theta}VV @VV(\mu ^i)^{\theta}V \\
(G^{i+1})^{\theta} @>(\psi^i)^{\theta}>> (G^i)^{\theta}
\end{CD}$$
in $\GrMod S^{\theta}$, so
$$\operatorname {NMF}^{\ZZ}_S(f)\to \operatorname {NMF}^{\ZZ}_{S^{\theta}}(f^{\theta}); \; \{\phi^i\}_{i \in \ZZ}\mapsto \{(\phi ^i)^{\theta}\}_{i \in \ZZ}$$
is a functor. By Lemma \ref{lem.gra},
$$\operatorname {NMF}^{\ZZ}_{S^{\theta}}(f^{\theta})\to \operatorname {NMF}^{\ZZ}_{(S^{\theta})^{\theta^{-1}}}((f^{\theta})^{\theta^{-1}})\cong \operatorname {NMF}^{\ZZ}_{S}(f); \; \{\psi^i\}_{i \in \ZZ}\mapsto \{(\psi ^i)^{\theta^{-1}}\}_{i \in \ZZ}$$ is the inverse functor.
\end{proof}
\begin{remark} If $f$ is a ``regular normal" element and $\nu$ is the ``normalizing" automorphism of $f$, then Cassidy-Conner-Kirkman-Moore \cite {CCKM} defined the notion of twisted matrix factorization of $f$ (see the next section). In this case, we will show that the category of noncommutative graded matrix factorizations of $f$ is equivalent to the category of twisted graded matrix factorizations of $f$ (Proposition \ref{prop.ctmf}), so Theorem \ref{thm.cckm} was proved in \cite [Theorem 3.6]{CCKM} with the technical condition $\theta_i\nu^{-1} \theta_d=\nu^{-1}\theta_{i+d}$ for every $i\in \ZZ$. Since Theorem \ref{thm.cckm} does not require $f$ to be regular normal, it is a generalization of \cite [Theorem 3.6]{CCKM}.
\end{remark}
Let $S$ be a graded algebra, $f\in S_d$, and $\theta$ a
twisting system on $S$ such that $\theta_i(f)=f$ for all $i\in \ZZ$. If $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i \in \ZZ}\in \NMF^{\ZZ}_S(f)$ such that $F^i\cong S(-m_i)^r$, then
$\{\widetilde {(\phi^i)^{\theta}}\}_{i \in \ZZ}=\{\theta_{m_i}^{-1}(\phi^i)^{\theta}\}_{i \in \ZZ}\in \NMF^{\ZZ}_{S^{\theta}}(f^{\theta})$.
In particular, if $F^0\cong S^r$, $d=2$ and $\theta =\{\s^i\}_{i \in \ZZ}$ for $\s\in \GrAut S$, then $m_i=i$ for all $i\in \ZZ$, so
$\{\widetilde {(\phi^i)^{\s}}\}_{i \in \ZZ}=\{\s^{-i}(\phi^i)^{\s}\}_{i \in \ZZ}\in \NMF^{\ZZ}_{S^{\s}}(f^{\s})$. The following simple example illustrates the difference between $\{(\phi^i)^{\theta}\}_{i \in \ZZ}$ and $\{\widetilde {(\phi^i)^{\theta}}\}_{i \in \ZZ}$.
\begin{example}
Let $S=k[x, y]$ with $\deg x=\deg y=1$, $f=xy\in S_2$, and $\s\in \GrAut S$ such that $\s(x)=y, \s(y)=x$ so that $\s(f)=f$. The noncommutative graded right matrix factorization of $f$ over $S$
$$\begin{CD}
\phi:\cdots @>y\cdot >> S(-3) @>x\cdot >> S(-2) @>y\cdot >> S(-1) @>x\cdot >> S @>y\cdot >> \cdots
\end{CD}$$
induces the commutative diagram
$$\begin{CD}
\phi^{\s}: \cdots @>(y\cdot)^{\s} >> S(-3)^{\s} @>(x\cdot )^{\s}>> S(-2)^{\s} @>(y\cdot)^{\s} >> S(-1)^{\s} @>(x\cdot)^{\s} >> S^{\s} @>(y\cdot)^{\s} >> \cdots \\
& & @V\cong VV @V\cong VV @V\cong VV @V\cong VV \\
\widetilde {\phi^{\s}}:\cdots @>x^{\s}\cdot >> S^{\s}(-3) @>x^{\s}\cdot >> S^{\s}(-2) @>x^{\s}\cdot >> S^{\s}(-1) @>x^{\s}\cdot >> S^{\s} @>x^{\s}\cdot >> \cdots \\
\end{CD}$$
It follows that $\widetilde {\phi^{\s}}$ is a noncommutative graded right matrix factorization of $f^{\s}=(xy)^{\s}=x^{\s}x^{\s}$ over $S^{\s}$ in the strict sense, but
the above commutative diagram shows that ${\phi^{\s}}$ is also (isomorphic to) a noncommutative graded right matrix factorization of $f^{\s}$ over $S^{\s}$.
\end{example}
With the above example understood, we often identify $\{(\phi^i)^{\theta}\}_{i\in \ZZ}$ with $\{\widetilde {(\phi^i)^{\theta}}\}_{i\in \ZZ}$.
\begin{example}
Let $S=k\<x, y\>/(x^2, y^2)$ with $\deg x=\deg y=1$, $f=xy+yx\in S_2$, and $A=S/(f)$.
If $\s'=\begin{pmatrix} \a & 0 \\ 0 & 1 \end{pmatrix}\in \GrAut S$, then $\s'(f)=\a f$.
If we modify $\s'$ as $\s=\begin{pmatrix} \a^{1/2} & 0 \\ 0 & \a^{-1/2} \end{pmatrix}=\a^{-1/2}\s'\in \GrAut S$, then $S^{\s'}\cong S^{\s}$ and $\s(f)=f$.
For $0\neq a, b\in k$, $\{\phi^i\}_{i\in \ZZ}$ defined by $\Phi^{2i}=ax+by, \Phi^{2i+1}=b^{-1}x+a^{-1}y$ is a noncommutative graded right matrix factorization of $f$ over $S$, so
$\{\s^{-i}(\phi^i)\}_{i\in \ZZ}$ defined by
$\Psi^{2i}=\s^{-2i}(\Phi^{2i})=\a^{-i}ax+\a ^{i}by, \Psi^{2i+1}=\s^{-2i-1}(\Phi^{2i+1})=\a^{-(2i+1)/2}b^{-1}x+\a ^{(2i+1)/2}a^{-1}y$, that is,
$$\Psi ^i=\begin{cases} \a^{-i/2}ax+\a^{i/2}by & \textnormal { if $i$ is even, } \\
\a^{-i/2}b^{-1}x+\a^{i/2}a^{-1}y & \textnormal { if $i$ is odd, }
\end{cases}$$
is a noncommutative graded right matrix factorization of $f^{\s}=\a ^{1/2}xy+\a ^{-1/2}yx$ over $S^{\s}$. (Compare with Example \ref{ex.2.3}.)
\end{example}
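As a consistency check, the factorization can be verified directly in $S^{\s}$ for $i=0$: using $x^{\s}x^{\s}=y^{\s}y^{\s}=0$ in $S^{\s}$ and omitting the superscript $\s$ on the variables as above,
$$\Psi^0\Psi^1=(ax+by)(\a^{-1/2}b^{-1}x+\a^{1/2}a^{-1}y)=\a^{1/2}xy+\a^{-1/2}yx=f^{\s}.$$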
\section{Regular Normal Elements}
\begin{definition}
Let $S$ be a ring, and $f\in S$.
\begin{enumerate}
\item{} We say that $f$ is regular if, for every $a\in S$, $af=0$ or $fa=0$ implies $a=0$.
\item{} We say that $f$ is normal if $Sf=fS$.
\end{enumerate}
\end{definition}
\begin{remark}
Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) element (of degree $d$).
\begin{enumerate}
\item{} $f$ is regular if and only if the map $f\cdot :S\to S$ ($f\cdot :S(-d)\to S$) is an injective (graded) right $S$-module homomorphism and $\cdot f:S\to S$ ($\cdot f:S(-d)\to S$) is an injective (graded) left $S$-module homomorphism.
\item{} $f$ is regular normal if and only if there exists a unique (graded) ring automorphism $\nu_f$ of $S$ such that $af=f\nu_f(a)$ for $a\in S$.
We call $\nu := \nu_f$ the normalizing automorphism of $f$.
\end{enumerate}
\end{remark}
\begin{remark}
Let $S$ be a noetherian AS-regular algebra and $f \in S_d$ a regular normal element.
Then $A= S/(f)$ is a noetherian AS-Gorenstein algebra, which is called a noncommutative hypersurface of degree $d$.
\end{remark}
If $\Phi=(a_{st})$ is a matrix whose entries are (homogeneous) elements in $S$, and $\s$ is a (graded) algebra automorphism of $S$, then we write $\s(\Phi)=(\s(a_{st}))$. If $\phi$ is a (graded) right $S$-module homomorphism
given by the left multiplication of $\Phi$, then we write $\s(\phi)$ for the (graded) right $S$-module homomorphism given by the left multiplication of $\s(\Phi)$.
Let $f\in S$ be a (homogeneous) regular normal element. Since $f\sf(\Phi)=(f\sf(a_{st}))=(a_{st}f)=\Phi f$, we have $f\sf(\phi)=\phi f$
($f(\sf(\phi)(-d))=\phi f$ where $d=\deg f$).
\begin{theorem} \label{thm.nmf}
Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) regular normal element (of degree $d$).
\begin{enumerate}
\item{} If $\phi$ is a noncommutative (graded) right matrix factorization of $f$ over $S$, then $\phi^{i+2}=\sf(\phi^i)$ ($\phi^{i+2}=\sf(\phi^i)(-d)$) for every $i\in \ZZ$. It follows that $\phi$ is uniquely determined by $\phi^0$ and $\phi^1$.
\item{} Let $\phi^0, \phi^1$ be (graded) right $S$-module homomorphisms between (graded) free right $S$-modules of rank $r$ such that $\phi^0\phi^1=f\cdot$. If $\phi^0$ is injective, then there exists a unique (graded) right matrix factorization $\phi$ extending $\phi^0, \phi^1$.
\item{} If $\mu:\phi\to \psi$ is a morphism of noncommutative (graded) right matrix factorizations of $f$ over $S$, then $\mu^{i+2}=\sf(\mu^i)$ ($\mu^{i+2}=\sf(\mu^i)(-d)$) for every $i\in \ZZ$. It follows that $\mu$ is uniquely determined by $\mu^0$ and $\mu^1$.
\item{} Let $\phi, \psi$ be noncommutative (graded) right matrix factorizations of $f$ over $S$, and let $\mu^0:F^0\to G^0, \mu^1:F^1\to G^1$ be (graded) right $S$-module homomorphisms such that $\mu^0 \phi^0=\psi^0\mu^1 $. Then there exists a unique morphism $\mu:\phi\to \psi$ extending $\mu ^0, \mu ^1$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Since $\phi^i\phi^{i+1}=\phi^{i+1}\phi^{i+2}=f\cdot $ for every $i\in \ZZ$,
$$f\sf(\phi^i)=\phi^i f=\phi^i\phi^{i+1}\phi^{i+2}=f\phi^{i+2}.$$
Since $f$ is regular (the map $f\cdot$ is injective), $\phi^{i+2}=\sf(\phi^i)$.
(2) Let $\phi^0:G\to F, \phi ^1:F\to G$ such that $\phi^0\phi^1=f\cdot :F\to F$.
Since $\phi^0\phi^1\sf(\phi^0)=f\sf(\phi^0)=\phi^0 f$, and $\phi^0$ is injective, $\phi^1\sf(\phi^0)=f\cdot :G\to G$, so define $\phi^2=\sf(\phi^0)$. Since
$\sf^{-1}(\phi^1)\phi^0=\sf^{-1}(\phi^1\sf(\phi^0))=\sf^{-1}(f)\cdot =f\cdot : G\to G$, define $\phi^{-1}=\sf^{-1}(\phi^1)$.
Inductively we can construct a noncommutative (graded) right matrix factorization $\phi$ extending $\phi^0, \phi^1$.
The uniqueness follows from (1).
(3) By (1), $\sf^{-1}(\psi^{i+1})\psi^i= \psi^{i-1}\psi^i =f\cdot$, so
\begin{align*}
f\psi^{i+1}\mu^{i+2} & = f\mu ^{i+1}\phi^{i+1} = \sf^{-1}(\psi^{i+1})\psi^i\mu ^{i+1}\phi^{i+1}\\
&= \sf^{-1}(\psi^{i+1})\mu ^i\phi^i\phi^{i+1} = \sf^{-1}(\psi^{i+1})\mu ^if = f\psi^{i+1}\sf(\mu ^i).
\end{align*}
Since $f\psi^{i+1}$ is injective, $\mu^{i+2}=\sf(\mu^i)$.
(4) Since $\psi ^0\psi^1\sf(\mu ^0)=f\sf(\mu ^0)=\mu ^0f=\mu ^0\phi^0\phi^1=\psi^0\mu ^1\phi^1$ and $\psi^0$ is injective, $\psi^1\sf(\mu^0)=\mu^1\phi^1$, so define $\mu^2=\sf(\mu^0)$.
Since $\sf^{-1}(\mu^1)\phi^{-1}=\sf^{-1}(\mu^1)\sf^{-1}(\phi^1)=\sf^{-1}(\mu^1\phi^1)=\sf^{-1}(\psi^1\sf(\mu^0))=\sf^{-1}(\psi^1)\mu^0=\psi^{-1}\mu^0$, define $\mu^{-1}=\sf^{-1}(\mu^1)$.
Inductively we can construct a morphism $\mu:\phi\to \psi$ extending $\mu^0, \mu^1$.
The uniqueness follows from (3).
\end{proof}
In the case that $f\in S$ is a homogeneous regular normal element, the notion of twisted graded matrix factorization was defined in \cite {CCKM}. We will show that a twisted graded matrix factorization and a noncommutative graded matrix factorization are the same if $f\in S$ is a homogeneous regular normal element.
Let $S$ be a connected graded algebra.
For $\s \in \GrAut S$ and $M \in \GrMod S$, we define $M_\s \in \GrMod S$ where $M_\s = M$ as a graded $k$-vector space with the new action $m*a = m\s(a)$ for $m \in M, a \in S$.
If $\phi: M \to N$ is a graded right $S$-module homomorphism, then $\phi_{\s} :M_\s \to N_\s; \; m \mapsto \phi(m)$ is also a graded right $S$-module homomorphism.
Let $f \in S_d$ be a regular normal element, and $\nu$ the normalizing automorphism of $f$.
Then we write $M_{\tw} = M_{\nu^{-1}}(-d)$. Note that a graded right $S$-module homomorphism $\phi: M \to N$ naturally induces a graded right $S$-module homomorphism $\phi_{\tw}: M_{\tw} \to N_{\tw}$.
\begin{definition}[\textnormal{\cite[Definitions 2.2, 2.5]{CCKM}}]
Let $S$ be a locally finite connected graded algebra, $f \in S_d$ a regular normal element, and $\nu$ the normalizing automorphism of $f$.
A twisted graded right matrix factorization of $f$ over $S$ is an ordered pair of graded right $S$-module homomorphisms
\[ (\psi :F \to G, \; \t: G_{\tw} \to F) \]
where $F, G$ are graded free right $S$-modules of rank $r$ for some $r \in \NN$ such that $\psi \t= \cdot f$
and $\t \psi_{\tw}= \cdot f$.
A morphism $(\psi, \t) \to (\psi', \t')$ of twisted graded right matrix factorizations is a pair $(\mu_G, \mu_F)$ of graded right $S$-module homomorphisms such that the diagram
\[\xymatrix@R=2pc@C=3pc{
F \ar[d]_{\mu_F} \ar[r]^{\psi} &G \ar[d]^{\mu_G} \\
F' \ar[r]^{\psi'} &G'
}\]
commutes. We denote by $\TMF_S^{\ZZ}(f)$ the category of twisted graded right matrix factorizations.
\end{definition}
\begin{remark} \label{rem.tmf}
The comment in \cite [Section 1]{CCKM} is misleading. The tensor algebra $T:=T(V)$ of an $n$-dimensional vector space $V$ over $k$ is a locally finite connected graded algebra. The minimal free resolution of $k$ over $T$ is given by $0\to T(-1)^n\to T\to k\to 0$, so, in particular, $T(-1)^n\to T$ is injective; however, if $n>1$, then $\rank T(-1)^n=n>\rank T=1$.
For this reason, it is unclear whether $(\psi, \t)\in \TMF_S^{\ZZ}(f)$ implies $\rank F=\rank G$ in general, as claimed in \cite [Section 2]{CCKM}, so we impose the condition $\rank F=\rank G$ in the definition of a twisted graded matrix factorization in this paper. (This problem will be avoided by assuming that $S$ is a graded quotient algebra of a right noetherian connected graded regular algebra in Section 6.)
\end{remark}
\begin{proposition} \label{prop.ctmf}
If $S$ is a locally finite connected graded algebra, and $f \in S_d$ is a regular normal element,
then $\NMF_S^{\ZZ}(f) \cong \TMF_S^{\ZZ}(f)$.
\end{proposition}
\begin{proof}
If $(\psi :F \to G, \; \t: G_{\tw} \to F)\in \TMF_S^{\ZZ}(f)$ such that $\rank F = \rank G=r$,
then we have the following commutative diagram
\[\xymatrix@R=2.25pc@C=5pc{
G_{\tw} \ar[d]_{\nu}^{\cong} \ar[r]^{\t} &F \ar[r]^{\psi} \ar@{=}[d] & G \ar@{=}[d]\\
G(-d)\ar[r]^{\t \nu^{-1}} &F \ar[r]^{\psi} &G.
}\]
Put $\phi^0:= \psi$ and $\phi^1:= \t \nu^{-1}$. For $x \in G(-d)$, it follows that $\phi^0\phi^1(x) = \psi\t \nu^{-1}(x) = \nu^{-1}(x)f = fx$,
so $\phi^0\phi^1 = f\cdot$. By Theorem \ref{thm.nmf} (2), we obtain a unique $\phi_{(\psi,\t)} := \{ \phi^i\}_{i \in \ZZ}\in \NMF_S^{\ZZ}(f)$ extending $\phi^0, \phi^1$.
By Theorem \ref{thm.nmf} (4), we see that a morphism $(\psi, \t) \to (\psi', \t')$ in $\TMF_S^{\ZZ}(f)$ induces a unique morphism $\phi_{(\psi,\t)} \to \phi_{(\psi',\t')}$ in $\NMF_S^{\ZZ}(f)$. Hence this construction defines a functor ${\mathfrak F}:\TMF_S^{\ZZ}(f) \to \NMF_S^{\ZZ}(f)$.
Conversely, if $\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\in \NMF_S^{\ZZ}(f)$ is of rank $r$, then we have the following commutative diagram
\[\xymatrix@R=2.25pc@C=5pc{
F(-d) \ar[d]_{\nu^{-1}}^{\cong} \ar[r]^{\phi^2} &G(-d) \ar[d]_{\nu^{-1}}^{\cong} \ar[r]^{\phi^1} &F \ar[r]^{\phi^0} \ar@{=}[d] & G \ar@{=}[d]\\
F_{\tw} \ar[r]^{\phi^0_{\tw}} &G_{\tw}\ar[r]^{\phi^1 \nu} &F \ar[r]^{\phi^0} &G.
}\]
where $G := F^0, F:=F^1$. In fact, for $x \in F(-d)$, we have $\nu^{-1}\phi^2(x) = \nu^{-1}(\nu(\phi^0)(x)) = \nu^{-1}(\nu(\Phi^0)x) = \Phi^0 \nu^{-1}(x) =
\phi^0_{\tw}\nu^{-1}(x)$ by Theorem \ref{thm.nmf} (1).
Put $\psi:= \phi^0$ and $\t:= \phi^1 \nu$.
For $y \in G_{\tw}, x \in F_{\tw}$, it follows that $\psi\t(y) = \phi^0\phi^1\nu (y) = f\nu(y) =yf$ and $\t\psi_{\tw}(x)= \phi^1 \nu \phi^0_{\tw}(x)= \phi^1\phi^2 \nu(x)= f\nu(x) =xf$,
so $\psi \t= \cdot f$ and $\t\psi_{\tw}= \cdot f$. Thus we obtain $(\psi,\t)\in \TMF_S^{\ZZ}(f)$.
This construction defines a functor ${\mathfrak G}:\NMF_S^{\ZZ}(f) \to \TMF_S^{\ZZ}(f)$, which is an inverse to $\mathfrak F$.
\end{proof}
\begin{remark}
The category $\NMF_S^{\ZZ}(f)$ is not abelian in general.
Presumably, the correct statement of \cite [Proposition 3.1]{CCKM} is ``$\TMF_S^{\ZZ}(f)$ is an additive category" instead of an abelian category.
\end{remark}
Let $S$ be a locally finite connected graded algebra, $f \in S_d$ a regular normal element, and $A=S/(f)$.
By \cite[Proposition 2.4]{CCKM}, for $(\psi :F \to G, \; \t: G_{\tw} \to F)\in \TMF_S^{\ZZ}(f)$, we have the complex $C'(\psi, \t)$ of graded right $A$-modules defined by
\[\xymatrix@R=2pc@C=2pc{
\cdots \ar[r]^(0.3){\overline{\t_{\tw^{i+1}}}} &\overline{F_{\tw^{i+1}}} \ar[r]^(0.5){\overline{\psi_{\tw^{i+1}}}} &\overline{G_{\tw^{i+1}}} \ar[r]^(0.6){\overline{\t_{\tw^i}}} &\overline{F_{\tw^i}} \ar[r]^{\overline{\psi_{\tw^i}}} &\overline{G_{\tw^i}} \ar[r] &\cdots.
}\]
\begin{proposition} \label{prop.tmf}
Let $S$ be a locally finite connected graded algebra, and $f \in S_d$ a regular normal element.
For every $\phi\in \NMF_S^{\ZZ}(f)$, there exists $(\phi^0, \phi^1\nu)\in \TMF_S^{\ZZ}(f)$ such that $C(\phi) \cong C'(\phi^0, \phi^1\nu)$.
\end{proposition}
\begin{proof}
By the proof of Proposition \ref{prop.ctmf}, $(\phi^0, \phi^1\nu)\in \TMF_S^{\ZZ}(f)$. Moreover, we can check that
\[\xymatrix@R=2.25pc@C=5pc{
F^{2i+2} \ar[d]_{\nu^{-i-1}}^{\cong} \ar[r]^{\phi^{2i+1}} &F^{2i+1} \ar[d]_{\nu^{-i}}^{\cong} \ar[r]^{\phi^{2i}} &F^{2i} \ar[d]_{\nu^{-i}}^{\cong}\\
G_{\tw^{i+1}} \ar[r]^{(\phi^1 \nu)_{\tw^{i}}} &F_{\tw^{i}}\ar[r]^{(\phi^0)_{\tw^{i}}} &G_{\tw^{i}}
}\]
commutes for every $i \in \ZZ$,
so $C(\phi) \cong C'(\phi^0, \phi^1\nu)$.
\end{proof}
\begin{definition} Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) element.
The period of a noncommutative (graded) right matrix factorization $\phi=\{\phi^i\}_{i\in \ZZ}$ of $f$ over $S$, denoted by $|\phi|$, is defined to be
the smallest positive integer $\ell\in \NN^+$ such that $\Coker (\overline {\phi^{\ell}})\cong \Coker (\overline {\phi^0})$
($\Coker (\overline {\phi^{\ell}})\cong \Coker (\overline {\phi^0})(-m)$ for some $m\in \ZZ$).
\end{definition}
The lemma below follows from Theorem \ref{thm.nmf} (1); it also follows from Proposition \ref{prop.tmf} combined with \cite [Proposition 2.12]{CCKM}. (See Theorem \ref{thm.mot} (1).)
\begin{lemma}
Let $S$ be a (graded) ring and $f\in S$ a (homogeneous) regular normal element.
If $\phi$ is a noncommutative (graded) right matrix factorization of $f$ over $S$, then $|\phi|\leq 2|\sf|$. In particular, if $f$ is central, then $|\phi|$ is 1 or 2.
\end{lemma}
\begin{example} Let $S=k\<x, y\>/(x^2, y^2)$ with $\deg x=\deg y=1$, $f=\a xy+yx\in S_2$, and $A=S/(f)$.
Since $H_S(t)=(1+t)/(1-t)$ and $H_{S/(f)}(t)=(1+t)^2$,
$$\begin{CD} 0 @>>> S(-2) @>f\cdot >> S @>>> S/(f) @>>> 0 \end{CD}$$
is an exact sequence in $\grmod S$ and
$$\begin{CD} 0 @>>> S(-2) @>\cdot f>> S @>>> S/(f) @>>> 0 \end{CD}$$
is an exact sequence in $\grmod S^o$, so $f\in S_2$ is a regular element.
Since
\begin{align*}
& xf=x(\a xy+yx)=xyx=(\a xy+yx)\a ^{-1}x=f\a ^{-1}x, \\
& yf=y(\a xy+yx)=\a yxy=(\a xy+yx)\a y=f\a y,
\end{align*}
$f\in S_2$ is a normal element with the normalizing automorphism $\sf$ given by $\sf(x)=\a^{-1}x, \sf(y)=\a y$.
For
$0\neq a, b\in k$,
$\psi=\{\psi^i\}_{i\in \ZZ}\in \NMF^{\ZZ}_S(f)$ defined by $\Psi^{2i}=\a^{-i}ax+\a ^iby, \Psi^{2i+1}=\a^{-i}b^{-1}x+\a ^{i+1}a^{-1}y$
has the property $\Psi^{i}\Psi^{i+1}=f$
by Example \ref{ex.2.3}, and it is easy to see that $\psi ^{i+2}=\sf(\psi ^i)(-2)$ for every $i\in \ZZ$. To compute $|\psi|$, we use $\phi=\{\phi^i\}_{i\in \ZZ}\in \NMF^{\ZZ}_S(f)$ defined by $\Phi^{i}=ax+\a ^iby$, which is isomorphic to $\psi$ by Example \ref{ex.2.3}. Then it is easy to see that $|\psi|=|\phi|=|\a|=|\sf|$, where $|\a|$ denotes the order of $\a$ in $k^{\times}$.
\end{example}
\section{Totally Reflexive Modules}
\begin{definition} Let $A$ be a (graded) ring.
A (graded) right $A$-module $M$ is called totally reflexive if
\begin{enumerate}
\item{} $\Ext^i_A(M, A)=0$ for all $i\geq 1$,
\item{} $\Ext^i_{A^o}(\Hom_A(M, A), A)=0$ for all $i\geq 1$, and
\item{} the natural biduality map $M\to \Hom_{A^o}(\Hom_A(M, A), A)$ is an isomorphism
\end{enumerate}
(that is, $\RHom_A(M, A)\cong \Hom_A(M, A)$ and $\RHom_{A^o}(\Hom_A(M, A), A)\cong M$).
The full subcategory of $\mod A$ consisting of totally reflexive modules is denoted by $\TR(A)$.
(The full subcategory of $\grmod A$ consisting of graded totally reflexive modules is denoted by $\TR^{\ZZ}(A)$.)
\end{definition}
\begin{remark}
A totally reflexive module is called a module of G-dimension zero or a Gorenstein-projective module in some literature.
\end{remark}
We recall some basic properties of totally reflexive modules.
\begin{lemma} \label{lem.tr}
Let $A$ be a ring.
\begin{enumerate}
\item{} If $P\in \mod A$ is
projective, then $P\in \TR(A)$.
\item{} If $M$ is a totally reflexive right $A$-module, then $M^*:=\Hom_A(M, A)$ is a totally reflexive left $A$-module.
In particular, if $A$ is left noetherian, then $M\in \TR(A)$ implies $M^*\in \TR(A^o)$.
\item{} If $M\in \TR(A)$, then $\Omega M$ is a totally reflexive right $A$-module.
In particular, if $A$ is right noetherian, then $M\in \TR(A)$ implies $\Omega ^iM\in \TR(A)$ for every $i\in \NN$ (cf. \cite[Lemma 2.2]{AM}).
\end{enumerate}
\end{lemma}
\begin{lemma} \label{lem.ev}
Let $A$ be a right noetherian ring, and $M\in \mod A$. Suppose that either $\pd(M)<\infty$ or $\id_{A^o}A<\infty$. Then $M\in \TR(A)$ if and only if $\Ext^i_A(M, A)=0$ for all $i\geq 1$.
\end{lemma}
\begin{remark}
If $A$ is a noetherian AS-Gorenstein algebra, then a finitely generated totally reflexive graded module is the same as a finitely generated maximal Cohen-Macaulay graded module, that is, $\TR^{\ZZ}(A)=\CM^{\ZZ}(A)$.
\end{remark}
Here is another characterization of a totally reflexive module.
\begin{definition}
Let $A$ be a (graded) ring.
A complete resolution of a finitely generated (graded) right $A$-module $M$ is an infinite exact sequence
$$\begin{CD}
\cdots @>d^{i+2}>> P^{i+2} @>d^{i+1}>> P^{i+1} @>d^i>> P^i @>d^{i-1}>> P^{i-1} @>d^{i-2}>> \cdots \end{CD}$$
where $P^i$ are finitely generated (graded) projective right $A$-modules such that $\Coker d^0\cong M$ and
$$\begin{CD}
\cdots @<(d^{i+2})^*<< (P^{i+2})^* @<(d^{i+1})^*<< (P^{i+1})^* @<(d^i)^*<< (P^i)^* @<(d^{i-1})^*<< (P^{i-1})^* @<(d^{i-2})^*<< \cdots \end{CD}$$
is an exact sequence.
\end{definition}
\begin{lemma} [{\cite[Theorem 3.1]{AM}}]
\label{lem.comp}
Let $A$ be a noetherian ring.
Then $M\in \TR(A)$ if and only if $M$ has a complete resolution.
\end{lemma}
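To illustrate with the simplest (standard) example: let $A=k[x]/(x^2)$ and $M=k=A/(x)$. Then
$$\begin{CD} \cdots @>x\cdot >> A @>x\cdot >> A @>x\cdot >> A @>x\cdot >> \cdots \end{CD}$$
is a complete resolution of $k$: it is exact since $\Ker (x\cdot)=\Im (x\cdot)=(x)$, and applying $\Hom_A(-, A)$ yields a complex of the same form, so $k\in \TR(A)$. This complete resolution is $C (\phi)$ for the noncommutative matrix factorization $\phi$ of $f=x^2$ over $S=k[x]$ given by $\phi^i=x\cdot$ (cf. Lemma \ref{lem.cok} below).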
Let $S$ be a ring, $f\in S$, and $A=S/(f)$. For $\phi\in \NMF_S(f)$, we define $\Coker \phi:=\Coker \overline{\phi^0}\in \mod A$.
\begin{lemma} \label{lem.cvp}
Let $S$ be a ring, $f\in S$ a regular normal element, and $A=S/(f)$. For $\phi\in \NMF_S(f)$, $\Coker {\phi^0}$ coincides with $\Coker \phi$ viewed as a right $S$-module.
\end{lemma}
\begin{proof} By the commutative diagram
\[\xymatrix@R=1.5pc@C=3pc{
F^1 \ar[d] \ar[r]^{\phi^0} &F^0 \ar[d] \ar[r]^(0.4){\e} &\Coker \phi^0 \ar[r] &0\\
\overline {F^1} \ar[d] \ar[r]^{\overline {\phi^0}} &\overline {F^0} \ar[d] \ar[r]^(0.4){\overline \e} &\Coker \overline {\phi^0} \ar[r] &0\\
0 &0
}\]
of right $S$-modules, it is standard that the map $\Coker \phi^0\to \Coker \phi; \; \e(a)\mapsto \overline {\e}(\overline a)$, where $a\in F^0$, is a well-defined surjective right $S$-module homomorphism.
For $\e(a)\in \Coker \phi^0$, if $\overline {\e}(\overline a)=0$, then there exists $b\in F^1$ such that $\overline {\phi^0(b)}=\overline {\phi^0}(\overline b)=\overline a$.
Since $f$ is normal, $(f)=SfS=fS$, so there exists $u\in S^r\cong F^2$ such that $a-\phi^0(b)=fu$. It follows that $a-\phi^0(b)\in \Im (f\cdot)=\Im (\phi^0\phi^1)\subset \Im \phi^0$, so $a\in \Im \phi^0$, hence $\e(a)=0$.
\end{proof}
\begin{lemma} \label{lem.cok}
Let $S$ be a ring, $f\in S$ a regular normal element, and $A=S/(f)$.
If $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\in \NMF_S(f)$,
then
$C (\phi)$ is a complete resolution of $\Coker \phi$ in $\mod A$.
\end{lemma}
\begin{proof}
If $\overline v\in \Ker (\overline {\phi^i})$ where $v\in F^{i+1}\cong S^r$, then $\overline {\phi ^i(v)}=\overline {\phi ^i}(\overline v)=\overline 0$.
Since $f$ is normal, there exists $u\in S^r\cong F^{i}$ such that $\phi ^i(v)=fu$.
Since $f$ is regular and
$$f(v-\phi ^{i+1}(u))=\phi ^{i-1}\phi ^i(v-\phi ^{i+1}(u))=\phi ^{i-1}(\phi ^i(v)-\phi ^i\phi ^{i+1}(u))=\phi ^{i-1}(\phi ^i(v)-fu)=0,$$
it follows that $v=\phi ^{i+1}(u)$, so $\overline v\in \Im (\overline {\phi ^{i+1}})$ for every $i\in \ZZ$, hence $C (\phi)$ is an exact sequence.
In the diagram
\[\xymatrix@R=1.5pc@C=5.5pc{
\overline {\Hom_S(F^i, S)} \ar[d]_{\cong} \ar[r]^{\overline{\Hom_S(\phi^i, S)}} &\overline{\Hom_S(F^{i+1}, S)} \ar[d]^{\cong} \\
A^r \ar[d]_{\cong} &A^r \ar[d]^{\cong}\\
\Hom_A(\overline {F^i}, A) \ar[r]^{\Hom_A(\overline {\phi ^i}, A)} &\Hom_A(\overline {F^{i+1}}, A),
}\]
both horizontal maps are given by $\cdot \overline {\Phi^i}$, so the diagram commutes, hence we have $\Hom_A(C (\phi), A)\cong C (\Hom_S(\phi, S))$.
By Lemma \ref{lem.dmf}, $\Hom_S(\phi, S)$ is a noncommutative left matrix factorization of $f$ over $S$, so $\Hom_A(C (\phi), A)\cong C (\Hom_S(\phi, S))$ is an exact sequence, hence $C (\phi)$ is a complete resolution of $\Coker \phi$ in $\mod A$.
\end{proof}
\begin{proposition} \label{q.fun}
Let $S$ be a noetherian ring, $f\in S$ a regular normal element, and $A=S/(f)$.
\begin{enumerate}
\item{} $\Coker :\NMF_S(f)\to \TR (A)$
is a functor.
\item{} $\Coker (\overline {\phi^i})\cong \Omega ^i\Coker \phi\in \TR(A)$ for every $i\in \ZZ$.
\end{enumerate}
\end{proposition}
\begin{proof} This follows from Lemma \ref{lem.comp} and Lemma \ref{lem.cok}.
\end{proof}
\begin{definition} For a (graded) ring $A$ and a (graded) right $A$-module $M$, we define
$e(M):=\sup\{i\mid \Ext_A^i(M, A)\neq 0\}$.
\end{definition}
\begin{remark} Let $A$ be a ring.
\begin{enumerate}
\item{} For a right $A$-module $M$, we have $e(M)\in \NN \cup \{-\infty\}$ where $e(M)=-\infty$ if and only if $\Ext_A^i(M, A)=0$ for all $i$.
\item{} If $0\neq M\in \TR(A)$, then $e_A(M)=0$ and $e_{A^o}(M^*)=0$ by definition.
\item{} Suppose that $A$ is right noetherian and $M\in \mod A$. If either $\pd (M) < \infty$ or $\id_{A^o}(A)<\infty$,
then $e(M)=-\infty$ if and only if $M=0$, and $e(M)=0$ if and only if $0\neq M\in \TR(A)$ by Lemma \ref{lem.ev}.
\end{enumerate}
\end{remark}
\begin{lemma} \label{lem.jAS}
Let $A$ be a right noetherian ring and $0\neq M\in \mod A$.
If $\pd (M)<\infty$ (e.g. if $A$ is regular), then $\pd (M)=e(M)$.
In particular, $M\in \mod A$ is a projective module if and only if $\pd (M)<\infty$ and $M\in \TR(A)$.
\end{lemma}
\begin{proof}
Since $M \neq 0$ and $\pd (M)<\infty$, we clearly have $d:=\pd (M)\geq e(M)=: e \geq 0$.
Suppose that $d>e$.
Put $K:=\Omega ^{d-1}M\in \mod A$; then $\pd (K)=1$.
Let $0\to G\to F\to K\to 0$ be a projective resolution of $K$ in $\mod A$. Since $G$ is finitely generated projective and $d>e$,
$\Ext_A^1(K, G)\cong \Ext_A^d(M, G)=0$, so the exact sequence $0\to G\to F\to K\to 0$ splits.
It follows that $K$ is projective, which is a contradiction, hence $d=e$.
\end{proof}
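For instance, for $A=k[x]$ and $M=k=A/(x)$, the free resolution
$$\begin{CD} 0 @>>> A @>x\cdot >> A @>>> k @>>> 0 \end{CD}$$
gives $\Hom_A(k, A)=0$ and $\Ext^1_A(k, A)\cong k\neq 0$, so $e(k)=1=\pd (k)$, as the lemma predicts.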
\begin{definition}
Let $S$ be a (graded) ring, $f\in S$ a (homogeneous) element, and $A=S/(f)$. We define
\begin{align*}
&\TR_S(A):=\{M\in \TR(A)\mid \pd _S(M)<\infty\}\\
(&\TR_S^{\ZZ}(A):=\{M\in \TR^{\ZZ}(A)\mid \pd _S(M)<\infty\}).
\end{align*}
\end{definition}
Note that if $S$ is a (graded) regular ring, then $\TR_S(A)=\TR(A)$ ($\TR_S^{\ZZ}(A)=\TR^{\ZZ}(A)$).
\begin{lemma} \label{lem.pd}
Let $S$ be a right noetherian ring, $f\in S$ a regular normal element, and $A=S/(f)$.
For $0\neq M\in \mod A$, if $\pd_S(M)<\infty$ (e.g. if $S$ is regular), then $\pd_S(M)=e_A(M)+1$.
In particular, if $0\neq M\in \TR_S(A)$,
then $\pd_S(M)=1$.
\end{lemma}
\begin{proof} An exact sequence
\[
\xymatrix@R=1.5pc@C=2pc{
0 \ar[r] &{_{\sf}}S \ar[r]^{f\cdot} &S \ar[r] &A \ar[r] &0
}
\]
in $\mod S^e$ induces the following commutative diagram
\[
\xymatrix@R=1.5pc@C=2pc{
0 \ar[r] &\Hom_S(A,S) \ar[r] &\Hom_S(S,S) \ar_{\cong}[d] \ar[rr]^{\Hom_S(f\cdot,S)} &&\Hom_S({_{\sf}}S,S) \ar^{\cong}[d] \ar[r] &\Ext^1_S(A,S) \ar[r] &0\\
&0 \ar[r] &S \ar[rr]^{\cdot f} &&S_{\sf} \ar[r]& A_{\sf} \ar[r] &0
}
\]
in $\mod S^e$, so $\Hom_S(A, S)=0$ and $\Ext^1_S(A, S)\cong A_{\sf}$ as $S$-$S$ bimodules. Note that $\Ext^1_S(A, S)$ has a right $A$-module structure induced by the left action on $A$, which recovers the original right $S$-module structure on $\Ext^1_S(A, S)$ via the map $S\to A$.
On the other hand, since $\nu(f)=f$, we see that $\nu$ induces $\overline {\nu}\in \Aut A$, so $A_{\nu}$ has a right $A$-module structure by the identification $A_{\sf}=A_{\overline {\nu}}$, which recovers the original right $S$-module structure on $A_{\nu}$ via the map $S\to A$. Hence $\Ext^1_S(A, S)\cong A_{\sf}=A_{\overline {\nu}}\cong A$ as right $A$-modules.
Moreover, since $\pd _S(A)=1$, we have $\Ext_S^i(A, S)=0$ for every $i\geq 2$, so
\begin{align*}
\RHom_S(M, S) & \cong\RHom_S(M\lotimes _AA, S)\cong \RHom_A(M, \RHom_S(A, S)) \\
& \cong \RHom_A(M, A[-1])\cong \RHom_A(M, A)[-1].
\end{align*}
Since $S$ is right noetherian, $\pd_S(M)<\infty$ and $e_S(M)\geq 0$, we obtain $\pd_S(M)=e_S(M)=e_A(M)+1$ by Lemma \ref{lem.jAS}.
\end{proof}
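In the simplest case (continuing the example after Lemma \ref{lem.comp}): for $S=k[x]$, $f=x^2$ and $A=k[x]/(x^2)$, we have $\pd_S(k)=1$, while $k\in \TR(A)$ gives $e_A(k)=0$, so indeed $\pd_S(k)=e_A(k)+1$.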
\begin{remark}
With obvious changes, all statements in this section also hold true in the graded case.
\end{remark}
\section{Factor Categories}
In this section, we generalize the graded version of Eisenbud's matrix factorization theorem (Theorem \ref{thm.mot} (2), (3)) to noncommutative, non-hypersurface algebras.
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra and $M\in \grmod S$.
By \cite[Theorem 2.4]{SZ}, $H_M(t)$ is a rational function and $\GKdim M\in \NN$ is finite, which is given by the order of the pole of $H_M(t)$ at $t=1$.
It follows that $H_M(t)=q_M(t)/(1-t)^{\GKdim M}$ where $q_M(t)$ is a rational function such that $0\neq q_M(1)\in \QQ$ is finite.
We define
$$\hrank M=\lim _{t\to 1}H_M(t)/H_S(t).$$
If $F=\bigoplus _{i=1}^rS(-m_i)$ is a graded free right $S$-module of rank $r$, then $H_F(t)=\sum _{i=1}^rt^{m_i}H_S(t)$, so $\hrank F=r=\rank F$.
On the other hand, since $H_M(t)/H_S(t)=(1-t)^{\GKdim S-\GKdim M}q_M(t)/q_S(t)$, we see that $\hrank M=0$ if and only if $\GKdim M<\GKdim S$.
If $0\to L\to M\to N\to 0$ is an exact sequence in $\grmod S$, then $H_M(t)=H_L(t)+H_N(t)$, so $\hrank M=\hrank L+\hrank N$.
(Thus the problem of Remark \ref{rem.tmf} can be avoided.)
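For example, if $S=k[x_1, \dots, x_n]$ and $M=S/(x_1)$, then
$$\hrank M=\lim_{t\to 1}\frac{H_M(t)}{H_S(t)}=\lim_{t\to 1}(1-t)=0,$$
matching $\GKdim M=n-1<n=\GKdim S$, while $\hrank S(-m)=\lim_{t\to 1}t^m=1$ for every $m\in \ZZ$.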
\begin{theorem} \label{thm.m1}
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_d$ a regular normal element, and $A=S/(f)$.
Then the image of the functor $\Coker:\NMF^{\ZZ}_S(f)\to \TR^{\ZZ}(A)$ is $\TR^{\ZZ}_S(A)$.
\end{theorem}
\begin{proof}
For $\phi\in \NMF_S^{\ZZ}(f)$, $\phi^0:F^1\to F^0$ is injective, so
$$\begin{CD} 0 @>>> F^1 @>\phi^0>> F^0 @>>> \Coker \phi^0 @>>> 0\end{CD}$$
is a free resolution of $\Coker \phi^0$ in $\grmod S$. Since $\Coker \phi^0$ is the same as $\Coker \phi:=\Coker \overline {\phi^0}\in \grmod A$ viewed as a graded right $S$-module by Lemma \ref{lem.cvp}, $\pd_S(\Coker \phi)\leq 1$, so $\Coker \phi\in \TR_S^{\ZZ}(A)$. (Note that if $\Coker \phi\neq 0$, then $\pd_S(\Coker \phi)=1$.)
Conversely, if $0\neq M\in \TR^{\ZZ}_S(A)$,
then there exists a graded free right $S$-module $F$ of finite rank such that $\overline {\e}:\overline {F}\to M\to 0$ is an exact sequence in $\grmod A$. Since $\pd_S(M)=1$ by Lemma \ref{lem.pd}, there exists a graded free right $S$-module $G$ of finite rank such that
$$\begin{CD} 0 @>>> G @>\phi>> F @>\e>> M @>>> 0\end{CD}$$
is a free resolution of $M$ in $\grmod S$.
Since $S$ is a graded quotient algebra of a noetherian connected graded regular algebra, $H_S(t)$ is a rational function and $H_A(t)=(1-t^d)H_S(t)$, so $\GKdim M\leq \GKdim A=\GKdim S-1<\infty$.
It follows that $\hrank_S M=0$,
so $r:=\rank F = \hrank_S F =\hrank_S G =\rank G$. Since $\e$ factors as $F\to \overline {F}\to M$, we have $fu\in \Ker \e=\Im \phi$ for every $u\in F$. Since $\phi$ is injective, there exists a unique $\psi (u)\in G$ such that $\phi(\psi(u))=fu$. For $u, v\in F$ and $a, b\in S$,
$$\phi(\psi(ua+vb)) =f(ua+vb)=fua+fvb=\phi(\psi(u))a+\phi(\psi(v))b=\phi(\psi(u)a+\psi(v)b),$$
so $\psi:F(-d)\to G$ is a graded right $S$-module homomorphism such that $\phi\psi=f\cdot:F(-d)\to F$. By Theorem \ref{thm.nmf} (2), there exists a unique $\tilde \phi\in \NMF^{\ZZ}_S(f)$ such that $\tilde \phi^0=\phi$ and $\tilde \phi^1=\psi$, so that $\Coker \tilde \phi =\Coker \overline {\tilde \phi^0}\cong M$.
\end{proof}
\begin{proposition} \label{prop.mfr}
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_d$ a regular normal element, and $A=S/(f)$.
If $M\in \TR^{\ZZ}_S(A)$ has no free summand, then there exists
$\phi
\in \NMF_S^{\ZZ}(f)$ such that $C (\phi)^{\geq 0}$ is a minimal free resolution of $M$.
\end{proposition}
\begin{proof}
By Theorem \ref{thm.m1}, there exists $\phi\in \NMF^{\ZZ}_S(f)$ such that $\Coker \phi\cong M$. By Proposition \ref{prop.tmf}, $(\phi^0, \phi^1\nu)\in \TMF_S^{\ZZ}(f)$ and $C'(\phi^0, \phi^1\nu)\cong C(\phi)$. Since $M$ has no free summand, $C(\phi)^{\geq 0}\cong C' (\phi^0, \phi^1\nu)^{\geq 0}$ is a minimal free resolution of $M$ by \cite[Proposition 2.9]{CCKM}.
\end{proof}
Let $\cC$ be an additive category and $\cF$ a set of objects of $\cC$.
Then the factor category $\cC/\cF$ has $\Obj(\cC/\cF) =\Obj(\cC)$ and $\Hom_{\cC/\cF}(M, N) = \Hom_{\cC}(M, N)/\cF(M,N)$ for $M, N\in \Obj(\cC/\cF)=\Obj(\cC)$,
where $\cF(M,N)$ is the subgroup consisting of all morphisms from $M$ to $N$ that factor through objects in $\cF$. Note that $\cC/\cF$ is also an additive category.
\begin{definition} \label{def.trivnmf}
Let $S$ be a ring and $f\in S$. For a free right $S$-module $F$, we define $\phi_F, {_F\phi}\in \NMF_S(f)$ by
$$\begin{array}{lll}
& \phi_F^{2i}=\id_F:F\to F, & \phi_F^{2i+1}=f\cdot : F\to F, \\
& {_F\phi}^{2i}=f\cdot:F\to F, & {_F\phi}^{2i+1}=\id _F: F\to F.
\end{array}$$
We define
$\cF :=\{\phi_F\mid F\in \mod S \; \textnormal{is free} \}$,
$\cG :=\{\phi_F\oplus {_G\phi} \mid F, G\in \mod S \; \textnormal{are free} \}$, and
$\uNMF_S(f):=\NMF_S(f)/\cG$.
Let $S$ be a graded ring and $f\in S_d$. For a graded free right $S$-module $F$,
we define $\phi_F, {_F\phi}\in \NMF_S^{\ZZ}(f)$ by
$$\begin{array}{lll}
& \phi_F^{2i}=\id_F:F\to F, & \phi_F^{2i+1}=f\cdot : F(-d)\to F, \\
& {_F\phi}^{2i}=f\cdot: F(-d)\to F, & {_F\phi}^{2i+1}=\id _F: F\to F.
\end{array}$$
We define
$\cF :=\{\phi_F\mid F\in \grmod S \; \textnormal{is free} \}$,
$\cG :=\{\phi_F\oplus {_G\phi} \mid F, G\in \grmod S \; \textnormal{are free} \}$, and
$\uNMF_S^{\ZZ}(f):=\NMF_S^{\ZZ}(f)/\cG$.
\end{definition}
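A quick check that $\phi_F$ and ${_F\phi}$ are indeed objects of $\NMF_S(f)$ ($\NMF_S^{\ZZ}(f)$): the two compositions of consecutive maps are $\id_F\circ (f\cdot)=f\cdot$ and $(f\cdot)\circ \id_F=f\cdot$. Moreover,
$$\Coker \phi_F=\Coker (\id_{\overline F})=0 \quad \textrm{and}\quad \Coker\, {_F\phi}=\Coker (\overline{f\cdot})=\overline F$$
since $\overline f=0$ in $A$; both computations are used repeatedly below.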
\begin{proposition}
\label{prop.m2}
Let $S$ be a ring, $f\in S$ and $A=S/(f)$. Then the functor $\Coker :\NMF_S(f)\to \mod A$ induces a fully faithful functor $\NMF_S(f)/\cF\to \mod A$.
A similar result holds in the graded case.
\end{proposition}
\begin{proof}
Since $\Coker \phi_F=0$, the functor $\Coker :\NMF_S(f)\to \mod A$ induces a functor $\NMF_S(f)/\cF\to \mod A$.
For any $\phi, \psi\in \NMF_S(f)$, it is enough to show that $\Coker :\Hom_{\NMF_S(f)}(\phi, \psi)\to \Hom_A(\Coker \phi, \Coker \psi)$ induces an isomorphism
$$\Hom_{\NMF_S(f)}(\phi, \psi)/\cF(\phi, \psi)\to \Hom_A(\Coker \phi, \Coker \psi).$$
Every $\a\in \Hom_A(\Coker \phi, \Coker \psi)$ can be viewed as a right $S$-module homomorphism $\a:\Coker \phi^0\to \Coker \psi^0$, which lifts to a commutative diagram
\[\xymatrix@R=2pc@C=2.5pc{
0 \ar[r] &F^{1} \ar[d]_{\mu^1} \ar[r]^{\phi^0} &F^0 \ar[r] \ar[d]^{\mu^0} &\Coker \phi \ar[r] \ar[d]^{\a}& 0 \\
0 \ar[r] &G^{1} \ar[r]^{\psi^0} &G^0 \ar[r] &\Coker \psi \ar[r] & 0.
}\]
By Theorem \ref{thm.nmf} (4), there exists $\mu\in \Hom_{\NMF_S(f)}(\phi, \psi)$ such that $\Coker \mu=\a$.
If $\mu \in \cF(\phi, \psi)$ so that it factors through $\mu :\phi \to \phi_F\to \psi$ for some free right $S$-module $F$, then we have the following commutative diagram
\[\xymatrix@R=1.5pc@C=2.5pc{
0 \ar[r] &F^{1} \ar[d] \ar[r]^{\phi^0} &F^0 \ar[r] \ar[d] &\Coker \phi \ar[r] \ar[d]& 0 \\
0 \ar[r] &F \ar[d] \ar[r]^{\id_F} &F \ar[r] \ar[d] &\Coker \id_F = 0 \ar[r] \ar[d]& 0 \\
0 \ar[r] &G^{1} \ar[r]^{\psi^0} &G^0 \ar[r] &\Coker \psi \ar[r] & 0,
}\]
so $\Coker \mu=0$.
Conversely, if $\Coker \mu=0$ so that
\[\xymatrix@R=2pc@C=2.5pc{
0 \ar[r] &F^{1} \ar[d]_{\mu^1} \ar[r]^{\phi^0} &F^0 \ar[r] \ar[d]^{\mu^0} &\Coker \phi \ar[r] \ar[d]^{0}& 0 \\
0 \ar[r] &G^{1} \ar[r]^{\psi^0} &G^0 \ar[r] &\Coker \psi \ar[r] & 0,
}\]
then there exists a right $S$-module homomorphism $\mu':F^0\to G^1$ such that $\psi^0\mu'=\mu^0$.
Since we have a commutative diagram
\[\xymatrix@R=2pc@C=2.5pc{
0 \ar[r] &F^{1} \ar[d]_{\phi^0} \ar[r]^{\phi^0} &F^0 \ar[r] \ar[d]^{\id_{F^0}} &\Coker \phi \ar[r] \ar[d]& 0 \\
0 \ar[r] &F^0 \ar[d]_{\mu'} \ar[r]^{\id_{F^0}} &F^0 \ar[r] \ar[d]^{\mu^0} &\Coker \id_{F^0} = 0 \ar[r] \ar[d]& 0 \\
0 \ar[r] &G^{1} \ar[r]^{\psi^0} &G^0 \ar[r] &\Coker \psi \ar[r] & 0,
}\]
there exist morphisms $\phi\to \phi_F$ and $\phi_F\to \psi$ by Theorem \ref{thm.nmf} (4) (existence). Since $\psi^0\mu'\phi^0=\mu^0\phi^0=\psi^0\mu^1$ and $\psi^0$ is injective, $\mu'\phi^0=\mu^1$, so $\mu$ factors through $\mu :\phi \to \phi_{F^0}\to \psi$ by Theorem \ref{thm.nmf} (4) (uniqueness), hence the functor $\NMF_S(f)/\cF\to \mod A$ is fully faithful.
\end{proof}
The following two theorems are noncommutative graded versions of Eisenbud's matrix factorization theorem (Theorem \ref{thm.mot} (2), (3)).
\begin{theorem} \label{thm.m3}
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_d$ a regular normal element, and $A=S/(f)$.
Then the functor $\Coker:\NMF^{\ZZ}_S(f)\to \TR^{\ZZ}(A)$ induces an equivalence functor $\NMF_S^{\ZZ}(f)/\cF\to \TR^{\ZZ}_S(A)$.
In particular, if $S$ is a noetherian AS-regular algebra, then
$\NMF^{\ZZ}_S(f)/\cF\cong \TR_S^{\ZZ}(A)=\TR^{\ZZ}(A)=\CM^{\ZZ}(A)$.
\end{theorem}
\begin{proof}
It follows from Theorem \ref{thm.m1} and Proposition \ref{prop.m2}.
\end{proof}
Let $S$ be a graded algebra, $f\in S_d$ a regular normal element and $A=S/(f)$.
If $0\neq P\in \grmod A$ is free, then, as we have seen, $P\in \TR^{\ZZ}(A)$ and $\pd _S(P)=1$, so $P\in \TR_S^{\ZZ}(A)$.
We define
$\cP:=\{P\in \grmod A\mid P \; \textnormal{is free}\}$, and $\uTR_S^{\ZZ}(A):=\TR_S^{\ZZ}(A)/\cP$.
The following theorem is analogous to \cite [Theorem 5.8]{CCKM} in the case of noncommutative graded hypersurfaces.
\begin{theorem} \label{thm.m4}
If $S$ is a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_d$ is a regular normal element, and $A=S/(f)$,
then the functor $\Coker:\NMF^{\ZZ}_S(f)\to \TR^{\ZZ}(A)$ induces an equivalence functor $\underline {\Coker}:\uNMF^{\ZZ}_S(f)\to \underline {\TR}_S^{\ZZ}(A)$.
In particular, if $S$ is a noetherian AS-regular algebra, then
$\uNMF^{\ZZ}_S(f)\cong \underline {\TR}_S^{\ZZ}(A)
=\uTR^{\ZZ}(A)= \uCM^{\ZZ}(A)$.
\end{theorem}
\begin{proof} For every $\phi_F\oplus {_G\phi}\in \cG$, $\Coker (\phi_F\oplus{_G\phi})=\overline G\in \cP$.
On the other hand, suppose that $\Coker \phi=\overline F\in \cP$ where $F\in \grmod S$ is free.
Since
$$\begin{CD} 0 @>>> F(-d) @>f\cdot >> F @>>> \overline F @>>>0\end{CD}$$
is the minimal free resolution of $\overline F$ in $\grmod S$, we have a commutative diagram
\[\xymatrix@R=1.5pc@C=2.5pc{
&0 \ar[d]&0 \ar[d] &0 \ar[d]\\
0 \ar[r] &F(-d) \ar[d] \ar[r]^{f\cdot} &F \ar[r] \ar[d] &\overline F \ar[r] \ar[d]& 0 \\
0 \ar[r] &F^1 \ar[d] \ar[r]^{\phi^0} &F^0 \ar[r] \ar[d] &\Coker \phi \ar[r] \ar[d]& 0 \\
0 \ar[r] &G^1 \ar[d] \ar[r]^{\psi^0} &G^0 \ar[r] \ar[d] &0 \ar[r] \ar[d]& 0\\
&0 &0 &0
}\]
where the vertical sequences are split exact. Since $\psi^0:G^1\to G^0$ is an isomorphism, $\phi\cong {_F\phi}\oplus \phi_{G^0}\in \cG$. It follows that the equivalence functor $\NMF_S^{\ZZ}(f)/\cF\to \TR^{\ZZ}_S(A)$ of Theorem \ref{thm.m3} restricts to a bijection from $\cG/\cF$ to $\cP$, so
it induces an equivalence functor $\underline {\Coker}:\uNMF^{\ZZ}_S(f)\to \underline {\TR}_S^{\ZZ}(A)$.
\end{proof}
\begin{remark}
For $\phi\in \NMF_S(f)$, we define $\phi[1]\in \NMF_S(f)$ by $\phi[1]^i=\phi^{i+1}$ for every $i\in \ZZ$.
Note that $\phi_F[1]\cong {_F\phi}$ and ${_F\phi}[1]\cong \phi_F$.
In the above theorem, if $S$ is a noetherian AS-regular algebra, then $\uTR^{\ZZ}(A)=\uCM^{\ZZ}(A)$ has a structure of a triangulated category with the translation functor $\Omega ^{-1}$,
so $\uNMF_S^{\ZZ}(f)$ has a structure of a triangulated category with the translation functor $[-1]$ (i.e., $\{\phi^i\}_{i \in \ZZ}\mapsto \{\phi^{i-1}\}_{i \in \ZZ}$).
\end{remark}
\section{An Application to Skew Exterior Algebras}
In this last section, we will apply our theory of noncommutative matrix factorizations to skew exterior algebras,
which are far from being noncommutative hypersurfaces.
An application to noncommutative quadric hypersurfaces will be discussed in a subsequent paper.
Throughout this section, {\bf we assume that every graded algebra is finitely generated in degree 1 over $k$.}
\subsection{Co-point Modules}
\begin{definition}
Let $A$ be a graded algebra and $M\in \grmod A$.
\begin{enumerate}
\item{} We say that $M$ is a point module over $A$ if $M=M_0A$ and $H_M(t)=1/(1-t)$.
\item{} We say that $M$ is a co-point module over $A$ if $M$ has a linear resolution of the form
\[ \cdots \to A(-3) \to A(-2)\to A(-1)\to A\to M\to 0.\]
\end{enumerate}
\end{definition}
\begin{definition}
Let $A$ be a graded algebra. We say that $M\in \grmod A$ is an $r$-extension of (shifted) point modules if $M$ has a filtration
$$M=M_0\supset M_1\supset \cdots \supset M_r=0$$
such that $M_i/M_{i+1}\in \grmod A$ is a (shifted) point module for every $i=0, \dots, r-1$.
We can also similarly define an $r$-extension of (shifted) co-point modules.
\end{definition}
Write $A=k\<x_1, \dots, x_n\>/I$ where $I$ is a homogeneous ideal of $k\<x_1, \dots, x_n\>$ (and $\deg x_i=1$ for all $i=1, \dots, n$ as always assumed).
For a point $p=(a_1, \dots, a_n)\in \PP^{n-1}$, we define $N_p:=A/(a_1x_1+\cdots +a_nx_n)A\in \grmod A$.
Note that, for $p, q\in \PP^{n-1}$, $N_p\cong N_q$ if and only if $p=q$. If $N\in \grmod A$ is a co-point module, then $N\cong N_p$ for some $p\in \PP^{n-1}$.
Let $E:=\{p\in \PP^{n-1}\mid N_p \textnormal {\; is a co-point module over $A$}\}$. If $N$ is a co-point module, then $\Omega N(1)$ is also a co-point module, so there is a map $\t:E\to E$ defined by $\Omega N_p(1)\cong N_{\t(p)}$.
The pair $\cP^!(A):=(E, \t)$ is called the co-geometric pair of $A$ (see \cite {Mck}).
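For instance, let $A=k\<x, y\>/(x^2, y^2, xy+yx)$ be the exterior algebra in two variables. For $p=(a, b)\in \PP^1$, we have $(ax+by)^2=ab(xy+yx)=0$ in $A$, and a direct computation of kernels shows that
$$\begin{CD} \cdots @>(ax+by)\cdot >> A(-2) @>(ax+by)\cdot >> A(-1) @>(ax+by)\cdot >> A @>>> N_p @>>> 0 \end{CD}$$
is exact, so every $N_p$ is a co-point module and $\cP^!(A)=(\PP^1, \id)$ in this case.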
We denote by $\lin A$ the full subcategory of $\grmod A$ consisting of modules having linear resolutions.
For $M\in \grmod A$, we define $E(M):=\bigoplus _{i\in \NN}\Ext^i_A(M, k)$. If $A$ is a Koszul algebra, then $A^!:=E(k)=\bigoplus _{i\in \NN}\Ext^i_A(k, k)$ is also a Koszul algebra, and $E$ induces a duality functor $E:\lin A\to \lin A^!$.
\begin{lemma}[{\cite [Theorem 3.4]{Mck}}]
If $A$ is a Koszul algebra, then the duality functor $E:\lin A\to \lin A^!$ induces a bijection from the set of isomorphism classes of co-point modules over $A$ to the set of isomorphism classes of point modules having linear resolutions over $A^!$.
\end{lemma}
For a co-point module $N_p\in \lin A$ where $p\in E$, we denote by $M_p:=E(N_p)\in \lin A^!$ the corresponding point module.
\begin{definition} A noetherian $n$-dimensional AS-regular algebra $A$ is called a quantum polynomial algebra if
\begin{enumerate}
\item{} $H_A(t)=1/(1-t)^n$, and
\item{} $j(M)+\GKdim M=\GKdim A \;(=n)$ for every $0\neq M\in \grmod A$ where $j(M):=\inf\{i\in \NN\mid \Ext^i_A(M, A)\neq 0\}$.
\end{enumerate}
\end{definition}
\begin{example} \label{ex.qpa}
\begin{enumerate}
\item{} A (commutative) polynomial algebra is obviously a quantum polynomial algebra.
\item{} Every noetherian 3-dimensional quadratic AS-regular algebra is a quantum polynomial algebra
(see \cite[Corollary 6.2]{L}).
\item {} The skew polynomial algebra $T=k\<x_1, \dots, x_n\>/(\a_{ij}x_ix_j+x_jx_i)_{1\leq i, j\leq n, i\neq j}$ where $\a_{ij}\in k$ such that $\a_{ij}\a_{ji}=1$ for every $1\leq i, j\leq n, i\neq j$
is a quantum polynomial algebra by \cite[Lemma (\rnum{2}) on page 184]{LS}.
\end{enumerate}
\end{example}
\begin{remark} Every co-point module has a linear resolution by definition, but not every point module has a linear resolution even if $A$ is Koszul. However, if $A^!$ is a quantum polynomial algebra,
then $A$ and $A^!$ are Koszul, and every point module over $A^!$ has a linear resolution by \cite [Corollary 5.7]{Mck}, so the duality functor $E:\lin A\to \lin A^!$ induces a bijection from the set of isomorphism classes of co-point modules over $A$ to the set of isomorphism classes of point modules over $A^!$.
\end{remark}
\begin{lemma} \label{lem.liex}
The following categories are closed under extensions.
\begin{enumerate}
\item{} $\lin A$ for a
graded algebra $A$.
\item{} $\TR^{\ZZ}(A)$ for a
graded algebra $A$.
\item{} $\TR^{\ZZ}_S(A)$ for a graded algebra $S$, $f\in S_d$, and $A=S/(f)$.
\end{enumerate}
\end{lemma}
\begin{proof} Left to the reader; (1) follows from the Horseshoe Lemma, while (2) and (3) follow from the long exact sequences in $\Ext$ and the five lemma applied to the biduality maps.
\end{proof}
Let $S$ be a graded algebra, $f\in S_d$ a regular normal element, and $A=S/(f)$. We define
\begin{align*}
\NMF_S^0(f)&:= \{\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\in \NMF^{\ZZ}_S(f)\mid F^0\;\textnormal{is generated in degree 0} \; (\textnormal{i.e.,}\; F^0\cong S^r )\}, \\
\TR^0(A)&:=\{M\in \TR^{\ZZ}(A)\mid M=M_0A\}, \\
\TR^0_S(A)&:=\{M\in \TR^{\ZZ}_S(A)\mid M=M_0A\}.
\end{align*}
\begin{proposition} \label{prop.lin}
Let $S$ be a graded quotient algebra of a right noetherian connected graded regular algebra, $f\in S_2$ a regular normal element, and $A=S/(f)$.
\begin{enumerate}
\item{} $\TR^0_S(A)=\TR^{\ZZ}_S(A)\cap \lin A$. In particular, $\TR^0_S(A)$ is closed under extensions.
\item{} If $\phi\in \NMF^0_S(f)$ is of rank 1 such that $\Coker \phi$ has no free summand, then $\Coker \phi$ is a co-point module.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) If $M\in \TR^0_S(A)$ is free, then $M\in \TR^{\ZZ}_S(A)\cap \lin A$. If $M\in \TR^0_S(A)$ has no free summand, then there exists $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\in \NMF^{\ZZ}_S(f)$ such that $C (\phi)^{\geq 0}$ is the minimal free resolution of $M$ by Proposition \ref{prop.mfr}.
Since $\deg f=2$, we have $F^{i+2}\cong F^i(-2)$ for every $i\in \ZZ$.
Since $M=M_0A$, we see $\overline {F^0}\cong A^r$, so $C (\phi)^{\geq 0}$ is a linear resolution of $M$, hence $M\in \TR^{\ZZ}_S(A)\cap \lin A$. The converse is clear.
(2) If $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i\in \ZZ}\in \NMF^0_S(f)$ is of rank 1 such that
$\Coker \phi$ has no free summand, then $C(\phi)^{\geq 0}$ is the linear resolution of $\Coker \phi\in \TR^0_S(A)$ by (1).
Since $\overline {F^i}\cong A(-i)$,
$\Coker \phi$ is a co-point module.
\end{proof}
\begin{example}
Since the skew polynomial algebra $T=k\<x_1, \dots, x_n\>/(\a_{ij}x_ix_j+x_jx_i)$
is a quantum polynomial algebra,
$S:=T/(x_1^2, \dots, x_{n-1}^2)$ is a graded quotient algebra of a noetherian AS-regular algebra.
In this case, $f=x_n^2\in S_2$ is a regular normal element, $A:=S/(f)$ is a skew exterior algebra, and $A^!\cong k\<x_1, \dots, x_n\>/(x_ix_j-\a_{ij}x_jx_i)$ is again a skew polynomial algebra.
\end{example}
\begin{theorem} \label{thm.copo}
Let $T=k\<x_1, \dots, x_n\>/(\a_{ij}x_ix_j+x_jx_i)$ be a skew polynomial algebra, and put $S:=T/(x_1^2, \dots, x_{n-1}^2)$.
Let $f=x_n^2\in S_2$ so that $A=S/(f)$ is a skew exterior algebra, and let $\cP^!(A)=(E, \t)$.
\begin{enumerate}
\item{} For $p\in E$, $N_p\in \TR^0_S(A)$ if and only if $p\not\in \cV(x_n)$.
\item{} If $M\in \grmod A$ is an $r$-extension of co-point modules $N_{p_i}$ for $p_i\in E\setminus \cV(x_n)$, then there exists $\phi\in \NMF^0_S(f)$ of rank $r$ such that $M\cong \Coker \phi$.
\end{enumerate}
\end{theorem}
\begin{proof} (1) Recall that $\ell_{ij}=\cV(x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_{j-1}, x_{j+1}, \dots, x_n)
\subset E$ and $\t(\ell_{ij})=\ell_{ij}$ for every $ 1\leq i<j\leq n$ by \cite[Theorem 4.1]{Ue}, so
$E\setminus \cV(x_n)\neq \emptyset$. For $p\in E$ and $i\in \ZZ$, let $\t^i(p)=(a_{i1}, \dots, a_{in})\in E\subset \PP^{n-1}$. Then $p\in E\setminus \cV(x_n)$ if and only if $\t^i(p)\in E\setminus \cV(x_n)$ for every $i\in \ZZ$ if and only if $a_{in}\neq 0$ for every $i\in \ZZ$. Since
$$\begin{CD} \cdots @>{(\sum_{j=1}^na_{2j}x_j)\cdot }>>A(-2) @>{(\sum_{j=1}^na_{1j}x_j)\cdot }>> A(-1) @>{(\sum_{j=1}^na_{0j}x_j)\cdot }>> A @>>> N_p @>>> 0\end{CD}$$
is a linear resolution of $N_p$ over $A$, we have
$(\sum_{j=1}^na_{ij}x_j)(\sum_{j=1}^na_{i+1, j}x_j)=0$ in $A$. It follows that $(\sum_{j=1}^na_{ij}x_j)(\sum_{j=1}^na_{i+1, j}x_j)$ is a linear combination of $\{\a_{ij}x_ix_j+x_jx_i\}_{1\leq i<j\leq n}\cup \{x_i^2\}_{1\leq i\leq n}$ in $k\<x_1, \dots, x_n\>$, so $(\sum_{j=1}^na_{ij}x_j)(\sum_{j=1}^na_{i+1, j}x_j)=a_{in}a_{i+1, n}x_n^2=a_{in}a_{i+1, n}f$ in $S$ for every $i\in \ZZ$. Let $\Phi^i:=\sum_{j=1}^na_{ij}x_j$ for $i\in \ZZ$. By the uniqueness of the linear resolution, $N_p\in \TR^0_S(A)$ if and only if $\{\Phi^i\}_{i\in \ZZ}$ induces $\phi\in \NMF^0_S(f)$ such that $\Coker \phi\cong N_p$,
if and only if $a_{in}\neq 0$ for every $i\in \ZZ$, if and only if $p\not \in \cV(x_n)$.
(2) Since $N_{p_i}\in \TR^0_S(A)$ for every $p_i\in E\setminus \cV(x_n)$ by (1), $M\in \TR^0_S(A)$ by Proposition \ref{prop.lin} (1). Since $M=M_0A$ with $\dim_kM_0=r$, applying the construction in the proof of Theorem \ref{thm.m1} to a surjection $\overline{S^r}\to M$ yields $\phi\in \NMF^0_S(f)$ of rank $r$ such that $M\cong \Coker \phi$.
\end{proof}
\subsection{Indecomposable Noncommutative Matrix Factorizations}
Let $A$ be a right noetherian graded algebra.
We denote by $\tors A$ the full subcategory of $\grmod A$ consisting of finite dimensional modules over $k$, and by $\tails A:=\grmod A/\tors A$ the Serre quotient category. Note that $\Obj(\tails A)=\Obj(\grmod A)$, and, for $M, N\in \Obj(\tails A)=\Obj(\grmod A)$, $M\cong N$ in $\tails A$ if and only if $M_{\geq n}\cong N_{\geq n}$ in $\grmod A$ for some $n\in \ZZ$.
We call $\tails A$ the noncommutative projective scheme associated to $A$ since if $A$ is commutative (and finitely generated in degree 1 as always assumed), then $\tails A$ is equivalent to the category of coherent sheaves on $\Proj A$.
Let $\Tors A$ be the full subcategory of $\GrMod A$ consisting of direct limits of modules in $\tors A$, and $\Tails A:=\GrMod A/\Tors A$.
It is known that the quotient functor $\pi :\GrMod A\to \Tails A$ is exact and has a right adjoint $\omega :\Tails A\to \GrMod A$. See \cite{AZ} for details.
\begin{proposition} \label{prop.fpm}
Let $A$ be a quantum polynomial algebra.
If $M\in \grmod A$ is an $r$-extension of shifted point modules, then the following hold:
\begin{enumerate}
\item{} $\R^1\omega\pi M=0$.
\item{} $\dim _k(\omega \pi M)_n=r$ for every $n\in \ZZ$.
\item{} $(\omega \pi M)(n)_{\geq 0}\in \lin A$ for every $n\in \ZZ$.
\end{enumerate}
\end{proposition}
\begin{proof}
By induction on $r$. For the case $r=1$, it is known that if $M\in \grmod A$ is a shifted point module, then $\R^1\omega \pi M=\H^2_{\fm}(M)=0$.
By \cite [Corollary 5.6]{Mck}, $M\in \lin A$, so (2) and (3) follow from \cite [Lemma 6.5, Proposition 6.6]{Mcfk}.
For $r\geq 1$, if $M\in \grmod A$ has a filtration
$$M=M_0\supset M_{1}\supset \cdots \supset M_r=0$$
such that $M_i/M_{i+1}\in \grmod A$ is a shifted point module for every $i=0, \dots, r-1$,
then $M_1$ is an $(r-1)$-extension of shifted point modules, so
we have $\R^1\omega \pi M_1=0$, $\dim _k(\omega \pi M_1)_n=r-1$, and $(\omega \pi M_1)(n)_{\geq 0}\in \lin A$ for every $n\in \ZZ$ by induction.
An exact sequence $0\to M_1\to M\to M/M_1\to 0$ induces an exact sequence
$$0\to \omega \pi M_1\to \omega \pi M\to \omega \pi (M/M_1)\to \R^1\omega \pi M_1\to \R^1\omega \pi M\to \R^1\omega \pi (M/M_1).$$
Since $M/M_1\in \grmod A$ is a shifted point module, $\R^1\omega \pi (M/M_1)=0$, so $\R^1\omega \pi M=0$ and we have an exact sequence
$$0\to (\omega \pi M_1)(n)_{\geq 0}\to (\omega \pi M)(n)_{\geq 0}\to (\omega \pi (M/M_1))(n)_{\geq 0}\to 0.$$
Since $\dim _k(\omega \pi M_1)_n=r-1$ and $\dim _k(\omega \pi (M/M_1))_n=1$, we have $\dim _k(\omega \pi M)_n=r$ for every $n\in \ZZ$.
Since $(\omega \pi M_1)(n)_{\geq 0}, (\omega \pi (M/M_1))(n)_{\geq 0}\in \lin A$, we have $(\omega \pi M)(n)_{\geq 0}\in \lin A$ by Lemma \ref{lem.liex}.
\end{proof}
Let $A$ be a Koszul algebra. Then $A$ is the polynomial algebra in $n$ variables if and only if $A^!$ is the exterior algebra in $n$ variables if and only if $\cP^!(A^!)=(\PP^{n-1}, \id)$. The following lemma is well-known.
\begin{lemma} \label{lem.qpa}
If $A$ is an $n$-dimensional quantum polynomial algebra such that $\cP^!(A^!)=(\PP^{n-1}, \t)$, then there exists an equivalence functor $\GrMod A\to \GrMod k[x_1, \dots, x_n]$ which induces a bijection from the set of isomorphism classes of point modules over $A$ to the set of isomorphism classes of point modules over $k[x_1, \dots, x_n]$.
\end{lemma}
The following lemma may be standard in commutative algebra.
For the reader's convenience, we include our proof.
\begin{lemma} \label{lem.inde}
Let $A=k[x_1, \dots, x_n]$ and $M=A/(x_1^r, x_2, \dots, x_{n-1})\in \grmod A$ for $r\in \NN^+$. For every $m\in \NN$, $M_{\geq m}$ is indecomposable as a graded $A$-module.
\end{lemma}
\begin{proof} If $A'=A/(x_2, \dots, x_{n-1})\cong k[x_1, x_n]$, then $M$ can be viewed as $M'=A'/(x_1^r)$ as a graded $A'$-module. If $M_{\geq m}$ is decomposable as a graded $A$-module, then $M'_{\geq m}$ is decomposable as a graded $A'$-module, so we may assume that $n=2$. To simplify the notation, let $A=k[x, y]$ and $M=A/(x^r)$. If $M_{\geq m}$ is decomposable, then $M_{\geq m'}$ is decomposable for every $m'\geq m$, so we may assume that $m\geq r-1$. In this case, since $M_m$ is spanned by $x^{r-1}y^{m-r+1}, x^{r-2}y^{m-r+2}, \dots, xy^{m-1}, y^m$ over $k$, every $0\neq u\in M_m$ can be written as $u=\sum _{i=1}^ja_ix^{r-i}y^{m-r+i}\in M_m$ for some $1\leq j\leq r$ such that $a_j\neq 0$ (and $a_{j+1}=\cdots =a_r=0$). For $x^{j-1}y^{r-j}\in A_{r-1}$, since $r-i+j-1\geq r$ for every $i\leq j-1$,
$$ux^{j-1}y^{r-j}=\sum _{i=1}^ja_ix^{r-i+j-1}y^{m+i-j}=a_jx^{r-1}y^m\in M_{m+r-1}.$$
It follows that $x^{r-1}y^m\in uA$ for every $0\neq u\in M_m$. Suppose that there exists a nontrivial decomposition $M_{\geq m}=L\oplus N$ of $M_{\geq m}$. Since $M_{\geq m}=M_mA$, we see that $L_m\neq 0, N_m\neq 0$, so there must exist $0\neq u\in L_m\subset M_m, 0\neq v\in N_m\subset M_m$ such that $uA\cap vA=0$. Since $x^{r-1}y^m\in uA\cap vA$ by the above argument, we have a contradiction, so $M_{\geq m}$ is indecomposable.
\end{proof}
\begin{lemma} \label{lem.pinf}
Let $A$ be an $n$-dimensional quantum polynomial algebra such that $\cP^!(A^!)=(\PP^{n-1}, \t)$. For every $r\in \NN^+$ and every $p\in \PP^{n-1}$, there exists an indecomposable $r$-extension of $M_p$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem.qpa}, we may assume that $A=k[x_1, \dots, x_n]$.
Without loss of generality, we may also assume that $p=(0, \dots, 0, 1)\in \PP^{n-1}$ so that $M_p=A/(x_1, \dots, x_{n-1})$. Then $M:=A/(x_1^r, x_2, \dots, x_{n-1})$ is an $r$-extension of shifted point modules having the filtration
$$M=M_0\supset M_1\supset \cdots \supset M_r=0$$
such that $M_i/M_{i+1}\cong M_p(-i)$ for every $i=0, \dots, r-1$. Since $\omega \pi :\GrMod A\to \GrMod A$ is a left exact functor, we have a filtration
$$(\omega \pi M)_{\geq 0}=(\omega \pi M_0)_{\geq 0}\supset (\omega\pi M_1)_{\geq 0}\supset \cdots \supset (\omega \pi M_r)_{\geq 0}=0.$$
Since $M_i$ is an $(r-i)$-extension of shifted point modules, we have
$$(\omega \pi M_i)_{\geq 0}/(\omega \pi M_{i+1})_{\geq 0}\cong (\omega \pi (M_i/M_{i+1}))_{\geq 0}\cong (\omega \pi M_p(-i))_{\geq 0}\cong M_p$$
for every $i=0, \dots, r-1$ by the proof of Proposition \ref{prop.fpm} and \cite[Proposition 6.6]{Mcfk}, so $(\omega \pi M)_{\geq 0}$ is an $r$-extension of $M_p$.
If $(\omega \pi M)_{\geq 0}$ is decomposable, then
$M_{\geq m}\cong (\omega\pi M)_{\geq m}$ is decomposable for some $m\gg 0$, which is not the case by Lemma \ref{lem.inde}, so $(\omega \pi M)_{\geq 0}$ is indecomposable.
\end{proof}
\begin{theorem} \label{thm.nmfext}
Let $T=k\<x_1, \dots, x_n\>/(\a_{ij}x_ix_j+x_jx_i)$ be a skew polynomial algebra, and put $S=T/(x_1^2, \dots, x_{n-1}^2)$.
Let $f=x_n^2\in S_2$ so that $A=S/(f)$ is a skew exterior algebra. Suppose that
$\a_{ij}\a_{jk}\a_{ki}=1$ for all $1\leq i, j, k\leq n$ (e.g. $A$ is the exterior algebra).
For each $r\in \NN^+$, there exist infinitely many indecomposable $\phi\in \NMF^0_S(f)$ of rank $r$.
\end{theorem}
\begin{proof}
By \cite[Theorem 4.1]{Ue}, if $\a_{ij}\a_{jk}\a_{ki}=1$ for all $1\leq i, j, k\leq n$, then $A^!$ is a skew polynomial algebra such that $\cP^!(A)=(\PP^{n-1}, \t)$,
so $A^!$ is an $n$-dimensional quantum polynomial algebra and there exists an indecomposable $r$-extension $M\in \lin A^!$ of the point module $M_p$ over $A^!$ for each $p\in \PP^{n-1}\setminus \cV(x_n)$ by Lemma \ref{lem.pinf}.
Since $E(M)\in \lin A$ is an indecomposable $r$-extension of a co-point module $N_p$ over $A$ where $p\in \PP^{n-1}\setminus \cV(x_n)$,
we have $E(M)\in \TR^0_S(A)$ by Theorem \ref{thm.copo},
so there exists an indecomposable $\phi\in \NMF^0_S(f)$ of rank $r$ such that $\Coker \phi\cong E(M)$.
For $p, p'\in \PP^{n-1}\setminus \cV(x_n)$, let $M, M'$ be $r$-extensions of $M_p, M_{p'}$, respectively.
Reducing to the commutative case by Lemma \ref{lem.qpa}, if $p\neq p'$, then
$M\not \cong M'$, so there exist infinitely many indecomposable $\phi\in \NMF^0_S(f)$ of rank $r$.
\end{proof}
\begin{remark} Let $A$ be a Koszul algebra. Since the functor $E:\lin A\to \lin A^!$ is a duality, $0\to L\to M\to N\to 0$ is an exact sequence in $\lin A$ if and only if $0\to E(N)\to E(M)\to E(L)\to 0$ is an exact sequence in $\lin A^!$. In fact, the exact sequence $0\to L\to M\to N\to 0$ induces an exact sequence
$$\begin{array}{cccccccccccc} \Ext^{i-1}_A(L, k)_{-i} & \to & \Ext^i_A(N, k)_{-i} & \to & \Ext^i_A(M, k)_{-i}& \to & \Ext^i_A(L, k)_{-i} & \to & \Ext^{i+1}_A(N, k)_{-i} \\
\parallel & & \parallel & & \parallel & & \parallel & & \parallel \\
0 & & E(N)_i & & E(M)_i & & E(L)_i & & 0 \end{array}$$
for every $i\in \ZZ$. It follows that $N\in \lin A$ is an (indecomposable) extension of co-point modules $N_p$ if and only if $E(N)$ is an (indecomposable) extension of point modules $M_p$.
\end{remark}
\subsection{Extensions of Co-point Modules}
\begin{proposition} \label{prop.extp}
Let $A$ be an $n$-dimensional quantum polynomial algebra such that $\cP^!(A^!)=(E, \t)$. Suppose that either
\begin{enumerate}
\item{} $E=\PP^{n-1}$, or
\item{} $n=3$ and $||\t||:=\inf\{i\in \NN^+\mid \t^i\in \Aut \PP^{n-1}\}=\infty$.
\end{enumerate}
For $M\in \grmod A$, if $(\omega \pi M)_{\geq 0}\cong M$ and $H_M(t)=r/(1-t)$, then $M$ is an $r$-extension of point modules.
\end{proposition}
\begin{proof}
The assumption on $A$ implies that
the isomorphism classes of simple objects in $\tails _0A:=\{\pi M\in \tails A\mid \GKdim M\leq 1\}$ are given by $\{\pi M_p\}_{p\in E}$ by \cite [Lemma 3.5, Proposition 4.4]{Mrb}. We prove it by induction on $r$.
Suppose that $(\omega \pi M)_{\geq 0}\cong M$ and $H_M(t)=1/(1-t)$. Since $0\neq \pi M\in \tails _0A$, there exists $p\in E$ such that $\pi M_p\subset \pi M$. Since $\omega :\Tails A\to \GrMod A$ is a left exact functor, $M_p\cong (\omega \pi M_p)_{\geq 0}\subset (\omega \pi M)_{\geq 0}\cong M$. Since $H_M(t)=1/(1-t)=H_{M_p}(t)$, it follows that $M\cong M_p$ is a point module.
Suppose that $(\omega \pi M)_{\geq 0}\cong M$ and $H_M(t)=r/(1-t)$. Since $0\neq \pi M\in \tails _0A$, there exist $p\in E$
and an exact sequence
$$0\to \pi M_p\to \pi M\to \cF\to 0$$
in $\tails A$ for some $\cF\in \tails A$. Since $\R^1\omega \pi M_p=\H_{\fm}^2(M_p)=0$, we have an exact sequence
$$0\to (\omega \pi M_p)_{\geq 0}\cong M_p\to (\omega \pi M)_{\geq 0}\cong M\to (\omega \cF)_{\geq 0}\to (\R^1\omega \pi M_p)_{\geq 0}=0.$$
Setting $M':=(\omega \cF)_{\geq 0}\cong M/M_p\in \grmod A$, we have
$$(\omega \pi M')_{\geq 0}=(\omega \pi (\omega \cF)_{\geq 0})_{\geq 0}\cong(\omega \pi \omega \cF)_{\geq 0}\cong (\omega \cF)_{\geq 0}=M'$$
and $H_{M'}(t)=H_M(t)-H_{M_p}(t)=(r-1)/(1-t)$, so $M'$ is an $(r-1)$-extension of point modules by induction, hence $M$ is an $r$-extension of point modules.
\end{proof}
\begin{proposition} \label{prop.extp2}
Let $A$ be a Koszul algebra such that $\cP^!(A)=(E, \t)$ and $A^!$ is an $n$-dimensional quantum polynomial algebra. Suppose that either
\begin{enumerate}
\item{} $E=\PP^{n-1}$, or
\item{} $n=3$ and $||\t||=\infty$.
\end{enumerate}
If $M\in \lin A$ has no free summand and $H_{E(M)}(t)=r/(1-t)$, then $M$ is an $r$-extension of co-point modules.
\end{proposition}
\begin{proof}
It is enough to show that $(\omega \pi E(M))_{\geq 0}\cong E(M)$ by Proposition \ref{prop.extp}. Consider the exact sequence
$$0\to \H_{\fm}^0(E(M))\to E(M)\to \omega \pi E(M)\to \H_{\fm}^1(E(M)).$$
Since $E(M)\in \lin A^!$, it follows that
$\H_{\fm}^0(E(M))_{\geq 1}=\H_{\fm}^1(E(M))_{\geq 0}=0$ by \cite [Corollary 4.9, Theorem 5.4]{Mr}, so we have an exact sequence
$$0\to \H_{\fm}^0(E(M))_0\cong k^m\to E(M)\to (\omega \pi E(M))_{\geq 0}\to 0$$
for some $m\in \NN$.
Since $k^m, E(M)\in \lin A^!$, we have a surjection $M\cong E(E(M))\to E(k^m)\cong A^m$.
Since $M$ has no free summand, $m$ must be $0$, so $(\omega \pi E(M))_{\geq 0}\cong E(M)$.
\end{proof}
\begin{lemma} \label{lem.gs} Let $A$ be an $n$-dimensional quantum polynomial algebra such that $\cP^!(A^!)=(\PP^{n-1}, \t)$.
For point modules $M, M'\in \grmod A$, $\Ext^1_{\GrMod A}(M, M')\neq 0$ if and only if $M\cong M'$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem.qpa}, we may assume that $A=k[x_1, \dots, x_n]$. Let $M=M_p, M'=M_{p'}$ for $p, p'\in \PP^{n-1}$. Note that $M\cong M'$ if and only if $p=p'$. Without loss of generality, we may assume that $p=(1, 0, \dots, 0)$ so that $M=A/(x_2, \dots, x_{n})A$.
The minimal free resolution of $M$ starts with
$$\begin{CD} A(-2)^{(n-1)(n-2)/2} @>{\left(\begin{smallmatrix}
x_n & 0 & 0 & \cdots \\
0 & x_n & 0 & \cdots \\
0 & 0 & x_n & \cdots \\
\vdots & \vdots & \vdots & \\
-x_2 & -x_3 & -x_4 & \cdots
\end{smallmatrix}\right)\cdot }>> A(-1)^{n-1} @>{\left(\begin{smallmatrix} x_2 & x_3 & \cdots & x_{n}\end{smallmatrix}\right) \cdot }>> A \to M \to 0,\end{CD}$$
so $\Ext^1_{\GrMod A}(M, M')$ is the degree 0 part of the homology of
$$\begin{CD} M' @>{\cdot \left(\begin{smallmatrix} x_2 & x_3 & \cdots & x_n\end{smallmatrix}\right)}>\phi> M'(1)^{n-1} @>{\cdot \left(\begin{smallmatrix}
x_n & 0 & 0 & \cdots \\
0 & x_n & 0 & \cdots \\
0 & 0 & x_n & \cdots \\
\vdots & \vdots & \vdots & \\
-x_2 & -x_3 & -x_4 & \cdots
\end{smallmatrix}\right)}>\psi> M'(2)^{(n-1)(n-2)/2}.\end{CD}$$
If $p=p'$, then $\phi=\psi=0$, so $\Ext^1_{\GrMod A}(M, M')=M'(1)^{n-1}_0=(kx_1)^{n-1}\neq 0$.
If $p\neq p'$, then, without loss of generality, we may assume that $p'=(0, \dots, 0, 1)$ so that $M'=A/(x_1, \dots, x_{n-1})A$.
In this case,
$$(\Im \phi)_0=\{(0, \dots, 0, ax_n)\in (kx_n)^{n-1}
=M'(1)_0^{n-1}\mid a\in k\}=(\Ker \psi)_0,$$
so $\Ext^1_{\GrMod A}(M, M')=0$.
\end{proof}
\begin{theorem} \label{thm.last}
Let $T=k\<x_1, \dots, x_n\>/(\a_{ij}x_ix_j+x_jx_i)$ be a skew polynomial algebra, and put $S=T/(x_1^2, \dots, x_{n-1}^2)$.
Let $f=x_n^2\in S_2$ so that $A=S/(f)$ is a skew exterior algebra, and let $\cP^!(A)=(E, \t)$.
Suppose that either
\begin{enumerate}
\item{} $\a_{ij}\a_{jk}\a_{ki}=1$ for all $1\leq i, j, k\leq n$, or
\item{} $n=3$ and $\a_{12}\a_{23}\a_{31}$ is not a root of unity.
\end{enumerate}
Then $M\in \TR^0_S(A)$ has no free summand if and only if $M$ is a finite extension of co-point modules $N_{p_i}$ over $A$ where $p_i\in E\setminus \cV(x_n)$.
\end{theorem}
\begin{proof} Note that if (1) $\a_{ij}\a_{jk}\a_{ki}=1$ for all $1\leq i, j ,k\leq n$, then $E=\PP^{n-1}$ by \cite[Theorem 4.1]{Ue}, and if (2) $n=3$ and $\a_{12}\a_{23}\a_{31}$ is not a root of unity, then $||\t||=\infty$ by \cite [Lemma 4.13]{Mrb}. In either case, $A^!$ is a quantum polynomial algebra by Example \ref{ex.qpa}.
If $M\in \TR^0_S(A)$ has no free summand, then there exists $\phi=\{\phi^i:F^{i+1}\to F^i\}_{i \in \ZZ}\in \NMF^0_S(f)$ of some rank $r$ such that $\Coker \phi\cong M$ and
$\overline {F^i}\cong A^r(-i)$ for every $i\in \ZZ$ by Proposition \ref{prop.mfr}. Since $M\in \lin A$ by Proposition \ref{prop.lin},
$H_{E(M)}(t)=r/(1-t)$,
so $E(M)$ is an $r$-extension of point modules $M_{p_i}$ for some $p_i\in E$ over $A^!$ by Proposition \ref{prop.extp2}, hence $M$ is an $r$-extension of co-point modules $N_{p_i}$ over $A$.
We will show that $p_i\not \in \cV(x_n)$ by induction on $r$. The case $r=1$ follows from Theorem \ref{thm.copo} (1). Suppose that $M'$ is an extension of co-point modules $N_{p_1}, \dots, N_{p_{r-1}}$ such that there exists an exact sequence
$$0\to M'\to M\to N_{p_r}\to 0.$$
By induction, $p_1, \dots, p_{r-1}\in E\setminus \cV(x_n)$. Since $\TR^0_S(A)$ is closed under direct summands, if the above exact sequence splits, then $N_{p_r}\in \TR^0_S(A)$, so $p_r\in E\setminus \cV(x_n)$. On the other hand, if the above exact sequence does not split, then the exact sequence $0\to E(N_{p_r})=M_{p_r}\to E(M)\to E(M')\to 0$ does not split.
Since $E(M')$ is an extension of point modules $M_{p_1}, \dots, M_{p_{r-1}}$, we have $\Ext^1_{\GrMod A}(M_{p_i}, M_{p_r})\neq 0$ for some $i=1, \dots, r-1$.
If (1) $E=\PP^{n-1}$, then
$\Ext^1_{\GrMod A}(M_{p_i}, M_{p_r})\neq 0$ implies $p_r=p_i\in E\setminus \cV(x_n)$ by Lemma \ref{lem.gs}.
If (2) $n=3$ and $||\t||=\infty$, then either $p_r=p_i\in E\setminus \cV(x_n)$ or $p_r=\varphi ^{-1}\t^{-3}(p_i)$ where $\varphi$ is the Nakayama automorphism of $A^!$ by \cite [Lemma 2.16]{Aj} (cf. \cite [Proposition 4.22]{Mrb}). Since $\t(\cV(x_n))=\cV(x_n)$ and $\varphi (\cV(x_n))=\cV(x_n)$, we have $\varphi ^{-1}\t^{-3}(p_i)\in E\setminus \cV(x_n)$ by \cite [Theorem 4.1]{Ue}, hence the result.
Conversely, if $M$ is a finite extension of co-point modules $N_{p_i}$ over $A$ where $p_i\in E\setminus \cV(x_n)$, then $M\in \TR^0_S(A)$ by Theorem \ref{thm.copo} (2). If $M$ has a free summand, then $E(M)$ is not torsion-free. However, since $E(M)$ is a finite extension of point modules, $E(M)$ is torsion-free, so $M$ has no free summand.
\end{proof}
\section{Introduction}
A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. It is essentially a special case of multivariate testing, in which multiple variants of a page are shown to users at random and statistical analysis is used to determine which variation performs better for a given key performance indicator (KPI). Sometimes, the results from A/B testing are used as a baseline to be compared with other digital marketing services, e.g. personalization. Thus, it is important that A/B testing or multivariate testing results are truly valid so that we make the correct inference. One important aspect is to get rid of any confounding. In other words, we want to make sure that the randomly assigned groups have the same profile distributions. In this way any difference in the KPI actually results from the tested variants, instead of from some characteristics of the users such as age or gender. Despite the large volume of literature on A/B testing design, to our knowledge, there have not been any papers that aim to validate the balance of the population distributions of the different groups. We want to point out that our distribution testing methods are non-parametric by design and have much more general applications beyond A/B testing, such as in randomized medical clinical trials, in design of experiments for manufacturing fault detection, and in bandit problems in personalized medicine, basically anywhere uniformly random data splitting is required in a multi-dimensional space.
\par The rest of this paper has the following structure: Section \ref{relatedwork} discusses past literature on the theory and application of randomized controlled trials (RCT) and observational studies. Section \ref{methodology} introduces some statistical tools, i.e., distance-covariance analysis, the propensity score method, a randomized chi-square test, and a resampling technique, to test the equivalence of multi-dimensional categorical distributions. Section \ref{simulations} compares and analyzes the above methods on simulated data. Section \ref{realdata} applies our methods to real online marketing traffic data from Adobe Experience Cloud. Section \ref{discussion} discusses further generalizations and implementations of our methodology.
\section{Past Work}
\label{relatedwork}
Randomized controlled trials are the simplest method to yield unbiased estimates of the treatment effects under a minimal set of assumptions. Researchers have been developing new methodology that boosts the efficiency of randomized trials when sample size is small.
Sample size may not be a problem in some e-commerce settings as the number of online visitors can easily build up to hundreds of thousands. However, in other settings such as clinical trials, the number of samples (patients) is often small \cite{bhat2017}. \cite{bhat2017} formulates online/offline A-B testing as a (computationally challenging) dynamic optimization problem and develops approximation and exact algorithms. In particular, the paper assumes that the response is linear in the treatment and covariates: $y_k = x_k \theta + Z_k \kappa + \epsilon_k$, where $x_k=0$ or $1$ according to whether subject $k$ is assigned to the treatment or control group, and $Z_k$ denotes the covariates of subject $k$. The objective is to maximize overall precision, which is the inverse of the standard error of $\hat{\theta}$. In the online setting, $x_k$ is $\mathcal{F}_{k-1}$ measurable. The formulation can also incorporate other goals such as controlling selection bias and endogenous stopping. Despite the clean theoretical guarantees and the relatively easy implementation of this algorithm, the linear model is not applicable in many real cases, for example, if $y_k$ is 0 or 1 denoting whether a user converts or not. In this case, a generalized linear model makes more sense, but we are unaware of any theoretical results in this scenario. Also, the theory requires $Z_k$ to have an elliptical distribution, which is not the case in many real applications, though one real experiment with categorical data in the paper still performs well. More importantly, the number of covariates may not be clear at the beginning of A/B testing as online user profiles may not be available at the time of the experiments. Since the current paper is concerned with experiments in e-commerce, the sample size does not pose a big issue.
\par There are numerous works on the implementation, validity, and efficiency of A/B testing. \cite{kohavi2009} gives a detailed summary of the things data scientists need to pay attention to when carrying out A/B testing, and in general multivariate testing, such as determining the right sample size before testing and stopping the experiments when buggy or unintentionally poor results come up. When there is more than one variant to test, such as the choice of the picture or the color of the text on a website, online experimenters can either do multiple pairwise comparisons of each individual variant, applying multiple testing adjustments such as the Bonferroni adjustment if the number of variants is small, or do a multivariate test altogether. This paper mentions some reasons why an A/B testing split might not be truly random. One of the most common in practice is the effect of robots. Robots can introduce significant skew into estimates, enough to render the split invalid, and cause many metrics to be significant when they should not have been. For some websites robots are thought to provide up to half the pageviews on the site \cite{kohavi2004}. However, it is difficult to clearly delineate between human users and robots \cite{tan2002}. Despite mentioning possible confounding factors in online experiments, this paper does not go into detail on how to get rid of them, or how to test whether the splitting of traffic is truly random.
\par When randomized trials are not possible, techniques from observational studies are needed to get valid inference. \cite{gordon2016} focuses on the evidence of advertising measurements from big field experiments at Facebook. It uses the advertising effects measured from RCTs as a benchmark, and compares the causal effects measured from observational studies to this benchmark. Two methods proposed by the paper to get rid of confounding in non-randomized experiments are matching and regression adjustments. Exact matching and propensity matching are two common ways in practice for the former. Inverse-probability-weighted regression adjustment (IPWRA) \cite{wooldridge2007} incorporates propensity information into regression adjustments to provide more robust results. This paper also mentions checking the equivalence of distributions in the A/B split so as to create a valid benchmark. Yet it simply checks each covariate independently without adjusting for multiplicity. This can incur false positives when the number of covariates is large and cannot detect differences in covariance structure. A/B testing is a designed experiment, so we do not need to use the methodology of observational studies in our paper.
\par There are also papers on the continuous monitoring of A/B testing and how to effectively run a large number of A/B tests. If we plot online experiment results continuously and declare significance the first time the p-value is smaller than a pre-defined threshold, we will find that the actual type I error can be much larger than the threshold. \cite{johari2015} introduces methods to continuously monitor A/B tests while still controlling the type I error or the false discovery rate. \cite{tang2010} describes an overlapping experiments infrastructure that helps run experiments faster and produce better decisions. \cite{zhang2017} describes an adaptive sample size modification for the cold start of A/B testing. Network effects can also come into play, especially for experiments conducted in companies like Facebook or LinkedIn. \cite{gui2015} proposes an A/B testing effect analysis that incorporates network effects into a linear model.
\par All these works contribute to the validity or efficiency of online experiments. We emphasize that our ``randomness check'' has a very general application, regardless of how traffic is split, the distribution of the covariates, or whether the experiments are done offline or monitored continuously.
\section{Methodology}
\label{methodology}
\par To formulate the problem mathematically, assume the distributor
has assigned incoming users to $k$ groups. Each user is represented
by a row vector of length $m$, with each entry representing certain
profile information, such as age, area, gender, or number of times of past visit. Hence, we have $k$
data sets, each of dimension $n_i \times m, 1 \leq i \leq k$. Our goal
is to infer whether the split achieves the desired randomness by testing whether the $k$ $m$-dimensional data sets have the same
distribution. Throughout our analysis, we assume each column is
categorical and each row (user) is independent. These are reasonable
assumptions because many important profile attributes, such as
gender, age, and region, are categorical. We assume the number of users in each group is large, and the network effect is negligible compared to the scale of user numbers.
\par In the following subsections, we first state the method that is used currently by online experimenters to do the A/B split validity check \cite{gordon2016} as a baseline. Then we describe a method proposed by \cite{rizzo2010}, called distance components (DISCO), for measuring
the total dispersion of the samples, which admits a partition of the
total dispersion into components analogous to the variance components in
ANOVA. We will also apply the propensity score method and introduce a randomized chi-square test. Each of these distribution tests has its own advantages, as we show in Section \ref{simulations}. Finally, we propose a resampling technique that controls the family-wise error rate (FWER) under any condition while still maintaining high power compared to some other multiplicity adjustment methods, as again shown in Section \ref{simulations}. Our contribution is to apply and modify existing testing methods for the ubiquitous A/B and multivariate testing validity check.
\subsection{Baseline}
\cite{gordon2016} and some other papers apply F-test \footnote{see
\url{https://en.wikipedia.org/wiki/F-test} for an introduction of
F-test.} to each of the $m$ columns (covariates) or some key columns of the split data sets, and claim the split is truly random if none of the p-values is smaller than 0.05. This is perhaps the most straightforward and simple way of approaching the randomization check problem, but it has some potential issues. First, the F-test can only test differences in means, but cannot detect other distributional differences, such as in variance (we will see in the next subsection that DISCO provides a solution to this). Second, even if each column has the same distribution among the $k$ data sets, the interactions among the $m$ columns can be different for each data set, which also contributes to multi-dimensional distribution heterogeneity. Third, multiplicity adjustment is needed if the dimension $m$ is large, which is often the case for online marketing data. Otherwise, even an A/A test can have a very high false positive rate.
\par To address the third issue, we apply a resampling technique (introduced later) to adjust for multiplicity. It has the advantage of controlling FWER under any dependence structure of the p-values, while maintaining high power compared to other p-value adjustment methods, such as the Bonferroni or Holm's method.
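\par To make the baseline concrete, the following Python sketch computes the column-wise p-values; it assumes each group is stored as a NumPy array with one row per user and one (numerically encoded) column per covariate, and the function name is our own illustrative choice. The multiplicity adjustment is deferred to the resampling technique below.
\begin{verbatim}
import numpy as np
from scipy.stats import f_oneway

def baseline_pvalues(datasets):
    # one-way ANOVA F-test applied to each column separately;
    # datasets is a list of k arrays, each of shape (n_i, m)
    m = datasets[0].shape[1]
    return np.array([f_oneway(*[d[:, j] for d in datasets]).pvalue
                     for j in range(m)])
\end{verbatim}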
\subsection{DISCO one-step test}
\cite{rizzo2010} propose a new method, called distance components (DISCO), of measuring the total dispersion of the samples in multi-dimensions, which admits a partition of the
total dispersion into components analogous to the variance components in
ANOVA. They introduce a measure of dispersion based on Euclidean distances between all pairs of sample elements, raised to any power $\alpha \in (0,2)$ of the distances, where $\alpha$ is called the index. The method is based on the following key definitions and theorem.
\par Suppose that $X$ and $X'$ are independent and identically distributed (i.i.d.),
and $Y$ and $Y'$ are i.i.d., independent of $X$. If $\alpha$ is a constant such that $E||X||^\alpha < \infty$ and $E||Y||^\alpha < \infty$, define the $\mathcal{E}_\alpha$-distance (energy distance) between the distributions of $X$ and $Y$ as
\begin{equation}
\label{energy}
\mathcal{E}_\alpha(X, Y) = 2E||X - Y||^\alpha - E||X - X'||^\alpha - E||Y - Y'||^\alpha.
\end{equation}
Then we have the following theorem.
\begin{theorem}
\label{theorem1}
Suppose that $X, X' \in \mathbb{R}^p$ are i.i.d. with distribution $F$, $Y, Y' \in \mathbb{R}^p$ are i.i.d. with distribution $G$, and $Y$ is independent of $X$. If $0<\alpha \leq 2$ is a constant such that $E||X||^\alpha < \infty$ and $E||Y||^\alpha < \infty$, then the
following statements hold:\\
(i) $\mathcal{E}_\alpha(X, Y ) \geq 0.$ \\
(ii) If $0<\alpha < 2$, then $\mathcal{E}_\alpha(X, Y ) = 0$ if and only if $X \stackrel{\mathcal{D}}{=} Y$. \\
(iii) If $\alpha = 2$, then $\mathcal{E}_\alpha(X, Y ) = 0$ if and only if $E[X] = E[Y]$.
\end{theorem}
The proof of Theorem \ref{theorem1} can be found in \cite{rizzo2010}. Based on this theorem, the DISCO statistic, which can be thought of as a variation of the ANOVA statistic, can be developed. Define the empirical distance between distributions as follows. For two $p$-dimensional samples $A = \{a_1,..., a_{n_1}
\}$ and $B = \{b_1,..., b_{n_2}
\}$, the $d_\alpha$-distance between $A$ and $B$ is defined as
\begin{equation}
\label{distance}
d_\alpha(A,B) = \frac{n_1 n_2}{n_1+n_2} [2g_\alpha(A,B)-g_\alpha(A,A)-g_\alpha(B,B)],
\end{equation}
where
\begin{equation}
\label{gfunction}
g_\alpha(A,B) = \frac{1}{n_1 n_2} \sum \limits_{i=1}^{n_1} \sum \limits_{m=1}^{n_2} ||a_i-b_m||^\alpha.
\end{equation}
Note that $g_\alpha(A,A)$ measures the within-sample dispersion while $d_\alpha(A,B)$ measures the between-sample dispersion. Similar to the ANOVA analysis, we can also write the total dispersion as the sum of the between-sample and within-sample dispersions here. Let $A_1,...,A_K$ be $p$-dimensional samples with sizes $n_1,...,n_K$. The $K$-sample $d_\alpha$-distance statistic that takes the role of the ANOVA sum of squares for treatments is the weighted sum of dispersion statistics:
\begin{equation}
\label{between}
S_\alpha(A_1,...,A_K) = \sum \limits_{1 \leq j<k \leq K} \bigg(\frac{n_j+n_k}{2N}\bigg) d_\alpha(A_j,A_k).
\end{equation}
Similarly, the total dispersion of the observed response is
\begin{equation}
\label{total}
T_\alpha(A_1,...,A_K) = \frac{N}{2} g_\alpha (A,A),
\end{equation}
where $A = \sum \limits_{i=1}^K A_i$ is the pooled sample, and the within-sample dispersion is
\begin{equation}
\label{within}
W_\alpha (A_1,...,A_K) = \sum \limits_{j=1}^K \frac{n_j}{2} g_\alpha(A_j,A_j).
\end{equation}
Note that we have $T_\alpha(A_1,...,A_K) = S_\alpha(A_1,...,A_K)+W_\alpha(A_1,...,A_K)$, and when $p=1, \alpha=2$, the decomposition $T_2=S_2+W_2$ is exactly the ANOVA decomposition of the
total squared error: $SS(\text{total}) = SST + SSE$. Hence, ANOVA is a special case of the DISCO method.
\par Based on the decomposition, the final statistics for testing equality of distribution is
\begin{equation}
\label{final_stat}
D_{n,\alpha} = \frac{S_\alpha/(K-1)}{W_\alpha/(N-K)},
\end{equation}
with $0<\alpha<2$ (if $\alpha=2$, the above statistic can only test equality of means). The distribution of $D_{n,\alpha}$ is complicated, so \cite{rizzo2010} uses permutation, which is a simplified version of our resampling technique, to obtain rejection thresholds. As we will see in Section \ref{simulations}, the DISCO one-step test, though simple to implement, does not provide information about which dimensions are problematic when the null hypothesis of equal distributions is rejected.
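\par For concreteness, a minimal Python sketch of $D_{n,\alpha}$ and its permutation p-value is given below. It assumes the categorical columns are numerically encoded so that Euclidean distances are defined, and it is only an illustration; a reference implementation of DISCO is available in the authors' R package \texttt{energy}.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def disco_statistic(samples, alpha=1.0):
    # D_{n,alpha}: between-sample over within-sample dispersion ratio
    g = lambda A, B: (cdist(A, B) ** alpha).mean()
    N, K = sum(len(A) for A in samples), len(samples)
    S = sum((len(A) + len(B)) / (2 * N)
            * len(A) * len(B) / (len(A) + len(B))
            * (2 * g(A, B) - g(A, A) - g(B, B))
            for i, A in enumerate(samples) for B in samples[i + 1:])
    W = sum(len(A) / 2 * g(A, A) for A in samples)
    return (S / (K - 1)) / (W / (N - K))

def disco_pvalue(samples, alpha=1.0, n_perm=199, seed=0):
    # permutation p-value for H0: all groups share one distribution
    rng = np.random.default_rng(seed)
    pooled = np.vstack(samples)
    cuts = np.cumsum([len(A) for A in samples])[:-1]
    d0 = disco_statistic(samples, alpha)
    hits = sum(disco_statistic(np.split(rng.permutation(pooled), cuts),
                               alpha) >= d0 for _ in range(n_perm))
    return (1 + hits) / (1 + n_perm)
\end{verbatim}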
\subsection{Propensity score method}
Propensity score comparison \footnote{see
\url{https://en.wikipedia.org/wiki/Propensity_score_matching}} tests
whether the $k$ $m$-dimensional data sets have the same distribution
by fitting a model to the data to obtain the likelihood of a data point being assigned to a particular data set. To be specific, combine the $K$ data sets $A_1,...,A_K$ into one big data set $A$, which is also the $p$-dimensional covariate space. The response is a length-$N$ vector with each entry being the class label (1, 2, ..., or $K$) of each data point. We can then fit logistic regression, a tree-based model, a support vector machine, or any class prediction model to the data to obtain a predicted label for each data point. Here we use multinomial logistic regression. If the $K$ original data sets truly have the same distribution, the predicted labels should have an approximately uniform distribution on $\{1,2,...,K\}$. We use the chi-square test \footnote{see
\url{https://en.wikipedia.org/wiki/Chi-squared_test}} on a $K \times K$ contingency table to test the uniformity of distribution. However, we can show, for logistic regression, that only when the $K$ data sets are exactly the same is the distribution of the predicted labels truly uniform. Otherwise, the predicted label will tend to be the same as the actual label due to overfitting. To resolve this, we can randomly choose a $c$ proportion of rows in each data set to be training data, and the rest are test data. But this reduces the
effective test sample size in the stage of the chi-square test. For example, if we choose $c=0.8$, then only 1/5 of the data are
used for the chi-square test. This can shrink the power when the total number of data points is not too large. Here, as in the DISCO one-step test introduced in the previous subsection, we use the permutation method to get the rejection threshold for the p-value obtained from the chi-square test.
\par The propensity score method, in some sense, takes into account both marginal distributions and
interactions because all the covariates appear together on the right-hand side of the link function. Interaction terms can also be added, but it is unclear how many interaction terms are enough, since adding all the interactions is quite infeasible when the number of covariates is large. However, as we will show in the simulation section, the propensity score method is not very sensitive to non-mean differences (such as variance differences) in distribution. Next, we will give a more direct approach to testing marginal and
interaction homogeneity.
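\par A sketch of this check in Python (with scikit-learn's logistic regression) might look as follows; we assume the categorical covariates have been dummy-encoded upstream, and the function name is ours. The returned p-value should then be calibrated with the permutation procedure of Algorithm 1 below rather than compared directly to a nominal level.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def propensity_pvalue(datasets, c=0.8, seed=0):
    # stack the K groups; the response is the group label of each row
    X = np.vstack(datasets)
    y = np.concatenate([np.full(len(d), k)
                        for k, d in enumerate(datasets)])
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, train_size=c, stratify=y, random_state=seed)
    pred = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict(Xte)
    K = len(datasets)
    table = np.zeros((K, K))
    for i, j in zip(yte, pred):
        table[i, j] += 1          # true label vs predicted label
    # tiny constant guards against zero-margin cells in degenerate tables
    return chi2_contingency(table + 1e-9)[1]
\end{verbatim}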
\subsection{Randomized chi-square test}
Since the data are categorical, the distribution is determined by the
multinomial distribution parameters $p=(p_1,...,p_l)$, where each $p_i$ denotes the probability of seeing a particular combination of categories from the $m$ columns, with $\sum
\limits_{i=1}^l p_i = 1$, where $l$ is the total number of
categories. With $m$ columns in total, the total number of categories
can be huge ($2^m$ is most likely a lower bound). Theoretically, we
can create a $K \times l$ contingency table and apply chi-square test
to test whether the multinomial distribution is independent of data
sets (equivalent to the $k$ data sets having the same distribution). Yet this is not feasible under computation time
constraints. Nor can we reduce the dimensions of the table without
losing information. For example, two distributions can have the same two-way
interaction but different three-way interaction. However, if we have
further information about the number of categories of each column, we
can reduce $l$ accordingly. A simple example is that all columns are
binary (two categories). The distribution in this case is determined
by marginal distribution and two-way interaction. Without further
information, we compromise by
introducing randomization:
choosing a small subset of columns each time, say $C$ columns, to do
chi-square test, and then repeat the process $D$ times. The argument
is that even if the
non-null columns are sparse, by repeatedly choosing $D$ times for
relatively large $D$, the non-null columns still have large
probability of being picked at least once. Assume there are $m$ columns in total; the probability that a single column is not picked even once is $(1-C/m)^D$, which is approximately $1-CD/m$ when $CD/m$ is small. Thus, $C$ and $D$ have equal impacts on the selection probability. In practice, increasing $D$ costs less computation than increasing $C$.
\par Note that, among the three methods we introduce, randomized chi-square only applies to categorical data. In practice, we can ``categorize'' continuous data into buckets, which sometimes gives us higher power to detect heterogeneous distributions than applying the other methods directly on the continuous data.
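\par The following Python sketch illustrates the randomized procedure; the data sets are assumed to be numerically encoded categorical NumPy arrays, and the sampled column subsets are returned so that flagged combinations can be inspected afterwards. The $D$ p-values are then adjusted with the resampling technique of the next subsection.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

def randomized_chisq(datasets, C=3, D=10, seed=0):
    # returns D p-values, one per random subset of C columns
    rng = np.random.default_rng(seed)
    m = datasets[0].shape[1]
    pvals, subsets = [], []
    for _ in range(D):
        cols = rng.choice(m, size=C, replace=False)
        # each user is reduced to the tuple of its values on `cols`
        combos = [list(map(tuple, d[:, cols])) for d in datasets]
        levels = sorted(set().union(*combos))
        table = np.array([[c.count(lv) for lv in levels] for c in combos])
        pvals.append(chi2_contingency(table)[1])
        subsets.append(cols)
    return np.array(pvals), subsets
\end{verbatim}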
\subsection{A resampling technique}
Resampling \footnote{see
\url{https://en.wikipedia.org/wiki/Resampling_(statistics)}} is
often used when the distribution of the test statistics is not known
or hard to derive. In our randomized chi-square test, for example, the
distribution of P-values depends on the unknown distribution of the
original data sets. Thus, we can use resampling to set the threshold
of rejection. If we are in the single hypothesis testing scenario (one-step DISCO and propensity score methods), our proposed resampling technique is equivalent to a permutation test (see Algorithm 1) \footnote{In all the algorithms here, without loss of generality we assume the computed statistics are P-values.}.
\begin{algorithm}
\begin{algorithmic}[1]
\State Get original statistics $P_0$ (e.g. from propensity score method).
\State Randomly permute rows (users) among the $k$ data sets, and calculate the P-value from the permuted data.
\State Repeat step 2 $B$ times, and we obtain $B$ P-values from resampling, denoted by a vector $\tilde{P}^* = (\tilde{P}^*_1,...,\tilde{P}^*_B)$.
\State Choose the threshold $t$ as the $\alpha$-quantile (e.g. 5th percentile) of $\tilde{P}^*$, and reject the null hypothesis if $P_0<t$.
\end{algorithmic}
\caption{resampling for single hypothesis}
\label{alg:algorithm1}
\end{algorithm}
It is easy to see that Algorithm 1 controls FWER exactly, because all permutations have equal probability under the null. Under the multiple hypothesis testing scenario, we modify Algorithm 1 a little (see Algorithm 2).
\begin{algorithm}
\begin{algorithmic}[1]
\State Assume there are $m$ hypotheses. Get the original vector of statistics, denoted $\hat{P}_0=(P_{01},...,P_{0m})$.
\State Randomly permute rows (users) among the $k$ data sets, and calculate the vector of P-values from the permuted data, denoted by $\tilde{P}^*_1 = (\tilde{P}^*_{11},...,\tilde{P}^*_{1m})$. Let $Pmin_1 = \min(\tilde{P}^*_1)$.
\State Repeat step 2 $B$ times, and we obtain $B$ minimum P-values from permuted data, denoted by a vector $\tilde{P}min = (Pmin_1,...,Pmin_B)$.
\State Choose the threshold $t$ as the $\alpha$-quantile (e.g. 5th percentile) of $\tilde{P}min$, and reject null hypothesis $i$ if $P_{0i}<t$.
\end{algorithmic}
\caption{resampling for multiple hypotheses}
\label{alg:algorithm2}
\end{algorithm}
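A minimal Python sketch of Algorithm 2 is given below; here \texttt{stat\_fn} stands for any routine that maps a list of groups to a vector of P-values (for instance, the baseline column-wise F-tests or the randomized chi-square p-values sketched earlier), and the function name is our own.
\begin{verbatim}
import numpy as np

def minp_threshold(datasets, stat_fn, B=200, alpha=0.05, seed=0):
    # Algorithm 2: alpha-quantile of the permutation distribution of min-P
    rng = np.random.default_rng(seed)
    pooled = np.vstack(datasets)
    cuts = np.cumsum([len(d) for d in datasets])[:-1]
    mins = [np.min(stat_fn(np.split(rng.permutation(pooled), cuts)))
            for _ in range(B)]
    return np.quantile(mins, alpha)

# usage: reject hypothesis i whenever p0[i] < t, where
#   p0 = stat_fn(datasets) and t = minp_threshold(datasets, stat_fn)
\end{verbatim}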
We claim that Algorithm 2 controls FWER under any hypotheses configuration.
\begin{theorem}
Assume the null P-values are marginally Uniform(0,1). Under any covariance structure, Algorithm 2 controls FWER, so it controls FDR as well.
\end{theorem}
\begin{proof}
The proof is conceptually simple. Assume there are $r$ non-null and $m-r$ null hypotheses among the $m$ hypotheses. Then
the probability of making at least one false rejection is the probability that the minimum of the subset of $m-r$ null P-values falls below the $\alpha$-quantile of the distribution of the minimum of $m$ null P-values. Since the former minimum is stochastically larger than the latter, this probability is at most $\alpha$.
\end{proof}
\par In practice, resampling can also be applied after hypotheses selection (see Algorithm 3).
\begin{algorithm}
\begin{algorithmic}[1]
\State Assume there are $m$ hypotheses. A selection rule $S$ selects $s$ hypotheses with statistics $\hat{P}^S=(P_{m1},...,P_{ms})$.
\State Randomly permute rows (users) among the $k$ data sets, apply the selection rule $S$, and denote the selected P-values by $\tilde{P}^*_1 = (\tilde{P}^*_{11},...,\tilde{P}^*_{1s_1})$. Let $Pmin_1 = \min(\tilde{P}^*_1)$.
\State Repeat step 2 $B$ times, and we obtain $B$ minimum selected P-values from permuted data, denoted by a vector $\tilde{P}min = (Pmin_1,...,Pmin_B)$.
\State Choose the threshold $t$ as the $\alpha$-quantile (e.g. 5th percentile) of $\tilde{P}min$, and reject null hypothesis $i$ among the originally selected $s$ P-values if $P_{mi}<t$.
\end{algorithmic}
\caption{resampling after hypotheses selection}
\label{alg:algorithm3}
\end{algorithm}
Empirical results show that Algorithm 3 also controls FWER, and thus FDR, when the selection rule $S$ satisfies certain properties, namely when non-null hypotheses are more likely to be selected than null hypotheses. Intuitively, this controls FWER because the minimum of the selected subset of null P-values is stochastically larger than the minimum of all the null P-values.
\par We will compare this resampling technique to other multiplicity adjustment methods such as Holm's procedure \footnote{See \url{https://en.wikipedia.org/wiki/Holm\%E2\%80\%93Bonferroni_method} for Holm's procedure.}, and the Benjamini-Yekutieli (BY) procedure proposed in \cite{benjamini2001} in the next section.
\section{Simulations and analysis}
\label{simulations}
Since real data sets are huge and messy, we first use simulations to compare
our methods introduced in the previous section. Heterogeneity in multi-dimensional distributions has two components: (a) heterogeneity in the marginal distribution of one (or more) dimension(s); (b) heterogeneity in the covariance (interaction) structure among dimensions. In the following simulations, the data is
generated with either (a) or (b), or both.
\par For the randomized chi-square method, deciding the choices of the maximum number of columns to sample each time ($C$) and the number of times to sample ($D$) is tricky. When the dimension becomes large, we hope to increase $C$ to capture more complicated interaction structure. Yet the number in each cell of the $R \times C$ contingency table will decrease, which will diminish the power of the chi-square test. Empirically, when the number of columns is not too large, and if the sum of each column of the $R \times C$ table is greater than or equal to 5, we pick $C$ between $m/10$ and $m/5$, where $m$ is the dimension of the data sets.
\par We do not do any feature selection to reduce dimension in our simulations, because in reality online marketing data has a great range of variety depending on the company, and each company has its own way of trimming data sets (either with a general feature selection technique such as LASSO, or with domain knowledge). Thus, we do not apply Algorithm 3 from the previous section in this paper.
\subsection{Detection of heterogeneity in marginal distribution or interaction structure}
In this first simulation, we consider detection of marginal distribution difference and interaction structure difference separately. We simulate four data sets, each with 10 columns (dimensions) and 100 rows. Three of them have i.i.d. entries with multinomial distribution with probability vector (0.25,0.25,0.25,0.25) (so the number of categories of each column is also four).
\par In Scenario 1, the last data set has different marginal distribution than the other three data sets: for weak signal, the probability vector is (0.3,0.25,0.25,0.2); for medium signal, it is (0.4,0.25,0.2,0.15); for strong signal, it is (0.5,0.2,0.2,0.1). For each heterogeneity we also vary the number of columns (from 1 to 10) that have heterogeneous marginal distribution to test the sensitivity of each method.
\par In Scenario 2, the last data set has different interaction structure among columns keeping the marginal distribution the same as the other three sets (the other three have independent columns): first sort each column, and then for strong signal, each column is rotated by 10 (turning an array $A$ to $A[10:] + A[:10]$. Here $+$ denotes concatenation) from the previous column, so the columns are positively correlated; for medium signal, after rotation 40\% of the data in each column is permuted to mitigate the correlation; for weak signal, after rotation 80\% of each column is permuted randomly. Like in Scenario 1, we also vary the number of columns (from 2 to 10) with heterogeneous interaction structure. We compare
four methods: baseline t-test, propensity score method with resampling, DISCO method, and randomized chi-square test with resampling.
\begin{figure}[H]
\includegraphics[scale=0.5]{plot1.eps}
\caption{All three plots are the results of detection of heterogeneity in marginal distribution, the top one for small difference, the middle one for medium difference, the bottom one for big difference}
\label{plot1}
\end{figure}
\begin{figure}[H]
\includegraphics[scale=0.5]{plot2.eps}
\caption{All three plots are the results of detection of heterogeneity in interaction structure, the top one for weak correlation, the middle one for medium correlation, the bottom one for strong correlation}
\label{plot2}
\end{figure}
\par From Figures \ref{plot1} and \ref{plot2} we see that the baseline t-test has high power for detecting marginal heterogeneity, but not for detecting interaction heterogeneity. This agrees with our conjecture. Thus, in the following simulations, we will not test this method anymore. The DISCO and propensity score methods have relatively higher power in detecting marginal differences, while the randomized chi-square method has a significant advantage in detecting interaction heterogeneity. Furthermore, while the DISCO or propensity score methods can only tell whether the multi-dimensional distributions are the same, the randomized chi-square test can also flag individual problematic columns when the distributions are imbalanced. The flagged columns may not be exhaustive due to randomness in column selection, but they definitely provide a starting point for the follow-up detailed diagnosis procedure.
\par We also see from the power plots that interaction heterogeneity is in general harder to detect than marginal distribution heterogeneity, since the former has a convex power curve (power only increases significantly when the number of heterogeneous columns is large) while the latter has a concave power curve.
\subsection{Varying dimension}
We next vary the dimension of the data sets from 10 to 50 while holding the number of rows at 100. We compare the three methods, the propensity method, DISCO, and the randomized chi-square test, on weak heterogeneity (weak marginal heterogeneity + weak interaction heterogeneity as defined in the previous simulation), medium heterogeneity (weak marginal heterogeneity + medium interaction heterogeneity), and strong heterogeneity (medium marginal heterogeneity + medium interaction heterogeneity). At each of the three signal levels, the number of heterogeneous columns is 1/5 of the dimension.
\begin{figure}[H]
\includegraphics[scale=0.5]{plot3.eps}
\caption{All three plots are the results of detection power of heterogeneity as dimension increases, the top one for weak heterogeneity, the middle one for medium heterogeneity, the bottom one for strong heterogeneity}
\label{plot3}
\end{figure}
\par From Figure \ref{plot3} we see that randomized chi-square behaves almost the same, or a little better, when the difference in distribution is not too large, while the other two methods, especially DISCO, behave significantly better when the dimension of the data sets is relatively large. In other words, the randomized chi-square test has higher power in detecting many small effects.
\par We also notice that, compared to the other two methods, increasing the dimension of the data has little positive effect on the power of the randomized chi-square test. The reason is that, since the proportion of heterogeneous columns is unchanged as the dimension increases, the probability of picking the ``problematic columns'' is also relatively unchanged as the dimension increases. For the other two methods, the power has an increasing trend as the dimension increases.
\par We use the resampling procedure to get the rejection threshold in all the methods above, so here we also compare the power of the resampling procedure with that of one popular multiplicity adjustment procedure, Holm's procedure \footnote{See \url{https://en.wikipedia.org/wiki/Holm\%E2\%80\%93Bonferroni_method} for an introduction of Holm's procedure.} Both resampling and Holm's procedure can control the type I error under any P-value structure.
\begin{figure}[H]
\includegraphics[scale=0.4]{plot4.eps}
\caption{Power comparison of resampling and Holm's procedure for randomized chi-square method, the top one for weak heterogeneity, the middle one for medium heterogeneity, the bottom one for strong heterogeneity}
\label{plot4}
\end{figure}
Figure \ref{plot4} plots the power comparison of applying the randomized chi-square test + resampling and applying the randomized chi-square test + Holm's procedure. We see that for any heterogeneity level, resampling has higher power than Holm's procedure, which justifies the use of the resampling procedure in multiple hypothesis testing problems.
\subsection{Simulated real-world scenario}
Next, we simulate a realistic California-local online marketing A/B testing scenario. Assume the company is an online retailer for fashion, skin care, and cosmetics. The data is collected from a 24-hour window from 12AM February 21st to 12AM February 22nd. There are 8 columns representing area of living, browser type, gender, age, employment status, income, accumulated number of visits (in the past year), and whether the user converted before, respectively. Table 1 below summarizes the encoding details of these 8 features.
\begin{table}[H]
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Column names & Encoding \\ \hline
Area of living & 0: south Cal; 1: north Cal; 2: mid Cal \\ \hline
Browser type & 0: Chrome; 1: Safari; 2: Firefox; \\
& 3: Internet Explorer; 4: others \\ \hline
Gender & 0: male; 1: female \\ \hline
Age & 0: <20; 1: 20-30; 2: 30-40; \\
& 3: 40-50; 4: >50 \\ \hline
Employment status & 0: student; 1: employed; 2: unemployed \\ \hline
Income & 0: <50,000; 1: 50,000-100,000; \\ & 2: 100,000-200,000; 3: >200,000 \\ \hline
Accumulated & 0: <3; 1: 3-10; 2: 10-20; 3: >20\\
number of visits & \\ \hline
Converted before & 0: no; 1: yes\\ \hline
\end{tabular}
\label{table1}
\caption{Column names and encodings}
\end{center}
\end{table}
The number of categories for each column ranges from 2 to 5. The real interaction structure among these 8 columns can be complicated. We simplify this relationship and summarize it in Figure \ref{diagram} below. Area of living and Browser type are independent variables, while the other 6 variables have complicated correlations. It is by no means accurate. For example, conditioning on employment status, age and gender may not be independent in reality. We consider the scenario where the data traffic is not randomly split for a period of time. For example, almost all traffic is assigned to set A for two hours in the morning, which results in set A having more data than set B. To balance the number of data points in the two sets, the experimenter assigns most of the traffic in the evening to set B. A direct result is that employment status has different distributions in sets A and B --- unemployed people take up a much higher proportion in the morning, when employed people and students are busy working, than they do in the evening.
\begin{figure}[H]
\centering
\begin{tikzpicture}[auto,node distance=1.5cm]
%
\node[entity] (node1) {Employment status}
[grow=up,sibling distance=2cm];
\node[relationship] (rel3) [above = of node1] {Income};
\node[relationship] (rel2) [below left = of node1] {Gender};
\node[relationship] (rel1) [below right = of node1] {Age};
\node[entity] (node2) [above right = of rel1, xshift=-1.6cm] {Accumulated number of visits};
\node[relationship] (rel4) [above = of node2] {Converted or not};
\node[entity] (node3) [above = of rel3]{Area of living};
\node[entity] (node4) [right = of node3]{Browser type};
\path (rel1) edge node {1} (node1)
edge node {4} (node2)
(rel2) edge node {2} (node1)
edge node {4} (node2)
(rel3) edge node {3} (node1)
edge node {4} (node2)
(rel4) edge node {5} (node2)
(rel4) edge node {5} (rel3);
\end{tikzpicture}
\caption{Interaction structure among the 8 columns}
\label{diagram}
\end{figure}
Figure \ref{diagram} also shows a way to generate our data. All marginal distributions are multinomial. Area of living and Browser type are independent and can be easily generated. Then we generate Employment status with a certain probability vector. Conditioning on Employment status, we generate age (the edge labeled 1), gender (the edge labeled 2), and income (the edge labeled 3). Then, conditioning on income, gender, and age, we generate Accumulated number of visits (the edges labeled 4). Finally, we generate Converted or not conditioning on income and Accumulated number of visits (the edges labeled 5). A detailed data generation procedure is summarized in the Appendix. In particular, the probability vector for Employment status is (0.3,0.3,0.4) in set A (more people in the morning) and (0.4,0.4,0.3) in set B (more people in the evening), which results in differences in the marginal distributions and in the correlation structure of the other dimensions. We control the number of data points to be 800 in each data set.
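\par For illustration, a compact generator following the diagram might look like the sketch below. We stress that every conditional probability table in it is an invented placeholder (the tables actually used are summarized in the Appendix), and the dependence of the last two columns on their parents is modelled by a crude score.
\begin{verbatim}
import numpy as np

def simulate_set(n, p_emp, seed=0):
    # p_emp = (0.3,0.3,0.4) for set A, (0.4,0.4,0.3) for set B;
    # all conditional tables below are illustrative placeholders
    rng = np.random.default_rng(seed)
    pick = lambda tbl, idx: np.array([rng.choice(len(tbl[i]), p=tbl[i])
                                      for i in idx])
    area = rng.choice(3, n, p=[0.5, 0.3, 0.2])                 # independent
    browser = rng.choice(5, n, p=[0.4, 0.25, 0.15, 0.1, 0.1])  # independent
    emp = rng.choice(3, n, p=p_emp)
    age = pick([[0.6, 0.3, 0.1, 0.0, 0.0],        # student
                [0.0, 0.3, 0.3, 0.2, 0.2],        # employed
                [0.1, 0.2, 0.2, 0.2, 0.3]], emp)  # unemployed
    gender = pick([[0.5, 0.5], [0.55, 0.45], [0.45, 0.55]], emp)
    income = pick([[0.8, 0.15, 0.05, 0.0],
                   [0.2, 0.4, 0.3, 0.1],
                   [0.7, 0.2, 0.08, 0.02]], emp)
    score = np.minimum(age + gender + income, 3)  # crude "4"-edge dependence
    visits = pick([[0.7, 0.2, 0.08, 0.02], [0.4, 0.3, 0.2, 0.1],
                   [0.2, 0.3, 0.3, 0.2], [0.1, 0.2, 0.3, 0.4]], score)
    conv = pick([[0.9, 0.1], [0.7, 0.3], [0.5, 0.5], [0.3, 0.7]],
                np.minimum(income + visits, 3))   # crude "5"-edge dependence
    return np.column_stack([area, browser, gender, age, emp, income,
                            visits, conv])

A = simulate_set(800, [0.3, 0.3, 0.4], seed=1)
B = simulate_set(800, [0.4, 0.4, 0.3], seed=2)
\end{verbatim}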
\par The data is simulated 100 times. The power for the propensity method, DISCO method, and randomized chi-square test is 0.93, 0.91, and 0.87, respectively. All three methods can detect the multi-dimensional distribution heterogeneity with high power. Again there is a trade-off: the randomized chi-square method has comparatively lower power but can point out the most imbalanced columns.
\begin{table}[H]
\begin{center}
\begin{tabular}{ |c|c|| }
\hline
Column names & Number of times being rejected \\ \hline
Area of living & 28 \\ \hline
Browser type & 23 \\ \hline
Employment status &163 \\ \hline
gender & 43 \\ \hline
age & 85 \\ \hline
income & 44 \\ \hline
Accumulated number of visits & 50 \\ \hline
Converted or not & 43\\ \hline
\end{tabular}
\caption{the number of times each column gets rejected in 100 simulations with 10 column sampling times per simulation}
\label{table2}
\end{center}
\end{table}
Table \ref{table2} displays the number of times each column gets rejected in 200 simulations with $D=10$ (sampling columns 10 times per simulation) and $C=3$ (sampling a maximum of 3 columns each time). We see that Area of living and Browser type are rejected significantly less often because they are balanced and independent. Employment status is rejected most often since it is directly affected by the timing of the day, with the remaining 5 columns having milder imbalance depending on their correlation with Employment status.
\section{Anonymous real data}
\label{realdata}
We also try our tests on some auto-personalization datasets provided by Adobe Digital Marketing Cloud. Auto-personalization is personalized recommendation to individual customers learned from past customer behavior in the database. Usually a random A/B split is needed to serve as a baseline to evaluate any recommendation methodology. We obtain such a pair of randomly-split data sets, and test the propensity method, DISCO method, and randomized chi-square test on a data set from the online marketing section of a company. The data is split into sets A and B, each of dimension $5055 \times 383$, with 2 to 50 categories per column. For privacy reasons, we do not get to know the encodings of the columns or the detailed data collection procedure. However, the propensity score method with resampling, the DISCO method, and
randomized chi-square tests all reject the hypothesis that the two
distributions are the same. Randomized chi-square also provides some combinations of columns that are the most imbalanced, one of which is columns 196, 57, 248, 260, 271, 342, 374, and 239. These 8 columns have 50 different combinations (for example, if one column can take on 2 values and another 3 values, then the two columns together can take on at most $2 \times 3 = 6$ values). Figure \ref{plot5} shows a bar plot of the 15 combinations that have the most counts from both data sets, and the ratio of counts in set B to those in set A. We see the two categorical distributions do differ a great deal, with the counts ratio being as high as 3. If we get more information on these two sets we can do further analysis on these selected columns to identify the source of heterogeneity.
\begin{figure}[H]
\includegraphics[scale=0.45]{plot5.eps}
\caption{comparison of a subset of columns from two categorical data sets, the red bars representing counts from set A, the green bars counts from set B, and the purple line the ratio of counts in set B to counts in set A}
\label{plot5}
\end{figure}
\section{Discussion}
\label{discussion}
In summary, A/B testing has a wide range of applications in online marketing and other areas, so ensuring that the test provides valid results is crucial. This paper formulates the validity of A/B testing results as a hypothesis testing problem in a multi-dimensional space. We propose three ways to test whether two or more data sets come from the same distribution, each with its own advantages. The propensity score and DISCO methods both generalize to continuous data, whereas the randomized chi-square test applies only to categorical data. Of course, the simplest remedy is to categorize the continuous data. In fact, even when the data are one-dimensional normal with the same mean but different variances, both the propensity and DISCO methods have very low power, but categorizing each data point $x$ as $\lfloor 2x \rfloor$ and applying the randomized chi-square test yields high power.
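For instance, the categorization trick can be sketched as follows (two normal samples with equal means and unequal variances; tail categories beyond the fixed bins are truncated for simplicity):
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 800)          # set A: N(0, 1)
y = rng.normal(0, 2, 800)          # set B: same mean, larger variance

bins = np.arange(-15, 16)          # categories for floor(2x)
ca = np.histogram(np.floor(2 * x), bins=bins)[0]
cb = np.histogram(np.floor(2 * y), bins=bins)[0]
keep = (ca + cb) > 0               # drop empty categories
print(chi2_contingency(np.vstack([ca[keep], cb[keep]]))[1])  # small p-value
\end{verbatim}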
\par Besides testing the equivalence of distributions after the experiments have been carried out, we also hope to ensure that online experiments are correctly carried out from the beginning. This is a meaningful and promising area of research that involves multivariate analysis, multiple testing, feature selection, observational data, and sequential analysis. Active research is ongoing in parts of this broad area, including the papers mentioned in Section 2. We also need to work closely with data engineers to implement these methodologies effectively.
\section{Supporting information}
\label{SI}
\subsection{Theoretical model}
Here we derive a model describing the vibrational resonance amplification of a weak signal in a thermo-optic, waveguide-coupled optical mode. We consider a single optical mode with complex amplitude $a$. We write $\omega_c$, $\kappa_i$, $\kappa_e$, $s_i$ and $\omega_L$ for, respectively, the mode frequency, its intrinsic loss rate, its external loss rate, the laser amplitude and the laser frequency.
The system can be modelled within the coupled-mode theory approach:
\begin{equation}
\label{eq1}
\dot{a}(t) = \Big(j(\omega_L-\omega_c) - \frac{\kappa_t}{2}\Big)a(t) + \sqrt{\kappa_e}\times s_i(t)
\end{equation}
where $\kappa_t=\kappa_i+\kappa_e$ is the optical mode total resonance linewidth.
We introduce the thermo-optic nonlinearity through the cavity temperature dynamics, with $\Delta T$ the temperature elevation in the optical cavity:
\begin{equation}
\label{eq2}
\dot{\Delta T} = \mathcal{K}_t\Big(\kappa_{th}|a|^2 - \Delta T\Big)
\end{equation}
with $\mathcal{K}_t$ the thermalization rate of the cavity and $\kappa_{th}$ the thermal resistance (in K/J).
The optical mode resonance wavelength is thermally shifted by an amount:
\begin{equation}
\label{eq3}
\Delta\lambda = \frac{\lambda_0}{n_0}\frac{d\text{n}}{dT} \Delta T
\end{equation}
with $\lambda_0 = 2\pi c/\omega_0$ the optical mode wavelength -- $\omega_0$ and $n_0$ being respectively the cavity natural frequency and refractive index, both taken at room temperature -- and $\frac{d\text{n}}{dT}$ the thermo-optic coefficient of the material.
\begin{table}
\begin{center}
\begin{tabular}{ p{5cm} p{2.8cm}}
\hline
Physical parameter & value\\
\hline
$\lambda_0$ : resonance wavelength & 1566.3 nm\\
$\kappa_i$ : internal loss rate & 28.2 GHz\\
$\kappa_e$ : external loss rate & 18.4 GHz\\
$n_0$ : refractive index & 3.16 \\
$\frac{d\text{n}}{dT}$ : thermo-optic coef. & $1.9298\cdot10^{-4} \text{K}^{-1}$\\
$\mathcal{K}_t$ : Thermalization rate & 125 kHz\\
$\kappa_{th}$ : thermal resistance & 1.62 $\text{K.fJ}^{-1}$
\end{tabular}
\end{center}
\caption{List of parameters used in the numerics.}
\label{AllVars}
\end{table}
In the numerical simulation, we integrate \cref{eq1,eq2,eq3} using the real and imaginary parts of $a$, and considering a temperature dependent resonance frequency for the optical mode:
\begin{equation}
\omega_c = \omega_0 (1 - \frac{1}{n_0}\frac{d\text{n}}{dT}\Delta T)
\end{equation}
The ODEs are integrated after time-normalization $t\longrightarrow \mathcal{K}_t t$, as $\mathcal{K}_t$ constitutes the cut-off frequency of the thermo-optic nonlinearity. The input field is modelled with a square modulation at $\wmod$ plus a high frequency sinusoidal signal at $\whf$. Both components have respective amplitudes given by the modulation depths, respectively $\amod$ and $\ahf$:
\begin{align}
s_i(t) = \sqrt{\pin}\Big[ 1 + e^{j\pi\big(-\frac{1}{2} + \amod f(\wmod t) + \ahf\cos(\whf t) \big)} \Big]
\end{align}
where $\pin$ is the laser input power and $f(t)$ is the square signal function,
\begin{align*}
f(t)=\frac{4}{\pi} \sum_{p=0}^N \frac{\sin\big((2p+1)t\big)}{2p+1}
\end{align*}
In the numerics we use $N=15$. All the constants are given in \cref{AllVars}.
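For illustration, a minimal Python sketch of the integration is given below. Since $\kappa_t \gg \mathcal{K}_t$, the sketch adiabatically eliminates the intracavity field and integrates only the temperature equation (the full simulation integrates \cref{eq1} directly); treating the tabulated rates as $2\pi$-scaled angular frequencies is our assumption.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c = 3e8
lam0 = 1566.3e-9
w0 = 2 * np.pi * c / lam0
ki, ke = 2 * np.pi * 28.2e9, 2 * np.pi * 18.4e9  # loss rates (assumed angular)
kt = ki + ke
n0, dndT = 3.16, 1.9298e-4
Kt = 2 * np.pi * 125e3                           # thermalization rate
kth = 1.62e15                                    # 1.62 K/fJ in K/J
Pin, a_mod, a_hf = 10e-3, 0.005, 0.0
w_mod, w_hf = 2 * np.pi * 500.0, 2 * np.pi * 10e3

def square(t, N=15):                             # truncated square wave
    p = np.arange(N + 1)
    return (4 / np.pi) * np.sum(np.sin((2 * p + 1) * t) / (2 * p + 1))

def s_in(t):                                     # modulated input field
    phase = np.pi * (-0.5 + a_mod * square(w_mod * t)
                     + a_hf * np.cos(w_hf * t))
    return np.sqrt(Pin) * (1 + np.exp(1j * phase))

def dT_dot(t, y, wL):
    dT = y[0]
    wc = w0 * (1 - dndT * dT / n0)
    # kappa_t >> K_t: the field follows the drive quasi-statically.
    a = np.sqrt(ke) * s_in(t) / (kt / 2 - 1j * (wL - wc))
    return [Kt * (kth * np.abs(a) ** 2 - dT)]

wL = 2 * np.pi * c / 1566.30e-9
sol = solve_ivp(dT_dot, (0, 20e-3), [0.0], args=(wL,), max_step=1e-6)
\end{verbatim}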
\subsection{Numerical simulations}
\begin{figure*}[htp]
\begin{center}
\includegraphics[scale=1]{simu_fig1.pdf}
\end{center}
{\phantomsubcaption\label{simuFig1:a}}
{\phantomsubcaption\label{simuFig1:b}}
\caption{a) We evaluate the mode energy $|a|^2$ while the laser wavelength $\lambda_L = 2\pi c/\omega_L$ is swept forward (red) and backward (black), using $\pin=10$ mW, giving rise to a $\sim1$ nm wide bistability (red area). $\amod=\ahf=0$. b) Residence probability as a function of the modulation depth for $\lambda_L=1566.30$ nm and $\wmod/2\pi=500$ Hz.}
\end{figure*}
In \cref{simuFig1:a} the mode intensity is numerically evaluated as a function of the laser wavelength under forward and backward scans (respectively red and black curves).
Tuning the input power $\pin$, the spectral span ($\sim1$ nm) of the bistability (red region) is adjusted to qualitatively match the experiment. Note that varying the power or the thermo-optic coefficient is equivalent in this model. We find the best agreement for $\pin=10$ mW.
Setting the laser wavelength to $\lambda_L=1566.3$ nm, we numerically solve the ODEs for increasing modulation depth $\amod$. The modulation frequency is set to $\wmod/2\pi=500$ Hz rather than the 10 Hz used in the experiments, in order to reduce the integration time. The time trace $|a|^2(t)$ is used to compute the residence probability of the optical mode (see \cref{simuFig1:b}).
Enabling the high-frequency modulation with $\whf/2\pi=10$ kHz, the amplification curve is obtained by evaluating the signal-to-noise ratio (SNR) of the weak low-frequency modulation in the mode response, as a function of the HF signal amplitude $\ahf$. The results are shown for two different values of $\amod$ in \cref{fig2}. Significant amplification is obtained for $\amod=0.005$ (black curve), but not for $\amod=10^{-5}$ (gray curve). The shape of the amplification curve as well as its magnitude of nearly 22 dB show good qualitative agreement with the experimental results.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=1]{simu_fig2.pdf}
\end{center}
\caption{Signal-to-noise ratio (SNR) plotted as a function of the high-frequency signal amplitude, $\ahf$, for $\amod=0.005$ (black) and $\amod=10^{-5}$ (gray). $\wmod/2\pi=500$ Hz and $\whf/2\pi=10$ kHz.}
\label{fig2}
\end{figure}
Note that the model here accounts for a single optical mode. The significant overlap between bistable optical modes that we observe experimentally is therefore not captured by this simple model.
\section{Scheduling and Resource Allocation Algorithm}
\label{s:sra}
\begin{algorithm}[t]
\noindent
\begin{algorithmic}[1]
\Require
\Statex The up-to-date $CSI_{u}$ and $BSR_{u}$ $\forall u \in U$
\While{$\sum_{u}^{U} BSR_{u} > 0$}
\State $MCS_{u}$ = \textbf{Rate-Adaptation}($CSI_{u}$) $\forall u \in U$
\If{OFDMA}
\State Compute $R_{u} = BSR_{u}/T_{MaxTime}$ $\forall u \in U$
\State Find $R_{RU(0)}, \dots, R_{RU(L)}$ given $MCS_{u}$ $\forall u \in U$
\State $U_{l} \gets U_{l} \cup \left \{ u \right \}$ for all $l \geq \operatorname*{argmin}_{l \in L} \left \| R_{RU(l)} - R_{u} \right \|$
\State $g_{opt} = \textbf{Divide-Conquer}(U, CSI, 0,0)$
\Else
\State $g_{opt} = \textbf{Greedy-Algorithm}(U, CSI,0,0)$
\EndIf
\While{empty $RU$ in $g_{opt}$}
\State Pop $U_{l}$ if empty $\forall l \in L$
\State $g_{opt} = \textbf{Divide-Conquer}(U, CSI,0,0)$
\EndWhile
\State Set $T_{s} = \max(\min_{k \in g}(T_{k}), T_{MinTime})$
\State Transmit trigger frame
\State Receive \text{scheduled-packet}
\State $BSR_{u} = BSR_{u} - \text{Scheduled-Packet } \forall u \in g_{opt}$
\EndWhile
\caption{Our SRA algorithm}
\label{alg:general}
\end{algorithmic}
\end{algorithm}
\parahead{Our algorithm.}
Algorithm~\ref{alg:general} summarizes our SRA algorithm:
\textbf{(1)} we first find a proper MCS index for each user based on the acquired\fshyp{}predicted
full\hyp{}bandwidth channel information.
To do so, we employ ESNR\hyp{}based
rate adaptation~\cite{halperin2010predictable};
\textbf{(2)} we then assign each user into one or more
user groups $U_{0}, \dots , U_{L}$
based on user's MCS index and BSR.
Here, the high-level idea is to increase the probability that users with low data rates are assigned to smaller RUs, while increasing the likelihood that users with high data rates are assigned to larger RUs.
Specifically, we first compute the data rate $R_{u}$ for each user $u$ by dividing the user's buffer length by the maximum packet duration $T_{MaxTime}$, which is $5.484$ ms~\cite{802.11ax-wc16}.
We then compare this data rate with the predefined PHY rates~\cite{802.11ax-wc16} at the selected MCS index.
If the user's data rate $R_{u}$ is less than the predefined PHY rate $R_{RU(l)}$ at level $l$ and the selected MCS index,
we assign user $u$ to user groups $U_{l}, U_{l+1}, \dots, U_{L}$;
\textbf{(3)} Given CSIs from all users and $L+1$ user groups, we run a divide\hyp{}conquer
(DQ) algorithm~\cite{wang2018scheduling}.
Here, users can be allocated to RUs whose level $l$ corresponds with its assigned user group.
For example, if user $u$ is assigned to user group $U_{2}$ and $U_{3}$ from step 2, then it can only be allocated to RUs with its size equivalent to level $2$ and $3$;
\textbf{(4)} if the scheduling result in Step~$3$ contains an RU with no assigned user,
we repeat from Step~$2$ to rearrange users in $L+1$ user groups;
\textbf{(5)} finally, we set the packet duration to the minimum duration among all scheduled users, lower-bounded by $T_{MinTime}$.
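For concreteness, a small Python sketch of the rate-based grouping in step (2) is given below; \texttt{ru\_rate} stands in for the predefined 802.11ax PHY-rate table, and the helper names are ours.
\begin{verbatim}
T_MAX = 5.484e-3                          # maximum packet duration (seconds)

def group_users(bsr, mcs, ru_rate, L):
    """bsr[u]: buffered bits; mcs[u]: selected MCS index; ru_rate[(l, m)]:
    PHY rate of an RU at level l under MCS m. Returns groups U_0..U_L."""
    groups = [set() for _ in range(L + 1)]
    for u, (b, m) in enumerate(zip(bsr, mcs)):
        r_u = b / T_MAX                   # required data rate R_u
        # Smallest RU level whose PHY rate reaches R_u (else the top level).
        l_min = next((l for l in range(L + 1) if ru_rate[(l, m)] >= r_u), L)
        for l in range(l_min, L + 1):
            groups[l].add(u)              # user joins U_l, ..., U_L
    return groups
\end{verbatim}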
\section{Primer: ML Background for CLCP{}}
The goal of CLCP{} is to correlate the occurrence of distinct wireless links given that they share \textit{some} views on the wireless environment.
Specifically, we treat each channel reading
like a photo of an environment taken at a particular \textit{viewpoint}, and
combine multiple different views to form a joint representation of the environment.
We then exploit
this representation to predict
unobserved wireless CSI readings.
To do so, we must accomplish two tasks:
\begin{enumerate}
\item From the observed channels, we must discard radio-specific information and extract a feature representation that conveys information on the dynamics like moving reflectors.
\item To synthesize unseen channels of a nearby radio, we need to integrate the extracted representation with radio-specific properties, including the signal paths and noises.
\end{enumerate}
However,
radio-specific and environment-specific information
superimpose in channel readings and thus are not easily separable.
We exploit representation learning
to capture a meaningful representation from
a raw observation.
An encoder network of the representation learning model accomplishes
the first task, and a decoder network of the model achieves the second task.
Before discussing the details of our CLCP{} design, we first
provide some background on representation learning.
\begin{figure*}[t]
\includegraphics[width=\linewidth]{figures/design4.png}
\caption{\emph{\textbf{System overview for uplink transmission:}}
(1) an AP receives uplink traffic from multiple users simultaneously.
(2) when channels become outdated, the AP extracts the path parameters of partial CSIs estimated from the latest OFDMA packet and predicts unobserved links using CLCP{} and unobserved bands using CBCP in a server; (3) Then, the AP schedules uplink traffic based on predicted CSIs and triggers the users. (4) Finally, the AP receives the scheduled packet.}
\label{fig:framework}
\end{figure*}
\parahead{Autoencoder.}
The \emph{autoencoder}
learns a lower\hyp{}dimensional representation $z$ that contains the information relevant for a given task.
Specifically, an \textit{encoder} deep neural network (DNN) compresses the input data $h$ from the initial
space to the encoded space, also known as the latent space $z$, and
a \textit{decoder} decompresses $z$ back to the data $\hat{h}$.
However, the autoencoder does not generalize well to new data and
is prone to severe overfitting.
\parahead{Variational Autoencoder (VAE).}
To enhance generalizability, the VAE \cite{kingma2013auto} integrates
non\hyp{}determinism with the foregoing autoencoder.
In a nutshell, the VAE's encoder compresses data $h$ into a
normal probability \emph{distribution} over $z$, rather than into discrete values.
It then samples a point from this distribution,
and its decoder decompresses the sampled point back into the original data $\hat{h}$.
Mathematically, we represent a VAE model with a DNN $\theta$ as
$p_{\theta}(h,z)= p_{\theta}(z)p_{\theta}(h|z)$
where $p_{\theta}(z)$ is a prior (\textit{i.e.} Gaussian distribution)
and $p_{\theta}(h|z)$ is a decoder.
The goal of the training is to find a distribution that best describes the data.
To do so, it uses the
\emph{evidence lower bound} (ELBO):
\begin{equation}
\mathrm{ELBO}
=\mathbb{E}_{q_{\phi}(z|h)}\left[\log\, p_{\theta}(h|z)\right]
- D_{\mathrm{KL}}(q_{\phi}(z|h)||p(z))
\label{eq:vae_term}
\end{equation}
where
$q_{\phi}(z|h)$ is the encoder.
Its loss function consists of two terms where
the first term of the ELBO is
the reconstruction error
and the second term is a regularization term.
This probabilistic approach has an advantage in predicting cross-link wireless channels
as it makes possible generalization beyond the training set~\cite{schonfeld2019generalized, klushyn2019increasing, bozkurt2019evaluating, bahuleyan2017variational}.
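For reference, a minimal PyTorch sketch of a VAE trained by minimizing the negated ELBO of \cref{eq:vae_term} (Gaussian prior and a unit-variance Gaussian likelihood, so the reconstruction term reduces to a squared error; the layer sizes are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, dim_h, dim_z):
        super().__init__()
        self.enc = nn.Linear(dim_h, 2 * dim_z)   # outputs (mu, log variance)
        self.dec = nn.Linear(dim_z, dim_h)

    def forward(self, h):
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        h_hat = self.dec(z)
        rec = F.mse_loss(h_hat, h, reduction="sum")              # -E[log p(h|z)]
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl                                          # -ELBO
\end{verbatim}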
\parahead{Multiview Representation Learning.}
\textit{Multiview or multimodal
representation learning}~\cite{wu2018multimodal, pu2016variational,
spurr2018cross, tsai2018learning}
has proven effective in
capturing the correlation relationships of
information that comes as different \emph{modalities} (distinct data types or data sources).
For instance, different sets of photos of faces, each set having been
taken at different angles, could each be considered different
modalities.
Such a model learns correlations
between different modalities
and represents them jointly,
such that the model can generate a (missing) instance of one modality
given the others.
Like VAE, multiview learning encodes the primary view into a low-dimensional feature that contains useful information about a scene and decodes this feature, which describes the scene, into the secondary view.
A more advanced form of multiview learning adopts multiple different views as input data and encodes them into a joint representation.
By analyzing multiple information sources simultaneously,
we present an opportunity to learn a better, comprehensive feature representation.
For example, past work~\cite{sun2018multi} uses multiview learning to synthesize an unseen image at an arbitrary angle given images of a scene taken at various angles.
Likewise, we treat each wireless link like a photo of a scene taken at a particular view-point.
We obtain the wireless link at many different view-points
and combine these views to form a joint representation of the channel environment.
We then exploit this joint representation to predict the wireless link at unobserved view-points.
\subsection{CLCP Model Design}
\label{s:design}
\begin{figure*}
\includegraphics[width=1\linewidth]{figures/modules_param8.pdf}
\caption{CLCP{} ML model with $N$ measured channels,
each represented as a set of wireless path parameters $\{\theta_{l}, d_{l}, a_{l}, \phi_{l}\}^{L}_{l=0}$
with $L$ paths estimated from measured channels.
Each set of the parameters are served by
a \textbf{Single-view Encoder} network $E_i$ ($i \in [1, N]$)
that compresses the measured wireless path information of its dedicated radio and
outputs variational parameters $\mu_{i}$ and $\sigma_{i}$.
The \textbf{Multi-view Combiner} integrates
all variational parameters into $\mu$ and $\sigma$, based on which
\textbf{Single-view Decoder}
networks $D_{K}$ generate a set of path parameters that are unobserved.
If any input channel is not observed, CLCP{} drops the respective encoder network ($E_{2}$, for example).}
\label{f:modules}
\end{figure*}
In the context of learning,
cross-band channel prediction \cite{R2F2-sigcomm16, bakshi2019fast} is a
markedly different problem than cross-link channel prediction.
In the former, uplink and downlink channels
share exactly the same paths in the wireless channel.
Therefore, the learning task is simply to map the (complicated) effect of
changing from one frequency band to another, given a fixed set of
wireless channel paths.
For cross-link channel prediction, on the other hand,
the channels of nearby radios have distinct paths, and the
learning task is to elucidate the correlations between the two links.
Since two different links share \textit{some}
views on the wireless environment as shown in \cref{f:feature_embedding},
our learning task is to first discard radio-specific information
from observed channels and extract features,
representing information about link dynamics like moving reflectors.
Then we integrate those extracted features with
radio-specific properties of the users, to synthesize unseen channels.
The first task is hard to accomplish because radio-specific
and environment-specific information in the channel superpose
in CSI readings and thus are not easily separable.
Therefore, we exploit representation learning to capture a
meaningful representation from CSI observations.
Our CLCP{} ML model is summarized in \cref{f:modules}:
there is a \emph{single-view encoder}
network $q_{\phi}(z|h)$ (\S\ref{s:sve}) dedicated to each single
\emph{view}
(\textit{i.e.}, channel $h$ of each radio).
Every encoder outputs the distribution parameters, $\mu$ and $\sigma$, and a
\emph{multi-view combiner} (\S\ref{s:mvc}) fuses all output parameters into a joint
low-dimensional representation $z$.
When a channel is not observed, we drop its respective encoder network
(\textit{e.g.} $E_{2}$ in \cref{f:modules}).
A decoder network $p_{\theta} (h|z)$ (\S\ref{s:svd}) serves each single
view of a \emph{target} radio whose CSI we seek to synthesize.
Each decoder samples a data point from the joint latent representation $z$ to
construct a \emph{cross-link predicted} channel.
A key challenge in designing CLCP{} is
that the channel inputs vary across prediction instances,
since we use the channels of existing OFDMA transmissions as input.
In \cref{f:modules}, the input channels are two RUs from a previously acquired OFDMA packet,
which were assigned to Sensor $1$ and Sensor $N$, respectively.
At the next prediction instance,
an OFDMA packet is likely to contain a different set of RUs, each assigned to a different radio.
This inconsistency makes the learning process highly complex.
We will address how to make CLCP{} robust against the observations that vary with respect to frequency (\S\ref{s:param}) and link (\S\ref{s:mvc}).
\subsubsection{Path Parameter Estimator}
\label{s:param}
CLCP{} aims to infer geometric transformations
between the physical paths traversed by different links.
We reduce the learning complexity of CLCP{} by extracting the geometric information (\textit{i.e.}, wireless path parameters) from raw CSIs and using it directly as input data.
More importantly, the path parameters are \textit{frequency-independent}.
Hence, using the path parameters
makes the model robust to observations with varying combinations of RUs.
Specifically, we represent channel $h$ observed at antenna $M_{i}$
as a defined number of paths $L$,
each described by an arrival angle \emph{$\theta_{l}$},
a time delay \emph{$d_{l}$}, an attenuation \emph{$a_{l}$}, and a reflection phase \emph{$\phi_{l}$} as follows:
\begin{equation}
h_{M_{i}, \lambda} = \sum_{l=0}^{L}\Big(a_{l}e^{\frac{-j2\pi d_{l}}{\lambda}+j\phi_{l}}\Big)e^{\frac{-j2\pi ik\cos(\theta_{l})}{\lambda}}
\label{eq:channel}
\end{equation}
where $\lambda$ and $k$ are wavelength and antenna distance.
To extract the 4-tuple of parameters $\{(\theta_{l},d_{l},a_{l},\phi_{l})\}^{L}_{l=0}$,
we use maximum likelihood estimation.
For simplicity, we now denote the 4-tuple as $\ddot{h}$.
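A small numpy sketch that evaluates \cref{eq:channel} from a given 4-tuple of path parameters (the antenna spacing $k$ and the example parameter values are illustrative):
\begin{verbatim}
import numpy as np

def channel_from_paths(theta, d, a, phi, lam, i, k=0.5):
    """h at antenna index i and wavelength lam from L path parameters."""
    gain = a * np.exp(-2j * np.pi * d / lam + 1j * phi)        # path gain
    steer = np.exp(-2j * np.pi * i * k * np.cos(theta) / lam)  # array response
    return np.sum(gain * steer)

# Example: a two-path channel over 64 subcarrier wavelengths.
lams = 3e8 / np.linspace(5.17e9, 5.33e9, 64)
h = np.array([channel_from_paths(np.array([0.4, 1.2]), np.array([6.0, 11.5]),
                                 np.array([1.0, 0.4]), np.array([0.0, 0.7]),
                                 lam, i=1) for lam in lams])
\end{verbatim}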
\subsubsection{\shortnameposs{} Single-View Encoder}
\label{s:sve}
The goal of each CLCP{} single-view encoder $q_\phi(z|\ddot{h})$ is to learn an efficient
compression of its corresponding view (\textit{i.e.} wireless path parameters) into a low-dimensional feature.
Like VAE, it encodes the corresponding channel into Gaussian distribution parameters, $\mu$ and $\sigma$, for better generalizability.
Each single\hyp{}view encoder consists of
a long short-term memory (LSTM) layer
followed by two stacked convolutional (CNN) layers and
fully connected (FC) layers.
In each layer of CNNs, 1D kernels are used
as the filters, followed by a batch norm layer that normalizes
the mean and variance of the input at each layer.
At last, we add a rectified linear unit (ReLU) to
non\hyp{}linearly embed the input into the latent space.
We learn all layer weights of the encoders and decoders end-to-end through backpropagation.
For the links that are not observed, we drop the respective encoder networks.
For example, in \cref{f:modules}, Radio~2 was not a part of OFDMA transmission when CLCP{} initiates prediction.
Therefore, CLCP{} simply drops the single-view encoder dedicated to Radio~2.
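A schematic PyTorch sketch of one single-view encoder follows; the layer widths are illustrative choices, not the exact CLCP{} architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class SingleViewEncoder(nn.Module):
    """LSTM -> two 1D conv layers -> FC head emitting (mu, log variance)."""
    def __init__(self, n_params=4, hidden=64, dim_z=32):
        super().__init__()
        self.lstm = nn.LSTM(n_params, hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm1d(hidden), nn.ReLU(),
        )
        self.fc = nn.Linear(hidden, 2 * dim_z)

    def forward(self, paths):                 # paths: (batch, L, 4)
        seq, _ = self.lstm(paths)             # (batch, L, hidden)
        feat = self.cnn(seq.transpose(1, 2))  # (batch, hidden, L)
        mu, logvar = self.fc(feat.mean(dim=-1)).chunk(2, dim=-1)
        return mu, logvar
\end{verbatim}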
\label{s:poe}
\subsubsection{\shortnameposs{} Multi-view Combiner}
\label{s:mvc}
A na\"{\i}ve approach to learn from varying multi-view inputs
is to have an encoder network for each combination of input.
However, this approach would significantly increase the number of
trainable parameters, making CLCP{} computationally intractable.
Using a multiview combiner, we instead assign one encoder network per view and
efficiently fuse the latent features of all $N$ encoders into a joint representation.
We model the multiview combiner after the \emph{product-of-experts} (PoE)
\cite{hinton2002training, wu2018multimodal} whose core idea is
to combine several probability distributions (\emph{experts}),
by multiplying their density functions, allowing
each expert to make decisions based on a few dimensions
instead of observing the full dimensionality.
PoE assumes that
$N$ inputs are conditionally independent given the latent feature $z$, a valid
assumption since the channels from different radios are conditionally independent due
to independently fading signal paths.
Let $q_{\phi}(z|\mathbf{\ddot{H}})$ denote the encoder network for a subset of input channels
$\mathbf{\ddot{H}} = \{\ddot{h}_{i} \mid \text{channel of the } i^{\mathrm{th}} \text{ radio}\}$.
Then with any combination of the
measured channels, we can write the joint posterior distribution as:
\begin{equation}
q_{\phi}(z|\mathbf{\ddot{H}})\propto p(z)\prod _{\ddot{h}_{n}\in \ddot{H}}\widetilde{q}(z|\ddot{h}_{n})
\label{eq:poe}
\end{equation}
where $p(z)$ is a Gaussian prior, and $\widetilde{q}(z|\ddot{h}_{n})$ is an encoder
network dedicated to the $n^{\mathrm{th}}$ radio.
\Cref{eq:poe} shows that we can approximate the distribution
for the joint posterior as a product of individual posteriors.
The foregoing conditional independence assumption allows factorization of
the variational model as follows:
\begin{equation}
p_{\theta}(\ddot{h}_{1},\dots,\ddot{h}_{N},z) = p(z)p_{\theta}(\ddot{h}_{1}|z)p_{\theta}(\ddot{h}_{2}|z)\dots p_{\theta}(\ddot{h}_{N}|z).
\end{equation}
where $p_{\theta}(\ddot{h}_{1},\dots,\ddot{h}_{N},z)$ is a CLCP{} model, and $\ddot{h}_{N}$ is the channel of Radio $N$.
With this factorization, we can
simply ignore unobserved links,
which we will later discuss in \cref{s:obj}.
Finally, we sample from the joint distribution parameters $\mu$ and $\sigma$ to obtain
the representations $z$ where $z=\mu+\sigma\odot\epsilon$ and $\epsilon\sim\mathcal{N}(0,\mathbf{I})$.
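Since a product of Gaussian experts is itself Gaussian with precision-weighted parameters, the combination admits a closed form; a PyTorch sketch:
\begin{verbatim}
import torch

def product_of_experts(mus, logvars):
    """Fuse per-view Gaussian posteriors (plus the N(0, I) prior) by
    multiplying densities; mus, logvars: lists of (batch, dim_z) tensors."""
    mus = [torch.zeros_like(mus[0])] + list(mus)               # prior expert
    logvars = [torch.zeros_like(logvars[0])] + list(logvars)
    precision = torch.stack([(-lv).exp() for lv in logvars])   # 1 / sigma^2
    mu = (torch.stack(mus) * precision).sum(0) / precision.sum(0)
    var = 1.0 / precision.sum(0)
    return mu, var.log()
\end{verbatim}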
\subsubsection{\shortnameposs{} Single-View Decoder}
\label{s:svd}
Our single-view decoder $p_\theta(\ddot{h}|z)$ is another DNN, parameterized by
$\theta$, whose input is the joint representation $z$.
The goal of each decoder is to synthesize an accurate channel estimate of its dedicated radio.
The decoder architecture is in the exact opposite order of the encoder architecture.
It consists of two-layer stacked CNNs and FCs followed by an LSTM.
In each CNN layer, 1D kernels are
used as the filters with a batch norm layer and ReLU for
the activation function.
The LSTM layer predicts the path parameters of a target radio,
which in turn we use to construct a full-band channel based on \cref{eq:channel}.
In practice, estimating path parameters is
likely to cause some loss of information.
To compensate for this loss,
the constructed channel is fed to an extra neural network layer,
a \emph{resolution booster}, to generate the final channel estimate $h_{pred}$.
\subsection{Objective function and training algorithm}
\label{s:obj}
Recall the objective function of the VAE in \cref{eq:vae_term}.
With $\mathbf{\ddot{H}} = \{\ddot{h}_{i} \mid \text{channel of the } i^{\mathrm{th}} \text{ radio}\}$,
our ELBO is redefined as:
\begin{equation}
\begin{aligned}
\mathrm{ELBO}
= \mathbb{E}_{q_{\phi}(z|\mathbf{\ddot{H}})}\left[\log\, p_{\theta}(\ddot{h}|z)\right]
-\beta D_{\mathrm{KL}}(q_{\phi}(z|\mathbf{\ddot{H}})||p(z))
\end{aligned}
\end{equation}
where $\beta$ is a weight for balancing the coefficient in the ELBO.
Like VAE, the second term
represents the
regularization loss ($Loss_{reg}$) that makes the approximate joint posterior
distribution close to the prior $p(z)$, which is Gaussian.
For the reconstruction error, we compute the mean squared error between the predicted and ground-truth channels as follows:
$Loss_{mse, csi} = \frac{1}{S}\sum_{s=1}^{S}
\left \|h_{s,gt}- h_{s,pred} \right \|^{2}_{2}$
where $S$ is the number of subcarriers and $h_{gt}$ is the ground-truth CSI.
Besides the CSI prediction loss, we compute the intermediate path parameter loss,
which is a mean squared error between the predicted and ground-truth path parameters.
However, some paths are stronger than others when superimposed.
Hence, we weight the error of each path by its amplitude $a_{l}$ as follows:
$Loss_{mse, param} = \sum_{l=0}^{L} a_{l}
\left \|\ddot{h}_{l,gt}- \ddot{h}_{l,pred} \right \|^{2}_{2}$.
Then, the first term of our $\mathrm{ELBO}$ becomes $Loss_{mse}=-(\alpha Loss_{mse,csi}+\eta Loss_{mse,param})$ where $\alpha$ and $\eta$ are weight terms.
Finally, negating $\mathrm{ELBO}$ defines
our loss function: $\boldsymbol{Loss}_{\mathrm{clcp}} = - \mathrm{ELBO}$.
By minimizing $\boldsymbol{Loss}_{\mathrm{clcp}}$,
we are maximizing the lower bound of the probability of generating the accurate channels.
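Putting the three terms together, a sketch of the resulting loss (continuing the PyTorch snippets above; the weights are hyperparameters):
\begin{verbatim}
import torch

def clcp_loss(h_pred, h_gt, p_pred, p_gt, amp, mu, logvar,
              alpha=1.0, eta=1.0, beta=1.0):
    """-ELBO: CSI error + amplitude-weighted path-parameter error + KL."""
    loss_csi = ((h_gt - h_pred).abs() ** 2).mean(dim=-1).sum()
    loss_param = (amp * ((p_gt - p_pred) ** 2).sum(dim=-1)).sum()
    loss_reg = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return alpha * loss_csi + eta * loss_param + beta * loss_reg
\end{verbatim}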
\begin{figure}
\includegraphics[width=.95\linewidth]{figures/training2.png}
\caption{Multi-step training paradigm.}
\label{f:training}
\end{figure}
\parahead{Multi-stage training paradigm.}
To accommodate varying channel inputs,
CLCP{} employs a special multi-step training paradigm~\cite{wu2018multimodal}.
If we train all encoder networks together,
the model is incapable of generating accurate predictions
when some links are unobserved at test time.
On the other hand, if we train each encoder network individually,
we fail to capture the relationships across different links.
Therefore, we train CLCP{} in multiple steps as shown in \cref{f:training}.
First, our loss function consists of three ELBO terms:
(1) one from feeding all $N$ full-band channel observations,
(2) the sum of $N$ ELBO terms from feeding each full-band channel one at a time, and
(3) the sum of $k$ ELBO terms from feeding $k$ randomly chosen subsets of the full-band channels, $H_{k}$.
We then back\hyp{}propagate the sum of the three to train all networks end-to-end.
Lastly, (4) we repeat this three-step training procedure
with random subsets of subcarriers to mimic the channels observed in actual OFDMA transmissions.
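A schematic sketch of one such training step; \texttt{model.elbo\_loss} is an assumed interface returning the negated ELBO for a subset of views, not an existing library call.
\begin{verbatim}
import random
import torch

def multistep_update(views, model, optimizer, k=2):
    losses = [model.elbo_loss(views)]                    # (1) all views
    losses += [model.elbo_loss([v]) for v in views]      # (2) one view at a time
    for _ in range(k):                                   # (3) k random subsets
        subset = random.sample(views, random.randint(1, len(views)))
        losses.append(model.elbo_loss(subset))
    loss = torch.stack(losses).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}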
\begin{figure}
\centering
\centering
\includegraphics[width=.9\linewidth]{figures/latent2.pdf}
\caption{\textbf{\emph{CLCP{} explainability:}} A fully trained latent space example with 2D t-SNE visualization. The estimated path parameters, ToF for y-axis and AoA of x-axis. Encoded CSI instances of Radio 1 are highlighted in red with empty pointer and Radio 2 are colored in blue with a filled pointer.}
\label{f:latent}
\end{figure}
\subsection{Cross-Link Joint Representation}
In the latent space, the closer two encoded low-dimensional features are to each other,
the more relevant they are for a given task.
Assume that we encoded the channels of two radios into our low-dimensional features.
If these features are closely located in the latent space,
it is likely that the two nearby links have been affected by the
same moving reflectors, simultaneously.
By visualizing the low\hyp{}dimensional features in the latent space
with the path parameters,
we attempt to get some insight on the learning mechanism of CLCP{}.
For visibility, we reduce the dimensionality of the latent space to two
by applying the t-SNE~\cite{maaten2008visualizing} dimensionality reduction technique.
Specifically, we collected the channel instances of two radios for $3$ hours and
randomly selected these channels at $250$ different time-stamps.
Then we fed the selected channels into the CLCP{} encoders.
Each encoder output $\mu$ is represented by a color\hyp{}coded data point where
red and blue data point denote a low-dimensional feature of Radio 1's channel and Radio 2's channel, respectively.
\cref{f:latent} provides an in-depth analysis on the fully trained latent embedding via two wireless
path parameters, Time-of-Flight and Angle-of-Arrival.
For each radio, low-dimensional features
are closely located when their corresponding path parameters are similar.
For example, Radio 1's two low-dimensional features on right are in close proximity,
and their corresponding path parameters resemble each other.
More importantly, we can also observe a pattern across different radios.
Although the two radios have distinct path parameter values,
the number of strong reflectors in Radio 1's spectrum and in Radio 2's spectrum is similar when their low-dimensional features are close.
For instance, the upper-left spectra show many reflectors for both Radio 1 and 2, while the bottom-left spectra show only one for both.
These observations demonstrate that the model is capable of properly encoding the wireless channels and
distributing the encoded features based on their relevance, which depends on the movement of reflectors.
Also, when unseen channels are fed to the model, it can still locate the encoded features in the latent space and generalize well from prior instances.
\section{Conclusion}
\label{s:conclusion}
This paper presents the first study to explore cross-link channel prediction
for the scheduling and resource allocation algorithm in the context of 11ax.
Our results show that CLCP{}
provides a $2\times$ to $3\times$ throughput gain over baseline and
a $30\%$ to $40\%$ throughput gain over R2F2 and OptML
in a 144-link testbed.
To our knowledge, this is the first paper to apply a
deep learning-based model to predict channels across links.
\section{Evaluation}
\label{s:eval}
We begin by presenting the methodology for our experimental evaluation
(\S\ref{s:methodology}), followed by
the end\hyp{}to\hyp{}end performance comparison on throughput and power consumption (\S\ref{s:e2e_perf}).
Lastly, we present a microbenchmark on prediction accuracy,
channel capacity, packet error rate, and PHY\hyp{}layer bit rates (\S\ref{s:microbenchmarks}).
\subsection{Experimental methodology}
\label{s:methodology}
\parahead{Use cases.}
We evaluate CLCP{} in two use case scenarios: a cashierless store and a smart warehouse.
\textbf{Cashierless stores} typically experience high data-traffic demand,
as densely deployed video cameras continuously stream data to the AP for product and customer monitoring.
To reflect a realistic cashierless store application,
we configure all users to continuously deliver standard-quality $1080$p video over UDP for the trace-driven simulation.
Also, we leverage $80$ and $160$ MHz bandwidth for every uplink OFDMA packet.
In \textbf{smart warehouses}, IoT devices transmit relatively little data traffic and are deployed more widely and sparsely than in the cashierless store use case.
Hence, each uplink packet has a $20$ or $40$ MHz bandwidth, and the users transmit UDP data in NLoS settings.
\parahead{Evaluation metrics.}
To quantify network performance, we define uplink throughput as
the total number of data bits delivered to the AP divided by the duration,
measured every $500$ ms.
Moreover, to evaluate the power consumption,
we report a total number of Target Wake Time (TWT) packets.
By definition, TWT is 11ax's power-saving mechanism where the client devices sleep between AP beacons,
waking up only when they need to transmit the signal (\textit{e.g.,} uplink data transmission and channel report).
When a client is neither scheduled for uplink transmission nor reporting CSI to the AP,
the AP does not trigger a TWT packet for that client.
By doing so, it effectively increases device sleep time and helps to conserve the battery of IoT devices.
\parahead{Baselines.}
Our baselines follow
sounding protocols
in which the AP periodically requests BSRs and CSIs from all users.
Upon receiving the NDP from the AP, all users calculate the feedback matrix for each OFDM subcarrier, incurring a per-user CSI feedback cost of~\cite{8672643}:
\begin{equation}
\frac{\mathrm{CSI\;tones} \times \mathrm{CSI\;bits} \times \mathrm{Tx Antenna} \times
\mathrm{Rx Antenna} \times T_{c}}{\mathrm{Subcarrier\;Group} \times \mathrm{Feedback\;Period}}
\end{equation}
where $T_{c}$ signifies the wireless channel coherence time.
We use $8$-bit CSI quantization, a channel coherence time of $15$ ms, and subcarrier grouping of $4$.
The other control frames we consider are the BSR report ($32$ bytes), BSR poll ($21$ bytes), CSI poll ($21$ bytes), MU-RTS ($20$ bytes), CTS ($14$ bytes), TF ($28+(5 \times K)$ bytes), and BlockAck\fshyp{}BA ($22+(5 \times K)$ bytes), where $K$ denotes the number of users. Lastly, a SIFS takes $10\,\mu$s. We note that BSRs and CSIs are delivered to the AP via OFDMA transmission to minimize the overhead.
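Evaluated literally (and with $T_c$ equal to the feedback period, so the expression reduces to the per-report feedback size in bits), the baseline feedback cost per user is, e.g.:
\begin{verbatim}
def csi_feedback_bits(tones=2048, bits=8, tx=4, rx=4, group=4,
                      t_c=15e-3, period=15e-3):
    """Literal evaluation of the expression above."""
    return tones * bits * tx * rx * t_c / (group * period)

per_report = csi_feedback_bits()        # 65536 bits = 8 KB per user
rate_bps = per_report / 15e-3           # roughly 4.4 Mbit/s per user
\end{verbatim}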
\begin{figure}
\centering
\begin{subfigure}[b]{.72\linewidth}
\includegraphics[width=\linewidth]{figures/vars_dynamic2.pdf}
\end{subfigure}
\caption{Channel variability. We indicate our channel data as a red line and idle channels as a dotted blue line.}
\label{f:variance}
\end{figure}
\begin{figure*}[t!]
\begin{subfigure}[b]{0.375\linewidth}
\includegraphics[width=\linewidth]{figures/throughput4.pdf}
\caption{Throughput performance across time.}
\label{f:eval:throughput_time}
\end{subfigure}
\begin{subfigure}[b]{0.375\linewidth}
\includegraphics[width=\linewidth]{figures/throughput_user3.pdf}
\caption{Throughput performance across users.}
\label{f:eval:throughput_user}
\end{subfigure}
\begin{subfigure}[b]{0.2\linewidth}
\includegraphics[width=\linewidth]{figures/sleeptime4.pdf}
\caption{Power consumption.}
\label{f:eval:power}
\end{subfigure}
\caption{End-to-end performance on throughput and power consumption: (a) aggregated throughput across time for every $500$ ms, (b) throughput across users for $20$, $40$, $80$, and $160$ MHz bandwidth, and (c) device sleep time over the entire transmission duration and the total number of Target Wake Time (TWT) triggered on every user.}
\end{figure*}
\parahead{Algorithms.} We compare CLCP{} to
the following algorithms
which collectively represent the state-of-the-art in channel prediction:
\begin{enumerate}[label=\arabic*)]
\item
R2F2~\cite{R2F2-sigcomm16} infers downlink CSI for a certain
LTE mobile\hyp{}base station link based on the path
parameters of uplink CSI readings for that \emph{same} link,
using an optimization method in lieu of an ML\hyp{}based approach.
\item OptML~\cite{bakshi2019fast} leverages a neural network
to estimate path parameters, which, in turn, are used to
predict across frequency bands.
\end{enumerate}
Both algorithms predict downlink based on uplink channels in LTE scenarios, where
the frequencies for downlink and uplink are different.
To adopt these algorithms into our OFDMA scheduling problem instead,
we use them to predict a full-band channel
based on the RU in a received OFDMA packet.
We use a maximum likelihood approach for fast path parameter estimation.
For example, for $160$ MHz bandwidth,
the AP triggers all clients to simultaneously transmit pilot signals in their $242$-subcarrier RUs.
Then the AP predicts the $2048$ subcarriers of the full band channel based on the received RUs.
For CLCP{}, we group IoT devices based on their proximity (3 to 5 m) and create one
CLCP{} prediction module per group.
This is because wireless links that are far apart or separated by a wall have an extremely low correlation~\cite{charvat2010through}.
However, since CLCP{} uses the latest OFDMA packet to make predictions,
some groups might not have any of their users assigned to that OFDMA packet,
in which case prediction is not possible.
For these groups, we trigger uplink OFDMA packets and run cross-bandwidth channel prediction as in R2F2 and OptML.
\parahead{Channel variability.}
We present
the inherent variability of our channel environment.
\cref{f:variance} demonstrates the
variability of idle channels without human mobility in a dotted blue line and
that of our channel environment affected by multiple moving reflectors in a red line.
Both channel environments are measured from all users in NLoS settings.
Specifically, we collect a series of channel readings over time and segment the readings into one-second windows.
For every segment and subcarrier, we measure the power variance of the channel over one second.
We then generate the corresponding variance distribution, covering all segments and subcarriers of each link,
and average the distributions across all links.
From \cref{f:variance}, we observe that power variance of our channel data is $\sim30$ dB higher than that of idle channel data.
This indicates that our links are not idle, and there is environment variability due to moving reflectors.
\subsection{End-to-end performance}
\label{s:e2e_perf}
In this section, we evaluate the end-to-end throughput performance of CLCP{} in comparison with the baseline, R2F2, and OptML across time and user. Then we demonstrate its performance on the power consumption.
\parahead{Significant throughput improvement.}
\Cref{f:eval:throughput_time} summarizes the end-to-end throughput performance of CLCP{} under $20$, $40$, $80$, and $160$ MHz bandwidth channels. Each point of the curves indicates the aggregated uplink throughput within a $500$ ms window.
With $20$ MHz bandwidth, CLCP{} improves throughput by a factor of $3.2$ over the baseline and by a factor of $1.4$ over R2F2 and OptML.
Similarly, CLCP{} provides a $1.9\times$ to $2\times$ throughput improvement over the baseline for $40$, $80$, and $160$ MHz channels, along with a $1.3\times$ improvement over R2F2 and OptML.
Throughput improvements under $20$ MHz bandwidth are larger than under wider bandwidths because delivering channel feedback overwhelms the network when there are many users and little bandwidth.
Hence, by eliminating the need to exchange channel feedback,
CLCP{} significantly improves spectral efficiency for smaller bandwidths.
Moreover, the maximum number of users allowed in a $20$ MHz OFDMA packet is $9$, while $40$ MHz, $80$ MHz, and $160$ MHz allow $18$, $37$, and $74$ users per OFDMA packet, respectively.
Hence, using OFDMA to deliver channel feedback over a small bandwidth is less effective than over a large bandwidth.
Even with larger bandwidths, CLCP{} outperforms the baseline.
Moreover, CLCP{} provides better throughput performance than two cross-band prediction algorithms, R2F2 and OptML.
While existing cross-band prediction algorithms require a pilot signal dedicated to channel sounding from all users,
CLCP{} exploits the channel estimates obtained from existing transmissions and thus completely eliminates the need for extra signal transmissions and corresponding control beacons.
In \Cref{f:eval:throughput_user}, we present the end-to-end throughput performance across users. Here, each point indicates the throughput of one user within a $10$ second window of uplink traffic.
It is worth noting that as the bandwidth increases, more users have an opportunity to send their data.
Specifically, for $20$ MHz, only $20\%$ to $40\%$ of users send the data while for $160$ MHz, more than $50\%$ to $70\%$ of users communicate with the AP.
More importantly, we observe that for all bandwidths,
CLCP{} enables $15\%$ to $20\%$ more users to deliver their data within the $10$ second window by eliminating channel sounding and increasing spectral efficiency.
\begin{figure*}[t!]
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figures/accuracy.pdf}
\caption{Prediction accuracy.}
\label{f:accuracy}
\end{subfigure}
\begin{subfigure}[b]{0.245\linewidth}
\includegraphics[width=\linewidth]{figures/impact_missing.pdf}
\caption{Impact of input number}
\label{f:missing}
\end{subfigure}
\begin{subfigure}[b]{0.20\linewidth}
\includegraphics[width=1\linewidth]{figures/runtime.pdf}
\caption{CLCP runtime}
\label{f:runtime}
\end{subfigure}
\begin{subfigure}[b]{0.3\linewidth}
\includegraphics[width=\linewidth]{figures/overhead.pdf}
\caption{Overhead reduction}
\label{f:overhead}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figures/sum_rate.pdf}
\caption{Channel capacity.}
\label{f:sum_rate}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figures/per_comb.pdf}
\caption{PER distribution.}
\label{f:per}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=1\linewidth]{figures/phyrate.pdf}
\caption{PHY rate distribution.}
\label{f:phyrate}
\end{subfigure}
\begin{subfigure}[b]{0.25\linewidth}
\includegraphics[width=1\linewidth]{figures/ber.pdf}
\caption{BER distribution.}
\label{f:ber}
\end{subfigure}
\caption{Microbenchmark on the prediction accuracy, overhead reduction, and scheduling performance.
}
\end{figure*}
\parahead{Increasing device sleep time.}
Target wake time (TWT) reduces power consumption by letting users sleep and
wake up only when they need to transmit their packets.
Users can thereby skip multiple beacons without disassociating from the network.
The subfigure on top of \Cref{f:eval:power} shows the average sleep time of all users over the entire measurement duration.
While users sleep slightly over $25\%$ of the time under the baseline algorithm, CLCP{} lets users remain asleep $90\%$ of the time, roughly $65\%$ and $15\%$ longer than under the baseline and the cross-band prediction algorithms, respectively.
We note that the average sleep time of the cross-band algorithms is much longer than the baseline's.
This is because each user has to send at least $260$ bytes of channel information under the baseline,
while R2F2 and OptML transmit only a pilot signal with some control overhead.
CLCP{}, however, needs neither bulky channel feedback nor pilot signals, since the AP infers CSI directly from existing transmissions.
This also helps minimize contention between users and reduces the time a user in power-save mode must stay awake.
The subfigure at the bottom of \cref{f:eval:power} shows how many TWT packets are received by each user while $150$ MB of data are delivered to the AP.
This is equivalent to how frequently each user wakes up from sleep mode to participate in channel sounding and data transmission.
Here, CLCP{} has significantly fewer TWT counts because users stay idle during channel acquisition,
while the baseline, R2F2, and OptML wake all users and make them transmit a signal.
Given that the average wake power is $600\,\mu$W and the transmit power is $135$ mW per device, we infer that the power consumption of CLCP{} is significantly smaller than that of the baseline, R2F2, and OptML.
\subsection{Microbenchmark}
\label{s:microbenchmarks}
The first microbenchmark focuses on CLCP{}'s prediction performance
across different users and varying numbers of observed channels.
We then analyze overhead reduction with varying parameters,
such as CSI quantization, feedback period, and number of users.
Lastly, we evaluate the performance of
rate selection and multiuser detection using predicted CSIs.
Microbenchmark results are obtained under settings in \cref{f:testbed:testbed1}.
\subsubsection{Prediction Accuracy}
As a measure of prediction accuracy, we use \emph{error vector magnitude} (EVM),
which represents how far a predicted channel $H$ deviates
from a ground truth channel $H_{\mathrm{gt}}$:
$\mathrm{EVM} = 10 \log_{10}\left(|H - H_{\mathrm{gt}}|^2 / |H_{\mathrm{gt}}|^2\right)$.
According to the IEEE 802.11 specification~\cite{8672643, ieee2010ieee}, the BPSK modulation scheme requires an EVM between $-5$ and $-10$ dB, and QPSK requires an EVM between $-10$ and $-13$ dB.
In \cref{f:accuracy}, CLCP{} provides an average EVM of approximately $-8$ dB.
Compared to the LoS setting, the NLoS setting shows a larger variation of EVM across users. This is because many wireless links in the NLoS setting are weak due to multiple wall blockages and long signal propagation distances. Such weak signals have low signal-to-noise ratio, so the effect of noise is large, causing high variation in prediction accuracy.
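In code, the metric aggregated over subcarriers is simply:
\begin{verbatim}
import numpy as np

def evm_db(h_pred, h_gt):
    """EVM in dB between predicted and ground-truth channel vectors."""
    err = np.sum(np.abs(h_pred - h_gt) ** 2)
    return 10 * np.log10(err / np.sum(np.abs(h_gt) ** 2))
\end{verbatim}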
\noindent
\textbf{Impact of the number of observed channels.}
We evaluate prediction accuracy with varying number of observed channels.
\Cref{f:missing} shows that there is a significant improvement in prediction performance
when the number of input users is more than two.
Increasing the number of input users further does not greatly improve CLCP{}'s prediction accuracy.
This result indicates that CLCP{} predicts channels correctly even when many channels are unobserved.
\subsubsection{Overhead Reduction}
We first evaluate runtime distribution of CLCP{} in \Cref{f:runtime}.
Specifically, CLCP{} achieves an inference time of only about $4$ ms,
consistent with that of other VAE-based models~\cite{NEURIPS2020_e3844e18}.
Next, we present the overhead reduction with varying parameters in \Cref{f:overhead}.
We define the overhead as the percentage of CSI transmission time over the total traffic time.
A short feedback period, a larger number of users, and a greater number of subcarriers all result in larger CSI overhead in the absence of CLCP{}, making CLCP{} all the more effective.
In densely deployed scenarios, CLCP{} notably reduces the overhead.
\Cref{f:overhead} (\emph{right}) shows that with $400$ users, CLCP{} can free up more than $40\%$ overhead.
\subsubsection{Channel Capacity.}
In \cref{f:sum_rate}, we evaluate channel capacity of OFDMA packets that are scheduled using predicted channels.
We define channel capacity as a sum of achieved rates at each subcarrier $s$, that is
$R_{\mathrm{capacity}}(RU_{i}) = \sum_{s\in RU_{i}} R_{\mathrm{capacity}}(s)$, where $RU_{i}$ is the RU at the $i$-th location.
Then, we define
capacity of a complete user schedule $g$ as:
\begin{equation}
\begin{aligned}
\sum_{j}R_{\mathrm{capacity}}(p_{j},u_{j}) =
\sum_{j}\sum_{s\in p_{j}}\sum_{u\in u_{j}}\log_{2}(1+P_{u,s})
\end{aligned}
\label{channel_capacity}
\end{equation}
where $P_{u,s}$ denotes the transmit power for user $u$ and subcarrier $s$.
In \cref{f:sum_rate}, the channel capacity of packets scheduled with predicted CSIs is almost identical to that with ground-truth CSIs.
These results demonstrate that our predicted channels are accurate enough for OFDMA scheduling.
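A sketch of the capacity computation in \cref{channel_capacity}; reading $P_{u,s}$ as the per-user, per-subcarrier SNR implied by the CSI is our interpretation.
\begin{verbatim}
import numpy as np

def schedule_capacity(schedule, snr):
    """schedule: list of (subcarrier_set, user_set) pairs; snr[u][s]:
    SNR of user u on subcarrier s."""
    return sum(np.log2(1 + snr[u][s])
               for subcarriers, users in schedule
               for u in users for s in subcarriers)
\end{verbatim}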
\subsubsection{PER Distribution}
The PER distributions of packets scheduled using predicted CSIs are shown in \cref{f:per}.
Even when the PER is high, packets scheduled with ground-truth CSIs and predicted CSIs share similar PER distributions.
We conclude that even under poor channel conditions, CLCP{} still provides accurate channel predictions.
\subsubsection{PHY Rate Distribution}
11ax allows each RU to have its own MCS index, which is calculated based on its channel condition.
Rate adaptation therefore requires accurate channel estimates.
In \cref{f:phyrate}, we present PHY rate distributions
calculated using an effective SNR (ESNR)-based rate adaptation~\cite{halperin2010predictable}.
This algorithm leverages channel estimates to find a proper MCS index.
The results show that PHY rate distributions of both ground-truth channel and predicted channel are highly similar.
\subsubsection{Multiuser Detection}
In 11ax,
multi-user detection algorithms are used to separate uplink streams from multiple users.
The challenge is that for uplink MU-MIMO, it is crucial not only to select a subset of
users with low spatial channel correlation,
but also to determine an appropriate decoding precedence.
To evaluate both aspects, we employ several multiuser detection algorithms, such as zero-forcing (ZF) and minimum mean squared error (MMSE), that are integrated with a successive interference cancellation (SIC) technique as well as the most optimal maximum-likelihood (ML) decoder.
\Cref{f:ber} shows the bit-error rate (BER) of packets scheduled with ground-truth CSIs and those scheduled with predicted CSIs.
We decode these packets using ZF-SIC, MMSE-SIC, or ML detection across different SNR values.
We observe that the BER of packets from predicted CSIs is slightly higher than that of packets from ground-truth CSIs for the ML decoder when SNR ranges from $10$ to $16$ dB.
On the other hand, the BERs with the ZF-SIC and MMSE-SIC decoders show no difference between predicted and ground-truth CSIs.
This indicates that CLCP{}'s predictions are accurate enough for the ZF-SIC and MMSE-SIC decoders.
\section{Implementation}
\label{s:impl}
We conduct an experimental study on cross-link channel prediction in a large indoor lab (\cref{f:testbed:testbed2}) and in an entire floor (\cref{f:testbed:testbed1}) for the cashierless store and the smart warehouse scenario, respectively.
Typical cashierless stores consist of cameras and smartphones that demand large amounts of traffic; hence, in \cref{f:testbed:testbed2}, we collect channel traces from high-bandwidth 802.11ax commodity radios.
Specifically, the three receivers highlighted in red are Asus RT-AX86U APs supporting 802.11ax, 4x4 MIMO operation, and $160$-MHz bandwidth (i.e., $2048$ subcarriers per spatial stream) at $5$ GHz.
The transmitting nodes, shown in \cref{f:testbed:hardware}, include several Asus RT-AX82U and Asus ZenWifi XT8 routers (each one with four antennas), as well as some smartphones, like the Samsung A52S (single-antenna) and the Xiaomi Mi 11 5G (with two antennas each).
While the bandwidth of the 11ax Asus routers is $160$ MHz,
the smartphones' radios can only handle up to $80$ MHz bandwidth.
In total, we identify $144$ separate links (here, we are counting each spatial stream as a separate link).
To extract the CSI from commodity 11ax devices, we used the \href{https://ans.unibs.it/projects/ax-csi/}{AX-CSI extraction tool}~\cite{axcsi21}.
Since IoT devices in smart warehouses generally demand less data traffic,
we collect traces with $20$ and $40$ MHz bandwidth CSI for the scenario in \cref{f:testbed:testbed1}.
Both the AP and transmitting nodes are 11n WPJ558 with the Atheros QCA9558 chipset and three antennas.
Moreover, the nodes are placed in $95$ locations in NLoS settings, and we extract traces using \href{https://wands.sg/research/wifi/AtherosCSI/}{Atheros CSI Tool}.
All routers and phones together are generating traffic constantly using \texttt{iperf}.
For both testbeds, people moved at a constant walking speed of one to two meters per second.
Since commodity 11ax devices do not allow OFDMA scheduling on the user side,
we run a trace-driven simulation using a software-defined simulator.
We implement CLCP{} using \href{https://pytorch.org/}{Pytorch}, and the model parameters include batch size of $16$ and learning rate of $5e^{-6}$.
We employ Adam for adaptive learning rate optimization algorithm.
\parahead{Channel measurement error.}
The presence of noise in the dataset may significantly affect convergence during training.
Hence, we want to minimize several notable channel errors.
First, the packet boundary delay occurs during OFDM symbol boundary estimation.
Assuming this time shift follows a Gaussian distribution~\cite{speth1999optimum, xie2018precise}, averaging the phases of multiple CSIs within the channel coherence time compensates for this error.
Thus, the AP transmits three sequential pilot signals, and upon reception, the clients average the CSI phase of these signals and report an error-compensated CSI back to the AP.
Second, to compensate for the amplitude offset due to the power control uncertainty error, we leverage the Received Signal Strength Indicator (RSSI), reported alongside CSI in the feedback. We detect the RSSI outliers
over a sequence of packets and discard the associated packet.
Then, we average the amplitude of the channel estimates.
Lastly, non-synchronized local oscillators cause some carrier frequency offset.
To minimize this error, we subtract the phase constant of the first receiving antenna across all receiving antennas.
Since phase constant subtraction does not alter a relative phase change across antennas and subcarriers, we preserve the signal path information in CSI.
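A numpy sketch of this cleaning pipeline, under our reading of the three steps (phases are averaged as unit phasors to avoid wrap-around; the outlier rule is a placeholder):
\begin{verbatim}
import numpy as np

def clean_csi(csi, rssi, z=3.0):
    """csi: (packets, antennas, subcarriers) complex; rssi: (packets,)."""
    keep = np.abs(rssi - rssi.mean()) <= z * rssi.std()  # drop RSSI outliers
    csi = csi[keep]
    amp = np.abs(csi).mean(axis=0)                       # average amplitude
    phase = np.angle((csi / np.abs(csi)).mean(axis=0))   # average phase
    phase -= phase[0:1, :]                               # remove antenna-0 offset
    return amp * np.exp(1j * phase)
\end{verbatim}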
\section{Introduction}
\label{s:intro}
Today's wireless IoT sensor networks are changing, scaling up in spectral
efficiency, radio count, and traffic volume as never seen before.
There are many compelling examples:~sensors in smart agriculture,
warehouses, and smart\hyp{}city contexts
collect and transmit massive amounts of aggregate data, around the clock.
Networks of video cameras (\emph{e.g.}, for surveillance and in cashierless stores)
demand large amounts of uplink traffic in a more spatially\hyp{}concentrated pattern:
large retailers worldwide have recently introduced cashierless
stores that facilitate purchases via
hundreds of cameras streaming video to an edge server nearby,
inferring the items the customer has placed into their basket as well
as tabulating each customer's total when they leave the store.
And in multiple rooms of the home, smart cameras, speakers,
windows, and kitchen appliances stream their data continuously.
\begin{figure}
\begin{subfigure}[b]{0.485\linewidth}
\includegraphics[width=.93\linewidth]{figures/case1_4.png}
\caption{Cross-band channel pred.}
\label{f:cbcp}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.485\linewidth}
\includegraphics[width=.93\linewidth]{figures/case2_2.png}
\caption{Cross-link channel pred.}
\label{f:clcp}
\end{subfigure}
\caption{\textbf{\textit{Left:}} Previous work~\cite{R2F2-sigcomm16, bakshi2019fast} on cross-band channel prediction infers a
downlink channel at frequency $f_{2}$ using the uplink channel
at frequency $f_{1}$ on the \emph{same} link. \textbf{\textit{Right:}}
CLCP{} infers the channel to Sensor~2 using channel measurements from Sensor~1.}
\label{fig:intro_clcp}
\end{figure}
This sampling of the newest Internet of Things (IoT)
applications highlights unprecedented demand for massive IoT device scale,
together with ever\hyp{}increasing data rates.
Sending and receiving data to these
devices benefits from advanced
techniques such as Massive Multi-User MIMO (MU-MIMO) and
OFDMA-based channel allocation.
The 802.11ax \cite{802.11ax-wc16} Wi-Fi standard, also known as \emph{Wi-Fi 6},
uses both these techniques for efficient transmission
of large numbers of small frames, a good fit for IoT applications.
In particular, OFDMA divides the frequency bandwidth into multiple subchannels,
allowing simultaneous multi-user transmission.
While such techniques
achieve high spectral efficiency, they face a key challenge: they
require estimates of channel state information (CSI), a process that hampers overall spectral efficiency. Measuring and propagating CSI to neighbors, in fact, scales with the product of the number of users, frequency bandwidth, antenna count, and frequency of measurement.
Highly\hyp{}dynamic, busy environments with human and vehicle
mobility further exacerbate these challenges, necessitating
more frequent CSI measurement. With densely deployed IoT devices, the overhead
of collecting CSI from all devices may thus deplete available radio resources~\cite{xie2013adaptive}.
While compressing CSI feedback
\cite{xie2013adaptive, cspy-mobicom13,bejarano2014mute} and\fshyp{}or
leveraging channel reciprocity for implicit channel sounding \cite{R2F2-sigcomm16,bakshi2019fast,guerra2016opportunistic}
reduces CSI overhead to some degree,
users still need to exchange compressed CSI
with the Access Point (AP) \cite{xie2013adaptive}, and implicit sounding
relies on extremely
regular traffic patterns \cite{R2F2-sigcomm16, bakshi2019fast}, and so
with increasing numbers of clients, AP antennas, and OFDM subcarriers,
CSI overhead remains a significant burden.
\vspace*{2.00ex minus 0.25ex}\noindent{}In this paper, we take a qualitatively different
approach, inspired by the relative
regularity of IoT sensor traffic and the fact that a single wireless environment
is the determinant of nearby sensors' wireless channels.
While conventional wisdom holds that
the channels of the nodes that are at least half a wavelength
apart are independent due to link\hyp{}specific signal propagation
paths \cite{tse-viswanath}, with enough background
data and measurements of a wireless environment, we find that
it is possible to predict the CSI of a link that
has not been recently observed.
\cref{fig:intro_clcp} illustrates our high-level idea:
unlike previous works~\cite{R2F2-sigcomm16, bakshi2019fast} that
use CSI measurements at frequency $f_{1}$ to infer CSI at $f_{2}$ for a
single link, our approach exploits the cross-correlation between
different links' wireless channels to leverage traffic on one sensor's link
in order to predict the wireless channel of another.
We propose the \emph{Cross-Link Channel Prediction} (CLCP{}) method,
a wireless channel prediction technique that uses multiview representation
machine learning to realize this vision.
We provide a head-to-head performance evaluation of our
approach against the OptML \cite{bakshi2019fast} and R2F2 \cite{R2F2-sigcomm16}
cross-band channel prediction methods in \Cref{s:eval}.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figures/motivation5.png}
\caption{\emph{\textbf{CLCP's mechanism:}} Time of
flight and angle of arrival channel parameters for
two nearby IoT sensors (\textit{\textbf{upper}} and
\textit{\textbf{lower}}, respectively). While each sensor (link) has a distinct set of
static wireless paths, their parameters both indicate reflections
off the same moving object, highlighted in red dotted circles.
}
\label{fig:passive_localization}
\end{figure}
To support our idea, we measure the wireless channels
from two nearby sensors in the presence of a
moving human.
\Cref{fig:passive_localization} visualizes these channels using two wireless path parameters,
Time-of-Flight (ToF) and Angle-of-Arrival (AoA).
While Sensor~1's channel (upper pictures)
is independent from Sensor~2's channel (lower pictures), ToF and AoA
from both reflect the same moving body in the environment,
indicated by the dotted red circles, while
other major paths remain unchanged.
This suggests the existence of a function that correlates the wireless channels of stationary sensors in the presence of moving reflectors,
as shown in \cref{f:feature_embedding}.
A CLCP{} AP can hence use uplink channels estimated from the nodes in the last transmission
to predict a large number of unobserved wireless links.
In this way, the aggregated overhead no longer scales with the number of radios.
Finally, using the acquired CSIs, the AP schedules
the uplink traffic.
Multiview learning for wireless channels faces several challenges,
which CLCP's design addresses.
First,
traffic patterns are not perfectly regular, so the set of channels observed in the latest transmission changes from one prediction to the next.
Hence, we cannot assume a fixed set of observed channels as the model input.
Dynamic input data has been a big obstacle to multiview learning~\cite{zhu2020new, yang2018multi, wu2018multimodal}
because it often leads to an explosion in the number of trainable parameters,
making the learning process intractable.
Secondly, we often treat deep learning models as ``black boxes'' whose inner workings cannot be interpreted.
This is a critical issue because designers
cannot differentiate whether the trained model truly works or is simply overfitting.
We summarize our key design points as follows:
\begin{enumerate}[label=\arabic*)]
\item \textbf{Low-overhead.}
Since the feedback overhead no longer scales with the number of radios,
CLCP{} incurs much lower overhead compared to prior work, when the number of wireless nodes is large. Hence, it improves the overall channel efficiency.
\item \textbf{Opportunistic.}
Unlike conventional approaches, CLCP{} does not need dedicated channel sounding or extra pilot signals for channel prediction.
Instead, it exploits channel estimates obtained from existing transmissions on other links.
\item \textbf{Low-power.}
802.11ax adopts a special power-saving mechanism
in which the AP configures the timings of uplink transmissions
to increase the durations of nodes' sleep intervals.
By eliminating the need for channel sounding,
CLCP{} minimizes the frequency of wake-up and thus further reduces the power consumption.
\item \textbf{Interpretable.}
Using CLCP{}, we visualize a fully trained feature representation and interpret it using the wireless path-parameters, ToF and AoA.
This allows network operators to understand CLCP{}'s learning mechanism and further helps in modifying the system to match their needs.
\end{enumerate}
Our implementation and
experimental evaluation using 802.11ax validate the effectiveness of
our system through microbenchmarks
as well as end-to-end performance.
End\hyp{}to\hyp{}end performance results show that CLCP{}
provides a $2\times$ throughput gain over baseline 802.11ax and
a $30\%$ throughput gain over both R2F2 and OptML
in a 144-link testbed.
\section{Design}
\label{s:overview}
Our system operates in the sequence of steps shown in \Cref{fig:framework}:
first, an AP acquires \textit{buffer status report} (BSR) and \textit{channel
state information} (CSI) from all clients.
Then it schedules an uplink OFDMA packet based on obtained BSRs and CSIs and triggers the uplink transmission.
When the acquired CSIs become outdated,
the AP observes a set of channels from the latest received OFDMA packet and
extracts the wireless path parameters from each channel.
Then, the AP uses the path parameters to predict (1) the remaining bands of the observed channels using \textit{cross-band channel prediction} (CBCP) and (2) the full-band channel information of unobserved links using \textit{cross-link channel prediction} (CLCP{}).
Lastly, based on predicted CSIs, the AP runs a scheduling and resource allocation (SRA) algorithm and
sends a trigger frame (TF) to initiate uplink transmission.
The AP repeats the procedure whenever CSI readings are outdated.
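The control flow can be summarized by the following illustrative Python pseudocode; every helper name here (\texttt{sound\_all}, \texttt{extract\_path\_parameters}, and so on) is an assumed placeholder for exposition, not CLCP{}'s actual API.
\begin{verbatim}
def ap_main_loop(model, clients, csi_ttl):
    csi, bsr = sound_all(clients)              # one-time initial acquisition
    while any(bsr.values()):
        if csi_is_outdated(csi, csi_ttl):
            # (1) observe the RUs of the latest uplink OFDMA packet
            observed = channel_estimates(last_ofdma_packet())
            # (2) CBCP: missing bands of observed links;
            #     CLCP: full-band channels of unobserved links
            params = extract_path_parameters(observed)
            csi = model.predict(params)
        # (3) schedule RUs / MU-MIMO groups, announce via trigger frame
        schedule = sra(csi, bsr)
        send_trigger_frame(schedule)
        # (4) scheduled users transmit; update buffer status
        bsr = receive_uplink(schedule)
\end{verbatim}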
\parahead{(1) Opportunistic Channel Observation.}
In OFDMA, the entire bandwidth is divided into multiple subchannels with
each subchannel termed a resource unit (RU).
The AP assigns RUs to individual users,
which allows one OFDMA packet to contain channel estimates from multiple users.
We want to leverage channel information in already existing OFDMA transmissions to predict channels of a large number of users.
In Fig.~\ref{fig:framework},
users 3, 4, and 6 simultaneously transmit uplink signals in their dedicated RUs,
which together span the full band.
Once acquired CSIs time out, the AP estimates three subchannels from the received OFDMA packet
and uses them to predict not only the remaining subcarriers of observed links
but also the full-band channel of unobserved users (\textit{i.e.,} users 1, 2, and 5).
This way, we completely eliminate the need for channel sounding.
\parahead{(2) Channel Prediction.}
When CSIs become outdated, the AP estimates the channels from the most recently received packet and
directly routes them to a backend server through an
Ethernet connection.
At the server side, the path parameters are extracted from partially observed channel estimates.
These path parameters are then fed to CLCP{} for predicting all CSIs.
We further elaborate on \shortnameposs{} design in \cref{s:design}.
\parahead{(3) Scheduling and Resource Allocation (SRA).}
Lastly, the AP schedules the upcoming OFDMA transmission using predicted CSIs and
an 11ax-oriented scheduling and resource allocation (SRA) algorithm.
We note that OFDMA scheduling requires
a full-bandwidth channel estimate to allocate a valid combination of RUs of varying subcarrier sizes~\cite{wang-infocom17} and to find a proper modulation and coding
scheme (MCS) index for each assigned RU.
Moreover, unlike 11n and 11ac, 11ax
provides support for uplink MU-MIMO,
which requires CSI to find an optimal set of
users with low spatial channel correlation and an appropriate decoding precedence.
After computing the user-specific parameters required for uplink transmission,
the AP encloses them in a \textit{trigger frame} (TF) and
broadcasts it as illustrated in Fig.~\ref{fig:framework}.
\parahead{(4) Uplink data transmission.} After receiving the TF, the corresponding users transmit data according to the TF.
\section{Related Work}
\label{s:related}
Work related to CLCP factors into \emph{(i)} work that shares
some ML techniques with CLCP but which targets other objectives;
and \emph{(ii)} work on predicting \emph{average} wireless
channel strength and the wireless
channel of a single given link, at different frequencies.
We discuss each in turn
in this section.
\parahead{Deep Probabilistic Networks for Wireless Signals.}
EI~\cite{jiang2018towards} leverages adversarial networks to classify motion information embedded in the wireless signal. It uses a probabilistic learning model to
extract environment- and subject-independent features shared by the data collected in different environments.
RF\hyp{}EATS~\cite{ha2020food} leverages a probabilistic learning framework that adapts variational inference networks to sense food and liquids in closed containers with back-scattered RF signals as input. Like EI, RF\hyp{}EATS builds a model generalized to unseen environments.
Similarly, CLCP{} captures the common information (\textit{e.g.} dynamics in the environment) shared among different wireless links using a deep probabilistic model.
However, our task is more complicated than removing either user-specific or environment-specific information.
We not only decompose observed wireless channels into a representation conveying environment-specific information, but also integrate this representation with user-specific information to generate the raw wireless channel of a target ``unobserved'' user.
\parahead{Learning-based Channel Prediction.}
A growing body of work leverages various ML techniques for
the broad goals of radio resource management.
CSpy \cite{cspy-mobicom13} uses a Support Vector Machine (SVM)
to predict, on a single link, which channel
out of a set of channels has the strongest average
magnitude, but does not
venture into cross-link prediction
at a subcarrier\hyp{}level granularity, which modern
wireless networks
require in order to perform efficient OFDMA channel allocation
for a group of users.
Also, to realize compression-based channel sounding for
uplink-dominant massive-IoT networks,
it requires extremely regular and frequent traffic patterns for every user, which is impractical.
R2F2 \cite{R2F2-sigcomm16} infers downlink CSI for a certain
LTE mobile\hyp{}base station link based on the path
parameters of uplink CSI readings for that \emph{same} link,
using an optimization method in lieu of an ML\hyp{}based approach.
Similarly, \cite{bakshi2019fast} leverages a neural network
to estimate path parameters, which, in turn, are used to
predict across frequency bands.
However, in 802.11ax,
there is instead a different need and opportunity:
to predict \emph{different} links' channels
from those recent traffic has used, in order
to reduce channel estimation overhead; this is the opportunity CLCP{} targets.
\section{AX-SRA Algorithm}
\label{s:sra}
The problem of user grouping and resource allocation in 802.11ax
is highly challenging
as it combines both MU\hyp{}MIMO and OFDMA functionality.
Here, we solve the problem in the context of 802.11ax.
\begin{figure}[t]
\includegraphics[width=0.92\linewidth]{figures/ofdma_alloc.pdf}
\caption{Uplink trigger-based OFDMA with four user groups with an equal amount of data $D$, $20$~MHz bandwidth, $p_{j}=\left \{ RU(2,0),~RU(2,1),~RU(3,e),~RU(1,1) \right \}$ and $u_{j} = \left \{\left \{ 1 \right \}, \left \{ 2 \right \}, \left \{ 3 \right \}, \left \{ k \right \} \right \}$, and scheduling duration $T_{s}$. }
\label{f:protocol:ofdma_allocation}
\vspace{-4pt}
\end{figure}
\parahead{Problem Formulation.}
With up-to-date CSI and BSR of all users, we can formulate the scheduling problem as the following optimization task.
Each time unit, the scheduler allocates RUs to some clients in order to maximize
zero\hyp{}forcing beamforming (ZFBF) capacity, which we compute by
summing the achieved rates at each subcarrier $s$, that is
$R_{\mathrm{ZFBF}}(RU(l,i)) = \sum_{s\in RU(l,i)} R_{\mathrm{ZFBF}}(s)$. Then, the
ZFBF capacity of the complete user schedule $g$ is written as:
\begin{equation}
\vspace{-2pt}
\begin{aligned}
\sum_{j}R_{\mathrm{ZFBF}}(p_{j},u_{j}) =
\sum_{j}\sum_{s\in p_{j}}\sum_{u\in u_{j}}\log_{2}(1+P_{u,s})
\end{aligned}
\end{equation}
where $P_{u,s}$ denotes the transmit power for user $u$ and subcarrier $s$.
Finally, the optimization objective becomes:
\begin{equation}
\vspace{-2pt}
\begin{aligned}
&U =
\max_{g \in G} R_{\mathrm{ZFBF}}(g) \\
&s.t. ~0 \leq \sum_{j} c_{j,u} \leq 1,~1 \leq \sum_{u} c_{j,u} \leq \left \lfloor N_{T}/N_{R} \right \rfloor
\end{aligned}
\end{equation}
where $c_{j,u} \in \left \{ 0,1 \right \}$ indicates whether user or user group $u$ is allocated the $j$th RU, $G$ is the set of all possible user schedules, and $N_{T}$ and $N_{R}$ denote the number of transmitter and receiver antennas, respectively. The two constraints express that, unlike LTE, which enables the allocation of multiple resource blocks (RBs) to a single user, 802.11ax restricts each user to a single RU, and that the number of users allocated to an RU is between $1$ and
$\left \lfloor N_{T}/N_{R} \right \rfloor$.
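To make the objective concrete, a minimal sketch of evaluating $R_{\mathrm{ZFBF}}(g)$ for a candidate schedule could look as follows; the data layout (lists of subcarrier indices and user sets per RU) is an assumption made purely for illustration.
\begin{verbatim}
import numpy as np

def zfbf_capacity(schedule, P):
    # schedule : list of (p_j, u_j) pairs, where p_j lists the subcarrier
    #            indices of the j-th RU and u_j the users allocated to it
    # P        : P[u, s], per-user per-subcarrier term inside log2(1 + .)
    return sum(np.log2(1.0 + P[u, s])
               for p_j, u_j in schedule
               for s in p_j for u in u_j)
\end{verbatim}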
Since in 802.11ax
the transmission duration $T_{s}$ must
start and end at the same time for all the users in the same
OFDMA timeslot, users with insufficient
data append null data (\textit{i.e.} padding) to match the time duration.
For instance, assume the optimal user schedule
$g_{\mathrm{opt}}=\arg \max_{g \in G}R_{\mathrm{ZFBF}}(g)$ in
Fig.~\ref{f:protocol:ofdma_allocation}. According to BSRs, the users assigned to RUs have
an equal amount of data to send simultaneously. Due to variable RU sizes (two $52$-tone RUs,
one $26$-tone RU, and one $106$-tone RU), the user groups $u_{j}$ assigned to RUs of
different levels $l$ have different transmission times $T_{j}$. However, in
order to end transmission of the users at the same time, the users with insufficient data
transmit null data
which significantly degrades capacity and energy efficiency.
Dividing the scheduled packet into multiple smaller packets based on the minimum
transmission duration among the users (\textit{e.g.} $T_{k}$ in
Fig.~\ref{f:protocol:ofdma_allocation}) can minimize the padding, but
increasing the number of packets leads to high medium contention overhead.
Thus, to optimize the balance between MAC layer overhead (due to padding) and
medium contention overhead (due to a large number of packets), we consider
buffer status in the scheduling algorithm in addition to channel capacity.
\begin{algorithm}[t]
\noindent
\begin{algorithmic}[1]
\Require
\Statex The up-to-date $CSI_{u}$ and $BSR_{u}$ $\forall u \in U$
\While{$\sum_{u}^{U} BSR_{u} > 0$}
\State $MCS_{u}$ = \textbf{Rate-Adaptation}($CSI_{u}$) $\forall u \in U$
\If{OFDMA}
\State Compute $R_{u} = BSR_{u}/T_{MaxTime}$ $\forall u \in U$
\State Find $R_{RU(0)}, \dots, R_{RU(L)}$ given $MCS_{u}$ $\forall u \in U$
\State $U_{l} \leftarrow U_{l} \cup \left \{ u \right \}$ for all $l \geq \operatorname*{argmin}_{l \in L} \left \| R_{RU(l)} - R_{u} \right \| $
\State $g_{opt} = \textbf{Divide-Conquer}(U, CSI, 0,0)$
\Else
\State $g_{opt} = \textbf{User-Grouping}(U, CSI,0,0)$
\EndIf
\While{empty $RU$ in $g_{opt}$}
\State Pop $U_{l}$ if empty $\forall l \in L$
\State $g_{opt} = \textbf{Divide-Conquer}(U, CSI,0,0)$
\EndWhile
\State Set $T_{s} = \max(\min_{k \in g}(T_{k}), T_{MinTime})$
\State Transmit trigger frame
\State Receive \text{scheduled-packet}
\State $BSR_{u} = BSR_{u} - \text{Scheduled-Packet } \forall u \in g_{opt}$
\EndWhile
\caption{ax-SRA algorithm}
\label{alg:general}
\end{algorithmic}
\end{algorithm}
\parahead{ax-SRA algorithm.}
Algorithm~\ref{alg:general} summarizes ax-SRA, which
consists of five major steps:
\textbf{(1)} find a proper MCS index for each user based on the acquired\fshyp{}predicted
full\hyp{}bandwidth channel information, \textbf{(2)} assign the clients into $L+1$
user pool groups (\textit{e.g.} four groups for $20$~MHz
as shown in Fig.~\ref{f:protocol:ru}) according to
each user's MCS index and BSR, \textbf{(3)} execute a modified divide\hyp{}conquer
(DQ) algorithm with CSIs and user pools determined in step $2$,
\textbf{(4)} if the schedule result in Step~$3$ contains RU with no assigned user,
rearrange the users in the user pool groups from Step~$2$ and re\hyp{}execute
Step~$3$, and \textbf{(5)} enforce the packet duration to the minimum scheduling duration among all scheduled users.
For the first step, we conduct effective SNR (ESNR)\hyp{}based
rate adaptation~\cite{halperin2010predictable}
using a full-bandwidth channel. Specifically, we select an appropriate MCS index
by comparing the channel's ESNR against the pre\hyp{}determined threshold
that allows successful delivery of the packets.
This pre-determined threshold is set to the SNR value that guarantees successful packet delivery (\textit{i.e.} $100\%$ packet reception rate (PRR))
because according to our simulation results,
larger RUs achieve higher PRR than smaller RUs when the SNR is relatively high, for every MCS index,
in frequency-selective multipath environments.
As seen in Fig.~\ref{f:protocol:packet_delivery},
at SNR in which full-bandwidth RU achieves $100\%$ PRR, all smaller RUs attain PRR lower than $100\%$ but above $90\%$.
If the PRR gap between the full-bandwidth RU and the smallest RU is larger than $10\%$ due to a highly frequency-selective fading channel,
we increment the threshold to the value at which the gap nears $10\%$.
To minimize discrepancy between the buffer capacity of $j$th RU and data size of user $u_{j}$,
we create the user pool groups associated with the level of an RU, which is an indication of RU size (\textit{e.g.} recall Fig.~\ref{f:protocol:ru} depicting that an RU at level $L$ is a $26$-tone RU, and an RU at level $0$ is a $242$-tone RU for $20$~MHz).
When allocating resources, each RU selects the users\fshyp{}user groups within the user pool group associated with its corresponding level.
The purpose is to prevent assigning low data-rate users to large RUs, avoiding both the addition of padding and unnecessary MIMO execution for small data.
However, to increase user scheduling opportunities, we want to allow high data-rate users for small RUs.
To create these user pools, we need to assign the users into $L+1$ user pool groups $U_{0},\dots,U_{L}$ using BSR and selected MCS index.
Specifically, we first compute the data rate $R_{u}$ for each user $u$ by dividing the length of its data indicated in the $BSR$ by the maximum allowed duration of a packet $T_{MaxTime}$, which is $5.484$~ms~\cite{802.11ax-wc16}.
Then, we compare the calculated data-rate $R_{u}$ with predefined
PHY rates~\cite{802.11ax-wc16} supported by RUs of every level $R_{RU(0)},\dots,R_{RU(L)}$
at the selected MCS index.
Specifically, among all levels whose $R_{RU}$ is greater than $R_{u}$, we find the level $l$ whose $R_{RU}$ is closest to $R_{u}$. With this level $l$, we assign the user to all user pool groups whose level is greater than or equal to $l$.
For users $u$ with $R_{u}$ greater than $R_{RU(0)}$, we assign $u$ to all user pool groups.
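A compact sketch of this pool construction is given below; the container choices and names are illustrative assumptions, not our exact data structures.
\begin{verbatim}
T_MAX = 5.484e-3   # maximum packet duration in seconds [802.11ax]

def build_user_pools(bsr, mcs, phy_rate, n_levels):
    # bsr[u]         : amount of buffered data of user u
    # mcs[u]         : selected MCS index of user u
    # phy_rate[l][m] : PHY rate of a level-l RU at MCS m (level 0 = largest RU)
    pools = [set() for _ in range(n_levels)]
    for u, buffered in bsr.items():
        r_u = buffered / T_MAX                      # required data rate
        feasible = [l for l in range(n_levels)
                    if phy_rate[l][mcs[u]] >= r_u]  # levels that can carry r_u
        # matched level: feasible rate closest to r_u, i.e. the smallest
        # feasible RU; if even the largest RU is too slow, use every pool.
        l_match = max(feasible) if feasible else 0
        for l in range(l_match, n_levels):
            pools[l].add(u)
    return pools
\end{verbatim}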
\begin{figure}[t]
\vspace{-5pt}
\includegraphics[width=0.9\linewidth]{figures/mcs_white.pdf}
\caption{802.11ax packet delivery for each RU in 20MHz bandwidth, simulated with maximum delay of $390$ ns.}
\vspace{-4pt}
\label{f:protocol:packet_delivery}
\end{figure}
\begin{figure*}[h]
\begin{subfigure}[b]{0.265\linewidth}
\includegraphics[width=\linewidth]{figures/LOSRoom_small2.pdf}
\caption{LOS testbed.}
\label{f:testbed:loc}
\end{subfigure}
\begin{subfigure}[b]{0.55\linewidth}
\includegraphics[width=\linewidth]{figures/NLOSRoom_small2.pdf}
\caption{NLOS testbed.}
\label{f:testbed:room}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.12\linewidth}
\includegraphics[width=\linewidth]{figures/hardware.png}
\caption{Hardware.}
\label{f:testbed:hardware}
\end{subfigure}
\vspace{-5pt}
\caption{CLCP{} preliminary experimental testbed
floor plans (radios-$\bigcirc$, moving reflectors-$\triangle$) and radio hardware.}
\label{f:testbed}
\vspace{-5pt}
\end{figure*}
We then leverage the DQ algorithm~\cite{wang2018scheduling} (see Appendix for details) with three modifications.
To briefly explain this algorithm, given the RU abstraction shown in Fig.~\ref{f:protocol:ru}, $RU(l,i)$ splits into $RU(l+1,2i)$ and $RU(l+1,2i+1)$ in the divide step, with each RU at level $l+1$ treated as a subproblem that is independently solved.
Let the optimal user schedules on $RU(l+1,2i)$ and $RU(l+1,2i+1)$ be $g_{opt}(l+1,2i)$ and $g_{opt}(l+1,2i+1)$, the schedules that maximize the ZFBF sum rate. In the merge step, $g_{opt}(l+1,2i)$ and $g_{opt}(l+1,2i+1)$ merge into $g_m(l,i)$.
At $RU(l,i)$, the algorithm calculates the best schedule $g_{s}(l,i)$.
Then, the optimal schedule $g_{opt}(l,i)$ is the one that provides higher ZFBF capacity between $g_{s}(l,i)$ and $g_{m}(l,i)$.
Our first modification here is that $RU$ at level $l$ selects the users within its user pool group $U_{l}$.
Secondly, since 11ax restricts each user to one RU, ax-SRA removes a user $u$ selected in $g_{opt}(l,i)$ from $U_{l}$ so that $u$ is not considered for other RUs.
Lastly, since the original DQ algorithm does not consider the extra $RU(l,e)$ (see Fig.~\ref{f:protocol:ru}), we add the constraint that in the merge step at level $L-3$, $g_{opt}(L-2,2i)$, $g_{opt}(L,e)$, and $g_{opt}(L-2,2i+1)$ merge into $g_m(L-3,i)$.
For RUs larger than $106$ subcarriers, we use a greedy user selection algorithm for MU-MIMO.
Once the modified DQ algorithm finalizes $g_{opt}$, ax-SRA checks whether $g_{opt}$ contains an RU with no assigned user, which can occur when there are few user candidates, especially in user pool groups with lower levels. In that case, we rearrange the list of user pool groups by removing the empty groups and lowering the levels of the remaining ones. For example, if two groups out of four are removed, then $U_{2}$ and $U_{3}$ become $U_{0}$ and $U_{1}$. Then, we re-execute the DQ algorithm. This guarantees non-empty RUs by enforcing the use of larger bandwidth.
For each user $k \in \left \{ 1, 2, \dots, K \right \}$ at time $t$, the optimal transmission duration $T^{*}_{s}(t)$ that maximizes the total throughput is equal to $T_{min}(t)$ among all users.
Therefore we enforce $T_{s} = \max(\min_{k \in g_{opt}}(T_{k}), T_{MinTime})$ with the finalized $g_{opt}$.
This operation naturally prevents the addition of padding for schedules that assign high data-rate users to small RUs. Then the AP transmits the trigger frame, and
once the AP receives the scheduled packet, ax-SRA repeats itself until no more data is left in BSRs.
\subsection{Scheduling and Resource Allocation}
\label{s:appendix}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/sra_fig.png}
\caption{Divide-and-conquer scheduling and resource allocation.}
\label{f:sra}
\end{figure}
\begin{figure*}[t]
\begin{subfigure}[b]{0.285\linewidth}
\includegraphics[width=\linewidth]{figures/testbed2.png}
\caption{Testbed for cashierless store.}
\label{f:testbed:testbed2}
\end{subfigure}
\begin{subfigure}[b]{0.44\linewidth}
\includegraphics[width=\linewidth]{figures/testbed1_2.png}
\caption{Testbed for smart warehouse.}
\label{f:testbed:testbed1}
\end{subfigure}
\begin{subfigure}[b]{0.265\linewidth}
\includegraphics[width=\linewidth]{figures/equipment2.png}
\caption{Hardware devices.}
\label{f:testbed:hardware}
\end{subfigure}
\caption{CLCP{} preliminary experimental testbed floor plans
and radio hardware with different operating bandwidths.}
\label{f:testbed}
\end{figure*}
Our scheduling algorithm exploits both channel conditions and buffer status to compute an optimal user schedule.
In OFDMA, channel bandwidth is divided into RUs with various sizes from the smallest $26$ tones ($2$ MHz) up to $996$ tones ($77.8$~MHz). The size and the locations of the RUs are defined for $20$, $40$, $80$, and $160$~MHz channels.
Our goal is to select an optimal set of RUs: one that covers
the entire channel bandwidth, and maximizes the sum channel capacity.
At the same time, we must consider the buffer status of all users, such that devices that require a lot of data, like streaming video, can be assigned a large RU, while devices that require very little data can be assigned a small RU.
Scheduling is challenging as the size of the search space increases exponentially with the number of users and RU granularity.
To efficiently compute the optimal set of RUs, we reduce the search space by constraining the user assignment for each RU based on the user buffers and adopt a divide-and-conquer algorithm~\cite{wang2018scheduling} to quickly compute the optimal RU combination that maximizes channel capacity.
Given the RU abstraction shown in \Cref{f:sra}, we first search for a user that maximizes the channel capacity for the first two $26$-tone RUs. Since this RU size is small,
only one of the users with low buffer occupancy is chosen for each RU.
Then, we select the best user for the $52$-tone RU from a group of users with
moderate buffer occupancy and compare its channel capacity with the
sum of the two $26$-tone RUs' capacities.
This step repeats for the subsequent resource blocks, and the RU combination with higher capacity is selected and compared with larger RUs until the combination completes the full bandwidth.
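As a concrete sketch of this search, the following recursion mirrors the divide-and-conquer structure just described; the \texttt{capacity} callback stands in for the ZFBF computation, the extra-RU handling and one-RU-per-user bookkeeping of ax-SRA are omitted, and all names are illustrative.
\begin{verbatim}
def dq_schedule(l, i, pools, capacity):
    # Returns (schedule, rate) for RU(l, i).  capacity(l, i, u) is the
    # ZFBF capacity of candidate u (a user or user group) on that RU;
    # pools[l] is the user pool of level l (level 0 = full bandwidth).
    u_best, r_s = max(((u, capacity(l, i, u)) for u in pools[l]),
                      key=lambda t: t[1], default=(None, 0.0))
    g_s = [(l, i, u_best)]
    if l == len(pools) - 1:                     # leaf: smallest (26-tone) RU
        return g_s, r_s
    # divide: solve the two child RUs independently, then merge
    g0, r0 = dq_schedule(l + 1, 2 * i,     pools, capacity)
    g1, r1 = dq_schedule(l + 1, 2 * i + 1, pools, capacity)
    # keep whichever of "whole RU" vs "split RUs" has higher capacity
    return (g_s, r_s) if r_s >= r0 + r1 else (g0 + g1, r0 + r1)
\end{verbatim}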
Especially during the last three years one notices a significant further boost of interest in octonionic analysis both from mathematicians and from theoretical physicists, see for instance \cite{JRS,KO2018,KO2019,Kra2019-1,Nolder2018}.
In fact, many physicists currently believe that the octonions provide the adequate setting to describe the symmetries arising in a possible unified world theory combining the standard model of particle physics and aspects of supergravity. See also \cite{Kra2019-2} for the references therein.
\par\medskip\par
Already during the 1970s, but particularly in the first decade of this century, a lot of effort has been made to carry over fundamental tools from Clifford analysis to the non-associative octonionic setting.
Many analogues of important theorems from Clifford analysis could also be established in the non-associative setting, such as for instance a Cauchy integral formula or Taylor and Laurent series representations involving direct analogues of the Fueter polynomials, see for example \cite{Imaeda,Nono,XL2000,XL2001,XL2002,XZL}. Of course, one carefully has to put parentheses in order to take care of the non-associative nature.
Although some of these fundamental theorems formally look very similar to those in the associative Clifford algebra setting, Clifford analysis and octonionic analysis are two different function theories.
In \cite{KO2018,KO2019} the authors describe a number of substantial and structural differences between the set of Clifford monogenic functions from $\mathbb{R}^8 \to Cl_8 \cong \mathbb{R}^{128}$ and the set of octonionic monogenic functions from $\mathbb{O} \to \mathbb{O}$. This is not only reflected in the different mapping property, but also in the fact that unlike in the Clifford case, left octonionic monogenic functions do not form an octonionic right module anymore.
The fact that one cannot interchange the parentheses arbitrarily in a product of octonionic expressions does not permit carrying over a number of standard arguments from the Clifford analysis setting to the octonionic setting.
In this paper we depart from the octonionic Cauchy integral formula for left or right octonionic monogenic functions, taking special care of the non-associativity by bracketing the terms together in a particular way. First we derive a topological generalized version of this Cauchy integral formula involving the winding number of $7$-dimensional hypersurfaces in the sense of the Kronecker index. From the physical point of view this winding number represents the fourth Chern number of the $G_2$-principal bundles that arise in the application of a generalization of 't Hooft's ansatz to construct special solutions of generalized $G_2$-Yang-Mills gauge fields, see \cite{Burdik,GTBook}.
This homological version of Cauchy's integral formula is the starting point to introduce first the notion of the order of an isolated zero, or more generally, of an isolated $a$-point of a left (right) octonionic monogenic function. This notion of the order represents the topological mapping degree counting how often the image of a small sphere around zero (or around an arbitrary point $a$) wraps around zero (or $a$, respectively). An application of the transformation formula then leads to an explicit argument principle for isolated zeroes and $a$-points of octonionic monogenic functions. On the one hand this argument principle naturally relates the fundamental solution of the octonionic Cauchy-Riemann equation with the fourth Chern number of the $G_2$-principal bundles that are related to special solutions of the $G_2$-Yang-Mills equation from 't Hooft's ansatz. However, this topic will be investigated in detail in one of our follow-up papers.
On the other hand this argument principle allows us to establish a generalization of Rouch\'e's theorem using a classical homotopy argument.
In turn, this version of Rouch\'e's theorem enables us to prove that the limit function of a normally convergent sequence of octonionic monogenic functions that have no isolated $a$-points inside an octonionic domain either vanishes identically over the whole domain or it satisfies $\sum_{c \in C}{\rm ord}(f;c)=0$. Note that this statement is slightly weaker than the classical Hurwitz theorem, because in the higher dimensional cases the condition ${\rm ord}(f;c)=0$ does not immediately mean that $f(c)\neq 0$. It is a sufficient but not necessary condition for being zero-free. Anyway, this statement is also new for the associative Clifford analysis setting, of course one has to restrict oneself to paravector-valued functions when addressing this case.
A big goal and novelty of this paper consists in addressing also the context of non-isolated zeroes and $a$-points which lie on special simply-connected compact manifolds of dimension $k \in \{1,\ldots,6\}$. Instead of taking small spheres, the adequate geometric tool is the use of tubular domains that surround these zero or $a$-point varieties. This geometrical setting allows us to introduce the winding number of a surface wrapping around such a compact zero or $a$-point variety and gives a meaningful definition for the order of a zero variety of an octonionic monogenic function. We also manage to establish an argument principle for these classes of non-isolated zero varieties. These results are even new for the associative Clifford analysis setting and can also be applied to left and right monogenic paravector valued functions in $\mathbb{R}^{n+1}$ for general dimensions $n \in \mathbb{N}$.
To finish we would like to mention that the octonions also offer an alternative function theory of octonionic slice-regular functions, see for example \cite{GPzeroes,GP,JRS}. There are of course also connections between octonionic slice-regular functions and octonionic solutions of the generalized octonionic Cauchy-Riemann equations. In the slice-regular context one even gets explicit relations between poles and zeroes as well as a simpler classification of zeroes in a very general situation. In the slice-regular setting only isolated and spherical zeroes can appear and their multiplicity can simply be described in terms of a power exponent appearing in a factorization that makes use of the so-called slice-product. This is a very prosperous direction for developing further powerful function theoretical tools to address problems in the octonionic setting. Note that slice-regular functions are also connected with concrete physical applications, see for instance \cite{Burdik}, in particular in the construction of special solutions of 't Hooft's ansatz for $G_2$-Yang-Mills solutions.
However, in this paper we entirely restrict ourselves to solutions of the octonionic Cauchy-Riemann equation, but it is an interesting challenge to pay more attention to these topics in the framework of other octonionic generalized function theories.
\section{Basic notions of octonions}
The octonions form an eight-dimensional real non-associative normed division algebra over the real numbers. They serve as a comfortable number system to describe the symmetries in recent unifying physical models connecting the standard model of particle physics and supergravity, see \cite{Burdik,G}.
Following \cite{Baez,WarrenDSmith} and others, the octonions can be constructed by the usual Cayley-Dickson doubling process. The latter is initiated by taking two pairs of complex numbers $(a,b)$ and $(c,d)$ and forming an addition and multiplication operation by $$
(a,b)+(c,d) :=(a+c,b+d),\quad\quad (a,b)\cdot (c,d) := (ac-d\overline{b},\overline{a}d+cb)
$$
where $\overline{\cdot}$ denotes the conjugation (anti-)automorphism which will be extended by $\overline{(a,b)}:=(\overline{a},-b)$ to the set of pairs $(a,b)$.
In the first step of this doubling procedure we get the real quaternions $\mathbb{H}$. Each quaternion can be written in the form $z=x_0 + x_1 e_1 + x_2 e_2 + x_3 e_3$ where $e_i^2=-1$ for $i=1,2,3$ and $e_1 e_2 = e_3$, $e_2 e_3 = e_1$, $e_3 e_1 = e_2$ and $e_i e_j = - e_j e_i$ for all mutually distinct $i,j$ from $\{1,2,3\}$. Already the commutativity has been lost in this first step of the doubling process. However, $\mathbb{H}$ is still associative.
The next duplification in which one considers pairs of quaternions already leads to the octonions $\mathbb{O}$ which are not even associative anymore. However, in contrast to Clifford algebras, the octonions still form a division algebra. In real coordinates octonions can be expressed in the form
$$
z = x_0 + x_1 e_1 + x_2 e_2 + x_3 e_3 + x_4 e_4 + x_5 e_5 + x_6 e_6 + x_7 e_7
$$
where $e_4=e_1 e_2$, $e_5=e_1 e_3$, $e_6= e_2 e_3$ and $e_7 = e_4 e_3 = (e_1 e_2) e_3$.
Like in the quaternionic case, we have $e_i^2=-1$ for all $i =1,\ldots,7$ and $e_i e_j = -e_j e_i$ for all mutual distinct $i,j \in \{1,\ldots,7\}$. Their mutual multiplication is illustrated as follows,
\begin{center}
\begin{tabular}{|l|rrrrrrr|}
$\cdot$ & $e_1$& $e_2$ & $e_3$ & $e_4$ & $e_5$ & $e_6$ & $e_7$ \\ \hline
$e_1$ & $-1$ & $e_4$ & $e_5$ & $-e_2$ &$-e_3$ & $-e_7$ & $e_6$ \\
$e_2$ & $-e_4$& $-1$ & $e_6$ & $e_1$ & $e_7$ & $-e_3$ & $-e_5$ \\
$e_3$ & $-e_5$& $-e_6$ & $-1$ & $-e_7$&$e_1$ & $e_2$ & $e_4$ \\
$e_4$ & $e_2$ & $-e_1$ & $e_7$ & $-1$ &$-e_6$ & $e_5$ & $-e_3$\\
$e_5$ & $e_3$ & $-e_7$ & $-e_1$& $e_6$& $-1$ & $-e_4$ & $e_2$ \\
$e_6$ & $e_7$ & $e_3$ & $-e_2$& $-e_5$& $e_4$ & $-1$ & $-e_1$ \\
$e_7$ & $-e_6$ & $e_5$ & $-e_4$& $e_3$ & $-e_2$& $e_1$ & $-1$ \\ \hline
\end{tabular}
\end{center}
Fortunately, the octonions still form an alternative and composition algebra.
In particular, the Moufang rule $(ab)(ca) = a((bc)a)$ holds for all $a,b,c \in \mathbb{O}$. In the special case $c=1$, one obtains the flexibility condition $(ab)a= a(ba)$.
Let $a = a_0 + \sum\limits_{i=1}^7 a_i e_i$ be an octonion represented with the seven imaginary units as mentioned above. We call $a_0$ the real part of $a$ and write $\Re{a} = a_0$. The conjugation leaves the real part invariant, but $\overline{e_j}=-e_j$ for all $j =1,\ldots,7$. On two general octonions $a,b \in \mathbb{O}$ one has $\overline{a\cdot b} = \overline{b}\cdot \overline{a}$.
The Euclidean norm and the Euclidean scalar product from $\mathbb{R}^8$ naturally extends to the octonionic case by $\langle a,b \rangle := \sum\limits_{i=0}^7 a_i b_i = \Re\{a \overline{b}\}$ and $|a|:= \sqrt{\langle a,a\rangle} = \sqrt{\sum\limits_{i=0}^7 a_i^2}$. We have the important norm composition property $|a\cdot b|=|a|\cdot|b|$ for all $a,b \in \mathbb{O}$. Every non-zero element $a \in \mathbb{O}$ is invertible with $a^{-1} =\overline{a}/|a|^2$.
The famous theorems of Frobenius and Hurwitz tell us that
$\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$ are the only real normed division algebras.
Further important rules are
$$
(a\overline{b})b = \overline{b}(ba) =a(\overline{b}b)=a(b \overline{b})
$$
for all $a,b \in \mathbb{O}$ and,
$\Re\{b(\overline{a}a)c\} =\Re\{(b \overline{a})(ac)\}$ for all $a,b,c \in \mathbb{O}$, as stated for instance in \cite{CDieckmann} Proposition 1.6.
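Since all products are determined by the multiplication table above, these algebraic rules can be checked mechanically. The following Python sketch encodes the table through its seven cyclic triples and verifies the norm composition property, the Moufang rule and the failure of associativity numerically; the names \texttt{TRIPLES}, \texttt{C} and \texttt{omul} are our own illustrative choices, the data is a plain transcription of the table.
\begin{verbatim}
import numpy as np

# e_a e_b = e_c for each cyclic triple (a, b, c), read off the table above
TRIPLES = [(1,2,4), (1,3,5), (2,3,6), (4,3,7), (1,7,6), (2,5,7), (4,6,5)]

i8 = np.arange(8)
C = np.zeros((8, 8, 8))   # structure constants: e_i e_j = sum_k C[i,j,k] e_k
C[0, i8, i8] = 1          # 1 * e_j = e_j
C[i8, 0, i8] = 1          # e_i * 1 = e_i
C[i8[1:], i8[1:], 0] = -1 # e_i^2 = -1
for t in TRIPLES:
    for a, b, c in (t, t[1:] + t[:1], t[2:] + t[:2]):
        C[a, b, c], C[b, a, c] = 1, -1   # e_a e_b = e_c = -e_b e_a

def omul(u, v):
    # octonion product of two length-8 coefficient vectors
    return np.einsum('i,j,ijk->k', u, v, C)

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 8))
# norm composition |ab| = |a| |b| ...
assert np.isclose(np.linalg.norm(omul(a, b)),
                  np.linalg.norm(a) * np.linalg.norm(b))
# ... the Moufang rule (ab)(ca) = a((bc)a) ...
assert np.allclose(omul(omul(a, b), omul(c, a)),
                   omul(a, omul(omul(b, c), a)))
# ... and non-associativity: (e1 e2) e3 - e1 (e2 e3) = 2 e7
e = np.eye(8)
print(omul(omul(e[1], e[2]), e[3]) - omul(e[1], omul(e[2], e[3])))
\end{verbatim}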
We also use the notation $B_8(z,r) :=\{z \in \mathbb{O} \mid |z| < r\}$ and $\overline{B_8(z,r)} :=\{z \in \mathbb{O} \mid |z| \le r\}$ for the eight-dimensional solid open and closed ball of radius $r$ in the octonions and $S_7(z,r)$ for the seven-dimensional sphere $S_7(z,r) :=\{z \in \mathbb{O} \mid |z| = r\}$. If $z=0$ and $r=1$ then we denote the unit ball and unit sphere by $B_8$ and $S_7$, respectively. The notation $\partial B_8(z,r)$ means the same as $S_7(z,r)$.
\section{Argument principle for isolated zeroes of octonionic monogenic functions}
We start this section by recalling the definition of octonionic regularity or octonionic monogenicity in the sense of the Riemann approach. From \cite{Imaeda,XL2000} and elsewhere we quote
\begin{definition}
Let $U \subseteq \mathbb{O}$ be an open set. A real differentiable function $f:U \to \mathbb{O}$ is called left (right) octonionic monogenic or equivalently left (right) ${\mathbb{O}}$-regular for short if it satisfies ${\cal{D}} f = 0$ or $f {\cal{D}} = 0$ where $
{\cal{D}}:= \frac{\partial }{\partial x_0} + \sum\limits_{i=1}^7 e_i \frac{\partial }{\partial x_i}$ stands for the octonionic Cauchy-Riemann operator,
where the $e_i$ are the octonionic units as defined in the preceding section.
\end{definition}
In contrast to the associative Clifford analysis setting, the set of left (right) ${\mathbb{O}}$-regular functions does not form an ${\mathbb{O}}$-right (left) module. The following example given in \cite{KO2019} provides a counter-example. Take the function $f(z):= x_1 - x_2 e_4$. A direct computation gives ${\cal{D}}[f(z)] = e_1 - e_2 e_4 = e_1 - e_1 = 0$. But the function $g(z):=(f(z))\cdot e_3 = (x_1 - x_2 e_4) e_3 = x_1 e_3 - x_2 e_7$ satisfies ${\cal{D}}[g(z)] = e_1 e_3 - e_2 e_7 = e_5 -(-e_5) = 2 e_5 \neq 0$. It is clearly the lack of associativity that destroys the modular structure of ${\mathbb{O}}$-regular functions.
This is one significant structural difference to Clifford analysis. However, note that the composition with an arbitrary translation of the form $z \mapsto z + \omega$ where $\omega \in \mathbb{O}$ still preserves monogenicity also in the octonionic case, i.e. ${\cal{D}}f(z+\omega) = 0$ if and only if ${\cal{D}}f (z) = 0$. This is a simple consequence of the chain rule, because the differential remains invariant under an arbitrary octonionic translation.
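With the multiplication sketch from the previous section at hand, this counter-example can be reproduced numerically: since $f$ and $g$ have constant partial derivatives, applying ${\cal{D}}$ reduces to a handful of basis products (the snippet reuses \texttt{omul} and \texttt{e} from the sketch above).
\begin{verbatim}
# f(z) = x_1 - x_2 e_4:  df/dx_1 = 1, df/dx_2 = -e_4, all other partials 0
Df = omul(e[1], e[0]) + omul(e[2], -e[4])
print(Df)   # zero vector: f is left O-regular

# g(z) = f(z) e_3 = x_1 e_3 - x_2 e_7
Dg = omul(e[1], e[3]) + omul(e[2], -e[7])
print(Dg)   # 2 e_5 != 0: right multiplication by e_3 destroys regularity
\end{verbatim}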
An important property of left or right ${\mathbb{O}}$-regular functions is that they satisfy the following Cauchy integral theorem, cf. for instance \cite{XL2002}:
\begin{proposition} (Cauchy's integral theorem)\\
Let $G \subseteq \mathbb{O}$ be a bounded $8$-dimensional connected star-like domain with an orientable strongly Lipschitz boundary $\partial G$. Let $f \in C^1(\overline{G},\mathbb{O})$. If $f$ is left (resp.) right $\mathbb{O}$-regular inside of $G$, then
$$
\int\limits_{\partial G} d\sigma(z) f(z) = 0,\quad {\rm resp.}\;\;\int\limits_{\partial G} f(z) d\sigma(z) = 0
$$
where $d\sigma(z) = \sum\limits_{i=0}^7 (-1)^i e_i \stackrel{\wedge}{d x_i} = n(z) dS(z)$, where $\stackrel{\wedge}{dx_i} = dx_0 \wedge dx_1 \wedge \cdots dx_{i-1} \wedge dx_{i+1} \cdots \wedge dx_7$ and where $n(z)$ is the outward directed unit normal field at $z \in \partial G$ and $dS =|d \sigma(z)|$ the ordinary scalar surface Lebesgue measure of the $7$-dimensional boundary surface.
\end{proposition}
An important left and right ${\mathbb{O}}$-regular function is the function $q_{\bf 0}: \mathbb{O} \backslash\{0\} \to \mathbb{O},\;q_{\bf 0}(z) := \frac{x_0 - x_1 e_1 - \cdots - x_7 e_7}{(x_0^2+x_1^2+\cdots + x_7^2)^4} = \frac{\overline{z}}{|z|^8}$ whose only singular point is an isolated point singularity of order $7$ at the origin. This function serves as Cauchy kernel in the following Cauchy integral formula for ${\mathbb{O}}$-regular functions. Before we recall this formula, we point out another essential difference to the associative setting:
\begin{remark}
As already mentioned in {\rm \cite{GTBook}}, in contrast to quaternionic and Clifford analysis, octonionic analysis does {\em not} offer an analogy of a general Borel-Pompeiu formula of the form
$$
\int\limits_{\partial G} g(z) d\sigma(z) f(z) = 0,
$$
not even if $g$ is right $\mathbb{O}$-regular and $f$ left $\mathbb{O}$-regular, independently how we bracket these terms together. The lack of such an identity is again a consequence of the lack of associativity. However, if one of these functions is the Cauchy kernel, then one obtains a generalization.
\end{remark}
For convenience we recall from \cite{Imaeda,Nono,XL2002}:
\begin{proposition}
Let $U \subseteq \mathbb{O}$ be a non-empty open set and $G \subseteq U$ be an $8$-dimensional compact oriented manifold with a strongly Lipschitz boundary $\partial G$. If $f: U \to \mathbb{O}$ is left (resp. right) $\mathbb{O}$-regular, then for all $z \not\in \partial G$
$$
\chi(z)f(z)= \frac{3}{\pi^4} \int\limits_{\partial G} q_{\bf 0}(w-z) \Big(d\sigma(w) f(w)\Big),\quad\quad \chi(z) f(z)= \frac{3}{\pi^4} \int\limits_{\partial G} \Big(f(w)d\sigma(w)\Big) q_{\bf 0}(w-z),
$$
where $\chi(z) = 1$ if $z$ is in the interior of $G$ and $\chi(z)=0$ if $z$ in the exterior of $G$.
\end{proposition}
Note that the way the parentheses are placed is very important. Putting the parentheses the other way around would lead in the left $\mathbb{O}$-regular case to a different formula of the form
$$
\frac{3}{\pi^4} \int\limits_{\partial G} \Big( q_{\bf 0}(w-z) d\sigma(w)\Big) f(w) = \chi(z) f(z) + \int\limits_G \sum\limits_{i=0}^7 \Big[q_{\bf 0}(w-z),{\cal{D}}f_i(w),e_i \Big] dw_0 \wedge \cdots \wedge dw_7,
$$
where $[a,b,c] := (ab)c - a(bc)$ stands for the associator of three octonionic elements. The volume integral which appears additionally always vanishes in algebras where one has the associativity, such as in Clifford algebras.
See \cite{XL2002}.
An important subcase is obtained when we take for $f$ the constant function $f(z) = 1$ for all $z \in U$ which is trivially left and right $\mathbb{O}$-regular. Then the Cauchy integral simplifies to the constant value
$$
\chi(z) = \frac{3}{\pi^4} \int\limits_{\partial G} q_{\bf 0}(w-z) d\sigma(w),\quad {\rm resp.}\; \chi(z)= \frac{3}{\pi^4} \int\limits_{\partial G} d\sigma(w) q_{\bf 0}(w-z),
$$
simply indicating if $z$ belongs to the interior or to the exterior component of $\partial G$.
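For the constant function $f \equiv 1$ this identity can even be tested numerically. The following Monte-Carlo sketch (again reusing \texttt{omul} and the tensor \texttt{C} from Section~2) integrates the Cauchy kernel over the unit sphere $S_7$ and recovers $\chi(z) \approx 1$ for an interior point and $\approx 0$ for an exterior one; the sample size and test points are arbitrary choices, and the estimate is only as sharp as the sampling allows.
\begin{verbatim}
def chi(z, n=200_000, seed=1):
    # Monte-Carlo estimate of (3/pi^4) int_{S_7} q_0(w - z) (dsigma(w) * 1)
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n, 8))
    w /= np.linalg.norm(w, axis=1, keepdims=True)      # uniform points on S_7
    d = w - z
    q = d.copy(); q[:, 1:] *= -1                       # conjugate of (w - z)
    q /= np.linalg.norm(d, axis=1, keepdims=True)**8   # Cauchy kernel q_0
    integrand = np.einsum('ni,nj,ijk->nk', q, w, C)    # q_0(w-z) n(w), n(w)=w
    area = np.pi**4 / 3                                # surface measure of S_7
    return 3 / np.pi**4 * area * integrand.mean(axis=0)

e1 = np.eye(8)[1]
print(np.round(chi(0.3 * e1), 2))   # ~ (1, 0, ..., 0): z inside S_7
print(np.round(chi(1.5 * e1), 2))   # ~ (0, 0, ..., 0): z outside
\end{verbatim}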
This is the starting point to introduce the following generalized topological version of the above stated Cauchy integral formula. Following for instance \cite{Hempfling}
one can consider more generally for $G$ a bounded Lipschitz domain whose boundary $\partial G$ could be a $7$-chain, homologous to a differentiable $7$-chain with image $\partial B(z,r)$, parametrized as
\begin{equation}\label{param}
\partial G = \{x_0(\lambda_1,\ldots,\lambda_7) + \sum\limits_{i=1}^7 x_i(\lambda_1,\ldots,\lambda_7) e_i\}.
\end{equation}
In this more general case one has
$$
w_{\partial G}(z) = \frac{3}{\pi^4} \int\limits_{\partial G} q_{\bf 0}(w-z) d\sigma(w),\quad {\rm resp.}\; w_{\partial G}(z)= \frac{3}{\pi^4} \int\limits_{\partial G} d\sigma(w) q_{\bf 0}(w-z),
$$
where $w_{\partial G}(z)$ represents the topological winding number, sometimes called the Kronecker-index (cf. \cite{ghs}), counting how often $\partial G$ wraps around $z$. Note that this is a purely topological entity induced by
$$
H_8(\partial G,\partial G - z) \cong H_8(B_8,S_7) \cong \tilde{H}_7(S_7) \cong \mathbb{Z},
$$
where $H_8$ is the related homology group and $\tilde{H}_7$ the related reduced homology group. Due to this property, the winding number $w_{\partial G}(z)$ is always an integer. This is the basis for the more general topological version of Cauchy's integral formula:
\begin{theorem}\label{topcauchy}(Topological generalized octonionic Cauchy integral formula)\\
Let $U \subseteq \mathbb{O}$ be an open set and $G$ be a closed manifold whose boundary $\Gamma$ is a strongly Lipschitz $7$-chain. If $f:U \to \mathbb{O}$ is left $\mathbb{O}$-regular, then we have the identity
$$
w_{\Gamma}(z)f(z)= \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(w-z) \Big(d\sigma(w) f(w)\Big),\quad z \not\in \Gamma
$$
where $w_{\Gamma}(z)$ is the topological winding number counting how often $\Gamma$ wraps around $z$. The latter equals zero if $z$ is a point in the exterior of $G$.
\end{theorem}
\begin{remark}
Note that if we put the parenthesis the other way around, then we get the identity
$$
\frac{3}{\pi^4} \int\limits_{\partial G} \Big( q_{\bf 0}(w-z) d\sigma(w)\Big) f(w) = w_{\partial G}(z) f(z) + \int\limits_G \sum\limits_{i=0}^7 \Big[q_{\bf 0}(w-z),{\cal{D}}f_i(w),e_i \Big] dw_0 \wedge \cdots \wedge dw_7.
$$
The volume integral is not affected in the topological version, because we simply integrate over the volume and orientation does not play any role, because the scalar differential $dV = dw_0 \wedge \cdots \wedge dw_7$ has no orientation.
\end{remark}
\begin{remark}
Comparing with {\rm \cite{Burdik,GTBook}}, we can relate the octonionic winding number with the fourth Chern number of the $G_2$-principal bundles associated to special solutions of $G_2$ Yang-Mills gauge fields arising from a generalization of 't Hooft's ansatz, see {\rm \cite{Burdik,GTBook}}. This allows us to explicitly relate the fundamental solution of the octonionic Cauchy-Riemann equation with Chern numbers of the related $G_2$-principal bundles. We will shed some more light on this interesting connection in a follow-up paper.
\end{remark}
The topological winding number is also the key tool to define a generalized notion of multiplicity of zeroes and $a$-points of $\mathbb{O}$-regular functions. To proceed to the definition and classification of $a$-points we first need the octonionic identity theorem:
\begin{proposition}\label{identity}
Let $G \subseteq \mathbb{O}$ be an $8$-dimensional domain. Suppose that $f,g: G \to \mathbb{O}$ are two left (right) $\mathbb{O}$-regular functions. If there exists a $7$-dimensional smooth sub-manifold $V$ where $f(z)=g(z)$ for all $z \in V$, then we have $f(z) = g(z)$ for all $z \in G$.
\end{proposition}
In particular, a left (right) $\mathbb{O}$-regular function satisfying $f(z)=0$ on a $7$-dimensional sub-manifold vanishes identically. Similarly, if there is an octonion $a \in \mathbb{O}$ such that $f(z)=a$ for all $z \in V$, then $f(z) = a$ for all $z \in G$. Although the proof only uses basic tools of octonionic analysis, we prefer to present it in detail, as we are not aware of a direct reference in the literature addressing the particular octonionic setting. For the proof of the statement in the associative Clifford analysis setting we refer to \cite{ghs}, p. 187.
\begin{proof}
The proof can be done by extending R. Fueter's argumentation from the quaternionic case presented in \cite{Fueter1948-49} on pp.185-189. Without loss of generality we consider the situation where $g(z)$ is the zero function. Suppose now that $V$ is a seven dimensional smooth manifold where $f|_V = 0$. Consider an arbitrary point $c \in V$ with $f(c)=0$. Since $V$ is $7$-dimensional and smooth one can find seven $\mathbb{R}$-linearly independent unit octonions, say ${\bf n}_1, \ldots, {\bf n}_7$ with $|{\bf n}_h|=1$ $(h=1,\ldots,7)$ that lie in the $7$-dimensional tangent space $T_V(c)$. Next define $\xi^{(h)}_0 := \langle {\bf n}_h,1\rangle$ and $\xi^{(h)}_j :=\langle {\bf n}_h,e_j\rangle$ for $j=1,\ldots,7$ where $\langle\cdot,\cdot\rangle$ is the scalar product on $\mathbb{O}$ defined in Section~2.
Notice that all the values $\xi^{(h)}_j$ are real for all $j=0,1\ldots,7$ and all $h=1,\ldots,7$. Next consider for each point $c \in V$ the real $7\times8$-matrix composed by the seven rows constituted by the eight real coordinates of the seven octonions ${\bf n}_1,\ldots,{\bf n}_7$, respectively, i.e.
$$
A:=\left(\begin{array}{cccc} \xi^{(1)}_0 & \xi^{(1)}_1 & \cdots & \xi^{(1)}_7\\
\xi^{(2)}_0 & \xi^{(2)}_1 & \cdots & \xi^{(2)}_7\\
\vdots & \vdots & \cdots & \vdots \\
\xi^{(7)}_0 & \xi^{(7)}_1 & \cdots & \xi^{(7)}_7\\
\end{array} \right)
$$
Re-interpreting the seven octonions ${\bf n}_j$ as column vectors from $\mathbb{R}^8$, we have $rank({\bf n}_1,\ldots,{\bf n}_7)=7$ in view of the $\mathbb{R}$-linear independency. Consequently, also the rank of the largest non-vanishing sub-determinant must equal $7$. Without loss of generality we may suppose that
\begin{equation}\label{domega}
\det\left(\begin{array}{cccc} \xi^{(1)}_1 & \xi^{(1)}_2 & \cdots & \xi^{(1)}_7\\
\xi^{(2)}_1 & \xi^{(2)}_2 & \cdots & \xi^{(2)}_7\\
\vdots & \vdots & \cdots & \vdots \\
\xi^{(7)}_1 & \xi^{(7)}_2 & \cdots & \xi^{(7)}_7\\
\end{array} \right) \neq 0.
\end{equation}
Otherwise, we change the labels of the components.
Next we use that $f(z) = f_0(z) + \sum\limits_{k=1}^7 f_k(z) e_k \equiv 0$ on $V$. Therefore, the directional derivatives all vanish, i.e. $\frac{\partial f}{\partial {\bf n}_h} = 0$ for each $h=1,2,\ldots,7$. Using the ordinary chain rule gives seven equations:
$$
\frac{\partial f}{\partial {\bf n}_h} = \sum\limits_{k=0}^7 \frac{\partial f}{\partial x_k} \frac{\partial x_k}{\partial {\bf n}_h} = \sum\limits_{k=0}^7 \frac{\partial f}{\partial x_k} \xi^{(h)}_k=0,\quad h=1,\ldots,7.
$$
Additionally, as eighth condition, $f$ has to satisfy the octonionic left Cauchy-Riemann equation $\sum\limits_{k=0}^7 e_k \frac{\partial f}{\partial x_k} = 0$.
Consider the formal octonionic determinant
$$
\det(\Omega) := \det\left(\begin{array}{cccc} 1 & e_1 & \cdots & e_7\\
\xi^{(1)}_0 & \xi^{(1)}_1 & \cdots & \xi^{(1)}_7\\
\vdots & \vdots & \cdots & \vdots \\
\xi^{(7)}_0 & \xi^{(7)}_1 & \cdots & \xi^{(7)}_7\\
\end{array} \right),
$$
defined formally in the usual way. Note that the non-associativity does not lead to ambiguous interpretations, because only the entities $e_1,\ldots,e_7$ are octonions, while the other entries $\xi^{(h)}_k$ are all real-valued expressions. So, this formal determinant is a well-defined octonion.
The eight equations mentioned above could be satisfied under two particular circumstances only. Firstly, they could be satisfied if $\det(\Omega)$ vanished. However, this is impossible. Notice that $\det(\Omega)$ represents an octonion. An octonion only vanishes if {\em all} its real components vanish. However, we obviously have $\Re\{\det(\Omega))\}\neq 0$ in view of (\ref{domega}). The only remaining second option is that
$$
\frac{\partial f}{\partial x_k} = 0,\quad k=0,1,\ldots,7
$$
at each $z \in V$. Note that also the octonionic Cauchy integral formula implies that the left $\mathbb{O}$-regularity of $f$ is also inherited by all partial derivatives of $f$. Consequently, the same argumentation is also true for all partial derivatives $\frac{\partial^{n_1+\cdots+n_7}}{\partial x_1^{n_1} \cdots \partial x_7^{n_7}} f(z) = 0$. Following \cite{Imaeda,XL2001} we can expand $f$ into a Taylor series around each left $\mathbb{O}$-regular point $z=c \in V$
of the form $f(z) = \sum\limits_{n=0}^{\infty} \sum\limits_{n=n_1+\cdots+n_7} V_{\bf n}(z-c) c_{n_1,\ldots,n_7} \equiv 0$ where
$$
V_{\bf n}(z) = \frac{1}{|{\bf n}|!} \sum\limits_{\pi \in perm({\bf n})} (Z_{\pi(n_1)}(Z_{\pi(n_2)}( \cdots (Z_{\pi(n_{6})} Z_{\pi(n_{7})})\cdots))).
$$
One has to apply the parentheses in this particular way. Due to the lack of associativity, the parentheses cannot be omitted.
Here, $perm({\bf n})$ denotes the set of all distinguishable permutations of the sequence $(n_1,n_2,\ldots,n_7)$ and $Z_i := V_{\tau(i)}(z) := x_i - x_0 e_i$ for all $i=1,\ldots,7$, cf. \cite{XL2001} Theorem C p.208. Here $\tau(i)$ is the multi-index $(n_1,\ldots,n_7)$ where $n_j = 0$ for all $j \neq i$ and $n_i=1$.
However, following also from \cite{XL2001}, $c_{n_1,\ldots,n_7} :=\Bigg( \frac{\partial^{n_1+\cdots+n_7}}{\partial x_1^{n_1} \cdots \partial x_7^{n_7}} f(z)\Bigg)_{z=c} = 0$. The uniqueness of the Taylor series representation implies that $f$ must be identically zero over the whole domain $G$.
\end{proof}
\begin{remark}
If one considers instead of $\mathbb{O}$-regular functions, the set of slice-regular functions from {\rm \cite{GP,JRS}}, then one even gets a much stronger version of the identity theorem, namely stating that two slice-regular functions already coincide with each other, when they coincide with each other on a one-dimensional set with an accumulation point. This has a strong consequence on the structure of the zeroes.
\end{remark}
Since also the octonions form a normed algebra, we can introduce the notion of an isolated $a$-point of an $\mathbb{O}$-regular function as follows, compare with \cite{Hempfling,Kra2004}:
\begin{definition}
Let $U \subseteq \mathbb{O}$ be an open set and $f:U \to \mathbb{O}$ be a function. Then we say that $f$ has an isolated $a$-point at $c \in U$, if $f(c)=a$ and if there exists a positive real $\varepsilon > 0$, such that $f(z) \neq a$ for all $z \in B(c,\varepsilon) \backslash\{c\}$. If $a=0$, then we call $c$ an isolated zero.
\end{definition}
Let $U \subseteq \mathbb{O}$ be an open set, $c \in U$ and $f:U \to \mathbb{O}$ be a real differentiable function, i.e. we suppose that each real component function $$f_i:U \to \mathbb{R}\;\;(i=0,1,\ldots,7)\;\; {\rm of}\;\; f(z)= f_0(z) + f_1(z) e_1 + \cdots + f_7(z)e_7$$ is partial differentiable.
According to the implicit function theorem in $\mathbb{R}^8$, a sufficient criterion for an $a$-point of a real-differentiable function $f:U \to \mathbb{O}$ with $f(c)=a$ to be isolated is that the Jacobian determinant does not vanish, $\det(Jf)(c) := \det\Big(\frac{\partial f_i}{\partial x_j} \Big)_{0 \le i,j \le 7} \neq 0$. However, this clearly is just a sufficient criterion, as the following example illustrates.
Take for instance the function $f:\mathbb{O} \to \mathbb{O}$ defined by
\begin{eqnarray*}
f(z)&:=& V_{2,0,0,\ldots,0}(z) + V_{0,2,0,\ldots,0}(z)+\cdots+V_{0,\ldots,0,2}(z)\\ &=& Z_1^2+Z_2^2+\cdots+Z_7^2 = (x_1^2+\cdots+x_7^2-7x_0^2) - 2 \sum\limits_{i=1}^7 x_0x_i e_i
\end{eqnarray*}
which is clearly left and right $\mathbb{O}$-regular in the whole algebra $\mathbb{O}$.
Obviously, one has $f(0)=0$. In general, $f(z)=0$ implies that first
$$x_1^2+x_2^2+\cdots+x_7^2=7x_0^2$$
and one has $x_0 x_i = 0$ for each $i=1,\ldots,7$. The first relation implies that $x_0 = \pm \frac{1}{\sqrt{7}} \sqrt{x_1^2+\cdots+x_7^2}$. Inserting this expression into the other relations yields $\sqrt{x_1^2+\cdots+x_7^2}\, x_i = 0$ for all $i=1,\ldots,7$. Since $x_1^2+\cdots+x_7^2 > 0$ whenever $(x_1,x_2,\ldots,x_7) \neq (0,0,\ldots,0)$, we must have $x_i = 0$ for all $i=1,\ldots,7$. Therefore, also $x_0=0$. Summarizing, $z=0$ is the only zero of $f$ and therefore it must be an isolated zero. The Jacobian matrix however is:
$$
(Jf)(z):= \left(\begin{array}{cccccc} -14x_0 & 2x_1 & 2x_2 & \cdots & 2x_6 & 2x_7\\
-2x_1 & -2x_0 & 0 & \cdots & 0 & 0 \\
-2x_2 & 0 &-2x_0 & \cdots &0 & 0\\
\vdots&\vdots &\vdots&\vdots &\vdots& \vdots\\
-2x_6 & 0 & 0 & \cdots & -2x_0 & 0 \\
-2x_7 & 0 & 0 & \cdots & 0 & -2x_0
\end{array} \right).
$$
Inserting $z=0$ yields $\det(Jf)(z)=0$.
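The two claims, left $\mathbb{O}$-regularity and the vanishing Jacobian at the isolated zero, are easy to confirm numerically with the multiplication sketch from Section~2; the partial derivatives below are read off the closed form of $f$ given above.
\begin{verbatim}
rng = np.random.default_rng(1)
z = rng.normal(size=8)          # a random evaluation point
e = np.eye(8)

# partials of f = (x_1^2+...+x_7^2 - 7 x_0^2) - 2 sum_i x_0 x_i e_i:
d0 = np.zeros(8); d0[0] = -14 * z[0]; d0[1:] = -2 * z[1:]    # df/dx_0
Df = d0
for k in range(1, 8):
    dk = np.zeros(8); dk[0] = 2 * z[k]; dk[k] = -2 * z[0]    # df/dx_k
    Df = Df + omul(e[k], dk)
print(np.round(Df, 12))         # zero vector: D f = 0 at z
\end{verbatim}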
A typical example of a non-linear left $\mathbb{O}$-regular function with one single octonionic isolated zero $z^*$ satisfying $Jf(z^*) \neq 0$ can be constructed by applying T. Hempfling's construction from \cite{Hempfling} p.111.
Adapting from \cite{Hempfling}, the octonionic version of the function
$$
f(z)= (x_1 x_2 \cdots x_7-1)-(x_0 x_2 \cdots x_7-1)e_1 - \cdots -(x_0 x_1 \cdots x_6-1)e_7
$$
actually is left $\mathbb{O}$-regular. We have
$$
\frac{\partial f}{\partial x_0} = -\sum\limits_{j=1}^7\Big(\prod\limits_{i\neq 0, i \neq j} x_i \Big) e_j
$$
and for $k \in \{1,\ldots,7\}$
$$
e_k \frac{\partial f}{\partial x_k} = \Big(\prod\limits_{i\neq 0, i \neq k} x_i \Big) e_k - \sum\limits_{j=1, j\neq k}^{7}\Big(\prod\limits_{i\neq j, i \neq k} x_i \Big) e_k e_j.
$$
So, $f$ indeed satisfies $\frac{\partial f}{\partial x_0} + \sum\limits_{k=1}^7 e_k \frac{\partial f}{\partial x_k} = 0$: the $e_k$-terms cancel against $\frac{\partial f}{\partial x_0}$, and the mixed terms cancel pairwise, since $e_ke_j=-e_je_k$ for $j \neq k$ while the products $\prod_{i\neq j, i \neq k} x_i$ are symmetric in $j$ and $k$.
As one readily observes, one has $f(z^*)=0$ when inserting $z^{*}=1+e_1+\cdots+e_7$. Furthermore, $\frac{\partial f_0}{\partial x_j} = (1-\delta_{0j}) \prod\limits_{k\neq 0, k\neq j} x_k$ and $\frac{\partial f_i}{\partial x_j} = -(1-\delta_{ij}) \prod\limits_{k\neq i, k\neq j} x_k$ for $i \in \{1,\ldots,7\}$, where $\delta_{ij}$ denotes the ordinary Kronecker symbol. Thus,
$$
Jf((1+e_1+\cdots+e_7)) = \left( \begin{array}{ccccc} 0 & 1 & 1 & \cdots & 1\\
-1 & 0 & -1 & \cdots & -1 \\
-1 & -1 & 0 & \cdots & -1\\
\vdots & \vdots & \vdots & \vdots & \vdots\\
-1 & -1 & -1 & \cdots & 0 \end{array}
\right),
$$
and therefore, after factoring $-1$ out of each of the last seven rows, $\det(Jf((1+e_1+\cdots+e_7))) = (-1)^7\cdot(-7) = 7 \neq 0$. Hence $z^*$ is an isolated zero of $f$.
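A quick numerical cross-check of this example, with the Jacobian computed by central finite differences (a sketch under the assumption of the component form given above):
\begin{verbatim}
import numpy as np

def f(x):
    # f_0 = x_1*...*x_7 - 1 and f_j = -(prod_{i != j} x_i - 1) for j = 1,...,7
    out = np.empty(8)
    out[0] = np.prod(x[1:]) - 1.0
    for j in range(1, 8):
        out[j] = -(np.prod(np.delete(x, j)) - 1.0)
    return out

def num_jac(fun, x, h=1e-6):
    J = np.zeros((8, 8))
    for j in range(8):
        e = np.zeros(8); e[j] = h
        J[:, j] = (fun(x + e) - fun(x - e)) / (2.0 * h)
    return J

zstar = np.ones(8)                       # z* = 1 + e_1 + ... + e_7
print(np.allclose(f(zstar), 0.0))        # True: z* is a zero
print(np.linalg.det(num_jac(f, zstar)))  # approximately 7, in particular nonzero
\end{verbatim}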
Note that in general a left or right $\mathbb{O}$-regular function can also possess zeroes that lie on $k$-dimensional manifolds with $k \le 6$. The case $k=7$ cannot occur, as a direct consequence of Proposition~\ref{identity}: if a left $\mathbb{O}$-regular function vanishes on a $7$-dimensional manifold, then it must be identically zero over the whole $8$-dimensional space. Furthermore, note that the zero sets of left or right $\mathbb{O}$-regular functions must be real analytic manifolds. Already very simple octonionic functions can have connected sets of zeroes. Adapting from \cite{Zoell} and \cite{Hempfling}, the simplest octonionic examples (one for each dimension) are
\begin{eqnarray*}
f(z)=Z_1^2+\cdots+Z_7^2 & & {\rm isolated\;zero\;at}\;z^*=0\\
f(z)=Z_1^2+\cdots+Z_6^2 & & {\rm 1-dimensional\;zero\;set \;at}\;z \in e_7 \mathbb{R}\\
f(z)=Z_1^2+\cdots+Z_5^2 & & {\rm 2-dimensional\;zero\;set \;at}\;z \in e_6 \mathbb{R} \oplus e_7 \mathbb{R}\\
f(z)=Z_1^2+\cdots+Z_4^2 & & {\rm 3-dimensional\;zero\;set \;at}\;z \in e_5 \mathbb{R} \oplus e_6 \mathbb{R} \oplus e_7 \mathbb{R}\\
f(z)=Z_1^2+Z_2^2+Z_3^2 & & {\rm 4-dimensional\;zero\;set \;at}\;z \in e_4 \mathbb{R} \oplus \cdots \oplus e_7 \mathbb{R}\\
f(z)=Z_1^2+Z_2^2 & & {\rm 5-dimensional\;zero\;set \;at}\;z \in e_3 \mathbb{R} \oplus \cdots \oplus e_7 \mathbb{R}\\
f(z)=Z_1^2 & & {\rm 6-dimensional\;zero\;set \;at}\;z \in e_2 \mathbb{R} \oplus \cdots \oplus e_7 \mathbb{R}
\end{eqnarray*}
where $Z_i$ are again the octonionic Fueter polynomials $Z_i = x_i-x_0e_i$ for $i=1,\ldots,7$.
Generalizing the construction from \cite{Hempfling}, a further class of interesting examples can be obtained as follows. Let $k \in \{2,\ldots,6\}$ be an integer and consider the function $f:\mathbb{O} \to \mathbb{O}$, $f(z):= Z_1^2+\cdots+Z_k^2+\sum\limits_{j=k+1}^7 Z_je_j$, composed of the octonionic Fueter polynomials (note that $Z_je_j = x_0 + x_je_j$). Again, this function is both left and right $\mathbb{O}$-regular and can be written in the form
$$
f(z)=\Big(\sum\limits_{i=1}^k x_i^2\Big) - k x_0^2+(7-k)x_0 - 2 x_0\sum\limits_{i=1}^k x_i e_i + \sum\limits_{i=k+1}^7 x_i e_i.
$$
when switching to the ordinary variables $x_i$.
Now consider the function $g(z):=f(z)-R^2$, where $R>0$ is a real number. Then $g(z)=0$ if and only if the following system of equations is satisfied
\begin{eqnarray*}
\sum\limits_{i=1}^k x_i^2-k x_0^2-R^2+(7-k)x_0
&=& 0 \\
x_0 x_i &=& 0,\;\;i=1,\ldots,k\\
x_i & = & 0,\;\;i=k+1,\ldots,7.
\end{eqnarray*}
First case: $x_0=0$. Then $g(z)=0$ if and only if $\sum\limits_{i=1}^k x_i^2-R^2 = 0$. In this case the zero variety of $g$ is the compact $(k-1)$-dimensional sphere of radius $R$ centered at the origin in the subspace generated by $e_1,e_2,\ldots,e_k$. \\
Second case: $x_0 \neq 0$. Then $x_i = 0$ for all $i=1,\ldots,7$. In this case $g(z)=0$ if and only if $-kx_0^2-R^2+(7-k)x_0=0$. This condition can only be satisfied if $x_0 = - \frac{1-\frac{7}{k}}{2} \pm \sqrt{\frac{(1-\frac{7}{k})^2}{4} - \frac{R^2}{k}}$, provided the expression under the square root is non-negative. In this case the zero set consists of at most two isolated points $(x_0,0,\ldots,0)$ on the real axis.
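For a concrete instance, say $k=2$ and $R=\frac{1}{2}$, the two candidate points on the real axis can be computed numerically (a minimal check; the polynomial coefficients simply restate the condition above):
\begin{verbatim}
import numpy as np
k, R = 2, 0.5
# k*x0^2 - (7-k)*x0 + R^2 = 0, discriminant (7-k)^2 - 4*k*R^2 = 23 > 0
for r in np.roots([k, -(7 - k), R**2]):
    print(r, -k*r**2 + (7 - k)*r - R**2)   # two real roots, residual ~ 0
\end{verbatim}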
\par\medskip\par
In the spirit of \cite{Hempfling,HeKra,Kra2004}, we now proceed to introduce the order of an isolated zero or isolated $a$-point of an $\mathbb{O}$-regular function. As in the quaternionic and Clifford analysis cases, this can be done in terms of the topological Cauchy integral mentioned above; it then represents the order of an isolated $a$-point in the sense of the topological mapping degree.
\begin{definition}\label{isolatedorder}
Let $U \subseteq \mathbb{O}$ be an open set, $U \neq \emptyset$. Let $f:U \to \mathbb{O}$ be left $\mathbb{O}$-regular (resp. right $\mathbb{O}$-regular) and suppose that $c \in U$ is an isolated $a$-point of $f$, i.e. $f(c)=a$ with $a \in \mathbb{O}$. Choose an $\varepsilon > 0$ such that $\overline{B}(c,\varepsilon) \subseteq U$ and suppose that $f(z) \neq a$ for all $z \in \overline{B}(c,\varepsilon) \backslash\{c\}$. Then,
$$
{\rm ord}(f-a;c) := \frac{3}{\pi^4} \int\limits_{(f-a)(\partial B(c,\varepsilon))} q_{\bf 0}(w) d\sigma(w),\quad {\rm resp.}\;{\rm ord}(f-a;c) := \frac{3}{\pi^4} \int\limits_{(f-a)(\partial B(c,\varepsilon))} d\sigma(w) q_{\bf 0}(w)
$$
is called the order of the isolated $a$-point of the octonionic left (resp. right) $\mathbb{O}$-regular function $f$ at $c$.
\end{definition}
In the case where $a=0$ we speak of the order of an isolated zero of $f$, which in the left $\mathbb{O}$-regular case equals the Cauchy integral
$$
{\rm ord}(f;c) := \frac{3}{\pi^4} \int\limits_{f(\partial B(c,\varepsilon))} q_{\bf 0}(w) d\sigma(w).
$$
\begin{proposition}
The numbers ${\rm ord}(f-a;c)$ are integers; they count how often the image of a small sphere around the octonionic $a$-point wraps around $a$ and therefore represent the notion of the order of an $a$-point in the sense of the topological mapping degree.
\end{proposition}
\begin{proof}
The topological generalized version of the octonionic Cauchy integral formula (Theorem~\ref{topcauchy}) tells us that every octonionic function $h:U \to \mathbb{O}$ that is left $\mathbb{O}$-regular over an open set $U$ containing a closed manifold $G$ whose boundary $\Gamma$ is a strongly Lipschitz $7$-chain satisfies
$$
w_{\Gamma}(y)h(y) = \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(w-y) \Big(d\sigma(w) h(w)\Big).
$$
So, in the case where $h(z) = 1$ for all $z \in U$, one has
$$
w_{\Gamma}(y) = \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(w-y) d\sigma(w).
$$
In view of the mentioned property $H_8(\Gamma,\Gamma - c) \cong \tilde{H}_7(S_7)$, one can replace $\Gamma$ in the latter equation by the homeomorphically equivalent small sphere $\partial B(c,\varepsilon)$, so we have
$$
w_{\Gamma}(y) = \frac{3}{\pi^4} \int\limits_{\partial B(c,\varepsilon)} q_{\bf 0}(w-y) d\sigma(w).
$$
Next we replace the octonion $y$ by $f(c)-a=0$ and the cycle $\partial B(c,\varepsilon)$ by its image $(f-a)(\partial B(c,\varepsilon))$, and one obtains
\begin{eqnarray*}
{\rm ord}(f-a;c) &=& \frac{3}{\pi^4} \int\limits_{(f-a)(\partial B(c,\varepsilon))} q_{\bf 0}(w-(f(c)-a)) d\sigma(w)\\
&= & \frac{3}{\pi^4} \int\limits_{(f-a)(\partial B(c,\varepsilon))} q_{\bf 0}(w) d\sigma(w) \\
&=& w_{(f-a)(\partial B(c,\varepsilon))}(0),
\end{eqnarray*}
which is an integer winding number. We recall that $f(\partial B(c,\varepsilon))$, and hence also the translated cycle $(f-a)(\partial B(c,\varepsilon))$, represents a $7$-dimensional cycle, cf. \cite{AH}, p. 470.
\end{proof}
\begin{remark}\label{zero-order}
In contrast to complex analysis, it can happen that ${\rm ord}(f;c) = 0$ even if $f(c)=0$. This can occur, for instance, when the outward normal field of the image surface of the boundary cycle $\Gamma$ turns into an inward directed one after one loop of the parametrization of $f(\Gamma)$; in that case the contributions of the integration over the complete cycle $f(\Gamma)$ can cancel each other out symmetrically.
This phenomenon already occurs in the quaternionic setting, as pointed out in {\rm \cite{Fueter1948-49}, p. 199}.
\end{remark}
\begin{remark}
As explained in {\rm \cite{Zoell}}, already in the quaternionic case there is no longer a direct correspondence between the order of an $a$-point and the number of vanishing coefficients in the Taylor series expansion. Note that in complex analysis one has the relation
$$
{\rm ord}(f-a;c) = n, \quad \Longleftrightarrow \quad (f-a)^{(k)}(c) = 0,\; \forall k < n,\;(f-a)^{(n)}(c) \neq 0.
$$
Since the situation is already so complicated for the quaternions, one cannot expect a simpler relation in the octonionic case. Actually, analogues of the counter-example presented in {\rm \cite{Zoell}}, pp. 131--132, can easily be constructed.
In the octonionic slice-regular setting described for instance in {\rm \cite{GPzeroes}}, the situation is much simpler. As mentioned previously, in the slice function theoretical setting an octonionic slice-regular function has either isolated zeroes or spherical zeroes, similarly to the slice-monogenic setting in $\mathbb{R}^{n+1}$, cf. {\rm \cite{CSS,GPzeroes}}. In terms of the symmetric slice product, the multiplicity of such a zero can then be described by the exponent of the (slice) power, in the usual way as in classical real and complex analysis: a slice-regular function $f$ can be decomposed uniquely as $f(z)=(z-a)^{*k}*g(z)$, where $g(z)$ is a uniquely defined and zero-free slice-regular function around $a$, see {\rm \cite{CSS,GPzeroes}} and elsewhere. Note that ordinary powers of $z$ are intrinsic slice-regular functions, also in the octonions; the slice product provides this kind of symmetric structure. In the setting of $\mathbb{O}$-regular functions in the sense of the Cauchy-Riemann operator, such a decomposition is not possible, because of the lack of commutativity (and also of associativity).
\end{remark}
The definition of the order of an isolated $a$-point of an octonionic left or right $\mathbb{O}$-regular function in the sense of Definition~\ref{isolatedorder} is very natural from the topological point of view and so far the only meaningful tool to introduce a notion of ``multiplicity'' of an $a$-point. However, using this definition to calculate the order in a concrete practical example is very difficult in general; note that one has to perform the integration over the {\em image} of the sphere. A significant advantage of the octonionic setting in comparison to the Clifford analysis setting is that octonionic functions represent maps from $\mathbb{O} \to \mathbb{O}$, which can be uniquely identified with maps from $\mathbb{R}^8 \to \mathbb{R}^8$, by identifying the map $$x_0 + x_1 e_1 + \ldots + x_7 e_7 \mapsto f_0(z) + f_1(z) e_1 + \cdots + f_7 (z)e_7$$ with the corresponding map $\left(\begin{array}{c} x_0 \\ x_1\\ \vdots \\ x_7 \end{array}\right) \mapsto \left(\begin{array}{c} f_0(x_0,\ldots,x_7) \\ f_1(x_0,x_1,\ldots,x_7)\\ \vdots \\ f_7(x_0,x_1,\ldots,x_7) \end{array}\right)$. In Clifford analysis, by contrast, one deals with maps from $\mathbb{R}^8$ to $Cl_7 \cong \mathbb{R}^{128}$. Now, if the $7$-dimensional surface $\partial G$ is parametrized as in (\ref{param}), the image of that surface $f(\partial G)$ can be parametrized as
$$
f(\partial G) = \{f_0(x(\lambda_1,\ldots,\lambda_7)) + \sum\limits_{i=1}^7 f_i(x(\lambda_1,\ldots,\lambda_7))\,e_i\}
$$
and one can simply apply the chain rule for ordinary real differentiable functions from $\mathbb{R}^8 \to \mathbb{R}^8$, as indicated in \cite{Hempfling} for purely paravector-valued functions. Applying the chain rule and exploiting the special mapping property that the images of octonionic functions are again octonions leads to the following octonionic generalization of the transformation formula from \cite{Kra2004}, p. 32. In the Clifford analysis case one had to restrict oneself to particular paravector-valued functions; this restriction is not necessary in the octonionic setting:
\begin{lemma}
Let $G \subseteq \mathbb{O}$ be a domain and suppose that each real component function of an octonionic function $f:G \to \mathbb{O}$ is real differentiable in the ordinary sense. Then we have
$$
d\sigma(f(z)) = [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)],
$$
where $[(Jf)^{adj}(z)]$ stands for the adjunct real component $8 \times 8$ matrix of the Jacobian $(Jf) = (\frac{\partial f_i}{\partial x_j})_{ij}$. Furthermore, $[d\sigma(z)]$ represents the $\mathbb{R}^8$-vector composed of the $\stackrel{\wedge}{dx_i}$ for $i=0,\ldots,7$, and $\circledcirc$ means the usual matrix-vector product, multiplying the real $8\times8$ matrix in the usual way with the $8$-dimensional real vector. The resulting $\mathbb{R}^8$-vector on the right-hand side is then re-interpreted as an octonion on the left-hand side by identifying the unit vectors with the corresponding octonionic units.
\end{lemma}
It should be pointed out very clearly that $\circledcirc$ does not mean the usual octonionic product. To be more explicit $[d \sigma(z)]$ is interpreted as the vector
$$
[d\sigma(z)] :=
\left(\begin{array}{c} (-1)^0 \stackrel{\wedge}{dx_0} \\ (-1)^1\stackrel{\wedge}{dx_1}\\ \vdots \\ (-1)^7 \stackrel{\wedge}{dx_7} \end{array}\right).$$
The adjunct matrix $[(Jf)^{adj}(z)]$ has the form
$$
(Jf)^{adj}(z) =\Bigg((-1)^{i+j} \det \Big( \frac{\partial f_i}{\partial x_j}(z) \Big)^{adj} \Bigg)_{i,j},
$$
where $\det \big( \frac{\partial f_i}{\partial x_j}(z) \big)^{adj}$ denotes the minor of $(Jf)(z)$ obtained by deleting the $i$-th row and the $j$-th column.
This also provides a correction to \cite{Kra2004}, p. 32, where the index $i$ of the function $f$ was omitted, as well as the star after $(Jf)$ (indicating the adjunct) in the second line of the proof. The proof for the octonionic case follows the same lines as for the paravector-valued Clifford case in \cite{Kra2004}, p. 32. The chain rule leads to
$$
d\sigma(f(z)) = \sum\limits_{i=0}^7 \sum\limits_{j=0}^7 (-1)^{i+j} e_i \det \Big(\frac{\partial f_i}{\partial x_j}(z)\Big)^{adj}(-1)^j \stackrel{\wedge}{dx_j}
$$
and the stated formula follows, because no associativity property is required.
\par\medskip\par
This lemma allows us to reformulate the definition of the order given in Definition~\ref{isolatedorder} in such a way that the integration is performed over the simple sphere $S_7(c,\varepsilon)$ itself. In contrast to the Clifford analysis case presented in \cite{Kra2004}, p. 33, we do not need to worry about a possible restriction of the range of values: all octonion-valued functions satisfying the left or right octonionic Cauchy-Riemann system are admitted here.
However, the placement of the parentheses in the following theorem is crucially important. In the left $\mathbb{O}$-regular case we have
\begin{theorem}\label{order-reformulated}
Let $G \subseteq \mathbb{O}$ be a domain. Let $f:G \to \mathbb{O}$ be a left $\mathbb{O}$-regular function and suppose that $c \in G$ is an isolated $a$-point of $f$ with $f(c)=a$. Choose $\varepsilon > 0$ such that $\overline{B}(c,\varepsilon) \subseteq G$ and $f(z) \neq a$ for all $z \in \overline{B}(c,\varepsilon) \backslash\{c\}$. Then the order of the $a$-point can be re-expressed as
\begin{eqnarray*}
{\rm ord}(f-a;c) &=& \frac{3}{\pi^4} \int\limits_{S_7(c,\varepsilon)} q_{\bf 0}(f(z)-a) \cdot \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg)\\ &=& \frac{3}{\pi^4} \int\limits_{S_7(c,\varepsilon)} \frac{\overline{f(z)-a}}{|f(z)-a|^8}\cdot \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg).
\end{eqnarray*}
\end{theorem}
Here, $\cdot$ stands for the octonionic product, where the term inside the large parenthesis on the right is re-interpreted as octonion.
Note that the Jacobian matrix is invariant under translations of the target, therefore $J(f-a)(z) = Jf(z)$. In the complex case the Jacobian reduces to $(f-a)'(z)= f'(z)$ and one re-obtains the usual integrand $\frac{f'(z)}{f(z)-a}$, because the Cauchy kernel then coincides with the simple inverse.
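As a sanity check of this reformulation in the complex special case just mentioned, the winding integral $\frac{1}{2\pi i}\oint f'(z)/f(z)\,dz$ can be evaluated numerically; for $f(z)=z^n$ it must return $n$ (a sketch with an assumed trapezoidal discretization):
\begin{verbatim}
import numpy as np
n, eps, m = 3, 0.5, 2000
t = 2.0 * np.pi * np.arange(m) / m
z = eps * np.exp(1j * t)
dz = 1j * z * (2.0 * np.pi / m)           # derivative of the parametrization
val = np.sum((n * z**(n-1) / z**n) * dz) / (2j * np.pi)
print(np.round(val.real))                 # -> 3 = ord(z^3; 0)
\end{verbatim}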
For the sake of completeness, in the right $\mathbb{O}$-regular case one obtains
\begin{eqnarray*}
{\rm ord}(f-a;c) &=& \frac{3}{\pi^4} \int\limits_{S_7(c,\varepsilon)} \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg) \cdot q_{\bf 0}(f(z)-a) \\ &=& \frac{3}{\pi^4} \int\limits_{S_7(c,\varepsilon)} \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg) \cdot \frac{\overline{f(z)-a}}{|f(z)-a|^8}.
\end{eqnarray*}
\par\medskip\par
Note that we always have ${\rm ord}(f-a;c)=0$ at all points $c$ where $f(c) \neq a$.
As a direct application of this property and of Theorem~\ref{order-reformulated}, we can deduce the following argument principle for isolated $a$-points of $\mathbb{O}$-regular functions, which extends Theorem 1.34 from \cite{Kra2004}, where the paravector-valued Clifford holomorphic case was treated. Also in the octonionic case we have
\begin{theorem} (Octonionic argument principle)\\
Let $G \subseteq \mathbb{O}$ be a domain and suppose that $f:G \to \mathbb{O}$ is left $\mathbb{O}$-regular over $G$. Now, consider a nullhomologous $7$-dimensional cycle $\Gamma$ that parametrizes the surface of an $8$-dimensional oriented compact manifold $C \subset G$. Under the assumption that $f$ has only isolated $a$-points in the interior of $C$ and no further $a$-points on the boundary $\Gamma$, we have the order relation
$$
\sum\limits_{c \in C} {\rm ord}(f-a;c) = \frac{3}{\pi^4} \int\limits_{\Gamma} \frac{\overline{f(z)-a}}{|f(z)-a|^8}\cdot \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg).
$$
\end{theorem}
\begin{proof}
The proof follows along the same lines as in the Clifford analysis case given in \cite{Kra2004}, p. 33; this is a consequence of its predominantly topological nature. The crucial point is that any oriented compact manifold can contain at most finitely many isolated $a$-points in its interior, say $c_1,\ldots,c_n$. Thus, one can find a sufficiently small real number $\varepsilon > 0$ such that there are no $a$-points in the union $\bigcup_{i=1}^n B(c_i,\varepsilon) \backslash\{c_i\}$. Since $f$ has neither further $a$-points nor singular points in the remaining part $C \backslash \bigcup_{i=1}^n B(c_i,\varepsilon)$, one obtains in view of Theorem~\ref{order-reformulated} that
$$
\int\limits_{\Gamma} \frac{\overline{f(z)-a}}{|f(z)-a|^8}\cdot \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg) = \sum\limits_{i=1}^n \int\limits_{S(c_i,\varepsilon)} \frac{\overline{f(z)-a}}{|f(z)-a|^8}\cdot \Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg).
$$
The assertion now follows directly, taking into account the mentioned property that ${\rm ord}(f-a;c)=0$ at all $c \in C$ with $f(c) \neq a$.
\end{proof}
The great benefit of the argument principle is that it provides a topological tool to control the isolated $a$-points or zeroes of an octonionic regular function under special circumstances. Its classical application is Rouch\'e's theorem, which gives a sufficient criterion under which an octonionic regular function may be perturbed without changing the number of isolated zeroes inside a domain. Alternatively, it gives a criterion to decide whether two octonionic monogenic functions have the same number of isolated zeroes inside such a domain. In close analogy to the associative Clifford analysis case, cf. \cite{Kra2004}, Theorem 1.35, we may establish
\begin{theorem}\label{rouche} (Generalized classical Rouch\'e's theorem)\\
Suppose that $G \subseteq \mathbb{O}$ is a domain and that $\Gamma$ is a nullhomologous $7$-dimensional cycle parametrizing the boundary of an oriented compact $8$-dimensional manifold $C \subset G$. Let $f,g:G \to \mathbb{O}$ be two $\mathbb{O}$-regular functions that have only a finite number of zeroes inside ${\rm int}\, C$ and no zeroes on $\Gamma$. If $|f(z)-g(z)| < |f(z)|$ for all $z \in \Gamma$, then
$$
\sum\limits_{c \in C} {\rm ord}(f;c) = \sum\limits_{c \in C} {\rm ord}(g;c).
$$
\end{theorem}
The nature of this theorem is also predominantly topological; the topological aspects play a much more profound role than the function theoretic ones, which are nevertheless needed because the proof uses the argument principle involving the particular Cauchy kernel of the octonionic Cauchy-Riemann system. Let us define a family of left $\mathbb{O}$-regular functions depending on a {\em continuous} real parameter $t \in [0,1]$ by
$$
h_{t}(z) := f(z)+ t(g(z)-f(z)), \quad z \in G.
$$
For each $t \in [0,1]$ the function $h_t$ is left $\mathbb{O}$-regular over $G$, since $t$ is only a {\em real} parameter; for a general octonionic parameter the left $\mathbb{O}$-regularity would be destroyed. Let $z \in \Gamma$. Then we have $|t(g(z)-f(z))|=|t||f(z)-g(z)| \le |f(z)-g(z)| < |f(z)|$, where the last inequality follows from the assumption. Therefore $h_t(z) \neq 0$ for all $z \in \Gamma$.
Furthermore, for each $t \in [0,1]$ the quantity ${\rm ord}(h_t;c)$ is an integer. Since the number of zeroes inside $C$ is finite, for each $t$ the sum $\sum\limits_{c \in C} {\rm ord}(h_t;c)$ is a finite integer $N(t) \in \mathbb{Z}$. By definition we have
\begin{eqnarray*}
N(t) &=& \sum\limits_{c \in C} {\rm ord}(h_t;c)\\
& =& \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(h_t(z)) \cdot \Bigg(
[(Jh_t)^{adj}(z)] \circledcirc [d\sigma(z)]
\Bigg)\\
&=& \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(f(z)+t g(z)-t f(z)) \cdot \Bigg(
[(J (f+tg-tf))^{adj}(z)] \circledcirc [d\sigma(z)]
\Bigg).
\end{eqnarray*}
Since all terms under the latter integral are continuous in the variable $t$, the expression $N(t)$ on the left-hand side must also be continuous in $t$. However, $N(t)$ is integer-valued for every $t \in [0,1]$. Therefore, $N(t)$ must be constant, hence $N(0)= \sum\limits_{c \in C} {\rm ord}(h_0;c) = \sum\limits_{c \in C} {\rm ord}(f;c)$ and $N(1) = \sum\limits_{c \in C} {\rm ord}(h_1;c) = \sum\limits_{c \in C} {\rm ord}(g;c)$ coincide.
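The homotopy argument can be illustrated numerically in the complex special case (our own toy example, not part of the octonionic theory): $f(z)=z^3$ and $g(z)=z^3+\frac{1}{4}z$ satisfy $|f-g|<|f|$ on the unit circle, and both winding integrals return the same zero count.
\begin{verbatim}
import numpy as np
m = 4000
z = np.exp(2j * np.pi * np.arange(m) / m)   # unit circle
dz = 2j * np.pi * z / m
wind = lambda h, dh: np.round(np.sum(dh / h * dz) / (2j * np.pi)).real
print(wind(z**3, 3*z**2))                   # 3.0
print(wind(z**3 + 0.25*z, 3*z**2 + 0.25))   # 3.0 as well
\end{verbatim}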
\par\medskip\par
As a nice application of Theorem~\ref{rouche} we can establish the following weakened version of Hurwitz' theorem. The statement can also be carried over to the quaternionic monogenic setting and to the context of paravector-valued monogenic functions in Clifford algebras, for which it has, to the best of our knowledge, not been established so far. We prove
\begin{theorem}(Generalized Hurwitz theorem)\\
Let $G \subset \mathbb{O}$ be a domain. Suppose that $(f_n)_{n \in \mathbb{N}}$, $f_n: G \to \mathbb{O}$, is a normally convergent sequence of $\mathbb{O}$-regular functions with $f_n(z) \neq 0$ for all $z \in G$ and all $n \in \mathbb{N}$. Then the limit function $f(z):= \lim\limits_{n \to \infty} f_n(z)$ either satisfies $\sum\limits_{c \in G} {\rm ord}(f;c)=0$ or vanishes identically over $G$.
\end{theorem}
\begin{proof}
According to \cite{XL2002} Theorem 11, left (or right) $\mathbb{O}$-regular functions satisfy Weierstra{\ss}' convergence theorem.
Therefore, the limit function $f$ is a well-defined $\mathbb{O}$-regular function over the whole domain $G$. Let us now assume that $f \not\equiv 0$ over $G$. Take an arbitrary point $z^* \in G$. In view of the identity theorem for left $\mathbb{O}$-regular functions (Proposition~\ref{identity}) there must exist a positive real $R > 0$ such that the closed ball $\overline{B(z^*,R)}$ is entirely contained in $G$ and $M:= \min_{z \in S_7(z^*,R)} |f(z)| > 0$. Moreover, since $S_7(z^*,R)$ is compact, there must exist an index $n_0 \in \mathbb{N}$ such that
$$
\max_{z \in S_7(z^*,R)} |f(z)-f_n(z)| < M,\quad \forall n \ge n_0.
$$
Summarizing, for all indices $n \ge n_0$, we have the inequality
$$
|f(z)-f_n(z)| < M \le |f(z)| \quad\quad \forall z \in S_7(z^*,R)
$$
which is the required condition of Rouch\'e's theorem in Theorem~\ref{rouche}.
Now Rouch\'e's theorem tells us that
$$
\sum\limits_{c \in \overline{B}(z^*,R)} {\rm ord}(f;c) = \sum\limits_{c \in \overline{B}(z^*,R)} \underbrace{{\rm ord}(f_n;c)}_{=0}.
$$
Note that since $f_n(z) \neq 0$ for all $z \in G$, we have ${\rm ord}(f_n;c)=0$ for all $c$ and all $n \in \mathbb{N}$. Since the point $z^*$ can be chosen arbitrarily inside $G$, we conclude that
$$
\sum\limits_{c \in G} {\rm ord}(f;c) = 0
$$
and the statement is proven.
\end{proof}
\begin{remark}
Note that in contrast to the complex analytic case, ord$(f;c) = 0$ does not guarantee that $f(c)\neq 0$, as pointed out in Remark~\ref{zero-order}. Therefore, we can only establish this weaker statement.
\end{remark}
\begin{remark}
In the context of other regularity concepts, such as for slice-regular octonionic functions and generalized octonionic holomorphic functions in the sense of S.V. Ludkovski, generalized statements of Rouch\'e and Hurwitz type could be established, see {\rm \cite{GPzeroes,L2007}}.
\end{remark}
\section{Rudiments for the treatment of non-isolated zeroes}
The following section presents results which are even new for quaternionic functions and paravector-valued functions in associative Clifford algebras.
The aim is to present a meaningful definition of the order of zeroes or $a$-points of an $\mathbb{O}$-regular function that are not isolated but lie on a simply connected compact manifold of dimension $1 \le k \le 6$, including in the simplest case compact algebraic varieties in eight variables.
The case $k=0$ is the isolated case which has been treated in the previous section. As mentioned in the previous section, the case $k=7$ does not appear in the $\mathbb{O}$-regular setting, because of the identity theorem for $\mathbb{O}$-regular functions (Proposition~\ref{identity}), which excludes this situation. Without loss of generality we focus on the treatment of compact varieties of zeroes, because varieties of $a$-points can be studied in the same way by looking at the function $f(z)-a$.
Let us recall that in the isolated case one can always consider a small sphere around that zero with the property that no zeroes lie inside or on the boundary of that sphere.
Let us now suppose that we have a $k$-dimensional simply connected compact variety of zeroes ($k \le 6$), which we call $M$. To keep things simple, we restrict ourselves in all that follows to varieties without self-intersections.
In the case of dealing with a variety of non-isolated zeroes with these properties,
the proper analogue of a sphere surrounding an isolated point is a tubular domain of thickness $\varepsilon >0$ of the form
$$
T_M^{\varepsilon} :=\{z \in \mathbb{O} \backslash M \mid \min_{c \in M}\{|z-c|\} =\varepsilon\}.
$$
In the case where $k = {\rm dim}\;M=1$ and where $M$ is a finite closed line segment, parametrized in the form $[\gamma] = \gamma(t),\quad t \in [0,1]$, the domain
$$
T_{[\gamma]}^{\varepsilon} :=\{z \in \mathbb{O} \backslash M \mid \min_{t \in [0,1]}\{|z-\gamma(t)|\} =\varepsilon\}
$$
is nothing else than an ordinary circular symmetric tube of thickness $\varepsilon$ around that line segment. In the case where $M$ is a closed circle, the associated tubular domain $T_{[\gamma]}^{\varepsilon}$ is a generalized torus; more precisely, it is homeomorphically equivalent to the real Hopf manifold $S_1 \times S_6 \cong \mathbb{R}^8 \backslash\{0\}/\mathbb{Z}$. A concrete example of a left and right $\mathbb{O}$-regular function whose zero set is, up to at most two isolated points, the unit circle lying in the subspace generated by $e_1$ and $e_2$ is the function $f(z)=Z_1^2+Z_2^2-1+\sum\limits_{j=3}^7 Z_j e_j$, where again $Z_i = x_i - x_0 e_i$ for all $i=1,\ldots,7$.
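The claimed zero set of this example is easily confirmed numerically (a sketch; the component form below uses $Z_je_j = x_0 + x_je_j$ and is spelled out from the definitions above):
\begin{verbatim}
import numpy as np
def f(x):   # components of Z_1^2 + Z_2^2 - 1 + sum_{j>=3} Z_j e_j
    out = np.zeros(8)
    out[0] = x[1]**2 + x[2]**2 - 2.0*x[0]**2 - 1.0 + 5.0*x[0]
    out[1], out[2] = -2.0*x[0]*x[1], -2.0*x[0]*x[2]
    out[3:] = x[3:]
    return out
for t in np.linspace(0.0, 2.0*np.pi, 7):
    p = np.zeros(8); p[1], p[2] = np.cos(t), np.sin(t)
    assert np.allclose(f(p), 0.0)        # the unit circle consists of zeroes
for r in np.roots([2.0, -5.0, 1.0]):     # -2 x0^2 + 5 x0 - 1 = 0
    p = np.zeros(8); p[0] = r.real
    assert np.allclose(f(p), 0.0)        # the two isolated real zeroes
print("zero set as claimed")
\end{verbatim}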
In the particular case where $M$ is just an isolated point, say $M =\{z_0\}$, the tube reduces to the set $T_{\{z_0\}}^{\varepsilon} = \{z \in \mathbb{O} \mid |z-z_0| = \varepsilon\}$, which is the ordinary sphere $\partial B_8(z_0,\varepsilon)$ of the eight-dimensional ball. Thus, tubular domains provide a natural analogue of circular symmetric neighborhoods around closed simply connected manifolds without self-intersections.
In this framework, this is an adequate geometry to meaningfully introduce the notion of the order of a compact simply connected zero manifold of a left $\mathbb{O}$-regular function, generalizing the definitions given above for the isolated case.
We introduce
\begin{definition}
Suppose that $U \subseteq \mathbb{O}$ is a non-empty open set. Let $f:U \to \mathbb{O}$ be left $\mathbb{O}$-regular and suppose that $M$ is a compact simply connected manifold of dimension $k \in \{0,1,\ldots,6\}$ with the above mentioned properties, with $M \subset U$ and $f(z)=0$ for all $z \in M$. Further assume that there is a real positive $\varepsilon > 0$ such that $T_M^{\varepsilon} \subset U$ and that $f(z) \neq 0$ for all $z \in T_M^{\varepsilon}$ and for all $z \in {\rm int}\, T_M^{\varepsilon} \backslash M$. Then we can define the order of the non-isolated zero variety $M$ of $f$ by
\begin{eqnarray*}
{\rm ord}(f;M) &:=& \frac{3}{\pi^4} \int\limits_{f(T_M^{\varepsilon})} q_{\bf 0}(w) d \sigma(w)\\
&=& \frac{3}{\pi^4} \int\limits_{T_M^{\varepsilon}} q_{\bf 0}(f(z)) \cdot
\Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg).
\end{eqnarray*}
\end{definition}
\begin{remark}
The integral counts how often the image of the tubular surface $T_M^{\varepsilon}$ under $f$ wraps around zero.
All zeroes belonging to the same zero variety $M$ have the same order: since the winding number is a homotopy invariant, it depends continuously on the point and is hence constant along the simply connected variety $M$. Therefore ${\rm ord}(f;c_i) = {\rm ord}(f;c_j) = {\rm ord}(f;M)$ for all $c_i,c_j \in M$.
\end{remark}
Notice further that the integral expressions are well-defined, because we do not integrate over any zeroes of $f$: $f(z) \neq 0$ for all $z \in T_M^{\varepsilon}$.
This generalized notion allows us to set up a generalized version of the octonionic argument principle where we now may admit left $\mathbb{O}$-regular functions having a finite number of compact simply-connected zero varieties $M_1,\ldots,M_p$ with no auto-intersections of dimension $k_1,\ldots,k_p$, respectively, lying inside a domain $G \subset \mathbb{O}$. We can prove
\begin{theorem} (Generalized octonionic argument principle for non-isolated zeroes)\\
Let $G \subset \mathbb{O}$ be a domain. Suppose that $f:G \to \mathbb{O}$ is a left $\mathbb{O}$-regular function over $G$. Assume that $C$ is an $8$-dimensional oriented compact manifold $C \subset G$ whose boundary is parametrized by a $7$-dimensional null-homologous cycle $\Gamma$. Furthermore, suppose that $f$ has a finite number of simply-connected closed zero varieties $M_1,\ldots,M_p$ with no auto-intersections of dimension $k_1,\ldots,k_p$, respectively, and that $f$ has no further zeroes inside of $C$ nor on its boundary $\Gamma$. Then we have
$$
\frac{3}{\pi^4} \int\limits_{f(\Gamma)} q_{\bf 0}(w) d \sigma(w) = \sum\limits_{i=1}^p {\rm ord}(f;M_i).
$$
\end{theorem}
\begin{proof}
Since $f$ has no zeroes on $\Gamma$ and since $C$ is compact, the following integral and integral transformation are well-defined:
\begin{equation}\label{preveq}
\frac{3}{\pi^4} \int\limits_{f(\Gamma)} q_{\bf 0}(w)d\sigma(w) = \frac{3}{\pi^4} \int\limits_{\Gamma} q_{\bf 0}(f(z)) \cdot
\Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg).
\end{equation}
Since $f$ has no zeroes in $C \backslash \bigcup_{i=1}^p M_i$, we have $\sum\limits_{c \in C \backslash \bigcup_{i=1}^p M_i} {\rm ord}(f;c) = 0$, so that
the latter integral from (\ref{preveq}) can be expressed in the form
$$
\frac{3}{\pi^4}\sum\limits_{i=1}^p\int\limits_{T_{M_i}^{\varepsilon_i}} q_{\bf 0}(f(z)) \cdot
\Bigg( [(Jf)^{adj}(z)] \circledcirc [d\sigma(z)] \Bigg) = \sum\limits_{i=1}^p {\rm ord}(f;M_i),
$$
because the contribution of the integral over the boundary of any subdomain that contains no zeroes vanishes.
\end{proof}
\begin{remark}
The statement remains valid in the Clifford analysis setting, addressing paravector-valued functions with zero varieties that have the above mentioned properties.
\end{remark}
\section{Perspectives}
The previous section suggests an approach to address orders of non-isolated zeroes of octonionic regular or Clifford monogenic functions in the sense of the Riemann approach. A further step would consist in applying this argument principle to establish generalizations of Rouch\'e's theorem and Hurwitz' theorem in the non-isolated context. Obviously, the geometric conditions required in the previous section are very strong. As mentioned in Section~3, it is very easy to construct $\mathbb{O}$-regular functions that have zero varieties of infinite extension. If we want to address varieties with self-intersections, then we have to adapt the use of tubular domains. An important question is which genus the arising zero manifolds can have in the most general case. To gain insight into these questions, a profound study of algebraic geometric methods, and in particular a deep understanding of the nature of the appearing zero varieties of $\mathbb{O}$-regular functions, is required. Working at the intersection of algebraic geometry and hypercomplex function theories represents a promising branch for future investigation.
Furthermore, this paper shows that the argument principle is more a topological theorem than an analytic one, although the Cauchy kernel is explicitly needed in its definition. The predominantly topological character gives hope that these kinds of theorems can be carried over to many more hypercomplex function theories, in particular to null-solutions of other differential equations. A really substantial question is whether these tools can be carried over to functions that are defined in other algebras beyond the octonions and the paravector-valued subspaces of Clifford algebras. Both paravector spaces and octonions are normed and free of zero-divisors. Following K. Imaeda \cite{Imaeda}, already in the context of sedenions it is no longer possible to set up a direct analogue of Cauchy's integral formula; Cauchy's integral formula, however, is the basic tool for establishing all these results. The appearance of zero divisors will also have an impact on the topological properties. There remain a lot of open questions and challenges for future research.
In this section, we discuss the convergence of the canonical decomposition method at increasing $Q$ (number of modes). In Sect. 2, we have shown that $\mathcal{M}_Q^h \subset \mathcal{V}^h$, provided that their interpolations are based on the same basis functions. Here, we provide some further discussion.
For 2D case, we compare
\begin{equation}
\mathcal{M}_Q^h=\left\{u^h \bigg |u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} N_I(x)\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} N_J(y)\gamma_J^{(q)} \right), \beta_I^{(q)}, \gamma_J^{(q)} \in \mathbb{R}, I=1,\cdots,n_1, J=1,\cdots,n_2\right\}
\label{eq:2DMS}
\end{equation}
with
\begin{equation}
\mathcal{V}^h=\left\{ u^h \bigg |u^h=\sum_{I=1}^{n_1} \sum_{J=1}^{n_2} N_I(x)N_J(y)u_{(I,J)}, u_{(I,J)}\in\mathbb{R}, I=1,\cdots,n_1, J=1,\cdots,n_2 \right\}.
\label{eq:2DFEMVh}
\end{equation}
As the same basis functions are used, it follows that:
\begin{description}
\item[1.] $\forall Q\in\mathbb{N}, \mathcal{M}_Q^h \subset \mathcal{V}^h$;
\item[2.] If $Q_1 \le Q_2$, $Q_1,Q_2 \in \mathbb{N}$, $\mathcal{M}_{Q_1}^h \subset \mathcal{M}_{Q_2}^h$;
\item[3.] $\forall Q\geq \min\{n_1,n_2\}, Q\in\mathbb{N}, \mathcal{M}_Q^h = \mathcal{V}^h$.
\end{description}
The first two statements are straightforward. The last property is proved in Appendix A.
We then conclude the following relationship:
\begin{equation}
\mathcal{M}^h_{Q=1} \subset \mathcal{M}^h_{Q=2} \subset \cdots \subset \mathcal{M}^h_{Q=\min\{n_1,n_2\}} = \mathcal{M}^h_{Q=\min\{n_1,n_2\}+1} = \cdots = \mathcal{V}^h.
\end{equation}
In other words, $\mathcal{M}^h_{Q}$ is always a subset of $\mathcal{V}^h$, and it coincides with $\mathcal{V}^h$ once $Q$ is large enough. Consequently, when a sufficient number of modes is taken, the canonical decomposition result reaches the same accuracy as the FEM solution. We remark that $\min\{n_1,n_2\}$ is precisely the minimal number of modes that guarantees $\mathcal{M}^h_{Q}=\mathcal{V}^h$.
The above discussions extend to 3D readily.
\begin{description}
\item[1.] $\forall Q\in\mathbb{N}, \mathcal{M}_Q^h \subset \mathcal{V}^h$;
\item[2.] If $Q_1 \le Q_2, Q_1,Q_2 \in \mathbb{N}, \mathcal{M}_{Q_1}^h \subset \mathcal{M}_{Q_2}^h$;
\item[3.] $\forall Q\geq \min\{n_1n_2,n_2n_3, n_1n_3\}, Q\in\mathbb{N}, \mathcal{M}_Q^h = \mathcal{V}^h$.
\end{description}
We note that finding the minimal number of modes to ensure $\mathcal{M}_Q^h = \mathcal{V}^h$ in 3D is essentially the problem of the best rank-$r$ approximation of an order-3 tensor, which is an open mathematical problem. An upper bound is given in property 3.
\section{Convergence for 2D canonical decomposition method at increasing number of modes}
\label{Appdix:2DConverge}
\begin{theorem}
For 2D case, $\mathcal{M}_Q^h$ and $\mathcal{V}^h$ are defined in Eq.(\ref{eq:2DMS}) and Eq.(\ref{eq:2DFEMVh}), respectively. We have
\begin{equation}
\forall Q \geq \min\{n_1,n_2\}, Q \in \mathbb{N}, \mathcal{M}_Q^h=\mathcal{V}^h.
\label{eq:2DMconvergeToV}
\end{equation}
\end{theorem}
\begin{proof}
For any interpolation function
\begin{equation}
u^{h,FEM}=\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} N_i(x)N_j(y) u_{i,j}
\end{equation}
in the set $\mathcal{V}^h$, we write the nodal values in the form of matrix,
\begin{equation}
\bm{U}=\left[
\begin{array}{cccc}
u_{1,1} & u_{1,2} & \cdots & u_{1,n_2} \\
u_{2,1} & u_{2,2} & \cdots & u_{2,n_2} \\
\vdots & \vdots & & \vdots \\
u_{n_1,1} & u_{n_1,2} & \cdots & u_{n_1,n_2}
\end{array}\right].
\end{equation}
According to the singular value decomposition (SVD), $\bm{U}$ is represented by
\begin{equation}
\bm{U}=\sum_{q=1}^{rank(\bm{U})} \sigma^{(q)} \bm{w}^{(q)} \otimes \bm{v}^{(q)}, \sigma^{(1)}\geq\sigma^{(2)}\geq\cdots\geq\sigma^{(rank(\bm{U}))}>0,
\end{equation}
where $\bm{w}^{(q)}$ is an $n_1$-dimensional vector and $\bm{v}^{(q)}$ is an $n_2$-dimensional vector. Thus $u^{h,FEM}$ can be rewritten in the form of a separation of variables, i.e.,
\begin{equation}
u^{h,FEM}=\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} N_i(x) N_j(y) \left(\sum_{q=1}^{rank(\bm{U})}\sigma^{(q)} w_i^{(q)} v_j^{(q)} \right).
\end{equation}
So we have $u^{h,FEM}\in\mathcal{M}_Q^h$, if $Q\geq\min\{n_1,n_2\}\geq rank(\bm{U})$. Combining with $\mathcal{M}_Q^h\subset\mathcal{V}^h$, we obtain Eq. (\ref{eq:2DMconvergeToV}).
\end{proof}
We remark that the SVD reveals the minimal number of modes needed to reproduce a given FE solution, namely $rank(\bm{U})$, which is bounded by $\min\{n_1,n_2\}$.
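The argument can be illustrated with a few lines of Python/NumPy (a minimal sketch; the random matrix stands in for arbitrary FEM nodal values):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)
n1, n2 = 7, 5
U = rng.normal(size=(n1, n2))            # generic nodal values, rank min(n1,n2)
W, s, Vt = np.linalg.svd(U, full_matrices=False)
for Q in range(1, min(n1, n2) + 1):
    UQ = (W[:, :Q] * s[:Q]) @ Vt[:Q, :]  # best rank-Q separated approximation
    print(Q, np.linalg.norm(U - UQ))     # error vanishes at Q = min(n1,n2)
\end{verbatim}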
\section{1D HiDeNN Formulation}
\label{sec:1DHiDeNN}
In standard 1D FEM, the computational domain $\Omega$ is discretized by a grid with $n$ nodes and the shape function associated with an internal node $x_I$ is
\begin{equation}
N_I(x)=\left\{
\begin{array}{cc}
\dfrac{x-x_{I-1}}{x_I-x_{I-1}}, & x_{I-1}\leq x \leq x_I, \\
\dfrac{x_{I+1}-x}{x_{I+1}-x_{I}}, & x_I \leq x \leq x_{I+1}, \\
0, & \mbox{elsewhere},
\end{array}
\right.
\label{eq:linearshapefunction}
\end{equation}
where $x_{I-1}$ and $x_{I+1}$ are the two neighbor points of the node $\:x_{I}$ from the left side and right side, respectively.
We rewrite $N_I(x)$ in a DNN format consisting of weights, biases, and activation functions. Since the shape function is piecewise linear, the activation function is chosen to be the ReLU function, i.e., $\mathcal{A}_{1}(x)=\max(0,x)$. Fig. \ref{fig:1D_DNN_representation}(a) shows the DNN representation of the linear shape function. The corresponding formula is
\begin{eqnarray}
\mathscr{N}_{I}(x;\bm{W}, \bm{b},\:\bm{\mathcal{A}})
&=& W_{11}^{l=4}\mathcal{A}_{1}\left( W_{11}^{l=3} \mathcal{A}_{1} \left( W_{11}^{l=2}x+b_{1}^{l=2} \right) +b_{1}^{l=3} \right) \\ \nonumber
&&+ W_{21}^{l=4}\mathcal{A}_{1} \left( W_{22}^{l=3} \mathcal{A}_{1} \left( W_{12}^{l=2}x+b_{2}^{l=2} \right) +b_{2}^{l=3} \right) +b_{1}^{l=4}\\ \nonumber
&=& \mathcal{A}_{1}\left( \dfrac{-1}{x_I-x_{I-1}} \mathcal{A}_{1} \left( -x+x_I \right) +1 \right) + \mathcal{A}_{1} \left( \dfrac{-1}{x_{I+1}-x_I} \mathcal{A}_{1} \left( x-x_I \right) +1 \right) -1, \nonumber
\end{eqnarray}
where $\bm{W}=[W_{11}^{l=2},W_{12}^{l=2},W_{11}^{l=3},W_{22}^{l=3},W_{11}^{l=4},W_{21}^{l=4}]$ and $\bm{b}=[b_{1}^{l=2},b_{2}^{l=2},b_{1}^{l=3},b_{2}^{l=3},b_{1}^{l=4}]$ are the weights and biases of the connected neurons. Note that all the weights and biases are functions of the nodal coordinates. The formula can be rewritten in the form
\begin{equation}
\mathcal{N}_I(\bm{x};\bm{x}_I^*,\bm{\mathcal{A}}),
\end{equation}
where $\bm{x}_I^*$ denotes the vector of the neighbor nodes of node $\bm{x}_I$ involved in $N_I(\bm{x})$. For the 1D linear shape function, $\bm{x}_I^*=[x_{I-1},\:x_{I},\:x_{I+1}]$. For the sake of clarity, one more layer is added to introduce the nodal value $u_I$, i.e., the formula becomes
\begin{eqnarray}
\mathscr{u}_I^{h}&=&\mathscr{N}_{I} (x;\:\bm{W},\bm{b},\:\bm{\mathcal{A}})\mathscr{u}_{I}=\mathscr{N}_{I} (x;\:\bm{x}_I^*,\:\bm{\mathcal{A}})\mathscr{u}_{I}; \mbox{ no summation on } {I}\\ \nonumber
&=& \mathcal{A}_{0}\left(\mathcal{A}_{1}\left( \dfrac{-1}{x_I-x_{I-1}} \mathcal{A}_{1} \left( -x+x_I \right) +1 \right) -0.5\right) \mathscr{u}_{I} \\ \nonumber
&&+ \mathcal{A}_{0}\left(\mathcal{A}_{1} \left( \dfrac{-1}{x_{I+1}-x_I} \mathcal{A}_{1} \left( x-x_I \right) +1 \right) -0.5\right) \mathscr{u}_{I},
\end{eqnarray}
where $\mathscr{u}_I^{h}$ and $\mathscr{u}_I$ are the interpolated displacement and nodal displacement at node $x_I$, and $\bm{\mathcal{A}}=[\mathcal{A}_{0},\:\mathcal{A}_{1}]$ are the activation functions used for the construction of the DNN approximation; $\mathcal{A}_{0}(x)=x$ is the identity function. Fig. \ref{fig:1D_DNN_representation}(b) gives the DNN representation of the interpolation of the nodal displacement at node $x_I$.
\begin{figure}[h]
\centering
\subfigure[DNN-based 1D shape function]{\includegraphics[width=0.42\textwidth]{global_shape_func_DNN.png}}
\hspace{0.1in}
\subfigure[DNN-based 1D interpolation function ]{\includegraphics[width=0.5\textwidth]{interpolation_func_DNN.png}}
\caption{Deep neural network (DNN) representation of the 1D global shape function and interpolation function.}
\label{fig:1D_DNN_representation}
\end{figure}
Once the shape function with nodal value for an arbitrary node $x_I$ is constructed, the interpolation is obtained by assembling all DNNs, i.e.,
\begin{equation}
\mathscr{u}^{h}(x)=\sum_{I=1}^{n}\mathscr{N}_{I} (x;\:\bm{x}_I^*,\:\bm{\mathcal{A}})\mathscr{u}_{I}.
\end{equation}
Compared with classical FEM, nodal positions are introduced as additional DoFs in the optimization for HiDeNN, which increases both the local and global accuracy of the interpolants.
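The equivalence between the ReLU network above and the standard hat function can be verified directly (a sketch; the function names are ours):
\begin{verbatim}
import numpy as np
relu = lambda x: np.maximum(0.0, x)

def hidenn_shape(x, xm, xi, xp):
    # DNN form of N_I with neighbour nodes xm < xi < xp, as constructed above
    left  = relu(-relu(-x + xi) / (xi - xm) + 1.0)
    right = relu(-relu( x - xi) / (xp - xi) + 1.0)
    return left + right - 1.0

def hat(x, xm, xi, xp):
    # standard piecewise linear FEM shape function, for comparison
    return np.clip(np.minimum((x - xm)/(xi - xm), (xp - x)/(xp - xi)), 0.0, None)

x = np.linspace(-1.0, 3.0, 1001)
print(np.allclose(hidenn_shape(x, 0.0, 1.0, 2.5), hat(x, 0.0, 1.0, 2.5)))  # True
\end{verbatim}
In HiDeNN, the nodal coordinates \texttt{xm}, \texttt{xi}, \texttt{xp} enter the weights and biases and are therefore trainable.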
Reference \cite{zhang2021hierarchical} also presented the DNN representation of various other interpolation functions including Lagrange polynomials, B-splines, the Reproducing Kernel Particle Method (RKPM), NURBS, Isogeometric Analysis (IGA), etc., as well as multidimensional shape functions.
\section{Space separated PGD}
When the domain is not intrinsically separable, fully separated representation can not be applied directly.
\cite{gonzalez2010recent} immersed the non-separable domain onto a fully separable one. \cite{ghnatios2019advanced} used geometrical mapping to deal with layered domain, where interfaces are not planar. Now we combine representation of separated variables with FE mapping to deal with more complex geometrical cases. In section 4.1, we define the mapping to a regular parameter space by means of FE geometrical mapping. Then section 4.2 addresses several solution schemes. Finally, we take a 2D Poisson problem as illustration for the whole solution procedure in section 4.3.
\subsection{Mesh mapping and recovering for irregular domains}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{PolarTransform.png}
\caption{A quarter of ring is transformed in to a rectangular by the polar coordinates transformation.}
\label{fig:PolarTrans}
\end{figure}
The main idea is to map the original irregular domain $\Omega_{(\bm{x})}$ to a regular one $\tilde{\Omega}_{(\tilde{\bm{x}})}$, and then apply the separated representation. Fig. \ref{fig:PolarTrans} illustrates a simple example: a quarter of a ring becomes a rectangle through the polar coordinate transformation. The new representation of separated variables then reads
\begin{equation}
u^h=\sum_{q=1}^Q \tilde{X}^{(q)}(\theta)\tilde{Y}^{(q)}(r),
\end{equation}
where $\theta$ and $r$ are the functions of space coordinates $x,y$.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{Mapping.png}
\caption{Illustration for the geometrical mapping. The irregular domain with irregular mesh is related to a regular domain with regular mesh by a 2-step mapping.}
\label{fig:FETrans}
\end{figure}
By virtue of the parametric transformation in FEM, we propose a general way to define this mapping, as illustrated in Fig. \ref{fig:FETrans}. We first present an FE mesh over the 2D irregular computational domain $\Omega_{\bm{x}}$ with nodes $\bm{x}_{i,j}, i=1,2,\cdots,n_1, j=1,2,\cdots,n_2$. Then we define a mapping to the corresponding lattice $(i,j)$, where $\tilde{\bm{x}}$ denotes the coordinates of the transformed domain $\tilde{\Omega}_{\tilde{\bm{x}}}$. The mapping consists of two steps:
1. Mapping each element to a square or a cube
The first mapping is the classical parametric mapping in FEM. We make a change of coordinates which maps the 4-node element into the square $[-1,1]^2$ in 2D, or the 8-node element into the cube $[-1,1]^3$ in 3D. The coordinates of a point $\bm{\xi}$ in the square are related to the physical coordinates of a point $\bm{x}$ in the element by a mapping of the form
\begin{equation}
\bm{x}=\sum_{a=1}^{n_e} N_a^e(\bm{\xi}) \bm{x}_a^e
\end{equation}
where $n_e$ is the number of nodes of the element ($n_e=4$ for 2D and $n_e=8$ for 3D), $\bm{x}_a^e$ is the coordinates of the $a$-th node of the element, and $N_a^e(\bm{\xi})$ is the corresponding shape function.
$\bm{\xi}$ are called the natural coordinates.
2. Mapping the square to a lattice
For the sake of the separated representation, we define a second mapping that assembles the squares into a lattice. The transformation formula is
\begin{equation}
\tilde{\bm{x}}=\sum_{a=1}^{n_e} N_a^e(\bm{\xi}) \tilde{\bm{x}}_a^e
\end{equation}
and the inverse transformation is
\begin{eqnarray}
\xi&=&\dfrac{2\tilde{x}-\tilde{x}^e_1-\tilde{x}^e_2}{\tilde{x}^e_2-\tilde{x}^e_1} \\
\eta&=&\dfrac{2\tilde{y}-\tilde{y}^e_1-\tilde{y}^e_2}{\tilde{y}^e_2-\tilde{y}^e_1} \\
\zeta&=&\dfrac{2\tilde{z}-\tilde{z}^e_1-\tilde{z}^e_2}{\tilde{z}^e_2-\tilde{z}^e_1}.
\end{eqnarray}
The final transformed domain $\tilde{\Omega}_{\tilde{\bm{x}}}$ is called the reference domain; it is a regular domain with a regular mesh $[\tilde{x}_1, \tilde{x}_2, \cdots, \tilde{x}_{n_1}]\times[\tilde{y}_1, \tilde{y}_2, \cdots, \tilde{y}_{n_2}]\times[\tilde{z}_1, \tilde{z}_2, \cdots, \tilde{z}_{n_3}]$. Here $\tilde{x}^e_1,\tilde{x}^e_2,\tilde{y}^e_1,\tilde{y}^e_2,\tilde{z}^e_1,\tilde{z}^e_2$ are the coordinates of the element $[\tilde{x}^e_1,\tilde{x}^e_2]\times[\tilde{y}^e_1,\tilde{y}^e_2]\times[\tilde{z}^e_1,\tilde{z}^e_2]$ in the reference domain. For convenience, we may take the mesh in the reference domain as the lattice corresponding to the indices of the nodes in the physical domain, i.e., $\tilde{x}_i=i,i=1,2,\cdots,n_1, \tilde{y}_j=j,j=1,2,\cdots,n_2, \tilde{z}_k=k,k=1,2,\cdots,n_3$.
The whole mapping is defined as below,
\begin{equation}
\bm{x}=\sum_{a=1}^{n_e} N_a^e(\bm{\xi}(\tilde{\bm{x}})) \bm{x}_a^e.
\label{eq:WholeMap}
\end{equation}
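A minimal sketch of the two-step mapping for a single 2D element (the node ordering and all names are assumptions for illustration):
\begin{verbatim}
import numpy as np
def shape4(xi, eta):
    # bilinear shape functions; assumed node order (-1,-1),(1,-1),(1,1),(-1,1)
    return 0.25 * np.array([(1-xi)*(1-eta), (1+xi)*(1-eta),
                            (1+xi)*(1+eta), (1-xi)*(1+eta)])

def phys(xi, eta, xe):                    # step 1: natural -> physical
    return shape4(xi, eta) @ xe           # xe: (4, 2) physical node coordinates

def nat_from_ref(tx, ty, cell):           # inverse of step 2: reference -> natural
    tx1, tx2, ty1, ty2 = cell
    return (2*tx - tx1 - tx2)/(tx2 - tx1), (2*ty - ty1 - ty2)/(ty2 - ty1)

xe = np.array([[0.0, 0.0], [2.0, 0.2], [2.2, 1.5], [-0.1, 1.0]])  # distorted quad
xi, eta = nat_from_ref(1.5, 1.25, (1.0, 2.0, 1.0, 2.0))  # point in lattice cell
print(xi, eta, phys(xi, eta, xe))  # natural coordinates and physical point
\end{verbatim}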
Then separation of spatial variables is applicable to the reference domain, i.e., the interpolation function set is
\begin{equation}
\tilde{\mathcal{M}}^h_Q=\left\{u^h\Bigg |u^h=\sum_{q=1}^Q \tilde{X}^{(q)}(\tilde{x})\tilde{Y}^{(q)}(\tilde{y})\tilde{Z}^{(q)}(\tilde{z})=\sum_{q=1}^Q \left( \sum_{i=1}^{n_1}N_i(\tilde{x})\beta_i^{(q)} \right) \left( \sum_{j=1}^{n_2}N_j(\tilde{y})\gamma_j^{(q)} \right) \left( \sum_{k=1}^{n_3}N_k(\tilde{z})\theta_k^{(q)} \right)\right\},
\end{equation}
where $N_i(\tilde{x}),N_j(\tilde{y}),N_k(\tilde{z})$ are shape functions, and $\beta_i^{(q)}, \gamma_j^{(q)}, \theta_k^{(q)}$ are the corresponding coefficients of the $q$-th mode.
Note that $\tilde{x},\tilde{y},\tilde{z}$ are functions of the physical coordinates $x,y,z$.
\subsection{Solution schemes}
There are different solution schemes for this problem. A straightforward way is to find the solution by minimizing the variational formula with a given number of modes $Q^*$ directly, i.e.,
\begin{equation}
u^h=\argmin_{u^h\in \tilde{\mathcal{M}}^h_{Q=Q^*}} \Pi(u^h(\bm{x};\bm{\beta}^{(q)},\bm{\gamma}^{(q)},\bm{\theta}^{(q)})).
\end{equation}
All the parameters $\bm{\beta}^{(q)},\bm{\gamma}^{(q)},\bm{\theta}^{(q)}$ are solved for at the same time.
Yet this global optimization might be expensive, so we may borrow the incremental solution scheme from PGD \cite{ammar2006new,chinesta2013proper}. More precisely, the solution scheme is
\begin{equation}
\begin{array}{lll}
\text{The first mode:} &\Delta u^{(1)}=\arg \min_{\Delta u \in \tilde{\mathcal{M}}^h_{Q=1}} \Pi[u^0+\Delta u], \quad u^{PGD,(1)}=u^0+\Delta u^{(1)};\\
\text{For $m>1$, the $m$-th mode:} &\Delta u^{(m)}=\arg \min_{\Delta u \in \tilde{\mathcal{M}}^h_{Q=1}} \Pi[u^{PGD,(m-1)}+\Delta u], \\
\text{The PGD solution with $m$ modes:} &u^{PGD,(m)}=u^{PGD,(m-1)}+\Delta u^{(m)}.
\end{array}
\end{equation}
We remark that it is also possible to solve several modes simultaneously in one incremental step.
In general, the initial guess $u^0$ is set to zero. When dealing with boundary conditions, $u^0$ can be an arbitrary continuous function satisfying the boundary conditions. We can also choose an appropriate initial guess to improve the efficiency.
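To make the incremental scheme and the alternating-direction iteration (detailed in section 4.3) concrete, the following self-contained sketch solves $-\Delta u = b$ on the unit square with homogeneous Dirichlet data, using lumped 1D finite-difference matrices; the discretization and all names are our own assumptions, not the implementation used later:
\begin{verbatim}
import numpy as np
n = 49; h = 1.0 / (n + 1)
K = (2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h   # 1D stiffness
M = h * np.eye(n)                                            # lumped 1D mass
x = np.linspace(h, 1.0 - h, n)
B = h*h * np.outer(np.sin(np.pi*x), np.sin(2*np.pi*x))  # b = sin(pi x) sin(2 pi y)
X, Y = [], []                                   # stored modes
for q in range(3):                              # incremental enrichment
    bx, by = np.ones(n), np.ones(n)
    for _ in range(30):                         # alternating-direction fixed point
        rhs = B @ by - sum((Yq @ M @ by)*(K @ Xq) + (Yq @ K @ by)*(M @ Xq)
                           for Xq, Yq in zip(X, Y))
        bx = np.linalg.solve((by @ M @ by)*K + (by @ K @ by)*M, rhs)
        bx /= np.linalg.norm(bx)                # fix the scaling in the x-factor
        rhs = B.T @ bx - sum((Xq @ M @ bx)*(K @ Yq) + (Xq @ K @ bx)*(M @ Yq)
                             for Xq, Yq in zip(X, Y))
        by = np.linalg.solve((bx @ M @ bx)*K + (bx @ K @ bx)*M, rhs)
    X.append(bx); Y.append(by)
u = sum(np.outer(Xq, Yq) for Xq, Yq in zip(X, Y))
exact = np.outer(np.sin(np.pi*x), np.sin(2*np.pi*x)) / (5.0*np.pi**2)
print(np.max(np.abs(u - exact)))  # small: the separable b needs only one mode
\end{verbatim}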
These two solution schemes have their respective advantages and disadvantages. With the same number of modes $Q^*$, the latter scheme solves the modes one by one, while the former optimizes all the modes together. Thus, we have $\Pi(u^{CD})\leq\Pi(u^{PGD})$ and then obtain
\begin{equation}
\| u^{CD}-u^{exact} \|_E \leq \| u^{PGD}-u^{exact} \|_E.
\end{equation}
This indicates that the solution of the former scheme may be more accurate than that of the incremental one; however, it may also cost more.
\subsection{Illustrating the solution procedure: the 2D Poisson problem}
For the sake of simplicity and without loss of generality, we consider the 2D Poisson problem with the incremental solution scheme for illustration,
\begin{equation}
\left\{\begin{array}{l}
\nabla^2 \bm{u}(x,y)+b(x,y)=0 \text{ in } \Omega_{(x,y)} \subset \mathbb{R}^2, \\
\bm{u}|_{\partial \Omega}=\bm{0}.
\end{array}\right.
\label{eq:PoissonEq}
\end{equation}
Problem (\ref{eq:PoissonEq}) is solved in the irregular domain $\Omega_{(x,y)}$ with homogeneous boundary conditions.
The solution is assumed in the form of (\ref{eq:2DMS}). We then solve it with the incremental solution scheme. Suppose the previous $q-1$ modes have been solved. The $q$-th mode is obtained by
\begin{equation}
\Delta u^{(q)}=\argmin_{\Delta u \in \tilde{\mathcal{M}}^h_{Q=1}} \Pi[u^{PGD,(q-1)}+\Delta u],
\end{equation}
where $u^{PGD,(q-1)}$ is the sum of the previous $q-1$ modes.
We rewrite the interpolation function in the following matrix form
\begin{equation}
u^{PGD,(q)}=u^{PGD,(q-1)}+\Delta u^{(q)}, \Delta u^{(q)}=\left((\bm{\beta}^{(q)})^T \bm{N}^{\beta}(\tilde{x})\right)\left((\bm{\gamma}^{(q)})^T \bm{N}^{\gamma}(\tilde{y})\right),
\label{eq:PGDway}
\end{equation}
where $\bm{\beta}^{(q)}, \bm{\gamma}^{(q)}$ are the coefficient vectors, and $\bm{N}^{\beta}(\tilde{x}),\bm{N}^{\gamma}(\tilde{y})$ denote the vectors containing the shape functions.
Substituting (\ref{eq:PGDway}) into the variational formula (\ref{eq:PoissonVar}), we have
\begin{eqnarray}
\Pi(u^{PGD,(q)})&=&\dfrac{1}{2}\int_{\Omega_{(x,y)}} \left(\nabla (\Delta u^{(q)})\right)^2 \mathrm{d}x\mathrm{d}y+\int_{\Omega_{(x,y)}} \left(\nabla (\Delta u^{(q)})\right)\cdot\left(\nabla u^{PGD,(q-1)}\right) \mathrm{d}x\mathrm{d}y \\ \nonumber
&&- \int_{\Omega_{(x,y)}} \Delta u^{(q)}\, b(x,y)\,\mathrm{d}x\mathrm{d}y + \Pi(u^{PGD,(q-1)}).
\end{eqnarray}
The quadratic term with respect to $\bm{\beta}^{(q)}, \bm{\gamma}^{(q)}$ in the variational formula is given by
\begin{eqnarray}
\int_{\Omega_{(x,y)}} \left(\nabla (\Delta u^{(q)})\right)^2 \mathrm{d}x\mathrm{d}y &=& \int_{\tilde{\Omega}_{(\tilde{x},\tilde{y})}} \left(\nabla_{(\tilde{x},\tilde{y})} (\Delta u^{(q)})\right)^T \bm{J}^{-T} \bm{J}^{-1} \nabla_{(\tilde{x},\tilde{y})} (\Delta u^{(q)}) \mathrm{det}(\bm{J}) \mathrm{d}\tilde{x}\mathrm{d}\tilde{y}.
\label{eq:Integration}
\end{eqnarray}
with the Jacobi matrix
\begin{equation}
\bm{J}=\dfrac{\partial(x,y)}{\partial(\tilde{x},\tilde{y})}.
\end{equation}
The gradient of $\Delta u^{(q)}$ is
\begin{equation}
\nabla_{(\tilde{x},\tilde{y})}\Delta u^{(q)}=
\left[\begin{array}{c}
\dfrac{\partial}{\partial \tilde{x}}\\
\dfrac{\partial}{\partial \tilde{y}}
\end{array}\right]
\Delta u^{(q)}=
\left[\begin{array}{c}
(\bm{\beta}^{(q)})^T \dfrac{d\bm{N}^{\beta}(\tilde{x})}{d\tilde{x}}(\bm{N}^{\gamma}(\tilde{y}))^T \gamma^{(q)}\\
(\bm{\beta}^{(q)})^T \bm{N}^{\beta}(\tilde{x})(\dfrac{d\bm{N}^{\gamma}(\tilde{y})}{d\tilde{y}})^T \gamma^{(q)}
\end{array}\right].
\end{equation}
Note that the resulting algebraic system is nonlinear. Since it is hard to solve directly, we use the alternating direction strategy below:
In each iteration step,
1. Fix $\bm{\gamma}^{(q)}$ and solve for $\bm{\beta}^{(q)}$
The quadratic term becomes
\begin{equation}
\int_{\Omega_{(x,y)}} \left(\nabla (\Delta u^{(q)})\right)^2 \mathrm{d}x\mathrm{d}y=\int_{\tilde{\Omega}_{(\tilde{x},\tilde{y})}} \beta^{(q),T} \bm{B}^{\beta,T}(\tilde{x},\tilde{y})\bm{B}^{\beta}(\tilde{x},\tilde{y})\beta^{(q)} \mathrm{det}(\bm{J}) \mathrm{d}\tilde{x}\mathrm{d}\tilde{y}
\end{equation}
with
\begin{equation}
\bm{B}^{\beta}(\tilde{x},\tilde{y})=\bm{J}^{-1}
\left[\begin{array}{c}
(\bm{\gamma}^{(q)})^T\bm{N}^{\gamma}(\tilde{y}) (\dfrac{d\bm{N}^{\beta}(\tilde{x})}{d\tilde{x}})^T\\
(\bm{\gamma}^{(q)})^T\dfrac{d\bm{N}^{\gamma}(\tilde{y})}{d\tilde{y}} (\bm{N}^{\beta}(\tilde{x}))^T
\end{array}\right].
\end{equation}
The stiffness matrix for $\beta^{(q)}$ is
\begin{equation}
\bm{K}^{\beta}=\int_{\tilde{\Omega}_{(\tilde{x},\tilde{y})}} \bm{B}^{\beta,T}(\tilde{x},\tilde{y})\bm{B}^{\beta}(\tilde{x},\tilde{y}) \mathrm{det}(\bm{J}) \mathrm{d}\tilde{x}\mathrm{d}\tilde{y}.
\end{equation}
2. Fix $\bm{\beta}^{(q)}$ and solve for $\bm{\gamma}^{(q)}$
The quadratic term becomes
\begin{equation}
\int_{\Omega_{(x,y)}} \left(\nabla (\Delta u^{(q)})\right)^2 \mathrm{d}x\mathrm{d}y=\int_{\tilde{\Omega}_{(\tilde{x},\tilde{y})}} \gamma^{(q),T} \bm{B}^{\gamma,T}(\tilde{x},\tilde{y})\bm{B}^{\gamma}(\tilde{x},\tilde{y})\gamma^{(q)} \mathrm{det}(\bm{J}) \mathrm{d}\tilde{x}\mathrm{d}\tilde{y}
\end{equation}
with
\begin{equation}
\bm{B}^{\gamma}(\tilde{x},\tilde{y})=\bm{J}^{-1}
\left[\begin{array}{c}
(\bm{\beta}^{(q)})^T \dfrac{d\bm{N}^{\beta}(\tilde{x})}{d\tilde{x}}(\bm{N}^{\gamma}(\tilde{y}))^T\\
(\bm{\beta}^{(q)})^T \bm{N}^{\beta}(\tilde{x})(\dfrac{d\bm{N}^{\gamma}(\tilde{y})}{d\tilde{y}})^T
\end{array}\right].
\end{equation}
The stiffness matrix for $\gamma^{(q)}$ is
\begin{equation}
\bm{K}^{\gamma}=\int_{\tilde{\Omega}_{(\tilde{x},\tilde{y})}} \bm{B}^{\gamma,T}(\tilde{x},\tilde{y})\bm{B}^{\gamma}(\tilde{x},\tilde{y}) \mathrm{det}(\bm{J}) \mathrm{d}\tilde{x}\mathrm{d}\tilde{y}.
\end{equation}
If we present a regular mesh over a regular domain $\Omega_{(x,y)}$, the mapping is a linear transformation of coordinates. Consider one element $[x^e_1,x^e_2]\times[y^e_1,y^e_2]$ of the regular mesh. The mapping (\ref{eq:WholeMap}) reduces to
\begin{equation}
x=\dfrac{x^e_2-x^e_1}{\tilde{x}^e_2-\tilde{x}^e_1}(\tilde{x}-\tilde{x}^e_1)+x^e_1,
y=\dfrac{y^e_2-y^e_1}{\tilde{y}^e_2-\tilde{y}^e_1}(\tilde{y}-\tilde{y}^e_1)+y^e_1.
\end{equation}
$\bm{J}$ reduces to a diagonal matrix $\mathrm{diag}(\dfrac{x^e_2-x^e_1}{\tilde{x}^e_2-\tilde{x}^e_1},\dfrac{y^e_2-y^e_1}{\tilde{y}^e_2-\tilde{y}^e_1})$. Thus we have
\begin{equation}
\bm{J}^{-T}\bm{J}^{-1}\mathrm{det}(J)=\dfrac{x^e_2-x^e_1}{\tilde{x}^e_2-\tilde{x}^e_1}\dfrac{y^e_2-y^e_1}{\tilde{y}^e_2-\tilde{y}^e_1},
\end{equation}
which is constant in each element and hence in separated form. Thus the method degenerates to the classical PGD. For an irregular mesh, $\bm{J}^{-T}\bm{J}^{-1}\mathrm{det}(\bm{J})$ in (\ref{eq:Integration}) is a function of $\bm{x}$ and in general not in separated form. In the numerical implementation, we usually approximate it by a separated form using the SVD technique, i.e.,
\begin{equation}
\bm{J}^{-T}\bm{J}^{-1}\mathrm{det}(J)\approx
\left[\begin{array}{cc}
\sum_a \phi_{11}^{(a)}(\tilde{x})\psi_{11}^{(a)}(\tilde{y}) & \sum_a\phi_{12}^{(a)}(\tilde{x})\psi_{12}^{(a)}(\tilde{y}) \\
\sum_a\phi_{12}^{(a)}(\tilde{x})\psi_{12}^{(a)}(\tilde{y}) & \sum_a\phi_{22}^{(a)}(\tilde{x})\psi_{22}^{(a)}(\tilde{y})
\end{array}\right].
\end{equation}
This converts the 2D integrals in (\ref{eq:Integration}) into products of 1D integrals along the different directions ($\tilde{x}$ and $\tilde{y}$ in the reference domain), which can reduce the computational cost of integration.
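As an illustration of this SVD-based separation, the following sketch compresses a sampled, generally non-separable entry; the chosen entry function is a stand-in assumption, not an actual Jacobian product.
\begin{verbatim}
import numpy as np

xt = np.linspace(0.0, 1.0, 200)
yt = np.linspace(0.0, 1.0, 200)
G = np.exp(-np.add.outer(xt, yt)**2)   # a non-separable sample entry (assumed)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))      # truncation rank for a relative tolerance
phi = U[:, :r] * s[:r]                 # phi_a sampled on the x~ grid
psi = Vt[:r, :]                        # psi_a sampled on the y~ grid
print(r, np.linalg.norm(G - phi @ psi) / np.linalg.norm(G))
\end{verbatim}
Each retained pair $(\phi^{(a)},\psi^{(a)})$ multiplies the separated trial functions, so every 2D integral factorizes into products of 1D integrals in $\tilde{x}$ and $\tilde{y}$.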
\subsection{Numerical examples}
\begin{figure}[htbp]
\centering
\subfigure[Physical domain ]{\includegraphics[width=3in]{figures/PGDGeo_Corner.png}}
\subfigure[FE mesh]{\includegraphics[width=3in]{figures/PGDMesh_Corner.png}}
\subfigure[Reference solution ]{\includegraphics[width=3in]{figures/PGDRef_Corner.png}}
\caption{Geometry and mesh for the plate. (a) Model of the plate. (b) A $100$-element mesh as an illustration. (c) Reference solution with $4.90\times 10^7$ elements.}
\label{fig:PGDEg_Corner}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[Physical domain ]{\includegraphics[width=2in]{figures/PGDGeo_Hole.png}}
\subfigure[FE mesh]{\includegraphics[width=3in]{figures/PGDMesh_Hole.png}}
\subfigure[Reference mesh]{\includegraphics[width=2.2in]{figures/PGDRefMesh_Hole.png}}
\subfigure[Reference solution ]{\includegraphics[width=3in]{figures/PGDRef_Hole.png}}
\caption{Geometry and mesh for the plate with one hole. (a) Model of the plate with one hole. (b) Discretization by an $84$-element mesh. (c) Reference mesh corresponding to the $84$-element mesh. (d) Reference solution.}
\label{fig:PGDEg_Hole}
\end{figure}
\section{Numerical examples}
In this section, we study the performance of HiDeNN-PGD in comparison with the FEM, HiDeNN and PGD methods. This study mainly focuses on the accuracy comparison and the convergence behavior, since the computational cost can be strongly affected by an under-optimized implementation. The computational efficiency of the proposed HiDeNN-PGD method will be investigated in our future work on the basis of GPU computing.
\subsection{2D case }
The HiDeNN-PGD is applied to the Poisson problem with a concentrated load. To demonstrate the capability of the method, the HiDeNN-PGD analysis is initialized with a uniform mesh of $40\times40$ elements. As shown in \figurename~\ref{fig:HiDPGDtoFEM}, the final solution agrees with the reference FE solution. This reference solution is obtained with a very fine mesh containing $4,000\times 4,000$ elements.
\tablename~\ref{table:error} illustrates the evolution of the accuracy with an increasing number of modes. For comparison purposes, we also applied PGD, CD, FEM and HiDeNN to the same problem on the uniform $40\times40$ mesh. The errors of the different methods are computed in the energy norm with respect to the reference solution. As expected, HiDeNN is the most accurate method but leads to a significantly larger number of degrees of freedom (DoFs). The proposed HiDeNN-PGD method can reach the same level of accuracy while requiring only a small number of modes. Compared to PGD and CD, HiDeNN-PGD is much more accurate at a limited number of modes. Taking a closer look at PGD and CD, these two methods converge overall to the FEM solution on the coarse mesh. HiDeNN-PGD and HiDeNN can overcome this limitation imposed by the mesh size through adaptivity. This observation is consistent with our theoretical analysis. Moreover, it should be noticed that, unlike HiDeNN, HiDeNN-PGD increases the DoFs only slightly compared with PGD and CD. This is attributed to the separation of variables. Indeed, the mesh adaptation is performed only along the two separated axes, as shown in \figurename~\ref{fig:HiDPGDtoFEM}(d). For comparison purposes, the final optimized mesh of HiDeNN is illustrated in \figurename~\ref{fig:MeshForHPandHiDeNN}. HiDeNN enables full adaptivity of the entire mesh, which leads to a significantly different and more accurate final result. Nevertheless, HiDeNN-PGD shows attractive advantages in terms of DoFs.
\figurename~\ref{fig:mode} illustrates the first four modes of HiDeNN-PGD, PGD and CD. The PGD modes remain similar to those of CD in this example. However, the modes of HiDeNN-PGD are more concentrated in the region of interest. This difference mainly comes from the mesh adaptivity.
To further confirm the performance of HiDeNN-PGD, we have compared the accuracy of different methods on different meshes. In \tablename~\ref{table:errormesh}, the PGD, CD and HiDeNN-PGD results are obtained from the final converged mode. It is shown that the HiDeNN-PGD always gives more accurate results with fewer degrees of freedom. This confirms the previous observation.
\begin{figure}[htbp]
\centering
\subfigure[Reference FE solution obtained with an extremely fine mesh ]{\includegraphics[scale=0.9]{figures/FEMref.PNG}}\quad \quad
\subfigure[HiDeNN-PGD solution]{\includegraphics[scale=0.9]{figures/HiDeNNPGDsolution.PNG}}
\subfigure[Initial uniform mesh for HiDeNN-PGD ]{\includegraphics[scale=1]{figures/HiDeNNPGDmesh0.PNG}}
\subfigure[Final optimized mesh for HiDeNN-PGD]{\includegraphics[scale=0.9]{figures/HiDeNNPGDmesh.PNG}}
\caption{FE solution versus HiDeNN-PGD solution.}
\label{fig:HiDPGDtoFEM}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[Initial mesh for HiDeNN ]{\includegraphics[scale=0.5]{figures/HiDeNNPGDmesh.png}}
\subfigure[Final optimized mesh for HiDeNN]{\includegraphics[scale=0.5]{figures/HiDeNNmesh.PNG}}
\caption{Mesh optimization in HiDeNN.}
\label{fig:MeshForHPandHiDeNN}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[First four modes in HiDeNN-PGD ]{\includegraphics[scale=0.7]{figures/HiDeNNPGDmode.PNG}}
\subfigure[First four modes in PGD]{\includegraphics[scale=0.7]{figures/PGDmode.PNG}}
\subfigure[First four modes in CD ]{\includegraphics[scale=0.7]{figures/MSmode.PNG}}
\caption{Mode comparison for HiDeNN-PGD, PGD and CD.}
\label{fig:mode}
\end{figure}
\begin{table}[!htb]
\caption{Accuracy comparison for different methods on the coarse 40 by 40 mesh}
\centering
\begin{tabular}{|c | c c | c c | c c | c c | c c|}
\hline
& \multicolumn{2}{c}{PGD}& \multicolumn{2}{|c}{CD} &\multicolumn{2}{|c}{FEM}&\multicolumn{2}{|c}{HiDeNN-PGD}&\multicolumn{2}{|c|}{HiDeNN} \\ \hline
Mode number & DoFs& Err & DoFs& Err & DoFs& Err& DoFs& Err & DoFs& Err \\ \hline
1 & 78& $38.167\%$ & 78& $38.167\%$ & \textbf{1521}& $\bm{11.659\%}$& 156& $37.357\%$ & \textbf{4719}& $\bm{2.102\%}$ \\ \hline
2 & 156& $16.500\%$ & 156& $14.422\%$ & &- & \textbf{234}& $\bm{9.293\%}$ & & - \\ \hline
3 & 234& $13.188\%$ & 234& $11.789\%$ & &- & 312& $3.674\%$ & & - \\ \hline
4 & 312& $11.811\%$ &312& $11.664\%$ & &- & 390& $3.662\%$ & & - \\ \hline
5 & 390& $11.685\%$ &\textbf{390}& $\bm{11.659\%}$ & &- &468 & $3.661\%$ & & - \\ \hline
6 & 468& $11.666\%$ &468& $11.659\%$ & &- &546& $3.661\%$ & & - \\ \hline
8 & \textbf{624}& $\bm{11.659\%}$ & & - & &- && - & & - \\ \hline
20 & 1560& $11.659\%$ & & - & &- && - & & - \\ \hline
\end{tabular}
\label{table:error}
\end{table}
\begin{table}[!htb]
\caption{Accuracy comparison for different methods with different meshes }
\centering
\begin{tabular}{|c | c c | c c | c c | c c | c c|}
\hline
& \multicolumn{2}{c}{PGD}& \multicolumn{2}{|c}{CD} &\multicolumn{2}{|c}{FEM}&\multicolumn{2}{|c}{HiDeNN-PGD}&\multicolumn{2}{|c|}{HiDeNN} \\ \hline
Mesh & DoFs& Err & DoFs& Err & DoFs& Err& DoFs& Err & DoFs& Err \\ \hline
$40\times40$ & 624(8) & $11.659\%$ & 390(5) & $11.659\%$ & \textbf{1,521} & $\bm{11.659\%}$ & \textbf{468}(5) & $\bm{3.661\%}$ & \textbf{4,719} & $\bm{2.102\%}$ \\ \hline
$80\times80$ & 1,422(9) & $5.887\%$ & 790(5) & $5.887\%$ & 6,241 & $5.887\%$ & 1,106(6) & $1.851\%$ & 19,039 & $1.406\%$ \\ \hline
$160\times160$ & 2,862(9) & $2.948\%$ & 1,908(6) & $2.948\%$ & 25,281 & $2.948\%$ & 2,226(6) & $1.174\%$ & 76,479 & $1.081\%$ \\ \hline
$320\times320$ & 7,018(11) & $1.469\%$ & 5,104(8) & $1.469\%$ & 101,761 & $1.469\%$ & 3,828(5) & $0.896\%$ & 306,559 & $0.889\%$ \\ \hline
$640\times640$ & 11,502(9) & $0.724\%$ & 7,668(6) & $0.724\%$ & 408,321 & $0.724\%$ & 10,224(7) & $0.606\%$ & 1,227,519 & $0.597\%$ \\ \hline
\end{tabular}
\label{table:errormesh}
Note that the number in the parentheses indicates the number of modes.
\end{table}
\subsubsection{Convergence studies}
In the HiDeNN-PGD method, the mode number has to be prescribed. In general, it is unknown for a given problem and can vary significantly from one problem to another. Thus, we study the convergence properties of HiDeNN-PGD and PGD to provide general guidance on choosing the mode number.
As a first attempt, we restrict ourselves to a one-mode solution problem. This eliminates the effect of the number of modes and allows us to study the convergence rate of the methods with respect to mesh refinement. This kind of error is usually known as the discretization error in FEM. To do so, the body force term is manufactured so that the final solution is analytically known in separated form. As shown in \figurename~\ref{fig:convdof}, the PGD and HiDeNN-PGD results converge with respect to the element size at a rate similar to FEM. However, if we consider the DoFs, the error converges much faster for PGD and HiDeNN-PGD. This confirms that the separation of variables in PGD and HiDeNN-PGD does not degrade the convergence rate with respect to mesh refinement.
The PGD based model reduction induces two kinds of errors: the mesh discretization error and the mode reduction error. In particular, we have theoretically shown that the latter is independent of the mesh. To verify this numerically, we use a manufactured load and compute the multi-mode PGD solution on different meshes. The results are shown in \figurename~\ref{fig:convPGD2D}. As expected, the convergence rates with respect to the number of modes remain similar regardless of the mesh size. The logarithm of the error appears to decrease linearly with the mode number, and the decreasing slope remains unchanged from the very coarse mesh to the fine one. This implies that a coarse mesh has the same mode reduction error as a fine mesh and hence can be used to choose the mode number.
From the above observation, we may use a coarse-mesh PGD, which is very cheap, to study the mode reduction error and to choose an appropriate mode number for HiDeNN-PGD. Since HiDeNN-PGD is always more accurate than the usual PGD, the mode number selected in this way should be large enough. A minimal selection heuristic is sketched below.
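The following sketch implements such a heuristic; the stagnation tolerance is our illustrative assumption.
\begin{verbatim}
import numpy as np

def choose_mode_number(coarse_errors, tol=1e-2):
    # coarse_errors[q-1]: energy-norm error of the coarse-mesh PGD with q modes
    e = np.asarray(coarse_errors, dtype=float)
    for q in range(1, len(e)):
        if (e[q-1] - e[q]) / e[q-1] < tol:   # relative improvement stagnates
            return q
    return len(e)

# with the coarse-mesh (40 x 40) PGD errors reported above:
print(choose_mode_number([0.38167, 0.16500, 0.13188, 0.11811, 0.11685, 0.11666]))
\end{verbatim}
For the $40\times40$ example of \tablename~\ref{table:error}, this heuristic returns $Q=5$, consistent with the mode number at which HiDeNN-PGD converged.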
\begin{figure}[htbp]
\centering
\subfigure[Convergence with the element size ]{\includegraphics[width=3in]{figures/ConvElementSize.png}}
\subfigure[Convergence with DoFs]{\includegraphics[width=3in]{figures/ConvDOF.png}}
\caption{Convergence rates of FEM, PGD and HiDeNN-PGD with respect to mesh refinement}
\label{fig:convdof}
\end{figure}
\begin{figure}[htbp]
\centering
{\includegraphics[scale=0.9]{figures/ConvPGD2D.PNG}}
\caption{Convergence of PGD with respect to the increasing number of modes for different meshes}
\label{fig:convPGD2D}
\end{figure}
\subsection{3D case }
The proposed HiDeNN-PGD method has also been tested in three-dimensional cases. Similar to the previous two-dimensional example, \tablename~\ref{table:3Derror} reports the evolution of the error with an increasing mode number for the different methods on a coarse mesh. Again, HiDeNN-PGD outperforms the other methods in terms of accuracy and DoFs. The same conclusion can be drawn on finer meshes, as reported in \tablename~\ref{table:3Derrormesh}.
\begin{table}[!htb]
\caption{Accuracy comparison for different methods on the coarse $40\times40\times40$ mesh}
\centering
\begin{tabular}{|c | c c | c c | c c | c c | c c |}
\hline
& \multicolumn{2}{c}{PGD}& \multicolumn{2}{|c}{CD} &\multicolumn{2}{|c}{FEM}&\multicolumn{2}{|c|}{HiDeNN-PGD}&\multicolumn{2}{|c|}{HiDeNN} \\ \hline
Mode number & DoFs& Err & DoFs& Err & DoFs& Err& DoFs& Err& DoFs& Err \\ \hline
1 & 117& $72.474\%$ & 117& $72.474\%$ & \textbf{59,319}& $\bm{27.870\%}$ & 234& $72.349\%$ & \textbf{255,996} & $\bm{10.780\%}$ \\ \hline
2 & 234& $29.883\%$ & 234& $27.927\%$ & &- & \textbf{351}& $\bm{10.797\%}$ & &-\\ \hline
3 & 351& $28.026\%$ & 351& $27.923\%$ & &- & 468& $10.797\%$ & &- \\ \hline
4 & 468& $27.931\%$ &468& $27.920\%$ & &- & 585& $10.797\%$ & &- \\ \hline
5 & 585& $27.895\%$ &585& $27.872\%$ & &- & 702 & $10.796\%$ & &- \\ \hline
6 & 702& $27.881\%$ &702& $27.871\%$ & &- & - & -& &-\\ \hline
7 & 819& $27.877\%$ &819& $27.871\%$ & &- & - & -& &-\\ \hline
8 & 936& $27.874\%$ &\textbf{936}& $\bm{27.870\%}$ & &- & - & -& &-\\ \hline
17 & \textbf{1,989}& $\bm{27.870\%}$ & & - & &- & & -& &-\\ \hline
20 & 2,340& $27.870\%$ & & - & &- & - & -& &-\\ \hline
\end{tabular}
\label{table:3Derror}
\end{table}
\begin{table}[!htb]
\caption{Accuracy comparison for different methods with different meshes for 3D problem }
\centering
\begin{tabular}{|c | c c | c c | c c | c c | c c | }
\hline
& \multicolumn{2}{c}{PGD}& \multicolumn{2}{|c}{CD} &\multicolumn{2}{|c}{FEM}&\multicolumn{2}{|c|}{HiDeNN-PGD}&\multicolumn{2}{|c|}{HiDeNN} \\ \hline
Mesh & DoFs& Err & DoFs& Err & DoFs& Err& DoFs& Err & DoFs& Err \\ \hline
$40\times40\times40$ & 1,989(17) & $27.870\%$ & 936(8) & $27.870\%$ & \textbf{59,319} & $\bm{27.870\%}$ & \textbf{702}(5) & $\bm{10.796\%}$ & \textbf{255,996} & $\bm{10.780\%}$ \\ \hline
$80\times80\times80$ & 2,607(11) & $14.416\%$ & 1,185(5) & $14.416\%$ & 493,039 & $14.416\%$ & 1,185(4) & $6.771\%$ & 2,047,996 & - \\ \hline
$160\times160\times160$ & 4,770(10) & $7.247\%$ & 2,385(5) & $7.247\%$ & 4,019,679 & $7.247\%$ & 3,339(6) & $4.036\%$ & 16,383,996 & - \\ \hline
$320\times320\times320$ & 9,570(10) & $3.628\%$ & 5,742(6) & $3.628\%$ & 32,461,759 & $3.628\%$ & 5,742(5) & $1.831\%$ & 131,071,996 & - \\ \hline
\end{tabular}
\label{table:3Derrormesh}
\end{table}
\section{ DNN, HiDeNN, FEM and canonical decomposition based function approximation}
Function approximation is a key component in numerical solutions of partial differential equations (PDEs). In this section, we briefly review how such an approximation can be performed in terms of FEM, DNN, HiDeNN and Canonical tensor Decomposition (CD). We present their approximation function sets, which will be used in the theoretical analysis in Subsection 2.2.
We restrict the discussion to a scalar-valued function ${u}(\bm{x}):\mathbb{R}^3\rightarrow \mathbb{R}$. The conclusions extend straightforwardly to vector-valued functions.
\begin{figure}[h]
\centering
\subfigure[DNN-based interpolation function]{\includegraphics[trim={1.2cm 0 0 0},width=0.41\textwidth]{NNShapeDNN.png}}
\subfigure[HiDeNN interpolation function]{\includegraphics[width=0.45\textwidth]{HiDeNN_DNN.png}}
\subfigure[FEM interpolation function]{\includegraphics[width=0.45\textwidth]{FEM_DNN.png}}
\subfigure[CD interpolation function]{\includegraphics[width=0.45\textwidth]{MS_DNN.png}}
\caption{Illustration for the interpolation functions in the form of DNN with $\bm{x}=(x,y,z)$ as input and $u^h$ as output. Weights and biases inside dashed line-box are constrained, and those inside the red solid line-box are fixed. $\mathcal{A}_0$ is the identity activation function defined by $\mathcal{A}_0(x)=x$. }
\label{fig:Interpolant_DNNrepresentation}
\end{figure}
\subsection{Overview of the approximation function sets}
\textbf{Deep neural network-based method}
According to the universal approximation theorem, a deep neural network (DNN) can be designed to approximate any given continuous function to desired accuracy \cite{hornik1989multilayer,cybenko1989approximation,tang2021neural}. Thus it can be a candidate to approximate solutions for solving PDEs \cite{wu2016physics,xiao2016quantifying,weinan2017deep,berg2018unified,raissi2017physics,raissi2019physics,sirignano2018dgm,weinan2018deep}, i.e.,
\begin{equation}
u^h(\bm{x})=\mathcal{F}^{NN}(\bm{x}),
\end{equation}
where $\mathcal{F}^{NN}$ represents the neural network with $\bm{x}$ as input and $u^h$ as output. Note that $u^h$ can be a multidimensional vector.
For instance, in a classical feedforward neural network (FFNN) \cite{Goodfellow-et-al-2016,haykin1994neural,oishi2017computational} with $N_L$ layers, recursive relations among neurons are as follows
\begin{eqnarray} \label{eq:transfer-function}
&&a^{l}_{j=1}=x, a^{l}_{j=2}=y, a^{l}_{j=3}=z, \text{ if } l=1 \text{ (input layer)}; \\
&&a^{l}_j=\mathcal{A}(\sum_{i=1}^{N_N(l-1)}{W^{l}_{ij} a^{l-1}_i + b^{l}_j}), \text{ if } l\in\{2,...,N_L-1\} \text{ (hidden layer)}.
\end{eqnarray}
Hence, the output layer can be defined as
\begin{eqnarray}
\mathcal{F}^{NN}_j&=&a^{N_L}_j=\sum_{i=1}^{N_N(N_L-1)}{W^{N_L}_{ij} a^{N_L-1}_i + b^{N_L}_j}, \text{ if } l=N_L \text{ (output layer)},
\end{eqnarray}
with the detailed definition of the notations in Table \ref{table:ffnn-notation}. Therefore, once the weights $\bm{W}$, biases $\bm{b}$ and activation functions $\bm{\mathcal{A}}$ have been chosen, $\mathcal{F}^{NN}$ can serve as an approximation function with the input variable as $\bm{x}=(x,y,z)$.
\begin{table}[h!]
\centering
\caption{Notation table of variables used in the feed forward neural network}
\label{table:ffnn-notation}
\begin{tabular}{cl}
\hline
$\bm{x}=(x,y,z)$ & Space coordinates \\
$l$ & Counting index for number of layers \\
$i$ & Counting index for neurons in layer $l-1$ \\
$j$ & Counting index for neurons in layer $l$ \\
$N_L$ & Number of layers in the neural network \\
$N_N(l)$ & Number of neurons in layer $l$ \\
$W_{ij}^{l}$ & Weight connecting the $i^\text{th}$ neuron in layer $l-1$ to the $j^\text{th}$ in layer $l$ \\
$b_{j}^{l}$ & Bias of the $j^\text{th}$ neuron in layer $l$\\
$a_j^{l}$ & Neuron value for $j^{th}$ neuron in $l^{th}$ layer \\
$\mathcal{A}$ & Activation function \\
$\mathcal{F}^{NN}$ & Feedforward neural network function\\
\hline
\end{tabular}
\end{table}
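For concreteness, the recursion \eqref{eq:transfer-function} and the affine output layer can be realized in a few lines. In the sketch below the weights are stored in the usual (output, input) convention, and the layer sizes and the $\tanh$ activation are illustrative assumptions.
\begin{verbatim}
import numpy as np

def ffnn(x, weights, biases, act=np.tanh):
    a = np.asarray(x, dtype=float)        # input layer: a^1 = (x, y, z)
    for W, b in zip(weights[:-1], biases[:-1]):
        a = act(W @ a + b)                # hidden layers: a^l = A(W a^{l-1} + b)
    return weights[-1] @ a + biases[-1]   # output layer l = N_L (affine)

rng = np.random.default_rng(0)
sizes = [3, 8, 8, 1]                      # N_N(l) per layer, N_L = 4
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]
print(ffnn([0.1, 0.2, 0.3], Ws, bs))      # u^h at one spatial point
\end{verbatim}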
The approximation function set formed by a general NN is
\begin{equation}
\mathcal{N}^h=\left\{ u^h(\bm{x}) \big | u^h=\mathcal{F}^{NN}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}), W^l_{ij}\in \mathbb{R}, b^l_j\in \mathbb{R} \right\},
\end{equation}
where $\mathcal{F}^{NN}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}})$ denotes an NN with the input $\bm{x}$ and depends on weights $\bm{W}$, biases $\bm{b}$ and activation functions $\bm{\mathcal{A}}$.
For interpretability, a DNN can be rewritten in the form of shape functions associated with nodal values of $u^h$. In this way, the function approximation reads
\begin{equation}
u^h=\sum_{\mathcal{I}=1}^{np} \mathcal{F}^{NN}_\mathcal{I}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}) \mathscr{u}_{\mathcal{I}},
\label{eq:InterpolationNN}
\end{equation}
as illustrated in Fig. \ref{fig:Interpolant_DNNrepresentation}(a). $\mathcal{F}^{NN}_\mathcal{I}$ represents the value of the $\mathcal{I}$-th neuron in the last hidden layer, i.e., the output of the previous hidden layers. $\mathscr{u}_{\mathcal{I}}$ is the corresponding weight connecting the output layer with the last hidden layer.
This interpolation form may provide a more interpretable structure for DNN.
In multidimensional cases, such as 3D mechanical problems, the above equation can be applied straightforwardly to each component of the displacement field as follows:
\begin{equation}
u_x^h=\sum_{\mathcal{I}=1}^{np} \mathcal{F}^{NN}_\mathcal{I}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}) \mathscr{u}_{x\mathcal{I}},
\end{equation}
\begin{equation}
u_y^h=\sum_{\mathcal{I}=1}^{np} \mathcal{F}^{NN}_\mathcal{I}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}) \mathscr{u}_{y\mathcal{I}},
\end{equation}
\begin{equation}
u_z^h=\sum_{\mathcal{I}=1}^{np} \mathcal{F}^{NN}_\mathcal{I}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}) \mathscr{u}_{z\mathcal{I}}.
\end{equation}
\textbf{HiDeNN}
The recently developed HiDeNN method \cite{zhang2021hierarchical} uses a DNN structure similar to \eqref{eq:InterpolationNN} with additional constraints to build a family of function approximations. Similar to the FEM, the continuous domain
$\Omega$ is discretized by a mesh with $np$ nodes $\bm{x}_1,\bm{x}_2,\cdots,\bm{x}_{np}$. Then the finite element shape functions $N_{\mathcal{I}}$ can be constructed by the neural network block, namely,
\begin{equation}
\mathcal{N}_{\mathcal{I}}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}})
\end{equation}
where $\mathcal{I}=1,2,\cdots,np$. Different from $\mathcal{F}_{\mathcal{I}}^{NN}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}})$ in (\ref{eq:InterpolationNN}), $\mathcal{N}_{\mathcal{I}}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}})$ equals precisely the finite element shape function $N_{\mathcal{I}}(\bm{x})$, with input $\bm{x}$ and output $N_{\mathcal{I}}$, and automatically satisfies the following constraints for shape functions:
\begin{eqnarray}
\sum_{\mathcal{I}=1}^{np} \mathcal{N}_{\mathcal{I}}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}})=1, \mathcal{N}_{\mathcal{I}}(\bm{x}_{\mathcal{J}};\bm{W},\bm{b},\bm{\mathcal{A}})=\delta_{\mathcal{I}\mathcal{J}}.
\label{eq:Constraint}
\end{eqnarray}
With the Kronecker delta constraints, Dirichlet boundary conditions can be applied directly, as in the finite element method. All the weights $\bm{W}$ and biases $\bm{b}$ are functions of the nodal coordinates $\bm{x}_I$, so we can rewrite the shape function explicitly in terms of $\bm{x}_I$ as
\begin{equation}
\mathcal{N}_{\mathcal{I}}(\bm{x};\bm{x}_{\mathcal{I}}^*,\bm{\mathcal{A}}) = \mathcal{N}_{\mathcal{I}}(\bm{x};\bm{W},\bm{b},\bm{\mathcal{A}}),
\end{equation}
where $\bm{x}_{\mathcal{I}}^*$ denotes the support of $N_{\mathcal{I}}(\bm{x})$, e.g. in linear 1D cases $\bm{x}_{\mathcal{I}}^*=[x_{{\mathcal{I}}-1},x_{\mathcal{I}},x_{{\mathcal{I}}+1}]$.
Combining such neural network blocks for the entire mesh gives the final form of HiDeNN, as shown in Fig. \ref{fig:Interpolant_DNNrepresentation}(b). This results in the approximation function set
\begin{equation}
\mathcal{H}^h=\left\{ u^h(\bm{x}) \bigg | u^h=\sum_{\mathcal{I}=1}^{np} \mathcal{N}_{\mathcal{I}}(\bm{x};\bm{x}_{\mathcal{I}}^*,\bm{\mathcal{A}})\mathscr{u}_{\mathcal{I}}, \mathscr{u}_{\mathcal{I}}\in\mathbb{R} \right\}.
\end{equation}
The parametric expression with nodal positions $\bm{x}_{\mathcal{I}}^*$ allows automatic r-adaptivity, and accordingly improves the local and global accuracy of the interpolant.
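As a 1D illustration of such a constrained block, a linear hat shape function can be assembled from ReLU units with the nodal positions as explicit parameters. This is one well-known construction and serves only as a sketch; it is not necessarily the exact HiDeNN block of \cite{zhang2021hierarchical}.
\begin{verbatim}
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def shape_1d(x, xl, xc, xr):
    # hat function N_I from ReLU units; (xl, xc, xr) = (x_{I-1}, x_I, x_{I+1})
    return (relu(x - xl) - relu(x - xc)) / (xc - xl) \
         - (relu(x - xc) - relu(x - xr)) / (xr - xc)

xn = np.linspace(0.0, 1.0, 11)    # nodes of a uniform 1D mesh
# Kronecker delta property N_I(x_J) = delta_IJ at the interior nodes:
V = np.array([[shape_1d(xn[j], xn[i-1], xn[i], xn[i+1])
               for j in range(1, 10)] for i in range(1, 10)])
print(np.allclose(V, np.eye(9)))
# partition of unity between the first and last interior nodes:
pts = np.random.default_rng(0).uniform(xn[1], xn[-2], 1000)
S = sum(shape_1d(pts, xn[i-1], xn[i], xn[i+1]) for i in range(1, 10))
print(np.allclose(S, 1.0))
\end{verbatim}
Because the nodal positions enter the weights and biases explicitly, gradients with respect to $\bm{x}_{\mathcal{I}}^*$ are available for r-adaptivity.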
\textbf{Finite element method}
The approximation function set $\mathcal{H}^h$ degenerates to the FE approximation function set $\mathcal{V}^h$ \cite{belytschko2013nonlinear} when the nodal positions are fixed, which reads
\begin{equation}
\mathcal{V}^h=\left\{u^h(\bm{x}) \bigg | u^h=\sum_{\mathcal{I}=1}^{np} N_{\mathcal{I}}(\bm{x}) u_{\mathcal{I}}, u_{\mathcal{I}}\in\mathbb{R} \right\}.
\end{equation}
From the DNN viewpoint, this corresponds to fixing the weights and biases that are functions of $\bm{x}_{\mathcal{I}}^*$, as shown in Fig. \ref{fig:Interpolant_DNNrepresentation}(b).
\textbf{Canonical tensor decomposition}
Under the assumption of separation of variables, the function $u$ may be approximated by a sum of products of 1D functions, i.e.,
\begin{equation}
u^h(\bm{x})=u^h(x,y,z)=\sum_{q=1}^Q X^{(q)}(x)Y^{(q)}(y)Z^{(q)}(z),
\label{eq:SeparaRepresentation}
\end{equation}
where $Q$ is the number of modes, and the product of $X^{(q)}, Y^{(q)},Z^{(q)}$ provides a mode for the interpolation function.
This form or concept is known as CD \cite{kolda2009tensor}. The so-called PGD method has adopted this concept for solving PDEs \cite{ammar2006new,gonzalez2010recent} and for data learning \cite{lu2018adaptive,lu2019datadriven,blal2019non}.
Thanks to the separation of variables, only shape functions of reduced dimension are needed. In (\ref{eq:SeparaRepresentation}),
1D FE shape functions can be used for a 3D problem, namely,
\begin{eqnarray}
\label{eq:MSDisX}
X^{(q)}(x)&=&\sum_{I=1}^{n_1} N_I(x)\beta_I^{(q)}, \\
\label{eq:MSDisY}
Y^{(q)}(y)&=&\sum_{J=1}^{n_2} N_J(y)\gamma_J^{(q)}, \\
\label{eq:MSDisZ}
Z^{(q)}(z)&=&\sum_{K=1}^{n_3} N_K(z)\theta_K^{(q)}.
\end{eqnarray}
Here, $n_1, n_2, n_3$ are the number of nodes in $x,y,z$ directions, respectively.
Thus the corresponding approximation function set for a given number of modes $Q$ is
\begin{eqnarray}
\mathcal{M}_Q^h=&\Bigg \{ &u^h(\bm{x}) \bigg |u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} N_I(x)\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} N_J(y)\gamma_J^{(q)} \right) \left( \sum_{K=1}^{n_3} N_K(z)\theta_K^{(q)} \right), \\ \nonumber
&&\beta_I^{(q)},\gamma_J^{(q)},\theta_K^{(q)}\in\mathbb{R} \Bigg \}.
\end{eqnarray}
Note that the interpolation function in (\ref{eq:SeparaRepresentation}) can be rearranged as
\begin{equation}
u^h(x,y,z)=\sum_{I=1}^{n_1}\sum_{J=1}^{n_2}\sum_{K=1}^{n_3} N_I(x) N_J(y) N_K(z) \left( \sum_{q=1}^Q \beta^{(q)}_I \gamma^{(q)}_J \theta^{(q)}_K \right),
\label{eq:MSExpand}
\end{equation}
which is regarded as a finite element interpolation function with $N_I(x)N_J(y)N_K(z)$ as shape
functions and $\left( \sum_{q=1}^Q \beta^{(q)}_I \gamma^{(q)}_J \theta^{(q)}_K \right)$ as coefficients. Fig. \ref{fig:Interpolant_DNNrepresentation}(d) illustrates a DNN format of (\ref{eq:MSExpand}), which will be the basis for the proposed HiDeNN-PGD method.
Multidimensional shape functions of CD, i.e., the products of 1D shape functions, are fixed and determined by the nodal positions along each direction. In addition, the nodal values in the last layer are constrained in the form of a tensor product, i.e.,
\begin{equation}
u_{(I,J,K)}=\sum_{q=1}^{Q}\beta^{(q)}_{I}\gamma^{(q)}_J\theta^{(q)}_K.
\end{equation}
When a few modes suffice to represent the function $u^h(x,y,z)$, this method is advantageous in terms of lower integration complexity and fewer degrees of freedom (DoFs) \cite{ammar2006new,chinesta2013proper}. The number of DoFs in (\ref{eq:MSExpand}) is of order $O((n_1+n_2+n_3)Q)$, which grows linearly with the spatial dimension and is far smaller than in traditional methods (e.g., FEM).
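The storage gain is easy to see numerically; a short sketch with illustrative sizes:
\begin{verbatim}
import numpy as np

n1, n2, n3, Q = 30, 40, 50, 5
rng = np.random.default_rng(0)
beta, gamma, theta = (rng.standard_normal((n, Q)) for n in (n1, n2, n3))

# nodal values u_(I,J,K) = sum_q beta_I^(q) gamma_J^(q) theta_K^(q)
u = np.einsum('iq,jq,kq->ijk', beta, gamma, theta)
print(u.shape, (n1 + n2 + n3) * Q, 'vs', n1 * n2 * n3)  # stored DoFs vs full grid
\end{verbatim}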
\subsection{HiDeNN-PGD: Reduced order HiDeNN via PGD}
\label{ssec:HiDeNN-PGD}
HiDeNN achieves better accuracy than classical FEM thanks to the adaptivity of the nodal positions, but the additional DoFs may result in a higher cost. On the other hand, the separated-variable representation provides a reduced-order model that improves the efficiency but may lose accuracy. Here, we propose HiDeNN-PGD, a reduced-order model of HiDeNN via PGD, which seeks an optimized balance between accuracy and computational cost.
In HiDeNN-PGD, the shape functions in each direction are written in the DNN format, namely, (\ref{eq:MSDisX}-\ref{eq:MSDisZ}) are replaced by 1D HiDeNN interpolants (refer to Appendix \ref{sec:1DHiDeNN}),
\begin{eqnarray}
X^{(q)}(x)&=&\sum_{I=1}^{n_1} \mathcal{N}_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)}, \\
Y^{(q)}(y)&=&\sum_{J=1}^{n_2} \mathcal{N}_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)}, \\
Z^{(q)}(z)&=&\sum_{K=1}^{n_3} \mathcal{N}_K(z;\bm{z}_K^*,\bm{\mathcal{A}})\theta_K^{(q)},
\end{eqnarray}
where $\mathcal{N}_I(x;\bm{x}_I^*,\bm{\mathcal{A}}), \mathcal{N}_J(y;\bm{y}_J^*,\bm{\mathcal{A}}), \mathcal{N}_K(z;\bm{z}_K^*,\bm{\mathcal{A}})$ are the 1D HiDeNN shape functions in $x,y,z$ directions, respectively.
Thus the interpolation function set is defined by
\begin{eqnarray}
\label{eq:HiDeNNPGD_FunSet}
&&\mathcal{G}_Q^h= \\ \nonumber
&&\Bigg \{ u^h \bigg |u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} \mathcal{N}_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} \mathcal{N}_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)} \right) \left( \sum_{K=1}^{n_3} \mathcal{N}_K(z;\bm{z}_K^*,\bm{\mathcal{A}})\theta_K^{(q)} \right) \Bigg \}.
\end{eqnarray}
Since the adaptivity occurs separately along each direction, the mesh always remains regular.
\subsection{Relationship among the NN, HiDeNN, FEM, CD and HiDeNN-PGD approximation function sets}
In this subsection, we explore the relationship among FEM, NN, HiDeNN, CD and HiDeNN-PGD.
Assume that CD and FEM are based on the same regular mesh with $n_1,n_2,n_3$ nodes along the $x,y,z$ directions. This mesh also serves as an initial guess of $\bm{x}_I^*$ in HiDeNN and HiDeNN-PGD. The FEM shape functions are products of 1D shape functions, i.e., the shape function associated with the node $(x_I,y_J,z_K)$ is $N_{(I,J,K)}(x,y,z)=N_I(x) N_J(y) N_K(z)$. NN has a more general structure than HiDeNN and might be fully connected.
By definition, we have the following relationship among approximation function sets of NN, HiDeNN, FEM and CD:
\begin{equation}
\mathcal{M}_Q^h \subset \mathcal{V}^h \subset \mathcal{H}^h \subset \mathcal{N}^h.
\label{eq:RelationshipAmongMethods}
\end{equation}
In particular, when $Q$ is large enough ($Q\geq\min\{n_1,n_2\}$ in 2D and $Q\geq\min\{n_1 n_2,n_2 n_3, n_3 n_1\}$ in 3D), we have
\begin{equation}\label{eq:RelationshipAmongMethodsBigQ}
\mathcal{M}^h_Q=\mathcal{V}^h\subset\mathcal{G}^h_Q \subset \mathcal{H}^h \subset \mathcal{N}^h,
\end{equation}
as illustrated in Fig. \ref{fig:Relationship}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{RelationshipFiveMethods.png}
\caption{Illustration of the relationship among the interpolation function sets of CD, FEM, HiDeNN-PGD, HiDeNN and NN. In particular, when $Q$ tends to infinity, $\mathcal{M}_Q^h$ approaches $\mathcal{V}^h$. When $Q$ is large enough, the FEM interpolation set $\mathcal{V}^h$ is a subset of the HiDeNN-PGD set $\mathcal{G}^h_Q$.}
\label{fig:Relationship}
\end{figure}
The above conclusions (\ref{eq:RelationshipAmongMethods})-(\ref{eq:RelationshipAmongMethodsBigQ}) are based on the following observations:
\begin{description}
\item[$\bullet$] According to (\ref{eq:MSExpand}), the CD interpolation functions can be regarded as finite element interpolation functions, which belong to $\mathcal{V}^h$, so $\mathcal{M}_Q^h \subset \mathcal{V}^h$. In particular, when $Q$ is large enough ($Q\geq\min\{n_1,n_2\}$ in 2D and $Q\geq\min\{n_1 n_2,n_2 n_3, n_3 n_1\}$ in 3D), $\mathcal{M}_Q^h$ coincides with $\mathcal{V}^h$, i.e., $\mathcal{M}^h_Q=\mathcal{V}^h$ (a numerical sketch of this rank argument is given after this list). Detailed results will be shown in Section 4. Proofs can be found in Appendix \ref{Appdix:2DConverge}.
\item[$\bullet$] In HiDeNN, an optimization of the nodal positions is performed. Thus FEM may be regarded as a special case of HiDeNN with fixed nodal coordinates.
\item[$\bullet$] HiDeNN is a class of structured NN with weights and biases as functions of nodal values and nodal positions.
\item[$\bullet$] HiDeNN-PGD requires a regular mesh, while HiDeNN optimizes the nodal positions freely, so $\mathcal{G}^h_Q \subset \mathcal{H}^h, \forall Q \in \mathbb{N}$. On the other hand, HiDeNN-PGD has more DoFs than CD for the same number of modes, so $\mathcal{M}^h_Q \subset \mathcal{G}^h_Q, \forall Q \in \mathbb{N}$. When $Q$ is small, $\mathcal{M}^h_Q$ is a subset of the intersection of $\mathcal{G}^h_Q$ and $\mathcal{V}^h$.
\end{description}
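The 2D part of the first observation is essentially a rank argument: any $n_1\times n_2$ matrix of nodal values admits a factorization with $Q=\min\{n_1,n_2\}$ modes. The following sketch illustrates this fact numerically; it is an illustration, not a substitute for the proof in the appendix.
\begin{verbatim}
import numpy as np

n1, n2 = 15, 25
u = np.random.default_rng(1).standard_normal((n1, n2))  # arbitrary nodal values
P, s, Vt = np.linalg.svd(u, full_matrices=False)        # rank(u) <= min(n1, n2)
beta = P * s                                            # beta_I^(q)
gamma = Vt.T                                            # gamma_J^(q)
print(np.allclose(u, beta @ gamma.T))                   # exact with Q = min(n1, n2)
\end{verbatim}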
We can also summarize the DoFs for different methods in Table \ref{table:dofsmethod}. It is shown that HiDeNN-PGD and PGD have only a linear growth in terms of DoFs, whereas the DoFs of FEM and HiDeNN may grow in a polynomial manner.
\begin{table}[!htb]
\caption{Comparison of DoFs for different methods on a 3D mesh}
\centering
\begin{tabular}{|c | c | c | c | c |}
\hline
& FEM& PGD/CD& HiDeNN-PGD&HiDeNN \\ \hline
DoFs & $n_1 \times n_2 \times n_3$& $(n_1 + n_2 + n_3) \times Q$ & $(n_1 + n_2 + n_3) \times Q +n_1 + n_2 + n_3$ & $n_1 \times n_2 \times n_3 +n_1 \times n_2 \times n_3$\\ \hline
\end{tabular}
\label{table:dofsmethod}
\end{table}
\section{Error analysis of FEM, CD, HiDeNN, NN and HiDeNN-PGD based solutions for PDEs}
We consider a partial differential equation with homogeneous boundary conditions
\begin{equation}
\left\{\begin{array}{l}
\mathcal{L}\bm{u}(\bm{x})+\bm{b}(\bm{x})=0 \text{ in } \Omega \subset \mathbb{R}^d, \\
\bm{u}|_{\partial \Omega}=\bm{0},
\end{array}\right.
\label{eq:GeneralPDE}
\end{equation}
where $\bm{u}\in \mathbb{R}^m$ denotes an $m$-dimensional vector-valued function in a certain Hilbert space $H(\mathbb{R}^d, \mathbb{R}^m)$, $\bm{b}$ the source term, $\bm{x} \in \mathbb{R}^d$ the $d$-dimensional space coordinates, and $\mathcal{L}$ a second-order differential operator.
We assume that an energy potential $\Pi(u)$ exists, formulated in the following form
\begin{equation}
\Pi(\bm{u})=\dfrac{1}{2}a(\bm{u},\bm{u})-(\bm{b},\bm{u}),
\end{equation}
where $a(\cdot , \cdot)$ is the symmetric bilinear form corresponding to the second-order differential operator $\mathcal{L}$, and $(\bm{f},\bm{g})=\int_{\Omega} \bm{f} \cdot \bm{g} \mathrm{d}\bm{x}$ denotes the inner product. For example, $a(\bm{f},\bm{g})=\int_{\Omega} \nabla \bm{f} \cdot \nabla \bm{g} \mathrm{d}\bm{x}$ for the Poisson equation.
The minimization of $\Pi(u)$ gives the solution to \eqref{eq:GeneralPDE}. This leads to a weak form of \eqref{eq:GeneralPDE}, which reads
\begin{equation}
\bm{u}=\argmin_{\bm{u}^* \in H(\mathbb{R}^d, \mathbb{R}^m)} \Pi[\bm{u^*}].
\end{equation}
Such a weak form is commonly adopted in interpolation-theory-based numerical approaches, such as the methods shown in Section 2. Denoting by $\mathcal{S}^h \subset H(\mathbb{R}^d, \mathbb{R}^m)$ the discretized approximation solution set with a characteristic mesh size $h$, the approximate solution based on a given interpolation function set is then
\begin{equation}
\bm{u}^h=\argmin_{\bm{u}^{h*} \in \mathcal{S}^h} \Pi[\bm{u^{h*} }].
\end{equation}
In the following, we shall take $\mathcal{S}^h$ to be $\mathcal{M}_{0,Q}^h, \mathcal{V}_0^h, \mathcal{H}_0^h, \mathcal{N}_0^h, \mathcal{G}_{0,Q}^h$, which are the subsets of $\mathcal{M}_Q^h, \mathcal{V}^h, \mathcal{H}^h, \mathcal{N}^h, \mathcal{G}_{Q}^h$ under homogeneous boundary conditions, respectively.
\subsection{Error analysis}
We assert the following relations among the error bounds for FEM, NN, HiDeNN and CD:
\begin{equation}
\left\| u^{CD}-u^{exact} \right\|_{E} \geq \left\| u^{FEM}-u^{exact} \right\|_{E} \geq \left\| u^{HiDeNN}-u^{exact} \right\|_{E} \geq \left\| u^{NN}-u^{exact} \right\|_{E}.
\label{eq:ErrorAnalysis}
\end{equation}
Here, $\left\| \cdot \right\|_{E}=\sqrt{a(\cdot,\cdot)}$ is called the energy norm, and $u^{exact}$ is the exact solution of the problem, i.e.,
\begin{equation}
u^{exact}=\arg \min_{u\in H^1_0} \Pi[u].
\end{equation}
In particular, when $Q$ is large enough ($Q\geq\min\{n_1,n_2\}$ in 2D and $Q\geq\min\{n_1 n_2,n_2 n_3, n_3 n_1\}$ in 3D), the error bounds for the five methods become
\begin{eqnarray}
&&\left\| u^{CD}-u^{exact} \right\|_{E} \geq \left\| u^{FEM}-u^{exact} \right\|_{E} \geq \left\| u^{HiDeNN-PGD}-u^{exact} \right\|_{E} \\ \nonumber
&&\geq \left\| u^{HiDeNN}-u^{exact} \right\|_{E} \geq \left\| u^{NN}-u^{exact} \right\|_{E}.
\label{eq:ErrorAnalysisUpdate}
\end{eqnarray}
The relationship (\ref{eq:RelationshipAmongMethods}) is inherited by $\mathcal{M}_{0,Q}^h, \mathcal{V}_0^h, \mathcal{H}_0^h, \mathcal{N}_0^h$, i.e.,
\begin{equation}
\mathcal{M}_{0,Q}^h \subset \mathcal{V}_0^h \subset \mathcal{H}_0^h \subset \mathcal{N}_0^h.
\label{eq:RelationshipAmongMethodsBC}
\end{equation}
These four methods are all based on the minimal energy principle,
\begin{equation}
\bm{u}=\arg \min_{\bm{u}^h \in \mathcal{S}^h} \Pi[\bm{u}^h],
\end{equation}
where $\mathcal{S}^h$ is selected as $\mathcal{M}_{0,Q}^h, \mathcal{V}_0^h, \mathcal{H}_0^h, \mathcal{N}_0^h$ for CD, FEM, HiDeNN, and NN, respectively. Thus due to the relationship (\ref{eq:RelationshipAmongMethodsBC}) among them, we have
\begin{equation}
\Pi[u^{CD}] \geq \Pi[u^{FEM}] \geq \Pi[u^{HiDeNN}] \geq \Pi[u^{NN}].
\end{equation}
This leads to the error bounds (\ref{eq:ErrorAnalysis}).
In the same manner, the relationship (\ref{eq:RelationshipAmongMethodsBigQ}) leads to the error bounds (\ref{eq:ErrorAnalysisUpdate}). Equation (\ref{eq:ErrorAnalysisUpdate}) shows that HiDeNN-PGD can reach better accuracy than FEM on a regular mesh as the number of modes increases.
We remark that the above theoretical analysis does not account for numerical aspects, such as the difficulty of finding the global minimizer of the energy potential.
\subsection{Proof of the mesh independence of the mode reduction error in the PGD method}
The PGD based model reduction induces two kinds of errors: mesh discretization error and mode reduction error.
\begin{theorem}
Let $u^{exact}$, $u^{PGD}$ and $u^{FEM}$ be the exact solution and the numerical solutions of PGD and FEM, respectively. PGD and FEM use the same regular mesh, and the FEM shape functions are the products of the 1D PGD shape functions in each dimension. Then the following error decomposition holds:
\begin{equation}\label{eq:ErrorDecomposition}
\| u^{PGD}-u^{exact} \|_E^2 = \| u^{FEM}-u^{exact} \|_E^2 + \| u^{PGD}-u^{FEM} \|_E^2.
\end{equation}
\end{theorem}
\begin{proof}
We calculate
\begin{eqnarray} \label{eq:PGDDecompExpand}
&& \| u^{PGD}-u^{exact} \|_E^2 \\
&=&\int_\Omega (\nabla u^{PGD}- \nabla u^{FEM}+\nabla u^{FEM}-\nabla u^{exact})^2 \mathrm{d} \bm{x} \\ \nonumber
&=&\| u^{FEM}-u^{exact} \|_E^2 + \| u^{PGD}-u^{FEM} \|_E^2+\int_\Omega 2(\nabla u^{PGD}- \nabla u^{FEM})\cdot(\nabla u^{FEM}-\nabla u^{exact}) \mathrm{d} \bm{x}.
\end{eqnarray}
By the Gauss theorem, together with the homogeneous boundary condition $u^{PGD}-u^{FEM}=0$ on $\partial\Omega$, we obtain
\begin{eqnarray} \label{eq:PGDDecompCross}
&&\int_\Omega (\nabla u^{PGD}- \nabla u^{FEM})\cdot(\nabla u^{FEM}-\nabla u^{exact}) \mathrm{d} \bm{x} \\ \nonumber
&=&\int_\Omega (\nabla u^{PGD}- \nabla u^{FEM})\cdot \nabla u^{FEM}+(u^{PGD}-u^{FEM})\nabla^2 u^{exact} \mathrm{d} \bm{x} \\ \nonumber
&=&\int_\Omega (\nabla u^{PGD}- \nabla u^{FEM})\cdot \nabla u^{FEM}-(u^{PGD}-u^{FEM}) b(\bm{x}) \mathrm{d} \bm{x}.
\end{eqnarray}
Since PGD and FEM share the same mesh and shape functions, $v= u^{PGD}- u^{FEM}$ belongs to the test function space of FEM. By the weak form of the FEM problem, we have
\begin{equation}
\int_\Omega \nabla v \cdot \nabla u^{FEM}-v(\bm{x})b(\bm{x}) \mathrm{d} \bm{x} = 0.
\end{equation}
That is to say, (\ref{eq:PGDDecompCross}) vanishes and hence (\ref{eq:ErrorDecomposition}) holds.
$\hfill\square$
\end{proof}
This theorem asserts that, in the squared energy norm, the PGD error is the orthogonal sum of the FEM discretization error and the mode reduction error (the difference between the PGD and FEM solutions). A numerical check of this decomposition is sketched below.
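The following is a 1D sketch under stated assumptions: $-u''=\pi^2\sin(\pi x)$ on $[0,1]$, an exactly integrated load vector, a dense midpoint rule for the continuous energy norm, and a randomly perturbed FE function standing in for a reduced (PGD-like) solution in the same space.
\begin{verbatim}
import numpy as np

n = 21; x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
K = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
f = 2.0 * (1.0 - np.cos(np.pi * h)) / h * np.sin(np.pi * x)  # int N_I b dx, exact
u_h = np.zeros(n)
u_h[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])          # Galerkin FE solution

def err_sq(v, m=1000):
    # || v - u_exact ||_E^2 by a dense midpoint rule, with u_exact = sin(pi x)
    xm = (np.arange((n - 1) * m) + 0.5) / ((n - 1) * m)
    dv = np.repeat(np.diff(v) / h, m)    # piecewise-constant FE derivative
    return np.mean((dv - np.pi * np.cos(np.pi * xm))**2)

v = u_h.copy()
v[1:-1] += 0.05 * np.random.default_rng(0).standard_normal(n - 2)
print(err_sq(v), err_sq(u_h) + (v - u_h) @ K @ (v - u_h))    # nearly equal
\end{verbatim}
The two printed values agree up to quadrature accuracy, which is the identity (\ref{eq:ErrorDecomposition}) with $v$ in place of $u^{PGD}$.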
\section{The formulation of HiDeNN-PGD: the 2D Poisson problem as illustration}
In subsection \ref{ssec:HiDeNN-PGD}, we defined HiDeNN-PGD in terms of the approximation function space. Here, we give the detailed formulation of the method.
For the sake of simplicity and without loss of generality, we consider the 2D Poisson problem,
\begin{equation}
\left\{\begin{array}{l}
\Delta u(x,y)+b(x,y)=0 \text{ in } \Omega_{(x,y)} \subset \mathbb{R}^2, \\
u|_{\partial \Omega}=0,
\end{array}\right.
\label{eq:PoissonEqHP}
\end{equation}
Problem (\ref{eq:PoissonEqHP}) is solved in the regular domain $\Omega_{(x,y)}=[a,b]\times[c,d]$ with homogeneous boundary conditions. Note that if inhomogeneous boundary conditions are considered, we can split the solution into two parts,
\begin{equation}
u=u^0+\tilde{u},
\end{equation}
where $u^0$ is an arbitrary function satisfying boundary conditions, and $\tilde{u}$ is the solution to the new equations with homogeneous boundary condition.
The variational formula of (\ref{eq:PoissonEqHP}) is
\begin{equation}
\Pi(u)=\dfrac{1}{2}\int_{\Omega_{(x,y)}} \left|\nabla u\right|^2 \mathrm{d}x\mathrm{d}y - \int_{\Omega_{(x,y)}} u(x,y)b(x,y) \mathrm{d}x\mathrm{d}y
\label{eq:PoissonVar}
\end{equation}
Substituting HiDeNN-PGD interpolation function into (\ref{eq:PoissonVar}), we obtain
\begin{eqnarray}
\label{VariationalHP}
\Pi(u^h)&=&\dfrac{1}{2}\sum_{p=1}^Q\sum_{q=1}^Q \left(\int_{x_1}^{x_{n_1}} \dfrac{d}{dx}X^{(p)}(x)\dfrac{d}{dx}X^{(q)}(x) \mathrm{d}x\right) \left(\int_{y_1}^{y_{n_2}} Y^{(p)}(y)Y^{(q)}(y) \mathrm{d}y\right) \\ \nonumber
&+&\dfrac{1}{2}\sum_{p=1}^Q\sum_{q=1}^Q \left(\int_{x_1}^{x_{n_1}} X^{(p)}(x)X^{(q)}(x) \mathrm{d}x\right) \left(\int_{y_1}^{y_{n_2}} \dfrac{d}{dy}Y^{(p)}(y) \dfrac{d}{dy}Y^{(q)}(y) \mathrm{d}y\right) \\ \nonumber
&-&\sum_{q=1}^Q\int_{\Omega_{(x,y)}} X^{(q)}(x)Y^{(q)}(y) b(x,y)\mathrm{d}x\mathrm{d}y,
\end{eqnarray}
with the discrete mesh $[x_1=a, x_2, \cdots, x_{n_1}=b]\times[y_1=c,y_2,\cdots,y_{n_2}=d]$.
Notice that there exist cross terms in Eq. (\ref{VariationalHP}). For convenience and considering the difficulties to do exact integration between different discrete meshes, all the modes share the same mesh $[x_1,x_2,\cdots,x_{n_1}]\times[y_1,y_2,\cdots,y_{n_2}]$ and the same shape functions. We use Gauss quadrature for the source term.
Once the interpolation function set (\ref{eq:HiDeNNPGD_FunSet}) is obtained, minimal variational principle gives the approximated solution. The process in HiDeNN-PGD is formulated as
\begin{eqnarray}
\text{find} && \beta_I^{(1)}, \gamma_J^{(1)},\cdots,\beta_I^{(Q)}, \gamma_J^{(Q)}, x_I, y_J,I=1,\cdots,n_1, J=1,\cdots,n_2\\ \nonumber
\text{min} && \dfrac{1}{2}\int_{\Omega_{(x,y)}} \left|\nabla u\right|^2 \mathrm{d}x\mathrm{d}y - \int_{\Omega_{(x,y)}} u(x,y)b(x,y) \mathrm{d}x\mathrm{d}y \\ \nonumber
&& u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} \mathcal{N}_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} \mathcal{N}_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)} \right) \\ \nonumber
\text{and} && \sum^{n_1}_{I=1}\mathcal{N}(\bm{x}^{*}_{I},x,\mathcal{A}) = 1, \sum^{n_2}_{J=1}\mathcal{N}(\bm{y}^{*}_{J},y,\mathcal{A}) = 1.
\end{eqnarray}
The gradient descent method is applied to iteratively minimize $\Pi(\mathscr{u}^h)$ and solve for all parameters together. In the following numerical examples, we choose Adam algorithm \cite{kingma2014adam}, i.e.,
\begin{framed}
\begin{enumerate}
\item Initialization: Set number of modes $Q$, initial nodal positions $x_I,y_J,I=1,2,\cdots,n_1,J=1,2,\cdots,n_2$, initial coefficients $\beta^{(q)}_I,\gamma^{(q)}_J, q=1,2,\cdots,Q$ and maximal iteration step $M$
\item Algorithm:
While $k\leq M$ do
\begin{enumerate}
\item Compute gradient $\dfrac{\partial}{\partial x_I}, \dfrac{\partial}{\partial y_J}, \dfrac{\partial}{\partial \beta_I^{(q)}},
\dfrac{\partial}{\partial \gamma_J^{(q)}}, I=1,\cdots,n_1, J=1,\cdots,n_2, q=1,\cdots,Q$;
\item Update $x_I,y_J,\beta_I^{(q)},\gamma_J^{(q)},I=1,\cdots,n_1, J=1,\cdots,n_2, q=1,\cdots,Q$ by using Adam algorithm;
\end{enumerate}
End while.
\end{enumerate}
\end{framed}
\section{HiDeNN-PGD}
\label{sec:HiDeNNPGD}
As shown in Section 3, HiDeNN achieves better accuracy than classical FEM due to the adaptivity of nodal positions, but the additional DoFs may increase the computational cost. On the other hand, the separated-variable representation provides a reduced-order model that improves efficiency but may lose accuracy. Here, we combine HiDeNN and PGD, seeking a good balance between accuracy and computational cost. In subsection 5.1, we review the HiDeNN formulation; since only 1D representations are needed in the separated form, we focus on 1D HiDeNN. We then combine HiDeNN and PGD for multidimensional problems in subsection 5.2, and present the 2D Poisson problem to illustrate the solution scheme.
\subsection{HiDeNN Formulation}
In standard 1D FEM, the computational domain $\Omega$ is discretized by a grid with $n$ nodes, and the shape function associated with an internal node $x_I$ is
\begin{equation}
N_I(x)=\left\{
\begin{array}{cc}
\dfrac{x-x_{I-1}}{x_I-x_{I-1}}, & x_{I-1}\leq x \leq x_I, \\
\dfrac{x_{I+1}-x}{x_{I+1}-x_{I}}, & x_I \leq x \leq x_{I+1}, \\
0, & \text{elsewhere},
\end{array}
\right.
\label{eq:linearshapefunction}
\end{equation}
where $x_{I-1}$ and $x_{I+1}$ are the two neighboring nodes of $x_{I}$ on the left and right, respectively.
We rewrite $N_I(x)$ in a DNN format consisting of weights, biases, and activation functions. Since the shape function is piecewise linear, the activation function is selected as the ReLU function, i.e., $\mathscr{A}_{1}(x)=\max(0,x)$. Fig. \ref{fig:1D_DNN_representation}(a) shows the DNN representation of the linear shape function. The corresponding formula is
\begin{eqnarray}
\mathscr{N}_{I}(x;\bm{W}, \bm{b},\:\bm{\mathscr{A}})
&=& W_{11}^{l=4}\mathscr{A}_{1}\left( W_{11}^{l=3} \mathscr{A}_{1} \left( W_{11}^{l=2}x+b_{1}^{l=2} \right) +b_{1}^{l=3} \right) \\ \nonumber
&&+ W_{21}^{l=4}\mathscr{A}_{1} \left( W_{22}^{l=3} \mathscr{A}_{1} \left( W_{12}^{l=2}x+b_{2}^{l=2} \right) +b_{2}^{l=3} \right) +b_{1}^{l=4}\\ \nonumber
&=& \mathscr{A}_{1}\left( \dfrac{-1}{x_I-x_{I-1}} \mathscr{A}_{1} \left( -x+x_I \right) +1 \right) + \mathscr{A}_{1} \left( \dfrac{-1}{x_{I+1}-x_I} \mathscr{A}_{1} \left( x-x_I \right) +1 \right) -1, \nonumber
\end{eqnarray}
where $\bm{W}=[W_{11}^{l=2},W_{12}^{l=2},W_{11}^{l=3},W_{22}^{l=3},W_{11}^{l=4},W_{21}^{l=4}]$ and $\bm{b}=[b_{1}^{l=2},b_{2}^{l=2},b_{1}^{l=3},b_{2}^{l=3},b_{1}^{l=4}]$ are the weights and biases of the connected neurons. Note that all the weights and biases are functions of the nodal coordinates. The formula can be rewritten in the form
\begin{equation}
\mathcal{N}_I(\bm{x};\bm{x}_I^*,\bm{\mathscr{A}}),
\end{equation}
where $\bm{x}_I^*$ denotes the vector that represents the neighbor nodes of node $\bm{x}_I$ involved in $N_I(\bm{x})$. For 1D linear shape function, it should be $\bm{x}_I^*=[x_{I-1},\:x_{I},\:x_{I+1}]$. For the sake of clarity, one more layer is added to introduce the nodal value $u_I$, i.e., the formula becomes
\begin{eqnarray}
\mathscr{u}_I^{h}&=&\mathscr{N}_{I} (x;\:\bm{W},\bm{b},\:\bm{\mathscr{A}})\mathscr{u}_{I}=\mathscr{N}_{I} (x;\:\bm{x}_I^*,\:\bm{\mathscr{A}})\mathscr{u}_{I}; \mbox{ no summation on } {I}\\ \nonumber
&=& \mathscr{A}_{0}\left(\mathscr{A}_{1}\left( \dfrac{-1}{x_I-x_{I-1}} \mathscr{A}_{1} \left( -x+x_I \right) +1 \right) -0.5\right) \mathscr{u}_{I} \\ \nonumber
&&+ \mathscr{A}_{0}\left(\mathscr{A}_{1} \left( \dfrac{-1}{x_{I+1}-x_I} \mathscr{A}_{1} \left( x-x_I \right) +1 \right) -0.5\right) \mathscr{u}_{I},
\end{eqnarray}
where $\mathscr{u}_I^{h}$ and $\mathscr{u}_I$ are the interpolated displacement and the nodal displacement at node $x_I$, and $\bm{\mathscr{A}}=[\mathscr{A}_{0},\:\mathscr{A}_{1}]$ are the activation functions used to construct the DNN approximation; $\mathscr{A}_{0}(x)=x$ is the identity function. Fig. \ref{fig:1D_DNN_representation}(b) gives the DNN representation of the interpolation of the nodal displacement at node $x_I$.
\begin{figure}[h]
\centering
\subfigure[DNN-based 1D shape function]{\includegraphics[width=0.42\textwidth]{global_shape_func_DNN.png}}
\hspace{0.1in}
\subfigure[DNN-based 1D interpolation function ]{\includegraphics[width=0.5\textwidth]{interpolation_func_DNN.png}}
\caption{Deep neural network (DNN) representation of the 1D global shape function and interpolation function.}
\label{fig:1D_DNN_representation}
\end{figure}
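As a sanity check of this ReLU construction, the following Python sketch (our illustration; the node locations are arbitrary assumptions) confirms that the two-branch DNN formula reproduces the classical hat function (\ref{eq:linearshapefunction}) exactly:
\begin{verbatim}
import numpy as np

def relu(x):
    """Activation A1(x) = max(0, x)."""
    return np.maximum(0.0, x)

def shape_dnn(x, xl, xi, xr):
    """ReLU (DNN) form of the linear shape function at node xi,
    with left/right neighbors xl and xr, as in the two-branch formula."""
    left  = relu(-(1.0 / (xi - xl)) * relu(-x + xi) + 1.0)
    right = relu(-(1.0 / (xr - xi)) * relu(x - xi) + 1.0)
    return left + right - 1.0

def shape_hat(x, xl, xi, xr):
    """Classical piecewise-linear 'hat' shape function."""
    return np.where((x >= xl) & (x <= xi), (x - xl) / (xi - xl),
           np.where((x >= xi) & (x <= xr), (xr - x) / (xr - xi), 0.0))

# Assumed (non-uniform) node positions; any xl < xi < xr works.
xl, xi, xr = 0.2, 0.5, 0.9
xs = np.linspace(0.0, 1.0, 1001)
assert np.allclose(shape_dnn(xs, xl, xi, xr), shape_hat(xs, xl, xi, xr))
\end{verbatim}
Outside $[x_{I-1},x_{I+1}]$ one branch clamps to $0$ and the other to $1$, so the trailing $-1$ returns the function to zero, as the assertion verifies.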
Once the shape function and nodal value for an arbitrary node $x_I$ are constructed, the interpolation is obtained by assembling all DNNs together, i.e.,
\begin{equation}
\mathscr{u}^{h}(x)=\sum_{I=1}^{n}\mathscr{N}_{I} (x;\:\bm{x}_I^*,\:\bm{\mathscr{A}})\mathscr{u}_{I}.
\end{equation}
Compared with classical FEM, nodal positions are introduced as additional DoFs in the optimization for HiDeNN, which increases both the local and global accuracy of the interpolants.
Reference \cite{zhang2020hierarchical} also presented the DNN representation of various interpolation functions, including Lagrange polynomials, B-splines, the Reproducing Kernel Particle Method (RKPM), NURBS, isogeometric analysis (IGA), etc., as well as multidimensional shape functions.
\subsection{Reduced order HiDeNN via PGD}
HiDeNN obtains better results than classical FEM with a fixed number of nodes because of its adaptive mesh. In HiDeNN-PGD, we also relax the nodal positions in each dimension of the PGD while keeping the mesh regular. The shape functions in each direction are written in the DNN format; namely, (\ref{eq:MSDisX}-\ref{eq:MSDisZ}) are replaced by the 1D HiDeNN interpolants,
\begin{eqnarray}
X^{(q)}(x)&=&\sum_{I=1}^{n_1} N_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)}, \\
Y^{(q)}(y)&=&\sum_{J=1}^{n_2} N_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)}, \\
Z^{(q)}(z)&=&\sum_{K=1}^{n_3} N_K(z;\bm{z}_K^*,\bm{\mathcal{A}})\theta_K^{(q)}.
\end{eqnarray}
Thus the interpolation function set is defined by
\begin{eqnarray}
\mathcal{G}_Q^h=&\Bigg \{ &u^h \bigg |u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} N_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} N_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)} \right) \left( \sum_{K=1}^{n_3} N_K(z;\bm{z}_K^*,\bm{\mathcal{A}})\theta_K^{(q)} \right) \Bigg \}.
\end{eqnarray}
Since the adaptivity occurs independently in each direction, the mesh remains regular.
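To make the separated structure concrete, the following Python sketch (our illustration, with random placeholder coefficients for a 2D case) evaluates $u^h$ at the grid nodes, where each mode contributes an outer product of its 1D nodal values:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n1, n2, Q = 20, 30, 5

# Nodal coefficients of the 1D interpolants for each mode q:
beta  = rng.standard_normal((Q, n1))   # X^(q) nodal values
gamma = rng.standard_normal((Q, n2))   # Y^(q) nodal values

# At the nodes themselves, linear shape functions interpolate nodal
# values exactly, so u^h on the grid is a sum of outer products:
U = np.einsum('qi,qj->ij', beta, gamma)        # shape (n1, n2), rank <= Q

# Equivalent mode-by-mode accumulation:
U2 = sum(np.outer(beta[q], gamma[q]) for q in range(Q))
assert np.allclose(U, U2)
\end{verbatim}
The nodal field is thus a rank-$Q$ matrix, which is the reduced-order structure exploited throughout this section.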
\subsection{Multi-modes solution scheme: the 2D Poisson problem as illustration}
For the sake of simplicity and without loss of generality, we consider the 2D Poisson problem,
\begin{equation}
\left\{\begin{array}{l}
\nabla^2 u(x,y)+b(x,y)=0 \text{ in } \Omega_{(x,y)} \subset \mathbb{R}^2, \\
u|_{\partial \Omega}=0,
\end{array}\right.
\label{eq:PoissonEqHP}
\end{equation}
(\ref{eq:PoissonEqHP}) is solved in the regular domain $\Omega_{(x,y)}=[a,b]\times[c,d]$ with homogeneous boundary conditions. Note that if inhomogeneous boundary conditions are considered, we can separate the solution into two parts,
\begin{equation}
u=u^0+\tilde{u},
\end{equation}
where $u^0$ is an arbitrary function satisfying the boundary conditions, and $\tilde{u}$ is the solution of the new equations with homogeneous boundary conditions.
The variational formula of (\ref{eq:PoissonEqHP}) is
\begin{equation}
\Pi(u)=\dfrac{1}{2}\int_{\Omega_{(x,y)}} \left|\nabla u\right|^2 \mathrm{d}x\mathrm{d}y - \int_{\Omega_{(x,y)}} u(x,y)b(x,y) \mathrm{d}x\mathrm{d}y.
\label{eq:PoissonVar}
\end{equation}
Substituting the HiDeNN-PGD interpolation function into (\ref{eq:PoissonVar}), we obtain
\begin{eqnarray}
\label{VariationalHP}
\Pi(u^h)&=&\dfrac{1}{2}\sum_{p=1}^Q\sum_{q=1}^Q \left(\int_{x_1}^{x_{n_1}} \dfrac{d}{dx}X^{(p)}(x)\dfrac{d}{dx}X^{(q)}(x) \mathrm{d}x\right) \left(\int_{y_1}^{y_{n_2}} Y^{(p)}(y)Y^{(q)}(y) \mathrm{d}y\right) \\ \nonumber
&+&\dfrac{1}{2}\sum_{p=1}^Q\sum_{q=1}^Q \left(\int_{x_1}^{x_{n_1}} X^{(p)}(x)X^{(q)}(x) \mathrm{d}x\right) \left(\int_{y_1}^{y_{n_2}} \dfrac{d}{dy}Y^{(p)}(y) \dfrac{d}{dy}Y^{(q)}(y) \mathrm{d}y\right) \\ \nonumber
&-&\sum_{q=1}^Q\int_{\Omega_{(x,y)}} X^{(q)}(x)Y^{(q)}(y) b(x,y)\mathrm{d}x\mathrm{d}y,
\end{eqnarray}
with the discrete mesh $[x_1=a, x_2, \cdots, x_{n_1}=b]\times[y_1=c,y_2,\cdots,y_{n_2}=d]$.
Notice that there exist cross terms in Eq.~(\ref{VariationalHP}). For convenience, and considering the difficulty of performing exact integration between different discrete meshes, all the modes share the same mesh $[x_1,x_2,\cdots,x_{n_1}]\times[y_1,y_2,\cdots,y_{n_2}]$ and the same shape functions. We use Gauss quadrature for the source term.
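Because of this shared tensor-product structure, $\Pi(u^h)$ in Eq.~(\ref{VariationalHP}) can be evaluated using only 1D matrices. The following Python sketch illustrates this; it is a simplified illustration in which we additionally assume a separable source $b(x,y)=b_1(x)b_2(y)$ and omit the treatment of boundary conditions:
\begin{verbatim}
import numpy as np

def mats_1d(xn):
    """1D linear-element stiffness K and mass M on nodes xn (possibly
    non-uniform, which is what HiDeNN's adaptive nodes produce)."""
    n = len(xn)
    K = np.zeros((n, n)); M = np.zeros((n, n))
    for e in range(n - 1):
        h = xn[e + 1] - xn[e]
        K[e:e+2, e:e+2] += np.array([[1, -1], [-1, 1]]) / h
        M[e:e+2, e:e+2] += np.array([[2,  1], [ 1, 2]]) * h / 6.0
    return K, M

def load_1d(xn, f):
    """Consistent load vector int f(x) phi_I(x) dx, 2-pt Gauss/element."""
    n = len(xn); g = np.zeros(n)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    for e in range(n - 1):
        h = xn[e + 1] - xn[e]
        for t in gp:
            s = xn[e] + 0.5 * h * (1 + t)   # mapped Gauss point
            w = 0.5 * h                     # Gauss weight times Jacobian
            phi = np.array([(xn[e + 1] - s) / h, (s - xn[e]) / h])
            g[e:e+2] += w * f(s) * phi
    return g

def energy(beta, gamma, Kx, Mx, Ky, My, fx, fy):
    """Separated evaluation of Pi(u^h), assuming b(x,y) = b1(x) b2(y)."""
    A = beta @ Kx @ beta.T      # (Q,Q): int X^(p)' X^(q)' dx
    B = gamma @ My @ gamma.T    # (Q,Q): int Y^(p)  Y^(q)  dy
    C = beta @ Mx @ beta.T
    D = gamma @ Ky @ gamma.T
    quad = 0.5 * (np.sum(A * B) + np.sum(C * D))
    src = np.sum((beta @ fx) * (gamma @ fy))
    return quad - src

# Example usage with assumed data:
xn = np.linspace(0.0, 1.0, 11); yn = np.linspace(0.0, 1.0, 16)
Kx, Mx = mats_1d(xn); Ky, My = mats_1d(yn)
fx = load_1d(xn, lambda s: np.sin(np.pi * s))   # assumed b1
fy = load_1d(yn, lambda s: np.sin(np.pi * s))   # assumed b2
rng = np.random.default_rng(0)
beta, gamma = rng.standard_normal((3, 11)), rng.standard_normal((3, 16))
print(energy(beta, gamma, Kx, Mx, Ky, My, fx, fy))
\end{verbatim}
Here $K$ and $M$ are the usual 1D stiffness and mass matrices; the non-uniform element lengths are what change when HiDeNN adapts the nodal positions.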
Once the interpolation function set is obtained, the minimal variational principle gives the approximate solution. The process in HiDeNN-PGD is formulated as
\begin{eqnarray}
\text{find} && \beta_I^{(1)}, \gamma_J^{(1)},\cdots,\beta_I^{(Q)}, \gamma_J^{(Q)}, x_I, y_J,I=1,\cdots,n_1, J=1,\cdots,n_2\\ \nonumber
\text{min} && \dfrac{1}{2}\int_{\Omega_{(x,y)}} \left|\nabla u^h\right|^2 \mathrm{d}x\mathrm{d}y - \int_{\Omega_{(x,y)}} u^h(x,y)b(x,y) \mathrm{d}x\mathrm{d}y \\ \nonumber
&& u^h=\sum_{q=1}^Q \left( \sum_{I=1}^{n_1} N_I(x;\bm{x}_I^*,\bm{\mathcal{A}})\beta_I^{(q)} \right) \left( \sum_{J=1}^{n_2} N_J(y;\bm{y}_J^*,\bm{\mathcal{A}})\gamma_J^{(q)} \right) \\ \nonumber
\text{and} && \sum^{n_1}_{I=1} N_I(x;\bm{x}_I^*,\bm{\mathcal{A}}) = 1, \quad \sum^{n_2}_{J=1} N_J(y;\bm{y}_J^*,\bm{\mathcal{A}}) = 1.
\end{eqnarray}
The gradient descent method is applied to iteratively minimize $\Pi(u^h)$ and solve for all parameters together. In the following numerical examples, we choose the Adam algorithm \cite{kingma2014adam}, i.e.,
\begin{framed}
\begin{enumerate}
\item Initialization: Set the number of modes $Q$, initial nodal positions $x_I,y_J$, $I=1,2,\cdots,n_1$, $J=1,2,\cdots,n_2$, initial coefficients $\beta^{(q)}_I,\gamma^{(q)}_J$, $q=1,2,\cdots,Q$, the maximal iteration step $M$, and the step counter $k=1$
\item Algorithm:
While $k\leq M$ do
\begin{enumerate}
\item Compute the gradients $\dfrac{\partial \Pi}{\partial x_I}, \dfrac{\partial \Pi}{\partial y_J}, \dfrac{\partial \Pi}{\partial \beta_I^{(q)}},
\dfrac{\partial \Pi}{\partial \gamma_J^{(q)}}$, $I=1,\cdots,n_1$, $J=1,\cdots,n_2$, $q=1,\cdots,Q$;
\item Update $x_I,y_J,\beta_I^{(q)},\gamma_J^{(q)}$, $I=1,\cdots,n_1$, $J=1,\cdots,n_2$, $q=1,\cdots,Q$ by using the Adam algorithm;
\item $k \leftarrow k+1$;
\end{enumerate}
End while.
\end{enumerate}
\end{framed}
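A minimal, self-contained version of the boxed procedure is sketched below in Python. The finite-difference gradient is only a placeholder assumption; in practice the derivatives of $\Pi$ with respect to nodal positions and coefficients would be obtained by automatic differentiation of the DNN representation. The hyperparameters are the common Adam defaults.
\begin{verbatim}
import numpy as np

def adam_minimize(energy, p0, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-8, max_iter=5000):
    """Minimize energy(p) over a flat parameter vector p, which stacks
    nodal positions x_I, y_J and mode coefficients beta_I^(q), gamma_J^(q)."""
    p = p0.copy()
    m = np.zeros_like(p)      # first-moment estimate
    v = np.zeros_like(p)      # second-moment estimate

    def grad(p, h=1e-6):      # finite differences as a placeholder for
        g = np.zeros_like(p)  # the analytic/autodiff gradient of Pi
        for i in range(len(p)):
            dp = np.zeros_like(p); dp[i] = h
            g[i] = (energy(p + dp) - energy(p - dp)) / (2 * h)
        return g

    for k in range(1, max_iter + 1):
        g = grad(p)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** k)   # bias correction
        v_hat = v / (1 - beta2 ** k)
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return p
\end{verbatim}
All parameters are updated simultaneously, which is the "solve for all parameters together" strategy of the boxed algorithm.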
\subsection{Error bounds for HiDeNN-PGD}
Having proposed HiDeNN-PGD in this section, we can now include it in our comparison of the several methods. The relationship of the interpolation function sets is illustrated in Fig. \ref{fig:Relationship}. HiDeNN-PGD requires the mesh to be regular, while HiDeNN optimizes nodal positions freely, so $\mathcal{G}^h_Q \subset \mathcal{H}^h$, $\forall Q \in \mathbb{N}$. On the other hand, HiDeNN-PGD has more DoFs than CD for the same number of modes, so $\mathcal{M}^h_Q \subset \mathcal{G}^h_Q$, $\forall Q \in \mathbb{N}$. Referring to Section 3, when $Q$ is large enough ($Q\geq\min\{n_1,n_2\}$ in 2D and $Q\geq\min\{n_1 n_2,n_2 n_3, n_3 n_1\}$ in 3D), we have $\mathcal{M}^h_Q=\mathcal{V}^h$. Thus,
\begin{equation}
\mathcal{M}^h_Q=\mathcal{V}^h\subset\mathcal{G}^h_Q \subset \mathcal{H}^h \subset \mathcal{N}^h,
\end{equation}
with $Q\geq\min\{n_1,n_2\}$ for 2D or $ Q\geq\min\{n_1 n_2,n_2 n_3, n_3 n_1\}$ for 3D.
It leads to
\begin{eqnarray}
\label{eq:ErrorAnalysisUpdate}
&&\left\| u^{CD}-u^{exact} \right\|_{E} \geq \left\| u^{FEM}-u^{exact} \right\|_{E} \geq \left\| u^{HiDeNN-PGD}-u^{exact} \right\|_{E} \\ \nonumber
&&\geq \left\| u^{HiDeNN}-u^{exact} \right\|_{E} \geq \left\| u^{NN}-u^{exact} \right\|_{E}.
\end{eqnarray}
This shows that HiDeNN-PGD may reach better accuracy than FEM with a regular mesh as the number of modes increases. When $Q$ is small, $\mathcal{M}^h_Q$ is a subset of the intersection of $\mathcal{G}^h_Q$ and $\mathcal{V}^h$.
\section{Introduction}
Despite the constantly increasing computer power, numerical simulations of physical systems with numerous degrees of freedom remain computationally prohibitive. Such problems arise frequently in simulation-based engineering applications, and the repetitive manipulation (or modification) of the mesh system has been identified as a key time-consuming issue in the standard finite element method (FEM) \cite{belytschko2013nonlinear}. This has been a motivation for developing isogeometric approaches \cite{hughes2005isogeometric}.
In recent years, deep neural networks (DNNs) have shown some interesting features in handling the solution of physics-constrained systems. The universal approximation theorem \cite{hornik1989multilayer,cybenko1989approximation} and the natural scalability of DNNs have been the foundation of their superior performance for large systems. This has motivated the use of DNNs to approximate the solution of partial differential equations (PDEs) \cite{weinan2017deep,raissi2019physics,li2019clustering}. The recently developed Hierarchical Deep-learning Neural Network (HiDeNN) method \cite{zhang2021hierarchical,saha2021hierarchical} falls within this perspective. HiDeNN constrains the weights and biases of a DNN to mesh coordinates in order to build multi-dimensional finite element, meshfree, isogeometric, B-spline, and NURBS interpolation functions. HiDeNN allows automatic mesh adaptivity and shows good potential to avoid large mesh systems and the standard time-consuming mesh refinement procedure.
In order to further enhance the efficiency of HiDeNN, this work proposes HiDeNN-PGD, a reduced-order model of HiDeNN based on the proper generalized decomposition (PGD).
The PGD-based model reduction methods rely on the idea of separation of variables and are usually written in the format of a canonical decomposition. This kind of method was originally proposed in an \textit{a priori} setting \cite{ammar2006new,chinesta2011short,bhattacharyya2018multi,modesto2015proper}, in which the separated functions are computed on-the-fly by solving the PDEs, and it has gained increasing popularity in recent years. To overcome the intrusiveness and extend the applicability of the method, the \textit{a posteriori} data-driven PGD \cite{lu2018adaptive,lu2019datadriven,lu2018multi,diez2018algebraic} has also been developed more recently. In contrast to \textit{a priori} PGD, the \textit{a posteriori} method uses a database to learn the separated functions and thus can be used as a regression for constructing reduced-order surrogate models.
In our work, we adopt the same idea of separation of variables, in particular the separation of space variables, for solving PDEs in the HiDeNN framework, leading to the so-called HiDeNN-PGD method, which is expected to have fewer degrees of freedom with high accuracy. Indeed, the space-separated PGD, leading to lower-dimensional space functions, is usually considered for reducing the computational complexity of 3D separable domains (see e.g. \cite{bognet2014separated}). However, its convergence with respect to mesh refinement and the number of modes has been less studied.
We investigate this convergence aspect of the PGD approach in this paper. Based on the approximation function spaces, we analyze the numerical error and convergence associated with the different approaches and compare their error bounds. It can be shown that HiDeNN-PGD is more accurate than both FEM and conventional PGD, thanks to the adaptivity achieved by HiDeNN. To further enhance the optimality of the PGD modes, we suggest fixing the number of modes first and solving for all of them together. Hence, HiDeNN-PGD can require fewer modes than PGD. This is advantageous for high-dimensional problems where the optimality of modes is crucial. The numerical examples confirm our theoretical analysis. In addition, we numerically investigate the relationship between the approximation error and the modes and propose a strategy to select the prescribed mode number in HiDeNN-PGD. The proposed HiDeNN-PGD shows a high potential to achieve high-performance computing with high accuracy.
The paper is organized as follows. Section 2 gives a brief overview of different numerical methods for partial differential equations (PDEs), in which the approximation function spaces are described. The error analysis based on a class of PDEs is given in Section 3. Section 4 presents the proposed HiDeNN-PGD method. Section 5 provides some numerical examples and discussions. Finally, the paper closes with some concluding remarks.
\input{Part1_ReviewAndAnalysis}
\input{Part2_HiDeNNPGD}
\input{NumericalExamples}
\section{Conclusion}
A reduced-order hierarchical deep learning network has been proposed. The so-called HiDeNN-PGD is a combination of HiDeNN and PGD with separated spatial variables. This combined method presents several advantages over the HiDeNN and PGD methods. First, it leverages the automatic mesh adaptivity of the HiDeNN method to reduce the mode number in the PGD approximation. Second, combining PGD with HiDeNN significantly reduces the number of degrees of freedom of HiDeNN and potentially leads to a high computational efficiency. Furthermore, we have demonstrated that both HiDeNN and HiDeNN-PGD can provide more accurate solutions than FEM and PGD (or CD), through an error analysis based on the approximation function spaces.
The numerical results have confirmed the mathematical analysis. These examples have been performed on 2D and 3D Poisson problems. It is shown that the proposed HiDeNN-PGD method can provide accurate solutions with the fewest degrees of freedom. In order to guide the choice of the prescribed number of modes in HiDeNN-PGD, we have studied numerically the convergence rate of the PGD approximation. It has been found that the convergence rate with respect to the mode number is insensitive to the mesh size. Therefore, we can expect to use a coarse-mesh PGD to compute a rough estimate of the mode number for HiDeNN-PGD. This finding is interesting and provides a useful guideline on the choice of the number of modes for HiDeNN-PGD or other PGD-based methods that may require better optimality in terms of basis.
In the future, the computational efficiency of HiDeNN-PGD will be further explored based on realistic problems, such as thermo-mechanical analysis in additive manufacturing or multi-scale composite simulations. In terms of convergence studies, theoretical results need to be derived through a rigorous mathematical analysis. The numerical results provided in this paper can serve as the first evidence for demonstrating the capabilities of the method.
\section*{Acknowledgement}
L. Zhang and S. Tang are supported by the National Natural Science Foundation of China under grant numbers 11890681, 11832001, 11521202 and 11988102.
W. K. Liu and Y. Lu are supported by the National Science Foundation under grant numbers CMMI-1934367 and CMMI-1762035.
\section{{Propagating} Life over Interstellar Distances}
\subsection{Introduction}
Since the inception of the space programs, thousands of spacecraft have flown around Earth and into the solar system.
However, spacecraft in interstellar space, despite the prospect of new discovery and the promises of science fiction, are
few and far between---fewer than five have ventured beyond the heliopause. This discrepancy is explained in part by
distance; it takes decades for spacecraft to travel the 18 billion kilometers ($\sim$120 AU) to interstellar space
with traditional chemical propulsion. Project Starlight (DEEP-IN), a NASA Innovative Advanced Concepts (NIAC) program,
promises both to radically increase maximum spacecraft speed and to redefine the economics of space
propulsion~\citep{Lubin2016}. This requires the multi-decade development of a system that employs a large-aperture
standoff directed-energy laser-phased array, henceforth referred to as a ``directed energy (DE) system''. By illuminating
a reflector (laser sail) mounted to a spacecraft, it is possible to achieve velocities that are significant fractions of
the speed of light. The key is to leave the ``propulsion at home'' and use direct photon momentum exchange to achieve
relativistic speeds. What took the \textit{Voyager} and \textit{Pioneer} space probes decades to travel to interstellar
space will take the next generation of spacecraft days---and without the need for on-board propellant.
The reason we have not had any serious proposals for relativistic flight to achieve interstellar capability is that no realistic technology has existed until recently. All chemical and nuclear propulsion strategies lack the energy per mass to achieve relativistic flight. Only antimatter and photon propulsion are consistent with current physics and engineering, and antimatter lacks engineering options for realization. This leaves only photon propulsion as a realistic option. Driven by a wide variety of needs, photonics, like electronics, has grown exponentially in capability and dropped exponentially in cost, with a doubling time in capability and cost reduction of about two years. Project Starlight uses this exponential growth in photonics as the basis for the program.
With relativistic flight on the horizon, the question remains: Why should we develop the technology to send spacecraft
into interstellar space? The answer lies in two general areas. One is the human drive to understand and explore, which is
partly logical and not easily expressed in logic. The other reason is vastly more practical: the same technology we use to
enable relativistic flight also enables a transformative range of possibilities for space exploration. This includes rapid
high-mass transport in space and human-capable missions in the solar system and beyond. Other benefits, such as planetary
defense options and extremely large unique optical phased array telescopes, are also foreseen. From the perspective of
exoplanet exploration, sending small telescopes close to exoplanets allows vastly higher spatial resolution than we can
achieve with conceivable Earth or space-based telescopes~\citep{Lubin2016}.
The combination of spacecraft swarming -- launching large numbers of small probes -- with Earth and orbital assets is a synergistic approach to exploration.
\subsection{What is Interesting to Send Out?}
What is interesting to send out to interstellar space? There is precedent for deliberately sending out information to the
stars, an action known as Active SETI (Search for Extraterrestrial Intelligence) or messaging to extraterrestrial
intelligence (METI). In 1972 and 1973, \textit{Pioneer 10} and \textit{Pioneer 11}, respectively, were launched into space
with plaques depicting the human form, the location of the solar system and Earth in relation to pulsars, and more.
Similarly, the Arecibo message, a radio transmission sent to globular cluster M13 in 1974, reproduced the human form and
Earth's location. Three years later, the \textit{Voyager} spacecraft carried golden records into space, which included
sounds and images from Earth. In 2017 and 2018, sets of ten-second audio transmissions were sent to Luyten's star in the
art project S\'onar Calling GJ273b. The contents of Active SETI messages -- and the notion that they should be sent out at
all -- are highly debated; the inquisitive reader will find more detail in an overview by Musso~\citep{Musso2012}.
Like the artifacts of human societies separated from us by time, the contents of a laser-sail spacecraft may provide the only insight into humankind. Not only will it be deeply reflective of our inherent values as a species, but the information may direct extraterrestrial life's first impression of humanity.
While previous interstellar missions have sent messages, the options for what can be sent via a swarm of laser-sail spacecraft are wider in scope. With the cost of fabricating each spacecraft decreasing over time, and aside from an initial upfront investment to build the array, it will be relatively inexpensive to send out multiple spacecraft. \textbf{Together with a thousand-fold decrease in the time it takes to reach extrasolar space, life -- particularly living microorganisms -- can be studied in the interstellar space environment for the first time.} Launching ``biosentinel'' experiments into empty interstellar space provides research possibilities that will more fully characterize biological systems and pave the way for human travel to the stars.
In this work, we adopt the definition of life provided by NASA's Astrobiology Institute: that which is ``a self-sustaining
chemical system capable of Darwinian evolution''~\citep{Benner2010DefiningLife.}.
Understandably, this is limited by our terrestrial context and is subject to change in the future given a nuanced or
drastically different version of life is found outside of Earth. For more complex life, including \textit{Homo sapiens},
to safely embark on interplanetary and interstellar missions, we must know more about life's ability to endure the
deep-space environment. However, we cannot obtain relevant data from ground-based studies or in low-Earth orbit (LEO).
While instruments for simulating microgravity exist, such as ground-based clinostats and rotating wall vessel bioreactors,
they cannot fully reproduce the lack of structural deformation, displacement of intracellular components, or reduced mass
transfer associated with the proposed spaceflight environment~\citep{Horneck2010}.
These experiments also lack the coupled effects of microgravity and space radiation, which may elicit different biological
responses than exposure to only one stressor~\citep{Moreno-Villanueva2017}. LEO studies do not provide a suitable
experimental environment for understanding biological responses to interplanetary/interstellar radiation either; the
radiation dose rate is substantially higher in interplanetary space than inside LEO (i.e.,~in the International Space
Station (ISS)) with comparable shielding, and the quality factor ($Q$) is substantially larger outside of LEO with
light shielding~\citep{Straume2017}. Thus, only biosentinel experiments can provide the necessary data.
\section{Engineering Vision and Challenges}
Starlight's directed-energy system is both modular and scalable, allowing a logical development and build progression from very low-mass wafer-scale spacecraft to much larger and more capable spacecraft. Even small wafer-scale spacecraft attached to a laser sail (Figure~\ref{fig:DEEP-laser-sail}) present a range of options that we explore here. As Starlight is not limited to small spacecraft, the long-term future opens this technology to a variety of mission options. The initial small spacecraft become scouting or sentinel missions much like our solar system exploration programs. The space environment and the interstellar distances present engineering challenges which demand reliable solutions for the success of interstellar biosentinel experiments. These challenges include the need for propulsion and robust communication, which influence payload selection and experimental design.
\subsection{Propulsion}
The NASA Starlight program has developed a standoff laser-phased array that is scalable to the sizes needed for relativistic flight (Figure~\ref{fig:DEEP-laser-sail}). Since the primary directed-energy system is modular and scalable from handheld to kilometer scale, the program allows a ``roadmap to interstellar flight'' with milestones and applications at each step. A system capable of achieving relativistic flight at 0.25\textit{c} for wafer-scale spacecraft, for example, would require a directed-energy power level on the order of 100~GW for a few minutes per launch, with a DE array size of order 1--10~km. The power scale is large (about 1/10 of the US electrical grid), but it is only used for a few minutes per launch for low-mass missions. The energy needed per launch is about 0.01 percent of the US energy used per day. The proposed ``roadmap'' starts with Earth-based systems, such as those we propose in the NASA Starlight program, and will evolve to orbital and lunar-based systems. This is a long-term transformational program, not a single-use program. Initial variants that are 1--10~m DE arrays are immediately useful for powering and propelling (via DE-coupled ion engines) CubeSats, as well as for long-distance power transfer to LEO and geostationary Earth orbit (GEO) assets. They will also be useful against space debris and for early planetary defense applications. Since the DE array is scalable and each module is identical, mass production and economies of scale will greatly aid the program. In addition to the exponential decrease in cost with time, the cost in large quantities is also lower. This is much like the cost of computation, which has dropped by more than a factor of a million per unit of performance (FLOP) over the past 40~years.
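The photon-momentum arithmetic behind these numbers is straightforward to reproduce. In the Python sketch below, the gram-scale total mass and perfect sail reflectivity are assumptions, and relativistic corrections are ignored; it is intended only as an order-of-magnitude check.
\begin{verbatim}
c = 299_792_458.0          # speed of light, m/s

P = 100e9                  # DE power on the sail, W (order given in text)
R = 1.0                    # sail reflectivity (idealized assumption)
m = 0.003                  # spacecraft + sail mass, kg (assumed gram-scale)

F = (1 + R) * P / c        # photon-pressure force: 2P/c for a reflector
a = F / m                  # non-relativistic acceleration
v_target = 0.25 * c

t = v_target / a           # time to 0.25c (ignoring relativistic effects)
E = P * t                  # DE energy expended per launch

print(f"thrust = {F:.0f} N")
print(f"accel  = {a:.2e} m/s^2")
print(f"time   = {t/60:.1f} min to 0.25c")
print(f"energy = {E:.2e} J per launch")
\end{verbatim}
With these assumed values the thrust is roughly 670~N, the acceleration a few $10^5$~m/s$^2$, and the illumination time a few minutes, consistent with the figures quoted above.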
\begin{figure*}
\begin{subfigure}{\textwidth}
\centering
\centering
\includegraphics[width=0.65\textwidth]{fig1a.jpg}
\caption{}
\label{fig:DEEP-laser-sail}
\end{subfigure}
\hfill
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.65\textwidth]{fig1b.jpg}
\caption{}
\label{fig:DE-STAR}
\end{subfigure}
\caption{\bf{Directed Energy Propulsion of a Light Sail.} \bf{(a) }\normalfont{A light sail and payload propelled into interstellar space by directed energy laser propulsion. Emitted photons from a standoff laser array on the surface of the Earth (space-based laser arrays are also possible) impart momentum on the sail by reflection so as to accelerate the spacecraft up to relativistic speeds.} \textit{Artist's rendition.} \bf{(b) }\normalfont{The laser array is composed of many small, modular sub-elements which can be articulated, switched off, and added so as to enable a large mission space. As the capability of directed energy propulsion grows, relativistic flight will become possible.}}
\end{figure*}
\subsection{Communication}
Data acquisition from interstellar experiments requires a robust downlink (spacecraft-to-Earth) with a baseline laser
communications link. This poses further challenges to successful biosentinel experiment execution, including the severe
limitations on the spacecraft's available electrical power, processing, and transmit systems. Other challenges include
high propagation loss from diffraction, interfering background radiation, and ground-based reception issues related to
atmospheric turbulence, scattering, and weather. To avoid data volume limitations, careful attention must be paid to: (1)
rendering the communication link as energy efficient as possible, (2) aggressive compression of the data, and (3) use of
reception optics at larger aperture sizes~\citep{Lubin2016,Messerschmitt2018}. Larger distances and
power losses may be compensated by transmitting at a lower data rate and longer times, with a possible mission design to
include a nuclear battery. Such a nuclear battery, employing a traditional radioisotope thermoelectric generator (RTG) or thermophotovoltaics, can be designed to fit the ultra-low mass
requirements~\citep{Pustovalov1999,Wilt2007,Hein2017}. Sub-gram Pu-238 RTGs were successfully used in human pacemakers lasting many decades. A data uplink (Earth-to-spacecraft) is also possible and may be
required to configure the transmit wavelength for variations in spacecraft speed. To increase reliability, a swarm of
small wafer-scale spacecraft ($>$1000) can be launched, increasing the redundancy at relatively minimal economic burden with
each additional spacecraft. A highly selective optical bandpass filter will mitigate the radiation from the Earth's
atmosphere (if ground-based), our solar system dust emission and scattering, and nearby stars to improve communication.
Communications issues are further discussed in various papers from within the directed energy
community~\citep{Lubin2016,Messerschmitt2018}.
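A back-of-the-envelope diffraction-limited link budget shows why the downlink is so photon-starved. Every number in the Python sketch below is an assumption chosen for illustration (1~W transmit power, a 10~cm on-board aperture, a 1~km receive array, a 1~$\mu$m wavelength, and roughly the distance of Proxima Centauri); pointing, atmospheric, and detector losses are ignored, as are order-one prefactors.
\begin{verbatim}
h = 6.626e-34              # Planck constant, J*s
c = 2.998e8                # speed of light, m/s

P_t = 1.0                  # transmit power, W (assumed)
lam = 1.0e-6               # laser wavelength, m (assumed)
D_t = 0.10                 # spacecraft transmit aperture, m (assumed)
D_r = 1000.0               # ground receive aperture (phased array), m (assumed)
L   = 4.24 * 9.461e15      # link distance: ~4.24 ly, in m

# Diffraction-limited spot diameter at the receiver ~ lam * L / D_t, so
# the received fraction scales as (D_r * D_t / (lam * L))^2.
frac = (D_r * D_t / (lam * L)) ** 2
P_r = P_t * frac

photon_energy = h * c / lam
rate = P_r / photon_energy

print(f"received power      = {P_r:.2e} W")
print(f"photon arrival rate = {rate:.1f} photons/s")
\end{verbatim}
Under these assumptions only tens of photons per second arrive at the receiver, which is why aggressive compression, energy-efficient modulation, and large receive apertures are emphasized above.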
\section{Physiological Challenges Posed by the Space Environment}
The space environment poses unique challenges for the survival of biological systems and their study. Lifeforms sent into interstellar space will be exposed to microgravity, hypergravity (during launch and propulsion phases), and space radiation. In the absence of proper environmental control, exposure to vacuum and extreme temperatures is also possible. Here, we briefly review relevant ground-based and space-based experiments to aid the development of experimental design heuristics for interstellar space biology.
\subsection{Microgravity}
In microgravity, convection is driven predominantly by surface tension, resulting in unique patterns of heat and mass
transfer~\citep{Napolitano1984}.
Such environmental conditions give rise to stress responses in several species of microorganisms, including the formation
of biofilms in \textit{Pseudomonas aeruginosa}~\citep{Kim2013}.
It is well known that biofilms promote the survivability of cells by allowing nutrients to be shared and by protecting cells from external hazards such as desiccation and antibiotics. Species that are better suited to biofilm formation (e.g.,~species of bacteria, fungi, and protists) may at present fare better for interstellar travel.
Another consequence of surface-driven convection is a decrease in fluid mixing, which has been shown to induce local
anoxia~\citep{Briarty2004}.
In bacterial species, these conditions elicit a shortened period of adjustment to the environment (lag phase) and an
increased final cell count when compared with ground-based controls~\citep{Horneck2010}.
However, biological responses associated with decreased fluid mixing are not observed in bacterial species if the growth
medium and strain are conducive to cell motility (i.e.,~by the movement of flagella, etc.)~\citep{Benoit2007}.
In cells of higher organisms, such as plants and animals, a common stress response to microgravity is the downregulation
of genes controlling reactive oxygen species (ROS) levels~\citep{Mahalingam2003}.
Heightened ROS levels in microgravity have been observed in \textit{Brassica napus} (rapeseed) protoplast cells as well as
in \textit{Ceratopteris richardii} (fern) spores, thought to be a result of decreased peroxidase
activity~\citep{Salmi2008}.
Other notable responses include changes in cell membrane structure and function, transcriptome, proteome, cell wall
structure, and Ca$^{2+}$ signaling~\citep{Kordyum2017}.
\subsection{Hypergravity}
While microgravity is characteristic of the deep space environment, spacecraft launch and directed-energy propulsion will
expose biological systems to hypergravity. Microgravity research on Earth requires setups such as drop tubes and parabolic
flights, whereas hypergravity experiments require only a centrifuge, which can simulate gravitational accelerations within
the range of our spacecraft ($10^4$ to $10^6$ g). Tardigrade species are adversely affected by hypergravity but
are more resistant than larger organisms, such as the fruit fly, \textit{D. melanogaster}, while species like \textit{C.
elegans} fare quite well under hyperacceleration~\citep{LeBourg1999,DeSouza2018,Vasanthan2017}.
However, tardigrades can be launched in a cryptobiotic state, with the hope that this will mitigate deleterious
acceleration-dependent effects of hypergravity on their survival rate~\citep{Vasanthan2017}.
\subsection{Space radiation}
We do not envision biological payloads spending extended periods of time in LEO, but we must consider the effects of
radiation during launch. There are three sources of radiation within the low-Earth orbit environment: galactic cosmic
radiation (GCR), solar cosmic radiation (SCR), and trapped radiation as a result of GCR and SCR interactions with Earth's
magnetic field and atmosphere~\citep{Reitz2008}.
Outside of low-Earth orbit, where the vast majority of exposure occurs, there are two sources of significant radiation
concerns. The high-energy GCRs consist of a nucleus of nearly every element on the periodic table as well as electrons,
gamma rays, and neutrinos. The distribution is largely a power law with flux peaking around 1 GeV/nucleon and decreasing
rapidly at higher energies. The primary biological damage, in the form of DNA damage and increased cytotoxicity, comes
from free protons rather than higher-Z nuclei because protons are more numerous, with some studies showing survival rates
of cells falling to 10\% around 5~Gy~\citep{AlanMitteer2015ProtonSpecies}.
The approximate dose from GCR protons is about 6 rad/yr and is extremely difficult to shield against due to their high
energy and penetration; however, the dose is modest enough to not be catastrophic over the multi-decade missions required.
The more serious source of radiation is unique to relativistic missions due to the high-speed collisions with interstellar
medium (ISM). The ISM is dominated by protons and electrons with a density of about 0.25/cc as well as heavier elements at
a much lower density~\citep{Draine2010}.
The ISM particles are largely in thermal equilibrium with a temperature of approximately 8000 K. The particle energies in
the ISM rest frame are very low (${\sim}1~{\rm eV}$) and are not a radiation hazard, but when a relativistic spacecraft impacts
the ISM particles, the radiation hazard on the forward edge becomes extreme with a nearly mono-velocity distribution in
the frame of the spacecraft. This ISM ``impact radiation'' is extremely angle-dependent on the frame of the
spacecraft~\citep{Lubin2020,Cohen2020}.
It is analogous to driving through the rain. For this reason the spacecraft is thin and oriented edge-on like a
``Frisbee'' with the forward edge receiving an extremely large radiation dose (Gigarad/year), while the radiation exposure
on the rearward portion of the spacecraft (where the electronics and biology are located) receive vastly less radiation
that allows a mission to proceed~\citep{Brashears2016}. Further research is needed to confirm if this orientation is
self-stabilizing.
\subsection{Vacuum}
An interstellar mission may have both hermetic enclosures as well as areas open to vacuum. For these apparatus designs, or
in the event of hermetic failure, vacuum compatibility may also help determine experimental viability of species in
interstellar space. Two species of lichen, \textit{Rhizocarpon geographicum} and \textit{Xanthoria elegans}, were found to
withstand the full space environment for two weeks, with no permanent loss of photosynthetic
capabilities~\citep{Sancho2007}.
The tardigrade species \textit{Richtersius coronifer} and \textit{Milnesium tardigradum}, as well as \textit{Artemia}
brine shrimp cysts, were also found to be vacuum-resistant, albeit for less
time~\citep{Jonsson2008,Hernandorena1999AEnvironment}.
It is important to note that the coupled effect of space radiation with vacuum exposure significantly decreased survivability of the tardigrade species.
\begin{figure*}[h]
\centering
\includegraphics[width=0.75\textwidth]{fig2.jpg}
\caption{\bf{Metabolic rate (MR) and mass for various groups of living organisms.} \normalfont{Despite the vast diversity of species, we observe a near universal energy requirement per unit mass of tissue. This generalization excludes species capable of cryptobiosis (such as tardigrades, brine shrimp, and \textit{Chironomidae}), which exhibit virtually no metabolic activity while in a state of suspended animation, making them better suited for interstellar flight \citep{Geiser2004MetabolicTorpor,Anderson1970MetabolicSpiders.,Anderson1982RespiratorySpiders,Greenstone1980ForagingSpiders,Lighton1995MassStrategists,Clarke1999ScalingFish,Makarieva2005EnergeticsWhales,Zeuthen1947BodyMicrofauna,Heitmann2001MetabolicTemperature,Klekowski1989OxygenSpitsbergen,Ikeda1982OxygenComposition,CLEGG1964TheSalina,GLASHEEN1989MetabolicWater,Crowe1977AnhydrobiosisActivity,Loomis1996MetabolismStates,Okuda2006Trehalose:Vanderplanki,Ricci2008VolumeAnhydrobiosis,Tong2000EffectEdwardsii,Tomlinson2018MeasuringRespirometry,Jonsson2010TrehaloseTardigrades,Anderson1978ResponsesDeprivation,VanAardt2010TheNematode,Wieser1961EcologicalMassachusetts}}.}
\label{fig:metabolic-rate}
\end{figure*}
\section{Species Selection Criteria}
Research success hinges upon practicality and survivability, lending metrics as a basis of comparison between species: low metabolic rate per unit mass, cryptobiotic capability, and high radiation tolerance.
\subsection{Low metabolic rate}
Launch economics in the near future limit viable organisms to the low-mass regime to ensure achievement of relativistic flight. Due to these engineering constraints, the amount of sustenance we can bring on board is currently severely limited. To survive extended periods of time on a spacecraft where nutrients are scarce, a species will fare better if it exhibits a low metabolic rate as compared to other organisms of similar size. A plot of sub-micron to macroscopic species energy usage per gram of body weight is shown in Figure~\ref{fig:metabolic-rate}. Species at or near zero energy usage are able to undergo cryptobiosis---a state marked by severely reduced metabolic activity in response to adverse environmental conditions. In order to keep biological systems alive outside of cryptobiosis (e.g., once a spacecraft reaches a target solar system), we require an appropriate environment: pressure, temperature, radiation shielding, nutrients, and waste flow must all be tightly controlled. Current engineering constraints limit the biology available for the interstellar journey, favoring low-maintenance organisms that require less nutrients -- and less payload mass -- to sustain them during the long-duration space travel.
\subsection{Cryptobiotic capability}
To transport life across vast distances in space, an ideal species would undergo cryptobiosis at the beginning of the
mission and would be brought out of hibernation when the destination is reached. Unlike the hibernation of bears or other
animals, which is a lowered metabolic state with energy being consumed from internal storage, organisms in cryptobiosis
maintain extended periods of undetectable metabolic rate. Organisms enter cryptobiosis in response
to extremely harsh environmental conditions such as a lack of water (anhydrobiosis), freezing temperatures (cryobiosis),
fluctuating osmotic conditions (osmobiosis), or a lack of oxygen
(anoxybiosis)~\citep{Withers2018Dormancy,Clegg2002CryptobiosisOrganization}. It is of philosophical
interest to debate whether organisms with no detectable metabolism are ``alive''. To this point, dormant seeds are still
considered alive, though they have largely halted metabolisms until water is imbibed by hydrophilic proteins.
We expect species with cryptobiotic capability to have a higher survival rate during the interstellar travel, as lowered metabolism will confer added protection against the space environment.
Human cryptobiosis has not been achieved but is currently being explored, with avenues such as cryonics and vitrification. Human biology, in addition to many other unfavorable characteristics for long-duration flight, is not well-suited to achieving true cryptobiosis. Of the life forms that are capable of cryptobiosis, only a few are known to be tolerant of spaceflight. These include tardigrades, \textit{C. elegans}, brine shrimp, insects such as \textit{Chironomidae}, most seeds, and microorganisms such as brewer's yeast.
\subsection{High radiation tolerance}
Without the protection of Earth's atmosphere and magnetic field, organisms are subject to much higher fluxes of SCR and
GCR~\citep{Kennedy2014}. The sun produces mostly visible, infrared, and ultraviolet radiation, but higher energetic solar
particle events (SPE) composed of gamma rays, X-rays, protons and electrons do occur. Physical shielding in a tiny
spacecraft can reduce the damage done by ultraviolet radiation, but the other types of radiation produced during SPE can
pass through most lightweight materials, causing radiation sickness and cancer. The vast differences between species'
radiotolerance levels, as shown by Figure~\ref{fig:radiation_tolerance}, have led to comparative studies to determine
causality. Harrison and Anderson observed four physiological trends in species with high radiotolerance: lower nuclear
material content in cells; the abilities to repopulate cells; regenerate tissue and organs; and the ability to repair
DNA~\citep{Harrison1996}.
The last of these trends may be correlated with desiccation tolerance~\citep{Musilova2015IsolationResistance},
but this is disputed~\citep{BebloVranesevic2018}. Epigenetic factors may also play a role in DNA damage
response~\citep{Montenegro2016TargetingCancer}.
Naturally, the higher the radiation tolerance of an organism, the better it will fare in the space environment. Selecting for species that are better suited to cope with radiation increases the likelihood of success of any on-board experiments.
High-energy neutrons produced by the interaction of the ISM at the forward edge as well as galactic cosmic radiation with
the spacecraft are also important to consider for interstellar travel. This is discussed in detail
in~\citep{Cohen2020}. Neutrons can penetrate large distances and damage living organisms~\citep{Kuhne2009}. When these
types of neutrons interact with matter, they can also produce other charged particles and gamma rays. They can cause
clustered and complex DNA lesions that are more difficult to repair than those caused by X-rays, gamma rays, or alpha
particles~\citep{Cucinotta2001}. Numerous studies have reported the induction of mutations, chromosomic aberrations,
reduced survival, cell apoptosis, carcinogenesis, and other effects in multiple different life forms exposed to energy
ranges from 1~MeV to 1000
MeV~\citep{Moreno-Villanueva2017,Kuhne2009,Cucinotta2001,2010IARCHumans,Gersey2007,Nelson1989,IARC2009ACarcinogens}.
Selecting a species for high radiation tolerance also necessarily includes a discussion of mutation rate. Mutation rate is
defined by the number of mutations that occur per genome replication, and is largely determined by the fidelity of
polymerase activity. There are different types of mutations and correction mechanisms, such as proofreading or mismatch
repair to fix errors, that cause mutation rates to vary extensively across taxa and even across the genome
itself~\citep{Hodgkinson2011VariationGenomes}. Some organisms, such as \textit{Paramecium tetraurelia}, have incredibly
low mutation rates~\citep{Sung2012ExtraordinaryTetraurelia}, whereas viruses like the DNA virus Hepatitis
B~\citep{Caligiuri2016OverviewInfection} have very high mutation rates. As the organisms on board will mostly be in a
cryptobiotic state, the intrinsic rate for each individual species is not of major concern. The concern is the organism's
ability to accumulate mutations from exogenous factors, particularly radiation. The organisms on board should not
accumulate so many mutations during travel as to no longer be viable once the time comes for them to be revived from
cryptobiosis. Even with proper shielding, there will be increased exposure to radiation that may increase the number of
mutations past an unacceptable threshold. In the event this occurs, a species with an already low mutation rate in
addition to DNA repairing capabilities would fare better. An organism like the tardigrade \textit{Ramazzottius
varieornatus} has demonstrated the ability to repair UVC-induced thymine dimers after revival from desiccation and
exposure to fluorescent light~\citep{Horikawa2013AnalysisRadiation}. As such, it is a desirable type of DNA repair and
radiation tolerance mechanism for on-board study. An additional consideration for species selection is the presence of
redundancies in DNA.
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{fig3.jpg}
\caption{\small{In the case of gamma radiation, most organisms exhibit LD$_{50}$ (median lethal dose) values on the order of $10^1$ -- $10^2$ Gy; however, certain nematode, fungi, rotifer, tardigrade, bacterial, and archaeal species demonstrate much higher tolerances ($\sim10^3$ -- $10^4$ Gy) \citep{Jonsson2018EnvironmentalTolerance,Gladyshev2008,Horikawa2006,Jolivet2003,Watanabe2006,Grynberg2012TrypanosomaRadiation,Dvorak2009PossibilitiesArtemia,Jeong2015EffectsPathogens}. Even across species, large differences in LD$_{50}$ values exist, as can be seen between the three represented tardigrade species \textit{M. tardigradum}, \textit{R. coronifer}, and \textit{H. dujardini}.}}
\label{fig:radiation_tolerance}
\end{figure*}
\subsection{Selected Species}
\subsubsection{Tardigrades}
Tardigrades, colloquially called ``water bears'' or ``moss piglets'', are microscopic aquatic organisms known for their
robustness. They are found in a variety of habitats ranging from freshwater environments to the ocean and across all seven
continents~\citep{Tsujimoto2016}.
Although not true extremophiles due to their ability to endure but not thrive in harsh conditions, tardigrades have been
shown to withstand droughts, freezing temperatures, high levels of radiation, and high-pressure
environments~\citep{Hengherr2009,Ono2008,Rebecchi2007,Mbjerg2011SurvivalTardigrades}. Tardigrades can enter an
anhydrobiotic state by constricting into a ball known as a tun~\citep{Gagyi-Palffy2011}. The tardigrades' tolerance to
severe environmental factors is contingent on their ability to undergo cryptobiosis. In addition to added protection, the
drastically lowered metabolic needs of the organism while in a cryptobiotic state, suggested to be less than 0.01\% of the
active metabolism~\citep{Piper2007}, make tardigrades ideal for a journey wherein few resources can be available.
Dehydrated and hydrated tardigrades have already been exposed to space conditions in low-Earth orbit in various
projects~\citep{Jonsson2008,Kennedy2011,Jonsson2016,Persson2011,Rebecchi2009,Rebecchi2011}, and have shown high survival
rates in open space while shielded from UV radiation~\citep{Jonsson2008,Jonsson2016,Rebecchi2011}. Tardigrades are highly
tolerant to $^{56}$Fe ions, a specific type of HZE (high atomic number and energy) particle, and can survive doses as
high as 1000~Gy~\citep{Jonsson2017}. In other organisms, including humans, space radiation and microgravity can initiate
oxidative stress that upsets the homeostasis of the reactive oxidative species (ROS) systems~\citep{Goodwin2018}. In
contrast, the ROS systems of desiccated tardigrades did not show significant differences between samples in space and
controls on Earth~\citep{Rizzo2015}. Although desiccation can initiate oxidative stress, it is well-known that tardigrades
can produce the osmolyte trehalose to protect against dehydration~\citep{Hengherr2008TrehaloseDehydration}. Additionally,
tardigrades have intrinsically disordered proteins that generate a glass-like structure during desiccation to help prevent
cellular damage~\citep{Franca2007,Boothby2017}. The tardigrades' proven capacity for maintaining normal system functioning
in extreme conditions may increase its probability of survival for longer periods in space.
The induction or downregulation of certain DNA repair functions, cell cycles, and anti-oxidative stress genes has been
shown to be related to tardigrade response to radiation or
desiccation~\citep{Beltran-Pardo2013,Jonsson2007,Wang2014,Forster2012,Hashimoto2017}. The mechanism responsible for the
increased radiotolerance in tardigrades has been studied in great detail~\citep{Chavez2019}, identifying the role of a
unique nucleosome-binding protein, termed Dsup (damage suppressor). Dsup and its orthologs bind to nucleosomes and shield
chromosomal DNA from radical-mediated cleavage, caused by, but not limited to, radiation from UV rays, X-rays, gamma rays,
and reactive oxidative species. This mechanism of protection has been observed on cultured human
cells~\citep{Hashimoto2016} and more recently in transgenic plants~\citep{Kirke2019}, with transgenic animal models
expected to follow shortly. Additional studies are necessary to explore the applications of Dsup to further augment tolerance of
stress-sensitive cells from both plant and animal origins.
\subsubsection{C. elegans}
\textit{C. elegans} is a widely understood multicellular animal whose development and physiology are described at
single-cell resolution. \textit{C. elegans} is optically clear at all stages of its life, allowing the imaging of gene
expression, protein localization, subcellular structures, and cellular physiology in the whole animal at single-cell
resolution by using green fluorescent protein and other spectral variants of fluorescent protein
reporters~\citep{Chalfie1994GreenExpression}.
The worms are small in size; a teaspoon (5 mL) holds approximately 100 million juvenile worms. They have a very rapid life
cycle, with the laboratory strain of \textit{C. elegans} living about 14 days~\citep{Corsi2006AElegans}.
Like tardigrades, \textit{C. elegans} can be placed in suspended animation by either anhydrobiosis or
cryopreservation~\citep{Erkut2013MolecularDesiccation}. Indeed, juvenile worms that have been frozen for many decades recover
immediately and proceed to develop to fertile adults that produce hundreds of offspring within two days. Under unfavorable
growth conditions such as starvation or crowding, young larvae enter an alternative developmental stage (the ``dauer''
state) and can survive for several months without
feeding~\citep{Fielenbach2008C.Plasticity,Padilla2012SuspendedQuiescence}. The ability of these organisms to undergo
cryptobiosis -- in addition to the subsequent lower metabolic needs arising from this state -- make them ideal spaceflight
candidates.
Generating transgenic animals in \textit{C. elegans} is relatively straightforward. This has allowed the development of
several transgenic reporters of neural, muscle, and other cellular activity~\citep{KaymakEfficientMosSCI}. There are
trials being conducted to create a transgenic line of \textit{C. elegans} expressing Dsup, to increase resistance to
ionizing radiation, thereby reducing the overall shielding needed on board the spacecraft. Biological radiotolerance sets
the minimum shielding that must be carried aboard the craft; by increasing radiotolerance genetically, we stand to reduce
payload mass considerably.
\subsubsection{Single-celled organisms}
In addition to studying the effects of microgravity and interstellar space radiation on multicellular organisms, analysis
of single-celled organisms is not only possible but presents an opportunity to study evolution in interstellar space. For
example, in akinete (dormant) stages, \textit{Nostoc spp.} have demonstrated desiccation tolerance~\citep{Katoh2003},
resistance to ionizing radiation~\citep{Potts1999}, and survival in vacuum for over a year~\citep{Arai2008}. Previous
experiments on the ISS such as BIOMEX have further validated the survivability of different strains (CCCryo 231-06, UTEX
EE21, and CCMEE 391) of the cyanobacterium in low-Earth orbit (LEO)~\citep{Baque2016}. As a terrestrial cyanobacterium,
\textit{Nostoc} is capable of nitrogen and CO$_2$ fixation, as well as oxygen production.
Studies of \textit{Deinococcus radiodurans}, a bacterial species with one of the highest known radiation tolerances,
suggest that the same mechanisms responsible for repairing DNA after desiccation are also responsible for repairing DNA
after ionizing radiation damage~\citep{Watanabe2006}. In \textit{D. radiodurans}, multiple genome copies, a high degree of
redundancy in DNA repair enzymes, antioxidant defenses, structural toroids, and protein protection with manganese-rich
peptides all play an important role in protecting against and repairing radiation damage~\citep{Slade2011,Daly2010,Englander2004}.
Such mechanisms appear unique to this bacterial species, as other organisms demonstrate different tolerance mechanisms.
Melanized fungi have likewise been shown to be highly tolerant of radiation, as specimens have been isolated
from irradiated areas~\citep{Zhdanova2004}. The growth of melanized fungi is reportedly also stimulated by the
presence of low-dose radiation, with the electron transfer properties of their melanin increasing in the presence of
radiation~\citep{Dadachova2007,Robertson2012}. Further studies are necessary to confirm these findings, but the use of
radiation to store metabolic energy presents another avenue for experimental research in the interstellar environment.
In any microbial study persisting over the course of several years, evolution of the microorganisms and the ability of the bio-containment system to remain axenic will be important. While this may hinder the study of individual species in isolation, an opportunity arises to study the time evolution of community structure and function of microbial communities in the space environment. Such research will aid the development of bioregenerative life support systems (BLSS) that employ microorganisms for O$_2$ production.
\subsubsection{Other notable organisms and cell types}
Rotifers (particularly bdelloid rotifers) have also been shown to be capable of anhydrobiosis~\citep{Ricci1987}, tolerant
of ionizing radiation~\citep{Gladyshev2008}, and robust to low temperatures~\citep{Shain2016}. Indeed, a core sample of
24,000-year-old Arctic permafrost revealed rotifers that survived cryptobiosis and were able to reproduce (via
parthenogenesis)~\citep{Shmakova2021}.
Mammalian cells are not as robust to the space environment as the previously mentioned organisms. However, understanding
the extrasolar space environment's effects on mammalian cell development and division is one of the first steps to sending
additional life forms out of the solar system. Microgravity and ionizing radiation have been shown to have degenerative effects
on a range of human stem cells~\citep{Blaber2014,Cerutti1974},
but mutation research suggests that mammalian cells may be able to produce an adaptive response when exposed to low levels
of radiation that then strengthens their resistance to subsequent higher levels~\citep{Vijayalaxmi2014}. More research is
needed before we can make confident statements concerning the effect of space conditions on mammalian cells. Human
embryonic kidney (HEK) cells, Chinese hamster ovary (CHO) cells, and fibroblasts are all favorable subjects for such
tests~\citep{Niwa2006}, several of which are ongoing in LEO.
\section{Experimental Opportunities}
\subsection{Range of biology to be tested}
Selecting for species that are physiologically better fit for interstellar travel opens up new avenues for space research.
In testing the metabolism, development, and replication of species like \textit{C. elegans}, we can see how biological
systems are generally affected by space conditions. \textit{C. elegans} and tardigrades are inherently more suited to
space flight as opposed to humans due to factors like the extensive DNA protection mechanisms some tardigrades have for
radiation exposure or the dauer larva state of arrested development \textit{C. elegans} enter when faced with unfavorable
growth conditions. However, as there is substantial overlap between these species and humans (for instance, 80\% of
\textit{C. elegans} proteins have human orthologs~\citep{Lai2000}), we can begin to make some predictions about the
potential for human life in interstellar space.
Although the current size of the interstellar spacecraft restricts the species available for study, the use of \textit{C. elegans} and tardigrades will be able to return statistically significant results. Even if radiation-induced population losses occur, the remaining worm and tardigrade count available onboard is still expected to be large enough to support statistically significant conclusions. In addition, \textit{C. elegans} can easily be cultured on a diet of \textit{E. coli} bacteria or a completely defined synthetic growth medium, while tardigrades can feed on \textit{C. elegans}. This provides us with the opportunity to study the effects of interstellar travel on reproduction, growth, and development of an organism, in addition to the effects on a food chain, in statistically significant numbers.
Given the desire for low power and mass on board the spacecraft, new designs for low mass fluorescent imaging may be needed. We can also harness the ease of transgenesis and optical properties of \textit{C. elegans} to develop transgenic worms with bioluminescent reporters for muscle and neural activity. In addition, we will develop reporters to monitor DNA repair response to exposure to cosmic radiation and study its mutagenic effect and evolution during interstellar space travel. Thus \textit{C. elegans} will provide an unprecedented window into the effects of hypergravity, microgravity, and cosmic radiation associated with interstellar travel on physiology and anatomy of a whole organism at single-cell resolution.
An interesting avenue of research that can be explored is memory retention, specifically in \textit{C. elegans}; the worms
can be trained to associate odors with food and exhibit classic Pavlovian associative learning
behaviors~\citep{Rothman2019}. It has been shown that long-term associative memory lasts up to 40~h, with noticeable
declines beginning after 16~h~\citep{Kauffman2011}. If proven capable of memory retention after extended periods of
cryptobiosis, \textit{C. elegans} can be used in a variety of experiments with time as the limiting variable. Testing is
currently being done to determine the tardigrades' capacity to be classically conditioned.
\subsection{Experimental design}
To house specimens on board, we are developing the design of a simple microfluidics chamber that provides conditions
necessary for reviving and sustaining life (microorganisms of order 200 $\mu$m in length). These dimensions
are large enough to perform remote lab-on-a-chip experiments, but also small enough to be viable for low mass spacecraft.
Polydimethylsiloxane (PDMS) is the typical material in microfluidic applications due to its biocompatibility,
transparency, and ease of fabrication~\citep{Roy2016}, but it is not ideal for the space environment. PDMS has several
disadvantages, including high O$_2$ permeability, strong adsorption of biomolecules, relatively high glass
transition temperature, and poor mechanical compliance. New thermoplastic elastomers are compatible with microfabrication
and biology, mechanically stable, less permeable to O$_2$ than PDMS, and optically transparent (UV to IR
range)~\citep{Lachaux2017ThermoplasticDevices}.
They can be manufactured with diverse glass transition temperatures and either monolithically prepared with an imprinter
or integrated with other candidates, such as glass. Polymerase chain reaction (PCR) on a chip is another area that will
evolve naturally over the next decade and is one of many biological techniques that could be incorporated in designs. It
is also possible that enzymes could be stabilized in osmolytes to perform onboard biochemical reactions. For the study of
life in the interstellar environment, it is necessary to include experimental controls (in LEO and on the ground) and the
use of diverse genotypes and species to characterize a wide response space~\citep{Vandenbrink2016}.
Initial research on miniaturizing the necessary research hardware and supporting electrical
components is detailed in Brashears, 2016~\citep{Brashears2016}.
\subsection{Abiogenesis experiments}
A hermetically sealed environment aboard the
spacecraft would enable unprecedented abiogenesis experiments with the ability to conduct further research on prebiotic
chemical reactions. The primary engineering constraint will be the source of energy, as it must be able to emulate an
Earth-like environment for the entire duration of the experiment. Biologically, the challenge rests with forming nucleic
acids and the other key biological molecules required for life as we know it out of the various complex organic molecules
expected to be present---a feat that has only been recreated under tightly controlled
conditions~\citep{Gibson2010CreationGenome}. The high radiation environment of interstellar space provides an interesting
opportunity to study the biochemical origins of life in spacecraft with low radiation shielding versus those equipped with
protective measures to limit the effect of galactic cosmic radiation (GCR) on the prebiotic chemicals, possibly shedding
light on the flexibility in the conditions needed for life to arise.
\subsection{An interstellar seed and genetic ark}
Much like seed banks on Earth, such as the Svalbard seed vault, which act as a ``backup'' in case of a large-scale disaster on Earth or in the solar system, we can conceive of variants of the missions we propose as the ultimate backup of life on Earth. Seeds have been shown to survive for extremely long times, with 32,000-year-old seeds remaining viable. The idea of storing and propagating the ``seeds of life'' is a natural extension of our program. In this case, ``seeds'' can be physical seeds of all kinds, from plant to human, as well as their genetic precursors. Slower-moving payloads can potentially be recovered by future technologies or extraterrestrial intelligence.
\subsection{Digital ark --- backup of Earth}
In addition to the physical propagation of life, we can also send out digital backups of the ``blueprints of life'', a
sort of ``how-to'' guide to replicating the life and knowledge of Earth. The increasing density of data storage allows for
current storage density of more than a petabyte per gram and with new techniques, such as DNA encoding of information,
much larger amounts of storage can be envisioned. As an indication of viability, we note that the roughly 20 million books
of the US Library of Congress require only about 20 TB to store. A small picture and letter from every person on Earth, as in the
``Voices of Humanity'' project, would only require about 100 TB to store, easily fitting on the smallest of our spacecraft
designs. Protecting these legacy data sets from radiation damage is key and is discussed in Lubin 2020 and Cohen et al.
2020~\citep{Lubin2020,Cohen2020}.
\section{Planetary Protection for Extrasolar Biology Missions}
\subsection{Relevance of planetary protection regulations and measures taken}
Sending Earth-based life to interstellar space using a directed-energy powered craft requires addressing risks of possible
contamination of extrasolar planets. As planetary protection guidelines under Article IX of the Outer Space
Treaty~\citep{1967TreatyBodies} have so far been made concrete only for solar system bodies in the Committee on Space Research
(COSPAR) regulations (2011), the first part of the mission -- moving through our own solar system -- is of primary concern.
COSPAR's international planetary protection guidelines regulate missions' profiles and their potential to contaminate
life-supporting areas of solar system bodies with Earth organisms (forward contamination)~\citep{Sherwood2019}, and of
possibly contaminating Earth through sample return missions (backward contamination). The proposed interstellar mission
profile only concerns forward contamination, as it involves no collection and return of samples.
A comparable example of a mission with a biological payload is the LIFE experiment aboard the failed Phobos-Grunt mission.
Although all planetary protection regulations were followed~\citep{Khamidullina2012RealizationMission}, considerable
criticism arose from different sources~\citep{DiGregorio2009InternationalMars}.
Such criticism concerning the potential for forward contamination by a failed
mission~\citep{DiGregorio2010DontMars} is not applicable here, for the reasons already outlined above; a suggested
fail-safe in the form of a self-destruct mechanism is already integrated into this mission through the sheer speed of the
craft.
Independent of the actual scope of the regulation for biological forward contamination, the suggested mission profile will live up to the standards. An object with a mass of less than ten grams, accelerated with potentially hundreds of GW of power, would, even if it were aimed at a planetary protection target (for example Mars), enter its atmosphere or impact the solar system body with enough kinetic energy to cause total sterilization of the biological samples on board. The velocity of the craft would thus serve as an in-built mechanism for sterilization. The mission profile does not include deceleration, so this mechanism is valid for the entirety of the mission. Apart from sterilization, the spacecraft will be aimed at a target outside of the ecliptic and will be accelerated rapidly by directed energy. The probability of an incident relevant to planetary protection regulations (e.g.~inadvertently hitting a planet on the way out of the solar system) is thus significantly lower than the regulatory probability of $10^{-6}$. In summary, the requirements for demonstrating probabilities of 99\% to avoid impact for 20~years, and 95\% to avoid impact for 50~years, will be fulfilled by the velocity of the craft and the target lying outside of the solar system.
The intended final velocity of the spacecraft depends on the spacecraft's mass and the DE system's power and size. One goal of Starlight is to achieve speeds greater than 25\% of the speed of light. Upon impact, the kinetic energy (${\sim}1$ kiloton/g at 0.25$c$) will sterilize the entire spacecraft and adhere to COSPAR planetary protection regulations outside of the solar system as well, even if those regulations are not applicable at this time. Should planetary protection regulations be extended to extrasolar missions, a mission type such as the one proposed in this paper will automatically adhere to any regulations regarding probabilities of extrasolar forward contamination.
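As a back-of-the-envelope check on this figure (our own arithmetic, independent of the cited sources), the relativistic kinetic energy per unit mass is
\begin{equation*}
\frac{E_k}{m} = (\gamma - 1)\,c^2, \qquad \gamma = \left(1 - v^2/c^2\right)^{-1/2}.
\end{equation*}
At $v = 0.25c$ we have $\gamma \approx 1.033$, so $E_k/m \approx 0.033\,c^2 \approx 3\times10^{12}$~J/g; with one kiloton of TNT equivalent to roughly $4.2\times10^{12}$~J, this is indeed on the order of a kiloton per gram, consistent with the figure quoted above.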
\subsection{Extrasolar protection looking forward}
The discussion by Charles Cockell of extrasolar planetary protection rules, concerning sterilization of craft by long
exposure to the ISM using the example of the \textit{Voyager} craft, makes an interesting point for DE
missions~\citep{Cockell2008}. Cockell argues that long-duration missions with lifespans greater than $10^3$ years will
automatically be sterilized through interstellar radiation. If the proposed profile is a flyby mission, the mission
lifespan will automatically stretch beyond $10^3$ years, despite the target being reached in decades, as the
craft will enter the target solar system for a period of time before flying on. If the spacecraft were to enter the
atmosphere of an exoplanet, its kinetic energy would ensure a level of sterilization similar to or greater than that
suggested by Cockell~\citep{Cockell2008}. However, the released kinetic energy poses other ethical problems beyond
contamination by Earth life. The energy released will be in the range of $10^{12}$ J or about a kiloton TNT equivalent,
similar to measured impact energies from large meteorites and small asteroids. For a planet with an atmosphere similar to
Earth's, this would be unnoticeable except to extremely sensitive instruments. Schwartz argues for the ethical values of
uninhabited or uninhabitable places in the solar system and beyond~\citep{Schwartz2019}, drawing on Cockell and Horneck's
arguments for planetary park systems~\citep{Cockell2006,Cockell2004}. The park system would seek to preserve a planet's
pre-existing environment, just as national parks do on Earth. This would pose a challenge for any directed-energy mission
entering a planetary atmosphere at a significant fraction of the speed of light, regardless of its payload.
The planetary park system demonstrates a need to reevaluate planetary protection considerations and regulations in light
of coming extrasolar missions. While planetary protection -- especially in the form of forward contamination regulations
for solar system bodies -- aims to protect scientific research from unintended contamination, this rationale no longer
works for extrasolar missions. Outside the solar system, distances are so vast and the mission's timescale is so large
that the protection of science cannot be the only rationale \citep{Gros2019}. The value of sending
Earth life to other solar systems needs to be discussed to reform planetary protection regulations. The ethical value of
paving the way for future human missions to other solar systems through biological precursor missions like this one cannot
be addressed within current regulations. The lead author subscribes to Schwartz's train of thought
for interstellar planetary protection in seeking to preserve exoplanets beyond the life bias, including preventing
contamination during ``first visits''~\citep{Schwartz2019}. However, once initial exploratory phases have been completed
and best efforts at gathering knowledge of exoplanets have been made, there may be certain circumstances in which
intentionally ``seeding life'' would not be unethical.
The idea of ``seeding life'', whether accidentally or intentionally on an
exoplanet~\citep{Gros2016}, is intriguing. Re-entry (assuming there is an atmosphere, or direct impact on the surface if
there is none) requires the dissipation of an extremely large amount of energy per unit mass: about 60~MeV per nucleon,
or about 1 kiloton TNT equivalent per gram, at 0.33$c$. Coupling this amount of energy directly to the spacecraft with no other
dissipation modes will result in complete vaporization of the spacecraft. Various techniques for slowing the spacecraft
have been discussed \citep{Lubin2020,Gros2017}, but none of them appear to be practical at the
extreme speeds we are proposing. There are some possibilities for speeds less than about 0.05$c$ including using
photon braking from bright stars. For now, reentry remains an open question.
\section{Conclusion}
We are rapidly approaching the technological capability for interstellar flight on meaningful timescales. As such, we must
consider the benefits of sending life outside of the solar system. Using remote sensing turned inward, we can study
biological systems under the microgravity and space radiation conditions unique to interstellar space, while also performing
remote sensing of the interstellar target. Interstellar probes may yet bring us closer to answering questions long pondered in
the stories of science fiction, such as ``Can humans travel to other star systems?'', but may also allow us to investigate
more pragmatic matters, such as the plausibility of the panspermia hypothesis of the origins of life on Earth. Exploration
via interstellar probes is thus warranted; the United States Congress has even encouraged NASA to develop a conceptual
roadmap for doing so~\citep{2016CommerceCong.}.
Here we have outlined several challenges associated with studying terrestrial life in interstellar space, namely transport, data acquisition, life support, experimental design, and ethical considerations. While this list is by no means exhaustive, we see these as necessary considerations for producing meaningful research results and urge the community to foster more development in these areas. We see the standoff directed energy propulsion technology from Project Starlight as a possible option for interstellar transport of biological systems. Species selection criteria have also been identified, as well as examples of species suitable for experiments. Sending small (micro-scale) cryptobiotic lifeforms as model organisms can pave the way for researching more complex, less robust organisms in interstellar space. The information presented in this paper aims to facilitate the conceptual design and elucidate the experimental space currently available to interstellar space biology missions. Future work in the development of more sophisticated interstellar biosentinel experiments and hardware design, eventually culminating in a deeper understanding of both our own biology and worlds beyond, will be one of humanity's greatest endeavors.
\section*{Acknowledgments}
Funding for this program comes from NASA grants NIAC Phase I DEEP-IN [NNX15AL91G] and NASA NIAC Phase II DEIS [NNX16AL32G] and the NASA California Space Grant Consortium [NNX10AT93H] as well as a generous gift from the Emmett and Gladys W. fund (P.L.). Additional funding comes from the National Institutes of Health [1R01HD082347 and 1R01HD081266] (J.H.R.).
\bibliographystyle{elsarticle-num}
\section{Introduction}
Graph learning methods generate predictions by leveraging complex inductive biases captured in the topology of the graph \cite{battaglia2018relational}. A large volume of work in this area, including graph neural networks (GNNs), exploits \textit{homophily} as a strong inductive bias, where connected nodes tend to be similar to each other in terms of labels \cite{mcpherson2001birds, altenburger2018monophily}. Such assumptions of homophily, however, do not always hold true. For example, malicious node detection, a key application of graph machine learning, is known to be non-homophilous in many settings \cite{pandit2007netprobe, chau2006detecting, gatterbauer2014semi, breuer2020friend}.
Further, while new GNNs that work better in these non-homophilous settings have been developed \cite{zhu2020beyond, nonlocal, zhu2020graph, chien2021adaptive, chen2020simple, yan2021two, kim2021how, jin2021node, bo2021beyond, nt2020stacked}, their evaluation is limited to a few graph datasets used by \citet{pei2019geom} (collected by \cite{rozemberczki2019multi,tang2009social, mitchell1997web}) that have certain undesirable properties such as small size, narrow range of application areas, and high variance between different train/test splits \cite{zhu2020beyond}.
Consequently, method scalability has not been thoroughly studied in non-homophilous graph learning. In fact, many non-homophilous techniques frequently require more parameters and computational resources~\cite{zhu2020beyond, abu2019mixhop, chien2021adaptive}, which is neither evident nor detrimental when they are evaluated on very small datasets. Even though scalable graph learning techniques do exist, these methods generally cannot be directly applied to the non-homophilous setting, as they oftentimes assume homophily in their construction \cite{wu2019simplifying, huang2021combining, deng2020graphzoom, bojchevski2020scaling}.
Non-homophily in graphs also degrades proven graph learning techniques that have been instrumental to strong performance in scalable graph learning.
For instance, label propagation, personalized PageRank, and low-pass graph filtering have been used for scalable graph representation learning models, but these methods all assume homophily \cite{wu2019simplifying, huang2021combining, deng2020graphzoom, bojchevski2020scaling}.
Moreover, we give empirical evidence that existing minibatching techniques in graph learning \cite{chiang2019cluster, zeng2019graphsaint} significantly degrade performance in non-homophilous settings.
In response, we develop a novel model, LINKX, that addresses these concerns; LINKX outperforms existing graph learning methods on large-scale non-homophilous datasets and admits a simple minibatching procedure that maintains strong performance.
To summarize, we demonstrate three key areas of deficiency as mentioned above, namely: (1) that there is a lack of large, high-quality datasets covering different non-homophilous applications, (2) that current graph minibatching techniques and scalable methods do not work well in non-homophilous settings, and (3) that prior non-homophilous methods are not scalable. To these ends, this paper makes the following contributions:
\textbf{Dataset Collection and Benchmarking.} We collect a diverse series of large, non-homophilous graph datasets and define new node features and tasks for classification. These datasets are substantially larger than previous non-homophilous datasets, span wider application areas, and capture different types of complex label-topology relationships.
With these proposed datasets, we conduct extensive experiments with 14 graph learning methods and 3 graph minibatching techniques that are broadly representative of the graph machine learning model space.
\textbf{Analyzing Scalable Methods and Minibatching.}
We analyze current graph minibatching techniques like GraphSAINT \cite{zeng2019graphsaint} in non-homophilous settings, showing that they substantially degrade performance in experiments.
Also, we show empirically that scalable methods for graph learning like SGC and C\&S \cite{wu2019simplifying, huang2021combining} do not perform well in non-homophilous settings --- even though they achieve state-of-the-art results on many homophilous graph benchmarks. Finally, we demonstrate that existing non-homophilous methods often suffer from issues with scalability and performance in large non-homophilous graphs, in large part due to a lack of study of large-scale non-homophilous graph learning.
\textbf{LINKX: a strong, simple method.} We propose a simple method LINKX that achieves excellent results for non-homophilous graphs while overcoming the above-mentioned minibatching issues. LINKX works by separately embedding the adjacency $\mbf{A}$ and node features $\mbf{X}$, then combining them with multilayer perceptrons and simple transformations, as illustrated in Figure~\ref{fig:linkx}. It generalizes node feature MLP and LINK regression \cite{zheleva2009to}, two baselines that often work well on non-homophilous graphs. This method is simple to train and evaluate in a minibatched fashion, and does not face the performance degradation that other methods do in the minibatch setting. We develop the model and give more details in Section~\ref{sec:linkx}.
\begin{figure}[ht]
\centering
\includegraphics[width=.98\textwidth]{visualizations/linkx_new.pdf}
\caption{Our model LINKX separately embeds node features and adjacency information with MLPs, combines the embeddings together by concatenation, then uses a final MLP to generate predictions.}
\label{fig:linkx}
\vspace{-10pt}
\end{figure}
\section{Prior Work}
\myparagraph{Graph Representation Learning.} Graph neural networks \cite{hamilton2017inductive, kipf2017semi, velivckovic2018graph} have demonstrated their utility on a variety of graph machine learning tasks. Most GNNs are constructed by stacking layers that propagate transformed node features, which are then aggregated via different mechanisms. The neighborhood aggregation used in many existing GNNs implicitly leverages homophily, so they often fail to generalize on non-homophilous graphs \cite{zhu2020beyond, balcilar2021analyzing}. Indeed, a wide range of GNNs operate as low-pass graph filters \cite{nt2019revisiting, wu2019simplifying, balcilar2021analyzing} that smooth features over the graph topology, which produces similar representations and thus similar predictions for neighboring nodes.
\myparagraph{Scalable methods.} A variety of scalable graph learning methods have been developed for efficient computation in larger datasets \cite{zeng2019graphsaint, chiang2019cluster, ying2018graph, hamilton2017inductive, wu2019simplifying, huang2021combining, deng2020graphzoom, bojchevski2020scaling}. Many of these methods explicitly make use of an assumption of homophily in the data \cite{wu2019simplifying, huang2021combining, deng2020graphzoom, bojchevski2020scaling}. By leveraging this assumption, several simple, inexpensive models are able to achieve state-of-the-art performance on homophilic datasets \cite{wu2019simplifying, huang2021combining}. However, these methods are unable to achieve comparable performance in non-homophilous settings, as we show empirically in Section~\ref{sec:experiments}.
\myparagraph{Graph sampling.} As node representations depend on other nodes in the graph, there are no simple minibatching techniques in graph learning as there are for i.i.d. data. To scale to large graphs, one line of work samples nodes that are used in each layer of a graph neural network \cite{hamilton2017inductive, ying2018graph, chen2018fastgcn}. Another family of methods samples subgraphs of an input graph, then passes each subgraph through a GNN to make a prediction for each node of the subgraph \cite{chiang2019cluster, zeng2019accurate, zeng2019graphsaint}. While these methods are useful for scalable graph learning, we show that they substantially degrade performance in our non-homophilous experiments (see Section~\ref{sec:experiments}).
\myparagraph{Non-Homophilous methods.} Various GNNs have been proposed to achieve higher performance in low-homophily settings \cite{zhu2020beyond, nonlocal, zhu2020graph, chien2021adaptive, chen2020simple, yan2021two, kim2021how, jin2021node}. Geom-GCN \cite{pei2019geom} introduces a geometric aggregation scheme, MixHop \cite{abu2019mixhop} proposes a graph convolutional layer that mixes powers of the adjacency matrix, GPR-GNN \cite{chien2021adaptive} features learnable weights that can be positive and negative in feature propagation, GCNII \cite{chen2020simple} allows deep graph convolutional networks with alleviated oversmoothing, which empirically performs better in non-homophilous settings, and H\textsubscript{2}GCN \cite{zhu2020beyond} shows that separation of ego and neighbor embeddings, aggregation in higher-order neighborhoods, and the combination of intermediate representations improve GNN performance in low-homophily settings.
There are several recurring design decisions across these methods that appear to strengthen performance in non-homophilous settings: using higher-order neighborhoods, decoupling neighbor information from ego information, and combining graph information at different scales \cite{zhu2020beyond}. Many of these design choices require additional overhead (see Section~\ref{sec:complexity}), thus reducing their scalability.
\myparagraph{Datasets.} The widely used citation networks Cora, Citeseer, and Pubmed \cite{sen2008collective, yang2016revisiting} are highly homophilous (see Appendix~\ref{sec:appendix_measures}) \cite{zhu2020beyond}. Recently, the Open Graph Benchmark \cite{hu2020open} has provided a series of datasets and leaderboards that improve the quality of evaluation in graph representation learning; however, most of the node classification datasets tend to be homophilous, as noted in past work \cite{zhu2020beyond} and expanded upon in Appendix~\ref{sec:homophilous_stats}. A comparable set of high-quality benchmarks to evaluate non-homophilous methods does not currently exist.
\section{Datasets for Non-Homophilous Graph Learning}
\subsection{Currently Used Datasets}
The most widely used datasets to evaluate non-homophilous graph representation learning methods were used by \citet{pei2019geom} (and collected by \cite{rozemberczki2019multi,tang2009social, mitchell1997web}); see our Table~\ref{tab:geom_gcn} for statistics. However, these datasets have fundamental issues. First, they are very small --- the Cornell, Texas, and Wisconsin datasets have between 180-250 nodes, and the largest dataset Actor has 7,600 nodes. In analogy to certain pitfalls of graph neural network evaluation on small (homophilic) datasets discussed in \cite{shchur2018pitfalls}, evaluation on the datasets of \citet{pei2019geom} is plagued by high variance across different train/test splits (see results in \cite{zhu2020beyond}). The small size of these datasets may tend to create models that are more prone to overfitting \cite{dwivedi2020benchmarking}, which prevents the scaling up of GNNs designed for non-homophilous settings. %
\citet{peel2017graph} also studies node classification on network datasets with various types of relationships between edges and labels. However, they only study methods that act on graph topology, and thus their datasets do not necessarily have node features. We take inspiration from their work, by testing on Pokec and Facebook networks with node features that we define, and by introducing other year-prediction tasks on citation networks that have node features.
\subsection{An Improved Homophily Measure}\label{sec:new_measure}
Various metrics have been proposed to measure the homophily of a graph. However, these metrics are sensitive to the number of classes and the number of nodes in each class. Let $G = (V, E)$ be a graph with $n$ nodes, none of which are isolated. Further let each node $u \in V$ have a class label $k_u \in \{0, 1, \ldots, C-1\}$ for some number of classes $C$, and denote by $C_k$ the set of nodes in class $k$. The edge homophily \cite{zhu2020beyond} is the proportion of edges that connect two nodes of the same class:
\begin{equation}\label{eq:edge_hom}
h = \frac{| \{(u,v) \in E : k_u = k_v \} |}{|E|}.
\end{equation}
Another related measure is what we call the node homophily \cite{pei2019geom}, defined as $\frac{1}{|V|} \sum_{u \in V} \frac{d_u^{(k_u)}}{d_u}$,
in which $d_u$ is the number of neighbors of node $u$, and $d_u^{(k_u)}$ is the number of neighbors of $u$ that have the same class label. We focus on the edge homophily \eqref{eq:edge_hom} in this work, but find that node homophily tends to have similar qualitative behavior in experiments.
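For concreteness, the edge homophily \eqref{eq:edge_hom} can be computed in a few lines of NumPy. The following minimal sketch (function and variable names are ours, purely for illustration) assumes an undirected graph stored as a $2 \times |E|$ integer array listing each edge once:
\begin{verbatim}
import numpy as np

def edge_homophily(edge_index, labels):
    # Fraction of edges whose two endpoints share a class label.
    src, dst = edge_index
    return float(np.mean(labels[src] == labels[dst]))

# Toy example: a 4-cycle with labels [0, 0, 1, 1].
edge_index = np.array([[0, 1, 2, 3],
                       [1, 2, 3, 0]])
labels = np.array([0, 0, 1, 1])
print(edge_homophily(edge_index, labels))  # 0.5
\end{verbatim}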
The sensitivity of edge homophily to the number of classes and size of each class limits its utility.
We consider a null model for graphs in which the graph topology is independent of the labels: suppose that the nodes and their labels are fixed, and that edges are included uniformly at random, independently of the node labels.
Under this null model, a node $u \in V$ would be expected to have $d_u^{(k_u)}/d_u \approx |C_{k_u}|/n$ as the proportion of nodes of the same class that they connect to \cite{altenburger2018monophily}. For a dataset with $C$ balanced classes, we would thus expect the edge homophily to be around $\frac{1}{C}$, so the interpretation of the measure depends on the number of classes. Also, if classes are imbalanced, then the edge homophily may be misleadingly large. For instance, if 99\% of nodes were of one class, then most edges would likely be within that same class, so the edge homophily would be high, even when the graph is generated from the null model where labels are independent of graph topology. Thus, the edge homophily does not capture deviation of the label distribution from the null model.
We introduce a metric that better captures the presence or absence of homophily. Unlike the edge homophily, our metric measures excess homophily that is not expected from the above null model where edges are randomly wired. Our metric does not distinguish between different non-homophilous settings (such as heterophily or independent edges); we believe that there are too many degrees of freedom in non-homophilous settings for a single scalar quantity to be able to distinguish them all. Our measure is given as:
\begin{equation}\label{eq:our_measure}
\hat h = \frac{1}{C-1} \sum_{k=0}^{C-1} \left[h_k - \frac{|C_k|}{n} \right]_+,
\end{equation}
where $[a]_+ = \max(a, 0)$, and $h_k$ is the class-wise homophily metric
\begin{equation}
h_k = \frac{\sum_{u \in C_k} d_u^{(k_u)}}{\sum_{u \in C_k} d_u}.
\end{equation}
Note that $\hat h \in [0,1]$, with a fully homophilous graph (in which every node is only connected to nodes of the same class) having $\hat h = 1$. Since each class-wise homophily metric $h_k$ only contributes positive deviations from the null expected proportion $|C_k|/n$, the class-imbalance problem is substantially mitigated. Also, graphs in which edges are independent of node labels are expected to have $\hat h \approx 0$, for any number of classes. Our metric $\hat h$ captures the presence of homophily, but does not distinguish between the many types of possibly non-homophilous relationships, which is reasonable given their diversity. For example, non-homophily can imply independence of edges and classes, extreme heterophily, connections only among subsets of classes, or certain chemically / biologically determined relationships. Indeed, these relationships are very different, and are better captured by more than one scalar quantity, such as the compatibility matrices presented in the appendix. Further discussion is given in Appendix~\ref{sec:appendix_measures}.
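A corresponding sketch for $\hat h$ \eqref{eq:our_measure} follows; it assumes the edge array lists each undirected edge in both directions, so that counting over source endpoints recovers the degree sums appearing in $h_k$ (again, all names are illustrative):
\begin{verbatim}
import numpy as np

def improved_homophily(edge_index, labels, num_classes):
    # edge_index: 2 x 2|E| array with each undirected edge in
    # both directions, so labels[src] ranges over edge endpoints.
    src, dst = edge_index
    same = (labels[src] == labels[dst]).astype(float)
    # Per class k: sum_{u in C_k} d_u^{(k_u)} and sum_{u in C_k} d_u.
    same_deg = np.bincount(labels[src], weights=same,
                           minlength=num_classes)
    deg = np.bincount(labels[src], minlength=num_classes).astype(float)
    h_k = same_deg / np.maximum(deg, 1.0)
    class_frac = np.bincount(labels, minlength=num_classes) / labels.size
    return float(np.maximum(h_k - class_frac, 0.0).sum()
                 / (num_classes - 1))
\end{verbatim}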
\subsection{Proposed Datasets}
Here, we detail the non-homophilous datasets that we propose for graph machine learning evaluation.
Our datasets and tasks span diverse application areas. \textbf{Penn94}~\cite{traud2012social}, \textbf{Pokec}~\cite{snapnets}, \textbf{genius}~\cite{lim2021expertise}, and \textbf{twitch-gamers}~\cite{rozemberczki2021twitch} are online social networks, where the task is to predict reported gender, certain account labels, or use of explicit content on user accounts. For the citation networks \textbf{arXiv-year}~\cite{hu2020open} and \textbf{snap-patents} \cite{leskovec2005graphs, snapnets} the goal is to predict year of paper publication or the year that a patent is granted. The dataset \textbf{wiki} consists of Wikipedia articles, where the goal is to predict total page views of each article. Detailed descriptions about the graph structure, node features, node labels, and licenses of each dataset are given in Appendix~\ref{sec:dataset_properties}.
Most of these datasets have been used for evaluation of graph machine learning models in past work; we make adjustments such as modifying node labels and adding node features that allow for evaluation of GNNs in non-homophilous settings. We define node features for Pokec, genius, and snap-patents, and we also define node labels for arXiv-year, snap-patents, and genius. Additionally, we crawl and clean the large-scale wiki dataset --- a new Wikipedia dataset where the task is to predict page views, which is non-homophilous with respect to the graph of articles connected by links between articles (see Appendix~\ref{sec:wiki}). This wiki dataset has 1,925,342 nodes and 303,434,860 edges, so training and inference require scalable algorithms.
Basic dataset statistics are given in Table~\ref{tab:data_stats}. Note the substantial difference between the size of our datasets and those of \citet{pei2019geom} in Table~\ref{tab:geom_gcn}; our datasets have up to 384x more nodes and 1398x more edges.
The homophily measures along with the lower empirical performance of homophily-assuming models (Section~\ref{sec:experiments}) and examination of compatibility matrices (Appendix~\ref{sec:appendix_measures}) show that our datasets are indeed non-homophilous.
As there is little study in large-scale non-homophilous graph learning, our proposed large datasets strongly motivate the need for developing a new, scalable approach that can accurately learn on non-homophilous graphs.
\begin{table*}[t]
\centering
\caption{Statistics for previously used datasets from \citet{pei2019geom} (collected by \cite{rozemberczki2019multi, tang2009social, mitchell1997web}). \#C is the number of node classes. The highest number of nodes or edges overall are bolded.}
{\footnotesize
\begin{tabular}{crrrrrrr}
\toprule
Dataset & \# Nodes & \# Edges & \# Feat. & \# C & Context & Edge hom. & $\hat h$ (ours) \\
\midrule
Chameleon & 2,277 & 36,101 & 2,325 & 5 & Wiki pages & .23 & .062\\
Cornell & 183 & 295 & 1,703 & 5 & Web pages & .30 & .047\\
Actor & \textbf{7,600} & 29,926 & 931 & 5 & Actors in movies & .22 & .011\\
Squirrel & 5,201 & \textbf{216,933} & 2,089 & 5 & Wiki pages & .22 & .025\\
Texas & 183 & 309 & 1,703 & 5 & Web pages & .11 & .001\\
Wisconsin & 251 & 499 & 1,703 & 5 & Web pages & .21 & .094\\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\label{tab:geom_gcn}
\end{table*}
\begin{table}[ht]
\centering
\caption{Statistics of our proposed non-homophilous graph datasets. \# C is the number of distinct node classes. Note that our datasets come from more diverse applications areas and are much larger than those shown in Table~\ref{tab:geom_gcn}, with up to 384x more nodes and 1398x more edges.}
\label{tab:data_stats}
{\footnotesize
\begin{tabular}{crrrrrrrr}
\toprule
Dataset & \# Nodes & \# Edges & \# Feat. & \# C & Class types & Edge hom. & $\hat h$ (ours) \\
\midrule
Penn94 & 41,554 & 1,362,229 & 5 & 2 & gender & .470 & .046 \\
pokec & 1,632,803 & 30,622,564 & 65 & 2 & gender & .445 & .000 \\
arXiv-year & 169,343 & 1,166,243 & 128 & 5 & pub year & .222 & .272 \\
snap-patents & \textbf{2,923,922} & 13,975,788 & 269 & 5 & time granted & .073 & .100 \\
genius & 421,961 & 984,979 & 12 & 2 & marked act. & .618 & .080 \\
twitch-gamers & 168,114 & 6,797,557 & 7 & 2 & mature content & .545 & .090 \\
wiki & 1,925,342 & \textbf{303,434,860} & 600 & 5 & views & .389 & .107 \\
\bottomrule
\end{tabular}
}
\vspace{0pt}
\end{table}
\section{LINKX: A New Scalable Model}\label{sec:linkx}
In this section, we introduce our novel model, LINKX, for scalable node classification in non-homophilous settings. LINKX is built out of multilayer perceptrons (MLPs) and linear transformations, thus making it simple and scalable. It also admits simple row-wise minibatching procedures that allow it to perform well on large non-homophilous graphs.
As a result, LINKX is able to circumvent aforementioned issues of graph minibatching and non-homophilous GNNs in large-scale non-homophilous settings.
\subsection{Motivation from two simple baselines}
Here, we detail two simple baselines for node classification that we build on to develop LINKX.
\myparagraph{MLP on node features.} A na\"ive method for node classification is to ignore the graph topology and simply train an MLP on node features.
Because graph topology has a more complicated relationship with label distributions in non-homophilous graphs, many GNNs are not able to effectively leverage the graph topology in these settings. Thus, MLPs can actually perform comparatively well on non-homophilous graphs --- achieving higher or approximately equal performance to various GNNs~\cite{zhu2020beyond}.
\myparagraph{LINK regression on graph topology.} On the other extreme, there is LINK~\cite{zheleva2009to} --- a simple baseline that only utilizes graph topology. In particular, we consider LINK regression, which trains a logistic regression model in which each node's features are taken from a column of the adjacency matrix. Letting $\mbf{A} \in \{0,1\}^{n \times n}$ be the binary adjacency matrix of the graph, and $\mbf{W} \in \mathbb R^{c \times n}$ be a learned weight matrix, LINK computes class probabilities as
\begin{equation}
\mbf{Y} = \mrm{softmax}(\mbf{W}\mbf{A}).
\end{equation}
Let $u \in \{1, \ldots, n\}$ be a specific node, and let $k \in \{1, \ldots, c \}$ be a specific class. Then, expanding the matrix multiplication, the log-odds of node $u$ belonging to class $k$ is given by
\begin{equation}
(\mbf{W} \mbf{A})_{ku} = \sum_{v \in \mc N(u)} \mbf{W}_{kv},
\end{equation}
where $\mc N(u)$ contains the 1-hop neighbors of $u$. In other words, the logit is given by the sum of weights $\mbf{W}_{kv}$ across the 1-hop neighbors of $u$. If a specific node $v$ has many neighbors of class $k$, then the learned $\mbf{W}_{kv}$ is probably large, as we would then expect any neighbor of $v$ to be of class $k$ with high probability.
In this sense, LINK is like a 2-hop method: for a given node $u$, the probability of being in a given class is related to the class memberships of $u$'s 2-hop neighbors in $\mc N(v)$ for each neighbor $v \in \mc N(u)$. Related interpretations of LINK as a method acting on 2-hop paths between nodes are given by \citet{altenburger2018monophily}.
Though it is simple and has been overlooked in the recent non-homophilous GNN literature, LINK has been found to perform well in certain node classification tasks like gender prediction in social networks~\cite{altenburger2018monophily, altenburger2018node}. A major reason why LINK does well in many settings is exactly because it acts as a 2-hop method. For example, while 1-hop neighbors are often not so informative for gender prediction in social networks due to lack of homophily, 2-hop neighbors are very informative due to so-called ``monophily,'' whereby many nodes have extreme preferences for connecting to a certain class~\cite{altenburger2018monophily}. Beyond just gender prediction, we show in Section~\ref{sec:experiments} that LINK empirically outperforms many models across the various application areas of the non-homophilous datasets we propose.
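To make this baseline concrete, the following minimal sketch (our own toy example, not from the original LINK paper) fits LINK with scikit-learn, using each node's row of a symmetric adjacency as its feature vector; note that the perfectly heterophilous toy graph is still classified exactly:
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression

# Toy symmetric adjacency: the two classes are wired across
# rather than within, i.e., edge homophily is zero.
A = csr_matrix(np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [1, 1, 0, 0],
                         [1, 1, 0, 0]], dtype=float))
y = np.array([0, 0, 1, 1])

link = LogisticRegression().fit(A, y)  # node u's features = row u of A
print(link.predict(A))                 # recovers y despite zero homophily
\end{verbatim}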
\subsection{LINKX}\label{sec:linkx_design}
We combine these two simple baselines through simple linear transformations and component-wise nonlinearities. Let $\mbf{X} \in \mathbb R^{D \times n}$ denote the matrix of node features with input dimension $D$, and let $[\mathbf{h}_1 ; \mathbf{h}_2]$ denote concatenation of vectors $\mathbf{h}_1$ and $\mathbf{h}_2$. Then our model outputs predictions $\mbf{Y}$ through the following mapping:
\begin{align}
\mbf{h}_\A & = \mrm{MLP}_\mbf{A}(\mbf{A}) \in \mathbb R^{d \times n}\\
\mbf{h}_\X & = \mrm{MLP}_\mbf{X}(\mbf{X}) \in \mathbb R^{d \times n}\\
\mathbf{Y} & = \mrm{MLP}_f\left(\sigma\left(\mbf{W}[\mbf{h}_\A; \mbf{h}_\X] + \mbf{h}_\A + \mbf{h}_\X \right)\right),
\end{align}
in which $d$ is the hidden dimension, $\mbf{W} \in \mathbb R^{d \times 2d}$ is a weight matrix, and $\sigma$ is a component-wise nonlinearity (which we take to be $\mrm{ReLU}$). We call our model LINKX, as it extends LINK with node feature information from the matrix $\mbf{X}$. A diagram of LINKX is given in Figure~\ref{fig:linkx}.
First, LINKX computes hidden representations $\mbf{h}_\A$ of the adjacency (extending LINK) and $\mbf{h}_\X$ of the feature matrix (as in node-feature MLPs). Then it combines these hidden representations through a linear transform $\mbf{W}$ of their concatenation, with skip connections that add back in $\mbf{h}_\A$ and $\mbf{h}_\X$ to better preserve pure adjacency or node feature information. Finally, it puts this combined representation through a non-linearity and another MLP to make a prediction.
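The mapping above translates directly into code. The following PyTorch sketch is illustrative only: MLP depths and hidden sizes are placeholders, and the adjacency rows are handled densely here, whereas in practice the first linear map of $\mrm{MLP}_\mbf{A}$ is computed as a sparse-dense product.
\begin{verbatim}
import torch
import torch.nn as nn

class LINKX(nn.Module):
    # Illustrative sketch; depths and widths are not tuned values.
    def __init__(self, num_nodes, in_dim, hidden, num_classes):
        super().__init__()
        self.mlp_A = nn.Sequential(nn.Linear(num_nodes, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.mlp_X = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.W = nn.Linear(2 * hidden, hidden)
        self.mlp_f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, num_classes))

    def forward(self, A_rows, X):
        hA = self.mlp_A(A_rows)               # embed adjacency information
        hX = self.mlp_X(X)                    # embed node features
        h = torch.cat([hA, hX], dim=-1)       # concatenate embeddings
        h = torch.relu(self.W(h) + hA + hX)   # mix, with skip connections
        return self.mlp_f(h)                  # final MLP for predictions
\end{verbatim}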
\myparagraph{Separating then mixing adjacency and feature information.} LINKX separately embeds the adjacency $\mbf{A}$ to $\mbf{h}_\A$ and the features $\mbf{X}$ into $\mbf{h}_\X$ before mixing them for a few reasons. First, we note that this design is reminiscent of fusion architectures in multimodal networks, where data from different modalities are processed and combined in a neural network \cite{gadzicki2020early, zeng2019deep}. In our setting, we can view adjacency information and node feature information as separate modalities. Since node feature MLPs and LINK do well independently on different datasets, this allows us to preserve their individual performance if needed. Ignoring $\mbf{h}_\X$ information reduces the model to something like LINK, and ignoring $\mbf{h}_\A$ information reduces it to a node feature MLP. Still, to preserve the ability to learn a mapping similar to LINK or to a node feature MLP, we find that having the additive skip connections helps to achieve performance at least as good as either baseline. Our initial experiments showed that simply concatenating adjacency and node features as input to a network does worse overall (see Appendix~\ref{sec:linkx_ablation}).
There are also computational benefits to our design choices.
Embedding $\mbf{A}$ is beneficial for depth as
adding more layers to the MLPs only gives an $\mc O(d^2)$ cost --- depending only on the hidden dimension $d$ --- and thus does not scale in the number of edges $|E|$ as when adding layers to message-passing GNNs. This is because the graph information in $\mbf{A}$ is already compressed to hidden feature vectors after the first linear mapping of $\mrm{MLP}_\mbf{A}$, and we do not need to propagate along the graph in later steps.
Moreover, this enables a sparse-dense matrix product to compute the first linear mapping of $\mrm{MLP}_\mbf{A}$ on $\mbf{A}$, which greatly increases efficiency as $\mbf{A}$ is typically very sparse for real-world graphs. Separate embeddings are key here, as this would not be possible if we for instance concatenated $\mbf{A}$ and $\mbf{X}$ when $\mbf{X}$ is large and dense.
\myparagraph{Simple minibatching.} Message-passing GNNs must take graph topology into account when minibatching with techniques such as neighbor sampling, subgraph sampling, or graph partitioning. However, LINKX does not require this, as it utilizes graph information solely through defining adjacency matrix columns as features. Thus, we can train LINKX with standard stochastic gradient descent variants by taking i.i.d. samples of nodes along with the corresponding columns of the adjacency and feature matrix as features.
This is much simpler than the graph minibatching procedures for message-passing GNNs, which require specific hyperparameter choices, have to avoid exponential blowup of number of neighbors per layer, and are generally more complex to implement \cite{zeng2019graphsaint}.
In Section~\ref{sec:mini_results}, we use the simple LINKX minibatching procedure for large-scale experiments that show that LINKX with this minibatching style outperforms GNNs with graph minibatching methods. This is especially important on the scale of the wiki dataset, where none of our tested methods --- other than MLP --- is capable of running on a Titan RTX GPU with 24 GB GPU RAM (see Section~\ref{sec:experiments}).
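Concretely, this minibatching amounts to nothing more than shuffling node indices and slicing rows. The sketch below is a simplification with our own illustrative names, which densifies CSR row slices per batch:
\begin{verbatim}
import numpy as np
import torch

def linkx_minibatches(A_csr, X, y, batch_size, seed=0):
    # I.i.d. node sampling: each batch takes matching rows of A, X, y.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(y))
    for start in range(0, len(y), batch_size):
        idx = perm[start:start + batch_size]
        A_rows = torch.as_tensor(A_csr[idx].toarray(),
                                 dtype=torch.float32)
        idx_t = torch.as_tensor(idx)
        yield A_rows, X[idx_t], y[idx_t]
\end{verbatim}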
\subsection{Complexity Analysis}\label{sec:complexity}
Using the above notation, a forward pass of LINKX has a time complexity of $\mc O\left(d|E| + nd^2L\right)$, in which $d$ is the hidden dimension (which we assume to be on the same order as the input feature dimension $D$), $L$ is the number of layers, $n$ is the number of nodes, and $|E|$ is the number of edges. We require a $\mc O(d|E|)$ cost for the first linear mapping of $\mbf{A}$ and a $\mc O(d^2)$ cost per layer for MLP operations on hidden features, for $L$ total layers and each of $n$ nodes.
As mentioned above, message passing GNNs have to propagate using the adjacency in each layer, so they have an $L |E|$ term in the complexity. For instance, an $L$-layer GCN~\cite{kipf2017semi} with $d$ hidden dimensions has $\mc O(d L |E| + nd^2 L)$ complexity, as it costs $\mc O(d|E|)$ to propagate features in each layer, and $\mc O(nd^2)$ to multiply by the weight matrix in each layer.
Non-homophilous methods often make modifications to standard architectures that increase computational cost, such as using higher-order neighborhoods or using additional hidden embeddings~\cite{zhu2020beyond}. For instance, the complexity of MixHop~\cite{abu2019mixhop} is $\mc O(K(dL|E| + nd^2 L))$, which has an extra factor $K$ that is the number of adjacency powers to propagate with. The complexity of GCNII~\cite{chen2020simple} is asymptotically the same as that of GCN, but in practice it requires more computations per layer due to residual connections and linear combinations, and it also often achieves best performance with a large number of layers $L$. H\textsubscript{2}GCN~\cite{zhu2020beyond} is significantly more expensive due to its usage of strict two-hop neighborhoods, which requires it to form the squared adjacency $\mbf{A}^2$. This makes the memory requirements intractable even for medium sized graphs (see Section~\ref{sec:experiments}).
\section{Experiments}\label{sec:experiments}
We conduct two sets of experiments for node classification on our proposed non-homophilous datasets. One set of experiments does full batch gradient descent training for all applicable methods. This of course limits the size of each model, as the large datasets require substantial GPU memory to train on. Our other set of experiments uses minibatching methods.
As all graph-based methods run out of memory on the wiki dataset, even on 24 GB GPUs, we only include wiki results in the minibatching section.
In all settings, our LINKX model matches or outperforms other methods.
\begin{table}[ht]
\vspace{-5pt}
\centering
\caption{Experimental results. Test accuracy is displayed for most datasets, while genius displays test ROC AUC. Standard deviations are over 5 train/val/test splits. The three best results per dataset are highlighted. (M) denotes some (or all) hyperparameter settings run out of memory. }
\label{tab:results}
{\tiny
\begin{tabular}{lllllllll}
\toprule
& Penn94 & pokec & arXiv-year & snap-patents & genius & twitch-gamers \\
\midrule
MLP & $73.61 \std{0.40}$ & $62.37\std{0.02}$ & $36.70\std{0.21}$ & $31.34\std{0.05}$ & $86.68 \std{0.09}$ & $60.92\std{0.07}$ \\
\hdashline
L Prop 1-hop & $63.21 \std{0.39}$ & $53.09\std{0.05}$ & $43.42\std{0.17}$ & $30.28\std{0.09}$ & $66.02\std{0.16}$ & $62.77\std{0.24}$ \\
L Prop 2-hop & $74.13 \std{0.46} $ & $76.76\std{0.03}$ & $46.07\std{0.15}$ & $38.61\std{0.07}$ & $67.04\std{0.20}$ & $63.88\std{0.24}$ \\
LINK & $80.79 \std{0.49}$ & $80.54\std{0.03}$ & \cellcolor{blue!25} $53.97\std{0.18}$ & \cellcolor{blue!25} $60.39\std{0.07}$ & $73.56\std{0.14}$ & $64.85\std{0.21}$ \\
\hdashline
SGC 1-hop & $66.79 \std{0.27} $ & $53.61\std{0.17}$ & $32.83\std{0.13}$ & $30.31\std{0.06}$ & $82.36 \std{0.37}$ & $58.97\std{0.19}$ \\
SGC 2-hop & $76.09 \std{0.45}$ & $62.81\std{1.42}$ & $32.27\std{0.06}$ & $29.09\std{0.09}$ & $82.10 \std{0.14}$ & $59.94\std{0.21}$ \\
C\&S 1-hop & $74.28 \std{1.19}$ & $62.35\std{0.06}$ & $44.51\std{0.16}$ & $35.55\std{0.05}$ & $82.93 \std{0.15}$ & $64.86 \std{0.27}$ \\
C\&S 2-hop & $78.40 \std{3.12} $ & \cellcolor{blue!25} $81.69\std{0.09}$ & $49.78\std{0.26}$ & $49.08\std{0.04}$ & $84.94 \std{0.49}$ & \cellcolor{blue!25} $65.02 \std{0.16}$ \\
\hdashline
GCN & $82.47 \std{0.27}$ & $75.45\std{0.17}$ & $46.02\std{0.26}$ & $45.65\std{0.04}$ & $87.42 \std{0.37}$ & $62.18\std{0.26}$ \\
GAT & $81.53 \std{0.55}$ & $71.77\std{6.18}$ (M) & $46.05 \std{0.51}$ & $45.37\std{0.44}$ (M) & $55.80 \std{0.87}$ & $59.89\std{4.12}$ \\
GCNJK & $ 81.63 \std{0.54} $ & $77.00\std{0.14}$ & $46.28\std{0.29}$ & $46.88\std{0.13}$ & $89.30 \std{0.19}$ & $63.45\std{0.22}$ \\
GATJK & $80.69 \std{0.36}$ & $71.19\std{6.96}$ (M) & $45.80 \std{0.72}$ & $44.78\std{0.50}$ & $56.70 \std{2.07}$ & $59.98\std{2.87}$\\
APPNP & $74.33 \std{0.38}$ & $62.58\std{0.08}$ & $38.15\std{0.26}$ & $32.19\std{0.07}$ & $85.36 \std{0.62}$ & $60.97\std{0.10}$\\
\hdashline
H\textsubscript{2}GCN & (M) & (M) & $49.09\std{0.10}$ & (M) & (M) & (M) \\
MixHop & \cellcolor{blue!25} $83.47 \std{0.71}$ & \cellcolor{blue!25} $81.07\std{0.16}$ & \cellcolor{blue!25} $51.81\std{0.17}$ & \cellcolor{blue!25} $52.16\std{0.09}$ (M) & \cellcolor{blue!25} $90.58 \std{0.16}$ & \cellcolor{blue!25} $65.64\std{0.27}$ \\
GPR-GNN & $81.38 \std{0.16}$ & $78.83\std{0.05}$ & $45.07\std{0.21}$ & $40.19\std{0.03}$ & $90.05 \std{0.31}$ & $61.89\std{0.29}$\\
GCNII & \cellcolor{blue!25} $82.92 \std{0.59}$ & $78.94 \std{0.11}$ (M) & $47.21 \std{0.28}$ & $37.88 \std{0.69}$ (M) & \cellcolor{blue!25} $90.24 \std{0.09}$ & $63.39 \std{0.61}$\\
\midrule
LINKX & \cellcolor{blue!25} $84.71 \std{0.52}$ & \cellcolor{blue!25} $82.04\std{0.07}$ & \cellcolor{blue!25} $56.00 \std{1.34}$ & \cellcolor{blue!25} $61.95 \std{0.12}$ & \cellcolor{blue!25} $90.77 \std{0.27}$ & \cellcolor{blue!25} $66.06\std{0.19}$\\
\bottomrule
\end{tabular}
}
\vspace{-10pt}
\end{table}
\subsection{Experimental Setup}\label{sec:experimental_setup}
\myparagraph{Methods.} As simple baselines, we include methods that are graph-agnostic as well as methods that are node-feature-agnostic. The node-feature-agnostic models of two-hop label propagation \cite{peel2017graph} and LINK (logistic regression on the adjacency matrix) \cite{zheleva2009to} have been found to perform well in various non-homophilous settings, but they have often been overlooked by recent graph representation learning work. Also, we include SGC \cite{wu2019simplifying} and C\&S \cite{huang2021combining} as simple, scalable methods that perform well on homophilic datasets. We include a two-hop propagation variant of C\&S in analogy with two-step label propagation. In addition to representative general GNNs, we also include GNNs recently proposed for non-homophilous settings. The full list of methods is: \textbf{Only node features:} MLP \cite{goodfellow2016deep}.
\textbf{Only graph topology:} label propagation (standard and two-hop) \cite{zhou2004learning, peel2017graph}, LINK \cite{zheleva2009to}.
\textbf{Simple methods:} SGC \cite{wu2019simplifying}, C\&S \cite{huang2021combining} and their two-hop variants.
\textbf{General GNNs:} GCN \cite{kipf2017semi}, GAT \cite{velivckovic2018graph}, jumping knowledge networks (GCNJK, GATJK) \cite{xu2018representation}, and APPNP \cite{klicpera2019predict}.
\textbf{Non-homophilous methods:} H\textsubscript{2}GCN \cite{zhu2020beyond}, MixHop \cite{abu2019mixhop}, GPR-GNN \cite{chien2021adaptive}, GCNII \cite{chen2020simple}, and LINKX (ours).
\myparagraph{Minibatching methods.} We also evaluate GNNs with various minibatching methods. We take GCNJK~\cite{xu2018representation} and MixHop~\cite{abu2019mixhop} as our base models for evaluation, as they are representative of many GNN design choices and MixHop performs very well in full batch training. As other minibatching methods are trickier to make work with these models, we use the Cluster-GCN~\cite{chiang2019cluster} and GraphSAINT~\cite{zeng2019graphsaint} minibatching methods, which sample subgraphs. We include both the node based sampling and random walk based sampling variants of GraphSAINT. We compare these GNNs with MLP, LINK, and our LINKX, which use simple i.i.d. node minibatching.
\myparagraph{Training and evaluation.} Following other works in non-homophilous graph learning evaluation, we take a high proportion of training nodes \cite{zhu2020beyond, pei2019geom, yan2021two}; we run each method on the same five random 50/25/25 train/val/test splits for each dataset. All methods requiring gradient-based optimization are run for 500 epochs, with test performance reported for the learned parameters of highest validation performance. We use ROC-AUC as the metric for the class-imbalanced genius dataset (about 80\% of nodes are in the majority class), as it is less sensitive to class-imbalance than accuracy. For other datasets, we use classification accuracy as the metric. Further experimental details are in Appendix \ref{sec:exp_details}.
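For concreteness, such splits can be generated as in the following sketch (the seed handling is illustrative):
\begin{verbatim}
import numpy as np

def random_split(num_nodes, seed, train_prop=0.5, valid_prop=0.25):
    """One uniformly random 50/25/25 train/val/test split."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_tr = int(train_prop * num_nodes)
    n_va = int(valid_prop * num_nodes)
    return perm[:n_tr], perm[n_tr:n_tr + n_va], perm[n_tr + n_va:]

# num_nodes is set per dataset; five splits for the five reported runs
splits = [random_split(num_nodes, seed) for seed in range(5)]
\end{verbatim}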
\subsection{Full-Batch Results}
Table~\ref{tab:results} lists the results of each method across the datasets that we propose.
Our datasets reveal several important properties of non-homophilous node classification. Firstly, the stability of performance across runs is better for our datasets than for those of \citet{pei2019geom} (see the results in \cite{zhu2020beyond}). Secondly, as suggested by prior theory and experiments \cite{zhu2020beyond, abu2019mixhop, chien2021adaptive}, the non-homophilous GNNs usually do well --- though not necessarily on every dataset.
The core assumption of homophily in SGC and C\&S that enables them to be simple and efficient does not hold on these non-homophilous datasets, and thus the performance of these methods is typically relatively low. Still, as expected, two-hop variants generally improve upon their one-hop counterparts in these low-homophily settings.
One consequence of using larger datasets for benchmarks is that the tradeoff between scalability and learning performance of non-homophilous methods has become starker, with some methods facing memory issues. This tradeoff is especially important to consider in light of the fact that many scalable graph learning methods rely on implicit or explicit homophily assumptions \cite{wu2019simplifying, huang2021combining, deng2020graphzoom, bojchevski2020scaling}, and thus face issues when used in non-homophilous settings.
Finally, LINKX achieves superior performance on all datasets, taking advantage of LINK's power, while also being able to utilize node features where they provide additional information.
\subsection{Minibatching Results}\label{sec:mini_results}
\begin{table}[ht]
\vspace{-5pt}
\centering
\caption{Minibatching results on our proposed datasets. $\dagger$ denotes that 10 random partitions of the graphs are used for testing GraphSAINT sampling. (T) denotes that five runs take $\geq 48$ hours for a single hyperparameter setting. Best results up to a standard deviation are highlighted.}
\label{tab:mini_results}
{\tiny
\begin{tabular}{lllllllll}
\toprule
& Penn94 & pokec $\dagger$ & arXiv-year & snap-patents $\dagger$ & genius & twitch-gamers $\dagger$ & wiki $\dagger$ \\
\midrule
MLP Minibatch & 74.24$\pm$0.55 & 62.14$\pm$0.05 & 36.89$\pm$0.11 & 22.96$\pm$0.81 & 82.35$\pm$0.38 & 61.01$\pm$0.06 & 37.38$\pm$0.21 \\
LINK Minibatch & 81.61$\pm$0.34 & \cellcolor{blue!25} 81.15$\pm$0.25 & \cellcolor{blue!25} 53.76$\pm$0.28 & 45.65$\pm$8.25 & 80.95$\pm$0.07 & 64.38$\pm$0.26 & 57.11$\pm$0.26 \\
GCNJK-Cluster & 69.99$\pm$0.85 & 72.67$\pm$0.05 & 44.05$\pm$0.11 & 37.62$\pm$0.31 & 83.04$\pm$0.56 & 61.15$\pm$0.16 & (T) \\
GCNJK-SAINT-Node & 72.80$\pm$0.43 & 63.68$\pm$0.06 & 44.30$\pm$0.22 & 26.97$\pm$0.10 & 80.96$\pm$0.09 & 59.50$\pm$0.35 & 44.86$\pm$0.19 \\
GCNJK-SAINT-RW & 72.29$\pm$0.49 & 65.00$\pm$0.11 & 47.40$\pm$0.17 & 33.05$\pm$0.06 & 81.04$\pm$0.14 & 59.82$\pm$0.27 & 47.39$\pm$0.19 \\
MixHop-Cluster & 75.79$\pm$0.44 & 76.67$\pm$0.07 & 48.41$\pm$0.31 & 46.82$\pm$0.11 & 81.12$\pm$0.10 & 62.95$\pm$0.08 & (T) \\
MixHop-SAINT-Node & 75.61$\pm$0.55 & 66.42$\pm$0.06 & 44.84$\pm$0.18 & 27.45$\pm$0.11 & 81.06$\pm$0.08 & 59.58$\pm$0.27 & 47.39$\pm$0.18\\
MixHop-SAINT-RW & 76.38$\pm$0.50 & 67.92$\pm$0.06 & 50.55$\pm$0.20 & 34.21$\pm$0.07 & 82.25$\pm$0.78 & 60.39$\pm$0.16 & 49.15$\pm$0.26 \\
\midrule
LINKX Minibatch & \cellcolor{blue!25} 84.50$\pm$0.65 & \cellcolor{blue!25} 81.27$\pm$0.38 & \cellcolor{blue!25} 53.74$\pm$0.27 & \cellcolor{blue!25} 60.27$\pm$0.29 & \cellcolor{blue!25} 85.81$\pm$0.10 & \cellcolor{blue!25} 65.84$\pm$0.19 & \cellcolor{blue!25} 59.80$\pm$0.41 \\
\bottomrule
\end{tabular}
}
\vspace{-10pt}
\end{table}
Our experimental results for minibatched methods on our proposed datasets are in Table~\ref{tab:mini_results}. Since GraphSAINT does not partition the nodes of the graph into subgraphs that cover all nodes, we test on the full input graph for the smaller datasets, and on uniformly random partitions of the graph into 10 induced subgraphs for the larger datasets.
First, we note that both Cluster-GCN and GraphSAINT sampling lead to performance degradation for these methods on our proposed non-homophilous datasets. When compared to the full-batch training results of the previous section, classification accuracy is typically substantially lower. Further experiments in Appendix~\ref{sec:appendix_minibatch_exp} give evidence that the performance degradation is often more substantial in non-homophilous settings, and provide possible explanations for why this may be the case.
On the other hand, LINKX does not suffer much performance degradation with the simple i.i.d. node minibatching technique. In fact, it matches or outperforms all methods in this setting, often by a wide margin. Though LINK performs on par with LINKX in arXiv-year and pokec, our LINKX model significantly outperforms it on other datasets, again due to LINKX's ability to integrate node feature information. We again stress that the LINKX minibatching is very simple to implement, yet it still substantially outperforms other methods. Consequently, LINKX is generally well-suited for scalable node classification across a broad range of non-homophilous settings, surpassing even specially designed non-homophilous GNNs with current graph minibatching techniques.
\section{Discussion and Conclusion}\label{sec:conclusion}
In this paper, we propose new, high-quality non-homophilous graph learning datasets, and we benchmark simple baselines and representative graph representation learning methods across these datasets. Further, we develop LINKX: a strong, simple, and scalable method for non-homophilous classification. Our experiments show that LINKX significantly outperforms other methods on our proposed datasets, thus providing one powerful method in the underexplored area of scalable learning on non-homophilous graphs.
We hope that our contributions will provide researchers with new avenues of research in learning on non-homophilous graphs, along with better tools to test models and evaluate utility of new techniques.
While we do find utility in our proposed datasets and LINKX model, this work is somewhat limited by only focusing on transductive node classification. This setting is the most natural for studying performance in the absence of homophily, since here we define homophily in terms of the node labels, and previous non-homophilous GNN work using the \citet{pei2019geom} data also studies this setting exclusively \cite{zhu2020beyond, chien2021adaptive}. Using other Facebook 100 datasets besides Penn94 \cite{traud2012social} would allow for inductive node classification, but LINKX does not directly generalize to this setting. Our proposed datasets and model LINKX could be used for link prediction, but this is left for future work.
\myparagraph{Broader Impact.}
Fundamental research in graph learning on non-homophilous graphs has the potential for positive societal benefit. As a major application, it enables malicious node detection techniques in social networks and transaction networks that are not fooled by fraudsters’ connections to legitimate users and customers. This is a widely studied task, and past works have noted that non-homophilous structures are present in many such networks \cite{breuer2020friend, gatterbauer2014semi, pandit2007netprobe}. We hope that this paper provides insight into the homophily limitations of existing scalable graph learning models and helps researchers design scalable models that continue to work well in the non-homophilous regime, thus improving the quality of node classification on graphs more broadly. As our proposed datasets have diverse structures and our model performs well across all of these datasets, the potential for future application of our work to important non-homophilous tasks is high.
Nevertheless, our work could also have potential for different types of negative social consequences. Nefarious behavior by key actors could be one source of such consequences.
Nonetheless, we expect that the actors that can make use of large-scale social networks for gender prediction as studied in our work are limited in number. Actors with both the capability and incentive to perform such operations probably mostly consist of entities with access to large social network data such as social media companies or government actors with auxiliary networks \cite{Narayanan2009}.
Smaller actors can perform certain attacks, but this may be made more difficult by resource requirements such as the need for certain external information \cite{Narayanan2009} or the ability to add nodes and edges before an anonymized version of a social network is released \cite{Backstrom2007WhereforeAT}. Furthermore, additional actors could make use of deanonymization attacks \cite{Hay2008Resisting, Narayanan2008, Narayanan2009} to reveal user identities in supposedly anonymized datasets.
Also, accidental consequences and implicit biases are a potential issue, even if the applications of the learning algorithms are benign and intended to benefit society \cite{mehrabi2019survey}. Performance of algorithms may vary substantially between intersectional subgroups of subjects — as in the case of vision-based gender predictors \cite{buolamwini18gender} (and some have questioned the propriety of vision-based gender classifiers altogether). Thus, there may be disparate effects on different populations, so care should be taken to understand the impact of those differences across subgroups. Moreover, large datasets require computing resources, so projects can only be pursued by large entities at the possible expense of the individual and smaller research groups \cite{birhane2021values}. This is alleviated by the fact that our experiments are each run on one GPU, and hence have significantly less GPU computing requirements than much current deep learning research. Thus, smaller research groups and independent researchers should find our work beneficial, and should be able to build on it.
Finally, the nature of collection of online user information also comes with notable ethical concerns. Common notice-and-consent policies are often ineffective in actually protecting user privacy \cite{Nissenbaum2011ACA}. Indeed, users may not actually have much choice in using certain platforms or sharing data due to social or economic reasons. Also, users are generally unable to fully read and understand all of the different privacy policies that they come across, and may not understand the implications of having their data available for long periods of time to entities with powerful inference algorithms. Furthermore, people may rely on obscurity for privacy \cite{CaseOnlineObscurity}, but this assumption may be ignored in courts of law, and it may be directly broken when data leaks or is released in aggregated form without sufficient privacy protections. Overall, while we believe that our work will benefit machine learning research and enable positive applications, we must still be aware of possible negative consequences.
\subsubsection*{Acknowledgements}
We thank Abhay Singh, Austin Benson, and Horace He for insightful discussions.
We also thank the rest of Cornell University Artificial Intelligence
for their support and discussion. We thank Facebook AI for funding equipment that made this work possible.
\section{Introduction \label{sec:Introduction}}
The L{\'e}vy walk is a popular and well-studied model which describes
a variety of physical scenarios in which superdiffusive dynamics lead
to nonlocal stationary behavior. Nonlocality is manifested in the
mathematical description of the relevant observables in terms of an
integral equation with a power-law kernel \cite{mandelbrot1982fractal,drysdale1998levy,buldyrev2001average,lepri2011density,dhar2013exact,zaburdaev2015levy}.
One fruitful application of the L{\'e}vy walk model is to the study of
anomalous heat transport in one-dimensional (1D) Hamiltonian systems
\cite{cipriani2005anomalous,zhao2006identifying,zoia2007fractional,lepri2011density,zaburdaev2011perturbation,dhar2013exact,liu2014anomalous,zaburdaev2015levy}.
When such systems are constrained to an interval of length $L$ and
driven out of equilibrium by heat baths of temperature difference
$\Delta T$, one observes an anomalous transport behavior in the asymptotic
large-$L$ limit: The energy current $J_{e}$ is found to scale as
$J_{e}\sim\Delta T/L^{1-\alpha}$ and the temperature profile is singular
at the boundaries. The ``anomalous exponent'' $\alpha\in\oc{0,1}$ gets
its name from the fact that Fourier's law predicts $\alpha=0$ \cite{grassberger2002heat,cipriani2005anomalous,lepri2011density,dhar2013exact}.
Active research of anomalous transport focuses on its universal features
in the asymptotic limit. The main questions include the classification
of models into different universality classes, precisely determining
the anomalous exponent $\alpha$ corresponding to each universality class
and the relation between $\alpha$ and the singular behavior of the accompanying
temperature profiles \cite{lepri2011density,lepri2016heat,cividini2017temperature}.
Although significant progress has recently provided several theoretical
predictions for the different universal values of $\alpha$ \cite{narayan2002anomalous,grassberger2002heat,wang2004intriguing,cipriani2005anomalous,lukkarinen2008anomalous,spohn2014nonlinear,popkov2015fibonacci},
obtaining conclusive experimental support is a difficult problem.
Namely, the asymptotic behavior predicted in theory for large-$L$
is hard to reach in numerical simulations and experiments due to finite-size
corrections \cite{cipriani2005anomalous}. In fact, it is generally
not known how large a system should be to ensure the asymptotic limit
is reached \cite{Miron2019}. Indeed, the literature contains numerous
observed values of $\alpha$ for a variety of models. These $\alpha$'s are
usually extracted from numerical simulations by fitting the observed
$L$ dependence of $J_{e}\left(L\right)$ to an inverse power law,
assuming that the system is safely inside the asymptotically large-$L$
regime. However, not all simulation results stand in agreement \cite{aoki2001fermi,grassberger2002heat,li2003anomalous,wang2004intriguing,cipriani2005anomalous,lepri2005studies,basile2007anomalous},
making it difficult to determine the different universality classes
and refute incompatible predictions. For this reason, insufficient
understanding of finite-size corrections poses a significant hurdle
which must be overcome to make progress.
Since L{\'e}vy particles are noninteracting, anomalous L{\'e}vy walk transport
is easier to study than that of Hamiltonian models. Imposing different-density
baths similarly gives rise to an anomalous walker current $J$ and
a corresponding singular density profile $P\left(x\right)$. This
setup was studied in Refs. \cite{lepri2011density,dhar2013exact}
where an integral equation relating the asymptotic current $J_{0}$
and density profile $P_{0}\left(x\right)$ was formulated and solved
exactly in Ref. \cite{dhar2013exact}. However, finite-size corrections
remain unstudied.
In this paper, a perturbative method is presented for computing finite-size
corrections to the asymptotic L{\'e}vy walk results in three steps: First,
the integral equation relating the asymptotic $J_{0}$ and $P_{0}\left(x\right)$
of Ref. \cite{dhar2013exact} is extended to include finite-size corrections.
Then, a perturbative method for computing the corrections order-by-order
in inverse powers of $L$ is introduced. Finally, the method is used
to explicitly compute the leading correction to the asymptotic $J_{0}$
and $P_{0}\left(x\right)$ for a L{\'e}vy walk of order $\beta=5/3$, which
is expected to represent a broad universality class of anomalous transport
models with exponent $\alpha=1/3$ \cite{cipriani2005anomalous}. In this
case, the asymptotic current $J_{0}$ decays as $J_{0}\sim L^{-2/3}$,
whereas its leading correction $J_{1}$ is shown to decay as $J_{1}\sim L^{-1}$.
Thus, the asymptotic regime in which $J_{0}\gg J_{1}$ is reached
only when $L$ is very large, further illustrating the importance
of accounting for finite-size corrections. The intuitive explanation
behind the diffusive correction $J_{1}$ is that, although the width
of the L{\'e}vy walkers\textquoteright{} walk-time distribution diverges,
its mean is finite. Thus, although the walkers occasionally undergo
very long excursions, most of the walks last a small amount of time,
leading to diffusive transport. In an infinite system, the contribution
of finite walks vanishes. However, in finite systems they give rise
to corrections, the first of which is diffusive transport.
For the reasons noted above, these results constitute a crucial step
towards understanding finite-size corrections in anomalous transport
and, ultimately, in settling the debate on the different universality
classes and the precise values of the corresponding anomalous exponents.
A surprising corollary is that integral equations, which are similar
to the one derived for anomalous L{\'e}vy walk transport, also show up
in many additional physical scenarios. They appear, for example, in
the mean first-passage time of L{\'e}vy walkers on a finite interval with
absorbing boundaries \cite{buldyrev2001average}, in the anomalous
transport of a stochastic 1D gas model \cite{Miron2019}, in nonlocal
elasticity theory \cite{lazopoulos2006non,carpinteri2011fractional}
and in the viscosity of polymers in a solvent \cite{kirkwood1948intrinsic,douglas1989surface}.
As such, the method presented here for studying anomalous L{\'e}vy walk
transport can be directly applied to a diverse set of problems, spanning
across a wide range of research fields.
The paper is organized as follows: The L{\'e}vy walk model and nonequilibrium
setup are presented in Sec. \ref{sec:The-Model}. Section \ref{sec:A-Step-Beyond}
extends the asymptotic results of Ref. \cite{dhar2013exact} by first
deriving a more general integral equation, containing information
on both the asymptotic behavior and its corrections, and then presenting
a perturbative method for solving it. The method is explicitly used
to compute the leading correction to the known asymptotic behavior
for the L{\'e}vy walk of order $\beta=5/3$ in Sec. \ref{sec:The-Leading-Correction}.
Section \ref{sec:Different beta} provides important details which
are relevant when applying the method to other values of $\beta$ and
higher order corrections. Concluding remarks follow in Sec. \ref{sec:CONCLUSIONS}.
\section{The Model \label{sec:The-Model}}
The L{\'e}vy walk model of order $\beta$ describes particles moving at a
fixed velocity $v$ which evolve via random \textquotedblleft walks\textquotedblright{}
of duration $t$ drawn from the distribution
\begin{equation}
\phi\left(t\right)=\beta t_{0}^{\beta}\frac{\theta\left[t-t_{0}\right]}{t^{\beta+1}},\label{eq:phi(t)}
\end{equation}
where $1<\beta<2$, $t_{0}$ is the minimal walk-time and $\theta\left[\tau\right]$
is the step function. All but the first moment of $\phi\left(t\right)$
diverge, giving rise to rare, long walks that connect distant points
in the system \cite{grassberger2002heat,cipriani2005anomalous,lepri2011density,dhar2013exact}.
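For intuition, and for direct Monte Carlo checks, walk times with the density of Eq. (\ref{eq:phi(t)}) can be sampled by inverting the cumulative distribution: with $u$ drawn uniformly from $\left(0,1\right]$, the variable $t=t_{0}u^{-1/\beta}$ is distributed according to $\phi\left(t\right)$. A minimal numerical sketch (the sample size is illustrative):
\begin{verbatim}
import numpy as np

def sample_walk_times(n, beta=5/3, t0=1.0, rng=None):
    """Inverse-CDF sampling of phi(t): t = t0 * u**(-1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    u = 1.0 - rng.random(n)            # uniform on (0, 1]
    return t0 * u ** (-1.0 / beta)

t = sample_walk_times(10**6)
print(t.mean())  # finite first moment: beta*t0/(beta-1) = 2.5 for beta=5/3
print(t.max())   # occasional very long walks: higher moments diverge
\end{verbatim}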
The model is studied on a 1D interval parameterized by $x\in\left[0,L\right]$.
Following Ref. \cite{dhar2013exact}, let $P\left(x,t\right)\mathbf{d} x$
denote the number of walkers crossing the interval $\left(x,x+\mathbf{d} x\right)$
at time $t$ and let $Q\left(x,t\right)\mathbf{d} x\mathbf{d} t$ denote the number
of walkers whose walk ends inside the interval $\left(x,x+\mathbf{d} x\right)$
during the time interval $\left(t,t+\mathbf{d} t\right)$. Correspondingly,
$P\left(x,t\right)$ is called the walker density and $Q\left(x,t\right)$
is called the turning-point density. It will prove useful to consider
the rescaled position $x\in\left[0,1\right]$, obtained by dividing
the position by $L$.
To model the nonequilibrium settings of anomalous transport, appropriate
boundary conditions must be imposed. Following Ref. \cite{dhar2013exact},
different density walker baths are imposed at the two ends of the
system by setting
\begin{equation}
Q\left(x\le0\right)=Q_{L}\text{ and }Q\left(x\ge1\right)=Q_{R}.\label{eq:baths}
\end{equation}
With these boundary conditions, the stationary walker current satisfies
the integral equation
\[
J_{exact}\left(x\right)=\frac{v}{2}\left(Q_{L}\int_{\frac{Lx}{v}}^{\infty}\mathbf{d}\tau\psi\left(\tau\right)-Q_{R}\int_{\frac{L\left(1-x\right)}{v}}^{\infty}\mathbf{d}\tau\psi\left(\tau\right)\right)
\]
\begin{equation}
+\frac{L}{2}\int_{0}^{1}\mathbf{d} y\text{ }\sign{x-y}\psi\left(\frac{L\left|x-y\right|}{v}\right)Q\left(y\right),\label{eq:Derrida's current}
\end{equation}
where $\psi\left(t\right)$, the probability of drawing a walk-time
larger than $t$, is given by
\begin{equation}
\psi\left(t\right)=\int_{t}^{\infty}\mathbf{d}\tau\phi\left(\tau\right)=1+\theta\left[t-t_{0}\right]\left(\left(\frac{t_{0}}{t}\right)^{\beta}-1\right).\label{eq:psi beta}
\end{equation}
Eq. (\ref{eq:Derrida's current}) implies that the walker current
at position x is the sum of two contributions: The first line describes
the contribution coming from the two constant-density walker baths,
whereas the second line describes the contributions from walkers inside
the system. Since the system is in its steady state, the current must
be independent of $x$, i.e. $J_{exact}\left(x\right)\equiv J_{exact}$.
It was also shown in Ref. \cite{dhar2013exact} that the turning point
density $Q\left(x\right)$ satisfies the self-consistent equation
\[
Q\left(x\right)=\frac{Q_{L}}{2}\psi\left(\frac{Lx}{v}\right)+\frac{Q_{R}}{2}\psi\left(\frac{L\left(1-x\right)}{v}\right)
\]
\begin{equation}
+\frac{L}{2v}\int_{0}^{1}\mathbf{d} y\phi\left(\frac{L\left|x-y\right|}{v}\right)Q\left(y\right),\label{eq:Q of x}
\end{equation}
and that the turning point density $Q\left(x\right)$ is related to
the walker density $P\left(x\right)$ by
\begin{equation}
P\left(x\right)=\frac{\beta t_{0}}{\beta-1}Q\left(x\right)+\ord{\varepsilon^{\beta-1}},\label{eq:Q P relation}
\end{equation}
where $\varepsilon=t_{0}v/L$ plays the role of a dimensionless inverse system-size.
Eqs. (\ref{eq:Derrida's current}), (\ref{eq:Q of x}) and (\ref{eq:Q P relation})
constitute the starting point of this study.
\section{A Step Beyond Asymptotics \label{sec:A-Step-Beyond}}
Since the solution of Eq. (\ref{eq:Derrida's current}) for $J_{exact}$
is hard to compute, the first step is to expand the equation in small
$\varepsilon$
\begin{equation}
J\approx A\varepsilon P'\left(x\right)-B\varepsilon^{\beta-1}\int_{0}^{1}\mathbf{d} y\frac{P'\left(y\right)}{\left|x-y\right|^{\beta-1}},\label{eq:Fredholm 2nd}
\end{equation}
where $A=\frac{v\left(\beta-1\right)}{2\left(2-\beta\right)}$, $B=\frac{v}{2\beta}$,
$P'\left(x\right)$ denotes the derivative of $P\left(x\right)$.
Note that corrections of $\ord{\varepsilon^{2\left(\beta-1\right)}}$ have been
neglected and will be addressed later in Sec. \ref{sec:Different beta}
(also see Appendix A). Equation (\ref{eq:Fredholm 2nd}) is derived
by substituting $\psi\left(t\right)$ into Eq. (\ref{eq:Derrida's current})
for $J_{exact}$, expanding up to linear order in $\varepsilon$ and employing
the relation between $P\left(x\right)$ and $Q\left(x\right)$ of
Eq. (\ref{eq:Q P relation}). A similar equation for $A=0$ was derived
in Ref. \cite{dhar2013exact}.
Before proceeding to solve Eq. (\ref{eq:Fredholm 2nd}), let us first
establish some useful notations. The rightmost term in Eq. (\ref{eq:Fredholm 2nd})
is intuitively called the ``nonlocal'' term since it depends on
the values of $P'\left(x\right)$ across the entire system. Naturally,
the term $A\varepsilon P'\left(x\right)$ is then referred to as the ``local''
term and $J$ is called the ``source'' term. The integral Eq. (\ref{eq:Fredholm 2nd})
for $J$ is a weakly singular Fredholm integral equation (WSFIE) of
the second kind \cite{kress1989linear,moiseiwitsch2011integral,zemyan2012classical}.
It is called weakly singular since the integral kernel diverges for
$y=x$ yet, since $0<\beta-1<1$, the singularity is integrable. Last,
when the unknown function appears both under the integral sign and
outside the integral, the equation is of the \textquotedblleft second
kind\textquotedblright{} but if it appears only under the integral
sign, it is of the \textquotedblleft first kind\textquotedblright .
Note that an equation for $J$, containing both a local term $\sim\ord{\varepsilon^{-1}}$
and a non-local term $\sim\ord{\varepsilon^{\beta-1}}$, is obtained whenever
the walk-time distribution $\phi\left(t\right)$ contains a short walk-time
cutoff mechanism.
\begin{figure}
\includegraphics[scale=0.67]{Q1_collapse_various_eps_eps_1_over_n}
\caption{A comparison between the density profile $P_{1}\left(x\right)$ of
Eq. (\ref{eq:P'1_1}) and the collapse of $\protect\varepsilon^{1/3}\left(P_{Num}\left(x\right)-P_{0}\left(x\right)\right)$
for different values of the inverse system size $\protect\varepsilon$. The
two are expected to coincide as $\protect\varepsilon\protect\ra0$.\label{fig:P_1 collapse}}
\end{figure}
The simplest way to proceed is to take the asymptotic $\varepsilon\ra0$ limit
in Eq. (\ref{eq:Fredholm 2nd}). In this limit the local term vanishes
from Eq. (\ref{eq:Fredholm 2nd}), and with it all information about
finite-size corrections, reducing the equation to the WSFIE of the
first kind studied in Ref. \cite{dhar2013exact}. Although this WSFIE
can indeed be solved exactly by the Sonin formula \cite{samko1993fractional,buldyrev2001average},
the trade-off is that finite-size corrections remain out of reach.
Here, a method which preserves information about finite size corrections
is suggested instead. This method relies on the interplay between
the local term $\propto\varepsilon$ and the nonlocal term $\propto\varepsilon^{\beta-1}$ to
construct an ansatz for $P'\left(x\right)$ and $J$ in the form of
a power-series in $\varepsilon^{\beta-2}$, the ratio of the two scales, as
\begin{equation}
\begin{cases}
P'=P_{0}'+\varepsilon^{2-\beta}P_{1}'+\varepsilon^{2\left(2-\beta\right)}P_{2}'+...\\
J=\varepsilon^{\beta-1}\left[\mathcal{J}_{0}+\varepsilon^{2-\beta}\mathcal{J}_{1}+\varepsilon^{2\left(2-\beta\right)}\mathcal{J}_{2}+...\right]
\end{cases},\label{eq:first few terms}
\end{equation}
where $P_{n}'\left(x\right)$ and $\mathcal{J}_{n}$ are independent
of $\varepsilon$. In turn, this allows replacing Eq. (\ref{eq:Fredholm 2nd})
by a hierarchy of WSFIEs of the first kind
\begin{equation}
\begin{cases}
\mathcal{J}_{0}=-B\int_{0}^{1}\mathbf{d} y\frac{P_{0}'\left(y\right)}{\left|x-y\right|^{\beta-1}} & \text{at }\text{\ensuremath{\ord{\varepsilon^{\beta-1}}}}\\
\mathcal{J}_{1}=AP_{0}'\left(x\right)-B\int_{0}^{1}\mathbf{d} y\frac{P_{1}'\left(y\right)}{\left|x-y\right|^{\beta-1}} & \text{at }\text{\ensuremath{\ord{\varepsilon}}}\\
\vdots\\
\mathcal{J}_{n}=AP_{n-1}'\left(x\right)-B\int_{0}^{1}\mathbf{d} y\frac{P_{n}'\left(y\right)}{\left|x-y\right|^{\beta-1}} & \text{at }\ord{\varepsilon^{\left(\beta-1\right)+\left(2-\beta\right)n}}
\end{cases}\label{eq:first few equations}
\end{equation}
many of which can be solved using the Sonin formula \cite{samko1993fractional,buldyrev2001average}.
The first hierarchy equation, at $\ord{\varepsilon^{\beta-1}}$, coincides with
the asymptotic equation of Ref. \cite{dhar2013exact} while the rest
provide increasingly higher-order, finite-size corrections which must
be solved in an iterative fashion. It is important to stress that
this method can be extended to additional WSFIEs of the second kind
which exhibit a similar interplay between the local and nonlocal terms,
even when the constant source term, e.g., $J$ in Eq. (\ref{eq:Fredholm 2nd}),
is replaced by a sufficiently well-behaved function of $x$ (see Appendix
B). In particular, it can be directly applied to the problems mentioned
in Sec. \ref{sec:Introduction} \cite{kirkwood1948intrinsic,douglas1989surface,lazopoulos2006non,carpinteri2011fractional,Miron2019}.
\section{The Leading Correction For $\protect\beta=5/3$ \label{sec:The-Leading-Correction}}
This method is next used to compute the leading correction to the
asymptotic density profile and current for a L{\'e}vy walk of order $\beta=\frac{5}{3}$.
The generalization to different values of $\beta$ and higher-order
corrections is then discussed in Sec. \ref{sec:Different beta}.
Applying the ansatz
\begin{equation}
\begin{cases}
P'\left(x\right)=P_{0}'\left(x\right)+\varepsilon^{1/3}P_{1}'\left(x\right)+\ord{\varepsilon^{2/3}}\\
J=\varepsilon^{2/3}\mathcal{J}_{0}+\varepsilon\mathcal{J}_{1}+\ord{\varepsilon^{4/3}}
\end{cases},\label{eq:P' and J ansatz 5/3}
\end{equation}
to Eq. (\ref{eq:Fredholm 2nd}) for $J$ yields a hierarchy of WSFIEs
of the first kind. The first equation, appearing at $\ord{\varepsilon^{2/3}}$,
is simply the asymptotic equation studied in Ref. \cite{dhar2013exact}.
The solution is obtained by applying the Sonin formula \cite{samko1993fractional,buldyrev2001average}
(see Appendix B) and enforcing the boundary conditions in Eq. (\ref{eq:baths}).
One finds
\begin{equation}
P_{0}'\left(x\right)=-\frac{b\mathcal{J}_{0}}{v\left(x\left(1-x\right)\right)^{1/6}}\text{ }\text{and}\text{ }\mathcal{J}_{0}=-\frac{av\Delta P}{b}\label{eq:J0 P0}
\end{equation}
where $a=\Gamma\left[\frac{5}{3}\right]/\Gamma\left[\frac{5}{6}\right]^{2}$,
$b=\frac{5}{3\pi}$, $\Gamma\left[x\right]$ is the gamma function and
$\Delta P\equiv P_{R}-P_{L}=\frac{5t_{0}}{2}\left(Q_{R}-Q_{L}\right)$
follows from Eq. (\ref{eq:Q P relation}).
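Numerically, $a\approx0.709$ and $b\approx0.531$. A short script evaluating the resulting asymptotic current for a few system sizes (a sketch, using the parameter values of Fig. \ref{fig:J}):
\begin{verbatim}
from math import gamma, pi

beta = 5 / 3
a = gamma(5 / 3) / gamma(5 / 6) ** 2    # ~ 0.709
b = 5 / (3 * pi)                        # ~ 0.531
v, t0, dP = 1.0, 1.0, 5 / 2             # dQ = 1 implies dP = 5/2 here
for L in (1e2, 1e3, 1e4):
    eps = t0 * v / L
    J0 = -eps ** (2 / 3) * a * v * dP / b
    print(L, J0)                        # J0 ~ -3.34 * eps**(2/3)
\end{verbatim}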
To step beyond the known asymptotic results, let us consider the next
hierarchy equation for the leading correction, $P_{1}'\left(x\right)$.
This equation appears at $\ord{\varepsilon}$ and is given by
\begin{equation}
\frac{10}{3v}\left(vP_{0}'\left(x\right)-\mathcal{J}_{1}\right)=\int_{0}^{1}\mathbf{d} y\frac{P_{1}'\left(y\right)}{\left|x-y\right|^{2/3}}.\label{eq:J_1 5/3}
\end{equation}
Due to the hierarchical structure of the ansatz of Eq. (\ref{eq:P' and J ansatz 5/3}),
$P_{0}'\left(x\right)$ enters this equation as a source term. Equation
(\ref{eq:J_1 5/3}) is also a WSFIE of the first kind since $P_{1}'\left(x\right)$
appears only inside the integral. Applying the Sonin formula \cite{samko1993fractional,buldyrev2001average}
yields
\begin{equation}
P_{1}'\left(x\right)=-\frac{b}{v}\left(\frac{\mathcal{J}_{1}}{\left(x\left(1-x\right)\right)^{1/6}}+\frac{av\Delta P}{3^{1/2}2^{1/3}}I\left(x\right)\right),\label{eq:P'1_1}
\end{equation}
where $\varepsilon\mathcal{J}_{1}$ is the yet-unknown leading correction
to the asymptotic current and $I\left(x\right)$ is given by
\begin{equation}
I\left(x\right)=\frac{1}{x^{\frac{1}{6}}}\der{}x\int_{x}^{1}\frac{\mathbf{d} tt^{\frac{1}{3}}}{\left(t-x\right)^{\frac{1}{6}}}\der{}t\int_{0}^{t}\frac{\mathbf{d} q\left(1-q\right)^{-\frac{1}{6}}}{q^{\frac{1}{3}}\left(t-q\right)^{\frac{1}{6}}}.\label{eq:I(x) formal}
\end{equation}
Manipulating $I\left(x\right)$ to its closed form requires careful
treatment since Eq. (\ref{eq:I(x) formal}) contains nontrivial improper
integrals. One finds
\[
I\left(x\right)=-\frac{2^{2/3}}{\left(x\left(1-x\right)\right)^{1/6}}-\frac{16x^{5/6}}{2^{1/3}5\left(1-x\right)^{7/6}}\biggl(G_{+}\left(x\right)
\]
\[
-\frac{5\left(2x+1\right)}{16x}G_{-}\left(x\right)\biggl)-\frac{\left(2x+1\right)\left(H_{+}\left(x\right)+H_{-}\left(x\right)\right)}{2\sqrt{x}\left(1-x\right)^{7/6}}
\]
\begin{equation}
+\frac{16\Gamma\left[\frac{5}{6}\right]\Gamma\left[\frac{8}{3}\right]\left(K_{+}\left(x\right)-K_{-}\left(x\right)\right)}{15\sqrt{\pi}\left(1-x\right)^{7/6}}\label{eq:I(x)}
\end{equation}
where $G_{\pm}\left(x\right)=F_{1}\left[\frac{7}{6}\pm\frac{1}{2};\frac{1}{6},\frac{1}{6};\frac{13}{6}\pm\frac{1}{2};\frac{2\sqrt{x}}{\sqrt{x}-1},\frac{2\sqrt{x}}{\sqrt{x}+1}\right]$,
$H_{\pm}\left(x\right)=\left(1\pm\sqrt{x}\right)^{2/3}F_{1}\left[\frac{2}{3};\frac{1}{6},\frac{1}{6};\frac{5}{3};\frac{\sqrt{x}\pm1}{\sqrt{x}\mp1},1\right]$
and $K_{\pm}\left(x\right)=\left(1\pm\sqrt{x}\right)^{5/3}\hg 21\left[\frac{1}{6},\frac{5}{3};\frac{5}{2};\frac{\sqrt{x}\pm1}{\sqrt{x}\mp1}\right]$.
Here $F_{1}\left[a;b_{1},b_{2};c;z_{1},z_{2}\right]$ is the Appell
hypergeometric function and $\hg 21\left[a;b;c;z\right]$ is the Gauss
hypergeometric function.
The function $I\left(x\right)$ has two interesting properties: First,
it is easy to show that the hierarchy equation (\ref{eq:J_1 5/3})
is symmetric under reflections $x\ra1-x$ and so $I\left(x\right)$
must too respect this symmetry. Induction can be used to extend this
argument to all hierarchy equations (see Appendix B). Second, one
can also show that, near the left boundary of the system, $I\left(x\right)$
behaves as
\begin{equation}
I\left(x\ra0\right)\propto x^{-1/2}+\ord{x^{-1/6}}.\label{eq:boundary singularity}
\end{equation}
This implies that, for any finite $\varepsilon$, the boundary singularity
of the leading correction $P_{1}'\left(x\right)$ dominates over that
of the asymptotic solution $P_{0}'\left(x\right)$.
Having found the closed-form solution for $P_{1}'\left(x\right)$,
the final step is to determine $\mathcal{J}_{1}$. This is done by
integrating Eq. (\ref{eq:P'1_1}) for $P_{1}\left(x\right)$ with
the appropriate boundary conditions. Since the asymptotic results
already use $P_{0}\left(1\right)-P_{0}\left(0\right)=\Delta P$ in Eq.
(\ref{eq:J0 P0}), the corrections must satisfy $P_{n}\left(0\right)=P_{n}\left(1\right)=0$
for all $n>0$. With these boundary conditions, $\mathcal{J}_{1}$
is given by
\begin{equation}
\mathcal{J}_{1}=2^{4/3}3^{-1/2}av\Delta P.\label{eq:J1}
\end{equation}
Substituting $\mathcal{J}_{1}$ back inside Eq. (\ref{eq:P'1_1})
for $P_{1}'\left(x\right)$ yields the final expression for the leading
density gradient correction:
\begin{equation}
P_{1}'\left(x\right)=-\frac{ab\Delta P}{3^{1/2}2^{1/3}}\left(\frac{2^{5/3}}{\left(x\left(1-x\right)\right)^{1/6}}+I\left(x\right)\right).\label{eq:P1'(x)-1}
\end{equation}
Collecting these results into the ansatz in Eq. (\ref{eq:P' and J ansatz 5/3})
gives the two leading contributions to the anomalous current and density
profile of the L{\'e}vy walk of order $5/3$:
\begin{equation}
\begin{cases}
P'\left(x\right)\approx a\Delta P\biggl[\left(x\left(1-x\right)\right)^{-1/6}-\frac{b}{2^{1/3}3^{1/2}}\\
\text{ }\times\varepsilon^{1/3}\left(2^{5/3}\left(x\left(1-x\right)\right)^{-1/6}+I\left(x\right)\right)\biggl]\\
J\approx-\frac{a}{b}v\Delta P\varepsilon^{2/3}\left(1-\frac{2^{4/3}b}{3^{1/2}}\varepsilon^{1/3}\right)
\end{cases}.\label{eq:P' and J}
\end{equation}
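As a rough guide to when the asymptotic regime sets in, the relative size of the two current contributions is
\[
\left|\frac{\varepsilon\mathcal{J}_{1}}{\varepsilon^{2/3}\mathcal{J}_{0}}\right|=\frac{2^{4/3}b}{3^{1/2}}\,\varepsilon^{1/3}\approx0.77\,\varepsilon^{1/3},
\]
so that keeping the relative correction below, say, $10\%$ requires $\varepsilon\lesssim2\times10^{-3}$, i.e., system sizes $L\gtrsim5\times10^{2}\,t_{0}v$.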
Equations (\ref{eq:J1}) and (\ref{eq:P1'(x)-1}) are the first finite-size
corrections computed in the context of anomalous transport. To verify
that $P_{1}'\left(x\right)$ and $\mathcal{J}_{1}$ indeed describe
the leading correction to the asymptotic results in Eq. (\ref{eq:J0 P0}),
they are compared to the numerical solutions of the exact L{\'e}vy walk
model equations. These are Eq. (\ref{eq:Derrida's current}) for $J_{exact}$,
Eq. (\ref{eq:Q of x}) for $Q\left(x\right)$ and Eq. (\ref{eq:Q P relation})
which relates $Q\left(x\right)$ to $P\left(x\right)$ as $P\left(x\right)=\frac{\beta t_{0}}{\beta-1}Q\left(x\right)+\ord{\varepsilon^{\beta-1}}$.
In Ref.~\cite{dhar2013exact} the exact self-consistent equations
for $P\left(x\right)$ and $Q\left(x\right)$ were numerically solved
and shown to agree with Eq. (\ref{eq:Q P relation}). Figure \ref{fig:P_1 collapse}
shows $P_{1}\left(x\right)$ alongside the collapse of $\varepsilon^{1/3}\left(P_{Num}\left(x\right)-P_{0}\left(x\right)\right)$
for different values of $\varepsilon$. $P_{Num}\left(x\right)$ is obtained
by numerically solving Eq. (\ref{eq:Q of x}) for $Q_{Num}\left(x\right)$
and then using Eq. (\ref{eq:Q P relation}) to relate $Q_{Num}\left(x\right)$
to $P_{Num}\left(x\right)$. Notice that the matching to $P_{1}\left(x\right)$
breaks down near the endpoints. Indeed, the derivation of the approximate
relation between $J$ and $P\left(x\right)$ in Eq. (\ref{eq:Fredholm 2nd})
is valid only for $x\in\left[\varepsilon,1-\varepsilon\right]$ and, consequently, so
is its solution (see Appendix B). Specifically, the behavior of $P\left(x\right)$
in the intervals $x\in\co{0,\varepsilon}\cup\oc{1-\varepsilon,1}$ unfortunately remains
out of reach. The same difficulties were reported in Ref. \cite{buldyrev2001average},
which studies the closely related problem of computing the mean first-passage
time for the L{\'e}vy walk. Nevertheless, it is straightforward to show
that limiting the domain of $x$ to $\left[\varepsilon,1-\varepsilon\right]$ does not
introduce new corrections. Figure \ref{fig:J} compares $J$ of Eq.
(\ref{eq:P' and J}) to the asymptotic current $J_{0}=\varepsilon^{2/3}\mathcal{J}_{0}$
and to the exact current $J_{exact}$, obtained by numerically solving
Eq. (\ref{eq:Q of x}) for $Q\left(x\right)$ and substituting the
result into Eq. (\ref{eq:Derrida's current}) for $J_{exact}$.
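For completeness, a simple way to obtain $Q_{Num}\left(x\right)$ is direct fixed-point iteration of Eq. (\ref{eq:Q of x}) on a uniform grid: since the kernel carries total mass at most $1/2$, the iteration is a contraction. The sketch below (trapezoidal quadrature; the grid size and iteration count are illustrative and must resolve the cutoff scale $\varepsilon$) is one such implementation, not necessarily the scheme used in Ref. \cite{dhar2013exact}:
\begin{verbatim}
import numpy as np

def solve_Q(L, beta=5/3, v=1.0, t0=1.0, QL=0.0, QR=1.0,
            n_grid=2001, n_iter=500):
    """Fixed-point iteration of the self-consistent equation for Q(x)."""
    x = np.linspace(0.0, 1.0, n_grid)
    dx = x[1] - x[0]
    psi = lambda t: np.where(t < t0, 1.0, (t0 / np.maximum(t, t0)) ** beta)
    phi = lambda t: np.where(t < t0, 0.0,
                             beta * t0 ** beta
                             / np.maximum(t, t0) ** (beta + 1))
    src = 0.5 * (QL * psi(L * x / v) + QR * psi(L * (1.0 - x) / v))
    K = (L / (2.0 * v)) * phi(L * np.abs(x[:, None] - x[None, :]) / v)
    w = np.full(n_grid, dx)
    w[0] = w[-1] = dx / 2.0           # trapezoid weights
    Q = src.copy()
    for _ in range(n_iter):           # contraction: kernel mass <= 1/2
        Q = src + K @ (w * Q)
    return x, Q
\end{verbatim}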
\begin{figure}
\includegraphics[scale=0.65]{J_eps_1_over_n}
\caption{The current $J$ predicted in Eq. (\ref{eq:P' and J}) (blue circles)
is compared to the asymptotic current $J_{0}$ of Eq. (\ref{eq:J0 P0})
(orange squares) and to the exact current $J_{exact}$ (green diamonds)
which is obtained by numerically solving $Q\left(x\right)$ of Eq.
(\ref{eq:Q of x}) and substituting the solution into Eq. (\ref{eq:Derrida's current}).
Inset: The ratios $\frac{J}{J_{exact}}$ (blue circles) and $\frac{J_{0}}{J_{exact}}$
(orange squares) are compared. The parameters are $v=t_{0}=1$ and
$\protect\Delta Q=1\protect\rightarrow\protect\Delta P=5/2$. \label{fig:J}}
\end{figure}
\section{Other $\protect\beta$ Values and Higher Order Corrections \label{sec:Different beta}}
Let us finally discuss the application of this method to other values
of $\beta$ and higher-order corrections. For a general $\beta$ and arbitrary
order, this method faces two caveats: The first is due to the fact
that Eq. (\ref{eq:Fredholm 2nd}) for $J$ is a small-$\varepsilon$ approximation
of Eq. (\ref{eq:Derrida's current}) for $J_{exact}$, implying that
some higher-order corrections must have been neglected in its derivation.
The second follows from limitations on the source term\textquoteright s
behavior at the boundaries that are imposed by the Sonin formula.
These two caveats are explained next and additional details are provided
in Appendix B.
The appropriate ansatz for $P'\left(x\right)$ and $J$ for a L{\'e}vy
walk of order $\beta$ is
\begin{equation}
\begin{cases}
P'\left(x\right)=\sum_{m=0}^{M}\varepsilon^{\left(2-\beta\right)m}P_{m}'\left(x\right)+\ord{\varepsilon^{\left(2-\beta\right)\left(M+1\right)}}\\
J=\varepsilon^{\beta-1}\left[\sum_{m=0}^{M}\varepsilon^{\left(2-\beta\right)m}\mathcal{J}_{m}+\ord{\varepsilon^{\left(2-\beta\right)\left(M+1\right)}}\right]
\end{cases},\label{eq:General beta ansatz}
\end{equation}
where $M$ is the maximal expansion order beyond which the method
is no longer accurate. $M$ is the manifestation of the first caveat
mentioned above. From Eqs. (\ref{eq:General beta ansatz}) and (\ref{eq:first few equations}),
one learns that the hierarchy equation for $P_{M}'\left(x\right)$
is of $\ord{\varepsilon^{\beta-1+\left(2-\beta\right)M}}$. Thus, to determine $M$
we must account for the terms neglected in the derivation of Eq. (\ref{eq:Fredholm 2nd})
for $J$ (see Appendix A) and find the order at which they enter the
equation for $P_{M}'\left(x\right)$. Appendix A shows that the leading
$\ord{\varepsilon^{2\left(\beta-1\right)}}$ corrections in Eq. (\ref{eq:Fredholm 2nd})
set $M=\left\lceil \frac{\beta-1}{2-\beta}\right\rceil $.
It is important to stress that the hierarchy equations for $P'_{m}\left(x\right)$
are perfectly accurate for $0\le m<M$. Moreover, if only the anomalous
current $J$ is of interest, one can significantly increase $M$ by
working directly with Eq. (\ref{eq:J of Q}). Then the $\ord{\varepsilon^{2\left(\beta-1\right)}}$
corrections are replaced by $\ord{\varepsilon^{3}}$ corrections and $M$ increases
to $M=\left\lceil \frac{4-\beta}{2-\beta}\right\rceil $.
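For example, for $\beta=5/3$ Eq. (\ref{eq:Fredholm 2nd}) gives $M=\left\lceil \frac{\beta-1}{2-\beta}\right\rceil =\left\lceil 2\right\rceil =2$, while working directly with Eq. (\ref{eq:J of Q}) gives $M=\left\lceil \frac{4-\beta}{2-\beta}\right\rceil =\left\lceil 7\right\rceil =7$.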
The second caveat is intrinsic to the Sonin formula. As explained
in Ref.~\cite{samko1993fractional}, the Sonin formula applies only
when the source term's boundary singularity is weaker than the kernel's
singularity. Depending on the value of $\beta$, some of the hierarchy
equations might not satisfy this requirement, even for $m<M$. Equation
(\ref{eq:first few equations}) shows that the source term in the
hierarchy equation for $P_{m}'\left(x\right)$ is proportional to
$P_{m-1}'\left(x\right)$. In Appendix B, the boundary behavior of
$P_{m-1}'\left(x\right)$ is argued to be of the form $P_{m-1}'\left(x\ra0\right)\propto x^{\left(2m-1\right)\left(\frac{\beta-2}{2}\right)}$
for general $m$ and $\beta$, with similar behavior for $x\ra1$. Comparing
this singularity to the kernel's singularity $\beta-1$ restricts the
Sonin formula to $m<\frac{\beta}{2\left(2-\beta\right)}$.
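For example, for $\beta=5/3$ this condition reads $m<\frac{\beta}{2\left(2-\beta\right)}=\frac{5}{2}$, so the Sonin formula can be applied up to the second correction $P_{2}'\left(x\right)$; the leading correction computed in Sec. \ref{sec:The-Leading-Correction} is safely within this range.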
Nevertheless, since all hierarchy equations satisfying $0\le m<M$
are precise, hierarchy equations for $P_{m}'\left(x\right)$ with
$\frac{\beta}{2\left(2-\beta\right)}<m<M$ can still be solved by any other
method, be it analytical or numerical, and yield the correct solutions.
\section{Conclusions \label{sec:CONCLUSIONS}}
In this paper, the anomalous transport properties of a 1D L{\'e}vy walk
of order $\beta$ are studied on a finite interval of size $L$ under
nonequilibrium settings. Extending the work of Ref. \cite{dhar2013exact},
which related the anomalous walker current $J$ to the density gradient
$P'\left(x\right)$ for asymptotically large $L$, a more general
integral equation which also captures finite-size corrections is derived.
A perturbative method is presented for constructing an order-by-order
solution of this equation. The method is explicitly demonstrated by
computing the leading correction to the asymptotic behavior for $\beta=5/3$,
and its results are shown to be in excellent agreement with the numerical
solution of the exact equations.
Remarkably, many other physical problems are described by similar
integral equations \cite{kirkwood1948intrinsic,douglas1989surface,lazopoulos2006non,carpinteri2011fractional,Miron2019},
bringing hope that the method presented here could be used in a variety
of different fields. In the context of anomalous transport, it is
interesting to compare the results computed here to simulations and
experiments. This could test if L{\'e}vy walks are indeed a reliable model
for anomalous transport, even beyond the asymptotic limit. Applying
this method to study additional L{\'e}vy walk properties, as well as other
physical problems, is an equally interesting and exciting prospect.
\section{Acknowledgments \label{sec:ACKNOWLEDGMENTS}}
I would like to thank my advisor D. Mukamel for his ongoing encouragement,
help, and support. I also thank O. Raz and V. V. Prasad for critical
reading of this manuscript as well as G. Falkovich for helpful comments.
In addition, previous projects with Anupam Kundu and Julien Cividini
have significantly influenced this study, and their collaboration
is greatly appreciated. This work was supported by a research grant
from the Center of Scientific Excellence at the Weizmann Institute
of Science.
\section*{Appendix A - The Derivation of Eq. (\ref{eq:Fredholm 2nd}) }
Equation (\ref{eq:Derrida's current}) for $J_{exact}$ was derived
in Ref. \cite{dhar2013exact} and serves as the basis for the derivation
of Eq. (\ref{eq:Fredholm 2nd}) for $J$ and mainly differs in the
treatment of the finite-size corrections. The key steps of the derivation
are outlined next.
Using $\psi\left(t\right)$ of Eq. (\ref{eq:psi beta}), the second
line of Eq. (\ref{eq:Derrida's current}) for $J_{exact}$ becomes
\[
\frac{L}{2}\int_{0}^{1}\mathbf{d} y\text{ }\sign{x-y}\psi\left(\frac{L\left|x-y\right|}{v}\right)Q\left(y\right)
\]
\[
=\frac{L}{2}\left(\int_{x-\varepsilon}^{x}\mathbf{d} y\text{ }Q\left(y\right)-\int_{x}^{x+\varepsilon}\mathbf{d} y\text{ }Q\left(y\right)\right)
\]
\begin{equation}
+\frac{L\varepsilon^{\beta}}{2}\left(\int_{0}^{x-\varepsilon}\frac{\mathbf{d} yQ\left(y\right)}{\left(x-y\right)^{\beta}}-\int_{x+\varepsilon}^{1}\frac{\mathbf{d} yQ\left(y\right)}{\left(y-x\right)^{\beta}}\right).\label{eq:J der first step}
\end{equation}
Expanding the second line of Eq. (\ref{eq:J der first step}) in small
$\varepsilon$ yields
\begin{equation}
-\frac{t_{0}v\varepsilon}{2}Q'\left(x\right)+\ord{\varepsilon^{3}}\label{eq:first line}
\end{equation}
and integrating the third line by parts yields
\[
\frac{t_{0}v\varepsilon}{2-\beta}Q'\left(x\right)+\frac{t_{0}v\varepsilon^{\beta-1}}{2\left(\beta-1\right)}
\]
\begin{equation}
\times\left(\frac{Q\left(1\right)}{\left(1-x\right)^{\beta-1}}-\frac{Q\left(0\right)}{x^{\beta-1}}-\int_{0}^{1}\frac{\mathbf{d} yQ'\left(y\right)}{\left|x-y\right|^{\beta-1}}\right)+\ord{\varepsilon^{3}},\label{eq:second line}
\end{equation}
where $Q\left(0\right)=Q_{L}$ and $Q\left(1\right)=Q_{R}$ follow
from Eq. (\ref{eq:baths}). Collecting these terms back into $J_{exact}$
gives
\begin{equation}
J=\frac{t_{0}v\beta\varepsilon Q'\left(x\right)}{2\left(2-\beta\right)}-\frac{t_{0}v\varepsilon^{\beta-1}}{2\left(\beta-1\right)}\int_{0}^{1}\frac{\mathbf{d} yQ'\left(y\right)}{\left|x-y\right|^{\beta-1}}+\ord{\varepsilon^{3}}.\label{eq:J of Q}
\end{equation}
To obtain Eq. (\ref{eq:Fredholm 2nd}), which relates $J$ and $P'\left(x\right)$,
one uses the relation $P\left(x\right)=\frac{\beta t_{0}}{\beta-1}Q\left(x\right)+\ord{\varepsilon^{\beta-1}}$
in Eq. (\ref{eq:Q P relation}), which inevitably introduces corrections
of $\ord{\varepsilon^{2\left(\beta-1\right)}}$ into Eq. (\ref{eq:J of Q}).
Two important comments are in order: The first is that Eq. (\ref{eq:Fredholm 2nd})
is valid only for $\beta>\frac{3}{2}$ due to the neglected $\ord{\varepsilon^{2\left(\beta-1\right)}}$
corrections. The second is that the manipulations involved in going
from Eq. (\ref{eq:J der first step}) to Eq. (\ref{eq:Fredholm 2nd})
are valid only for $x\in\left[\varepsilon,1-\varepsilon\right]$. This implies that
the density profiles in Eq. (\ref{eq:P' and J}) are not accurate
for $x\in\co{0,\varepsilon}\cup\oc{1-\varepsilon,1}$.
\section*{Appendix B - The Sonin Formula and Its Solubility Condition }
\subsubsection{The Sonin Formula}
The Sonin formula provides the formal solution to a class of WSFIEs
of the first kind. Specifically, it can be used to solve equations
of the form
\begin{equation}
h\left(x\right)=\int_{0}^{1}\mathbf{d} y\frac{\varphi\left(y\right)}{\left|x-y\right|^{\beta-1}}\label{eq:A-S-Fredholm first kind}
\end{equation}
for $\varphi\left(x\right)$ where $1<\beta<2$. For the purpose of this
study, it is sufficient to only consider source terms $h\left(x\right)$
which are symmetric under reflections $x\ra1-x$ and are of the form
\begin{equation}
h\left(x\right)=\frac{h^{*}\left(x\right)}{\left(x\left(1-x\right)\right)^{\left(\beta-1\right)-\gamma}},\label{eq:A-S-h}
\end{equation}
where $h^{*}\left(x\right)$ is a smooth function of $x$ and $0<\gamma<\beta-1$.
The latter condition means that the Sonin formula applies only when
the source term\textquoteright s boundary singularity is weaker than
the kernel\textquoteright s singularity. For $\gamma$ and $h\left(x\right)$
satisfying these conditions, the Sonin formula yields the solution
\begin{equation}
\varphi\left(x\right)=\frac{\mathcal{B}}{x^{\frac{2-\beta}{2}}}\der{}x\int_{x}^{1}\frac{\mathbf{d} tt^{2-\beta}}{\left(t-x\right)^{\frac{2-\beta}{2}}}\der{}t\int_{0}^{t}\frac{\mathbf{d} qq^{\frac{\beta-2}{2}}h\left(q\right)}{\left(t-q\right)^{\frac{2-\beta}{2}}}\label{eq:A-S-Sonin sol}
\end{equation}
where $\mathcal{B}=-\frac{\sin\left[\frac{\pi\beta}{2}\right]\Gamma\left[\beta-1\right]}{\pi\Gamma\left[\frac{\beta}{2}\right]^{2}}$
and $\Gamma\left[x\right]$ is the gamma function. It is important to
stress that the Sonin formula applies to far more general WSFIE's
of the first kind. An extensive account and further details can be
found in Refs. \cite{samko1993fractional,buldyrev2001average,dhar2013exact}.
\subsubsection{Solubility Condition}
Next, the implications of the requirements on the boundary singularity
of $h\left(x\right)$ of Eq. (\ref{eq:A-S-h}) are discussed. Consider
the hierarchy of integral equations obtained by substituting the ansatz
of Eq. (\ref{eq:General beta ansatz}) into Eq. (\ref{eq:Fredholm 2nd}).
The first equation is
\begin{equation}
-\frac{2\beta\mathcal{J}_{0}}{v}=\int_{0}^{1}\mathbf{d} y\frac{P_{0}'\left(y\right)}{\left|x-y\right|^{\beta-1}}\label{eq:first eqn}
\end{equation}
and its constant source term trivially satisfies the requirements
in Eq. (\ref{eq:A-S-h}). Imposing the boundary conditions in Eq.
(\ref{eq:baths}) provides the asymptotic solution
\begin{equation}
P_{0}'\left(x\right)=\frac{\Gamma\left[\beta\right]\Delta P}{\Gamma\left[\frac{\beta}{2}\right]^{2}\left(x\left(1-x\right)\right)^{\frac{2-\beta}{2}}}.\label{eq:first eqn solution}
\end{equation}
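As a simple numerical consistency check (not needed for the derivation), one can verify that inserting this solution into the right-hand side of Eq. (\ref{eq:first eqn}) indeed yields an $x$-independent constant. A sketch for $\beta=5/3$ and $\Delta P=1$; scipy's adaptive quadrature handles the weak, integrable singularities, possibly with accuracy warnings:
\begin{verbatim}
from math import gamma
from scipy.integrate import quad

beta = 5 / 3
c = gamma(beta) / gamma(beta / 2) ** 2          # prefactor for dP = 1
P0p = lambda y: c * (y * (1.0 - y)) ** ((beta - 2) / 2)

def rhs(x):
    f = lambda y: P0p(y) / abs(x - y) ** (beta - 1)
    # split at y = x so the weak singularity sits at an endpoint
    return quad(f, 0.0, x)[0] + quad(f, x, 1.0)[0]

print([round(rhs(x), 3) for x in (0.2, 0.5, 0.8)])
# expected constant: -2*beta*J0/v = 2*beta*a/b ~ 4.45
\end{verbatim}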
The next equation, now for the leading correction $P_{1}'\left(x\right)$,
is
\begin{equation}
\beta\left(\frac{\beta-1}{2-\beta}P_{0}'\left(x\right)-\frac{2}{v}\mathcal{J}_{1}\right)=\int_{0}^{1}\mathbf{d} y\frac{P_{1}'\left(y\right)}{\left|x-y\right|^{\beta-1}}.\label{eq:A-S-P_1'(x) Eq}
\end{equation}
The only nonconstant source term in this equation is $\propto P_{0}'\left(x\right)$.
The range of $\beta$ for which its singularity is weaker than that of
the kernel is
\begin{equation}
\beta>\frac{4}{3}.\label{eq:A-S- P_1'(x) beta range}
\end{equation}
As such, the leading correction $P_{1}'\left(x\right)$ can be computed
from the Sonin formula for any $\beta$ in this range.
The general equation for $P_{m}'\left(x\right)$,
\begin{equation}
c_{1}P_{m-1}'\left(x\right)-c_{2}\mathcal{J}_{m}=\int_{0}^{1}\mathbf{d} y\frac{P_{m}'\left(y\right)}{\left|x-y\right|^{\beta-1}},\label{eq:general m eqn}
\end{equation}
with $c_{1}=\frac{2\beta}{v}\frac{v\left(\beta-1\right)}{2\left(2-\beta\right)}$
and $c_{2}=\frac{2\beta}{v}$, is used next to find the range of allowed
$\beta$ at any order $m$. To this end, let us take the leading singular
behavior of the source term $P_{m-1}'\left(x\right)$ to be $\propto\left(x\left(1-x\right)\right)^{-\gamma}$.
For $P_{m-1}'\left(x\right)$ to satisfy the requirements of Eq. (\ref{eq:A-S-h}),
$\gamma$ can only take values in $0<\gamma<\beta-1$. The solution of this equation
via the Sonin formula is
\begin{equation}
P_{m}'\left(x\right)\propto-x^{\frac{\beta-2}{2}}\der{}x\int_{x}^{1}\mathbf{d} t\frac{t^{2-\beta}Y\left(t\right)}{\left(t-x\right)^{\frac{2-\beta}{2}}}\label{eq:A-S-P_m'(x)}
\end{equation}
where
\begin{equation}
Y\left(t\right)=\der{}t\int_{0}^{t}\mathbf{d} q\frac{\left(q\left(1-q\right)\right)^{-\gamma}}{\left(q\left(t-q\right)\right)^{\frac{2-\beta}{2}}}\label{eq:Y(t)}
\end{equation}
and the term $\propto\mathcal{J}_{m}$ was neglected since its boundary
singularity is trivially weaker than that of $\left(x\left(1-x\right)\right)^{-\gamma}$.
To continue, note that, although not manifest in Eq. (\ref{eq:Fredholm 2nd}),
the hierarchy ansatz reveals the symmetry of $P'\left(x\right)$ under
reflections $x\ra1-x$. To see this, note that the source term in
Eq. (\ref{eq:first eqn}) for $P_{0}'\left(x\right)$ is independent
of $x$. It is easy to show that this equation is symmetric under
$x\ra1-x$ and so is its solution. Next, since $P_{0}'\left(x\right)$
is the only nonconstant source term in Eq. (\ref{eq:A-S-P_1'(x) Eq})
for $P_{1}'\left(x\right)$, one can show that $P_{1}'\left(x\right)$
must also be symmetric under this reflection. By induction, one can show that
this symmetry propagates throughout the entire hierarchy. It is thus
sufficient to consider the behavior of $P_{m}'\left(x\right)$ for
$x\ra0$. One can then use Eq. (\ref{eq:A-S-P_m'(x)}) to show that
the leading boundary singularity of $P_{m}'\left(x\right)$ is
\begin{equation}
P_{m}'\left(x\ra0\right)\propto x^{\beta-2-\gamma}.\label{eq:S' singularity}
\end{equation}
By comparing Eq. (\ref{eq:S' singularity}) to the boundary singularity
for the first few values of $m$, the range of allowed $\beta$ for any
order $m$ can be obtained: The boundary singularity of $P_{1}'\left(x\right)$,
whose source term is $\propto\left(x\left(1-x\right)\right)^{\frac{\beta-2}{2}}$,
is found by setting $\gamma=\frac{2-\beta}{2}$ and yields
\begin{equation}
P_{1}'\left(x\ra0\right)\propto x^{-3\left(\frac{2-\beta}{2}\right)}.\label{eq:P_1'(x)}
\end{equation}
Next, the boundary singularity of $P_{2}'\left(x\right)$, whose source
term is $\propto\left(x\left(1-x\right)\right)^{3\left(\frac{\beta-2}{2}\right)}$,
is found by setting $\gamma=3\left(\frac{2-\beta}{2}\right)$ and yields
\begin{equation}
P_{2}'\left(x\ra0\right)\propto x^{-5\left(\frac{2-\beta}{2}\right)}.\label{eq:P_2''(x)}
\end{equation}
Repeating this process, one finds that the boundary singularity for
general $m$ is
\begin{equation}
P_{m}'\left(x\ra0\right)\propto x^{-\left(2m+1\right)\left(\frac{2-\beta}{2}\right)}.\label{eq:P_m''(x)}
\end{equation}
As such, the highest order correction $P_{m}'\left(x\right)$ which
can be computed by the Sonin formula, for a given $\beta$, is obtained
by comparing the singularity of the source term $P_{m-1}'\left(x\ra0\right)\propto x^{-\left(2m-1\right)\left(\frac{2-\beta}{2}\right)}$
in the equation for $P_{m}'\left(x\right)$ to the kernel singularity,
providing the relation
\begin{equation}
m<\frac{\beta}{2\left(2-\beta\right)}.\label{eq:A-S-order m}
\end{equation}
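For concreteness, consider a worked example with assumed values of $\beta$: for $\beta=\frac{9}{5}$ the bound reads $m<\frac{9/5}{2\left(2-9/5\right)}=\frac{9}{2}$, so the corrections up to $P_{4}'\left(x\right)$ are accessible via the Sonin formula, whereas for $\beta=\frac{3}{2}$ one finds $m<\frac{3}{2}$, so only the leading correction $P_{1}'\left(x\right)$ can be computed, consistent with Eq. (\ref{eq:A-S- P_1'(x) beta range}).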
\bibliographystyle{unsrt}
\section{Introduction}
As early as the 1930s, astronomical observations hinted at the existence of an unknown form of matter. In the last decades the evidence for the existence of this type of matter, dubbed dark matter (DM) \cite{Zwicky:1933gu}, has become abundant and overwhelming \cite{Tanabashi:2018oca}.
We now know that it makes up about 80$\%$ of the universe matter-content \cite{Aghanim:2018eyx} pointing out to the existence of one or several new particles that are not part of the Standard Model (SM) of particle physics.
Among the DM candidates \cite{Feng:2010gw}, weakly interacting massive particles (WIMPs) \cite{Steigman:1984ac} are some of the most well motivated, since the thermal annihilation cross section needed to account for the observed DM relic density is obtained for DM particles with electroweak interactions and masses.
That is, the WIMPs lie naturally at what is expected to be the scale of physics beyond the Standard Model (BSM).
Moreover, their abundance \cite{Steigman:2012nb} is governed by the generic mechanism of chemical freeze-out, which has also played a role in the abundance of light elements as well as in the cosmic microwave background radiation \cite{Kolb:1990vq}, both of which are in excellent agreement with current observations.
It is worth noting, however, that the WIMP paradigm is not free of challenges, both at theoretical and experimental levels \cite{Baer:2014eja}.
For instance, it is not always a given that a WIMP candidate will automatically account for the total DM relic abundance, which, in some cases, implies the need for some degree of fine-tuning in order for the models to remain viable\footnote{In particular, one of the most well-studied WIMPs, the neutralino, tends to yield too much relic abundance if it is bino-like, while its relic abundance tends to be suppressed (as long as its mass is below 2.4 TeV) if it is mainly wino \cite{Jungman:1995df,Bertone:2004pz}.}.
Additionally, despite large efforts to find evidence of WIMPs through production at colliders, elastic scattering with heavy nuclei while passing through the Earth, or observation of the self-annihilation byproducts in regions with high DM density, no conclusive evidence has been reported.
The null results have led to more and more constraints on the parameter space of popular WIMP models \cite{Bertone:2010at,Baer:2014eja,Escudero:2016gzx,Arcadi:2017kky}.
Among the different approaches to overcome the challenges on WIMP models, we recall those that depart completely or partially from standard cosmology scenario \cite{Kamionkowski:1990ni,Giudice:2000ex,Gelmini:2006pw,Acharya:2009zt} or consider WIMPs to make up only a fraction of the total DM of the Universe \cite{Zurek:2008qg,Profumo:2009tb}.
In standard cosmology, the WIMP relic density is calculated assuming a set of conditions during the epoch before Big Bang Nucleosynthesis (BBN), but there are no data or indirect evidence supporting such assumptions.
For instance, the reheating temperature is an unknown quantity which could be as low as $\sim1$ MeV, the temperature at which nucleosynthesis begins. If this temperature is low, it may have profound implications for the DM relic density, which may be suppressed or enhanced compared to the standard scenario \cite{Giudice:2000ex,Gelmini:2006pw}.
More recently, several works have addressed the problem of DM abundance under nonstandard cosmologies \cite{Aparicio:2016qqb,DEramo:2017gpl,Hamdan:2017psw,Visinelli:2017qga,DEramo:2017ecx,Bernal:2018ins,Bernal:2018kcw,Hardy:2018bph,Drees:2017iod,Arbey:2018uho}.
For instance, in Refs.~\cite{Bernal:2018kcw,Hardy:2018bph} the case for scalar DM was addressed, while in Ref.~\cite{Drees:2017iod} a generic calculation for the DM abundance with a late decaying scalar was considered, and in Ref.~\cite{Arbey:2018uho} the relic abundance is considered as a probe of the conditions of the Universe pre-BBN.
Interestingly enough, such deviations from the standard cosmology do not affect the prospects regarding DM detection and BBN.
On the other hand, although most proposals contain one single WIMP, there is no reason to consider that the DM of the Universe is composed by just one type of DM particles.
The total DM relic abundance could be a result of the presence of several DM particles, a scenario referred to as multicomponent DM.
In such scenarios, the DM of the Universe is set by the WIMP and other DM candidate, such as QCD axions \cite{Baer:2011hx,Bae:2013hma,Dasgupta:2013cwa,Alves:2016bib,Ma:2017zyb} or even another WIMP ~\cite{Zurek:2008qg,Profumo:2009tb,Esch:2014jpa,Arcadi:2016kmk,Bernal:2018aon}.
In light of this, it makes sense to study models where the relic abundance is not imposed as a constraint, either because in nonstandard cosmology it is possible to fulfil this requirement when the right combination of parameters is achieved, or because it is achieved by the interplay of two separate dark sectors.
For the aforementioned reasons, we consider the doublet-triplet fermionic model (DTF) \cite{Dedes:2014hga,Abe:2014gua,Freitas:2015hsa,Lopez-Honorez:2017ora}, which is one of the ultraviolet realizations of the fermionic Higgs portal \cite{Patt:2006fw}, and is part of the minimal setup expected when the SM is extended by new physics which is, to some extent, related to lepton and baryon number conservation~\cite{Arbelaez:2015ila,Arkani-Hamed:2015vfh}.
In the DTF, the SM is enlarged with two colorless fermions, a vectorlike $SU(2)_L$ doublet and a Majorana $SU(2)_L$ triplet, both being odd under an exact $Z_2$ symmetry\footnote{This symmetry can be recognized as a remnant symmetry at the end of the symmetry-breaking chain of the SO(10) group down to the SM \cite{Arbelaez:2015ila}.} which is imposed in order to render the lightest $Z_2$-odd particle stable\footnote{This particle setup has also been considered in studies associated with strengthening the electroweak phase transition \cite{Carena:2004ha}, precision tests at future colliders \cite{Arkani-Hamed:2015vfh,Bertuzzo:2017wam,Xiang:2017yfs}, electroweak precision tests \cite{Cai:2016sjz}, generation of neutrino masses \cite{Betancur:2017dhy}, fake split-supersymmetry \cite{Benakli:2013msa}, and UV completions of doublet fermion DM \cite{Dedes:2016odh}.}.
The DM candidate turns out to be a mixture, generated by the interaction with the Higgs boson, between the neutral component of the triplet and the neutral components of the doublet vectorlike fermion.
The viable dark matter regions comprise a DM mass around the electroweak scale and above 1 TeV \cite{Dedes:2014hga,Abe:2014gua,Freitas:2015hsa,Betancur:2017dhy}.
The electroweak DM region arises in the scheme of the custodial limit (the new Yukawa couplings are equal) when the DM candidate is pure doublet. However, the contribution of the new charged fermions to $h \rightarrow \gamma \gamma$ generates a considerable suppression of the Higgs diphoton decay, making such a scheme severely constrained by the interplay between the DM relic density constraint and the LHC measurement of the Higgs diphoton decay rate \cite{Dedes:2014hga,Abe:2014gua,Freitas:2015hsa,Betancur:2017dhy}.
In this work we will study the custodial limit of the DTF within either a nonstandard cosmology scenario, {\it i.e.}, we depart from the standard relic density calculation, or a multicomponent DM setup but assuming that the WIMP relic density is obtained through the thermal freeze out.
We will establish the current constraints on the DTF coming from collider searches and the Higgs diphoton decay, without imposing the DM relic density constraint.
Then, we will go on to determine the restrictions resulting from direct detection (DD) experiments and indirect detection (ID) with gamma rays, both in the framework of nonstandard cosmology and the DM candidate as part of a multicomponent system.
The rest of the paper is organized as follows. In Sec. \ref{themodel} we present the doublet-triplet fermion model with its mass spectrum and discuss the allowed interactions. In Sec. \ref{collider} we present the model restrictions arising from electroweak production of charginos and neutralinos at colliders and from precision measurements of the Higgs diphoton decay rate. In Sec. \ref{NONSC} we study the constraints arising from DD and ID with gamma rays for the case of a nonstandard cosmology, whereas in Sec. \ref{multiDM} the same analysis is made for the case where the DM candidate is part of a multicomponent system. Finally, we conclude in Sec.~\ref{sec:conc}.
\section{The Model}\label{themodel}
Doublet-triplet fermion DM consists of extending the SM with an $SU(2)_L$ vectorlike doublet with $Y=-1$ and a Majorana $SU(2)_L$ triplet, both odd under an exact $Z_2$ symmetry.
Expressing the new $SU(2)_L$ multiplets as
\begin{align}\label{eq:fermioncontent}
\psi=\left( \begin{array}{ccc}
\psi^0 \\
\psi^- \end{array} \right),\hspace{1cm}
\Sigma_L=\left( \begin{array}{ccc}
\Sigma_L^0/\sqrt{2} & \Sigma_L^+\\
\Sigma_L^{-} & -\Sigma_L^0/\sqrt{2} \end{array} \right),
\end{align}
the most general renormalizable Lagrangian, invariant under the $SU(2)_L \times U(1)_Y \times Z_2$ symmetry can be written as
\begin{align}
\mathcal{L}&=\mathcal{L}_{\rm{SM}}+\mathcal{L}_{\rm{F}}+\mathcal{L}_{\rm{I}}-\mathcal{V}_{\rm{SM}}.
\end{align}
Here $\mathcal{L}_{\rm{SM}}$ is the SM Lagrangian and $\mathcal{V}_{\rm{SM}}$ is the scalar potential of the Higgs doublet $H=(0, (h+v)/\sqrt{2})^{\text{T}}$, with $h$ being the Higgs boson and $v=246$ GeV.
$\mathcal{L}_{\rm{F}}$ refers to the kinetic and mass terms of the new fermions,
\begin{align}
\mathcal{L}_F&=\bar{\psi} i \cancel{D}\psi-M_\psi\bar{\psi}\psi+{\rm Tr}[\bar{\Sigma}_L i\cancel{D} \Sigma_L]-\frac{1}{2}{\rm Tr}(\bar{\Sigma}_L^cM_\Sigma\Sigma_L+\mbox{h.c.}),
\end{align}
and $\mathcal{L}_{\rm{I}}$ contains the Yukawa interactions of the new fermions with the Higgs doublet,
\begin{align}\label{eq:LI}
\mathcal{L}_{\rm{I}}= -y_1 H^\dagger\bar{\Sigma}_L^c\epsilon \psi_R^c + y_2 \bar{\psi}_L^c \epsilon \Sigma_L H + {\rm h.c.},
\end{align}
where $y_1$ and $y_2$ are new Yukawa couplings that generate the mixing between the new fermion fields.
Note that the $Z_2$ symmetry not only guarantees the stability of the lightest $Z_2$-odd particle (the DM particle) but also avoids Higgs-mediated flavor-changing neutral currents at tree level through the mixing terms $\bar{\psi}He_R$ and $\overline{\Sigma}^c_L\tilde{H}^\dagger L$. Therefore, lepton flavor violating processes such as $\mu\to e\gamma$ are forbidden.
The particle spectrum contains three new Majorana mass eigenstates $\chi_{\alpha}^0$ ($\alpha=a,b,c$) and two new charged fermions $\chi_{a,b}^{\pm}$\,.
The mass matrix for the neutral fermions is \cite{Dedes:2014hga,Betancur:2017dhy} (in the basis $\Xi^0=(\Sigma_L^0, \psi^0_L, \psi^{0c}_R)^T$),
\begin{align}
\label{eq:MchiN}
\mathbf{M}_{\Xi^0}=\begin{pmatrix}
M_\Sigma &\frac{1}{\sqrt{2}}yv\cos\beta& \frac{1}{\sqrt{2}}yv\sin\beta\\
\frac{1}{\sqrt{2}}yv\cos\beta & 0 & M_\psi\\
\frac{1}{\sqrt{2}}yv\sin\beta& M_\psi & 0
\end{pmatrix},
\end{align}
with $y=\sqrt{(y_1^2 + y_2^2)/2}$ and $\tan\beta=y_2/y_1$.
On the other hand, the charged fermions mass matrix reads
\begin{align}
\label{eq:MchiC}
\hspace{1cm}
\mathbf{M}_{\Xi^\pm}=\begin{pmatrix}
M_\Sigma & yv\cos\beta \\
yv\sin\beta & M_\psi \\
\end{pmatrix},
\end{align}
which is diagonalized by a rotation of the gauge eigenstates into the physical states defined via
\begin{align}
\left( \begin{array}{cc}
\Sigma^+ \\
\psi^+
\end{array} \right)
= %
\left( \begin{array}{cc}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{array} \right)
\left( \begin{array}{cc}
\chi_a^+\\
\chi_b^+
\end{array} \right), \hspace{2cm}
\sin(2 \theta)= \frac{\sqrt{2} \ y \ v}{m_{\chi_a^+}-m_{\chi_b^+}}.
\end{align}
Either one of the mass eigenstates $\chi_a^{\pm}$ and $\chi_b^{\pm}$ could be the lightest charged $Z_2$-odd fermion.
From now on, we will assume that $\chi_1^{\pm}$ is the lightest between $\chi_a^{\pm}$ and $\chi_b^{\pm}$, while $\chi_2^{\pm}$ is the heaviest.
Thus, for the masses $m_{\chi_1^{\pm}}$ and $m_{\chi_2^{\pm}}$ a mass ordering is implied.
On the other hand, these mass matrices evoke the neutralino and chargino mass matrices (in the wino-higgsino limit) of the minimal supersymmetric standard model \cite{Martin:1997ns}, a case realized when $y=g/\sqrt{2}$, which has been exploited in studies such as \cite{Carena:2004ha,Arkani-Hamed:2015vfh,Benakli:2013msa}.
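As an illustration of the spectrum, the following minimal Python sketch (all input values below are assumptions chosen for illustration, not benchmark points of the paper) diagonalizes the neutral and charged mass matrices numerically; for $\tan\beta=1$ the neutral spectrum contains the decoupled eigenvalue $-M_\psi$ derived in the next subsection:
\begin{verbatim}
import numpy as np

y, v, tb = 1.0, 246.0, 1.0      # assumed Yukawa, vev (GeV), tan(beta)
MS, MP = -300.0, 150.0          # assumed M_Sigma, M_psi (GeV)

cb, sb = 1.0/np.sqrt(1.0 + tb**2), tb/np.sqrt(1.0 + tb**2)
M0 = np.array([[MS,                y*v*cb/np.sqrt(2), y*v*sb/np.sqrt(2)],
               [y*v*cb/np.sqrt(2), 0.0,               MP],
               [y*v*sb/np.sqrt(2), MP,                0.0]])
Mc = np.array([[MS,     y*v*cb],
               [y*v*sb, MP]])

print(np.linalg.eigvalsh(M0))               # contains -M_psi when tb = 1
print(np.linalg.svd(Mc, compute_uv=False))  # charged-fermion masses |m|
\end{verbatim}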
\subsection{The custodial limit}
An interesting scheme arises when $\tan\beta=1$ (the custodial limit), with several consequences.
First, the interaction Lagrangian $\mathcal{L}_{\rm{I}}$ becomes invariant under an $SU(2)_R$ symmetry which protects the $T$ and $U$ parameters \cite{Cai:2016sjz, Dedes:2014hga}, thus making the model free of the constraints coming from electroweak precision tests.
Second, all diagonal tree-level couplings of $\chi_\alpha^0$ to the $Z$ boson are zero.
And third, the neutral mass matrix may be written as\footnote{ This can be done via a similarity transformation $\mathbf{M}'_{\Xi^0}=O^\dagger\mathbf{M}_{\Xi^0}O$, with
\begin{align}
O=\begin{pmatrix}
1 & 0 & 0\\
0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
0 &\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\end{pmatrix}.
\end{align}.}
\begin{align}\label{eq:Mchi2}
\mathbf{M}'_{\Xi^0}=\begin{pmatrix}
M_\Sigma &yv& 0\\
yv & M_\psi & 0\\
0 & 0 & -M_\psi \\
\end{pmatrix},
\end{align}
which shows that there is a decoupled eigenvalue $M_\psi$ (the mass is not obtained from the electroweak symmetry breaking), meaning that one of the mass eigenstates (which will be labelled as $\chi_1^0$ with mass $m_{\chi_1^0}=-M_\psi$) is an equal mixture of the doublets with no triplet component.
Moreover, the coupling of $\chi_1^0$ to the Higgs $(g_{h\chi_1^0\chi_1^0 })$ is also zero at tree level.
It follows that, for the custodial limit scheme with $\chi_1^0$ as the DM candidate ($\chi_1^0$ is the lightest between $\chi_a^0, \chi_b^0$ and $\chi_c^0$)\footnote{In this case we have that the heavier neutral fermions are mass-degenerate with the charged fermions ($m_{\chi_2^0}= m_{\chi_1^\pm}$ and $m_{\chi_3^0}= m_{\chi_2^\pm}$), so $|m_{\chi_1^0}| < |m_{\chi_2^0}| < |m_{\chi_3^0}|$.}, the above features lead to the following important implications for the phenomenology of the model.
\begin{itemize}
\item DM ($\chi_1^0$) annihilates into weak gauge bosons through $t$- and $u$- channels via exchange of heavier $Z_2$ fermions, and so, the DM annihilation cross section is suppressed.
Furthermore, for large Yukawa couplings\footnote{This is the reason why such a scheme cannot be obtained in the MSSM.} $(0.5\lesssim y\lesssim3)$, the splitting between the DM candidate and the heavier new fermions is large, which further suppresses the annihilation cross section, thus allowing the DM candidate to saturate the relic abundance for masses as low as the electroweak scale ($80\lesssim m_{\chi_1^0}\lesssim200$ GeV).
\item Within the region where the correct DM relic density is obtained, there are two different allowed triplet mass regions for any value of the pair $y$-$m_{\chi_1^0}$, one where $M_{\Sigma}$ is always negative ($M_{\Sigma}<-m_{\chi_1^0}$) and one where it can be either positive or negative but larger than $-m_{\chi_1^0}$.
\item Since the interactions between $\chi_1^0$ and both $h$ and $Z$ vanish at tree level, there are no tree-level contributions to direct detection, so the leading contributions appear at one-loop level (more on this in Sec. \ref{DD}) for both spin-independent (SI) and spin-dependent cross sections. These blind spots of the model have been studied in works such as \cite{Dedes:2014hga,Freitas:2015hsa,Betancur:2017dhy}.
\item Though the custodial limit scheme presents interesting possibilities, it is also severely constrained by the Higgs diphoton decay measurement.
This is due to the presence of new charged fermions that couple to the Higgs boson and generate an effective $h\gamma\gamma$ coupling through a triangular diagram similar to the top-quark contribution (though with a larger electric charge).
The decay ratio with respect to the SM rate can be parametrized as
\begin{align}
\label{eq:higgsgammagamma}
&R_{\gamma\gamma}= \left|1 + \frac{1}{A_{SM}}\left[ \frac{y^2v^2}{m_{\chi_2^\pm}-m_{\chi_1^\pm}}\left(\frac{A_F(\tau_{\chi_2^\pm})}{m_{\chi_{2}^{\pm}}}-\frac{A_F(\tau_{\chi_1^\pm})}{m_{\chi_{1}^{\pm}}}\right)\right] \right|^2,
\end{align}
where $A_{{\rm SM}}=-6.5$ includes the contribution from all charged SM particles such as gauge bosons and fermions, and the loop factor is $A_F(\tau)= 2 \tau ^{-2}[\tau + (\tau-1)\arcsin^2{\sqrt{\tau}}]$ for $\tau\leq1$ where $\tau_X=m^2_{h}/(4 m^2_{X})$.
As can be seen from Eq.~(\ref{eq:higgsgammagamma}), the new fermion contribution is always positive for real Yukawa couplings (which is the case at hand) and because it is opposite in sign to SM contribution, it may suppress its value causing large deviations from the results published by the ATLAS~\cite{Aaboud:2018xdt} and CMS~\cite{Sirunyan:2018ouh} collaborations.
Indeed, that is what occurs when large values of $y$ as well as large splitting between the $Z_2$ odd heavier fermions are required, the same conditions that lead to the correct DM relic abundance.
As a result, the new contribution conspires to create large discrepancies from the expected value, to the point of only allowing the DM mass to lie in the narrow range $70\,\mathrm{GeV} < m_{\chi_1^0}< 80\,\mathrm{GeV}$ (a numerical sketch of Eq.~(\ref{eq:higgsgammagamma}) is given right after this list).
\end{itemize}
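As referenced in the last item above, the following minimal Python sketch evaluates Eq.~(\ref{eq:higgsgammagamma}) as written; the Yukawa coupling and the chargino masses below are assumptions chosen only to exhibit the dependence of $R_{\gamma\gamma}$ on $y$ and on the mass splitting:
\begin{verbatim}
import numpy as np

mh, v, ASM = 125.0, 246.0, -6.5   # Higgs mass, vev (GeV), SM amplitude

def AF(tau):
    # fermionic loop factor quoted in the text, valid for tau <= 1
    return 2.0/tau**2*(tau + (tau - 1.0)*np.arcsin(np.sqrt(tau))**2)

def Rgg(y, m1, m2):
    # the expression for R_gamma_gamma, with chargino masses m1, m2
    t1, t2 = (mh/(2.0*m1))**2, (mh/(2.0*m2))**2
    new = y**2*v**2/(m2 - m1)*(AF(t2)/m2 - AF(t1)/m1)
    return abs(1.0 + new/ASM)**2

print(Rgg(1.0, 150.0, 450.0))  # the deviation from unity grows with y^2
\end{verbatim}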
All in all, despite the custodial limit of the DTF model being an appealing and promising scenario (thus being an excellent exponent of the WIMP paradigm), it is severely constrained by the interplay between the DM relic density constraint and the LHC measurement of the Higgs diphoton decay rate.
Nonetheless, if the DM abundance is generated within a nonstandard cosmology or is part of a multicomponent dark sector, the requirement of having large values of $y$ as well as a large mass splitting between the charged fermions would be substantially weakened.
This in turn would result in a larger portion of the parameter space still being allowed\footnote{Another way of recovering part of the parameter space is via the introduction of additional charged scalars \cite{Betancur:2017dhy}.}.
In the following sections, we will explore the DTF in the case of $\tan \beta=1$ with $\chi_1^0$ as the DM candidate; our aim is to find current as well as upcoming restrictions due to collider, direct detection and indirect detection experiments.
We choose the set of free parameters of the model to be $M_\psi, M_\Sigma$ and $y$, and due to the freedom to make field redefinitions we assume $M_\psi,y>0$ and $M_\Sigma$ to be real \cite{Dedes:2014hga}, implying CP conservation in the $Z_2$-odd sector and that three intrinsic CP phases of the $Z_2$-odd fields (including $\chi_1^0$) are fixed.
\section{Collider Bounds}\label{collider}
The Large Hadron Collider (LHC) is currently running at an outstanding 13 TeV energy and has collected more than 100 fb$^{-1}$ of data. One of its current goals is to probe BSM models either by direct production of new particles or by measuring possible deviations from the SM. In that regard, the parameter space of the DTF model may be constrained by the ATLAS and CMS experiments.
\subsection{Higgs diphoton decay}\label{sec:diphoton}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.35]{infer_MD_R_MT_allR.pdf}
\includegraphics[scale=0.35]{sup_MD_R_MT_allR.pdf}\\
\includegraphics[scale=0.35]{infer_MD_R_y_allR.pdf}
\includegraphics[scale=0.35]{sup_MD_R_y_allR.pdf}
\caption{Scan on the parameter space of the model against $R_{\gamma \gamma}$ for the region where $M_{\Sigma}<-m_{\chi^0_1}$ (left panels) and the region where $M_{\Sigma}>-m_{\chi^0_1}$ (right panels). The solid and dashed horizontal lines represent the lowest bound at a 2$\sigma$ deviation from the central value reported by the ATLAS and CMS collaborations, respectively. }
\label{fig:Rgg-1}
\end{center}
\end{figure}
As explained before, the DTF may generate large deviations from the current measurements of the Higgs diphoton decay rate.
The ATLAS and CMS collaborations have presented results for this decay at $\sqrt{s}=13$ TeV with $\sim 36\, \mathrm{fb^{-1}}$ of data in Refs.~\cite{Aaboud:2018xdt} and \cite{Sirunyan:2018ouh}, respectively.
We use these results to find the regions of the parameter space that are in agreement with the Higgs diphoton decay rate. For this we consider models that deviate at most $2\sigma$ from the central value reported by each collaboration.
In order to obtain constraints from $R_{\gamma \gamma}$ without imposing the DM relic density constraint, we performed a scan of the free parameters of the model as follows:
\begin{align}\label{eq:scan}
0.1 < &\; y<3.5,\nonumber\\
-2000\,\mathrm{GeV} < &\; M_{\Sigma}<2000\,\mathrm{GeV},\nonumber\\
75.0\,\mathrm{GeV} < &\; M_{\psi}<500\,\mathrm{GeV}.
\end{align}
Additionally, we only considered models where the lightest charged fermion is heavier than 100 GeV in order to satisfy LEP limits \cite{Achard:2001qw}.
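For reproducibility, a minimal Python sketch of the scan just described is given below (assuming the custodial limit, where both off-diagonal entries of Eq.~(\ref{eq:MchiC}) equal $yv/\sqrt{2}$; the sample size is an arbitrary choice):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, vev = 10_000, 246.0          # assumed sample size; vev in GeV

y  = rng.uniform(0.1, 3.5, n)
MS = rng.uniform(-2000.0, 2000.0, n)
MP = rng.uniform(75.0, 500.0, n)

# lightest charged-fermion mass from the 2x2 charged mass matrix
off = y*vev/np.sqrt(2.0)
disc = np.sqrt((MS - MP)**2 + 4.0*off**2)
m1 = np.minimum(np.abs((MS + MP - disc)/2.0),
                np.abs((MS + MP + disc)/2.0))

keep = m1 > 100.0               # LEP bound on the lightest charged fermion
print(keep.mean())              # fraction of scan points surviving the cut
\end{verbatim}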
The results are presented in Fig.~\ref{fig:Rgg-1}, where the parameter space has been divided into two regions, $M_{\Sigma}<-m_{\chi^0_1}$ (left panels) and $M_{\Sigma}>-m_{\chi^0_1}$ (right panels). We will do this throughout the paper because the phenomenology in these two regions tends to yield different results.
The scan shows that, considering the ATLAS results, for the region where $M_{\Sigma}<-m_{\chi^0_1}$ there are no restrictions on the Yukawa coupling $y$ or on $M_{\Sigma}$ whereas in the region $M_{\Sigma}>-m_{\chi^0_1}$ the decay rate forbids $y$ values larger than 2.25 and $M_{\Sigma}\lesssim -60$ GeV.
On the other hand, CMS results yield the severe constraint $y<2.0$ for $M_{\Sigma}<-m_{\chi^0_1}$ and $y<1.0$ for $M_{\Sigma}>-m_{\chi^0_1}$, whereas $M_{\Sigma}$ must be less than $\sim-92$ GeV. Hence, positive triplet masses are no longer consistent with $R_{\gamma \gamma}$ results.
We also consider the impact on the fermion mixing angle from $R_{\gamma \gamma}$ (see Fig.~\ref{fig:Rgg-2}).
For the region where $M_{\Sigma}<-m_{\chi^0_1}$ (left panel) the mixing angle must be small such that $|\cos \theta| \lesssim 0.3$ ($|\cos \theta|\lesssim 0.2$) in order to be consistent with ATLAS (CMS) measurements.
Accordingly, in that region the lightest charged fermion is mostly doublet.
On the other hand, the results for the region where $M_{\Sigma}>-m_{\chi_1^0}$ (right panel) indicate that the lightest charged fermion is mostly triplet, with $|\cos \theta| \gtrsim 0.94$ ($|\cos \theta|\gtrsim 0.98$) for the ATLAS (CMS) data.
A comment regarding this region is in order: for $R_{\gamma \gamma} \sim 0.2$, the mixing angle exhibits a rather complex behavior which is seen as large changes in $\cos \theta$ (from -0.8 to 0.6) right next to a boundary where $\cos \theta \sim0.8$.
This stems from the fact that at this boundary the triplet mass is changing sign, thus having an impact on the mixing angle behavior.
The results for the mixing angle in both region are important because they will have a direct impact on the production cross section of the heavier fermions at the LHC, as will be discussed below.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.35]{inf_MD_R_cos_allR.pdf}
\includegraphics[scale=0.35]{sup_MD_R_cos_allR.pdf}
\caption{Impact of the cosine of the mixing angle $\theta$ on the Higgs diphoton decay rate for the allowed values of the DM mass. The conventions are the same as those of Fig. \ref{fig:Rgg-1}.}
\label{fig:Rgg-2}
\end{center}
\end{figure}
\subsection{Constraints from electroweak production searches}
Other LHC results that may potentially constrain the DTF model are those searching for electroweak production of neutralinos and charginos in different simplified SUSY models (with all other SUSY particles decoupled), where
the relevant detection channels are those with several leptons (and missing energy) in the final state.
In the DTF, $\chi_{1,2}^{\pm}$ and $\chi_{2,3}^{0}$ play the role of charginos and heavier neutralinos, respectively, with the same mass degeneracy that characterizes the simplified supersymmetric scenarios.
The CMS collaboration has recently published results for such searches at $\sqrt{s}=13$ TeV and 35.9 fb$^{-1}$ \cite{Sirunyan:2018ubx}.
For the case of $m_{\chi_1^0}\lesssim500$ GeV and a nondegenerate spectrum,
the most sensitive channel is that with three final-state leptons, where at least two of them have opposite sign and same flavor. Thus, DM production proceeds via the following process:
\begin{align}
& q \bar{q}' \rightarrow W^{*\pm} \rightarrow \chi^\pm \chi_{2,3}^0:
\begin{cases}
\chi^\pm \rightarrow \chi^0_{1} W^{*\pm} \rightarrow \chi_1^0 \ell^\pm \nu_\ell , \\
\chi_{2,3}^0 \rightarrow \chi^0_{1} Z^* \rightarrow \chi_1^0 \ell^+ \ell^-.
\end{cases}
\end{align}
where the mediators $\chi^{\pm}$ and $\chi^{0}$ are considered to be winos and thus mass degenerate, with the neutral fermion decaying 100$\%$ via the $Z$ boson.
To recast the LHC constraints (and other experimental restrictions that will be discussed below) for the DTF, we implemented the model in the {\tt SARAH-4.12.3} package \cite{Staub:2013tta}, whose output was used with {\tt SPheno-4.0.3} \cite{Porod:2003um} in order to obtain the particle spectrum and with {\tt MadGraph5\_aMC@NLO} \cite{Alwall:2014hca} to obtain the production cross sections.
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.35]{inferior_chargino_neutralino_rest.pdf}
\includegraphics[scale=0.35]{superior_chargino_neutralino_rest.pdf}
\caption{DM mass versus lightest chargino mass for the regions where $M_{\Sigma}<-m_{\chi^0_1}$ (left panel) and $M_{\Sigma}>-m_{\chi^0_1}$ (right panel). The region below the blue dashed line is excluded from CMS electroweak production while the region bounded by the blue solid line represents the exclusion by the ATLAS collaboration using compressed spectra. Points below the solid (dashed) black contour are excluded by the $R_{\gamma \gamma}$ results reported by the ATLAS (CMS) collaboration.}
\label{fig:prod}
\end{center}
\end{figure}
Fig. \ref{fig:prod} shows the constraints from electroweak production, where the excluded region corresponds to the points below the blue dashed line.
Moreover, the points below the solid and dashed black lines yield a lower diphoton decay ratio than the one allowed by ATLAS and CMS, respectively. In the region where $M_{\Sigma}>-m_{\chi^0_1}$ (right panel), the diphoton decay ratio restricts the lightest charged and the next-to-lightest neutral fermions to be mostly doublet. A consequence of this is that the production cross section is nearly the same for all values of the allowed Yukawa coupling and the triplet mass, which means that the boundary of the excluded region is nearly independent of $y$ and $M_{\Sigma}$. Moreover, due to the mixing angle, the production cross section resembles that of a SUSY Higgsino with all scalars decoupled. The figure shows that the strongest constraints come mostly from $R_{\gamma \gamma}$, except for a small area where the electroweak production cross section is more restrictive. Nonetheless, no additional restrictions are placed on the free parameters $M_{\psi}$, $y$ and $M_{\Sigma}$.
In the region where $M_{\Sigma}<-m_{\chi_1^0}$, the diphoton decay rate restricts $\chi_2^0$ and $\chi_1^{\pm}$ to be mostly triplet. The production cross section is large but again independent of $y$ and $M_{\Sigma}$, and so the region excluded by electroweak production is presented with only one contour. In this region, due to the larger production cross section, the exclusion curve is shifted to the left along the $m_{\chi_1^+}$ axis, and so $R_{\gamma \gamma}$ places the strongest constraints over the whole plane.
\subsection{Constraints from compressed spectra searches}
The ATLAS collaboration has also published results relevant for the DTF in the case of compressed spectra \cite{Aaboud:2017leg}, {\it i.e.}, when the next-to-lightest fermion is close in mass to the neutralino DM ($\leqslant$35 GeV) and there is a mass degeneracy between the next-to-lightest neutralino and the lightest chargino. In that region, the DM production proceeds via:
\begin{align}
&q \bar{q}' \rightarrow W^{*\pm} \rightarrow \chi^\pm \chi_{2,3}^0:\,\,\,
\begin{cases}
\chi^\pm \rightarrow \chi^0_{1} W^{*\pm} \rightarrow \chi^0_{1} q \bar{q}' ,\\
\chi_{2,3}^0 \rightarrow \chi^0_{1} Z^* \rightarrow \chi_1^0 \ell^+ \ell^- .
\end{cases}
\end{align}
The search then focuses on two leptons with opposite sign and same flavor with soft momenta and large $\not\mathrel{E}$, which is present due to the two DM particles recoiling against initial-state radiation.
For this search, small mass splittings are required in order to ensure DM coannihilations.
In the DTF this low mass splitting is not needed; in fact $0 \lesssim m_{\chi_1^{\pm}}-m_{\chi_1^{0}} \lesssim 140$ GeV. However, we may still use the constraints for small mass splittings between the next-to-lightest $\chi^\pm$ and the DM.
We find that, for the region $M_{\Sigma}>-m_{\chi_1^0}$, $\chi_1^{\pm}$ and $\chi_2^{0}$ are mostly triplet, and so the restrictions resemble those of the ATLAS analysis, which are shown with a solid blue contour.
In terms of the free parameters, we find that the triplet mass is now restricted to be smaller than $\sim -165$ GeV, whereas the Yukawa coupling is not constrained.
For the case of $M_{\Sigma}<-m_{\chi_1^0}$, since $\chi_1^{\pm}$ and $\chi_2^{0}$ are mostly doublet, the production cross section is lower and so the restriction is negligible.
\section{DM detection in a nonstandard cosmology}\label{NONSC}
In a nonstandard cosmology scenario, the late decay of a heavy scalar field could either increase or decrease the DM relic abundance compared to the standard calculation.
For the DTF\footnote{The case of the SUSY neutralino was studied in Ref.~\cite{Gelmini:2006pw}, whereas in Ref.~\cite{Cheung:2012qy} the wino-higgsino (similar to the DTF) case was also considered.}, it could be possible to satisfy the relic abundance, for instance, due to the presence of a heavy scalar particle which decays into heavier $Z_2$-odd particles that later on decay into the DM candidate, thus increasing the relic abundance compared to that of the model in the standard cosmology.
Hence, we expect the DTF model to saturate, in one way or another, the DM relic abundance. Therefore, we look into current experimental constraints coming from direct searches and indirect detection via gamma rays. Since the diphoton decay is by far more restrictive than production at colliders, in this section we impose the $R_{\gamma \gamma}$ restriction coming from ATLAS and, when relevant to the parameter space, we also present the restriction on this observable arising from the CMS experiment.
\subsection{Direct detection}\label{DD}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{direct_detection_higgs.pdf}
\caption{Feynman diagrams contributing to spin-independent DD. The diagrams arise from a loop correction to the $h\chi_1^0\chi_1^0$ coupling, which is absent at tree level when $\tan \beta=1$. The interaction shown in the left diagram is independent of the Yukawa coupling $y$, while the right one depends on it. The $Z_2$-odd $\chi$ fermion shown in both diagrams represents the charged $\chi_{1,2}^{\pm}$ (neutral $\chi_{2,3}^0$) fermion when the loop is mediated by the $W$ ($Z$) boson. }
\label{fig:DD_h}
\end{center}
\end{figure}
Within the custodial limit scheme, the SI elastic scattering is only achieved at the loop level since both $g_{\chi_1^0\chi_1^0 Z}$ and $g_{\chi_1^0\chi_1^0 h}$ couplings vanish at tree-level.
However, at loop-level there are, in principle, several contributions that could be relevant.
First, there is an effective nonzero $\chi_1^0\chi_1^0h$ coupling originating from loops mediated by the new heavier fermions and weak gauge bosons, thus allowing for spin-independent direct detection (see Fig.~\ref{fig:DD_h}).
Additionally, box diagrams mediated by gauge bosons and twist-2 operators \cite{Hisano:2011cs,Drees:1993bu} contribute to the SI cross section.
In principle, these two contributions should be taken into account to obtain a reliable calculation.
However, it has been shown that they are sub-leading except when the two contributions arising from the Higgs vertex corrections cancel each other out, which happens for low values of $\sigma_{SI}$ ($\lesssim 10^{-47}$cm$^{2}$) \cite{Drees:1993bu}.
Moreover, the authors of Ref.~\cite{Hisano:2011cs} have shown that when the cancellation happens, the two-loop contributions to an effective scalar interaction with external gluons are of the same order as the box ones.
Since these calculations (boxes from gauge bosons, twist-2 operators and two-loop diagrams) are quite involved and only relevant in the case of specific cancellations, we will not take them into account in the calculation of the SI cross section. Moreover, they tend to further suppress a cross section that is already out of reach of current experiments. As a result, the restrictions that we will present below from DD are not strongly affected by this assumption.
In order to obtain the most up-to-date limits from DD, we calculated the effective $g_{h\chi_1^0\chi_1^0}$ coupling following Ref.~\cite{Freitas:2015hsa} and used it to compute the spin-independent (SI) cross section. We then compared with the current upper limits on the DM-nucleon SI scattering cross section, the strongest of which (within the DM mass range we are considering) are those reported by the XENON1T collaboration \cite{Aprile:2018dbl}. We also show the projected sensitivity of DARWIN \cite{Aalbers:2016jon}, the most sensitive DD experiment planned for DM at the electroweak scale. However, the expected SI cross section around the DARWIN limit must be taken with a grain of salt, since sub-leading corrections might change $\sigma_{SI}$ in that region.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.35]{Inferior_un-scaled_DD_color_y.pdf}
\includegraphics[scale=0.35]{Superior_un-scaled_DD.pdf}
\caption{Spin-independent cross section for the regions $M_{\Sigma}<-m_{\chi_1^0}$ (left) and $M_{\Sigma}>-m_{\chi_1^0}$ (right). The blue curve represents the upper limit imposed by XENON1T \cite{Aprile:2018dbl}, whilst the green curve shows the projected sensitivity of DARWIN \cite{Aalbers:2016jon}.
The black dashed line represents the limit when the $R_{\gamma \gamma}$ restriction from CMS is considered.}
\label{fig:Un-sup-DD}
\end{center}
\end{figure}
In Fig. \ref{fig:Un-sup-DD} we display the results for the spin-independent cross section as a function of the DM mass, for the regions $M_{\Sigma}<-m_{\chi_1^0}$ (left) and $M_{\Sigma}>-m_{\chi_1^0}$ (right).
It follows that XENON1T restricts the Yukawa coupling to be less than 1.75 if the lower bound on $R_{\gamma \gamma}$ from ATLAS is imposed.
The dashed black line in both panels shows the CMS lower bound on $R_{\gamma \gamma}$, which excludes models even further and, for the region $M_{\Sigma}>-m_{\chi_1^0}$, imposes $y \leq 1.2$.
We also checked the impact of DD results on the other free parameter of the model, $M_{\Sigma}$, but we found that they place no further restrictions on it.
The prospects for the DARWIN experiment correspond to the green solid line, which shows that couplings as small as 0.5 may be probed. It is worth mentioning that the lower limit on the SI cross section is due to the cancellation between the two one-loop corrections to the $h\chi_1^0\chi_1^0$ vertex; hence, in order to have a precise value of $\sigma_{SI}$ in this region, a more detailed calculation is necessary.
\subsection{Indirect detection from dwarf spheroidal galaxies}\label{ID1}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.35]{Inferior_un-scaled_DD_difuseID_MT.pdf}
\includegraphics[scale=0.35]{Superior_un-scaled_DD_difuseID_MT.pdf}
\caption{ID restrictions and prospects coming from the Fermi satellite observation of dSphs, applied to the regions $M_{\Sigma}<-m_{\chi_1^0}$ (left panel) and $M_{\Sigma}>-m_{\chi_1^0}$ (right panel). The blue and green curves show current limits from the $W^+W^-$ channel for 6 years of observation and 15 dSphs, and the projected sensitivity for 45 dSphs and 15 years of observation, respectively.
Points above the red dotted line are excluded by CMB measurements from the Planck collaboration.
Points below the black dashed line are excluded when the $R_{\gamma \gamma}$ restriction from CMS is considered.}
\label{fig:sup-inf-ID}
\end{center}
\end{figure}
In regions of high dark matter density such as dwarf spheroidal galaxies (dSphs) or the center of the Milky Way, DM particles may more easily find each other and annihilate into SM particles.
The dSphs are particularly interesting because of their proximity to the Milky Way, their high DM-to-baryon mass ratio, and their low background, thus making DM detection via gamma rays feasible.
The Fermi satellite has searched for gamma rays from dSphs, finding no deviations from the expected spectrum, which has led to upper limits on the thermally averaged DM annihilation cross section \cite{Ackermann:2015zua}.
It is worth noting that in this model DM annihilation can also affect the cosmic microwave background (CMB). If DM annihilates during the time of recombination, it will inject energy that ionizes hydrogen. This has a direct effect on the anisotropies of the CMB, thus altering what is currently observed. Therefore, measurements of the CMB can constrain the parameter space of DM models, with the advantage that astrophysical uncertainties do not play a role for this observable \cite{Kawasaki:2015peu,Lopez-Honorez:2013lcm}.
For the DTF, the DM annihilation proceeds through the same channels as in the early Universe, {\it i.e.}, via $t$- and $u$-channel annihilation into $W^+ W^-$ and $Z Z$ bosons. The gauge bosons then decay and produce, for instance, gamma rays that may be detected as an excess in the spectrum. To obtain the constraints, we calculated the thermally averaged cross section using the publicly available package {\tt micrOMEGAS} \cite{Belanger:2013oya} and compared with the limits reported in Ref.~\cite{Ackermann:2015zua}.
In the DTF this cross section is, to leading order, independent of the DM velocity. Thus, its value matches that of the early Universe, which allows us to compare our results with the limits reported in Ref.~\cite{Aghanim:2018eyx}.
The results are shown in Fig.~\ref{fig:sup-inf-ID}, where all points shown satisfy the ATLAS $R_{\gamma \gamma}$ constraint and DD bounds, as explained in previous sections. As can be seen, the Fermi observation of 15 dSphs imposes stringent limits on the model, in such a way that a large portion of the DM mass range is ruled out. Moreover, stringent limits on the mass of the next-to-lightest fermion also arise, since such particles act as the mediators in the $t$- and $u$-channels of the DM annihilation.
On the other hand, though CMB measurements do place constraints, they are far less restrictive than those coming from dSphs.
For the region where $M_{\Sigma}<-m_{\chi_1^0}$ we find that $86\,\text{GeV}<m_{\chi_1^0}<280$ GeV is already ruled out, which also leads to the restriction $m_{\chi^{+}}>340$ GeV for $m_{\chi_1^0}>280$ GeV. On the other hand, for the region where $M_{\Sigma}>-m_{\chi_1^0}$ we find that the diffuse spectrum requires $m_{\chi_1^0}>280$ GeV and $m_{\chi^{+}}>300$ GeV, while $M_{\Sigma}\lesssim -230$ GeV.
The expected observation of 45 dSphs over 15 years will explore the whole region of the right panel and will leave only a very narrow range of $m_{\chi_1^0}$ around $\sim$ 80 GeV unexplored.
We also note that the points satisfying the $R_{\gamma \gamma}$ restriction of the CMS experiment are those with the largest $\langle \sigma v \rangle$, since both observables depend on the mixing angle and are maximal for large $y$\footnote{For $R_{\gamma \gamma}$ the dependence was already shown in Sec. \ref{collider}, while for the diffuse spectrum, the dependence on $\cos\theta$ enters through the vertices of the annihilation channels. In Appendix \ref{AppendixA} this dependence is shown for the DM interaction with the $W^{\pm}$ gauge boson and a $Z_2$-odd charged fermion.}.
\subsection{Indirect detection from gamma-ray lines }\label{ID2}
Another promising detection channel is DM annihilation into two photons within regions with high DM density.
In this case, the photon energies will be closely related to the DM mass leading to a spectrum exhibiting a sharp peak referred to as a linelike feature \cite{Funk:2013gxa}.
In this regard, the Fermi~\cite{Ackermann:2015lka} and H.E.S.S. \cite{Abdalla:2016olq,Abdallah:2018qtu} collaborations have looked for gamma-ray lines coming from the center of the Milky Way, with no evidence of DM so far. This in turn has led to constraints on the DM annihilation into photons, $ \langle \sigma v \rangle_{\gamma\gamma}$.
In the DTF, the DM annihilation into two photons is mediated by heavier $Z_2$-odd fermions interacting with vector and Goldstone bosons. Though the annihilation cross section in this case is loop suppressed, it may be possible to place constraints. In order to calculate the $ \langle \sigma v \rangle_{\gamma\gamma}$ we follow the procedure given in Ref.~\cite{Garcia-Cely:2016hsk} (the specific calculations along with the topologies that contribute are given in Appendix \ref{AppendixA}).
After considering all the restrictions coming from colliders, DD and ID in the diffuse spectrum, our results show that the Fermi and H.E.S.S. results do not place additional constraints on the model in either the $M_{\Sigma}<-m_{\chi_1^0}$ or the $M_{\Sigma}>-m_{\chi_1^0}$ region, since $ \langle \sigma v \rangle_{\gamma\gamma} \sim 10^{-29}\, \text{cm}^3/\text{s}$, which is nearly an order of magnitude below the most sensitive limits, presented by the H.E.S.S. collaboration in Ref.~\cite{Abdallah:2018qtu}. As a result, this observable does not restrict the parameter space of the model.
\section{DM detection in multicomponent dark sectors}\label{multiDM}
An interesting possibility that has recently gained momentum is that the DM is composed of different sectors, which is a far more general setting than the usual single DM candidate. For instance, the observed relic density could be the result of WIMP and axion particles. In this case, it is possible that the sectors do not communicate, so they behave as two completely independent DM particles, without affecting each other's relic density and experimental bounds. In this section we will consider the WIMP DM candidate of the DTF to be part of multicomponent DM; that is, we obtain experimental bounds for models where the WIMP relic density is less than or equal to the central value reported by the Planck collaboration, $\Omega_{\text{Planck}}$ \cite{Aghanim:2018eyx}.
Figure \ref{fig:perc_omega} shows the ratio $\epsilon_{\chi_1^0}=\Omega_{\chi_1^0}/\Omega_{\text{Planck}}$ as a function of $m_{\chi_1^0}$.
For the region where $M_{\Sigma}<-m_{\chi_1^0}$, the relic abundance is at most 40$\%$ of the observed value, except for the narrow region where $m_{\chi_1^0} \sim 80$ GeV (where annihilation into weak gauge bosons is kinematically suppressed).
On the other hand, in the region where $M_{\Sigma}>-m_{\chi_1^0}$ there are no models that saturate the relic density, and so the DTF accounts for at most 40$\%$ of the Universe's DM content.
A comment is in order here: unlike the previous section, we are assuming that the DM abundance arises within the standard cosmology scenario, and in that sense the relic abundance of the WIMP is the one calculated through the usual method of solving the Boltzmann equation (via {\tt micrOMEGAS} \cite{Belanger:2013oya}).
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.35]{inferior_percentage_omega.pdf}
\includegraphics[scale=0.35]{superior_percentage_omega.pdf}
\caption{ $\epsilon_{\chi_1^0}$ vs. $m_{\chi_1^0}$ for $M_{\Sigma}<-m_{\chi_1^0}$ (left panel) and $M_{\Sigma}>-m_{\chi_1^0}$ (right panel). All points satisfy collider bounds presented in Sec. \ref{collider}.}
\label{fig:perc_omega}
\end{center}
\end{figure}
Now we set out to investigate experimental bounds on the model. For colliders, the restrictions are the same as those presented in Sec. \ref{collider}, since they are independent of the DM abundance. On the other hand, DD and ID rates do depend on the local DM density, and as a result the constraints presented in Sec.~\ref{NONSC} will be different in this scenario.
To quantify this, we used the parameter $\epsilon_{\chi_1^0}$ \cite{Cao:2007fy,Hur:2007ur} to rescale the DD and ID observables.
For the case of DD, the expected scattering rate is rescaled by $\epsilon_{\chi_1^0}$, which means that the SI cross section is effectively $\sigma_{SI}=\epsilon_{\chi_1^0}\sigma_{SI-\chi_1^0}$; hence, the DD constraints are now relaxed. The results are shown in Fig.~\ref{fig:sup-inf-DDre} for $M_{\Sigma}<-m_{\chi_1^0}$ (left panel) and $M_{\Sigma}>-m_{\chi_1^0}$ (right panel). The left panel shows that, for models that satisfy the lowest ATLAS limit on $R_{\gamma \gamma}$, DD imposes $y \leqslant 2.1$, while for models that satisfy the lowest CMS limits $y \leqslant 1.9$. In the right panel, for models that satisfy the lowest ATLAS limit on $R_{\gamma \gamma}$, DD imposes $y \leqslant 2.2$, while for models that satisfy the lowest CMS limits $y \leqslant 0.95$, which means that in this case the CMS diphoton decay is more restrictive than DD (even considering DARWIN prospects).
For indirect detection, the situation is far less restrictive because the thermally averaged cross section is rescaled by a factor of $\epsilon_{\chi_1^0}^2$, thus suppressing it.
As a result, ID does not impose additional constraints on the model.
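To make the two scalings concrete, consider the following toy numerical illustration (all numbers below are assumptions, not fit results):
\begin{verbatim}
eps = 0.4              # assumed Omega_chi / Omega_Planck
sigma_SI = 2.0e-46     # assumed unrescaled SI cross section (cm^2)
sv = 3.0e-26           # assumed unrescaled <sigma v> (cm^3/s)

print(eps*sigma_SI)    # DD signal: suppressed linearly in eps
print(eps**2*sv)       # ID signal: suppressed quadratically in eps
\end{verbatim}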
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.35]{inferior_re-scaled_DD_pi1.pdf}
\includegraphics[scale=0.35]{Superior_re-scaled_DD_colory.pdf}
\caption{Direct detection results for $M_{\Sigma}<-m_{\chi_1^0}$ (left panel) and $M_{\Sigma}>-m_{\chi_1^0}$ (right panel), the conventions are the same as in Fig.~\ref{fig:Un-sup-DD}.}
\label{fig:sup-inf-DDre}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:conc}
In this work we have studied a simplified DM model, the doublet-triplet fermionic model, where the SM is enlarged with an $SU(2)_L$ vectorlike doublet and a Majorana triplet; both new fields are odd under a $Z_2$ symmetry while the SM fields are even. As a result, the Lagrangian of the new fields includes two Yukawa-type interactions with the Higgs field.
It follows that when the two allowed Yukawa couplings of the dark sector are equal, the model exhibits a custodial symmetry and, when the DM particle is pure doublet, its diagonal couplings to the Higgs and $Z$ bosons are forbidden at tree level.
In this case, the model may saturate the relic abundance at the electroweak scale, but that comes with the disadvantage of a strong suppression of the Higgs diphoton decay rate.
For this reason, we have considered the model (in the aforementioned case) framed within two different scenarios: one where the relic abundance is set by nonstandard cosmology, {\it i.e.}, we assumed the relic abundance is saturated due to new physics before BBN, and another where the DM is made up of multiple particles which do not affect each other's abundance or DD limits.
As a result, the mass of the heavier charged and neutral fermions may lie close to the DM mass, which partly lifts the $R_{\gamma \gamma}$ restriction.
Regarding DD and ID in the nonstandard cosmology scenario, we found that the XENON1T results demand a Yukawa coupling $y<1.75$, whereas the Fermi results restrict the DM mass to $m_{\chi_1^0} > 280$ GeV, except for a narrow region around $m_{\chi_1^0} \sim 80$ GeV when $M_{\Sigma} < -m_{\chi_1^0}$.
On the other hand, for the scenario of the DM as part of the multicomponent dark sectors we found that DD impose a less severe constraint on the Yukawa coupling ($y<2.2$) while current ID does not impose additional constraints on the model.
\section*{Acknowledgements}
We are thankful to Diego Restrepo, Andrés Rivera and Guillermo Palacio for enlightening discussions. We are also thankful to Florian Staub for help with the SARAH package and Olivier Mattelaer for help with MadGraph.
A. B. has been supported by Colciencias, Universidad EIA and Fulbright Colombia.
O.Z. has been supported by the Sostenibilidad program of Universidad de Antioquia UdeA and by COLCIENCIAS through the grants 111565842691 and 111577657253. O.Z. acknowledges the ICTP Simons associates program and the kind hospitality of the Abdus Salam ICTP where the final stage of this work was done.
\section{The Local REINFORCE Estimator is Unbiased}\label{local_REIN_unbiased}
Here we show that the local REINFORCE estimator $\hat{G}^{RE}_{\Phi}=\frac{\partial\log(\pi_\Phi(\Phi|B(\Phi)))}{\partial\theta_\Phi}R$ is an unbiased estimator of the gradient of the expected reward with respect to $\theta_\Phi$.
\begin{align*}
\E[\hat{G}^{RE}_{\Phi}]&=\E\left[\frac{\partial\log(\pi_\Phi(\Phi|B(\Phi)))}{\partial\theta_\Phi}R\right]\\
&=\sum_{b}\P(B(\Phi)=b)\E\left[\frac{\partial\log(\pi_\Phi(\Phi|B(\Phi)))}{\partial\theta_\Phi}R\middle|B(\phi)=b\right]\\
&=\sum_{b}\P(B(\Phi)=b)\sum_\phi\pi_\Phi(\phi|b)\frac{\partial\log(\pi_\Phi(\phi|b))}{\partial\theta_\Phi}\E\left[R\middle|B(\Phi)=b,\Phi=\phi\right]\\
&=\sum_{b}\P(B(\Phi)=b)\sum_\phi\frac{\partial\pi_\Phi(\phi|b)}{\partial\theta_\Phi}\E\left[R\middle|B(\Phi)=b,\Phi=\phi\right]\\
&\stackrel{(a)}{=}\frac{\partial}{\partial\theta_\Phi}\sum_{b}\P(B(\Phi)=b)\sum_\phi\pi_\Phi(\phi|b)\E\left[R\middle|B(\Phi)=b,\Phi=\phi\right]\\
&=\frac{\partial\E[R]}{\partial\theta_\Phi},
\end{align*}
where $(a)$ follows from the fact that the probability of the parents of $\Phi$, $\P(B(\Phi)=b)$, does not depend on the parameters $\theta_\Phi$ controlling $\Phi$ itself, nor does the expected reward once conditioned on $\Phi$.
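As an illustrative sanity check of this result (not part of the derivation above), the following Python sketch compares the local REINFORCE estimate against the analytic gradient for a single sigmoid-parameterized Bernoulli unit with no parents; the parameter value and toy rewards are assumptions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                        # logit of a single Bernoulli unit
p = 1.0/(1.0 + np.exp(-theta))     # pi(Phi = 1)

R1, R0 = 1.0, 0.2                  # assumed toy rewards for Phi = 1, 0
true_grad = p*(1.0 - p)*(R1 - R0)  # d/dtheta of p*R1 + (1 - p)*R0

# d log pi(Phi)/d theta = Phi - p for this parameterization
Phi = rng.binomial(1, p, size=1_000_000)
est = np.mean((Phi - p)*np.where(Phi == 1, R1, R0))
print(true_grad, est)              # agreement up to Monte Carlo noise
\end{verbatim}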
\section{Any Gradient Estimator Based on an Action-Value Estimator Obeying Equation~\ref{action_value_estimator} is Unbiased}\label{action_value_estiamtor_unbiased}
Assume we have access to an action-value estimator $\hat{Q}_\Phi(\phi)$ obeying Equation~\ref{action_value_estimator} and construct a gradient estimator $\hat{G}_{\Phi}=\sum_{\phi}\frac{\partial\pi_\Phi(\phi|B(\Phi))}{\partial\theta_\Phi}\hat{Q}_\Phi(\phi)$ as specified by Equation~\ref{PG_estimator}; then we can rewrite $\E[\hat{G}_{\Phi}]$ as follows:
\begin{align*}
\E[\hat{G}_{\Phi}]&=\E\left[\sum_{\phi}\frac{\partial\pi_\Phi(\phi|B(\Phi))}{\partial\theta_\Phi}\hat{Q}_\Phi(\phi)\right]\\
&=\sum_{b}\P(B(\Phi)=b)\sum_{\phi}\frac{\partial\pi_\Phi(\phi|b)}{\partial\theta_\Phi}\E\left[\hat{Q}_\Phi(\phi)\middle|B(\phi)=b\right]\\
&\stackrel{(a)}{=}\sum_{b}\P(B(\Phi)=b)\sum_{\phi}\frac{\partial\pi_\Phi(\phi|b)}{\partial\theta_\Phi}\E[R|B(\Phi)=b,\Phi=\phi]\\
&=\frac{\partial}{\partial\theta_\Phi}\sum_{b}\P(B(\Phi)=b)\sum_\phi\pi_\Phi(\phi|b)\E\left[R\middle|B(\Phi)=b,\Phi=\phi\right]\\
&=\frac{\partial\E[R]}{\partial\theta_\Phi}.
\end{align*}
where $(a)$ follows from the assumption that $\hat{Q}_\Phi(\phi)$ obeys Equation~\ref{action_value_estimator}.
\newpage
\section{Examples where Assumption~\ref{parent_of_child} Holds and is Violated}\label{parent_of_child_examples}
Figure~\ref{obeyed_and_violated} illustrates the meaning of Assumption~\ref{parent_of_child} by showing a simple example where it holds, and another where it does not.
\begin{figure*}[!h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{img/obeyed.pdf}
\caption{An example where assumption~\ref{parent_of_child} holds.}
\label{obeyed}
\end{subfigure}
\hspace{0.13\textwidth}
\begin{subfigure}[t]{0.3\textwidth}
\includegraphics[width=\textwidth]{img/violated.pdf}
\caption{An example where assumption~\ref{parent_of_child} fails to hold.}
\label{violated}
\end{subfigure}
\caption{}\label{obeyed_and_violated}
\end{figure*}
\section{The HNCA Action Value Estimator has Lower Variance than the Reinforce Estimator}\label{HNCA_action_value_low_var}
Here we provide the proof of Theorem~\ref{reduced_variance}.
\begingroup
\def\ref{reduced_variance}{\ref{reduced_variance}}
\begin{theorem}
$\Var(\hat{Q}^{HNCA}_{\Phi}(\Phi)|B(\phi)=b)\leq \Var(\hat{Q}^{RE}_{\Phi}(\Phi)|B(\phi)=b).$
\end{theorem}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
The proof follows from the law of total variance. First, note that by inverting up to step $(b)$ in Equation~\ref{HNCA_derivation} we know:
\begin{equation*}
\hat{Q}^{HNCA}_\Phi(\phi)=\E\left[\frac{\ind(\Phi=\phi)}{\pi_\Phi(\phi|B(\Phi))}R\middle|C(\Phi),B(\Phi),B(C(\Phi))\setminus\Phi,R\right].
\end{equation*}
Now apply the law of total variance to rewrite the variance of the reinforce estimator as follows:
\begin{align*}
\Var&(\hat{Q}^{RE}_{\Phi}(\Phi)|B(\phi)=b)\\
&=\Var\left(\frac{\ind(\Phi=\phi)}{\pi_{\Phi}(\phi|B(\phi))}R\middle|B(\phi)=b\right)\\
&=\begin{multlined}[t]
\E\left[\Var\left(\frac{\ind(\Phi=\phi)}{\pi_{\Phi}(\phi|B(\phi))}R\middle|C(\Phi),B(\Phi),B(C(\Phi))\setminus\Phi,R\right)\middle|B(\phi)=b\right]\\
+\Var\left(\E\left[\frac{\ind(\Phi=\phi)}{\pi_{\Phi}(\phi|B(\phi))}R\middle|C(\Phi),B(\Phi),B(C(\Phi))\setminus\Phi,R\right]\middle|B(\phi)=b\right)
\end{multlined}\\
&\geq \Var\left(\E\left[\frac{\ind(\Phi=\phi)}{\pi_{\Phi}(\phi|B(\phi))}R\middle|C(\Phi),B(\Phi),B(C(\Phi))\setminus\Phi,R\right]\middle|B(\phi)=b\right)\\
&=\Var(\hat{Q}^{HNCA}(\Phi)|B(\phi)=b).
\end{align*}
\end{proof}
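To make Theorem~\ref{reduced_variance} concrete, the following Monte Carlo sketch compares the two action-value estimators on a toy two-node chain: a single parent $\Phi$ with one Bernoulli child $C$, no other parents, and reward $R=C$ (all probabilities below are assumptions chosen for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 500_000
p = 0.4                    # assumed P(Phi = 1)
q1, q0 = 0.9, 0.2          # assumed P(C = 1 | Phi = 1), P(C = 1 | Phi = 0)

Phi = rng.binomial(1, p, N)
C = rng.binomial(1, np.where(Phi == 1, q1, q0))
R = C.astype(float)        # reward reaches Phi only through C

# REINFORCE action-value estimate for phi = 1
Q_re = (Phi == 1)/p*R

# HNCA: condition on the child's value and marginalize Phi out
pC1 = p*q1 + (1.0 - p)*q0  # P(C = 1)
Q_hnca = np.where(C == 1, q1/pC1, (1.0 - q1)/(1.0 - pC1))*R

print(Q_re.mean(), Q_hnca.mean())  # both approximate E[R | Phi = 1] = q1
print(Q_re.var(), Q_hnca.var())    # the HNCA variance is smaller here
\end{verbatim}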
\section{Variance of the HNCA Gradient Estimator}\label{HNCA_gradient_low_var}
Theorem~\ref{reduced_variance} shows that the HNCA action-value estimator $\hat{Q}^{HNCA}_\Phi(\Phi)$ has lower variance than $\hat{Q}^{RE}_\Phi(\Phi)$. In this section we discuss how this impacts the variance of the associated gradient estimator $\hat{G}_{\Phi}=\sum_{\phi}\frac{\partial\pi_\Phi(\phi|B(\Phi))}{\partial\theta_\Phi}\hat{Q}_\Phi(\phi)$. We can write this using the law of total variance as follows:
\begin{equation*}
\Var(\hat{G}_{\Phi})=\E\left[\Var\left(\hat{G}_{\Phi}\middle|B(\Phi)\right)\right]+\Var\left(\E\left[\hat{G}_{\Phi}\middle|B(\Phi)\right]\right).
\end{equation*}
$\E\left[\hat{G}_{\Phi}\middle|B(\Phi)\right]$ is the same for both estimators so we will focus on $\E\left[\Var\left(\hat{G}_{\Phi}\middle|B(\Phi)\right)\right]$. Let $\Cov$ represent covariance.
\begin{equation}\label{expected_variance}
\E\left[\Var\left(\hat{G}_{\Phi}\middle|B(\Phi)\right)\right]=\begin{multlined}[t]\sum_{b}\P(B(\Phi)=b)\Biggl(\sum_{\phi}\left(\frac{\partial\pi_\Phi(\phi|b)}{\partial\theta_\Phi}\right)^2\Var(\hat{Q}_\Phi(\phi)|B(\Phi)=b)\\
+\sum_{\phi}\sum_{\phi^{\prime}\neq\phi}\left(\frac{\partial\pi_\Phi(\phi|b)}{\partial\theta_\Phi}\frac{\partial\pi_\Phi(\phi^\prime|b)}{\partial\theta_\Phi}\right)\Cov(\hat{Q}_\Phi(\phi),\hat{Q}_\Phi(\phi^\prime)|B(\Phi)=b)\Biggr).
\end{multlined}
\end{equation}
We have already shown that $\Var(\hat{Q}_\Phi(\phi)|B(\Phi))$ is lower for $\hat{Q}^{HNCA}_\Phi(\phi)$. Let us now look at $\Cov(\hat{Q}_\Phi(\phi),\hat{Q}_\Phi(\phi^\prime)|B(\Phi))$. For $\hat{Q}^{RE}_\Phi(\phi)$, only one $\phi$ takes a nonzero value at a time (the sampled one), hence for $\phi\neq\phi^\prime$ the covariance can be expressed as:
\begin{align*}
\Cov(\hat{Q}^{RE}_\Phi(\phi),\hat{Q}^{RE}_\Phi(\phi^\prime)|B(\Phi))&=-\E[\hat{Q}^{RE}_\Phi(\phi)|B(\Phi)]\E[\hat{Q}^{RE}_\Phi(\phi^\prime)|B(\Phi)]\\
&=-\E[R|B(\Phi),\Phi=\phi]\E[R|B(\Phi),\Phi=\phi^\prime].
\end{align*}
For $\hat{Q}^{HNCA}_\Phi(\phi)$ we can express the covariance as follows:
\begin{align*}
\Cov&(\hat{Q}^{HNCA}_\Phi(\phi),\hat{Q}^{HNCA}_\Phi(\phi^\prime)|B(\Phi))=\begin{multlined}[t]\E[\hat{Q}^{HNCA}_\Phi(\phi)\hat{Q}^{HNCA}_\Phi(\phi^\prime)|B(\Phi)]\\
-\E[\hat{Q}^{HNCA}_\Phi(\phi)|B(\Phi)]\E[\hat{Q}^{HNCA}_\Phi(\phi^\prime)|B(\Phi)]
\end{multlined}\\
&=\begin{multlined}[t]\E\left[\frac{\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=\phi\right)\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=\phi^\prime\right)}{\P(C(\Phi)|B(C(\Phi))\setminus\Phi,B(\Phi))^2}R^2\middle|B(\Phi),\Phi=\phi\right]\\
-\E[R|B(\Phi),\Phi=\phi]\E[R|B(\Phi),\Phi=\phi^\prime].
\end{multlined}
\end{align*}
Note that the first term in this expression is always positive, while the second is equal to the covariance expression for $\hat{Q}^{RE}_\Phi(\phi)$. Thus, $\Cov(\hat{Q}^{HNCA}_\Phi(\phi),\hat{Q}^{HNCA}_\Phi(\phi^\prime)|B(\Phi))\geq\Cov(\hat{Q}^{RE}_\Phi(\phi),\hat{Q}^{RE}_\Phi(\phi^\prime)|B(\Phi))$ for all $\phi,\phi^\prime$. Putting this all together and looking at Equation~\ref{expected_variance}, we can conclude that $\Var(\hat{G}^{HNCA}_{\Phi})\leq\Var(\hat{G}^{RE}_{\Phi})$ as long as $\frac{\partial\pi_\Phi(\phi|B(\Phi))}{\partial\theta_\Phi}\frac{\partial\pi_\Phi(\phi^\prime|B(\Phi))}{\partial\theta_\Phi}$ is always negative. This is the case for Bernoulli neurons, since changing any given parameter will increase the probability of one action and decrease the probability of the other. Here we use Bernoulli neurons in all but the final layer, where we cannot apply $\hat{G}^{HNCA}_{\Phi}$ anyway since $C(\Phi)=\emptyset$. For more complex parameterizations (including softmax) this will not hold exactly. However, roughly speaking, since the probabilities $\pi_\Phi(\phi|B(\Phi))$ must sum to one over all $\phi$, their gradients must sum to zero, so the gradients with respect to different actions will tend to be negatively correlated, and thus $\hat{G}^{HNCA}_{\Phi}$ will tend to have lower variance.
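To make the Bernoulli claim explicit, write $\pi_\Phi(1|B(\Phi))=\sigma(l)$ with logit $l=\vec{\theta}\cdot\vec{x}+b$. Since $\pi_\Phi(0|B(\Phi))=1-\pi_\Phi(1|B(\Phi))$, for any scalar parameter $\theta$ of the unit
\begin{equation*}
\frac{\partial\pi_\Phi(1|B(\Phi))}{\partial\theta}\frac{\partial\pi_\Phi(0|B(\Phi))}{\partial\theta}=-\left(\frac{\partial\pi_\Phi(1|B(\Phi))}{\partial\theta}\right)^{2}\leq 0,
\end{equation*}
so the cross terms in Equation~\ref{expected_variance} always carry non-positive weights.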
\section{Details on Computationally Efficient Implementation}\label{implementation_details}
\begin{figure}[h]
\begin{minipage}{0.46\textwidth}
\begin{algorithm}[H]
Receive $\vec{x}$ from parents\\
$l=\vec{\theta}\cdot\vec{x}+b$\\
$p=\sigma(l)$\\
$\phi\sim \textit{Bernoulli}(p)$\\
Pass $\phi$ to children\\
Receive $\vec{q}_1,\vec{q}_0,R$ from children\\
$q_1=\prod_{i}\vec{q}_1[i]$\\
$q_0=\prod_{i}\vec{q}_0[i]$\\
$\bar{q}=pq_1+(1-p)q_0$\\
$\vec{l}_1=l+\vec{\theta}\odot(1-\vec{x})$\\
$\vec{l}_0=l-\vec{\theta}\odot\vec{x}$\\
$\vec{p}_1=\sigma(\vec{l}_1)$\\
$\vec{p}_0=\sigma(\vec{l}_0)$\\
Pass $\vec{p}_1,\vec{p}_0,R$ to parents\\
$\vec{\theta}=\vec{\theta}+\alpha\sigma^{\prime}(l)\vec{x}\left(\frac{q_1-q_0}{\bar{q}}\right)R$\\
$b=b+\alpha\sigma^{\prime}(l)\left(\frac{q_1-q_0}{\bar{q}}\right)R$
\caption{HNCA algorithm for Bernoulli hidden neuron}\label{bernoulli_alg}
\end{algorithm}
\end{minipage}
\hfill
\begin{minipage}{0.46\textwidth}
\begin{algorithm}[H]
Receive $\vec{x}$ from parents\\
$\vec{l}=\Theta\vec{x}+\vec{b}$\\
$\vec{p}=\frac{\exp{\vec{l}}}{\sum_i\exp{\vec{l}[i]}}$\\
Output $\phi\sim \vec{p}$\\
Receive $R$ from environment\\
\For{all $i$}{
$L_1[i]=\vec{l}+\Theta[:,i](1-\vec{x}[i])$\\
$L_0[i]=\vec{l}-\Theta[:,i]\vec{x}[i]$
}
$\vec{p}_1[i]=\frac{\exp{(L_1[i][\phi])}}{\sum_c\exp{(L_1[i][c])}}$\\
$\vec{p}_0[i]=\frac{\exp{(L_0[i][\phi])}}{\sum_c\exp{(L_0[i][c])}}$\\
Pass $\vec{p}_1,\vec{p}_0,R$ to parents\\
\For{all $i$}{
$\Theta[i]=\Theta[i]+\alpha\vec{x}(\ind(\phi=i)-\vec{p}[i])R$\\
$b[i]=b[i]+\alpha(\ind(\phi=i)-\vec{p}[i])R$
}
\caption{HNCA algorithm for Softmax output neuron}\label{softmax_alg}
\end{algorithm}
\end{minipage}
\end{figure}
The computational complexity of HNCA depends on how difficult it is to compute $\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=\phi\right)$ and $\P(C(\Phi)|B(C(\Phi))\setminus\Phi,B(\Phi))$ in Equation~\ref{HNCA_estimator}. When the individual neurons use a sufficiently simple parameterization, the method can be implemented as a backpropagation-like message passing procedure. In this case, the overall computation is proportional to the number of connections, as in backpropagation itself.
In our experiments, we will apply HNCA to solve a classification task formulated as a contextual bandit: the agent must guess the class from the input and receives a reward of 1 only if the guess is correct; unlike in supervised learning, it never observes the true class.
Our stochastic neural network model will consist of a number of hidden layers, wherein each neuron outputs according to a Bernoulli distribution. The policy of each Bernoulli neuron is parameterized as a linear function of its inputs, followed by a sigmoid activation. The policy of the output layer is a distribution over class labels, parameterized as a softmax. We now separately highlight the implementations for the softmax output and Bernoulli hidden nodes.
Algorithm~\ref{bernoulli_alg} shows the implementation of HNCA for the Bernoulli hidden nodes. The pseudo-code provided is for Bernoulli nodes with a zero-one mapping, but it is straightforward to modify it to use negative one and one instead, as we do in our main experiments. Lines 1-5 implement the forward pass. The forward pass simply takes input from the parents, uses it to compute the fire probability $p$ and samples $\phi\in\{0,1\}$. The backward pass receives two vectors of probabilities $\vec{q}_1$ and $\vec{q}_0$, each with one element for each child of the node. A given element represents $\vec{q}_{0/1}[i]=\P\left(C_i\middle|B(C_i)\setminus\Phi,\Phi=0/1\right)$ for a given child $C_i\in C(\Phi)$. Lines 7 and 8 take the product of all these child probabilities to compute $\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=0/1\right)$. Note that computing $\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=0/1\right)$ in this way is made possible by Assumption~\ref{parent_of_child}. Due to Assumption~\ref{parent_of_child}, no $C_i\in C(\Phi)$ can influence another $C_j\in C(\Phi)$ via a downstream path. Thus $\P\left(C(\Phi)\middle|B(C(\Phi))\setminus\Phi,\Phi=0/1\right)=\prod_i\P\left(C_i\middle|B(C_i)\setminus\Phi,\Phi=0/1\right)$. Line 9 uses $q_1$ and $q_0$ along with the fire probability $p$ to compute $\bar{q}=\P(C(\Phi)|B(C(\Phi))\setminus\Phi,B(\Phi))$.
Lines 10-13 use the already computed logit $l$ to efficiently compute vectors of probabilities $\vec{p}_1$ and $\vec{p}_0$, where each element corresponds to the counterfactual fire probability if all else were the same but a given parent's value were fixed to 1 or 0. Here, $\odot$ represents the element-wise product. Note that computing this in this way only requires compute time on the order of the number of parents (while naively computing each counterfactual probability from scratch would require time on the order of the number of parents squared). Line 14 passes the necessary information to the node's parents. Lines 15 and 16 finally update the parameters using $\hat{G}^{HNCA}_{\Phi}$ with learning-rate hyperparameter $\alpha$.
Algorithm~\ref{softmax_alg} shows the implementation for the softmax output node. Note that the output node itself uses the REINFORCE estimator in its update, as it has no children. Nonetheless, the output node still needs to provide information to its parents, which use HNCA. Lines 1-4 implement the forward pass, in this case producing an integer $\phi$ drawn from the possible classes. Lines 6-11 compute counterfactual probabilities of the given output class conditional on fixing the value of each parent. Note that $\Theta[:,i]$ refers to the $i$-th column of the matrix $\Theta$, i.e., the weights associated with parent $i$. In this case, computing these counterfactual probabilities requires computation on the order of the number of parents, times the number of possible classes. Line 12 passes the necessary information back to the parents. Lines 13-16 update the parameters according to $\hat{G}^{RE}_{\Phi}$.
Overall, the output node requires computation proportional to the number of parents times the number of classes, which equals the number of parameters in that node. Similarly, each hidden node requires computation proportional to its number of parents, which again is the same as its number of parameters. Thus the overall computation required for HNCA is on the order of the number of parameters, the same order as a forward pass, and the same order as backpropagation.
It's worth noting that, in principle, HNCA is also more parallelizable than backpropagation. Since no information from a node's children is needed to compute the information passed to its parents, the backward pass could be done in parallel across all layers. However, this can also be seen as a limitation of the proposed approach since the ability to condition on information further upstream could lead to further variance reduction.
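For concreteness, the following is a minimal NumPy sketch of Algorithm~\ref{bernoulli_alg} for a single hidden Bernoulli neuron. The class and variable names are ours, not from our experimental code, and a parent receiving the returned messages still has to convert $\vec{p}_1[i],\vec{p}_0[i]$ into probabilities of the realized output, i.e. $\vec{p}_1[i]$ if $\phi=1$ and $1-\vec{p}_1[i]$ otherwise.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BernoulliHNCANeuron:
    def __init__(self, n_parents, alpha=0.1, seed=0):
        self.theta = np.zeros(n_parents)
        self.b = 0.0
        self.alpha = alpha
        self.rng = np.random.default_rng(seed)

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)     # parent outputs
        self.l = self.theta @ self.x + self.b   # logit
        self.p = sigmoid(self.l)                # fire probability
        self.phi = float(self.rng.random() < self.p)
        return self.phi                         # passed to children

    def backward(self, q1_vec, q0_vec, R):
        # q1_vec[i] = P(C_i | B(C_i) \ Phi, Phi = 1); q0_vec for Phi = 0
        q1, q0 = np.prod(q1_vec), np.prod(q0_vec)
        q_bar = self.p * q1 + (1.0 - self.p) * q0
        # counterfactual fire probabilities with parent i fixed to 1 / 0
        p1 = sigmoid(self.l + self.theta * (1.0 - self.x))
        p0 = sigmoid(self.l - self.theta * self.x)
        # HNCA update from lines 15-16; sigma'(l) = p (1 - p)
        g = self.p * (1.0 - self.p) * (q1 - q0) / q_bar * R
        self.theta += self.alpha * g * self.x
        self.b += self.alpha * g
        return p1, p0, R                        # messages to parents
\end{verbatim}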
\section{Experiments with Zero-One Output Mapping}\label{exp_zero-one}
In the main body of the paper, our experiments map the binary output of Bernoulli neurons to negative one or one. We found that this worked much better than mapping to zero or one. In Figure~\ref{results_zero-one}, we show the results of REINFORCE and HNCA with a zero-one mapping. We also use a sigmoid activation instead of tanh for the backpropagation baseline, as this offers a closer analog to the stochastic zero-one mapping. In each case, the use of a zero-one mapping significantly hurt performance. Replacing tanh with sigmoid in the backpropagation baseline makes the difference between significantly outperforming ReLU and barely learning in the 2 and 3 hidden-layer cases.
\begin{figure*}[h]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_alpha_1_hidden.png}
\caption{Learning-rate sensitivity curves for 1 hidden layer.}
\end{subfigure}
\hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_alpha_2_hidden.png}
\caption{Learning-rate sensitivity curves for 2 hidden layers.}
\end{subfigure}
\hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_alpha_3_hidden.png}
\caption{Learning-rate sensitivity curves for 3 hidden layers.}
\end{subfigure}
\linebreak
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_training_epochs_1_hidden.png}
\caption{Learning curves for best learning-rate for 1 hidden layer.}
\end{subfigure}
\hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_training_epochs_2_hidden.png}
\caption{Learning curves for best learning-rate for 2 hidden layers.}
\end{subfigure}
\hfill
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{img/test_set_accuracies_vs_training_epochs_3_hidden.png}
\caption{Learning curves for best learning-rate for 3 hidden layers.}
\end{subfigure}
\caption{Learning curves and learning-rate sensitivity for HNCA (with a zero-one mapping) and baselines on contextual bandit MNIST. Green curves are HNCA, red are REINFORCE (each with a zero-one mapping), blue are backprop with sigmoid activations, and orange are backprop with ReLU activations. The architecture is a small neural network with 64 units per layer with different numbers of hidden layers. All plots show 10 random seeds with error bars showing 95\% confidence intervals. In order to show the fastest convergence among settings where final performance is similar, the best learning-rate is taken to be the highest learning-rate that is no more than one standard error from the learning-rate that gives the highest final accuracy.}\label{results_zero-one}
\end{figure*}
\end{document} |
Reductive Lie algebras have been shown to be the most convenient
class of algebras for physical applications. They arise naturally
as the Lie algebras of compact groups, and contain the class of
semisimple algebras. Moreover, they have an important property,
namely an invariant metric\footnote{This metric arises immediately
from the Killing tensor for the semisimple case.}, of crucial
importance in problems like defining Wess-Zumino-Witten models.
While such models were classically constructed on semisimple and
reductive algebras, models based on non-reductive algebras have
been shown to be of physical interest \cite{Wi}. Other important
applications of Lie algebras
endowed with a symmetric non-degenerate invariant form, which we
call here quasi-classical\footnote{Other authors call algebras
like these symmetric self-dual.}, are for example conformal field
theory, where they correspond to the Lie algebras admitting a
Sugawara construction, or the Yang-Baxter equations, where
quasi-classical algebras provide classes of solutions
\cite{Fi,Dr,Ok}.
\noindent
A Lie algebra $L$ is called quasi-classical if it possesses a
bilinear symmetric form $\langle .,.\rangle$ that satisfies the
constraints \begin{eqnarray} \langle \left[X,Y\right],Z \rangle = \langle
X,\left[Y,Z\right]\rangle, \\
{\rm If\ }\langle X,Y\rangle=0\ \forall X\in L,{\rm\ then\ }Y=0.
\end{eqnarray}
The first condition states that the bilinear form satisfies an
associativity condition (also called invariance), while the second
expresses its non-degeneracy. Given a basis
$\left\{X_{1},..,X_{n}\right\}$ of $L$ and the corresponding
structure tensor $\left\{C_{ij}^{k}\right\}$, we obtain the
expression of $\langle .,.\rangle$ as: \begin{equation} \langle X_{i},X_{j}
\rangle= g_{ij}. \end{equation}
Since the form is non-degenerate, the coefficient matrix of
$\langle .,.\rangle$ admits an inverse, whose entries we denote by
$g^{ij}=(g_{ij})^{-1}$. Obviously semisimple Lie algebras satisfy
these requirements for the Killing form. Also reductive and
abelian Lie algebras are trivially quasi-classical, although in
this case the Killing metric is no longer non-degenerate. In
\cite{Ok} it was shown that a necessary and sufficient condition
for the existence of such a form is that $L$ admits a
non-degenerate quadratic Casimir operator
$C=g^{\alpha\beta}x_{\alpha}x_{\beta}$. Using the realization by
differential operators
$\widehat{X}_{i}=C_{ij}^{k}x_{k}\frac{\partial}{\partial x_{j}}$,
this means that $C$ is a solution of the following system of
partial differential equations: \begin{equation}
C_{ij}^{k}x_{k}\frac{\partial}{\partial_{x_{j}}}C=0. \end{equation}
Using this characterization, we obtain a useful criterion to test
whether a Lie algebra is quasi-classical or not, which in certain
situations is more practical than various purely algebraic
structural results (see e.g. \cite{Fa} and references therein). In
particular, for any given dimension, the classification of
quasi-classical Lie algebras follows from the classification of
isomorphism classes once the invariants of the coadjoint
representation have been computed. Therefore the problem of
finding metrics reduces to an analytical problem, which is solved
in low dimension \cite{Pa,C48}.
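As an elementary illustration of this criterion, consider the
two-dimensional affine Lie algebra with bracket
$\left[X_{1},X_{2}\right]=X_{2}$. The system above reduces to
\begin{equation*}
x_{2}\frac{\partial C}{\partial x_{2}}=0,\qquad x_{2}\frac{\partial C}{\partial x_{1}}=0,
\end{equation*}
whose only polynomial solutions are the constants, so that this
algebra admits no non-degenerate quadratic Casimir operator and is
therefore not quasi-classical.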
\medskip
This paper is structured as follows: In section 2 we reformulate
the Yang-Baxter equations in terms of triple products, which
enables us to obtain some sufficiency criteria based only on the
structure tensor of a quasi-classical algebra. This triple
product formulation is used in combination with contractions of
Lie algebras to construct large classes of indecomposable
quasi-classical algebras that preserve the quadratic
non-degenerate Casimir operator of a semisimple classical Lie
algebra. In section 3 we focus on a kind of inverse problem,
namely, deformations of quasi-classical Lie algebras that preserve
the quadratic Casimir operator, and therefore, the associated
metric. This leads to a characterization of such deformations in
terms of integrable cocycles in the adjoint cohomology.
\section{Yang-Baxter equations and quasi-classical algebras}
\subsection{Yang-Baxter equations and triple products}
Yang-Baxter equations (YBE) have been known a long time to embody
the symmetries of two dimensional integrable models \cite{Ji}, and
also appear in many problems concerning statistical physics and
quantum groups. In addition to the classical semisimple case,
non-reductive quasi-classical Lie algebras were recognized to
provide some solutions of the YBE when these are rewritten in
terms of triple products \cite{Ok3}. With this reformulation, some
useful sufficient conditions can be found in dependence of the
structure tensor of the quasi-classical Lie algebra.
Let $V$ be a finite dimensional vector space with inner product
$\left\langle .,.\right\rangle $. For any basis $\left\{
v_{1},..,v_{n}\right\}$, we set the coefficients in the usual way
\begin{equation*}
\langle v_{i},v_{j}\rangle :=g_{ij}=g_{ji}
\end{equation*}
and define the raising of indices
\[
v^{j}=\sum_{i=1}^{n}g^{ij}v_{i}.
\]
Given a spectral parameter $\theta,$ we consider the map $R\left(
\theta\right) :V\otimes V\rightarrow V\otimes V$ defined by
\begin{equation}
R\left( \theta\right) \left( v_{i}\otimes v_{j}\right) =\sum_{k,l=1}%
^{n}R_{ij}^{kl}\left( \theta\right) v_{k}\otimes v_{l}. \end{equation} We
obtain the Yang-Baxter equations in its usual form \cite{Ji} from
the relations \begin{equation} R_{12}\left( \theta\right) R_{13}\left(
\theta^{\prime}\right) R_{23}\left( \theta^{\prime\prime}\right)
=R_{23}\left( \theta ^{\prime\prime}\right) R_{13}\left(
\theta^{\prime}\right) R_{12}\left( \theta\right) ,
\end{equation} where
$\theta^{\prime\prime}=\theta^{\prime}-\theta$. The equations can
be rewritten using triple products, which sometimes provides a
more convenient
presentation for solutions governed by certain types of purely solvable
quasi-classical Lie algebras. Introducing the triple products \cite{Ok3}%
\begin{equation}
\left[ v^{j},v_{k},v_{l}\right]
_{\theta}=\sum_{i=1}^{n}R_{kl}^{ij}\left(
\theta\right) v_{i},\;\left[ v^{i},v_{k},v_{l}\right] _{\theta}^{\ast}%
=\sum_{j=1}^{n}R_{kl}^{ij}\left( \theta\right) v_{j},
\end{equation}
the YBE reduces to the relation%
\begin{equation} \sum_{j=1}^{n}\left[ u,\left[ v,v_{j},w\right]
_{\theta^{\prime}},\left[ v^{j},x,y\right] _{\theta}\right]
_{\theta^{\prime\prime}}^{\ast}=\sum _{j=1}^{n}\left[ v,\left[
u,v_{j},x\right] _{\theta^{\prime}},\left[ v^{j},w,y\right]
_{\theta^{\prime\prime}}^{\ast}\right] _{\theta}, \label{YB11}
\end{equation}
where $u,v,w,x,y\in V$. A particularly interesting case is given
when the scattering matrix elements $R_{kl}^{ij}\left(
\theta\right) $ satisfy the
following constraint:%
\begin{equation} R_{kl}^{ij}\left( \theta\right) -R_{lk}^{ji}\left(
\theta\right) =0.
\end{equation}
In this case, the equation (\ref{YB11}) becomes%
\begin{equation}
\sum_{j=1}^{n}\left[ u,\left[ v,v_{j},w\right]
_{\theta^{\prime}},\left[
e^{j},x,y\right] _{\theta}\right] _{\theta^{\prime\prime}}=\sum_{j=1}%
^{n}\left[ v,\left[ u,v_{j},x\right] _{\theta^{\prime}},\left[
v^{j},w,y\right] _{\theta^{\prime\prime}}\right] _{\theta},
\end{equation}
subject to the condition
\[
\left\langle u,\left[ v,w,x\right] _{\theta}\right\rangle
=\left\langle v,\left[ u,x,w\right] _{\theta}\right\rangle .
\]
Even in this case, solving the equations is far from
trivial. However, it was found in \cite{Ok3} that if $L$ satisfies the condition%
\begin{equation}
\left[ L,\left[ \left[ L,L\right] ,\left[ L,L\right] \right]
\right]
=0,\label{MA}%
\end{equation}
then the commutation relation \begin{equation} \left[ R_{jk}\left(
\theta\right) ,R_{lm}\left( \theta^{\prime}\right) \right]
=0,\;j,k,l,m=1,2,3, \label{MA1} \end{equation} holds. This in particular implies
that the YBE (\ref{YB11}) is satisfied. Classes of solvable Lie
algebras in arbitrary dimension that satisfy these conditions have
been constructed in \cite{Ok3}, as well as examples where the
commutation relation (\ref{MA1}) is not necessarily satisfied.
\subsection{Contractions of quasi-classical Lie algebras}
In this paragraph we obtain additional classes of indecomposable
nilpotent Lie algebras satisfying condition (\ref{MA}). In
comparison with previous constructions, the class of algebras
obtained here follows naturally from contractions of simple real
Lie algebras that preserve the quadratic Casimir operator. We can
therefore construct quasi-classical algebras with prescribed inner
product.
We recall that a contraction $L\rightsquigarrow L^{\prime}$ of a
Lie algebra is given by the commutators
\begin{equation}
\left[X,Y\right]^{\prime}:=\lim_{t\rightarrow \infty}
\Phi_{t}^{-1}\left[\Phi_{t}(X),\Phi_{t}(Y)\right], \label{Kow}
\end{equation}
where $\Phi_{t}$ is a parameterized family of non-singular linear
maps in $L$ for all $t<\infty$.\footnote{It is assumed that the
limit (\ref{Kow}) exists for any pair $X,Y$ of generators in $L$.}
Among the various types of contractions existing, we consider here
the so-called generalized In\"on\"u-Wigner contractions given by
automorphisms of the type\footnote{For properties of this type of
contractions see e.g. \cite{We}.}
\begin{equation}
\Phi_{t}(X_{i})=t^{-n_{i}}X_{i},\quad n_{i}\in\mathbb{Z}.
\end{equation}
Contractions can also be extended to invariants. Let
$F(X_{1},...,X_{n})=\alpha^{i_{1}...i_{p}}X_{i_{1}}...X_{i_{p}}$
be a Casimir operator of degree $p$. Expressing it over the
transformed basis we get
\begin{equation}
F(\Phi_{t}(X_{1}),..,\Phi_{t}(X_{n}))=t^{n_{i_{1}}+...+n_{i_{p}}}\alpha^{i_{1}...i_{p}}X_{i_{1}}...X_{i_{p}}.
\end{equation}
Now let
\begin{equation}
M=\max \left\{n_{i_{1}}+...+n_{i_{p}}\quad |\quad
\alpha^{i_{1}..i_{p}}\neq 0\right\},
\end{equation}
and consider the limit
\begin{equation}
F^{\prime}(X_{1},..,X_{n})=\lim_{t\rightarrow \infty}
t^{-M}F(\Phi_{t}(X_{1}),...,\Phi_{t}(X_{n}))=\sum_{n_{i_{1}}+...+n_{i_{p}}=M}
\alpha^{i_{1}...i_{p}}X_{i_{1}}...X_{i_{p}}.
\end{equation}
It is not difficult to see that this expression is a Casimir
operator of degree $p$ of the contraction. Imposing that the
invariant remains unchanged by the contraction imposes certain
restrictions that need not be satisfiable in general \cite{C43}. For our
purpose, preservation of a non-degenerate quadratic Casimir
operator implies automatically that the contraction is
quasi-classical, and the induced inner product the same. To this
end, let $\frak{s}$ be a complex simple Lie algebra of classical
type $A_{l},B_{l},C_{l},D_{l}$, i.e., one of $\frak{sl}\left(
l+1,\mathbb{C}\right) ,\frak{so}\left( 2l+1,\mathbb{C}\right) ,\frak{sp}\left( 2l,\mathbb{C}%
\right) \,$\ and $\frak{so}\left( 2l,\mathbb{C}\right) $, regarded
as a non-compact simple real Lie algebra whose complexification is
$\frak{s\oplus s}$. Let $\left\{ X_{1},..,X_{n}\right\} $ and
$\frak{s}$ such that
\[
\left[ X_{i},X_{j}\right] =C_{ij}^{k}X_{k},\;\left[
Y_{i},Y_{j}\right] =C_{ij}^{k}Y_{k},\;\left[ X_{i},Y_{j}\right]
=0,
\]
i.e., the structure tensor is the same in both copies. Considering
the change of basis given by
\begin{equation}
\overline{X}_{i}=X_{i}+Y_{i},\;\overline{Y}_{i}=\sqrt{-1}\left( Y_{i}%
-X_{i}\right) ,\;i=1,..,n,\; \end{equation} the structure tensor over the
basis $\left\{ \overline{X}_{1},..,\overline
{X}_{n},\overline{Y}_{1},..,\overline{Y}_{n}\right\} $ is
expressed by
\begin{equation}
\left[ \overline{X}_{i},\overline{X}_{j}\right] =C_{ij}^{k}\overline{X}%
_{k},\;\left[ \overline{X}_{i},\overline{Y}_{j}\right] =C_{ij}^{k}%
\overline{Y}_{k},\;\left[ \overline{Y}_{i},\overline{Y}_{j}\right]
=-C_{ij}^{k}\overline{X}_{k}.
\end{equation}
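Indeed, a direct computation gives, for instance,
\begin{equation*}
\left[\overline{X}_{i},\overline{Y}_{j}\right]=\sqrt{-1}\left(\left[Y_{i},Y_{j}\right]-\left[X_{i},X_{j}\right]\right)=\sqrt{-1}\,C_{ij}^{k}\left(Y_{k}-X_{k}\right)=C_{ij}^{k}\overline{Y}_{k},
\end{equation*}
and similarly
$\left[\overline{Y}_{i},\overline{Y}_{j}\right]=-\left[Y_{i}-X_{i},Y_{j}-X_{j}\right]=-C_{ij}^{k}\left(X_{k}+Y_{k}\right)=-C_{ij}^{k}\overline{X}_{k}$.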
Since $\frak{s}\oplus\frak{s}$
is quasi-classical, being semisimple, it
admits the quadratic Casimir operators%
\[
C_{1}=g^{ab}X_{a}X_{b},\;C_{2}=g^{ab}Y_{a}Y_{b}.
\]
Suitable linear combinations of them provide a nondegenerate
quadratic operator on the direct sum. Rewriting these operators in
the new basis, we obtain the operators \begin{equation}
\overline{C}_{1}=g^{ij}\left(
\overline{X}_{i}\overline{X}_{j}-\overline
{Y}_{i}\overline{Y}_{j}\right) ,\;\overline{C}_{2}=g^{ij}\left(
\overline
{X}_{i}\overline{Y}_{j}+\overline{Y}_{i}\overline{X}_{j}\right) ,
\end{equation}
which are easily seen to be non-degenerate. It is natural to
ask whether there exist non-trivial contractions of
$\frak{s}\oplus\frak{s}$ such that the contraction preserves at
least one of the preceding Casimir operators. In this way, the
contraction is also quasi-classical with the same bilinear form as
the contracted algebra. The suitable operator to be tested is
$\overline{C}_{2}$. There is the well known obvious contraction
$\frak{s}\oplus \frak{s\rightsquigarrow
s}\overrightarrow{\oplus}_{ad\left( \frak{s}\right)
}nL_{1}$ determined by the parameterized changes of basis%
\[
F_{t}\left( \overline{X}_{i}\right)
=\overline{X}_{i},\;F_{t}\left( \overline{Y}_{i}\right)
=\frac{1}{t}\overline{Y}_{i},\;i=1,..,n.
\]
The contracted invariants are as follows:%
\begin{eqnarray}
\lim_{t\rightarrow\infty}\frac{1}{t^{2}}\overline{C}_{1} =-g^{ij}%
\overline{Y}_{i}\overline{Y}_{j},\nonumber \\
\lim_{t\rightarrow\infty}\frac{1}{t}\overline{C}_{2} =g^{ij}\left(
\overline{X}_{i}\overline{Y}_{j}+\overline{Y}_{i}\overline{X}_{j}\right)
. \end{eqnarray}
Therefore the contraction preserves the invariant
$\overline{C}_{2}$ and is
quasi-classical. There is another possibility of contracting $\frak{s}%
\oplus\frak{s}$ that also leads to a quasi-classical contraction.
Consider the parameterized change of basis
\[
G_{t}\left( \overline{X}_{i}\right) =\frac{1}{t}\overline{X}_{i}%
,\;G_{t}\left( \overline{Y}_{i}\right) =\frac{1}{\sqrt{t}}\overline{Y}%
_{i},\;i=1,..,n.
\]
Then the contraction $L^{\prime}$ has the following brackets%
\begin{equation}
\left[ \overline{X}_{i},\overline{X}_{j}\right] =0,\;\left[
\overline {X}_{i},\overline{Y}_{j}\right] =0,\;\left[
\overline{Y}_{i},\overline {Y}_{j}\right]
=-C_{ij}^{k}\overline{X}_{k}.
\end{equation}
Therefore $L^{\prime}$ is an indecomposable
nilpotent Lie algebra with $n$-dimensional centre. Contracting the
invariants of $\frak{s}\oplus\frak{s}$ we obtain
\begin{eqnarray}
\lim_{t\rightarrow\infty}\frac{1}{t^{2}}\overline{C}_{1} =g^{ij}%
\overline{X}_{i}\overline{X}_{j},\nonumber \\
\lim_{t\rightarrow\infty}\frac{1}{t^{3/2}}\overline{C}_{2} =g^{ij}\left(
\overline{X}_{i}\overline{Y}_{j}+\overline{Y}_{i}\overline{X}_{j}\right)
. \end{eqnarray}
Thus the contraction is again quasi-classical, and it further satisfies the conditions%
\[
C^{1}L^{\prime}=\left[ L^{\prime},L^{\prime}\right]
=\left\langle \overline{X}_{1},..,\overline{X}_{n}\right\rangle
,\;C^{2}L^{\prime}=\left[ L^{\prime},C^{1}L^{\prime}\right] =0,
\]
showing that $L^{\prime}$ is a metabelian Lie algebra. It actually
satisfies the condition (\ref{MA}), and therefore the Yang-Baxter
equations (\ref{YB11}). The same formal construction holds for
direct sums of copies of exceptional complex Lie algebras.
Using specific realizations, like boson and fermion operators,
other classes of quasi-classical algebras that preserve the
quadratic Casimir operator of a simple Lie algebra can be
constructed \cite{C43}. This occurs for example for the symplectic
Lie algebras $\frak{sp}(N,\mathbb{R})$ given by the creation and
annihilation operators: consider the linear operators
$a_{i},a_{j}^{\dagger}\;\left( i,j=1..N\right) $ satisfying the
commutation relations \begin{equation} \left[ a_{i},a_{j}^{\dagger}\right]
=\delta_{ij}\mathbb{I},\quad \left[ a_{i},a_{j}\right] =\left[
a_{i}^{\dagger},a_{j}^{\dagger }\right] =0 \label{Kl2} \end{equation}
Considering the operators $\left\{ a_{i}^{\dagger}a_{j},a_{i}^{\dagger}%
a_{j}^{\dagger},a_{i}a_{j}\right\} $, we generate
$\frak{sp}\left( N,\mathbb{R}\right) $, the brackets of which
follow easily from (\ref{Kl2}). If we introduce the labelling \begin{equation}
X_{i,j} =a_{i}^{\dagger}a_{j},\quad X_{-i,j}
=a_{i}^{\dagger}a_{j}^{\dagger},\quad X_{i,-j} =a_{i}a_{j},\;1\leq
i,j\leq N \label{Er2}, \end{equation}
the brackets of $\frak{sp}\left(
N,\mathbb{R}\right) $ can be written as: \begin{equation}
\left[ X_{i,j},X_{k,l}\right] =\delta_{jk}X_{il}-\delta_{il}X_{kj}%
+\varepsilon_{i}\varepsilon_{j}\delta_{j,-l}X_{k,-i}-\varepsilon
_{i}\varepsilon_{j}\delta_{i,-k}X_{-j,l}, \label{Kl3} \end{equation} where
$-N\leq i,j,k,l\leq N$, $\varepsilon_{i}={\rm sgn}\left( i\right)
$ and $X_{i,j}+\varepsilon_{i}\varepsilon_{j}X_{-j,-i}=0$.
Defining now the family of automorphisms \begin{eqnarray}
\Phi_{t}(X_{i,i})=\frac{1}{\sqrt{t}}X_{i,i},\; (1\leq i\leq
N),\quad \Phi_{t}(X_{i,j})=\frac{1}{t}X_{i,j},\; (i < j),\quad
\Phi_{t}(X_{i,j})=X_{i,j},\; (i > j)\nonumber\\
\Phi_{t}(X_{-i,j})=\frac{1}{t}X_{-i,j},\quad
\Phi_{t}(X_{i,-j})=X_{i,-j},\; (1\leq i,j\leq N), \end{eqnarray} it is
straightforward to verify that
\begin{equation}
\lim_{t\rightarrow \infty}
\Phi_{t}^{-1}\left[\Phi_{t}(X),\Phi_{t}(Y)\right], \label{Ko}
\end{equation}
exists for any pair of generators $X,Y\in\frak{sp}(N,\mathbb{R})$
and defines a nilpotent Lie algebra. Moreover, it follows that the
(non-symmetrized) non-degenerate quadratic Casimir
operator \begin{equation}
C=2\left(x_{i,-j}x_{-i,j}-x_{i,j}x_{j,i}\right)+x_{i,-i}x_{-i,i}-x_{i,i}^{2}
\end{equation} is preserved by the contraction. In contrast to the previous
example, the contraction has no fixed nilpotency index, and
therefore need not satisfy the sufficient condition (\ref{MA}).
We remark that the contraction method has previously been used in
\cite{Ol} to obtain limits of WZW-models, but without imposing
preservation of invariants by contraction.
\section{Deformations of quasi-classical algebras}\label{sec:three}
Although the obtainment of quasi-classical Lie algebras using
contractions is quite natural, we can also consider a kind of
inverse procedure, namely construct quasi-classical algebras by
deformation of a given one. Like in the preceding case, the Ansatz
is to impose that a non-degenerate quadratic Casimir operator
remains invariant by the deformation. This imposition will lead to
certain tensor equations that the cocycle generating the
deformation must satisfy.
Recall that a cocycle $\varphi\in H^{2}(L,L)$ is a bilinear
skew-symmetric map that satisfies the constraint \cite{Ri}: \begin{eqnarray}
d\varphi(X_{i},X_{j},X_{k}):=\left[X_{i},\varphi(X_{j},X_{k})\right]+\left[X_{k},\varphi(X_{i},X_{j})\right]+
\left[X_{j},\varphi(X_{k},X_{i})\right]+\nonumber \\
+\varphi(X_{i},\left[X_{j},X_{k}\right])
+\varphi(X_{k},\left[X_{i},X_{j}\right])+\varphi(X_{j},\left[X_{k},X_{i}\right])=0.
\label{K1} \end{eqnarray}
for all generators $X_{i},X_{j},X_{k}$ of $L$. We further say that
$\varphi$ is integrable if it additionally satisfies the condition
\begin{eqnarray}
\varphi\left(\varphi(X_{i},X_{j}),X_{k}\right)+
\varphi\left(\varphi(X_{j},X_{k}),X_{i}\right)+\varphi\left(\varphi(X_{k},X_{i}),X_{j}\right)=0
\label{K3}.
\end{eqnarray}
Under these conditions, it is straightforward to verify that the
linear deformation $L+\varphi$ is a Lie algebra \cite{Ri} with the
deformed bracket
\begin{equation}
\left[X,Y\right]_{\varphi}=\left[X,Y\right]+\varphi(X,Y).
\end{equation}
Supposing that $L$ is a quasi-classical Lie algebra with
(unsymmetrized) quadratic Casimir operator
$C=g^{\alpha\beta}x_{\alpha}x_{\beta}$, we want to determine
conditions on the integrable cocycle in order to impose that
$L+\varphi$ is also quasi-classical, and has the same quadratic
invariant $C$. To this end, we realize the deformation
$L+\varphi$ by differential operators, and obtain the system of
PDEs \begin{equation}
\widehat{X}_{i}(C)=C_{ij}^{k}x_{k}\frac{\partial C}{\partial x_{j}}-\alpha_{ij}^{k}x_{k}%
\frac{\partial C}{\partial x_{j}}=0. \label{L1} \end{equation}
Here
\begin{equation}
\varphi\left( X_{i},X_{j}\right) =\alpha_{ij}^{k}X_{k}%
\end{equation}
is the expression of the cocycle $\varphi$ over the given basis.
Inserting the operator $C$ into the previous system (\ref{L1}), we
obtain \begin{equation}
\widehat{X}_{i}(C)=C_{ij}^{k}x_{k}g^{\alpha\beta}\frac{\partial\left(
x_{\alpha}x_{\beta }\right) }{\partial
x_{j}}-\alpha_{ij}^{k}x_{k}g^{\alpha\beta}\frac {\partial\left(
x_{\alpha}x_{\beta}\right) }{\partial x_{j}}.\label{S1} \end{equation}
Since $C$ is an invariant of the undeformed Lie algebra $L$, the
first term reduces to zero, i.e.,
\[
C_{ij}^{k}x_{k}g^{\alpha\beta}\frac{\partial\left(
x_{\alpha}x_{\beta }\right) }{\partial x_{j}}=0,
\]
and equation (\ref{S1}) reduces to \begin{equation} \widehat{X}_{i}(C)=-\alpha
_{ij}^{k}x_{k}g^{\alpha\beta}\frac{\partial\left(
x_{\alpha}x_{\beta}\right) }{\partial x_{j}}.\label{S2} \end{equation}
If $C$ is a common invariant of $L$ and $L+\varphi$, then equation
(\ref{S2}) must vanish. Taking into account that for any $1\leq
j\leq N$ the derivatives are given by
\begin{equation} \frac{\partial}{\partial
x_{j}}\left( g^{kl}x_{k}x_{l}\right) =\sum_{l\neq
j}g^{lj}x_{l}+2g^{jj}x_{j}, \end{equation}
inserting it into equation (\ref{S2}) and reordering the terms, we
obtain that for any fixed $1\leq i\leq N$ the following system of
equations must be satisfied:
\begin{eqnarray}
\sum_{j=1}^{N}\alpha_{ij}^{i}g^{ij} =0,\label{T1} \\
2\alpha_{ij}^{j}g^{jj}+\sum_{k=1}^{N}\alpha_{ik}^{j}g^{ik}+\sum_{k\neq
j}\alpha_{ik}^{i}g^{jk} =0,\\
2\alpha_{ij}^{j}g^{jj}+\sum_{k\neq j}\alpha_{ik}^{j}g^{kj} =0,\\
\alpha_{ij}^{j}g^{jk}+\alpha_{ik}^{k}g^{jk}+2\alpha_{ij}^{k}g^{jj}%
+2\alpha_{ik}^{j}g^{kk}+\sum_{l\neq i,j,k}\left( \alpha_{il}^{j}g^{kl}%
+\alpha_{il}^{k}g^{kl}\right) =0.\label{T2}
\end{eqnarray}
System (\ref{T1})--(\ref{T2}) provides a necessary and sufficient
condition for a linear deformation $L+\varphi$ (with respect to an
integrable cocycle $\varphi$) to be quasi-classical and preserve
the non-degenerate quadratic Casimir operator of $L$.
As example, consider the indecomposable six dimensional nilpotent
Lie algebra $\frak{n}$ given by the brackets
\[
\left[X_{4},X_{5}\right]=2X_{2},\;
\left[X_{4},X_{6}\right]=-2X_{3},\;
\left[X_{5},X_{6}\right]=-X_{1}.
\]
This algebra is trivially seen to be quasi-classical with
non-degenerate quadratic Casimir operator
$C=x_{1}x_{4}+2x_{2}x_{6}+x_{3}x_{5}$.\footnote{Here the
symmetrized form is simply
$C_{symm}=X_{1}X_{4}+2X_{2}X_{6}+X_{3}X_{5}$.} The coefficients of
the associated form are
\[
g^{14}=g^{41}=\frac{1}{2},\; g^{26}=g^{62}=g^{35}=g^{53}=1.
\]
It can be shown that $\dim H^{2}(\frak{n},\frak{n})=30$. Consider
now the nontrivial cocycle given by \begin{eqnarray}
\varphi(X_{1},X_{2})=2X_{2},\; \varphi(X_{1},X_{3})=-2X_{3},\;
\varphi(X_{1},X_{5})=2X_{5},\;\varphi(X_{1},X_{6})=-2X_{6},\;\varphi(X_{2},X_{3})=X_{1},\nonumber\\
\varphi(X_{2},X_{4})=-2X_{5},\;\varphi(X_{2},X_{6})=X_{4},\;
\varphi(X_{3},X_{4})=2X_{6},\;\varphi(X_{3},X_{5})=-X_{4}. \end{eqnarray}
It can be verified that $\varphi$ satisfies equation (\ref{K3}),
and is therefore integrable. Let $\frak{g}=\frak{n}+\varphi$ be
the corresponding linear deformation. With some computation it can
be shown that $\varphi$ satisfies the system
(\ref{T1})-(\ref{T2}), which implies that the deformation
$\frak{g}$ is quasi-classical and has $C$ as invariant. Actually,
$\frak{g}$ is isomorphic to the semisimple algebra
$\frak{so}(2,2)$, and considering the maps
\[
f_{t}(X_{i})=\frac{1}{t}X_{i},\; (i=1,2,3),\quad
f_{t}(X_{i})=\frac{1}{\sqrt{t}}X_{i},\; (i=4,5,6)
\]
in $\frak{g}=\frak{n}+\varphi$, the corresponding contraction
recovers $\frak{n}$ and preserves the invariant. Although in
general a deformation is not associated to a contraction
\cite{C63}, this example illustrates a general result, the proof
of which follows by direct computation:
Let $L\rightsquigarrow L^{\prime}$ be a non-trivial contraction of
a quasi-classical Lie algebra $L$ that preserves a non-degenerate
Casimir operator $C$. Then there exists an integrable cocycle
$\varphi\in H^{2}(L^{\prime},L^{\prime})$ such that $\varphi$
satisfies (\ref{T1})-(\ref{T2}) and the linear deformation
$L^{\prime}+\varphi$ preserves the Casimir operator $C$.
We finally remark that deforming quasi-classical Lie algebras by
means of integrable cocycles satisfying conditions
(\ref{T1})-(\ref{T2}) has the advantage of preserving the
signature of the metric. For applications to the WZW model, this
means that the signature of the space-time is preserved, and
therefore, both the deformed and non-deformed models can be
compared since they are described by the same metric. Although a
difficult task in general, it would be an interesting problem to
characterize those Lie algebras which actually admit cocycles of
this type. Work in this direction is in progress.
\section*{Acknowledgements}
\noindent
The author was partially supported by the research project
MTM2006-09152 of the Ministerio de Educaci\'on y Ciencia.
|
\section{Introduction}
Cardinality estimation (\textsf{CardEst}), which aims at predicting the result size of a SQL query without its actual execution, is a longstanding and fundamental problem in DBMS. It is the core component of query optimizers~\cite{howgoodare,4,6} to produce high-quality query plans. Although a variety of \textsf{CardEst} methods have been proposed in the last several decades, it remains to be a notoriously challenging problem in the DB community.
\smallskip
\noindent{\underline{\textbf{Status and challenges of \textsf{CardEst}.}}}
Given a table $T$ on attributes $\{T_1, \ldots, T_n\}$ and a query $Q$, \textsf{CardEst} is equivalent to estimating the probability of tuples in $T$ satisfying $Q$.
Therefore, the core problem of \textsf{CardEst} is how to model the distribution of $T$ to estimate the probability of $Q$. Based on existing work~\cite{wang2020ready}, we believe that an applicable \textsf{CardEst} method should satisfy criteria from three aspects, namely \emph{A(Algorithm)}, \emph{D(Data)} and \emph{S(System)}. (\emph{A}): the \textsf{CardEst} algorithm itself should have high estimation accuracy, fast inference (and training) time, lightweight storage cost, and efficient updating process, in order to generate high-quality query plans~\cite{zhu2020flat, perron2019learned}.
(\emph{D}): the \textsf{CardEst} method should maintain stable performance for different data with varied distribution, attribute correlation, domain size, and number of attributes. (\emph{S}): the \textsf{CardEst} method should be friendly for system deployment with interpretable model, predictable behaviors, reproducible results, and easy for debugging~\cite{wang2020ready}.
The simplest \textsf{CardEst} method assumes that all attributes are mutually independent and builds a histogram on each $T_i$. Its estimation latency is low but the error is high since correlations between attributes are ignored. Another class of methods samples tuples from $T$ for \textsf{CardEst}. They can be inaccurate on high-dimensional data or queries with small cardinality. These traditional \textsf{CardEst} methods have significant algorithmic drawbacks and unstable performance w.r.t. varied data, but they are friendly for system deployment.
Recently, numerous works attempt to utilize machine learning (ML), especially deep learning (DL) techniques for \textsf{CardEst}. They either build supervised models mapping a featurized query $Q$ to its cardinality~\cite{7, MSCN} or learn unsupervised models of $P_T$, the joint probability distribution of table $T$, to support computing the probability of any query $Q$ on $T$~\cite{zhu2020flat, deepDB, naru}. DL-based \textsf{CardEst} methods greatly improve the estimation accuracy but often sacrifice other algorithm aspects. More importantly, their performance can be greatly affected by the data, and they are often difficult to deploy in systems, owing to, e.g., hyper-parameter tuning and their ``black-box'' nature.
Table~\ref{ADSsummary} summarizes the status of existing \textsf{CardEst} methods according to the \emph{ADS} criteria. We can clearly see that no existing solution satisfactorily addresses this problem.
\begin{table}[t]
\centering
\caption{Status of \textsf{CardEst} methods according to \emph{ADS} criteria.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\tabincell{c}{\textsf{CardEst} \\ Methods}} & \multicolumn{5}{c|}{Algorithm} & \multicolumn{4}{c|}{Data} & \multicolumn{4}{c|}{System}
\\\cline{2-14}
& \rot{Accuracy} & \rot{Latency} & \rot{Training} & \rot{Model Size} & \rot{Updating} & \rot{Distribution} & \rot{Correlation} &\rot{Domain} & \rot{Scale} &\rot{Debug} &\rot{Interpret} &\rot{Predict} &\rot{Reproduce} \\\hline
\it Histogram &\xmark &\cmark &\cmark &\cmark &\cmark &\cmark &\xmark &\cmark & \xmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\it Sampling &\xmark &\xmark &\cmark &\cmark &\cmark &\xmark &\cmark &\xmark & \xmark &\cmark &\xmark &\cmark &\xmark \\ \cline{1-14}
\it Naru &\cmark &\xmark &\cmark &\cmark &\xmark &\xmark &\cmark &\xmark & \xmark &\xmark &\xmark &\xmark &\xmark \\ \cline{1-14}
\it DeepDB &\cmark &\cmark &\cmark &\cmark &\xmark &\cmark &\xmark &\cmark & \xmark &\xmark &\xmark &\cmark &\cmark \\ \cline{1-14}
\it FLAT &\cmark &\cmark &\cmark &\cmark
&\xmark &\cmark &\cmark &\cmark & \xmark &\xmark &\xmark &\cmark &\cmark \\ \cline{1-14}
\it MSCN &\xmark &\cmark &\xmark &\cmark
&\xmark &\cmark &\cmark &\cmark & \xmark &\xmark &\xmark &\xmark &\cmark \\ \cline{1-14}
\it BN &\cmark &\xmark &\xmark &\cmark
&\cmark &\cmark &\cmark &\cmark & \cmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\textbf{\textit{BayesCard}} &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark & \cmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\end{tabular}}
\vspace{-1.5em}
\label{ADSsummary}
\end{table}
\smallskip
\noindent{\underline{\textbf{Our motivation.}}}
Recently, Bayesian networks (BNs), a classical class of models, have re-attracted much attention in the ML community as a means to overcome the drawbacks of deep models~\cite{zhu2020efficient,lee2019scaling,ye2020optimizing}, and they are naturally suitable for \textsf{CardEst}~\cite{2001SigmodGreedy, tzoumas2013vldb, dasfaa2019}. In comparison with other methods, BNs have significant advantages in terms of the \emph{ADS} criteria.
First, from the algorithm perspective, BNs are very compact and easy to update.
Second, BNs reflect the intrinsic causal relations between attributes, which are robust to the data changes. Thus, they tend to maintain stable performance as the data varies in terms of correlation, distribution, and etc.
Third, BNs are \emph{interpretable}, easy to predict, maintain, validate and improve with expert knowledge, thus friendly for system deployment.
These attractive models were proposed decades ago~\cite{2001SigmodGreedy, tzoumas2013vldb}, but BNs' NP-hard model construction process and intractable probability inference algorithms have made them impractical for DBMS.
\emph{In summary, as long as we can overcome the inefficiency of model construction and probability inference of BNs, we can obtain a desirable method for \textsf{CardEst} satisfying the ADS criteria simultaneously.}
\smallskip
\noindent{\underline{\textbf{Our contributions.}}}
In this paper, we try to resolve the \textsf{CardEst} challenges by revitalizing BNs with new equipments. We propose \textit{BayesCard}, a unified Bayesian framework for \textsf{CardEst}. The key idea of \textit{BayesCard} is to build an ensemble of BNs to model the distributions of tables in a database, and use the constructed BNs to estimate the cardinality of any query. \textit{BayesCard} incorporates the recent advances in probabilistic programming languages (PPLs) for building BNs~\cite{edward,pyro,InferNET18,schreiber2018pomegranate,ankan2015pgmpy, pymc}.
PPLs allow for a declarative specification of probabilistic models, within which each variable is defined as a probability distribution influenced by others.
Based on PPLs, we can easily define BNs to support various structure learning, parameter learning, and inference algorithms. Therefore, \textit{BayesCard} provides a user-friendly framework of building different BNs suitable for various data and system settings.
The key techniques of \textit{BayesCard} overcome the deficiency of existing BNs.
First, based on PPLs, \textit{BayesCard} designs the \emph{progressive sampling} and \emph{compiled variable elimination} probability inference algorithms, which significantly accelerate the traditional BN's inference process. Moreover, \textit{BayesCard} adapts its inference algorithms to efficiently handle multi-table join queries. Second, \textit{BayesCard} designs an efficient model construction algorithm for building an ensemble of BNs. Furthermore, using PPLs, \textit{BayesCard} can pre-specify constraints on the learned BN structure with prior knowledge to speed up the structure learning process. An accurate and lightweight BN structure could be obtained efficiently.
By our benchmark evaluation results, in comparison with DL-based \textsf{CardEst} methods, \textit{BayesCard} achieves comparable or better accuracy, $1$--$2$ orders of magnitude lower inference latency (close to \emph{Histogram}) and update time, and $1$--$3$ orders of magnitude faster training time and smaller model size.
Meanwhile, \textit{BayesCard} keeps stable performance when varying data with different settings.
We also integrate \textit{BayesCard} into PostgreSQL. On the benchmark workload, it improves the end-to-end query time by $13.3\%$, which is very close to the optimal result of $14.2\%$ using the true cardinality.
In summary, the main contributions of this paper are as follows:
\indent $\bullet$
We analyze the existing \textsf{CardEst} methods in terms of the \emph{ADS} criteria to evaluate a good and practical \textsf{CardEst} method. (Section~\ref{sect2})
\indent $\bullet$
We propose \textit{BayesCard}, a general framework that unifies the efforts behind PPLs for constructing BNs for \textsf{CardEst}. (Section~\ref{sect3})
\indent $\bullet$
We develop algorithms and techniques in \textit{BayesCard} using PPLs to improve inference latency and reduce the model construction cost, which help \textit{BayesCard} attain the desired properties of \textsf{CardEst} methods.
(Section~\ref{sect4} and~\ref{sect5})
\indent $\bullet$ We conduct extensive experiments on benchmarks and integrate \textit{BayesCard} into real-world system to demonstrate its superiority from \textit{ADS} criteria. (Section~\ref{sect6})
\section{Problem definition and Analysis}
\label{sect2}
In this section, we first formally define the \textsf{CardEst} problem from both database and statistical perspectives and then exhaustively examine the existing traditional methods and state-of-the-art DL-based methods for \textsf{CardEst} from the \emph{ADS} criteria.
\smallskip
\noindent\underline{\textbf{\textsf{CardEst} problem.}}
Let $T$ be a table with $n$ attributes $T_1, \cdots, T_n$.
For each $1 \leq i \leq n$, let $D(T_i)$ denote the domain (all unique values) of attribute $T_i$. Any selection query $Q$ on $T$ can be represented in a canonical form\footnote{Handling pattern matching queries or string predicates (e.g., ``LIKE'' queries) requires extensions (such as q-grams~\cite{chaudhuri2004selectivity}), which we do not consider in this paper.} as $Q = \{T_1 \in R_Q(T_1) \wedge T_2 \in R_Q(T_2) \wedge \cdots \wedge T_n \in R_Q(T_n)\}$, where $R_Q(T_i) \subseteq D(T_i)$ is the region specified by $Q$ over attribute $T_i$. Without loss of generality, we have
$R_Q(T_i) = D(T_i)$ if $Q$ has no constraint on attribute $T_i$.
Let $C_Q$ denote the cardinality, i.e., the number of tuples in $T$ satisfying query $Q$. From a statistical perspective, we can also regard all tuples in $T$ as points sampled according to the joint distribution $P_T = P_T(T_1, T_2, \dots, T_n)$ of all attributes. Let $P_T(Q) = P_T(T_1 \in R_Q(T_1), T_2 \in R_Q(T_2), \cdots , T_n \in R_Q(T_n))$ be the probability specified by the region of $Q$. Then, we have $C_Q = P_T(Q) \cdot |T|$. Thus, the \textsf{CardEst} problem can essentially be reduced to modeling the probability density function (PDF) $P_T$ of table $T$. In this paper, we focus on data-driven \textsf{CardEst} methods, which try to model $P_T$ directly. For query-driven \textsf{CardEst} methods,
they implicitly model $P_T$ by building functions mapping $Q$ to $P_T(Q)$.
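To make the statistical view concrete, the following toy, self-contained Python sketch (for illustration only; all names and numbers are ours) recovers the cardinality of a range query from the exact joint distribution of a two-attribute table as $C_Q = P_T(Q)\cdot|T|$.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
T = np.column_stack([rng.integers(0, 3, 1000),   # attribute T1
                     rng.integers(0, 2, 1000)])  # attribute T2

P = np.zeros((3, 2))               # empirical joint distribution P_T
for t1, t2 in T:
    P[t1, t2] += 1
P /= len(T)

R1, R2 = [0, 1], [1]               # query Q: T1 in {0,1} AND T2 = 1
prob = sum(P[a, b] for a, b in itertools.product(R1, R2))
est = prob * len(T)                # C_Q = P_T(Q) * |T|
true = sum(1 for t1, t2 in T if t1 in R1 and t2 in R2)
print(est, true)                   # exact joint => exact cardinality
\end{verbatim}
Of course, the whole difficulty of \textsf{CardEst} lies in representing and querying $P_T$ compactly when the exact joint distribution is too large to materialize.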
\smallskip
\noindent\underline{\textbf{Existing \textsf{CardEst} Methods.}} We review the two traditional methods widely used by commercial DBMS and four state-of-the-art (SOTA) DL-based methods.
\textit{1). Histogram}~\cite{10} method assumes all attributes in $T$ are independent, and thus $P_T$ can be estimated as the $\prod_{i=1}^n P_T(T_i)$.
\textit{2). Sampling} is a model-free method, which fetches tuples from $T$ on-the-fly to estimate the probability of $Q$ on the samples.
\textit{3). Naru}~\cite{naru}, based on deep auto-regression models (DAR)~\cite{made}, factorizes $P_T$ as $P_T(T_1) \cdot \prod_{i=2}^{n} P_T(T_i|T_1,\ldots,T_{i-1})$ and approximates each conditional PDF by a deep neural network (DNN).
\textit{4). DeepDB}~\cite{deepDB}, based on sum-product networks (SPN)~\cite{SPN}, approximates $P_T$ by recursively decomposing it into local and simpler PDFs. Specifically, the tree-structured SPN contains sum nodes to split $P_T$ into multiple $P_{T'}$ on tuple subsets $T' \subseteq T$, product nodes to decompose $P_{T'}$ into $P_{T'}(T_i) \cdot P_{T'}(T_j)$ if attributes $T_i$ and $T_j$ are independent, and leaf nodes when $P_{T'}$ is a univariate PDF.
\textit{5). FLAT}~\cite{zhu2020flat}, based on factorized-split-sum-product networks (FSPN)~\cite{wu2020fspn}, improves over SPN by
adaptively decomposing $P_T$ according to the attribute dependence level. It adds the factorize node to split $P_T$ as $P_T(W) \cdot P_T(H | W)$ where $H$ and $W$ are highly and weakly correlated attributes in $T$. $P_T(W)$ is modeled in the same way as SPN. $P_T(H | W)$ is decomposed into small PDFs by the split nodes until $H$ is locally independent of $W$. Then, the multi-leaf node is used to model the multivariate PDF $P_T(H)$ directly.
\textit{6). MSCN}~\cite{MSCN} is a query-driven method that uses a set-convolutional DNN to learn the mapping function between an input query $Q$ and its probability $P_T(Q)$.
\smallskip
\noindent\underline{\textbf{Analysis Results.}} We elaborate the \emph{ADS} criteria for \textsf{CardEst} problem and analyze the aforementioned methods in details. The results are summarized in Table~\ref{ADSsummary}.
\noindent\textit{$\bullet$ \textbf{Algorithm.}}
From the algorithm's perspective, we consider five important metrics that are widely used in existing work~\cite{deepDB, zhu2020flat} to evaluate the performance of \textsf{CardEst} methods.
$\bigcdot$
\emph{Estimation accuracy} is one of the priorities for \textsf{CardEst} since inaccurate estimation typically leads to sub-optimal and slow query plans~\cite{howgoodare}. Unfortunately, the traditional methods frequently incur poor estimates: \emph{Histogram} can cause large estimation errors in the presence of attribute correlations, and \emph{Sampling} may be inaccurate on high-dimensional data with limited sample size.
Query-driven methods, such as \emph{MSCN}, also have poor accuracy if the target query does not follow the same distribution as the query workload that the model is trained on. By existing evaluations~\cite{naru, deepDB, zhu2020flat}, DL-based \textsf{CardEst} methods can produce accurate results.
$\bigcdot$
\emph{Inference latency} is crucial since a \textsf{CardEst} method needs to be executed numerous times in query optimization~\cite{3,6}. As a result, high latency may degrade the end-to-end query time spent on plan generation and execution.
The inference latency of \emph{Naru} is high because of its large underlying DNN models and repetitive sampling process. \emph{Sampling} is also not efficient when the sample size is large.
$\bigcdot$
\emph{Training cost} refers to \textsf{CardEst} model construction time for a given database.
Query-driven based methods, such as \emph{MSCN}, are in general slow for training, since an enormous amount of queries need to be executed to learn the models.
$\bigcdot$
\emph{Model size} is related to the storage cost of models. In modern DBMSs, the space costs of all these \textsf{CardEst} methods are affordable.
$\bigcdot$
\emph{Update time} is also important since table data changes frequently. Traditional methods are easy to update, while no existing DL-based method can keep up with fast data updates~\cite{wang2020ready}.
\smallskip
\noindent\textit{$\bullet$ \textbf{Data.}}
Generally, a DBMS will process various data with different settings. Therefore, we analyze whether the \textsf{CardEst} methods have a stable performance on four typical variations of data settings, namely data \textit{distribution}, attribute \textit{correlation}, attribute \textit{domain} size, and the number of attributes (\textit{scale}).
For traditional methods, \emph{Histogram}'s estimation error grows exponentially when data are highly correlated. \emph{Sampling}'s accuracy degrades on high-dimensional data with larger domain size and more attributes. In addition, for highly skewed data, the fetched samples tend to miss query ranges of small probability, which also degrades its accuracy.
For DL-based methods, the poor performance stability of \emph{Naru}, \emph{DeepDB} and \emph{MSCN} is demonstrated in a recent benchmark study~\cite{wang2020ready}.
In a nutshell, their accuracy decreases while inference and training costs increase with more attributes. \emph{Naru} is also sensitive to data distribution and domain size since skewed or large PDFs are more difficult to model. \emph{DeepDB} has the intrinsic drawback that it tends to generate large and inaccurate SPNs on highly correlated attributes~\cite{expsSPN}. \emph{FLAT} overcomes this drawback of \emph{DeepDB}, but its performance also degrades severely with more attributes.
\smallskip
\noindent\textit{$\bullet$ \textbf{System.}}
The \textsf{CardEst} method should satisfy the following properties for friendly system deployment~\cite{wang2020ready}.
$\bigcdot$
\emph{Debuggability} and ease of tuning are crucial to DB experts. The DL-based methods with ``black-box'' components may fail silently and carry a high risk of missing a bug~\cite{wang2020ready}.
$\bigcdot$
\emph{Interpretability} is necessary when system developers would like to explain and validate the learned component, a property the DL-based methods do not offer~\cite{interpretation}.
$\bigcdot$
\emph{Predictability} is important since the system developers would like to predict the performance before actual deployment. As \emph{Naru} and \emph{MSCN} contain DNNs with illogical behaviors~\cite{wang2020ready}, their performance is hard to predict.
$\bigcdot$
\emph{Reproducibility} is necessary to locate system issues. As \emph{Sampling} and \emph{Naru} involve stochastic processes, their results cannot be reproduced by estimating the same query again.
\smallskip
\noindent\underline{\textbf{Summary.}}
From Table~\ref{ADSsummary}, we observe that \emph{no} existing \textsf{CardEst} method is satisfactory in all criteria. Our detailed experimental evaluation in Section~\ref{sect6} also verifies this observation.
Therefore, we design a new \textsf{CardEst} framework \emph{BayesCard} that successfully satisfies all criteria for the first time.
\begin{figure*}[t]
\centering
\includegraphics[width=17.5cm]{images/framework_new.pdf}
\caption{An example workflow of \textit{BayesCard}. }
\label{fig_model}
\end{figure*}
\section{BayesCard Overview}
\label{sect3}
In this section, we briefly review the background knowledge on BN and PPL in Section~\ref{sect3.1}, which are the foundations of \textit{BayesCard}. Then we overview our new framework \textit{BayesCard} for \textsf{CardEst} in Section~\ref{sect3.2}.
\subsection{Background Knowledge}
\label{sect3.1}
\noindent\underline{\textbf{Bayesian networks}} specify a probability distribution $P_T$ of table $T$, whose attributes form a directed acyclic graph (DAG), such as Image (2.ii) in Figure~\ref{fig_model}. Each node of the DAG corresponds to an attribute and each edge defines the causal dependency between two nodes. An attribute is dependent on its parents (the source nodes with edges directed to this attribute) and conditionally independent of all other attributes given its parents~\cite{PGM}. Thus, $P_T$ can be compactly represented as $P_T(T_1, \cdots, T_n) = \prod_{i=1}^n P_T(T_i|Par(T_i))$, where $Par(T_i)$ denotes the set of parents of $T_i$ in the defined DAG.
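As a minimal illustration of how this factorization is used for inference (the numbers and the chain structure $T_1\rightarrow T_2\rightarrow T_3$ below are hypothetical, not those of Figure~\ref{fig_model}), the probability of a query is obtained by summing the factored form over the unconstrained attributes:
\begin{verbatim}
# P(T1, T2, T3) = P(T1) * P(T2 | T1) * P(T3 | T2); CPTs as dicts
P_T1 = {0: 0.6, 1: 0.4}
P_T2 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}  # P_T2[t1][t2]
P_T3 = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.2, 1: 0.8}}  # P_T3[t2][t3]

def joint(t1, t2, t3):
    return P_T1[t1] * P_T2[t1][t2] * P_T3[t2][t3]

# Query Q = {T1 = 1}: marginalize out the unconstrained T2, T3
prob_Q = sum(joint(1, t2, t3) for t2 in (0, 1) for t3 in (0, 1))
print(prob_Q)   # 0.4 = P(T1 = 1), as marginalization must yield
\end{verbatim}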
\smallskip
\noindent\underline{\textbf{Probabilistic programming languages}}
are a general-purpose programming paradigm for specifying probabilistic models and performing inference on them automatically. Unlike in traditional programming languages (TPLs), each variable in a PPL is defined as a probability distribution, whose value can be conditioned on a set of other variables. The compilers of PPLs are optimized to efficiently learn the parameters of variable distributions and to sample from these distributions.
PPLs have been applied to various ML domains, such as computer vision~\cite{kulkarni2015picture}, with remarkable performance.
To define a BN, for each attribute $T_i$, a PPL can define a variable whose distribution is conditioned on the variables in $Par(T_i)$. For example, the first seven lines of the PPL program on the right side of Image (2.ii) in Figure~\ref{fig_model} suffice to define the BN on the left as seven variables.
PPLs have the following properties.
First, PPLs can define variables of any general distribution, including tabular and continuous distributions, which helps to build BNs with continuous attributes. In contrast, existing BNs for \textsf{CardEst}~\cite{2001SigmodGreedy, dasfaa2019, tzoumas2013vldb} only support discrete variables.
Second, PPLs can efficiently learn the parameters using maximum likelihood estimation (MLE)~\cite{InferNET18}; e.g. the parameters of the example BN in Image (2.ii) can be derived by simply executing the last two lines of code.
Third, PPLs~\cite{pomegranate} also incorporate several mainstream algorithms for learning a BN's structure, which captures the causal pattern of attributes in the data. The structure learning procedures of PPLs support pre-specifying sub-structures.
Fourth, PPLs can efficiently generate samples from the distribution of each variable.
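For concreteness, the listing below sketches how such a declaration might look in the pre-1.0 API of pomegranate~\cite{pomegranate}, one of the PPLs cited above; the toy attributes, domains, and probabilities are purely illustrative and not taken from our datasets.
\begin{verbatim}
from pomegranate import (BayesianNetwork, ConditionalProbabilityTable,
                         DiscreteDistribution, Node)

# Root attribute H1 with a marginal (tabular) distribution.
h1 = DiscreteDistribution({'a': 0.6, 'b': 0.4})
# Attribute H2 conditioned on H1; each row is
# (parent value, child value, probability).
h2 = ConditionalProbabilityTable(
    [['a', 'x', 0.7], ['a', 'y', 0.3],
     ['b', 'x', 0.2], ['b', 'y', 0.8]], [h1])

n1, n2 = Node(h1, name='H1'), Node(h2, name='H2')
model = BayesianNetwork('toy_bn')
model.add_states(n1, n2)
model.add_edge(n1, n2)
model.bake()
# Parameters can then be (re)fit by MLE from data rows, e.g.
# model.fit(rows), and queried via model.probability(...).
\end{verbatim}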
\vspace{1em}
\subsection{BayesCard framework}
\label{sect3.2}
In this paper, we propose \textit{BayesCard}, a framework for \textsf{CardEst}.
The key idea of \textit{BayesCard} is to build an ensemble of BNs to model the distributions of tables in a database and to use the constructed BNs to estimate the cardinality of any query. This framework, including the model construction and probability inference of BNs, is implemented using PPLs in order to leverage their compiler and execution advantages in representing probability distributions.
Specifically, the inputs of \textit{BayesCard} are a DB $\mathcal{D}$ containing $n$ tables and its join schema $\mathcal{J}$.
Following prior works' assumption~\cite{zhu2020flat, NeuroCard, Sampling}, \textit{BayesCard} only considers tree-structured join schemas, i.e., without self-joins or cyclic joins.
In the join tree $\mathcal{J}$, each node represents a table and each edge represents a join relation between two tables.
For example, Figure~\ref{fig_model}-(1) illustrates a DB with 11 tables and the join tree schema on the tables.
Given $\mathcal{D}$ and $\mathcal{J}$, \textit{BayesCard} constructs an ensemble of $m$ BNs.
Each BN models the joint distribution of a subset of connected tables in $\mathcal{J}$.
For example in Figure~\ref{fig_model}-(1), \textit{BayesCard} builds 5 BNs ($BN_1, \ldots, BN_5$ in the red circles) to characterize the distributions of tables in the DB, where $BN_4$ is built to represent the joint distribution of tables $H$ and $K$.
To accurately model the joint distribution of multiple tables $\mathcal{T}$, \textit{BayesCard} uses the \emph{fanout} method as in prior works~\cite{deepDB, zhu2020flat, NeuroCard}, by creating a BN on the full outer join results of $\mathcal{T}$, along with additional fanout attributes. For example, as shown in Figure~\ref{fig_model}-(2.i), $BN_4$ models $\Omega$, the full outer join of $H$ and $K$ (shown in Figure~\ref{fig_model}-(2.iii)), along with the added fanout attributes:
$F_{H\xrightarrow{}\Omega}$, indicating how many tuples in $\Omega$ a particular tuple in $H$ fans out to; $F_{K\xrightarrow{}\Omega}$, indicating how many tuples in $\Omega$ a particular tuple in $K$ fans out to; and $F_{\Omega \xrightarrow{} \{A,D\}}$, indicating how many tuples in the outer join table $\Omega \mathbin{\ojoin\mkern-5.5mu\bowtie\mkern-5.5mu\ojoin} A \mathbin{\ojoin\mkern-5.5mu\bowtie\mkern-5.5mu\ojoin} D$ a particular tuple in $\Omega$ fans out to.
Each BN can be represented as a PPL program, such as $BN_4$ in Figure~\ref{fig_model}-(2.ii). The probability $P_{\mathcal{T}}(Q)$ of any query $Q$ on a subset of tables $\mathcal{T}$ can then be estimated by combining multiple BNs whose tables cover $\mathcal{T}$. The process of estimating the probability $P_{\mathcal{T}}(Q)$ of a given query is called probability inference.
\smallskip
\noindent \underline{\textbf{Challenges.}} Existing PPLs are not optimized for \textsf{CardEst} tasks in terms of probability inference and model construction; \textit{BayesCard} addresses and optimizes both.
\noindent \textbf{Probability inference.}
After the PPL program is declared to represent a BN, existing PPLs do not support using this program for efficient probability inference, which is the key to the \textsf{CardEst} problem. Therefore, \textit{BayesCard} tailors existing PPLs and designs two efficient inference algorithms. Using PPLs' extremely efficient sampling process, \textit{BayesCard} proposes the \emph{progressive sampling} algorithm, which is guaranteed to run in linear time for estimating any query (Section~\ref{sect4.1}). In addition, \textit{BayesCard} introduces \emph{compiled variable elimination} to further accelerate the inference algorithm (Section~\ref{sect4.2}). Furthermore, \textit{BayesCard} adapts its inference algorithms to the \emph{fanout} method to efficiently combine results from multiple BNs when estimating the probability of join queries (Section~\ref{sect4.3}).
\noindent \textbf{Model construction.}
A database generally contains multiple tables, and deciding which ensemble of BNs (i.e., which partition of tables) to learn significantly affects the \textsf{CardEst} accuracy and efficiency. Therefore, \textit{BayesCard} designs an ensemble construction algorithm that explores the optimal partition of all tables in the DB and optimizes the \textsf{CardEst} quality (Section~\ref{sect5.1}).
Furthermore, existing PPLs do not explore how to accelerate the structure learning algorithms in DB scenarios. \textit{BayesCard} tailors and speeds up these algorithms by exploring and exploiting functional dependencies and other user-defined expert knowledge (Section~\ref{sect5.2}).
\section{Probability Inference in BayesCard}
\label{sect4}
In this section, we address the \emph{probability inference} in \textit{BayesCard}. Specifically, we first propose two novel inference algorithms based on PPLs for a single BN model, namely \emph{progressive sampling} (Section~\ref{sect4.1}), which is guaranteed to return an approximate probability estimate in linear time, and \emph{compiled variable elimination} (Section~\ref{sect4.2}), which returns the exact probability with two orders of magnitude acceleration. Next, we present how to extend these two algorithms to multiple BNs to support join queries (Section~\ref{sect4.3}).
\subsection{Progressive sampling}
\label{sect4.1}
\begin{algorithm}[t]
\small
\caption{Progressive Sampling Inference Algorithm}
\label{prog_samp_algo}
\begin{flushleft}
\textbf{Input}: a table $T$ with $n$ attributes, a query $Q$ with region $R_{Q}$ and a PPL program defining the BN on $P_T$
\end{flushleft}
\begin{algorithmic}[1]
\State Align the attributes in topological order $T_1, \ldots, T_n$
\State $p \gets 1$, $S \gets [0]_{k \times n}$, a $k \times n$ matrix of samples
\For{$i \in \{1, \ldots, n\}$}
\State Take $S[Par(T_i)]$, the columns in $S$ corresponding to attributes in $Par(T_i)$
\State $\hat{P_i}(T_i) \gets \frac{1}{k} \sum_{d \in S[Par(T_i)]} P_T(T_i|d)$
\State $p \gets p * \hat{P_i}(T_i \in R_Q(T_i))$
\State Define a PPL variable $P'_i$ by normalizing $\hat{P_i}(t_i|t_i \in R_Q(T_i))$
\State $S[i] \gets $ $k$ points sampled from $P'_i$
\EndFor
\State \textbf{return} $p$
\end{algorithmic}
\end{algorithm}
We first define the inference procedure for a simple case, where a query $Q$ is issued on tables $T$ in a DB and a single BN exactly models $P_T$ on the full outer join of $T$. In this case, the cardinality of $Q$, determined by $P_T(Q)$, can be derived directly from this BN.
As defined in Section~\ref{sect2}, a query $Q$ takes the form of $\{T_1 \in R_Q(T_1) \wedge T_2 \in R_Q(T_2) \wedge \cdots \wedge T_n \in R_Q(T_n)\}$, where $R_Q$ is the region defined by $Q$ over attributes in $T$.
Thus, we can represent the probability of $Q$ as:
$P_T(Q) = \prod_{i=1}^n P_T(T_i \in R_Q(T_i) \mid Par(T_i) \in R_Q(Par(T_i))) = \prod_{i=1}^n P_i$, where $R_Q(Par(T_i))$ denotes the query region over the set of parent attributes $Par(T_i)$ and we denote each term by $P_i$ for simplicity. Therefore, to compute $P_T(Q)$, we only need to compute or estimate each $P_i$.
In PPLs, accessing the probability $P_T(T_i|s)$ for each fixed value assignment $s \in R_Q(Par(T_i))$ takes constant time. However, computing $P_i$ exactly is generally intractable, as there can be an exponential or infinite number of unique values in $R_Q(Par(T_i))$. In particular, for large BNs with complex structures, the PPLs' existing inference algorithms cannot provide the efficiency guarantee required for \textsf{CardEst} in a practical DBMS. Therefore, \textit{BayesCard} designs the \emph{progressive sampling} inference algorithm, which uses a Monte Carlo approximation of $P_i$ based on a sample $S$ of $R_Q(Par(T_i))$ to ensure computational efficiency, i.e., $P_i \approx \frac{1}{|S|} \sum_{s \in S} P_T(R_Q(T_i)|s)$.
The default sampling procedure in PPLs only supports sampling values from a variable's entire domain, and such values are likely to fall outside the query range $R_Q$. Naively using this sampling procedure would thus generate an enormous number of ineffective points. Instead, we can leverage the learned model, create variables that materialize the distribution $P(Par(T_i)| Par(T_i) \in R_Q(Par(T_i)))$, and progressively sample points from $R_Q(Par(T_i))$ accordingly, which greatly improves the sample effectiveness.
\smallskip
\noindent \underline{\textbf{Algorithm description.}} We present the details in Algorithm~\ref{prog_samp_algo}. Specifically, we first align the attributes of $T$ in topological order as $T_1, \ldots, T_n$, where $T_1$ is the root of the BN's DAG structure (line 1). We can obtain $P_T(T_1)$ directly from the PPL since $T_1$ does not depend on any other attribute, and compute $P_1 = P_T(R_Q(T_1))$. Then, we define a new PPL variable to represent the distribution $P_T(t_1|t_1 \in R_Q(T_1))$ and generate a sample $S_1$ of $R_Q(T_1)$ from this variable. Next, for each remaining attribute $T_i$, the samples of its parents $Par(T_i)$ must have already been generated because the attributes are processed in topological order (line 4). We then derive a new distribution $\hat{P_i}$ approximating $P_T(T_i | R_Q(Par(T_i)))$ from these samples (line 5), use it to estimate $P_i$ (line 6), and generate samples from $R_Q(T_i)$ (lines 7-8). Finally, after obtaining the estimate of each $P_i$, $P_T(Q)$ is computed as their product (line~10).
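The following self-contained numpy sketch implements Algorithm~\ref{prog_samp_algo} for the special case where every attribute is categorical and has at most one parent; the data layout (one CPT matrix per attribute) is our own simplification, not \textit{BayesCard}'s internal representation.
\begin{verbatim}
import numpy as np

def progressive_sampling(cpts, parents, query_masks, k=1000, seed=0):
    """Algorithm 1 sketch for a categorical BN, attributes in
    topological order, each with at most one parent.

    cpts[i]       : CPT of attribute i, shape (dom(parent_i), dom(i)),
                    or (1, dom(i)) for a root attribute
    parents[i]    : index of the parent of attribute i, or None
    query_masks[i]: boolean mask over dom(i) selecting R_Q(T_i)
    """
    rng = np.random.default_rng(seed)
    n, p = len(cpts), 1.0
    samples = np.zeros((k, n), dtype=int)
    for i in range(n):
        if parents[i] is None:
            p_hat = cpts[i][0]                  # exact P(T_i) at the root
        else:
            rows = samples[:, parents[i]]       # sampled parent values
            p_hat = cpts[i][rows].mean(axis=0)  # Monte Carlo estimate
        p_i = p_hat[query_masks[i]].sum()       # P_i = P(T_i in R_Q(T_i))
        p *= p_i
        cond = np.where(query_masks[i], p_hat, 0.0)
        cond = cond / cond.sum()                # normalized P'_i on R_Q(T_i)
        samples[:, i] = rng.choice(len(cond), size=k, p=cond)
    return p
\end{verbatim}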
\smallskip
\noindent \underline{\textbf{Analysis.}}
Sampling $|S|$ points and evaluating the probability at each fixed point takes $O(|S|)$ time to approximate each $P_i$. Therefore, the time complexity of \emph{progressive sampling} on a BN with any structure is guaranteed to be $O(|S| \cdot n)$.
This inference algorithm is very efficient because, in general, a small sample $S$ suffices for a very accurate estimate and the sampling process is extremely efficient in PPLs. The progressive sampling algorithm in PPLs resembles the one in the DAR model proposed by Naru~\cite{naru}. Our method differs in the following aspects: 1) efficient sampling is naturally supported in PPLs for various continuous distributions, whereas the sampling procedure in DAR is retrofitted and supports categorical distributions only; 2) the progressive sampling in \textit{BayesCard} estimates each $P_i$ using the sample $S$ during the sampling process, whereas in DAR the samples $S$ are used to compute $P_T$ directly, which is less effective.
\smallskip
\noindent \underline{\textbf{Graph reduction optimization.}} To further accelerate the \emph{progressive sampling} algorithm, \textit{BayesCard} proposes the graph reduction optimization, which significantly reduces the inference latency for datasets with a large number of attributes.
\noindent \textbf{Main idea.} The \emph{progressive sampling} algorithm involves a large amount of redundant computation. For example, for an attribute $T_i$ that is not constrained by any predicate in $Q$, i.e., $R_Q(T_i) = D(T_i)$, the estimate of $P_i$ equals $1$. If, in addition, all descendants $T_j$ of $T_i$ are unconstrained in $Q$, there is no need to sample $T_i$, since each $P_j$ equals $1$ regardless of the samples. Therefore, we can reduce a large BN to a much smaller one by removing these redundant attributes, and perform probability inference on it without affecting the estimation accuracy.
\noindent \textbf{Formulation.} First, we make the following rigorous definition of reduced graph $G'$. Intuitively, $G'$ only contains all constrained attributes in the query and other necessary attributes to connect them to form a minimal BN. An example of a reduced graph can be found in Figure~\ref{fig_RG}.
\begin{definition}
Given a BN representing a table $T$ with attributes $V$ = \{$T_1, \cdots T_n$\}, its defined DAG $G = (V, E)$, and a query $Q=(T'_1=t'_1 \wedge \cdots \wedge T'_k=t'_k)$ where $T'_i \in V$, we define the reduced graph $G' = (V', E')$ to be the sub-graph of $G$ where $V'$ equals $\bigcup_{1\leq i \leq k} Ancestor(T'_i)$, and $E'$ equals all edges in $E$ with both endpoints in $V'$. Here $Ancestor(T'_i)$ includes $T'_i$ itself, all parent nodes of $T'_i$, and their parent nodes recursively.
\end{definition}
Based on this definition, we can reduce the original BN model (i.e., the PPL program with variables $V$) to a much smaller one (i.e., the PPL program with variables $V'$), and perform inference on it. The correctness of the graph reduction optimization is stated in Theorem~\ref{thm_rg}. Due to space limits, we put the proofs of all theorems in Appendix~A of the accompanying technical report~\cite{wu2020bayescard}.
\begin{theorem}
\label{thm_rg}
Given a BN $B$ defining $G$, a query $Q$ and the reduced BN $B'$ defining $G'$ on $Q$, computing $P_T(Q)$ on $B'$ is equivalent to computing $P_T(Q)$ on $B$.
\end{theorem}
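To make the reduction concrete, the following sketch computes $V'$ of the definition above, assuming the DAG is stored as a list of parents per node; the representation is our own simplification.
\begin{verbatim}
def reduced_graph(parents, constrained):
    """Compute V': the constrained nodes plus all of their ancestors.

    parents[v]  : list of parent node ids of node v in the BN's DAG
    constrained : node ids appearing in the query's predicates
    """
    keep, stack = set(), list(constrained)
    while stack:
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(parents[v])  # walk up to ancestors recursively
    return keep  # E' = all edges of G with both endpoints in keep
\end{verbatim}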
\subsection{Compiled variable elimination}
\label{sect4.2}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{images/exact_jit.pdf}
\vspace{-2.5em}
\caption{ Graph reduction and the compiled program with JIT. The left image shows the graph reduction for the query $K_1 \in \{10, 20\}, H_2 \in \{-1, 10\}$. The red nodes refer to the attributes in the query. All red and green nodes and the red edges form the reduced graph $G'$.}
\vspace{-1em}
\label{fig_RG}
\end{figure}
Progressive sampling works for general PPL programs with any distribution type. For programs restricted to categorical distributions, we can further accelerate the inference algorithm using an alternative approach: compiled variable elimination. Inspired by the impressive results of compilation for query processing~\cite{legobase_tods,dblablb,Neumann11}, we investigate the usage of just-in-time compilation (JIT) and compiler optimizations to improve inference latency.
\smallskip
\noindent \underline{\textbf{Observation.}}
Let us revisit the example $BN_4$ built on tables $H$ and $K$ in the left image of Figure~\ref{fig_RG}. Consider a query $Q$ = ($K_1 \in \{10, 20\} \wedge H_2 \in \{-1, 10\}$), where all ``black'' attributes are removed by the graph reduction technique, based on Theorem~\ref{thm_rg}. For the ``green'' attributes, we have $R_Q(H_1) = D(H_1)$, $R_Q(K_2) = D(K_2)$, and $R_Q(F_{H\xrightarrow{}\Omega}) = D(F_{H\xrightarrow{}\Omega})$. The variable elimination (VE) algorithm computes the probability $P_T(Q)$ based on the following equation.
\vspace{-1em}
\begin{align*}
\small
P_T(Q) = \! \! \! \! \!\! \! \! \! \! \sum_{h_1 \in R_Q(H_1)} \! \! \! \cdots \! \! \! \! \sum_{h_2 \in R_Q(H_2) } \! \! \! \! P_T(h_1) * P_T(k_1) * \cdots * P_T(h_2 \mid f_{H\xrightarrow{}\Omega}, k_2)
\end{align*}
\vspace{-1em}
This computation can be very inefficient in PPLs and is repeated for every estimated query. However, we observe that the VE algorithm only involves sums and products over attributes. If each PPL variable (attribute in the BN) is defined as a categorical conditional distribution, it can be materialized as a vector or matrix. Thus, the VE algorithm essentially defines a program of linear algebra operations, whose execution can be significantly accelerated on modern hardware. Furthermore, we observe that the linear algebra program computing VE is fixed for a target query as long as the elimination order is fixed.
\smallskip
\noindent \underline{\textbf{JIT of VE.}} For any query, \textit{BayesCard} first decides an optimal variable elimination order and then compiles the learned BN from the PPL program into a static program containing only matrix or tensor operations to maximize execution efficiency. Furthermore, this program can be re-used to infer other queries with the same reduced graph by only changing the input query regions $R_Q$ (as shown in Figure~\ref{fig_RG}). Therefore, JIT can remember the execution pattern for this query and re-use it to infer the probability of future queries for further speed-up.
An example program showing the JIT compilation of VE on the same query $Q$ is shown in Figure~\ref{fig_RG}. Specifically, for each PPL variable $T_i$ in the reduced graph $G'$, the JIT program first extracts the parameters of its distribution $P_T(T_i|Par(T_i))$. Since VE only supports categorical distributions, the extracted parameters of $T_i$ form a matrix $M_{T_i}$. Next, based on the query region $R_Q$, the JIT program can further reduce $M_{T_i}$ by keeping only the useful information, i.e., slicing its rows with $R_Q(T_i)$ and its columns with $R_Q(Par(T_i))$ (lines 2-6 of the code in Figure~\ref{fig_RG}). This reduction not only eliminates redundant computation but also enables a closed-form linear algebra expression.
Then, \textit{BayesCard} determines an elimination order for these variables using the reversed topological order or a standard procedure~\cite{darwiche2009modeling}. A fixed program containing only linear algebra operations can then be derived, like the one in line 8, where ``\textsf{matmul}'' refers to matrix multiplication, ``\textsf{colsum}'' refers to column sum, and ``\textsf{.T}'' refers to the transpose. Finally, this generated static program executes efficiently, thanks to the batch processing of tensor operations with various performance tuning techniques (e.g., loop tiling, parallelization, and vectorization).
By our evaluation, such a program achieves up to two orders of magnitude speed-up over the original VE algorithm.
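As an illustration, the numpy sketch below performs this compiled computation for the special case of a chain-structured BN: the mask-then-\textsf{matmul} pattern mirrors the slicing and \textsf{matmul}/\textsf{colsum} operations of the generated program in Figure~\ref{fig_RG}, but the code is our simplification rather than \textit{BayesCard}'s generated output.
\begin{verbatim}
import numpy as np

def compiled_ve_chain(cpts, masks):
    """Exact P_T(Q) on a chain BN T1 -> T2 -> ... as linear algebra.

    cpts[0]          : vector, cpts[0][a] = P(T1 = a)
    cpts[i], i >= 1  : matrix, cpts[i][a, b] = P(T_{i+1} = b | T_i = a)
    masks[i]         : boolean vector selecting R_Q(T_{i+1})
    """
    msg = cpts[0] * masks[0]      # restrict root distribution to R_Q(T1)
    for cpt, mask in zip(cpts[1:], masks[1:]):
        msg = msg @ (cpt * mask)  # matmul eliminates the previous variable
    return msg.sum()              # final column sum yields P_T(Q)
\end{verbatim}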
\subsection{Probability inference for fanout method}
\label{sect4.3}
Previous sections discuss the process of inferring the probability $P_T(Q)$ of a query $Q$ on table(s) $T$ represented by exactly one BN. For a database with multiple tables, this process needs to be modified for the following two types of queries: (1) a query $Q$ on tables that span multiple BNs (e.g., $Q$ on $T = \{A,D,H,K\}$ in Figure~\ref{fig_model}-(1)); (2) a query on tables that cover only a subset of a single BN (e.g., $Q$ on $T=\{H\}$). In these cases, \textit{BayesCard} does not contain an exact BN representing $P_T$ with which to estimate $Q$. Fortunately, based on the fanout method explained in Section~\ref{sect3.2}, we can use the following theorem, proposed and proved in~\cite{zhu2020flat}, to calculate $P_T(Q)$.
\begin{theorem}
Given a query $Q$, let $V = \{V_{1}, V_{2}, \dots, V_{d} \}$ denote all vertices (nodes) in the join tree touched by $Q$ and let $\mathcal{V}$ denote the full outer join of all tables in $V$. On each node $V_i$, let $F = \{ F_{A_{1}, B_{1}}, F_{A_{2}, B_{2}}, \ldots, F_{A_{n}, B_{n}}\}$, where each $(A_j, B_j)$ is a distinct join such that $B_j$ is not in $Q$. Let $f = (f_1, f_2, \ldots, f_n)$, where $F_{A_{j}, B_{j}} = f_j$ for all $1 \leq j \leq n$, denote an assignment to $F$ and let $\text{dlm}(f) = \prod_{j=1}^{n} \max\{f_j, 1\}$. Let
\begin{equation}
\small
p_i \! = \frac{|\mathcal{V}_i|}{|\mathcal{V}|} \! \cdot \!
\sum\limits_{f, v} \left( P_{\mathcal{V}_i}(Q_i \wedge F \! = \! f \wedge F_{V_{i}, V} \! = \! v) \cdot \frac{\max\{v, 1\}}{\text{dlm}(f)} \right).
\end{equation}
Then, the cardinality of $Q$ is $|\mathcal{V}| \cdot \prod_{i = 1}^{d} p_i$.
\end{theorem}
In short, since all the fanout attributes involved in this computation are pre-stored in the table $V_i$ and there exists a BN for $P_{\mathcal{V}_i}$, \textit{BayesCard} can directly use this theorem for probability inference of multi-table join queries.
\smallskip
\noindent\underline{\textbf{Efficient summation computation in \textit{BayesCard}.}}
We can compute the summation $\sum_{f, v} ( P_{\mathcal{V}_i}(Q_i \wedge F \! = \! f \wedge F_{V_{i}, V} \! = \! v) \cdot \frac{\max\{v, 1\}}{\text{dlm}(f)} )$ over all assignments of $f$ and $v$ as efficiently as computing the probability $P_{\mathcal{V}_i}(Q_i)$ for any query. We explain the detailed procedure for calculating $\sum_{f \in D(F)} P_T(Q, F=f) * f$ using progressive sampling and compiled variable elimination, where $D(F)$ denotes the domain of unique values in $F$. This procedure naturally generalizes to more complex cases.
Our calculation procedure is motivated by Bayes' rule: $P_T(Q, F=f) = P_T(F=f|Q) * P_T(Q)$. We observe that $P_T(Q)$ is a fixed value independent of $F$, because the fanout attributes are artificial attributes that never appear in $Q$. Furthermore, by the properties of a BN, we know that $P_T(F|Q) = P_T(F|R_Q(Par(F)))$, so we can derive the following equation. It exposes the common term $P_T(Q)$, so the calculation avoids computing $P_T(Q)$ repeatedly.
\begin{equation*}
\sum_{f \in D(F)} P_T(Q, F=f) * f = P_T(Q) * \left(\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f \right)
\end{equation*}
\noindent \textbf{Progressive sampling.} Recall from Section~\ref{sect4.1} that \textit{BayesCard} estimates $P_i = P_T(T_i|R_Q(Par(T_i)))$ by drawing progressive samples of $R_Q$ and approximates $P_T(Q)$ as $\prod P_i$. After estimating $P_T(Q)$ with sample $S$, \textit{BayesCard} can directly estimate $\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f$ using the same sample $S$, i.e., as $\sum_{f \in S[F]} \hat{P}_T(f| S[Par(F)])*f$. The final result is obtained by multiplying these two terms.
\noindent \textbf{Compiled variable elimination.} Recall from Section~\ref{sect4.2} that \textit{BayesCard} can specify a particular elimination order by choosing the fanout variable $F$ as the last variable to eliminate. Using PPLs, the intermediate result after each elimination step is materialized as a distribution. Therefore, before the last elimination step of the VE algorithm for computing $P_T(Q)$, \textit{BayesCard} can store the intermediate result, which represents the conditional distribution $P_T(F|Q)$. Then, the summation $\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f$ equals $P_T(F|Q) \cdot D(F)$, where $\cdot$ denotes the vector dot product. Therefore, similar to computing $P_T(Q)$, this process only involves linear algebra operations, which can be compiled and efficiently calculated using JIT.
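In code, this last step amounts to a single dot product; the sketch below uses illustrative names (p_q, p_f_given_q, and fanout_values are our assumptions, not identifiers from \textit{BayesCard}).
\begin{verbatim}
# p_f_given_q  : vector materializing P_T(F | Q), kept from the
#                second-to-last VE elimination step
# fanout_values: vector enumerating the domain D(F)
# p_q          : previously computed P_T(Q)
fanout_expectation = p_f_given_q @ fanout_values  # sum_f P(f|Q) * f
result = p_q * fanout_expectation  # P_T(Q) * sum_f P(f|R_Q(Par(F))) * f
\end{verbatim}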
\section{Model construction of \textit{BayesCard}}
\label{sect5}
In this section, we explain how \textit{BayesCard} constructs an ensemble of BNs for a multi-table database. Specifically, Section~5.1 first introduces the BN ensemble construction method with a budget, which clusters all tables in the database into several groups and builds a single BN on each group of tables. Then, Section~5.2 introduces optimizations for building a single BN using PPLs. Finally, Section~5.3 shows how to incrementally update the BN models.
\begin{figure}[t]
\centering
\vspace{-1.5em}
\includegraphics[width=8.5cm]{images/PRM_learn.pdf}
\vspace{-1.5em}
\caption{\textit{BayesCard} ensemble learning algorithm demo.}
\vspace{-2em}
\label{PRM_learn}
\end{figure}
\subsection{Ensemble construction with budget}
\label{sect5.1}
\noindent\underline{\textbf{Main idea.}}
Consider the example database in Figure~\ref{PRM_learn} with 11 tables $A, B, \ldots, K$ forming a join tree, where each node represents a table and each edge represents a possible join between two tables. A previous approach~\cite{deepDB} suggests creating every possible two-table join result, examining the level of dependence between attributes across the two tables, and deciding whether to create one large model on their full outer join or two separate models. Since generating the full outer join of multiple tables could require exponential memory, this approach normally cannot explore the possibility of creating a model on the join of more than three tables.
Another approach~\cite{NeuroCard} generates an unbiased sample $S$ on the full outer join of all tables in the schema and builds a single large model on $S$ directly. As the resulting model is built on all attributes in the database, the model construction and the probability inference can be very inefficient. Moreover, the size of $S$ is relatively small with respect to the full outer join size, suggesting a large amount of information loss, so the learned model on $S$ might not accurately represent the actual data distribution.
In order to balance the estimation accuracy and inference efficiency, we want to explore the full possibility of learning different BN ensembles such that the number of joined tables in each BN is no more than a threshold. Therefore, the resulting ensemble should capture as much dependence between tables as possible and simultaneously keep each BN in this ensemble as small as possible.
\begin{algorithm}[t]
\small
\caption{BN Ensemble Construction Algorithm}
\label{PRM_learn_algo}
\begin{flushleft}
\textbf{Input}: a DB schema with $n$ tables $T_1, \cdots, T_n$ and a budget $k$
\end{flushleft}
\begin{algorithmic}[1]
\State Create the join tree $\mathcal{T} = (V, E)$ for the schema
\State Generate unbiased samples $S$ for full outer join of the entire schema
\State Initialize a dependence matrix $M \in \mathbb{R}^{n \times n}$
\For{Each pair of tables $e = (T_i, T_j)$}
\State Calculate the RDC dependence level scores between all attributes in $T_i$ and attributes in $T_j$
\State $w_e$ $\gets$ average RDC scores
\EndFor
\If{$k = 1$} \State \textbf{return} $\mathcal{T}$ and learn a single BN for each table
\EndIf
\For{$k' \gets 2, \cdots, k$}
\State Sort $E$ in decreasing order based on $w_e$.
\For{$e = (u, v) \in E$}
\If{$u$ and $v$ contain exactly $k'$ tables in total}
\State Update $\mathcal{T}$ by contracting nodes $u, v$ to a single node $\{u, v\}$
\EndIf
\EndFor
\EndFor
\State \textbf{return} $\mathcal{T}$ and learn a single BN for each node in $\mathcal{T}$
\end{algorithmic}
\end{algorithm}
\smallskip
\noindent\underline{\textbf{Algorithm description.}}
The details of the ensemble construction algorithm are given in Algorithm~\ref{PRM_learn_algo}.
First, we define the budget $k$ such that a single BN model can only be constructed on (a sample of) the full outer join of no more than $k$ tables. The budget $k$ is a hyper-parameter determined by the dataset, system, and computing resources. The algorithm works as follows:
1) \textit{Computing dependency between tables (lines 1-7).} Given a tree-structured join schema $\mathcal{T}$, we first generate the unbiased sample $S$ of the full outer join of all tables according to~\cite{Sampling}. Specifically, the join tree is regarded as a rooted tree and samples $S$ are obtained by scanning all tables in $\mathcal{T}$ in a bottom-up manner. Then, we calculate the randomized dependence coefficient, i.e., RDC value~\cite{rdc}, between each pair of join tables using $S$. The detailed computation method is given in Appendix~B of our technical report~\cite{wu2020bayescard}. In Figure~\ref{PRM_learn}, the RDC value is shown as red numbers on each edge.
2) \textit{Contracting nodes (lines 8-18).} Intuitively, we would like to build a model on the full outer join of tables with high dependency, so we iteratively contract the nodes (tables) with high RDC values in $\mathcal{T}$ in a greedy manner (see the code sketch after this list). Let $k' = 2$ at the beginning. In each iteration, if $k' \leq k$, we first sort all edges $e = (u,v)$ (joins) in descending order of their RDC values. Following this edge order, we aggregate $u$ and $v$, the two endpoints of edge $e$, into a single node if they contain exactly $k'$ tables in total, and update the RDC values of the affected edges accordingly, whose details are given in Appendix~B of our technical report~\cite{wu2020bayescard}. We iterate this process until $k' = k$; in the end, we obtain a tree where each node contains at most $k$ tables. For example, in Figure~\ref{PRM_learn}, let the budget be $k = 3$. In the first iteration, where $k' = 2$, the algorithm considers joining two tables together. The edge $(B, E)$ has the highest RDC value, so $B$ and $E$ are aggregated in the first step (\textcircled{1} in Figure~\ref{PRM_learn}(a)). After the first iteration, the join schema $\mathcal{T}$ has been transformed into the new tree in Figure~\ref{PRM_learn}(b). Similarly, in the second iteration, where $k' = 3$, the node $\{B, E\}$ is first merged with the node $I$. Finally, the join tree is transformed into the tree in Figure~\ref{PRM_learn}(c).
3) \textit{Building BNs (line 19).} In the end, \textit{BayesCard} will construct a single BN model on (a sample of) the full outer join of tables within each node and fanout attributes will be added accordingly.
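The sketch below illustrates the greedy contraction loop (lines 8-18 of Algorithm~\ref{PRM_learn_algo}) on a tree-structured schema; the RDC re-computation after each merge is elided (scores are simply carried over), so this is a simplification of the actual procedure.
\begin{verbatim}
def contract_join_tree(tables, edges, budget):
    """Greedy contraction sketch.

    tables : dict node -> set of table names (initially one per node)
    edges  : dict frozenset({u, v}) -> average RDC score of the join
    budget : max number of tables a single BN may cover (k)
    """
    for k in range(2, budget + 1):
        for e in sorted(edges, key=edges.get, reverse=True):
            u, v = tuple(e)
            if u in tables and v in tables and \
                    len(tables[u]) + len(tables[v]) == k:
                tables[u] |= tables.pop(v)  # contract v into u
                # re-attach v's remaining edges to u (RDC refresh elided)
                for s in [s for s in list(edges) if v in s]:
                    other = next(t for t in s if t != v)
                    score = edges.pop(s)
                    if other != u:
                        edges[frozenset({u, other})] = score
    return tables
\end{verbatim}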
\smallskip
\noindent\underline{\textbf{Time Complexity analysis.}}
As shown in~\cite{Sampling}, creating the samples $S$ on the full outer join of tables $T_1, \cdots, T_n$ takes $O(\sum_{i=1}^{n}|T_i|)$ time.
Let $m$ be the attribute number in the full outer join of the tables. Calculating the pairwise RDC values takes $O(m^2|S|\log |S|) $. The rest of Algorithm~\ref{PRM_learn_algo} takes $O(kn^2)$ time since the algorithm terminates in $k$ iterations and in each iteration we only need to check the tables defined by two endpoints of each edge, which is at most $n^2$. Thus, the whole time complexity is $O(\sum_{i=1}^{n}|T_i| + m^2|S|\log |S| + kn^2)$.
\subsection{Single model construction optimizations}
\label{sect5.2}
The structure learning process, i.e., learning the causal structure from data of a single BN, is an NP-hard combinatorial optimization problem~\cite{34}. Current structure learning algorithms supported by PPLs either produce a general DAG structure or a simplified tree structure. We show optimization techniques for them as follows:
\smallskip
\noindent \underline{\textbf{Optimization for DAG structure learning algorithms.}}
Exact DAG structure learning algorithms explore the super-exponential search space of all possible DAGs and select the best candidate~\cite{MDL, BIC, BDeu, A-start}. The learned structure is accurate, but the search is inefficient and only scales to tens of attributes. Approximate methods limit the search space with local heuristics (e.g., \emph{greedy} algorithms~\cite{greedysearch, 36, 37}), but they may produce inaccurate results.
Based on PPLs, \textit{BayesCard} supports pre-specifying sub-structures before running the \emph{exact} and \emph{greedy} structure learning algorithms, which limits the DAG search space and makes structure learning much more efficient. Specifically, practical databases generally contain attributes with \emph{functional dependencies}~\cite{fan2010discovering} or obvious causal relations, such as one's ``age'' determining one's ``school level''. First, users of \textit{BayesCard} can use their ``expert knowledge'' to pre-specify certain causal structures for subsets of attributes. Then, the PPLs within \textit{BayesCard} can define the variables corresponding to these attributes and condition the variables on each other according to the pre-specified structure. Finally, \textit{BayesCard} relies on the existing algorithms to construct the remaining causal structure on these variables. Since the algorithms are forced to maintain these sub-structures, the number of qualified DAG candidates is significantly curtailed, making the structure learning process more efficient without loss in accuracy.
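As a hypothetical illustration, a pomegranate-style structure search can be restricted with a constraint graph roughly as follows; the exact semantics of the constraint\_graph parameter vary across library versions, so this should be read as a sketch rather than \textit{BayesCard}'s actual code.
\begin{verbatim}
import networkx as nx
import numpy as np
from pomegranate import BayesianNetwork

X = np.random.randint(0, 3, size=(1000, 4))  # toy discrete data

# Expert knowledge: column 0 ("age") may only be a parent of the
# other attributes; structure among columns 1-3 is searched freely.
cg = nx.DiGraph()
age, rest = (0,), (1, 2, 3)
cg.add_edge(age, rest)
cg.add_edge(rest, rest)

model = BayesianNetwork.from_samples(
    X, algorithm='exact', constraint_graph=cg)
\end{verbatim}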
\smallskip
\noindent \underline{\textbf{Optimization for tree structure learning algorithms.}}
The tree structure learning algorithms learn a tree structure such as the \emph{Chow-Liu tree}~\cite{23}, which sacrifices accuracy for efficiency.
\textit{BayesCard} can also improve the accuracy of a learned structure using the aforementioned ``expert knowledge'' after running the \emph{Chow-Liu tree} algorithm. This efficient algorithm forces the learned BN structure to be a tree, which may contain ``false'' causality or miss important attribute dependences. For example, intuitively, the number of ``children'' raised by someone depends largely on both one's ``income'' and one's ``marital status'', which cannot be captured simultaneously by a tree BN, since each node is allowed only one parent. Thus, after the structure is learned, \textit{BayesCard} can add the edge from ``Income'' to ``Children'' to improve its accuracy. With PPLs, only the parameters of the affected sub-structure (the ``Children'' variable in this example) need to be updated.
\subsection{Model updates}
Most practical databases update their data frequently, requiring cardinality estimators to adjust their underlying models dynamically~\cite{wang2020ready}. When the data distribution changes, \textit{BayesCard} can update its underlying BNs very efficiently. Specifically, the learned structure of a BN captures the \emph{intrinsic} causal pattern of the attributes, which is unlikely to change even under massive data updates. Therefore, in most cases, \textit{BayesCard} can preserve the original BN structure and only \emph{incrementally} update its distribution parameters. Such parameter updates are extremely efficient using MLE in PPLs. By our testing, it generally takes less than one second to process an insertion or deletion of a thousand tuples. In some rare cases involving the insertion or deletion of attributes, a new BN structure must be constructed. Even in this case, the causal pattern of the original attributes is largely preserved, so \textit{BayesCard} can pre-specify some sub-structures and learn the new structure efficiently using the methods stated in the previous section.
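A minimal sketch of such an incremental parameter update for one categorical CPT, keeping raw counts as sufficient statistics (our simplification; names are illustrative):
\begin{verbatim}
import numpy as np

def update_cpt(counts, parent_col, child_col, delete=False):
    """Incremental MLE update of one CPT.

    counts[a, b]          : running count of tuples with
                            Par(T_i) = a and T_i = b
    parent_col, child_col : encoded values of inserted/deleted tuples
    """
    sign = -1 if delete else 1
    np.add.at(counts, (parent_col, child_col), sign)  # update statistics
    row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1)
    return counts / row_sums  # renormalized P(T_i | Par(T_i))
\end{verbatim}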
\section{Analysis of BayesCard}
In this section, we analyze and demonstrate that \textit{BayesCard} satisfies the ADS criteria in all aspects, as shown in Table~\ref{ADSsummary}.
\noindent\textbf{Algorithm.} A BN with an exactly learned structure can losslessly capture the data distribution, i.e., achieving near-perfect \emph{estimation accuracy} for all queries. We show empirically that even with an approximate tree structure, \textit{BayesCard} achieves comparable or better accuracy than the current SOTA methods. The \emph{inference latency} of \textit{BayesCard} is roughly 1ms per query (close to the Histogram method), thanks to our novel inference algorithms. Furthermore, as explained in Section~\ref{sect4}, \textit{BayesCard} can learn a compact structure of small \emph{model size} with fast \emph{training and update time}.
\noindent\textbf{Data.} Every dataset contains an inherent causal pattern, which can be discovered by \textit{BayesCard}. Building upon this structure, \textit{BayesCard} can represent its PDF accurately and efficiently. Specifically, the variables in PPL can characterize most data \emph{distribution} types with varied \emph{domain size}. \emph{Attribute correlation} is merely a manifestation of the underlying causal pattern, which can be accurately represented. Moreover, for data with more attributes (larger \emph{scale}), the proposed \emph{graph reduction} inference technique can reduce a larger graph into a much smaller one for efficient inference. Therefore, the inference latency is also stable for various data settings.
\noindent\textbf{System.} Both the structure and the distribution parameters of a \textit{BayesCard} model are \emph{interpretable} and \emph{debuggable}. Specifically, a DB expert can verify a learned structure based on prior knowledge of data causality (functional dependencies in DBs), and validate the learned parameters using basic probability rules (non-negativity and summing to one). Since the probability inference of \textit{BayesCard} follows Bayes' rule, its performance is logical and \emph{predictable}. Furthermore, the compiled VE does not contain any stochasticity, so any estimation error a user observes is \emph{reproducible}.
\section{Experimental Results}
\label{sect6}
In this section, we empirically demonstrate the superiority of our \textit{BayesCard} over other \textsf{CardEst} methods. In the following,
Section~\ref{sect6.1} first introduces the experimental setups.
Next, Section~\ref{sect6.2} thoroughly compares different \textsf{CardEst} methods in terms of the \emph{ADS} criteria on single table datasets.
Then, Section~\ref{sect6.3} evaluates the performance and end-to-end query plan execution time on multi-table datasets.
At last, Section~\ref{sect6.4} performs ablation studies on our proposed algorithms and optimizations in \textit{BayesCard} method.
\subsection{Experimental setups}
\label{sect6.1}
\underline{\textbf{\textsf{CardEst} methods to compare with.}}
We compare our \textit{BayesCard} framework with the following \textsf{CardEst} methods, including both traditional methods widely used in DBMS and four existing SOTA DL-based methods. For each ML-based \textsf{CardEst} method, we adopt the authors’ source code and apply the same hyper-parameters as used in the original paper.
\textit{1). Histogram} is the simplest \textsf{CardEst} method widely used in DBMS such as Postgres~\cite{postgresql}.
\textit{2). Sampling} has been used in DBMS such as MySQL~\cite{mysql}. In our testing, we randomly sample $1\%$ of all tuples for \textsf{CardEst}.
\textit{3). Naru/NeuroCard}~\cite{naru,NeuroCard} are \textsf{DAR}-based \textsf{CardEst} methods for single table and multi-table join queries, respectively.
\textit{4). DeepDB}~\cite{deepDB} is a SPN-based \textsf{CardEst} method.
\textit{5). FLAT}~\cite{zhu2020flat} is an FSPN-based \textsf{CardEst} method.
\textit{6). MSCN}~\cite{MSCN} is the SOTA query-driven \textsf{CardEst} method. For each dataset, we train it with $10^5$ queries generated in the same way as the workload.
Our \textit{BayesCard} framework subsumes BNs with various combinations of structure learning and inference algorithms, as described in the previous sections. In Sections~\ref{sect6.2} and~\ref{sect6.3}, we use an exemplary BN with the \emph{Chow-Liu} tree structure learning algorithm and the \emph{compiled variable elimination} inference algorithm with the graph reduction optimization. The comparison of different BNs realizable in \textit{BayesCard} and controlled ablation studies are deferred to Section~\ref{sect6.4}.
\smallskip
\noindent\underline{\textbf{Datasets and query workloads.}}
Our single table experiments are performed on three datasets:
\noindent 1). \textbf{DMV} is a real-world dataset consisting of 11,575,483 tuples of vehicle registration information in New York. We use the same attributes as in~\cite{naru, wang2020ready}.
\noindent 2). \textbf{CENSUS} contains the population survey conducted by the U.S. Census Bureau in 1990. This dataset has 2,458,285 tuples and 68 attributes, many of which are highly correlated. Based on the RDC test~\cite{rdc}, we find that more than half of the attributes are highly correlated with at least one other attribute. This dataset is very large in scale and has a very complicated distribution.
\noindent 3). \textbf{SYNTHETIC} datasets are a collection of synthetically generated datasets with varied data distribution skewness, attribute correlation, domain size, and number of attributes. We generated these datasets using a similar approach to a recent benchmark study~\cite{wang2020ready}. They are used to evaluate the models' stability w.r.t.~changes in data.
For each dataset, we generate $1,500$ selection queries as the workload. For each query $Q$, we first select a subset of attributes as the filter attributes of $Q$. For each selected attribute $c$, if it represents a continuous variable, we uniformly generate two values ($v_1, v_2$) from its value domain and add the filter predicate ``$v_1 \leq c \leq v_2$'' to $Q$. Otherwise, if $c$ is a categorical variable, we uniformly sample $k$ unique values \{$v_1, v_2, \cdots, v_k$\} from its domain and place a predicate ``$c$ \textsc{ IN } \{$v_1,\cdots, v_k$\}'' in $Q$.
\noindent 4). \textbf{Multi-table IMDB:}
We conduct the multi-table experiment on the Internet Movie Database (IMDB) benchmark. Prior work~\cite{howgoodare} shows that this DB contains complicated data structures and establishes it as a good benchmark for cardinality estimators. We use the \emph{JOB-light} benchmark workload with 70 queries proposed in the original paper~\cite{howgoodare} and create another workload, \emph{JOB-comp}, of 1,500 more \underline{comp}rehensive and \underline{comp}licated queries.
\textit{JOB-light}'s IMDB schema contains six tables (\textsl{title}, \textsl{cast\_info}, \textsl{movie\_info}, \textsl{movie\_companies}, \textsl{movie\_keyword}, \textsl{movie\_info\_idx}) and five join operations in total, where every other table can only join with the primary table ``title''. Each \textit{JOB-light} query involves 3-6 tables with 1-4 filter predicates. The filters are not very diverse: all attributes have equality filters except the ``title.production\_year'' attribute. In addition, \textit{JOB-light}'s workload contains only 70 queries, which is not enough to account for the variance in model prediction. Thus, we synthesize 1,500 \emph{JOB-comp} queries based on the schema of \emph{JOB-light} with more filter predicates per query. Each \emph{JOB-comp} query involves 4-6 tables with 2-7 filter predicates. The queries are uniformly distributed over the joins of 4-6 tables. After determining the join graph, the filter predicate selection process is similar to the single-table case.
\smallskip
\noindent\underline{\textbf{Evaluation metric:}} We use the Q-error as our evaluation metric, which is defined as follows:
\begin{equation*}
\textbf{Q-error} = \max\left(\frac{\text{Estimated Cardinality}}{\text{True Cardinality}}, \frac{\text{True Cardinality}}{\text{Estimated Cardinality}}\right)
\end{equation*}
This evaluation metric is well recognized in the DBMS community and widely used in recent papers on cardinality estimation~\cite{deepDB, naru, NeuroCard, 2001SigmodGreedy, tzoumas2011lightweight}. We report the \textbf{50\%} (median), \textbf{90\%}, \textbf{95\%} and \textbf{100\%} (worst) Q-error quantiles as the evaluation of estimation accuracy.
\noindent\underline{\textbf{Experimental environment:}}
All models are evaluated on an Intel(R) Xeon(R) Platinum 8163 CPU with 64 cores, 128GB DDR4 main memory, and a 1TB SSD. For a fair comparison, we measure the model inference latency on CPU only, since apart from the DAR-based models (\textit{Naru} and \textit{NeuroCard}) and \textit{MSCN}, the remaining methods' inference algorithms do not support GPU.
\begin{table}[t]
\caption{Performance of \textsf{CardEst} algorithms on single tables.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|cccc|c}
\hline
Dataset& Method & $50\%$ & $90\%$ & $95\%$ & $100\%$ & Latency (ms) \\ \hline
\multirow{7}{*}{DMV}
&\textbf{\textit{BayesCard}} &\firstcell{1.001} &\firstcell{1.024} &\secondcell{1.049} &\secondcell{7.641} &\thirdcell{2.1} \\ \cline{2-7}
&Histogram &1.318 & 12.32 & 143.6 & $1\cdot 10^4$ &\firstcell{0.1} \\ \cline{2-7}
&Sampling & 1.004 & 1.052 & 1.140 & 143.0 & 79 \\ \cline{2-7}
&Naru & \thirdcell{1.003} & \secondcell{1.026} & \firstcell{1.035} & \firstcell{5.500} & 86 \\ \cline{2-7}
&DeepDB & 1.006 & 1.124 & 1.193 & 108.1 & 5.1 \\ \cline{2-7}
&FLAT & \firstcell{1.001} & \thirdcell{1.028} & \thirdcell{1.066} & \thirdcell{11.37} & \secondcell{0.6} \\ \cline{2-7}
&MSCN & 1.210 & 2.263 & 4.507 & 151.8 & 3.4 \\
\thickhline
\multirow{7}{*}{CENSUS}
&\textbf{\textit{BayesCard}} & \firstcell{\textbf{1.063}} & \firstcell{\textbf{1.484}} & \firstcell{\textbf{2.052}} & \firstcell{\textbf{227.5}} & \secondcell{2.4} \\ \cline{2-7}
&Histogram &5.561 &259.8 &$5\cdot 10^4$ & $5\cdot 10^5$ & \firstcell{\textbf{0.2}}\\ \cline{2-7}
&Sampling & \secondcell{1.130} & \secondcell{1.412} & 374.2 & \thirdcell{1703} & 113 \\ \cline{2-7}
&Naru &\thirdcell{1.229} & \thirdcell{2.210} & \secondcell{7.156} & \secondcell{1095} & 129 \\\cline{2-7}
&DeepDB & 1.469 & 6.295 & 178.21 & $1\cdot 10^4$ & 25 \\ \cline{2-7}
&FLAT & 1.452 & 6.326 & \thirdcell{174.93} & $1\cdot 10^4$ & 25 \\ \cline{2-7}
&MSCN & 2.700 & 15.83 &$1\cdot 10^4$ & $1\cdot 10^5$ & \thirdcell{4.8} \\
\hline
\end{tabular}}
\vspace{-0.5em}
\label{tab: exp-single}
\end{table}
\subsection{Model evaluation on single tables}
\label{sect6.2}
In this section, we compare the performance of \textsf{CardEst} methods in terms of \emph{Algorithm} and \emph{Data} criteria.
\smallskip
\noindent\underline{\textbf{Algorithm criteria.}}
We evaluate the \textsf{CardEst} methods from four aspects: estimation accuracy, inference latency, model size and training time, and updating effects.
\noindent\textbf{Estimation accuracy:}
The estimation accuracy on the two real-world single-table datasets is reported in Table~\ref{tab: exp-single}, where the color shade in each cell corresponds to the rank among the \textsf{CardEst} methods. Compared with the traditional models (\textit{Histogram} and \textit{Sampling}), \textit{BayesCard} achieves $1$--$3$ orders of magnitude higher accuracy. Compared with the DL-based methods (\textit{Naru}, \textit{DeepDB} and \textit{FLAT}), \textit{BayesCard} has comparable or better estimation accuracy on the DMV dataset, and is significantly more accurate on the CENSUS dataset. This is because these DL models can accurately represent the data distribution of DMV, which contains relatively weak attribute correlations and fewer attributes. CENSUS, however, contains seven times as many attributes with more complex correlations.
As the learning space grows exponentially with the number of attributes, \textit{Naru}'s accuracy drops significantly.
For \textit{DeepDB} and \textit{FLAT}, their SPN or FSPN structures cannot capture the data distribution well in the presence of a large number of highly correlated attributes, so their performance also degrades heavily.
\noindent\textbf{Inference latency:}
As shown in Table~\ref{tab: exp-single}, apart from \textit{Histogram}, which leverages the attribute independence assumption for fast inference, \textit{BayesCard} generally attains the best inference latency ($1$--$2$ orders of magnitude lower) among the remaining methods.
It is worth noting that we observe a significant increase in latency from the DMV to the CENSUS dataset for all methods except \textit{BayesCard}. \textit{BayesCard}'s inference time appears to be insensitive to the number of attributes, mainly because the novel \emph{graph reduction} technique reduces the large CENSUS attribute graph to a much smaller one for inference.
\noindent\textbf{Model size and training time:} As shown in Figure~\ref{model_size}, apart from the traditional methods, \textit{BayesCard} achieves the smallest model size and the fastest training time, because the causal pattern of the datasets enables a compact representation of the data distribution. Note that Sampling is a model-free method without a model size or training time, so we do not include it in the figure.
\noindent\textbf{Updating time:}
We evaluate each method's updating effects by following an experimental setup similar to prior work~\cite{wang2020ready}.
Specifically, we create a copy of the original DMV dataset and sort the tuples based on the value of each column in ascending order. Then, we take the first $20\%$ of the data to train a stale model and use the remaining $80\%$ as data insertion updates. This procedure ensures that the training dataset has a different data distribution from the testing dataset; otherwise, the stale model would perform well without model updates. After each model finishes the updating process, we test it using the same query workload as in Table~\ref{tab: exp-single} and report the $95\%$ q-errors and total update time in Table~\ref{update}. Here, we refrain from comparing with the query-driven method \textit{MSCN}, because it requires a new query workload to update its model, which is unavailable in our experimental setting.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/model_size.pdf}
\vspace{-2.5em}
\caption{ Model storage and training time. }
\vspace{-1em}
\label{model_size}
\end{figure}
\begin{table}[t]
\caption{Performance of model updates of different \textsf{CardEst} methods on DMV. The baseline q-error is the 95\% q-error quoted from Table~\ref{tab: exp-single} for comparison.}
\vspace{-1em}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
\hline
Method & \textbf{\textit{BayesCard}} & Histogram & Naru & DeepDB & FLAT \\ \hline
baseline q-error & 1.049 & 143.6 &1.035 & 1.193 &1.066\\ \hline
95\% q-error & \textbf{1.049} & 143.6 & 14.79 & 18.83 & 1.451\\ \hline
Update time (s) &103 &\textbf{25} & 1980 & 142 & 257 \\
\hline
\end{tabular}}
\vspace{-1.5em}
\label{update}
\end{table}
\textit{BayesCard}, \textit{Histogram}, and \textit{DeepDB} all preserve the original structure and only incrementally update the parameters, so in general, they have the fastest update time. Among them, \textit{Histogram} has the least amount of parameters to update, so it has the best update time. We use the method described in the original paper~\cite{zhu2020flat} to update \textit{FLAT}, which generates new sub-structures to fit the inserted data distribution, so it is slightly slower than the previous three. \textit{Naru} uses the incoming data to fine-tune its pre-trained DNNs for three epochs, which is significantly slower than others.
After the model updates, we observe that \textit{BayesCard} has no drop in estimation accuracy, whereas the deep probabilistic models have degraded performance. The reasons can be summarized as follows: (1) \textit{BayesCard}'s structure captures the data's causal pattern, which often does not change after updates; (2) \textit{DeepDB}'s preserved structure is not robust against data distribution changes; (3) fine-tuning \textit{Naru}'s underlying DAR model overfits the information from the $20\%$ previously trained data, leading to degraded performance.
\noindent\textbf{\textit{Summary:}}
\textit{\textit{BayesCard} attains comparable or better estimation accuracy, lower inference latency, smaller model size, less training and update time than DL-based models. In addition, \textit{BayesCard} is 1-3 orders of magnitude more accurate than traditional methods.}
\smallskip
\noindent\underline{\textbf{Data criteria.}}
We evaluate the stability of \textsf{CardEst} methods in terms of \emph{Data} criteria from four aspects: data distribution, attribute correlation, domain size, and number of attributes.
SYNTHETIC datasets are generated using a similar approach to a recent benchmark study~\cite{wang2020ready}. Specifically, suppose we would like to generate a table $T$ with attributes $\{T_1,\ldots,T_n\}$ and $10^6$ tuples, where $n$ denotes the number of attributes (\emph{scale}). We generate the first column for $T_1$ using a Pareto distribution (via the scipy.stats.pareto function) with a controlled skewness $s$ and domain size $d$. For each remaining attribute $T_i$, we generate a column based on a previous attribute $T_j$ with $j<i$ to control the correlation $c$. For each tuple ($t_1, \ldots, t_n$) in $T$, we set $t_i$ to $t_j$ with probability $c$, and set $t_i$ to a random value drawn from the Pareto distribution with probability $1-c$.
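A sketch of this generation procedure is shown below; the parameter names and the discretization of the Pareto draws into $d$ distinct values are our assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import pareto

def generate_synthetic(n, rows=10**6, s=1.5, d=1000, c=0.5, seed=0):
    """Generate one SYNTHETIC table with n attributes."""
    rng = np.random.default_rng(seed)

    def pareto_col(state):
        # Pareto draws with skewness s, discretized into d values.
        v = pareto.rvs(s, size=rows, random_state=state)
        return v.astype(np.int64) % d

    cols = [pareto_col(seed)]        # first column T_1
    for i in range(1, n):
        j = int(rng.integers(0, i))  # pick an earlier attribute T_j
        fresh = pareto_col(seed + i)
        copy = rng.random(rows) < c  # copy t_j with probability c
        cols.append(np.where(copy, cols[j], fresh))
    return np.stack(cols, axis=1)
\end{verbatim}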
The experimental results on SYNTHETIC are shown in Figure~\ref{synth}. Due to space limits, we only plot the comparison between \textit{BayesCard} and \textit{DeepDB} on the estimation accuracy metric. Additional experimental results are reported in the appendix of the technical report~\cite{wu2020bayescard}. We summarize our observations as follows.
\noindent\textbf{\text{Distribution (s):}} Similar to the previous study~\cite{wang2020ready}, we find that increasing the Pareto distribution skewness severely degrades the performance of the \textit{Naru} and \textit{Sampling} methods, but has only a mild effect on \textit{BayesCard} and the other methods. This is because \textit{BayesCard}, \textit{Histogram}, \textit{FLAT}, and \textit{DeepDB} all use (multi-)histograms to represent distributions, which are robust against distribution changes.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{images/stability.pdf}
\vspace{-2em}
\caption{ Comparing \textit{BayesCard} and DeepDB's stability. }
\vspace{-2em}
\label{synth}
\end{figure}
\noindent\textbf{\text{Correlation (c):}} The increase in $c$ has no impact on \textit{BayesCard}, mild impact on \textit{Sampling}, \textit{Naru}, \textit{FLAT} and \textit{MSCN}, and severe impact on \textit{Histogram} and \textit{DeepDB}, which make local or global attribute independence assumptions. \textit{BayesCard} is able to capture the causal pattern of the datasets, and thus can represent any attribute correlation accurately.
\noindent\textbf{\text{Domain (d):}} Increasing the domain size degrades the estimation accuracy of all methods, because increasing $d$ may increase the data complexity exponentially, as there are $d^n$ possible values that a tuple can take. Fortunately, except for \textit{Naru}, the degradation in accuracy is within a reasonable range for all other methods.
\noindent\textbf{\text{Scale (n):}} Similar to domain size, increasing the number of attributes also increases the data complexity exponentially, and thus we expect to see a decrease in accuracy for all methods. Surprisingly, the performance of \textit{BayesCard} was not affected by $n$ at all. This is owing to the graph reduction technique, which significantly reduces the number of attributes involved during inference. This technique not only improves the inference latency but also increases the estimation accuracy as potential modeling errors on the reduced attributes are also eliminated.
Apart from estimation accuracy, \textit{BayesCard} also maintains very stable and robust performance in terms of inference latency, model size, and training time, which is analyzed in Appendix~C~\cite{wu2020bayescard}.
\noindent\textbf{\textit{Summary:}}
\textit{\textit{BayesCard} is much more stable and robust than other \textsf{CardEst} methods for datasets with various settings of data.}
\subsection{Model performance on multi-table dataset}
\label{sect6.3}
As reported in Table~\ref{tab: exp-multi} and Figure~\ref{model_size}, \textit{BayesCard} achieves comparable performance with the current SOTAs on the two query workloads of the IMDB dataset and preserves its superior inference latency, lightweight model storage, and fast training. Specifically, the estimation accuracy of \textit{BayesCard} is comparable to \textit{NeuroCard}, slightly better than \textit{DeepDB}, and slightly worse than \textit{FLAT}, but with up to $60\times$ smaller model size, and $10\times$ faster training and inference.
\begin{table}[t]
\vspace{-0.5em}
\caption{Performance of cardinality estimation algorithms on IMDB datasets with two query workloads.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|cccc|c}
\hline
Workload& Method & $50\%$ & $90\%$ & $95\%$ & $100\%$ & Latency (ms) \\ \hline
\multirow{7}{*}{JOB-light}
&\textbf{\textit{BayesCard}} & \secondcell{1.300} & \thirdcell{3.534} & \thirdcell{4.836} & \thirdcell{19.13} & \secondcell{5.4} \\ \cline{2-7}
& Histogram & 7.318 & 1006 & 5295 & $1 \cdot 10^7$ & \firstcell{\textbf{0.1}} \\ \cline{2-7}
& Sampling &2.464 &55.29 &276.1 & $4 \cdot 10^4$ &63 \\ \cline{2-7}
&NeuroCard & 1.580 & 4.545 & 5.910 & \firstcell{\textbf{8.510}} & 673 \\ \cline{2-7}
&DeepDB &\thirdcell{1.318} & \secondcell{2.500} & \secondcell{3.161} & 39.60 & 49 \\ \cline{2-7}
&FLAT &\firstcell{\textbf{1.150}} & \firstcell{\textbf{1.819}} & \firstcell{\textbf{2.247}} & \secondcell{10.86} & 6.8 \\ \cline{2-7}
&MSCN & 2.750 &19.70 &97.60 & 661.0 & \thirdcell{6.7} \\
\thickhline
\multirow{7}{*}{JOB-Comp}
&\textbf{\textit{BayesCard}} & \secondcell{1.271} & \secondcell{9.053} & \thirdcell{86.3} & \secondcell{$4 \cdot 10^4$} & \secondcell{6.2}\\ \cline{2-7}
&Histogram & 15.78 & 7480 & $4\cdot10^4$ & $1\cdot10^8$ & \firstcell{\textbf{0.2}} \\\cline{2-7}
&Sampling & 3.631 & 102.7 & 1374 & $8\cdot10^6$ & 101 \\ \cline{2-7}
&NeuroCard &\thirdcell{1.538} & \thirdcell{9.506} & \secondcell{81.23} & \thirdcell{$1 \cdot 10^5$} & 73\\ \cline{2-7}
&DeepDB & 1.930 & 28.30 & 248.0 & $1 \cdot 10^5$ &55\\ \cline{2-7}
&FLAT &\firstcell{\textbf{1.202}} & \firstcell{\textbf{6.495}} & \firstcell{\textbf{57.23}} & \firstcell{$\boldsymbol{1\cdot10^4}$} & 10.1\\ \cline{2-7}
&MSCN & 4.961 &45.7 &447.0 & $1\cdot10^5$ & \thirdcell{6.6} \\
\hline
\end{tabular}}
\vspace{-1em}
\label{tab: exp-multi}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{images/JOB_e2e_BN.pdf}
\vspace{-2.5em}
\caption{ End-to-End evaluation of \textit{BayesCard}.}
\vspace{-2.5em}
\label{e2e}
\end{figure}
\noindent\underline{\textbf{End-to-End evaluation on PostgreSQL.}}
Furthermore, we use the IMDB dataset to demonstrate \textit{BayesCard}'s behavior in terms of the \emph{System} criteria. The four aspects of the \emph{System} criteria are rather conceptual and hard to compare quantitatively in experiments, so we incorporate \textit{BayesCard} into a real-world DBMS, \emph{PostgreSQL 9.6.6}, to show that it can improve the query optimization process of an actual system. Specifically, we evaluate the end-to-end query processing time for JOB-light queries as shown in Figure~\ref{e2e}, and compare \textit{BayesCard} with the Postgres baseline, \textit{FLAT}, and the optimal result derived by injecting the true cardinalities during query optimization. We do not compare with other methods since \textit{FLAT} has established its SOTA performance in the same experiment, as reported in the original paper~\cite{zhu2020flat}. We observe that:
1) \textit{BayesCard} improves the Postgres baseline by $13.3\%$, suggesting that with more accurate \textsf{CardEst} results, the query optimizer can generate better query plans with lower execution cost.
2) The improvement of \textit{BayesCard} is very close to that of the method using the true cardinalities during query optimization (14.2\%). This verifies that the accuracy of \textit{BayesCard} is sufficient to generate high-quality query plans. Besides, even though \textit{BayesCard} has a slightly worse estimation accuracy, it still marginally outperforms \textit{FLAT}. Both methods produce similar execution plans, and the marginal gain of \textit{BayesCard} over \textit{FLAT} is mainly credited to its lower inference latency.
3) The improvement of \textit{BayesCard} and \textit{FLAT} becomes more significant on queries joining more tables, because the execution plan of a query joining only 2 or 3 tables is almost fixed. For queries joining more tables, the inaccurate Postgres baseline estimates may lead to a sub-optimal query plan, while \textit{BayesCard} and \textit{FLAT}, providing more accurate \textsf{CardEst} results, can find a better plan. This phenomenon has also been observed and explained in~\cite{zhu2020flat, perron2019learned}.
\noindent\textbf{\textit{Summary:}}
The integration of \textit{BayesCard} into Postgres validates it as a practical counterpart of the \textsf{CardEst} component in Postgres and also verifies that \textit{BayesCard} is a system-friendly \textsf{CardEst} method.
\subsection{Comparing algorithms within \textit{BayesCard}}
\label{sect6.4}
In this section, we compare different \textit{BayesCard}'s structure learning algorithms, perform ablation studies on the inference algorithms and summarize the take-home messages for using \textit{BayesCard}.
\begin{table}[t]
\caption{Comparing different structure learning algorithms of \textit{BayesCard} on CENSUS.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
\hline
\multirow{2}{*}{Algorithms} & 95\% & Infer. & Model & Train & Update \\
& q-error & Time (s) & Size (mb) & Time (min) & Time (s) \\ \hline
Exact & \textbf{1.24} & 16.5 & 43.7 & 298 & 1391\\ \hline
Greedy & 1.88 & 2.45 & 2.53 & 62.1 & 442\\ \hline
Chow-Liu &2.05 &\textbf{0.78} & \textbf{0.08} & \textbf{19.8} & \textbf{103} \\
\hline
\end{tabular}}
\label{structlearn}
\end{table}
\smallskip
\noindent\underline{\textbf{Comparing structure learning algorithms.}} We report the estimation accuracy, inference latency (without any of the proposed techniques), training time, model size, and update time on the CENSUS dataset for various structure learning algorithms in Table~\ref{structlearn}. For the \emph{exact} and \emph{greedy} algorithms, we incorporated the ``expert knowledge'' described in Section~\ref{sect4.3}; otherwise, these algorithms become intractable and cannot generate the BN's structure. We observe that with a more accurate structure learning algorithm (exact), the estimation accuracy improves significantly, but at great cost to the other four dimensions.
We do not report results for the DMV and IMDB datasets, which have far fewer attributes, because their causal patterns are much simpler and the different structure learning algorithms perform similarly on them.
\smallskip
\noindent\underline{\textbf{Ablation study of inference algorithms.}} We compare the novel inference optimizations of \textit{BayesCard} with the original variable elimination (VE) and belief propagation (BP) algorithms on a model learned with the Chow-Liu tree algorithm on the CENSUS dataset, as shown in Table~\ref{ablation}. We make the following observations: (1) the latency of the original VE and BP algorithms (up to 780 ms per query) is unaffordable for practical systems; (2) the graph reduction (GR) and just-in-time compilation (JIT) optimizations do not affect the estimation accuracy; (3) GR and JIT alone improve the inference latency of VE by 5 and 30 times respectively, and by 325 times when combined; (4) the progressive sampling algorithm (PS) produces a 4 times larger estimation error but with a significant improvement in latency. It is worth noting that the inference latency of PS and PS+GR can be much lower than that of VE+GR+JIT for a \textit{BayesCard} model with a complex structure (e.g., one learned by the exact structure learning algorithm).
\smallskip
\noindent\underline{\textbf{Take-home messages for \textit{BayesCard} users.}} (1) The Chow-Liu tree structure learning algorithm can efficiently generate a compact model, which has improved inference latency and stable performance over other structure learning algorithms. The degrades in accuracy can be compensated using ``expert knowledge'' described in Section~\ref{sect4.3}. (2) The \emph{VE+GR+JIT} inference algorithm efficiently produces exact estimation for BNs with discrete attributes, which is debuggable, predictable, reproducible, and very friendly for system development. However, \emph{PS+GR} is a general approach that has guaranteed efficiency for any complex DAG-structured BN, and support continuous attributes with any distribution. (3) \textit{BayesCard} provides a general \textsf{CardEst} framework for users to explore different trade-offs to suit their data and system settings.
\begin{table}[t]
\caption{Ablation study of different inference algorithms of \textit{BayesCard} on CENSUS.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
Algorithms & VE & BP & VE+GR & VE+JIT & VE+GR+JIT &PS & PS+GR \\ \hline
95\% q-error & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & 7.47 & 7.47 \\ \hline
Latency (ms) & 780 & 685 & 190 & 21.9 & \textbf{2.4} & 8.8 & 3.5 \\
\hline
\end{tabular}}
\vspace{-0.5em}
\label{ablation}
\end{table}
\smallskip
\section{Related Work}
\label{sect8}
We will briefly revisit the existing \textsf{CardEst} methods based on BN and the supervised \textsf{CardEst} methods.
\smallskip
\noindent \underline{\textbf{BN-based methods}} were explored for \textsf{CardEst} decades ago. Getoor et al.~\cite{2001SigmodGreedy} used a \textit{greedy} algorithm for BN structure learning, variable elimination for probability inference, and the referential integrity assumption for join estimation. Tzoumas et al.~\cite{tzoumas2011lightweight} learned an exact-structured BN and used belief propagation for inference. Halford et al.~\cite{dasfaa2019} adopted the Chow-Liu tree structure learning algorithm, the VE inference algorithm, and the uniformity assumption for join estimation. However, no practical DBMS incorporates these methods because of their impractical structure learning processes, intractable inference latency, or inaccurate estimation of join queries caused by over-simplified assumptions.
\smallskip
\noindent \underline{\textbf{Supervised \textsf{CardEst} methods}} use the feedback of past queries to train ML models that map a featurized query $Q$ to its actual cardinality. The first approach using neural networks for cardinality estimation was published for UDF predicates~\cite{5}. Later on, a regression-based model~\cite{7} and a semi-automatic alternative~\cite{8} were presented. Recently, supervised DL-based approaches have used a multi-set convolutional network (\textit{MSCN})~\cite{MSCN}, a tree-LSTM~\cite{sun2019end}, and a lightweight XGBoost model~\cite{dutt2019selectivity} for \textsf{CardEst}. However, supervised learning approaches have two major drawbacks, as mentioned in~\cite{deepDB}: (1) their models neglect the data itself and are not robust to changes in the query workload; (2) collecting the training data can be very expensive, and the training data has to be recollected when the workload changes.
Therefore, in general, query-driven supervised ML methods for cardinality estimation are not as flexible and accurate as data-driven unsupervised ML methods.
\smallskip
\section{Conclusion}
\label{sect9}
This paper proposes \textit{BayesCard}, the first framework that unifies the existing efforts on PPLs and BNs and optimizes them for \textsf{CardEst} under different data and system settings. \textit{BayesCard} revitalizes BNs with new techniques for model construction and probability inference, which make it a desirable \textsf{CardEst} method satisfying the \emph{algorithm}, \emph{data}, and \emph{system} criteria at the same time. Extensive experimental studies and an end-to-end system deployment establish \textit{BayesCard}'s superiority over existing \textsf{CardEst} methods.
Furthermore, \textit{BayesCard} captures the underlying data causality, which benefits other data-related tasks. In future work, we plan to explore the possibility of using \textit{BayesCard} for other tasks, such as data cleaning, entity matching, and approximate query processing.
\clearpage
\newpage
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Cardinality estimation (\textsf{CardEst}), which aims at predicting the result size of a SQL query without actually executing it, is a longstanding and fundamental problem in DBMS. It is a core component of query optimizers~\cite{howgoodare,4,6} for producing high-quality query plans. Although a variety of \textsf{CardEst} methods have been proposed over the last several decades, it remains a notoriously challenging problem in the DB community.
\smallskip
\noindent{\underline{\textbf{Status and challenges of \textsf{CardEst}.}}}
Given a table $T$ on attributes $\{T_1, \ldots, T_n\}$ and a query $Q$, \textsf{CardEst} is equivalent to estimating the probability of tuples in $T$ satisfying $Q$.
Therefore, the core problem of \textsf{CardEst} is how to model the distribution of $T$ in order to estimate the probability of $Q$. Based on existing work~\cite{wang2020ready}, we believe that an applicable \textsf{CardEst} method should satisfy criteria from three aspects, namely \emph{A (Algorithm)}, \emph{D (Data)}, and \emph{S (System)}. (\emph{A}): the \textsf{CardEst} algorithm itself should have high estimation accuracy, fast inference (and training) time, lightweight storage cost, and an efficient updating process, in order to generate high-quality query plans~\cite{zhu2020flat, perron2019learned}.
(\emph{D}): the \textsf{CardEst} method should maintain stable performance for different data with varied distributions, attribute correlations, domain sizes, and numbers of attributes. (\emph{S}): the \textsf{CardEst} method should be friendly for system deployment, with an interpretable model, predictable behavior, reproducible results, and easy debugging~\cite{wang2020ready}.
The simplest \textsf{CardEst} method assumes that all attributes are mutually independent and builds a histogram on each $T_i$. Its estimation latency is low, but its error is high since correlations between attributes are ignored. Another class of methods samples tuples from $T$ for \textsf{CardEst}; they can be inaccurate on high-dimensional data or queries with small cardinality. These traditional \textsf{CardEst} methods have significant algorithmic drawbacks and unstable performance w.r.t.\ varied data, but they are friendly for system deployment.
Recently, numerous works attempt to utilize machine learning (ML), especially deep learning (DL), techniques for \textsf{CardEst}. They either build supervised models mapping a featurized query $Q$ to its cardinality~\cite{7, MSCN} or learn unsupervised models of $P_T$, the joint probability distribution of table $T$, to support computing the probability of any query $Q$ on $T$~\cite{zhu2020flat, deepDB, naru}. DL-based \textsf{CardEst} methods greatly improve the estimation accuracy but often sacrifice other algorithmic aspects. More importantly, their performance can be greatly affected by the data, and they are often difficult to deploy in systems because of, e.g., hyper-parameter tuning and their ``black-box'' nature.
Table~\ref{ADSsummary} summarizes the status of existing \textsf{CardEst} methods according to the \emph{ADS} criteria. We can clearly see that no existing solution satisfactorily addresses this problem.
\begin{table}[t]
\centering
\caption{Status of \textsf{CardEst} methods according to \emph{ADS} criteria.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\tabincell{c}{\textsf{CardEst} \\ Methods}} & \multicolumn{5}{c|}{Algorithm} & \multicolumn{4}{c|}{Data} & \multicolumn{4}{c|}{System}
\\\cline{2-14}
& \rot{Accuracy} & \rot{Latency} & \rot{Training} & \rot{Model Size} & \rot{Updating} & \rot{Distribution} & \rot{Correlation} &\rot{Domain} & \rot{Scale} &\rot{Debug} &\rot{Interpret} &\rot{Predict} &\rot{Reproduce} \\\hline
\it Histogram &\xmark &\cmark &\cmark &\cmark &\cmark &\cmark &\xmark &\cmark & \xmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\it Sampling &\xmark &\xmark &\cmark &\cmark &\cmark &\xmark &\cmark &\xmark & \xmark &\cmark &\xmark &\cmark &\xmark \\ \cline{1-14}
\it Naru &\cmark &\xmark &\cmark &\cmark &\xmark &\xmark &\cmark &\xmark & \xmark &\xmark &\xmark &\xmark &\xmark \\ \cline{1-14}
\it DeepDB &\cmark &\cmark &\cmark &\cmark &\xmark &\cmark &\xmark &\cmark & \xmark &\xmark &\xmark &\cmark &\cmark \\ \cline{1-14}
\it FLAT &\cmark &\cmark &\cmark &\cmark
&\xmark &\cmark &\cmark &\cmark & \xmark &\xmark &\xmark &\cmark &\cmark \\ \cline{1-14}
\it MSCN &\xmark &\cmark &\xmark &\cmark
&\xmark &\cmark &\cmark &\cmark & \xmark &\xmark &\xmark &\xmark &\cmark \\ \cline{1-14}
\it BN &\cmark &\xmark &\xmark &\cmark
&\cmark &\cmark &\cmark &\cmark & \cmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\textbf{\textit{BayesCard}} &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark &\cmark & \cmark &\cmark &\cmark &\cmark &\cmark \\ \cline{1-14}
\end{tabular}}
\vspace{-1.5em}
\label{ADSsummary}
\end{table}
\smallskip
\noindent{\underline{\textbf{Our motivation.}}}
Recently, Bayesian networks (BNs), a classical method, have re-attracted much attention in the ML community as a way to overcome the drawbacks of deep models~\cite{zhu2020efficient,lee2019scaling,ye2020optimizing}, and they are naturally suitable for \textsf{CardEst}~\cite{2001SigmodGreedy, tzoumas2013vldb, dasfaa2019}. In comparison with other methods, BNs have significant advantages in terms of the \emph{ADS} criteria.
First, from the algorithm perspective, BNs are very compact and easy to update.
Second, BNs reflect the intrinsic causal relations between attributes, which are robust to data changes. Thus, they tend to maintain stable performance as the data varies in correlation, distribution, etc.
Third, BNs are \emph{interpretable}, easy to predict, maintain, validate and improve with expert knowledge, thus friendly for system deployment.
These attractive models were proposed for \textsf{CardEst} decades ago~\cite{2001SigmodGreedy, tzoumas2013vldb}, but the NP-hard model construction process and intractable probability inference of BNs make them impractical for DBMSs.
\emph{In summary, as long as we can overcome the inefficiency of model construction and probability inference of BNs, we can obtain a desirable method for \textsf{CardEst} satisfying the ADS criteria simultaneously.}
\smallskip
\noindent{\underline{\textbf{Our contributions.}}}
In this paper, we try to resolve the \textsf{CardEst} challenges by revitalizing BNs with new techniques. We propose \textit{BayesCard}, a unified Bayesian framework for \textsf{CardEst}. The key idea of \textit{BayesCard} is to build an ensemble of BNs to model the distributions of the tables in a database, and to use the constructed BNs to estimate the cardinality of any query. \textit{BayesCard} incorporates the recent advances in probabilistic programming languages (PPLs) for building BNs~\cite{edward,pyro,InferNET18,schreiber2018pomegranate,ankan2015pgmpy, pymc}.
PPLs allow for a declarative specification of probabilistic models, within which each variable is defined as a probability distribution influenced by others.
Based on PPLs, we can easily define BNs to support various structure learning, parameter learning, and inference algorithms. Therefore, \textit{BayesCard} provides a user-friendly framework of building different BNs suitable for various data and system settings.
The key techniques of \textit{BayesCard} overcome the deficiency of existing BNs.
First, based on PPLs, \textit{BayesCard} designs the \emph{progressive sampling} and \emph{compiled variable elimination} probability inference algorithms, which significantly accelerate the traditional BN's inference process. Moreover, \textit{BayesCard} adapts its inference algorithms to efficiently handle multi-table join queries. Second, \textit{BayesCard} designs an efficient model construction algorithm for building an ensemble of BNs. Furthermore, using PPLs, \textit{BayesCard} can pre-specify constraints on the learned BN structure with prior knowledge to speed up the structure learning process. An accurate and lightweight BN structure could be obtained efficiently.
According to our benchmark evaluation, in comparison with DL-based \textsf{CardEst} methods, \textit{BayesCard} achieves comparable or better accuracy, $1$--$2$ orders of magnitude lower inference latency (close to histograms) and update time, and $1$--$3$ orders of magnitude faster training and smaller model size.
Meanwhile, \textit{BayesCard} keeps stable performance when varying data with different settings.
We also integrate \textit{BayesCard} into PostgreSQL. On the benchmark workload, it improves the end-to-end query time by $13.3\%$, which is very close to the optimal result of $14.2\%$ using the true cardinality.
In summary, the main contributions of this paper are as follows:
\indent $\bullet$
We analyze the existing \textsf{CardEst} methods in terms of the \emph{ADS} criteria to evaluate a good and practical \textsf{CardEst} method. (Section~\ref{sect2})
\indent $\bullet$
We propose \textit{BayesCard}, a general framework that unifies the efforts behind PPLs for constructing BNs for \textsf{CardEst}. (Section~\ref{sect3})
\indent $\bullet$
We develop algorithms and techniques in \textit{BayesCard} using PPLs to improve inference latency and reduce the model construction cost, which help \textit{BayesCard} attain the desired properties of \textsf{CardEst} methods.
(Section~\ref{sect4} and~\ref{sect5})
\indent $\bullet$ We conduct extensive experiments on benchmarks and integrate \textit{BayesCard} into real-world system to demonstrate its superiority from \textit{ADS} criteria. (Section~\ref{sect6})
\section{Problem definition and Analysis}
\label{sect2}
In this section, we first formally define the \textsf{CardEst} problem from both database and statistical perspectives and then exhaustively examine the existing traditional methods and state-of-the-art DL-based methods for \textsf{CardEst} from the \emph{ADS} criteria.
\smallskip
\noindent\underline{\textbf{\textsf{CardEst} problem.}}
Let $T$ be a table with $n$ attributes $T_1, \cdots, T_n$.
For each $1 \leq i \leq n$, let $D(T_i)$ denote the domain (all unique values) of attribute $T_i$. Any selection query $Q$ on $T$ can be represented in a canonical form\footnote{Handling pattern matching queries or string predicates (e.g., ``LIKE'' queries) requires extensions (such as q-grams~\cite{chaudhuri2004selectivity}), which we do not consider in this paper.} as $Q = \{T_1 \in R_Q(T_1) \wedge T_2 \in R_Q(T_2) \wedge \cdots \wedge T_n \in R_Q(T_n)\}$, where $R_Q(T_i) \subseteq D(T_i)$ is the region specified by $Q$ over attribute $T_i$. Without loss of generality, we have
$R_Q(T_i) = D(T_i)$ if $Q$ has no constraint on attribute $T_i$.
Let $C_Q$ denote the cardinality, i.e., the number of tuples in $T$ satisfying query $Q$. From a statistical perspective, we can also regard all tuples in $T$ as points sampled according to the joint distribution $P_T = P_T(T_1, T_2, \dots, T_n)$ of all attributes. Let $P_T(Q) = P_T(T_1 \in R_Q(T_1), T_2 \in R_Q(T_2), \cdots , T_n \in R_Q(T_n))$ be the probability of the region specified by $Q$. Then, we have $C_Q = P_T(Q) \cdot |T|$. Thus, the \textsf{CardEst} problem can essentially be reduced to modeling the probability density function (PDF) $P_T$ of table $T$. In this paper, we focus on data-driven \textsf{CardEst} methods, which try to model $P_T$ directly. Query-driven \textsf{CardEst} methods implicitly model $P_T$ by building functions mapping $Q$ to $P_T(Q)$.
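To make this statistical view concrete, the following toy Python snippet (our own illustration with hypothetical data, not part of \textit{BayesCard}) computes $C_Q = P_T(Q) \cdot |T|$ using the exact empirical distribution of a tiny table; data-driven methods replace this empirical distribution with a compact model of $P_T$.
\begin{verbatim}
import pandas as pd

# Hypothetical toy table T with two attributes.
T = pd.DataFrame({
    "age":  [25, 32, 25, 47, 32, 25],
    "city": ["NY", "SF", "NY", "NY", "LA", "SF"],
})

# Query Q in canonical form: age IN {25, 32} AND city IN {"NY"}.
R_Q = {"age": {25, 32}, "city": {"NY"}}

mask = pd.Series(True, index=T.index)
for attr, region in R_Q.items():
    mask &= T[attr].isin(region)   # conjunction over attributes
p_q = mask.mean()                  # P_T(Q) under the empirical distribution
print(p_q * len(T))                # C_Q = P_T(Q) * |T|, here 2.0
\end{verbatim}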
\smallskip
\noindent\underline{\textbf{Existing \textsf{CardEst} Methods.}} We review the two traditional methods widely used by commercial DBMS and four state-of-the-art (SOTA) DL-based methods.
\textit{1). Histogram}~\cite{10} assumes all attributes in $T$ are independent, so that $P_T$ can be estimated as $\prod_{i=1}^n P_T(T_i)$.
\textit{2). Sampling} is a model-free method, which fetches tuples from $T$ on-the-fly to estimate the probability of $Q$ on the samples.
\textit{3). Naru}~\cite{naru}, based on deep auto-regression models (DAR)~\cite{made}, factorizes $P_T$ as $P_T(T_1) * \prod_{i=2}^{n} P_T(T_i|T_1,\ldots,T_{i-1})$ and approximates each conditional PDF by a deep neural network (DNN).
\textit{4). DeepDB}~\cite{deepDB}, based on sum-product networks (SPN)~\cite{SPN}, approximates $P_T$ by recursively decomposing it into local and simpler PDFs. Specifically, the tree-structured SPN contains sum nodes that split $P_T$ into multiple $P_{T'}$ on tuple subsets $T' \subseteq T$, product nodes that decompose $P_{T'}$ into $P_{T'}(T_i) \cdot P_{T'}(T_j)$ if attributes $T_i$ and $T_j$ are independent, and leaf nodes when $P_{T'}$ is a univariate PDF.
\textit{5). FLAT}~\cite{zhu2020flat}, based on factorized-split-sum-product networks (FSPN)~\cite{wu2020fspn}, improves over SPN by
adaptively decomposing $P_T$ according to the attribute dependence level. It adds the factorize node to split $P_T$ as $P_T(W) \cdot P_T(H | W)$ where $H$ and $W$ are highly and weakly correlated attributes in $T$. $P_T(W)$ is modeled in the same way as SPN. $P_T(H | W)$ is decomposed into small PDFs by the split nodes until $H$ is locally independent of $W$. Then, the multi-leaf node is used to model the multivariate PDF $P_T(H)$ directly.
\textit{6). MSCN}~\cite{MSCN}, is a query-driven method, which uses the set-convolutional DNN to learn the mapping functions between the input query $Q$ and its probability $P_T(Q)$.
\smallskip
\noindent\underline{\textbf{Analysis Results.}} We elaborate the \emph{ADS} criteria for the \textsf{CardEst} problem and analyze the aforementioned methods in detail. The results are summarized in Table~\ref{ADSsummary}.
\noindent\textit{$\bullet$ \textbf{Algorithm.}}
From the algorithm's perspective, we consider five important metrics that are widely used in existing work~\cite{deepDB, zhu2020flat} to evaluate the performance of \textsf{CardEst} methods.
$\bigcdot$
\emph{Estimation accuracy} is one of the priorities for \textsf{CardEst}, since inaccurate estimates typically lead to sub-optimal and slow query plans~\cite{howgoodare}. Unfortunately, the traditional methods frequently produce poor estimates: \emph{Histogram} can cause large estimation errors in the presence of attribute correlations, and \emph{Sampling} may be inaccurate on high-dimensional data with a limited sampling size.
Query-driven methods, such as \emph{MSCN}, also have poor accuracy if the target query does not follow the same distribution as the query workload the model was trained on. According to existing evaluations~\cite{naru, deepDB, zhu2020flat}, DL-based \textsf{CardEst} methods can produce accurate results.
$\bigcdot$
\emph{Inference latency} is crucial, since a \textsf{CardEst} method needs to be invoked numerous times during query optimization~\cite{3,6}. As a result, high latency may degrade the end-to-end query time spent on plan generation and execution.
The inference latency of \emph{Naru} is high because of its large underlying DNN models and repetitive sampling process. \emph{Sampling} is also not efficient when the sample size is large.
$\bigcdot$
\emph{Training cost} refers to \textsf{CardEst} model construction time for a given database.
Query-driven methods, such as \emph{MSCN}, are in general slow to train, since an enormous number of queries need to be executed to learn the models.
$\bigcdot$
\emph{Model size} is related to the storage cost of the models. In modern DBMSs, the space costs of all these \textsf{CardEst} methods are affordable.
$\bigcdot$
\emph{Update time} is also important since table data frequently changes. Traditional methods are easy to update while no existing DL-based method can keep up with the fast data updates~\cite{wang2020ready}.
\smallskip
\noindent\textit{$\bullet$ \textbf{Data.}}
Generally, a DBMS will process various data with different settings. Therefore, we analyze whether the \textsf{CardEst} methods have a stable performance on four typical variations of data settings, namely data \textit{distribution}, attribute \textit{correlation}, attribute \textit{domain} size, and the number of attributes (\textit{scale}).
For traditional methods, \emph{Histogram}'s estimation error grows exponentially when the data are highly correlated. \emph{Sampling}'s accuracy degrades on high-dimensional data with larger domain sizes and more attributes. In addition, for highly skewed data, the fetched samples tend to miss query ranges that carry small probability mass, which also degrades its accuracy.
For DL-based methods, the poor performance stability of \emph{Naru}, \emph{DeepDB} and \emph{MSCN} is demonstrated in a recent benchmark study~\cite{wang2020ready}.
In a nutshell, their accuracy decreases while inference and training costs increase with more attributes. \emph{Naru} is also sensitive to the data distribution and domain size, since a skewed or large PDF is more difficult to model. \emph{DeepDB} has the intrinsic drawback of generating large and inaccurate SPNs on highly correlated attributes~\cite{expsSPN}. \emph{FLAT} overcomes this drawback of \emph{DeepDB}, but its performance also degrades severely with more attributes.
\smallskip
\noindent\textit{$\bullet$ \textbf{System.}}
The \textsf{CardEst} method should satisfy the following properties for friendly system deployment~\cite{wang2020ready}.
$\bigcdot$
\emph{Debuggability} and easy to tune are crucial to the DB experts. The DL-based methods with ``black-box'' components may fail silently and contain high risks of missing a bug~\cite{wang2020ready}.
$\bigcdot$
\emph{Interpretability} is necessary when system developers would like to explain and validate the learned component, which is not satisfied by the DL-based methods~\cite{interpretation}.
$\bigcdot$
\emph{Predictability} is important since the system developers would like to predict the performance before actual deployment. As \emph{Naru} and \emph{MSCN} contain DNNs with illogical behaviors~\cite{wang2020ready}, their performance is hard to predict.
$\bigcdot$
\emph{Reproducibility} is necessary to locate system issues. As \emph{Sampling} and \emph{Naru} involve stochastic processes, their results cannot be reproduced by estimating the same query one more time.
\smallskip
\noindent\underline{\textbf{Summary.}}
From Table~\ref{ADSsummary}, we observe that \emph{no} existing \textsf{CardEst} method is satisfactory in all criteria. Our detailed experimental evaluation in Section~6 also verifies this observation.
Therefore, we design a new \textsf{CardEst} framework \emph{BayesCard} that successfully satisfies all criteria for the first time.
\begin{figure*}[t]
\centering
\includegraphics[width=17.5cm]{images/framework_new.pdf}
\caption{An example workflow of \textit{BayesCard}. }
\label{fig_model}
\end{figure*}
\section{BayesCard Overview}
\label{sect3}
In this section, we briefly review the background knowledge on BN and PPL in Section~\ref{sect3.1}, which are the foundations of \textit{BayesCard}. Then we overview our new framework \textit{BayesCard} for \textsf{CardEst} in Section~\ref{sect3.2}.
\subsection{Background Knowledge}
\label{sect3.1}
\noindent\underline{\textbf{Bayesian networks}} specify a probability distribution $P_T$ of a table $T$ whose attributes form a directed acyclic graph (DAG), such as Image (2.ii) in Figure~\ref{fig_model}. Each node of the DAG corresponds to an attribute, and each edge defines the causal dependency between two nodes. An attribute is dependent on its parents (the source nodes with edges directed to this attribute) and conditionally independent of all other attributes given its parents~\cite{PGM}. Thus, $P_T$ can be compactly represented as $P_T(T_1, \cdots, T_n) = \prod_{i=1}^n P_T(T_i|Par(T_i))$, where $Par(T_i)$ denotes the set of parents of $T_i$ in the defined DAG.
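As a minimal illustration of this factorization (a toy example of ours over a hypothetical DAG $A \rightarrow B$, $A \rightarrow C$, not the paper's code), the probability of a full tuple is simply the product of the relevant CPT entries:
\begin{verbatim}
# CPTs: map (value, parent-value-tuple) -> probability.
parents = {"A": [], "B": ["A"], "C": ["A"]}
cpt = {
    "A": {(0, ()): 0.6, (1, ()): 0.4},
    "B": {(0, (0,)): 0.7, (1, (0,)): 0.3,
          (0, (1,)): 0.2, (1, (1,)): 0.8},
    "C": {(0, (0,)): 0.5, (1, (0,)): 0.5,
          (0, (1,)): 0.9, (1, (1,)): 0.1},
}

def joint_prob(assignment):
    """P_T of a full tuple, e.g. {'A': 1, 'B': 0, 'C': 1}."""
    p = 1.0
    for attr, pars in parents.items():
        par_vals = tuple(assignment[q] for q in pars)
        p *= cpt[attr][(assignment[attr], par_vals)]
    return p

print(joint_prob({"A": 1, "B": 0, "C": 1}))  # 0.4 * 0.2 * 0.1 = 0.008
\end{verbatim}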
\smallskip
\noindent\underline{\textbf{Probabilistic programming languages}}
are a general-purpose programming paradigm used to specify probabilistic models and perform inference on them automatically. Unlike in traditional programming languages (TPLs), each variable in a PPL is defined as a probability distribution, whose value can be conditioned on a set of other variables. The compilers of PPLs are optimized to efficiently learn the parameters of variable distributions and to sample from these distributions.
PPLs have been applied to various ML domains, such as computer vision~\cite{kulkarni2015picture}, with remarkable performance.
To define a BN, for each attribute $T_i$, the PPLs can define a variable whose distribution is conditioned on variables in $Par(T_i)$. For example, the first seven lines in the PPL program on the right side of Image (2.ii) in Figure~\ref{fig_model} sufficiently defines the BN on the left as seven variables.
PPLs have the following properties.
First, PPLs can define variables of any general distribution, including tabular and continuous distributions, which helps to build BNs with continuous attributes. Whereas, existing BNs for \textsf{CardEst} problems~\cite{2001SigmodGreedy, dasfaa2019, tzoumas2013vldb} only support discrete variables.
Second, PPLs can efficiently learn the parameters using maximum likelihood estimation (MLE)~\cite{InferNET18}; e.g. the parameters of the example BN in Image (2.ii) can be derived by simply executing the last two lines of code.
Third, PPLs~\cite{pomegranate} also incorporate several mainstream algorithms for learning a BN's structure, which captures the causal pattern of the attributes in the data. The structure learning procedure of PPLs supports pre-specifying sub-structures.
Fourth, PPLs can efficiently generate samples from the distribution of each variable.
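For instance, the MLE parameter learning mentioned above amounts to computing conditional relative frequencies from the data. The sketch below (plain Python with hypothetical data, standing in for what a real PPL performs internally) fits the CPTs of a discrete BN:
\begin{verbatim}
from collections import Counter
import pandas as pd

data = pd.DataFrame({"A": [0, 0, 1, 1, 1], "B": [0, 1, 1, 1, 0]})
parents = {"A": [], "B": ["A"]}

def fit_cpt(attr, pars, df):
    """MLE of P(attr | pars): conditional relative frequencies."""
    joint = Counter(zip(*(df[c] for c in pars + [attr])))
    marg = Counter(zip(*(df[c] for c in pars))) if pars else {(): len(df)}
    # key = (parent values..., attr value); divide by the parent count
    return {key: cnt / marg[key[:-1]] for key, cnt in joint.items()}

cpts = {a: fit_cpt(a, parents[a], data) for a in parents}
print(cpts["B"])  # e.g. P(B=1 | A=1) = 2/3
\end{verbatim}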
\vspace{1em}
\subsection{BayesCard framework}
\label{sect3.2}
In this paper, we propose \textit{BayesCard}, a framework for \textsf{CardEst}.
The key idea of \textit{BayesCard} is to build an ensemble of BNs to model the distributions of the tables in a database and to use the constructed BNs to estimate the cardinality of any query. This framework, including the model construction and probability inference of BNs, is implemented using PPLs in order to leverage their compiler and execution advantages in representing probability distributions.
Specifically, the inputs of \textit{BayesCard} are a DB $\mathcal{D}$ containing $n$ tables and its join schema $\mathcal{J}$.
Following prior works' assumption~\cite{zhu2020flat, NeuroCard, Sampling}, \textit{BayesCard} only considers join schemas that form a tree, i.e., without self-joins or cyclic joins.
In the join tree $\mathcal{J}$, each node represents a table and each edge represents a join relation between two tables.
For example, Figure~\ref{fig_model}-(1) illustrates a DB with 11 tables and the join tree schema on the tables.
Given $\mathcal{D}$ and $\mathcal{J}$, \textit{BayesCard} constructs an ensemble of $m$ BNs.
Each BN models the joint distribution of a subset of connected tables in $\mathcal{J}$.
For example in Figure~\ref{fig_model}-(1), \textit{BayesCard} builds 5 BNs ($BN_1, \ldots, BN_5$ in the red circles) to characterize the distributions of tables in the DB, where $BN_4$ is built to represent the joint distribution of tables $H$ and $K$.
To accurately model the joint distribution of multiple tables $\mathcal{T}$, \textit{BayesCard} uses the \emph{fanout} method as in prior works~\cite{deepDB, zhu2020flat, NeuroCard}, by creating a BN on the full outer join results of $\mathcal{T}$, along with additional fanout attributes. For example, as shown in Figure~\ref{fig_model}-(2.i), $BN_4$ models $\Omega$, the full outer join of $H$ and $K$ (shown in Figure~\ref{fig_model}-(2.iii)), along with the added fanout attributes:
$F_{H\xrightarrow{}\Omega}$, indicating how many tuples in $\Omega$ a particular tuple in $H$ fans out to; $F_{K\xrightarrow{}\Omega}$, indicating how many tuples in $\Omega$ a particular tuple in $K$ fans out to; and $F_{\Omega \xrightarrow{} \{A,D\}}$, indicating how many tuples in the outer join table $\Omega \mathbin{\ojoin\mkern-5.5mu\bowtie\mkern-5.5mu\ojoin} A \mathbin{\ojoin\mkern-5.5mu\bowtie\mkern-5.5mu\ojoin} D$ a particular tuple in $\Omega$ fans out to.
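The pandas sketch below (hypothetical toy tables of ours; \textit{BayesCard}'s actual implementation may differ) illustrates how such fanout attributes can be materialized for a two-table join, where we adopt the simplifying convention that non-matching tuples get fanout $1$:
\begin{verbatim}
import pandas as pd

H = pd.DataFrame({"k": [1, 1, 2], "h2": [10, -1, 10]})
K = pd.DataFrame({"k": [1, 2, 2], "k2": [20, 10, 20]})

# The fanout of a tuple = number of partner tuples sharing its join key.
fan_H = K.groupby("k").size()  # an H tuple with key k matches fan_H[k] K tuples
fan_K = H.groupby("k").size()  # symmetric for K

omega = H.merge(K, on="k", how="outer")          # full outer join
omega["F_H_to_Omega"] = omega["k"].map(fan_H).fillna(1).astype(int)
omega["F_K_to_Omega"] = omega["k"].map(fan_K).fillna(1).astype(int)
print(omega)
\end{verbatim}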
Each BN can be represented as a PPL program, such as $BN_4$ in Figure~\ref{fig_model}-(2.ii). The probability $P_{\mathcal{T}}(Q)$ of any query $Q$ on a subset of tables $\mathcal{T}$ can be estimated based on the combination of multiple BNs containing tables covered in $\mathcal{T}$. The process of estimating the probability of a given query $P_{\mathcal{T}}(Q)$ is called probability inference.
\smallskip
\noindent \underline{\textbf{Challenges.}} Existing PPLs are not optimized for \textsf{CardEst} tasks in terms of probability inference and model construction, which are all addressed and optimized in \textit{BayesCard}.
\noindent \textbf{Probability inference.}
After the PPL program representing a BN is declared, existing PPLs do not support using this program for efficient probability inference, which is the key to the \textsf{CardEst} problem. Therefore, \textit{BayesCard} tailors existing PPLs and designs two efficient inference algorithms. Using PPLs' extremely efficient sampling process, \textit{BayesCard} proposes the \emph{progressive sampling} algorithm, which is guaranteed to run in linear time for estimating any query (Section~\ref{sect4.1}). In addition, \textit{BayesCard} invents \emph{compiled variable elimination} to further accelerate inference (Section~\ref{sect4.2}). Furthermore, \textit{BayesCard} adapts its inference algorithms to the \emph{fanout} method to efficiently combine results from multiple BNs when estimating the probability of join queries (Section~\ref{sect4.3}).
\noindent \textbf{Model construction.}
A database generally contains multiple tables and deciding which ensemble of BNs corresponding to the partition of tables to learn significantly affects the \textsf{CardEst} accuracy and efficiency. Therefore, \textit{BayesCard} designs the ensemble construction algorithm to explore the optimal partition of all tables in the DB and optimizes the \textsf{CardEst} quality (Section~\ref{sect5.1}).
Furthermore, existing PPLs do not explore how to accelerate the structure learning algorithms in DB scenarios. \textit{BayesCard} tailors and speeds up these algorithms by exploiting functional dependencies and other user-defined expert knowledge (Section~\ref{sect5.2}).
\section{Probability Inference in BayesCard}
\label{sect4}
In this section, we address the \emph{probability inference} in \textit{BayesCard}. Specifically, we first propose two novel inference algorithms based on PPLs for a single BN model, namely \emph{progressive sampling} (Section~\ref{sect4.1}), which is guaranteed to return an approximate probability estimate in linear time, and \emph{compiled variable elimination} (Section~\ref{sect4.2}), which returns the exact probability with two orders of magnitude acceleration. Next, we present how to extend these two algorithms to multiple BNs to support join queries (Section~\ref{sect4.3}).
\subsection{Progressive sampling}
\label{sect4.1}
\begin{algorithm}[t]
\small
\caption{Progressive Sampling Inference Algorithm}
\label{prog_samp_algo}
\begin{flushleft}
\textbf{Input}: a table $T$ with $n$ attributes, a query $Q$ with region $R_{Q}$ and a PPL program defining the BN on $P_T$
\end{flushleft}
\begin{algorithmic}[1]
\State Align the attributes in topological order $T_1, \ldots, T_n$
\State $p \gets 1$, $S \gets [0]_{k \times n}$, a $k \times n$ matrix of samples
\For{$i \in \{1, \ldots, n\}$}
\State Take $S[Par(T_i)]$, the columns in $S$ corresponding to attributes in $Par(T_i)$
\State $\hat{P_i}(T_i) \gets \frac{1}{k} \sum_{d \in S[Par(T_i)]} P_T(T_i|d)$
\State $p \gets p * \hat{P_i}(T_i \in R_Q(T_i))$
\State Define a PPL variable $P'_i$ by normalizing $\hat{P_i}(t_i|t_i \in R_Q(T_i))$
\State $S[i] \gets $ $k$ points sampled from $P'_i$
\EndFor
\State \textbf{return} $p$
\end{algorithmic}
\end{algorithm}
We first define the inference procedure for a simple case, where we have a query $Q$ on tables $T$ in a DB and a single BN that exactly models $P_T$ on the full outer join of the tables $T$. In this case, $P_T(Q)$, and hence the cardinality of $Q$, can be derived directly from this BN.
As defined in Section~\ref{sect2}, a query $Q$ takes the form of $\{T_1 \in R_Q(T_1) \wedge T_2 \in R_Q(T_2) \wedge \cdots \wedge T_n \in R_Q(T_n)\}$, where $R_Q$ is the region defined by $Q$ over attributes in $T$.
Thus, we can represent the probability of $Q$ as:
$P_T(Q) = \prod_{i=1}^n P_T(T_i \in R_Q(T_i)|Par(T_i) \in R_Q(Par(T_i))) = \prod_{i=1}^n P_i$, where $R_Q(Par(T_i))$ denotes the query region over the set of parent attributes $Par(T_i)$, and we denote each term as $P_i$ for simplicity. Therefore, to compute $P_T(Q)$, we only need to compute or estimate each $P_i$.
In PPLs, accessing the probability $P_T(T_i|s)$ for a fixed value assignment $s \in R_Q(Par(T_i))$ takes constant time. However, computing $P_i$ exactly is generally intractable, as there can be an exponential or infinite number of unique values in $R_Q(Par(T_i))$. In particular, for large BNs with complex structures, the PPLs' existing inference algorithms cannot provide the efficiency guarantee required for \textsf{CardEst} in a practical DBMS. Therefore, \textit{BayesCard} designs the \emph{progressive sampling} inference algorithm, which uses a Monte Carlo approximation of $P_i$ based on a sample $S$ of $R_Q(Par(T_i))$ to ensure computational efficiency, i.e., $P_i \approx \frac{1}{|S|} \sum_{s \in S} P_T(R_Q(T_i)|s)$.
The default sampling procedure in PPLs only supports sampling values from a variable's entire domain, and such samples are unlikely to fall in the query range $R_Q$. Naively using this sampling algorithm would therefore produce an enormous number of ineffective points. Instead, we can leverage the learned model, create variables that materialize the distribution $P(Par(T_i)| Par(T_i) \in R_Q(Par(T_i)))$, and progressively sample points from $R_Q(Par(T_i))$ accordingly, which greatly improves the sample effectiveness.
\smallskip
\noindent \underline{\textbf{Algorithm description.}} We present the details in Algorithm~\ref{prog_samp_algo}. Specifically, we first align the attributes of $T$ in topological order as $T_1, \ldots, T_n$, where $T_1$ is the root of the BN's DAG structure (line 1). We can directly obtain $P_T(T_1)$ from the PPL, as it does not depend on any other attribute, and compute $P_1 = P_T(R_Q(T_1))$. Then, we can define a new variable in the PPL to represent the distribution $P_T(t_1|t_1 \in R_Q(T_1))$ and generate a sample $S_1$ of $R_Q(T_1)$ from this variable. Next, for each of the remaining attributes $T_i$, the samples of its parents $Par(T_i)$ must have already been generated, because the attributes are aligned in topological order (line 5). We can derive a new distribution $\hat{P_i}$ approximating $P_T(T_i | R_Q(Par(T_i)))$ using these samples (line 6). This distribution $\hat{P_i}$ is used to estimate $P_i$ (line 7) and to generate samples from $R_Q(T_i)$ (line 8). Finally, after obtaining the estimated value of each $P_i$, $P_T(Q)$ is computed as their product (line~10).
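For concreteness, the numpy sketch below implements the gist of Algorithm~\ref{prog_samp_algo} for the simplified case (our own restriction, not \textit{BayesCard}'s generality) of integer-coded categorical attributes with at most one parent each:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# cpt[i]: P(T_i) for roots (1-D) or P(T_i = row | parent = col) (2-D);
# par[i]: parent index or None; regions[i]: list of values in R_Q(T_i).
def progressive_sampling(cpt, par, regions, k=1000):
    n = len(cpt)                      # attributes in topological order
    S = np.zeros((k, n), dtype=int)   # k progressive samples
    p = 1.0
    for i in range(n):
        if par[i] is None:            # root: unconditional distribution
            dist = np.tile(cpt[i], (k, 1))
        else:                         # condition on sampled parent values
            dist = cpt[i][:, S[:, par[i]]].T        # shape (k, |D(T_i)|)
        in_region = dist[:, regions[i]]             # mass inside R_Q(T_i)
        p *= in_region.sum(axis=1).mean()           # Monte Carlo estimate of P_i
        probs = in_region / in_region.sum(axis=1, keepdims=True)
        picks = np.array([rng.choice(len(regions[i]), p=pr) for pr in probs])
        S[:, i] = np.asarray(regions[i])[picks]     # sample within R_Q(T_i)
    return p                                        # approximates P_T(Q)
\end{verbatim}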
\smallskip
\noindent \underline{\textbf{Analysis.}}
Sampling $|S|$ points and evaluating the probability at each fixed point takes $O(|S|)$ time to approximate each $P_i$. Therefore, the time complexity of \emph{progressive sampling} on a BN of any structure is guaranteed to be $O(|S| \cdot n)$.
This inference algorithm is very efficient because generally, a small sample $S$ would suffice to make a very accurate estimation and the sampling process is extremely efficient in PPL. The progressive sampling algorithm in PPL resembles the one in the DAR model, proposed by Naru~\cite{naru}. Our method is different from theirs in the following aspects: 1) Efficient sampling is naturally supported in PPL for various continuous distributions, whereas the sampling procedure in DAR is post-equipped for categorical distributions only. 2) The progressive sampling in \textit{BayesCard} estimates each $P_i$ using sample $S$ during the sampling process, whereas in DAR, the samples $S$ are used to directly compute the $P_T$, which is less effective.
\smallskip
\noindent \underline{\textbf{Graph reduction optimization.}} To further accelerate the \emph{progressive sampling} algorithm, \textit{BayesCard} proposes the graph reduction optimization, which significantly reduces the inference latency for datasets with a large number of attributes.
\noindent \textbf{Main idea.} In fact, the \emph{progressive sampling} algorithm involves a large amount of redundant computation. For example, for an attribute $T_i$ that is not constrained by the predicates in $Q$, i.e., $R_Q(T_i) = D(T_i)$, the estimate of $P_i$ equals $1$. If, in addition, all descendants $T_j$ of $T_i$ are not constrained in $Q$, there is no need to sample $T_i$, since each $P_j$ equals $1$ regardless of the samples. Therefore, we can reduce a larger BN model to a much smaller one by removing these redundant attributes, and perform probability inference on it without affecting the estimation accuracy.
\noindent \textbf{Formulation.} First, we make the following rigorous definition of reduced graph $G'$. Intuitively, $G'$ only contains all constrained attributes in the query and other necessary attributes to connect them to form a minimal BN. An example of a reduced graph can be found in Figure~\ref{fig_RG}.
\begin{definition}
Given a BN representing a table $T$ with attributes $V$ = \{$T_1, \cdots T_n$\}, its defined DAG $G = (V, E)$, and a query $Q=(T'_1=t'_1 \wedge \cdots \wedge T'_k=t'_k)$ where $T'_i \in V$. We define the reduced graph $G' = (V', E')$ to be a sub-graph of $G$ where $V'$ equals $\bigcup_{1\leq i \leq k} Ancestor(T'_i)$, and $E'$ equals all edges in $E$ with both endpoints in $V'$. $Ancestor(T'_i)$ includes all parent nodes of $T'_i$ and their parent nodes recursively.
\end{definition}
Based on this definition, we can reduce the original BN model (i.e. PPL program with variables $V$) into a much smaller one (i.e. PPL program with variable $V'$), and perform inference on it. The correctness of the graph reduction optimization is stated in Theorem~1. Due to space limits, we put the proof of all theorems in the Appendix~A of the accompanied technical report~\cite{wu2020bayescard}.
\begin{theorem}
\label{thm_rg}
Given a BN $B$ defining $G$, a query $Q$ and the reduced BN $B'$ defining $G'$ on $Q$, computing $P_T(Q)$ on $B'$ is equivalent to computing $P_T(Q)$ on $B$.
\end{theorem}
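A direct way to compute the reduced graph of this definition is a reverse reachability pass from the constrained attributes; the sketch below (our own illustration with hypothetical attribute names) returns $V'$ and $E'$:
\begin{verbatim}
def reduced_graph(parents, constrained):
    """parents: attr -> list of parent attrs; constrained: attrs in Q."""
    keep, stack = set(), list(constrained)
    while stack:                       # collect constrained attrs + ancestors
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(parents[v])   # walk up to parents recursively
    edges = [(u, v) for v in keep for u in parents[v]]
    return keep, edges

parents = {"H1": [], "K1": ["H1"], "K2": ["K1"],
           "H2": ["K2"], "F": ["K1"]}
print(reduced_graph(parents, constrained={"H2"}))
# keeps H2 and its ancestors K2, K1, H1; the unconstrained leaf F is pruned
\end{verbatim}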
\subsection{Compiled variable elimination}
\label{sect4.2}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{images/exact_jit.pdf}
\vspace{-2.5em}
\caption{Graph reduction and the compiled program with JIT. The left image shows the graph reduction for the query $K_1 \in \{10, 20\}, H_2 \in \{-1, 10\}$. The red nodes refer to the attributes in the query. All red and green nodes and the red edges form the reduced graph $G'$.}
\vspace{-1em}
\label{fig_RG}
\end{figure}
Progressive sampling works for general PPL programs with any distribution type. For programs restricted to categorical distributions, we can further accelerate the inference algorithm using an alternative approach: compiled variable elimination. Inspired by the impressive results of compilation for query processing~\cite{legobase_tods,dblablb,Neumann11}, we investigate the usage of just-in-time compilation (JIT) and compiler optimizations to improve inference latency.
\smallskip
\noindent \underline{\textbf{Observation.}}
Let us revisit the example $BN_4$ built on tables $H$ and $K$, shown in the left image of Figure~\ref{fig_RG}. Consider a query $Q$ = ($K_1 \in \{10, 20\} \wedge H_2 \in \{-1, 10\}$), for which all ``black'' attributes are removed by the graph reduction technique, based on Theorem~\ref{thm_rg}. For the ``green'' attributes, we have $R_Q(H_1) = D(H_1)$, $R_Q(K_2) = D(K_2)$, and $R_Q(F_{H\xrightarrow{}\Omega}) = D(F_{H\xrightarrow{}\Omega})$. The variable elimination (VE) algorithm computes the probability $P_T(Q)$ based on the following equation.
\vspace{-1em}
\begin{align*}
\small
P_T(Q) = \sum_{h_1 \in R_Q(H_1)} \cdots \sum_{h_2 \in R_Q(H_2)} P_T(h_1) * P_T(k_1) * \cdots * P_T(h_2 \mid f_{H\xrightarrow{}\Omega}, k_2)
\end{align*}
\vspace{-1em}
This computation can be very inefficient in PPLs and is repeated for estimating multiple queries. However, we observe that the VE algorithm only involves sums and products over attributes. If each variable in the PPL (attribute in the BN) is defined as a categorical conditional distribution, the distributions can be materialized as vectors or matrices. Thus, the VE algorithm essentially defines a program of linear algebra operations, whose execution can be significantly accelerated on modern computing hardware. Furthermore, we observe that the linear algebra program computing VE is fixed for a target query as long as the elimination order is fixed.
\smallskip
\noindent \underline{\textbf{JIT of VE.}} For any query, \textit{BayesCard} first decides an optimal variable elimination order and then compiles the learned BN from the PPL program into a static program containing only matrix or tensor operations, in order to maximize execution efficiency. Furthermore, this program can be re-used to infer other queries with the same reduced graph by only changing the input query regions $R_Q$ (as shown in Figure~\ref{fig_RG}). Therefore, JIT can remember the execution pattern for one query and re-use it to infer the probabilities of future queries for further speed-up.
An example program showing the JIT compilation of VE on the same query $Q$ is shown in Figure~\ref{fig_RG}. Specifically, for each PPL variable $T_i$ in the reduced graph $G'$, the JIT program first extracts the parameters of its distribution $P_T(T_i|Par(T_i))$. Since VE only supports categorical distributions, the extracted parameters of $T_i$ form a matrix $M_{T_i}$. Next, based on the query region $R_Q$, the JIT program can further reduce $M_{T_i}$ by keeping only useful information, i.e., slicing its rows with $R_Q(T_i)$ and its columns with $R_Q(Par(T_i))$ (lines 2-6 of the code in Figure~\ref{fig_RG}). This reduction not only eliminates redundant computation but also enables a closed-form linear algebra expression.
Then, \textit{BayesCard} can determine an elimination order for these variables using the reversed topological order or the standard procedure~\cite{darwiche2009modeling}. A fixed program containing only linear algebra operations can be derived, like the one in line 8, where ``\textsf{matmul}'' refers to matrix multiplication, ``\textsf{colsum}'' refers to column sum, and ``\textsf{.T}'' refers to the transpose. At last, this generated static program executes efficiently, thanks to the batch processing of tensor operations with various performance tuning techniques (e.g., loop tiling, parallelization, and vectorization).
By our evaluation, such program can achieve up to two orders of magnitude speed-ups over the original VE algorithm.
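The numpy sketch below (a toy chain $A \rightarrow B \rightarrow C$ with hypothetical CPTs and query regions, not the generated JIT code itself) shows why VE over region-sliced CPTs collapses into a fixed chain of matrix products:
\begin{verbatim}
import numpy as np

P_A   = np.array([0.6, 0.4])                 # P(A)
P_BgA = np.array([[0.7, 0.2], [0.3, 0.8]])   # P(B = row | A = col)
P_CgB = np.array([[0.5, 0.9], [0.5, 0.1]])   # P(C = row | B = col)

R_A, R_B, R_C = [0, 1], [1], [0]             # query regions (index sets)

# Slice every CPT to the region, then eliminate C, B, A in turn:
m = P_CgB[np.ix_(R_C, R_B)].sum(axis=0)      # sum_c P(c|b), over b in R_B
m = m @ P_BgA[np.ix_(R_B, R_A)]              # sum_b ..., over a in R_A
p = float(m @ P_A[R_A])                      # sum_a ... = P_T(Q)
print(p)                                     # 0.45 for these toy numbers
\end{verbatim}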
\subsection{Probability inference for fanout method}
\label{sect4.3}
The previous sections discuss the process of inferring the probability $P_T(Q)$ of a query $Q$ on the table(s) $T$ represented by exactly one BN. For a database with multiple tables, this process needs to be modified for the following two types of queries: (1) a query $Q$ on tables that span multiple BNs (e.g., $Q$ on $T = \{A,D,H,K\}$ in Figure~\ref{fig_model}-(1)); (2) a query on tables that only cover a subset of a single BN (e.g., $Q$ on $T=\{H\}$). In these cases, \textit{BayesCard} does not contain an exact BN representing $P_T$ for estimating the query $Q$. Fortunately, based on the fanout method explained earlier in Section~\ref{sect3.2}, we can use the following theorem to calculate $P_T(Q)$, which is proposed and proved in~\cite{zhu2020flat}.
\begin{theorem}
Given a query $Q$, let $V = \{V_{1}, V_{2}, \dots, V_{d} \}$ denote all vertices (nodes) in the join tree touched by $Q$ and let $\mathcal{V}$ denote the full outer join of all tables in $V$. On each node $V_i$, let $F = \{ F_{A_{1}, B_{1}}, F_{A_{2}, B_{2}}, \ldots, F_{A_{n}, B_{n}}\}$, where each $(A_j, B_j)$ is a distinct join such that $B_j$ is not in $Q$. Let $f = (f_1, f_2, \ldots, f_n)$, where $F_{A_{j}, B_{j}} = f_j$ for all $1 \leq j \leq n$, denote an assignment to $F$, and let $\text{dlm}(f) = \prod_{j=1}^{n} \max\{f_j, 1\}$. Let
\begin{equation}
\small
p_i \! = \frac{|\mathcal{V}_i|}{|\mathcal{V}|} \! \cdot \!
\sum\limits_{f, v} \left( P_{\mathcal{V}_i}(Q_i \wedge F \! = \! f \wedge F_{V_{i}, V} \! = \! v) \cdot \frac{\max\{v, 1\}}{\text{dlm}(f)} \right).
\end{equation}
Then, the cardinality of $Q$ is $|\mathcal{V}| \cdot \prod_{i = 1}^{d} p_i$.
\end{theorem}
In short, since all the fanout attributes involved in this computation are pre-stored in the table $V_i$ and there exists a BN for $P_{\mathcal{V}_i}$, \textit{BayesCard} can directly use this theorem for probability inference of multi-table join queries.
\smallskip
\noindent\underline{\textbf{Efficient summation computation in \textit{BayesCard}.}}
We can compute the summation $\sum_{f, v} ( P_{\mathcal{V}_i}(Q_i \wedge F \! = \! f \wedge F_{V_{i}, V} \! = \! v) \cdot \frac{\max\{v, 1\}}{\text{dlm}(f)} )$ over all assignments of $f$ and $v$ as efficiently as computing the probability $P_{\mathcal{V}_i}(Q_i)$ of any query. We explain the detailed procedure for calculating $\sum_{f \in D(F)} P_T(Q, F=f) * f$ using progressive sampling and compiled variable elimination, where $D(F)$ denotes the domain of unique values of $F$. This procedure naturally generalizes to more complex cases.
Our calculation procedure is motivated by the Bayes rule, i.e., $P_T(Q, F=f) = P_T(F=f|Q) * P_T(Q)$. We observe that $P_T(Q)$ is a fixed value independent of $F$, because the fanout attributes are artificial attributes that are never involved in $Q$. Furthermore, by the property of BNs, we know that $P_T(F|Q) = P_T(F|R_Q(Par(F)))$, so we can derive the following equation. It exposes a common term $P_T(Q)$, so the calculation avoids repeatedly computing $P_T(Q)$.
\begin{equation*}
\sum_{f \in D(F)} P_T(Q, F=f) * f = P_T(Q) * \left(\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f \right)
\end{equation*}
\noindent \textbf{Progressive sampling.} Recall from Section~\ref{sect4.1} that \textit{BayesCard} estimates $P_i = P_T(T_i|R_Q(Par(T_i)))$ by drawing progressive samples of $R_Q$ and approximates $P_T(Q)$ as $\prod P_i$. After estimating $P_T(Q)$ with a sample $S$, \textit{BayesCard} can directly estimate $\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f$ using the same sample $S$, i.e., as $\sum_{f \in S[F]} \hat{P}_T(f| S[Par(F)])*f$. The final result is obtained by multiplying these two terms together.
\noindent \textbf{Compiled variable elimination.} Recall from Section~\ref{sect4.2} that \textit{BayesCard} can specify a particular elimination order, choosing the fanout variable $F$ as the last variable to eliminate. Using PPLs, the intermediate result after each elimination step is materialized as a distribution. Therefore, before the last elimination step of the VE algorithm for computing $P_T(Q)$, \textit{BayesCard} can store the intermediate result, which represents the conditional distribution $P_T(F|Q)$. Then, the summation $\sum_{f \in D(F)} P_T(f|R_Q(Par(F)))*f$ equals $P_T(F|Q) \cdot D(F)$, where $\cdot$ denotes the vector dot product. Therefore, similar to computing $P_T(Q)$, this process only involves linear algebra operations, which can be compiled and efficiently calculated using JIT.
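The last step thus reduces to a dot product; a minimal sketch with hypothetical numbers:
\begin{verbatim}
import numpy as np

p_q = 0.12                              # P_T(Q), already estimated
domain_F = np.array([1, 2, 3])          # D(F), a hypothetical fanout domain
P_FgQ = np.array([0.5, 0.3, 0.2])       # P_T(F | Q), the stored intermediate

expected_fanout = float(P_FgQ @ domain_F)   # P_T(F|Q) . D(F)
print(p_q * expected_fanout)                # sum_f P_T(Q, F=f) * f
\end{verbatim}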
\section{Model construction of \textit{BayesCard}}
\label{sect5}
In this section, we explain how \textit{BayesCard} constructs an ensemble of BNs for a multi-table database. Specifically, Section~5.1 first introduces the BN ensemble construction method with budget, which clusters all tables in the database into several groups and builds a single BN on each group of tables. Then, Section~5.2 introduces some optimizations for building a single BN using PPLs. Finally, Section~5.3 shows how to incrementally update the BN model.
\begin{figure}[t]
\centering
\vspace{-1.5em}
\includegraphics[width=8.5cm]{images/PRM_learn.pdf}
\vspace{-1.5em}
\caption{\textit{BayesCard} ensemble learning algorithm demo.}
\vspace{-2em}
\label{PRM_learn}
\end{figure}
\subsection{Ensemble construction with budget}
\label{sect5.1}
\noindent\underline{\textbf{Main idea.}}
Consider the example database in Figure~\ref{PRM_learn} with 11 tables $A, B, \ldots, K$ forming a join tree, where each node represents a table and each edge represents a possible join between two tables. A previous approach~\cite{deepDB} suggests creating every possible two-table join result, examining the level of dependence between attributes across the two tables, and determining whether to create one large model on their full outer join table or two separate models. Since generating the full outer join of multiple tables could require exponential memory, this approach normally cannot explore the possibility of creating a model on the join of more than three tables.
Another approach~\cite{NeuroCard} generates an unbiased sample $S$ on the full outer join of all tables in the schema and builds a single large model on $S$ directly. As the resulting model is built on all attributes in the database, the model construction and the probability inference can be very inefficient. Moreover, the size of $S$ is relatively small with respect to the full outer join size, suggesting a large amount of information loss, so the learned model on $S$ might not accurately represent the actual data distribution.
In order to balance the estimation accuracy and inference efficiency, we want to explore the full possibility of learning different BN ensembles such that the number of joined tables in each BN is no more than a threshold. Therefore, the resulting ensemble should capture as much dependence between tables as possible and simultaneously keep each BN in this ensemble as small as possible.
\begin{algorithm}[t]
\small
\caption{BN Ensemble Construction Algorithm}
\label{PRM_learn_algo}
\begin{flushleft}
\textbf{Input}: a DB schema with $n$ tables $T_1, \cdots, T_n$ and a budget $k$
\end{flushleft}
\begin{algorithmic}[1]
\State Create the join tree $\mathcal{T} = (V, E)$ for the schema
\State Generate unbiased samples $S$ for full outer join of the entire schema
\State Initialize a dependence matrix $M \in \mathbb{R}^{n \times n}$
\For{Each pair of tables $e = (T_i, T_j)$}
\State Calculate the RDC dependence level scores between all attributes in $T_i$ and attributes in $T_j$
\State $w_e$ $\gets$ average RDC scores
\EndFor
\If{$k = 1$} \State \textbf{return} $\mathcal{T}$ and learn a single PRM for each table
\EndIf
\For{$k' \gets 2, \cdots, k$}
\State Sort $E$ in decreasing order based on $w_e$.
\For{$e = (u, v) \in E$}
\If{$u$ and $v$ contain exactly $k'$ tables in total}
\State Update $\mathcal{T}$ by contracting nodes $u, v$ to a single node $\{u, v\}$
\EndIf
\EndFor
\EndFor
\State \textbf{return} $\mathcal{T}$ and learn a single PRM for each node in $\mathcal{T}$
\end{algorithmic}
\end{algorithm}
\smallskip
\noindent\underline{\textbf{Algorithm description.}}
The details of the ensemble construction algorithm are given in Algorithm~\ref{PRM_learn_algo}.
First, we define the budget $k$ such that a single BN model can only be constructed on (a sample of) the full outer join of no more than $k$ tables. The budget $k$ is a hyper-parameter determined by the dataset, system, and computing resources. The algorithm works as follows:
1) \textit{Computing dependency between tables (lines 1-7).} Given a tree-structured join schema $\mathcal{T}$, we first generate the unbiased sample $S$ of the full outer join of all tables according to~\cite{Sampling}. Specifically, the join tree is regarded as a rooted tree and the samples $S$ are obtained by scanning all tables in $\mathcal{T}$ in a bottom-up manner. Then, we calculate the randomized dependence coefficient, i.e., the RDC value~\cite{rdc}, between each pair of joined tables using $S$. The detailed computation method is given in Appendix~B of our technical report~\cite{wu2020bayescard}. In Figure~\ref{PRM_learn}, the RDC values are shown as red numbers on the edges.
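For reference, the RDC of a single attribute pair can be computed as follows (an illustrative sketch of the algorithm of~\cite{rdc} --- copula transform, random sine features, then the largest canonical correlation --- not \textit{BayesCard}'s actual implementation):
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def rdc(x, y, k=20, s=1/6.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    # 1. empirical copula transform (+ bias column)
    X = np.column_stack([rankdata(x) / n, np.ones(n)])
    Y = np.column_stack([rankdata(y) / n, np.ones(n)])
    # 2. random non-linear (sine) projections
    fx = np.sin(X @ (rng.standard_normal((2, k)) * s))
    fy = np.sin(Y @ (rng.standard_normal((2, k)) * s))
    # 3. largest canonical correlation of the two feature sets
    C = np.corrcoef(np.hstack([fx, fy]), rowvar=False)
    Cxx, Cyy, Cxy = C[:k, :k], C[k:, k:], C[:k, k:]
    M = np.linalg.pinv(Cxx) @ Cxy @ np.linalg.pinv(Cyy) @ Cxy.T
    return float(np.sqrt(np.real(np.linalg.eigvals(M)).max()))
\end{verbatim}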
2) \textit{Contracting nodes (lines 8-18).} Intuitively, we would like to build a model on the full outer join of tables with high dependency, so we iteratively contract the nodes (tables) with high RDC values in $\mathcal{T}$ in a greedy manner. Let $k' = 2$ at the beginning. In each iteration, if $k' \leq k$, we first sort all edges $e = (u,v)$ (joins) in descending order of their RDC values. Following this edge order, we aggregate $u$ and $v$, i.e., the two endpoints of edge $e$, into a single node if they contain exactly $k'$ tables in total, and update the RDC values of the affected edges accordingly; the details are given in Appendix~B of our technical report~\cite{wu2020bayescard}. We iterate this process until $k' = k$, and in the end, we obtain a tree where each node contains at most $k$ tables. For example, in Figure~\ref{PRM_learn}, let the budget be $k = 3$. In the first iteration, where $k' = 2$, the algorithm considers joining two tables together. The edge $(B, E)$ has the highest RDC value, so $B$ and $E$ are aggregated in the first step (\textcircled{1} in Figure~\ref{PRM_learn}(a)). After the first iteration, the join schema $\mathcal{T}$ has been transformed into the new tree in Figure~\ref{PRM_learn}(b). Similarly, in the second iteration, where $k' = 3$, the node $\{B, E\}$ is first merged with the node $I$. Finally, the join tree is transformed into the tree in Figure~\ref{PRM_learn}(c).
3) \textit{Building BNs (line 19).} In the end, \textit{BayesCard} will construct a single BN model on (a sample of) the full outer join of tables within each node and fanout attributes will be added accordingly.
\smallskip
\noindent\underline{\textbf{Time complexity analysis.}}
As shown in~\cite{Sampling}, creating the samples $S$ on the full outer join of tables $T_1, \cdots, T_n$ takes $O(\sum_{i=1}^{n}|T_i|)$ time.
Let $m$ be the number of attributes in the full outer join of the tables. Calculating the pairwise RDC values takes $O(m^2|S|\log |S|)$ time. The rest of Algorithm~\ref{PRM_learn_algo} takes $O(kn^2)$ time since the algorithm terminates in $k$ iterations and in each iteration we only need to check the tables defined by the two endpoints of each edge, of which there are at most $n^2$. Thus, the overall time complexity is $O(\sum_{i=1}^{n}|T_i| + m^2|S|\log |S| + kn^2)$.
\subsection{Single model construction optimizations}
\label{sect5.2}
The structure learning process of a single BN, i.e., learning the causal structure from data, is an NP-hard combinatorial optimization problem~\cite{34}. Current structure learning algorithms supported by PPLs produce either a general DAG structure or a simplified tree structure. We present optimization techniques for both in the following:
\smallskip
\noindent \underline{\textbf{Optimization for DAG structure learning algorithms.}}
Exact DAG structure learning algorithms explore the super-exponential search space of all possible DAGs and select the best candidate~\cite{MDL, BIC, BDeu, A-start}. The learned structure is accurate, but the search is inefficient and only scales to tens of attributes. Approximate methods limit the search space with local heuristics (i.e., \emph{greedy} algorithms~\cite{greedysearch, 36, 37}), but they may produce inaccurate results.
Based on PPLs, \textit{BayesCard} supports pre-specifying sub-structures before running the \emph{exact} and \emph{greedy} structure learning algorithms, which limits the DAG search space and makes structure learning much more efficient. Specifically, practical databases generally contain attributes with \emph{functional dependencies}~\cite{fan2010discovering} or obvious causal relations between attributes, such as one's ``age'' determining one's ``school level''. First, users of \textit{BayesCard} can use their ``expert knowledge'' to pre-specify certain causal structures for subsets of attributes. Then, the PPLs within \textit{BayesCard} can define the variables corresponding to these attributes and condition the variables on each other according to the pre-specified structure. Finally, \textit{BayesCard} can rely on the existing algorithms to construct the remaining causal structure on these variables. Since the algorithms are forced to maintain these sub-structures, the number of qualified DAG candidates is significantly curtailed, making the structure learning process more efficient without loss in accuracy.
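A sketch of this idea follows (the API is hypothetical; concrete PPL interfaces differ, and \texttt{DAG} / \texttt{greedy\_search} are stand-ins for any score-based structure learner):
\begin{verbatim}
# "Expert knowledge": known functional dependencies / causal edges.
known_edges = [("age", "school_level"), ("zip_code", "city")]

def learn_structure(data, required=known_edges):
    dag = DAG(edges=required)          # fix the sub-structure up front
    # The search may only add edges consistent with `required`,
    # which drastically prunes the candidate-DAG space.
    return greedy_search(data, start=dag, frozen_edges=required)
\end{verbatim}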
\smallskip
\noindent \underline{\textbf{Optimization for tree structure learning algorithms.}}
Tree structure learning algorithms learn a tree structure such as a \emph{Chow-Liu tree}~\cite{23}, which sacrifices accuracy for efficiency.
\textit{BayesCard} can also improve the accuracy of a learned structure using the aforementioned ``expert knowledge'' after running the \emph{Chow-Liu tree} algorithm. This efficient algorithm forces the learned BN structure to be a tree, which can contain ``false'' causality or miss important attribute dependences. For example, intuitively the number of ``children'' raised by someone depends both on one's ``income'' and on one's ``marital status'', which cannot be captured simultaneously by a tree BN, since each node is only allowed to have one parent. Thus, after the structure is learned, \textit{BayesCard} can add the edge from ``Income'' to ``Children'' to improve accuracy. With PPLs, only the parameters of the affected sub-structure (the ``Children'' variable in this example) need to be updated.
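In code, such a repair is local (a sketch with hypothetical \texttt{bn.add\_edge} and \texttt{bn.refit\_cpt} methods, not \textit{BayesCard}'s actual interface):
\begin{verbatim}
def add_expert_edge(bn, data, src="Income", dst="Children"):
    bn.add_edge(src, dst)        # the tree becomes a (sparse) DAG
    bn.refit_cpt(dst, data)      # re-estimate only P(dst | parents(dst))
    return bn
\end{verbatim}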
\subsection{Model updates}
\label{sect5.3}
Most practical databases update their data frequently, requiring cardinality estimators to adjust their underlying models dynamically~\cite{wang2020ready}. When the data distribution changes, \textit{BayesCard} can update its underlying BNs very efficiently. Specifically, the learned structure of a BN captures the \emph{intrinsic} causal pattern of the attributes, which is not likely to change even in the case of massive data updates. Therefore, in most cases, \textit{BayesCard} can preserve the original BN structure and only \emph{incrementally} update its distribution parameters. Such parameter updates are extremely efficient using MLE in PPLs. In our testing, it generally takes less than one second to process an insertion or deletion of a thousand tuples. In some rare cases involving the insertion or deletion of attributes, a new BN structure must be constructed. Even in this case, the causal pattern of the original attributes is largely preserved, so \textit{BayesCard} can pre-specify some sub-structures and learn the new structure efficiently using the methods stated in the previous section.
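For discrete attributes, incremental MLE amounts to maintaining sufficient statistics (counts) per conditional distribution; a minimal sketch (not \textit{BayesCard}'s actual code) is:
\begin{verbatim}
import numpy as np

class CategoricalCPT:
    """P(child | parents), stored as counts so updates are O(batch)."""
    def __init__(self, counts):
        self.counts = np.asarray(counts, dtype=float)

    def insert(self, batch_counts):
        self.counts += batch_counts   # incremental MLE on insertion

    def delete(self, batch_counts):
        self.counts -= batch_counts   # and on deletion

    def table(self):
        return self.counts / self.counts.sum(axis=-1, keepdims=True)
\end{verbatim}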
\section{Analysis of BayesCard}
In this section, we analyze and demonstrate that \textit{BayesCard} satisfies the ADS criteria from all aspects, as shown in Table~\ref{ADSsummary}.
\noindent\textbf{Algorithm.} A BN with an exactly learned structure can losslessly capture the data distribution, i.e., achieve near-perfect \emph{estimation accuracy} for all queries. We show empirically that even with an approximate tree structure, \textit{BayesCard} achieves comparable or better accuracy than the current SOTA methods. The \emph{inference latency} of \textit{BayesCard} is roughly 1ms per query (close to the Histogram method), thanks to our novel inference algorithms. Furthermore, as explained in Section~\ref{sect4}, \textit{BayesCard} can learn a compact structure of small \emph{model size} with fast \emph{training and update time}.
\noindent\textbf{Data.} Every dataset contains an inherent causal pattern, which can be discovered by \textit{BayesCard}. Building upon this structure, \textit{BayesCard} can represent its PDF accurately and efficiently. Specifically, the variables in PPL can characterize most data \emph{distribution} types with varied \emph{domain sizes}. \emph{Attribute correlation} is merely a manifestation of the underlying causal pattern, which can be accurately represented. Moreover, for data with more attributes (larger \emph{scale}), the proposed \emph{graph reduction} inference technique can reduce a large graph into a much smaller one for efficient inference. Therefore, the inference latency is also stable across various data settings.
\noindent\textbf{System.} Both the structure and the distribution parameters of a \textit{BayesCard} model are \emph{interpretable} and \emph{debuggable}. Specifically, a DB expert can verify a learned structure based on prior knowledge of data causality (functional dependencies in DBs) and validate the learned parameters using basic probability rules (non-negative and summing to one). Since the probability inference of \textit{BayesCard} follows the Bayesian rules, its performance is logical and \emph{predictable}. Furthermore, the compiled VE does not contain any stochasticity, so estimation errors are \emph{reproducible}.
\section{Experimental Results}
\label{sect6}
In this section, we empirically demonstrate the superiority of our \textit{BayesCard} over other \textsf{CardEst} methods. In the following,
Section~\ref{sect6.1} first introduces the experimental setups.
Next, Section~\ref{sect6.2} thoroughly compares different \textsf{CardEst} methods in terms of the \emph{ADS} criteria on single table datasets.
Then, Section~\ref{sect6.3} evaluates the performance and end-to-end query plan execution time on multi-table datasets.
At last, Section~\ref{sect6.4} performs ablation studies on our proposed algorithms and optimizations in \textit{BayesCard} method.
\subsection{Experimental setups}
\label{sect6.1}
\underline{\textbf{\textsf{CardEst} methods to compare with.}}
We compare our \textit{BayesCard} framework with the following \textsf{CardEst} methods, including traditional methods widely used in DBMSs and four existing SOTA DL-based methods. For each ML-based \textsf{CardEst} method, we adopt the authors' source code and apply the same hyper-parameters as used in the original paper.
\textit{1). Histogram} is the simplest \textsf{CardEst} method widely used in DBMS such as Postgres~\cite{postgresql}.
\textit{2). Sampling} has been used in DBMS such as MySQL~\cite{mysql}. In our testing, we randomly sample $1\%$ of all tuples for \textsf{CardEst}.
\textit{3). Naru/NeuroCard}~\cite{naru,NeuroCard} are \textsf{DAR}-based \textsf{CardEst} methods for single table and multi-table join queries, respectively.
\textit{4). DeepDB}~\cite{deepDB} is a SPN-based \textsf{CardEst} method.
\textit{5). FLAT}~\cite{zhu2020flat} is an FSPN-based \textsf{CardEst} method.
\textit{6). MSCN}~\cite{MSCN} is the SOTA query-driven \textsf{CardEst} method. For each dataset, we train it with $10^5$ queries generated in the same way as the workload.
Our \textit{BayesCard} framework subsumes BNs with various combination of structure learning and inference algorithms as described in previous sections. In Section~\ref{sect6.2} and~\ref{sect6.3}, we use an exemplary BN with \emph{Chow-Liu} tree structure learning algorithm and \emph{compiled variable elimination} inference algorithm with graph reduction optimizations. The comparison of different BNs realizable in \textit{BayesCard} and controlled ablation studies are deferred to Section~\ref{sect6.4}.
\smallskip
\noindent\underline{\textbf{Datasets and query workloads.}}
Our single table experiments are performed on three datasets:
\noindent 1). \textbf{DMV} is a real-world dataset consisting of 11,575,483 tuples of vehicle registration information in New York. We use the same attributes as in~\cite{naru, wang2020ready}.
\noindent 2). \textbf{CENSUS} contains a population survey conducted by the U.S. Census Bureau in 1990. This dataset has 2,458,285 tuples and 68 attributes, many of which are highly correlated. Based on the RDC test~\cite{rdc}, we find that more than half of the attributes are highly correlated with at least one other attribute. This dataset is very large in scale and has a very complicated distribution.
\noindent 3). \textbf{SYNTHETIC} is a collection of generated datasets with varied data distribution skewness, attribute correlation, domain size, and number of attributes. We generated these datasets using a similar approach to a recent benchmark study~\cite{wang2020ready}. They are used to evaluate the models' stability w.r.t.~changes in data.
For each dataset, we generate $1,500$ selection queries as the workload. For each query $Q$, we first select a subset of attributes as the filter attributes of $Q$. For each selected attribute $c$, if it represents a continuous variable, we uniformly generate two values ($v_1, v_2$) from its value domain and add the filter predicate ``$v_1 \leq c \leq v_2$'' to $Q$. Otherwise, if $c$ is a categorical variable, we uniformly sample $k$ unique values \{$v_1, v_2, \cdots, v_k$\} from its domain and place the predicate ``$c$ \textsc{ IN } \{$v_1,\cdots, v_k$\}'' in $Q$.
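This recipe is straightforward to reproduce; a sketch follows (the \texttt{schema} object and its accessors are hypothetical):
\begin{verbatim}
import numpy as np

def gen_query(schema, rng):
    n_attr = rng.integers(1, len(schema.attributes) + 1)
    attrs = rng.choice(schema.attributes, size=n_attr, replace=False)
    preds = []
    for c in attrs:
        if schema.is_continuous(c):
            lo, hi = schema.domain(c)
            v1, v2 = np.sort(rng.uniform(lo, hi, size=2))
            preds.append(f"{v1} <= {c} <= {v2}")
        else:
            k = rng.integers(1, schema.domain_size(c) + 1)
            vals = rng.choice(schema.domain(c), size=k, replace=False)
            preds.append(f"{c} IN ({', '.join(map(str, vals))})")
    return " AND ".join(preds)
\end{verbatim}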
\noindent 4). \textbf{Multi-table IMDB:}
We conduct the multi-table experiments on the Internet Movie Database (IMDB) benchmark. Prior work~\cite{howgoodare} argues that this DB contains complicated data structures and establishes it as a good test benchmark for cardinality estimators. We use the \emph{JOB-light} benchmark workload with 70 queries proposed in the original paper~\cite{howgoodare} and create another workload, \emph{JOB-comp}, of 1,500 more \underline{comp}rehensive and \underline{comp}licated queries.
\textit{JOB-light}'s IMDB schema contains six tables (\textsl{title}, \textsl{cast\_info}, \textsl{movie\_info}, \textsl{movie\_companies}, \textsl{movie\_keyword}, \textsl{movie\_info\_idx}) and five join operations in total, where every other table can only join with the primary table ``title''. Each \textit{JOB-light} query involves 3-6 tables with 1-4 filter predicates. The filters are not very diverse: all attributes take equality filters except ``title.production\_year''. In addition, \textit{JOB-light}'s workload only contains 70 queries, which is not enough to account for the variance in model prediction. Thus, we synthesize 1,500 \emph{JOB-comp} queries based on the schema of \emph{JOB-light} with more filter predicates per query. Each \emph{JOB-comp} query involves 4-6 tables with 2-7 filter predicates, and the queries are uniformly distributed over joins of 4-6 tables. After determining the join graph, the filter predicate selection process is similar to the single-table case.
\smallskip
\noindent\underline{\textbf{Evaluation metric:}} We use the Q-error as our evaluation metric, which is defined as follows:
\begin{equation*}
\textbf{Q-error} = \max\left(\frac{\text{Estimated Cardinality}}{\text{True Cardinality}}, \frac{\text{True Cardinality}}{\text{Estimated Cardinality}}\right)
\end{equation*}
This evaluation metric is well recognized in the DBMS community and widely used in recent papers on cardinality estimation~\cite{deepDB, naru, NeuroCard, 2001SigmodGreedy, tzoumas2011lightweight}. We report the \textbf{50\%} (median), \textbf{90\%}, \textbf{95\%}, and \textbf{100\%} (worst) Q-error quantiles to evaluate estimation accuracy.
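Computing the reported quantiles over a workload is a few lines:
\begin{verbatim}
import numpy as np

def q_error_quantiles(est, true):
    est, true = np.asarray(est, float), np.asarray(true, float)
    q = np.maximum(est / true, true / est)
    return {p: float(np.percentile(q, p)) for p in (50, 90, 95, 100)}
\end{verbatim}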
\noindent\underline{\textbf{Experimental environment:}}
All models are evaluated on an Intel(R) Xeon(R) Platinum 8163 CPU with 64 cores, 128GB DDR4 main memory, and a 1TB SSD. For a fair comparison, we compare the model inference latency on CPU only since, apart from the DAR models (\textit{Naru} and \textit{NeuroCard}) and \textit{MSCN}, the remaining methods' inference algorithms do not support GPUs.
\begin{table}[t]
\caption{Performance of \textsf{CardEst} algorithms on single tables.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|cccc|c}
\hline
Dataset& Method & $50\%$ & $90\%$ & $95\%$ & $100\%$ & Latency (ms) \\ \hline
\multirow{7}{*}{DMV}
&\textbf{\textit{BayesCard}} &\firstcell{1.001} &\firstcell{1.024} &\secondcell{1.049} &\secondcell{7.641} &\thirdcell{2.1} \\ \cline{2-7}
&Histogram &1.318 & 12.32 & 143.6 & $1\cdot 10^4$ &\firstcell{0.1} \\ \cline{2-7}
&Sampling & 1.004 & 1.052 & 1.140 & 143.0 & 79 \\ \cline{2-7}
&Naru & \thirdcell{1.003} & \secondcell{1.026} & \firstcell{1.035} & \firstcell{5.500} & 86 \\ \cline{2-7}
&DeepDB & 1.006 & 1.124 & 1.193 & 108.1 & 5.1 \\ \cline{2-7}
&FLAT & \firstcell{1.001} & \thirdcell{1.028} & \thirdcell{1.066} & \thirdcell{11.37} & \secondcell{0.6} \\ \cline{2-7}
&MSCN & 1.210 & 2.263 & 4.507 & 151.8 & 3.4 \\
\thickhline
\multirow{7}{*}{CENSUS}
&\textbf{\textit{BayesCard}} & \firstcell{\textbf{1.063}} & \firstcell{\textbf{1.484}} & \firstcell{\textbf{2.052}} & \firstcell{\textbf{227.5}} & \secondcell{2.4} \\ \cline{2-7}
&Histogram &5.561 &259.8 &$5\cdot 10^4$ & $5\cdot 10^5$ & \firstcell{\textbf{0.2}}\\ \cline{2-7}
&Sampling & \secondcell{1.130} & \secondcell{1.412} & 374.2 & \thirdcell{1703} & 113 \\ \cline{2-7}
&Naru &\thirdcell{1.229} & \thirdcell{2.210} & \secondcell{7.156} & \secondcell{1095} & 129 \\\cline{2-7}
&DeepDB & 1.469 & 6.295 & 178.21 & $1\cdot 10^4$ & 25 \\ \cline{2-7}
&FLAT & 1.452 & 6.326 & \thirdcell{174.93} & $1\cdot 10^4$ & 25 \\ \cline{2-7}
&MSCN & 2.700 & 15.83 &$1\cdot 10^4$ & $1\cdot 10^5$ & \thirdcell{4.8} \\
\hline
\end{tabular}}
\vspace{-0.5em}
\label{tab: exp-single}
\end{table}
\subsection{Model evaluation on single tables}
\label{sect6.2}
In this section, we compare the performance of \textsf{CardEst} methods in terms of \emph{Algorithm} and \emph{Data} criteria.
\smallskip
\noindent\underline{\textbf{Algorithm criteria.}}
We evaluate the \textsf{CardEst} methods from four aspects: estimation accuracy, inference latency, model size and training time, and updating effects.
\noindent\textbf{Estimation accuracy:}
The estimation accuracy on the two real-world single-table datasets is reported in Table~\ref{tab: exp-single}, where the color shade in each cell corresponds to the rank among the \textsf{CardEst} methods. When compared with traditional methods (\textit{Histogram} and \textit{Sampling}), \textit{BayesCard} achieves $1$--$3$ orders of magnitude higher accuracy. When compared with DL-based methods (\textit{Naru}, \textit{DeepDB} and \textit{FLAT}), \textit{BayesCard} has comparable or better estimation accuracy on the DMV dataset and is significantly more accurate on the CENSUS dataset. This is because these DL models can accurately represent the data distribution of DMV, which contains relatively few attributes with weak correlations, whereas CENSUS contains seven times more attributes with more complex correlations.
As the learning space grows exponentially with the number of attributes, \textit{Naru}'s accuracy drops significantly.
For \textit{DeepDB} and \textit{FLAT}, their SPN or FSPN structures cannot capture the data distribution well in the presence of a large number of highly correlated attributes, so their performance also degrades heavily.
\noindent\textbf{Inference latency:}
As shown in Table~\ref{tab: exp-single}, apart from \textit{Histogram}, which leverages the attribute independence assumption for fast inference, \textit{BayesCard} generally attains the best inference latency ($1$--$2$ orders of magnitude lower) among the remaining methods.
It is worth noting that we observe a significant increase in latency from DMV to CENSUS for all methods except \textit{BayesCard}. \textit{BayesCard}'s inference time appears to be insensitive to the number of attributes, mainly because the novel \emph{graph reduction} technique can reduce the large CENSUS attribute graph to a much smaller one for inference.
\noindent\textbf{Model size and training time:} As shown in Figure~\ref{model_size}, apart from the traditional methods, \textit{BayesCard} achieves the smallest model size with the fastest training time because the causal pattern of the datasets enables a compact representation of the data distribution. Note that Sampling is a model-free method without a model size or training time, so we do not include it in the figure.
\noindent\textbf{Updating time:}
We evaluate each method's updating effects following a similar experimental setup to prior work~\cite{wang2020ready}.
Specifically, we create a copy of the original DMV dataset and sort the tuples based on the value of each column in ascending order. Then, we take the first $20\%$ of the data to train a stale model and use the remaining $80\%$ as insertion updates. This procedure makes sure that the training data have a different distribution from the testing data; otherwise, the stale model would perform well without model updates. After each model finishes the updating process, we test it using the same query workload as in Table~\ref{tab: exp-single} and report the $95\%$ q-errors and total update time in Table~\ref{update}. Here, we refrain from comparing with the query-driven method \textit{MSCN} because it requires a new query workload to update its model, which is unavailable in our experimental setting.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/model_size.pdf}
\vspace{-2.5em}
\caption{ Model storage and training time. }
\vspace{-1em}
\label{model_size}
\end{figure}
\begin{table}[t]
\caption{Performance of model updates of different \textsf{CardEst} methods on DMV. The baseline q-error is the 95\% q-error quoted from Table~\ref{tab: exp-single} for comparison.}
\vspace{-1em}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
\hline
Method & \textbf{\textit{BayesCard}} & Histogram & Naru & DeepDB & FLAT \\ \hline
baseline q-error & 1.049 & 143.6 &1.035 & 1.193 &1.066\\ \hline
95\% q-error & \textbf{1.049} & 143.6 & 14.79 & 18.83 & 1.451\\ \hline
Update time (s) &103 &\textbf{25} & 1980 & 142 & 257 \\
\hline
\end{tabular}}
\vspace{-1.5em}
\label{update}
\end{table}
\textit{BayesCard}, \textit{Histogram}, and \textit{DeepDB} all preserve the original structure and only incrementally update the parameters, so in general they have the fastest update times. Among them, \textit{Histogram} has the fewest parameters to update, so it has the best update time. We use the method described in the original paper~\cite{zhu2020flat} to update \textit{FLAT}, which generates new sub-structures to fit the inserted data distribution, so it is slightly slower than the previous three. \textit{Naru} uses the incoming data to fine-tune its pre-trained DNNs for three epochs, which is significantly slower than the others.
After the model updates, we observe that \textit{BayesCard} has no drop in estimation accuracy, whereas the deep probabilistic models have degraded performance. The reasons can be summarized as follows: (1) \textit{BayesCard}'s structure captures the data causal pattern, which often does not change after updates; (2) \textit{DeepDB}'s preserved structure is not robust against data distribution changes; (3) fine-tuning \textit{Naru}'s underlying DAR model overfits the information from the $20\%$ previously trained data, leading to degraded performance.
\noindent\textbf{\textit{Summary:}}
\textit{\textit{BayesCard} attains comparable or better estimation accuracy, lower inference latency, smaller model size, and less training and update time than DL-based models. In addition, \textit{BayesCard} is 1-3 orders of magnitude more accurate than traditional methods.}
\smallskip
\noindent\underline{\textbf{Data criteria.}}
We evaluate the stability of \textsf{CardEst} methods in terms of \emph{Data} criteria from four aspects: data distribution, attribute correlation, domain size, and number of attributes.
SYNTHETIC datasets are generated using a similar approach to a recent benchmark study~\cite{wang2020ready}. Specifically, suppose we would like to generate a table $T$ with attributes $\{T_1,\ldots,T_n\}$ and $10^6$ tuples, where $n$ denotes the number of attributes (\emph{scale}). We generate the first column for $T_1$ using a Pareto distribution (via the scipy.stats.pareto function) with a controlled skewness $s$ and domain size $d$. For each remaining attribute $T_i$, we generate a column based on a previous attribute $T_j$ with $j<i$ to control the correlation $c$. For each tuple ($t_1, \ldots, t_n$) in $T$, we set $t_i$ to $t_j$ with probability $c$, and set $t_i$ to a random value drawn from the Pareto distribution with probability $1-c$.
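A sketch of this generator follows (the discretization of the Pareto draws onto a domain of size $d$ is our own assumption; the benchmark's exact binning may differ, and a recent SciPy that accepts a \texttt{Generator} as \texttt{random\_state} is assumed):
\begin{verbatim}
import numpy as np
from scipy.stats import pareto

def gen_table(n, rows=10**6, s=2.0, d=1000, c=0.5, seed=0):
    rng = np.random.default_rng(seed)
    def col():  # skewed column, clipped to a domain of size d
        v = pareto.rvs(s, size=rows, random_state=rng)
        return np.minimum((v - 1.0).astype(int), d - 1)
    cols = [col()]                      # T_1
    for i in range(1, n):               # each T_i tracks an earlier T_j
        j = int(rng.integers(0, i))
        keep = rng.random(rows) < c     # copy T_j with probability c
        cols.append(np.where(keep, cols[j], col()))
    return np.column_stack(cols)
\end{verbatim}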
The experimental results on SYNTHETIC are shown in Figure~\ref{synth}. Due to the space limit, we only plot the comparison results between \textit{BayesCard} and \textit{DeepDB} on the estimation accuracy metric. The additional experimental results are reported in the appendix of the technical report~\cite{wu2020bayescard}. We summarize our observations as follows.
\noindent\textbf{\text{Distribution (s):}} Similar to the previous study~\cite{wang2020ready}, we find that increasing the Pareto distribution skewness severely degrades the performance of the \textit{Naru} and \textit{Sampling} methods, but has only a mild effect on \textit{BayesCard} and the other methods. This is because \textit{BayesCard}, \textit{Histogram}, \textit{FLAT}, and \textit{DeepDB} all use (multi-)histograms to represent distributions, which are robust against distribution changes.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{images/stability.pdf}
\vspace{-2em}
\caption{ Comparing \textit{BayesCard} and DeepDB's stability. }
\vspace{-2em}
\label{synth}
\end{figure}
\noindent\textbf{\text{Correlation (c):}} The increase in $c$ has no impact on \textit{BayesCard}, mild impact on \textit{Sampling}, \textit{Naru}, \textit{FLAT} and \textit{MSCN}, and severe impact on \textit{Histogram} and \textit{DeepDB}, which make local or global attribute independence assumptions. \textit{BayesCard} is able to capture the causal pattern of the datasets, and thus can represent any attribute correlation accurately.
\noindent\textbf{\text{Domain (d):}} Increasing the domain size degrades the estimation accuracy of all methods, because increasing $d$ may increase the data complexity exponentially, as there are $d^n$ possible values a tuple can take. Fortunately, except for \textit{Naru}, the degradation in accuracy stays within a reasonable range for all other methods.
\noindent\textbf{\text{Scale (n):}} Similar to the domain size, increasing the number of attributes also increases the data complexity exponentially, so we expect a decrease in accuracy for all methods. Surprisingly, the performance of \textit{BayesCard} is not affected by $n$ at all. This is due to the graph reduction technique, which significantly reduces the number of attributes involved during inference. This technique not only improves the inference latency but also increases the estimation accuracy, as potential modeling errors on the reduced attributes are also eliminated.
Apart from estimation accuracy, \textit{BayesCard} also maintains very stable and robust performance in terms of inference latency, model size, and training time, which is analyzed in Appendix~C~\cite{wu2020bayescard}.
\noindent\textbf{\textit{Summary:}}
\textit{\textit{BayesCard} is much more stable and robust than other \textsf{CardEst} methods for datasets with various settings of data.}
\subsection{Model performance on multi-table dataset}
\label{sect6.3}
As reported in Table~\ref{tab: exp-multi} and Figure~\ref{model_size}, \textit{BayesCard} achieves performance comparable to the current SOTA methods on the two query workloads of the IMDB dataset while preserving its superior inference latency, lightweight model storage, and fast training. Specifically, the estimation accuracy of \textit{BayesCard} is comparable to \textit{NeuroCard}, slightly better than \textit{DeepDB}, and slightly worse than \textit{FLAT}, but with up to $60\times$ smaller model size and $10\times$ faster training and inference.
\begin{table}[t]
\vspace{-0.5em}
\caption{Performance of cardinality estimation algorithms on IMDB datasets with two query workloads.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|cccc|c}
\hline
Workload& Method & $50\%$ & $90\%$ & $95\%$ & $100\%$ & Latency (ms) \\ \hline
\multirow{7}{*}{JOB-light}
&\textbf{\textit{BayesCard}} & \secondcell{1.300} & \thirdcell{3.534} & \thirdcell{4.836} & \thirdcell{19.13} & \secondcell{5.4} \\ \cline{2-7}
& Histogram & 7.318 & 1006 & 5295 & $1 \cdot 10^7$ & \firstcell{\textbf{0.1}} \\ \cline{2-7}
& Sampling &2.464 &55.29 &276.1 & $4 \cdot 10^4$ &63 \\ \cline{2-7}
&NeuroCard & 1.580 & 4.545 & 5.910 & \firstcell{\textbf{8.510}} & 673 \\ \cline{2-7}
&DeepDB &\thirdcell{1.318} & \secondcell{2.500} & \secondcell{3.161} & 39.60 & 49 \\ \cline{2-7}
&FLAT &\firstcell{\textbf{1.150}} & \firstcell{\textbf{1.819}} & \firstcell{\textbf{2.247}} & \secondcell{10.86} & 6.8 \\ \cline{2-7}
&MSCN & 2.750 &19.70 &97.60 & 661.0 & \thirdcell{6.7} \\
\thickhline
\multirow{7}{*}{JOB-Comp}
&\textbf{\textit{BayesCard}} & \secondcell{1.271} & \secondcell{9.053} & \thirdcell{86.3} & \secondcell{$4 \cdot 10^4$} & \secondcell{6.2}\\ \cline{2-7}
&Histogram & 15.78 & 7480 & $4\cdot10^4$ & $1\cdot10^8$ & \firstcell{\textbf{0.2}} \\\cline{2-7}
&Sampling & 3.631 & 102.7 & 1374 & $8\cdot10^6$ & 101 \\ \cline{2-7}
&NeuroCard &\thirdcell{1.538} & \thirdcell{9.506} & \secondcell{81.23} & \thirdcell{$1 \cdot 10^5$} & 73\\ \cline{2-7}
&DeepDB & 1.930 & 28.30 & 248.0 & $1 \cdot 10^5$ &55\\ \cline{2-7}
&FLAT &\firstcell{\textbf{1.202}} & \firstcell{\textbf{6.495}} & \firstcell{\textbf{57.23}} & \firstcell{$\boldsymbol{1\cdot10^4}$} & 10.1\\ \cline{2-7}
&MSCN & 4.961 &45.7 &447.0 & $1\cdot10^5$ & \thirdcell{6.6} \\
\hline
\end{tabular}}
\vspace{-1em}
\label{tab: exp-multi}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{images/JOB_e2e_BN.pdf}
\vspace{-2.5em}
\caption{ End-to-End evaluation of \textit{BayesCard}.}
\vspace{-2.5em}
\label{e2e}
\end{figure}
\noindent\underline{\textbf{End-to-End evaluation on PostgreSQL.}}
Furthermore, we use the IMDB dataset to demonstrate \textit{BayesCard}'s behavior in terms of the \emph{System} criteria. The four aspects of the \emph{System} criteria are rather conceptual and hard to compare quantitatively in experiments, so we incorporate \textit{BayesCard} into an open-source DBMS, \emph{PostgreSQL 9.6.6}, to show that it can improve the query optimization process of a real system. Specifically, we evaluate the end-to-end query processing time for the JOB-light queries as shown in Figure~\ref{e2e}, and compare \textit{BayesCard} with the Postgres baseline, \textit{FLAT}, and the optimal result derived by inserting the true cardinalities during query optimization. We do not compare with other methods since \textit{FLAT} has established its SOTA performance in the same experiment, as reported in the original paper~\cite{zhu2020flat}. We observe that:
1) \textit{BayesCard} improves the Postgres baseline by $13.3\%$, suggesting that with more accurate \textsf{CardEst} results, the query optimizer can generate better query plans with lower execution cost.
2) The improvement of \textit{BayesCard} is very close to that of the method using true cardinalities in query compilation (14.2\%). This verifies that the accuracy of \textit{BayesCard} is sufficient to generate high-quality query plans. Besides, even though \textit{BayesCard} has slightly worse estimation accuracy, it still marginally outperforms \textit{FLAT}. Both methods produce similar execution plans, and the marginal gain of \textit{BayesCard} over \textit{FLAT} is mainly credited to its faster inference latency.
3) The improvement of \textit{BayesCard} and \textit{FLAT} becomes more significant on queries joining more tables because the execution plan for a query joining 2 or 3 tables is almost fixed. For queries joining more tables, the inaccurate Postgres baseline estimates may lead to a sub-optimal query plan, while \textit{BayesCard} and \textit{FLAT}, providing more accurate \textsf{CardEst} results, can find a better plan. This phenomenon has also been observed and explained in~\cite{zhu2020flat, perron2019learned}.
\noindent\textbf{\textit{Summary:}}
The integration of \textit{BayesCard} into Postgres validates it as a practical counterpart of the \textsf{CardEst} component in Postgres and also verifies that \textit{BayesCard} is a system-friendly \textsf{CardEst} method.
\subsection{Comparing algorithms within \textit{BayesCard}}
\label{sect6.4}
In this section, we compare \textit{BayesCard}'s different structure learning algorithms, perform ablation studies on the inference algorithms, and summarize take-home messages for using \textit{BayesCard}.
\begin{table}[t]
\caption{Comparing different structure learning algorithms of \textit{BayesCard} on CENSUS.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
\hline
\multirow{2}{*}{Algorithms} & 95\% & Infer. & Model & Train & Update \\
& q-error & Time (s) & Size (mb) & Time (min) & Time (s) \\ \hline
Exact & \textbf{1.24} & 16.5 & 43.7 & 298 & 1391\\ \hline
Greedy & 1.88 & 2.45 & 2.53 & 62.1 & 442\\ \hline
Chow-Liu &2.05 &\textbf{0.78} & \textbf{0.08} & \textbf{19.8} & \textbf{103} \\
\hline
\end{tabular}}
\label{structlearn}
\end{table}
\smallskip
\noindent\underline{\textbf{Comparing structure learning algorithms.}} We report the estimation accuracy, inference latency (without any of the proposed optimization techniques), training time, model size, and update time on the CENSUS dataset for various structure learning algorithms in Table~\ref{structlearn}. For the \emph{exact} and \emph{greedy} algorithms, we incorporated the ``expert knowledge'' as described in Section~\ref{sect5.2}; otherwise, these algorithms become intractable and cannot generate the BN's structure. We observe that with a more accurate structure learning algorithm (exact), the estimation accuracy improves significantly, but the other four dimensions are sacrificed to a great extent.
We do not report results for the DMV and IMDB datasets, which have far fewer attributes, because their data causal patterns are much simpler and the different structure learning algorithms have similar performance.
\smallskip
\noindent\underline{\textbf{Ablation study of inference algorithms.}} We compare the novel inference optimizations of \textit{BayesCard} with the original variable elimination (VE) and belief propagation (BP) algorithms on a model learned with the Chow-Liu tree algorithm on the CENSUS dataset, as shown in Table~\ref{ablation}. We make the following observations: (1) the latency of the original VE and BP algorithms (up to 780 ms per query) is unaffordable for practical systems; (2) the graph reduction (GR) and just-in-time compilation (JIT) optimizations do not affect the estimation accuracy; (3) GR and JIT alone improve the inference latency of VE by roughly 5 and 30 times, respectively, and by 325 times when combined; (4) the progressive sampling algorithm (PS) produces roughly 4 times larger estimation errors but with a significant improvement in latency. Note that the inference latency of PS and PS+GR can be much lower than that of VE+GR+JIT for a \textit{BayesCard} model with a complex structure (e.g., one learned by the exact structure learning algorithm).
\smallskip
\noindent\underline{\textbf{Take-home messages for \textit{BayesCard} users.}} (1) The Chow-Liu tree structure learning algorithm can efficiently generate a compact model with improved inference latency and stable performance compared with the other structure learning algorithms. The degradation in accuracy can be compensated using the ``expert knowledge'' described in Section~\ref{sect5.2}. (2) The \emph{VE+GR+JIT} inference algorithm efficiently produces exact estimates for BNs with discrete attributes, which is debuggable, predictable, reproducible, and very friendly for system development. However, \emph{PS+GR} is a general approach with guaranteed efficiency for any complex DAG-structured BN, and it supports continuous attributes with any distribution. (3) \textit{BayesCard} provides a general \textsf{CardEst} framework for users to explore different trade-offs to suit their data and system settings.
\begin{table}[t]
\caption{Ablation study of different inference algorithms of \textit{BayesCard} on CENSUS.}
\vspace{-1em}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
Algorithms & VE & BP & VE+GR & VE+JIT & VE+GR+JIT &PS & PS+GR \\ \hline
95\% q-error & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & \textbf{2.05} & 7.47 & 7.47 \\ \hline
Latency (ms) & 780 & 685 & 190 & 21.9 & \textbf{2.4} & 8.8 & 3.5 \\
\hline
\end{tabular}}
\vspace{-0.5em}
\label{ablation}
\end{table}
\smallskip
\section{Related Work}
\label{sect8}
We briefly review the existing \textsf{CardEst} methods based on BNs and the supervised \textsf{CardEst} methods.
\smallskip
\noindent \underline{\textbf{BN-based methods}} were explored for \textsf{CardEst} decades ago. Getoor et al.~\cite{2001SigmodGreedy} used a \textit{greedy} algorithm for BN structure learning, variable elimination for probability inference, and a referential integrity assumption for join estimation. Tzoumas et al.~\cite{tzoumas2011lightweight} learned an exact-structured BN and used belief propagation for inference. Halford et al.~\cite{dasfaa2019} adopted the Chow-Liu tree structure learning algorithm, the VE inference algorithm, and a uniformity assumption for join estimation. However, no practical DBMS incorporates these methods due to their impractical structure learning processes, intractable inference latency, or inaccurate estimation of join queries caused by over-simplified assumptions.
\smallskip
\noindent \underline{\textbf{Supervised \textsf{CardEst} methods}} use the feedback of past queries to train ML models that map a featurized query $Q$ to its actual cardinality. The first approach using neural networks for cardinality estimation was published for UDF predicates~\cite{5}. Later on, a regression-based model~\cite{7} and a semi-automatic alternative~\cite{8} were presented. Recently, supervised DL-based approaches have used a multi-set convolutional network (\textit{MSCN})~\cite{MSCN}, a tree-LSTM~\cite{sun2019end}, and a lightweight XGBoost model~\cite{dutt2019selectivity} for \textsf{CardEst}. However, the supervised learning approaches have two major drawbacks, as mentioned in~\cite{deepDB}: (1) their models neglect the data itself and are not robust to changes in the query workload;
(2) collecting the training data can be very expensive, and the training data have to be recollected when the workload changes.
Therefore, in general, query-driven supervised ML methods for cardinality estimation are not as flexible and accurate as data-driven unsupervised ML methods.
\smallskip
\section{Conclusion}
\label{sect9}
This paper proposes \textit{BayesCard}, the first framework that unifies the existing efforts on PPLs and BNs and optimizes them for \textsf{CardEst} under different data and system settings. \textit{BayesCard} revitalizes BNs with new techniques for model construction and probability inference, which make it a desirable \textsf{CardEst} method satisfying the \emph{algorithm}, \emph{data}, and \emph{system} criteria at the same time. Extensive experimental studies and an end-to-end system deployment establish \textit{BayesCard}'s superiority over existing \textsf{CardEst} methods.
Furthermore, \textit{BayesCard} captures the underlying data causality, which can benefit other data-related tasks. In future work, we plan to explore the use of \textit{BayesCard} for other tasks, such as data cleaning, entity matching, and approximate query processing.
\clearpage
\newpage
\bibliographystyle{ACM-Reference-Format}
\catcode`@=11
\def\section{\@startsection {section}{1}{\z@}{-3.5ex plus-1ex minus
-.2ex}{1.5ex plus.2ex}{\reset@font\large\bf}}
\def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus-1ex
minus-.2ex}{1.5ex plus.2ex}{\reset@font\normalsize\bf}}
\def\subsubsection{\@startsection
{paragraph}{4}{\z@}{3.25ex plus1ex minus.2ex}{-1em}{\reset@font
\normalsize\bf}}
\catcode`@=12
\title{Multibracket simple Lie algebras
\footnote{Talk at XXI Int. Coll. on Group Theor. Methods in
Phys. (Goslar, July 1996), to appear in the Proceedings.}}
\author{J.A. de Azc\'{a}rraga\footnote{St. John's College Overseas Visiting
Scholar.}
\hskip 0pt\ {\rm and}\ J.C. P\'{e}rez Bueno\footnote{
On sabbatical (J.A.) leave and on leave of absence (J.C.P.B.)
from Departamento de F\'{\i}sica Te\'orica and IFIC
(Centro Mixto Univ. de Valencia-CSIC), E--46100 Burjassot (Valencia), Spain.}}
\m@th\@ifnextchar[\@address{\@address[]}{Department of Applied Mathematics and Theoretical Physics,
Silver St., Cambridge CB3 9EW, UK}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def{\it i.e.}{{\it i.e.}}
\def{\it e.g.}{{\it e.g.}}
\def\eq#1{(\ref{#1})}
\font\black=msbm10 scaled\magstep1
\def\mathop{\cdots}\limits{\mathop{\cdots}\limits}
\def\field #1{\hbox{{\black #1}}}
\def\R{{\hbox{{\field R}}}}
\def{\cal G}{{\cal G}}
\begin{document}
\maketitle
\begin{abstract}
We introduce higher-order (or multibracket) simple Lie algebras that
generalize the ordinary Lie algebras.
Their `structure constants' are given by
Lie algebra cohomology cocycles which, by virtue of being such, satisfy a
suitable generalization of the Jacobi identity.
Finally, we introduce a nilpotent, complete BRST operator associated with the
$l$ multibracket algebras which are based on a given simple Lie algebra of
rank $l$.
\end{abstract}
\noindent
Given $[X,Y]:=XY-YX$, the standard Jacobi identity (JI)
$[[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y]=0$
is automatically satisfied if the product is associative.
For a Lie algebra ${\cal G}$,
$[X_i,X_j]=C_{ij}^k X_k$,
the JI may be written in terms of $C_{ij}^k$ as
\begin{equation}
{1\over 2}\epsilon^{j_1j_2j_3}_{i_1i_2i_3}
C^\rho_{j_1 j_2}C^\sigma_{\rho j_3}=0\quad.
\label{JIb}
\end{equation}
Let ${\cal G}$ be simple and (for simplicity) compact.
Then, the Killing metric $k$, with coordinates $k_{ij}=k(X_i,X_j)$, is
non-degenerate and, after suitable normalization, can be
brought to the form $k_{ij}=\delta_{ij}$.
Moreover, $k$ is an invariant polynomial, {\it i.e.}
\begin{equation}
k([Y,X],Z)+k(X,[Y,Z])=0\quad.
\label{INV}
\end{equation}
We also know that $k$ defines the second order Casimir invariant.
Using this symmetric polynomial we may always construct a non-trivial
three-cocycle
\begin{equation}
\omega_{i_1i_2i_3}:=k([X_{i_1},X_{i_2}],X_{i_3})=C_{i_1 i_2}^\rho k_{\rho i_3}
\label{threecocycle}
\end{equation}
which is indeed skew-symmetric as a consequence of \eq{JIb} or \eq{INV}.
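For instance, for $su(2)$, where $C_{ij}^k=\epsilon_{ijk}$ and $k_{ij}=\delta_{ij}$,
eq. \eq{threecocycle} gives $\omega_{i_1i_2i_3}=\epsilon_{i_1i_2i_3}$, the
(suitably normalized) bi-invariant volume form on $SU(2)\sim S^3$.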
In fact, it is known since the classical work of Cartan, Pontrjagin, Hopf and
others that, from a topological point of view, the group manifolds of all
simple compact groups are essentially equivalent to (have the [real] homology
of) products of odd spheres, that $S^3$ is always present in these products and
that the simple Lie algebra cocycles are, via the `localization' process, in
one-to-one correspondence with bi-invariant de Rham cocycles on the associated
compact group manifolds $G$.
This is due to the intimate relation between the order of the
$l(=$rank$\,{\cal G})$
primitive symmetric polynomials which can be defined on a simple Lie algebra,
their $l$ associated generalized Casimir-Racah invariants \cite{RACAH} and the
topology of the associated simple groups. Such a relation was a key fact in the
eighties for the understanding of non-abelian anomalies in gauge theories
\cite{TJZW}.
The simplest (of order 3) higher-order invariant polynomial
$d_{ijk}=d(X_i,X_j,X_k)$ appears for $su(3)$ (and only for $A_l$-type
algebras, $l\ge 2$);
it is given by the symmetric trace of three $su(3)$ generators.
{}From $d_{ijk}$ we may construct
\begin{equation}
\omega_{i_1i_2i_3i_4i_5}:=
\epsilon^{j_2 j_3 j_4}_{i_2 i_3 i_4}
d([X_{i_1},X_{j_2}],[X_{j_3},X_{j_4}],X_{i_5})
=
\epsilon^{j_2 j_3 j_4}_{i_2 i_3 i_4}
C_{i_1 j_2}^\rho C_{j_3 j_4}^\sigma d_{\rho \sigma i_5}
\label{fivecocycle}
\end{equation}
(cf. \eq{threecocycle}), and it
can be checked that \eq{fivecocycle} defines a fifth-order
invariant form (the proof will be given in the general case).
The existence of this five-form $\omega$ shows us that $su(3)$ is,
from a topological point of view, equivalent to $S^3\times S^5$.
If we calculate in $su(3)$ the `four-bracket' we find that
\begin{equation}
[X_{j_1},X_{j_2},X_{j_3},X_{j_4}]=
\sum_{s\in S_4}\pi(s)X_{s(j_1)}X_{s(j_2)}X_{s(j_3)}X_{s(j_4)}
={\omega_{j_1j_2j_3j_4}}^\sigma X_\sigma\quad,
\label{fourcomm}
\end{equation}
where the generators $X_i$ may be taken proportional to the Gell-Mann matrices,
$X_i={\lambda_i \over 2}$, and $\pi(s)$ is the parity sign of the permutation
$s$.
Thus, ${\omega_{j_1j_2j_3j_4}}^\sigma$ is related to the four-bracket and a
five-cocycle (five-form) in the same way as ${C_{j_1j_2}}^\sigma$ is associated
with the standard Lie bracket and a three-cocycle (three-form).
We may ask ourselves whether this construction could be extended to all the
higher-order polynomials to define from them higher-order simple Lie algebras
satisfying an appropriate generalization of the JI.
The affirmative answer is given in \cite{HIGHER}; we outline below
the main steps that led to it.
It is interesting to note that this construction may be used to produce
examples of a generalization \cite{APPB} of the Poisson structure different
from that underlying Nambu mechanics \cite{Na}.
\medskip
\noindent
{\it a) Invariant polynomials on the Lie algebra ${\cal G}$}
Let $T_i$ be the elements of a representation of ${\cal G}$. Then the symmetric trace
$k_{i_1\ldots i_m}\equiv\hbox{sTr}(T_{i_1}\ldots T_{i_m})$ (we shall only
consider here sTr although not all invariant polynomials are of this form
\cite{RACAH}; see \cite{HIGHER}) verifies the
invariance condition
\begin{equation}
\sum_{s=1}^m C^{\rho}_{\nu i_s} k_{i_1\ldots i_{s-1} \rho i_{s+1}\ldots i_m}
=0\quad.
\label{invariance}
\end{equation}
\medskip
\noindent
{\it Proof}: \quad
By definition of $k$, the $l.h.s.$ of \eq{invariance} (cf. \eq{INV}) is
\begin{equation}
\hbox{sTr}\left(\sum_{s=1}^m T_{i_1}\ldots T_{i_{s-1}}[T_\nu,T_{i_s}]
T_{i_{s+1}}\ldots T_{i_m}\right)
=
\hbox{sTr}\left(T_\nu T_{i_1}\ldots T_{i_m}- T_{i_1}\ldots T_{i_m}T_\nu
\right)
= 0\ ,
\label{proof1}
\end{equation}
{\it q.e.d.}
The above symmetric polynomial is associated with an invariant symmetric tensor
field on the group $G$ associated with ${\cal G}$,
$k(g)=k_{i_1\ldots i_m}\omega^{i_1}(g)\otimes\ldots\otimes \omega^{i_m}(g)$,
where the $\omega^{i}(g)$ are left invariant one-forms on $G$.
Since the Lie derivative of $\omega^k$ is given by
$L_{X_i}\omega^k=-C^k_{ij}\omega ^j$ for a LI vector
field $X_i$ on $G$, the invariance condition is the statement
\begin{equation}
(L_{X_\nu} k)(X_{i_1},\ldots,X_{i_m})=
-\sum_{s=1}^m k(X_{i_1},\ldots,[X_\nu,X_{i_s}],\ldots,X_{i_m})=0
\label{liederivative}
\end{equation}
{\it c.f.} \eq{INV}.
On forms, the invariance condition \eq{liederivative} may be written as
\begin{equation}
\epsilon^{j_1\ldots j_{q}}_{i_1\ldots i_{q}}C^{\rho}_{\nu j_1}
\omega_{\rho j_2\ldots j_q}=0\quad.
\label{formsinv}
\end{equation}
\medskip
\noindent
{\it b) Invariant forms on the Lie group $G$}
Let
$k_{i_1\ldots i_m}$ be
an invariant symmetric polynomial on ${\cal G}$ and let us define
\begin{equation}
\tilde\omega_{\rho j_2\ldots j_{2m-2}\sigma}:=
k_{i_1\ldots i_{m-1}\sigma}
C^{i_1}_{\rho j_2}\ldots C^{i_{m-1}}_{j_{2m-3}j_{2m-2}}\quad.
\label{SNBa}
\end{equation}
Then the odd order $(2m-1)$-tensor
\begin{equation}
\omega_{\rho l_2\ldots l_{2m-2} \sigma}:=
\epsilon^{j_2\ldots j_{2m-2}}_{l_2\ldots l_{2m-2}}
\tilde\omega_{\rho j_2\ldots j_{2m-2} \sigma}
\label{SNBb}
\end{equation}
is a fully skew-symmetric tensor. We refer to Lemma 8.1 in \cite{APPB}
for the proof.
Moreover, $\omega$ is an invariant form because for $q=2m-1$ the
$l.h.s.$ of \eq{formsinv} is
\begin{eqnarray}
&&\epsilon^{j_1\ldots j_{2m-1}}_{i_1\ldots i_{2m-1}}
C^{\rho}_{\nu j_1}\omega_{j_2\ldots j_{2m-1} \rho}
=
\epsilon^{j_1\ldots j_{2m-1}}_{i_1\ldots i_{2m-1}}
C^{\rho}_{\nu j_1}\epsilon^{l_3\ldots l_{2m-1}}_{j_3\ldots j_{2m-1}}
\tilde\omega_{j_2 l_3 \ldots l_{2m-1} \rho}
\nonumber \\
&&=
(2m-3)!\epsilon^{j_1\ldots j_{2m-1}}_{i_1\ldots i_{2m-1}}
k_{l_1\ldots l_{m}}C^{l_1}_{\nu j_1}\ldots C^{l_{m}}_{j_{2m-2}j_{2m-1}}
\label{proofb}
\\
&&=
(2m-3)!\epsilon^{j_1\ldots j_{2m-1}}_{i_1\ldots i_{2m-1}}
\left[\sum_{s=2}^m k_{\nu l_2\ldots l_{s-1} \rho l_{s+1}\ldots l_{m}}
C^{\rho}_{j_1 l_s}\right]
C^{l_2}_{j_2 j_3}\ldots
C^{l_{m}}_{j_{2m-2}j_{2m-1}}=0\quad.
\nonumber
\end{eqnarray}
This result follows recalling
\begin{equation}
\epsilon_{i_1\ldots i_p i_{p+1} \ldots i_n}^{j_1\ldots j_p j_{p+1} \ldots j_n}
\epsilon_{j_{p+1} \ldots j_n}^{l_{p+1} \ldots l_n}=
(n-p)!
\epsilon_{i_1\ldots i_p i_{p+1} \ldots i_n}^{j_1\ldots j_p l_{p+1} \ldots l_n}
\label{recordatorio}
\end{equation}
in the second equality, using the invariance of $k$ [eq. \eq{invariance}]
in the third one and the JI in the last equality for each of the $(m-1)$ terms
in the bracket.
This may be seen without using coordinates; indeed \eq{SNBa} is expressed as
\begin{equation}
\tilde\omega(X_\rho,X_{j_2},\ldots,X_{j_{2m-2}},X_\sigma):=
k([X_\rho,X_{j_2}],\ldots,[X_{j_{2m-3}},X_{j_{2m-2}}],X_\sigma)
\label{newomega}
\quad,
\end{equation}
and the ($2m$-1)-form $\omega$ is obtained antisymmetrizing \eq{newomega}
as in \eq{SNBb}
(cf. \eq{fivecocycle}).
Hence
\begin{eqnarray}
&&
\hskip-25pt
(L_{X_\nu}\tilde\omega)(X_{i_1},\ldots,X_{i_{2m-1}})=
-\sum_{p=1}^{2m-1}
\tilde\omega(X_{i_1},\ldots,[X_\nu,X_{i_p}],\ldots,X_{i_{2m-1}})
\nonumber \\
&&
\hskip-25pt
=-\sum_{s=1}^{m-1}
k([X_{i_1},X_{i_2}],\ldots,[[X_\nu,X_{i_{2s-1}}],X_{i_{2s}}]+
[X_{i_{2s-1}},[X_\nu,X_{i_{2s}}]],\ldots,
\nonumber \\
&&
\hskip-25pt
[X_{i_{2m-3}},X_{i_{2m-2}}],X_{i_{2m-1}})
-k([X_{i_1},X_{i_2}],\ldots,[X_{i_{2m-3}},X_{i_{2m-2}}],[X_\nu,X_{i_{2m-1}}])
\nonumber \\
&&
\hskip-25pt
=-\sum_{s=1}^{m-1}
k([X_{i_1},X_{i_2}],\ldots,[X_\nu,[X_{i_{2s-1}},X_{i_{2s}}]],\ldots,
[X_{i_{2m-3}},X_{i_{2m-2}}],X_{i_{2m-1}})
\nonumber \\
&&
\hskip-25pt
-k([X_{i_1},X_{i_2}],\ldots,[X_{i_{2m-3}},X_{i_{2m-2}}],[X_\nu,X_{i_{2m-1}}])
\nonumber \\
&&
\hskip-25pt
=(L_{X_\nu} k)
([X_{i_1},X_{i_2}],\ldots,[X_{i_{2m-3}},X_{i_{2m-2}}],X_{i_{2m-1}})
=0
\quad;
\label{newproof}
\end{eqnarray}
where the JI has been used in the third equality
and \eq{liederivative} in the last, {\it q.e.d.}
\medskip
\noindent
{\it c) The generalized Jacobi condition}
Now we are ready to check that the tensor $\omega$ introduced above verifies a
generalized Jacobi condition that extends eq. \eq{JIb} to multibracket
algebras.
\medskip
\noindent
{\bf Theorem}\quad
Let ${\cal G}$ be a simple compact
algebra, and let $\omega$ be the
non-trivial Lie algebra $(2p+1)$-cocycle obtained from the
associated $p$ invariant symmetric tensor on ${\cal G}$.
Then $\omega$ verifies the {\it generalized Jacobi condition} (GJC)
\begin{equation}
\epsilon^{j_1\ldots j_{4p-1}}_{i_1\ldots i_{4p-1}}
{\omega_{\sigma j_1\ldots j_{2p-1}\cdot}}^\rho
{\omega_{\rho j_{2p} \ldots j_{4p-1}}}=0
\quad.
\label{theorem}
\end{equation}
\noindent
{\it Proof:}\quad
Using \eq{SNBb}, \eq{SNBa} and \eq{recordatorio}, the $l.h.s.$ of \eq{theorem}
is equal to
\begin{eqnarray}
&&
-(2p-3)!\epsilon^{j_1\ldots j_{4p-1}}_{i_1\ldots i_{4p-1}}
k_{l_1 \ldots l_{p}\sigma} C^{l_1}_{\rho j_1} \ldots
C^{l_{p}}_{j_{2p-2} j_{2p-1}}
{\omega^\rho_{\cdot j_{2p} \ldots j_{4p-1}}}
\nonumber
\\
&&
=-(2p-3)!\epsilon^{j_1\ldots j_{4p-1}}_{i_1\ldots i_{4p-1}}
k^{l_1}_{\cdot \ldots l_{p}\sigma} C^{l_{2}}_{j_2 j_3}\ldots
C^{l_{p}}_{j_{2p-2} j_{2p-1}}C^\rho_{l_1 j_1}
\omega_{\rho j_{2p} \ldots j_{4p-1}}
=0\quad,
\label{withname}
\end{eqnarray}
where the invariance of $\omega$ (eq. \eq{formsinv}) has been used in the last
equality, {\it q.e.d.}
\medskip
\noindent
{\it d) Multibrackets and higher-order simple Lie algebras}
Eq. \eq{theorem} now allows us to define higher-order simple Lie algebras
based on ${\cal G}$ using \cite{HIGHER} the Lie algebra cocycles $\omega$ on ${\cal G}$ as
generalized structure constants:
\begin{equation}
[X_{i_1},\ldots,X_{i_{2m-2}}]={\omega_{i_1\ldots i_{2m-2}}}^\sigma_\cdot
X_\sigma
\quad.
\label{cocycle}
\end{equation}
The GJC \eq{theorem} satisfied by the cocycles is necessary since for
{\it even} $n$-brackets of associative operators one has the generalized Jacobi
identity
\begin{equation}
{1\over (n-1)!n!}\sum_{\sigma\in S_{2n-1}} (-)^{\pi(\sigma)}
[[X_{\sigma(1)},\ldots,X_{\sigma(n)}],X_{\sigma(n+1)},\ldots,X_{\sigma(2n-1)}]
=0\quad.
\label{genjacid}
\end{equation}
This establishes the link between the ${\cal G}$-based {\it even} multibracket
algebras and the {\it odd} Lie algebra cohomology cocycles on ${\cal G}$
(note that for $n$ odd the $l.h.s.$ is proportional to the odd
$(2n-1)$-multibracket $[X_1,\ldots,X_{2n-1}]$ \cite{HIGHER}).
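For $n=2$, for example, each double bracket appears twice in \eq{genjacid}
(a permutation and its transposition in the first two entries give equal
contributions), and the identity reduces to the ordinary JI,
$[[X_1,X_2],X_3]+[[X_2,X_3],X_1]+[[X_3,X_1],X_2]=0$.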
Finally we comment that just in the same way that we can introduce for a Lie
algebra a BRST nilpotent operator by
\begin{equation}
s=-{1\over 2}c^ic^j{C_{ij}}^k_\cdot{\partial\over\partial c^k}\equiv s_2\quad,
\quad s^2=0\quad,
\label{BRST}
\end{equation}
with $c^ic^j=-c^jc^i$, the set of invariant forms $\omega$ associated with a
simple ${\cal G}$ allows us to
{\it complete} this operator in the form
\begin{eqnarray}
s=
-{1\over 2}c^{j_1}c^{j_2}{\omega_{j_1j_2}}^\sigma_\cdot
{\partial\over\partial c^\sigma}
-\ldots-
{1\over (2m_i-2)!}c^{j_1}\ldots c^{j_{2m_i-2}}
{\omega_{j_1\ldots j_{2m_i-2}}}^\sigma_\cdot
{\partial\over\partial c^\sigma}
-\ldots
\nonumber \\
-
{1\over (2m_l-2)!}c^{j_1}\ldots c^{j_{2m_l-2}}
{\omega_{j_1\ldots j_{2m_l-2}}}^\sigma_\cdot
{\partial\over\partial c^\sigma}
\equiv s_2+\ldots+s_{2m_i-2}+\ldots +s_{2m_l-2}.
\label{HOBRST}
\end{eqnarray}
This new nilpotent operator $s$ is the {\it complete BRST operator}
\cite{HIGHER} associated with ${\cal G}$.
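Note that the nilpotency of the first term alone, $s_2^2=0$, is equivalent to
the JI \eq{JIb}; likewise, each higher-order term $s_{2m_i-2}$ is nilpotent as
a consequence of the GJC \eq{theorem} satisfied by the corresponding cocycle
\cite{HIGHER}.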
For the relation of these constructions with the strongly homotopy algebras
\cite{LAST}, possible extensions and connections with physics we refer to
\cite{HIGHER} and references therein.
\section*{Acknowledgements}
This research has been partially supported by the CICYT and DGICYT, Spain (AEN
96--1669, PR 95--439).
Both authors wish to thank the kind hospitality extended to them at
DAMTP. Finally, the support of St. John's College (J.A.) and
an FPI grant from the Spanish M.E.C. and the CSIC
(J.C.P.B.) are gratefully acknowledged.
\section{Introduction}
\label{sec:intro}
Today, our daily life is highly dependent on the ever-growing satellite infrastructure. This infrastructure supports many services, from communication and transportation to weather forecasting and many more. It is therefore essential to protect all space assets. One of the major threats is the risk of collision in space. Providing spacecraft with the ability to autonomously recognize surrounding objects is of utmost importance to minimize such risk and is one of the main objectives of Space Situational Awareness (SSA).
Objects in question include active and inactive satellites, as well as space debris.
Over the past years, image-based sensors have been considered a rich source of information for SSA. This has triggered multiple research efforts in the field \cite{opromolla2017review,strube2015raven, yol:hal-01304728,chabot:hal-01784234,forshaw2016removedebris}, and recently, several works have investigated the potential of deep learning (DL) from images for space applications~\cite{sharma2018pose,proencca2020deep}. \\
The performance of DL methods is highly dependent on the availability and quality of the data used for training them. However, in the space domain, data are very scarce and costly to obtain. Efforts have been initiated, nonetheless, for the problem of 6D spacecraft pose estimation, and two dedicated synthetic (or mixed with laboratory-acquired data) datasets were proposed for this purpose, i.e., the \textit{Spacecraft pose estimation dataset (SPEED)}~\cite{speedchallange,challenge} and the \textit{Unreal Rendered Spacecraft On-Orbit (URSO)} dataset~\cite{proencca2020deep}.
However, to the best of our knowledge, no data exist so far for the task of space target recognition and detection, although these tasks are extremely important for SSA and a crucial step towards reaching autonomy in space. Moreover, in view of the major advances made in object recognition using DL~\cite{liu2020deep}, it is interesting to investigate the applicability of these methods to space data, and to identify directions for furthering their performance in the space environment.
In this paper, we introduce a new dataset dedicated to the task of space target recognition and detection. This dataset, named the \emph{SPAcecraft Recognition leveraging Knowledge of Space Environment (SPARK)} dataset, is a unique space multi-modal annotated image dataset. It contains a total of 150k RGB images with bounding box annotation for the target object in each image, together with a matching set of 150k depth images, covering 11 object classes: 10 spacecrafts and one class of space debris. Sample images are shown in Fig.~\ref{fig:CALIPSO}. The data have been generated under a photo-realistic space simulation environment, with a large diversity in sensing conditions, including extreme and challenging ones. The SPARK dataset will be shared publicly with the research community in the context of a competition\footnote{A corresponding challenge will be held during ICIP 2021 \url{https://cvi2.uni.lu/about-spark-2021/}}.
In addition to describing the different features of the SPARK data and their generation, we conduct experiments testing state-of-the-art (SoA) target recognition methods, highlighting interesting challenges for the research community. These include the important question of multi-modal RGB-Depth space object recognition, with preliminary results supporting the added value of multi-modal learning.
The rest of the paper is organized as follows: Section~\ref{sec:Related work} describes related SSA datasets. Details about the simulator and the proposed SPARK dataset are given in Section~\ref{SPARK}. Section~\ref{sec:results} presents the conducted experiments and discusses interesting uses of the proposed data. Section~\ref{sec:conclusion} concludes this work.
\section{Related Work} \label{sec:Related work}
There are multiple datasets available for the task of object recognition. Most of them contain real images~\cite{ILSVRC15,lin2014microsoft,Geiger2013IJRR,DBLP:journals/corr/abs-1803-06184}, while some contain synthetically-generated images~\cite{sadat2018effective,cabon2020virtual,Ros_2016_CVPR}. In the space domain, given the difficulty of obtaining large real datasets, synthetic datasets are currently the default approach for developing DL methods for SSA tasks. Moreover, we have identified two such datasets only~\cite{proencca2020deep,speedchallange}, and both have been designed specifically for spacecraft 6D pose estimation. We herein review and discuss these datasets, comparing them to the SPARK dataset as summarized in Table~\ref{tab:Table1}.
SPEED \cite{speedchallange} consists of 15,300 grayscale images of the same mock-up spacecraft, with a resolution of $1920 \times 1200$ pixels. Two types of images are used: a) 300 images are lab captured. The lab setup employs a robotic arm to position and orient optical sensors with respect to the target mock-up, custom illumination devices to simulate Earth albedo, and a Sun simulator to emulate the illumination conditions present in space. b) 15k images of the same mock-up are synthetically generated using augmented reality software that fuses synthetic and actual space imagery. The testing set contains 300 real lab images and $\sim$3k synthetic images, whereas the training set contains 12k synthetic images and only 5 real lab images. The ground truth 6D pose of the mock-up with respect to the optical sensor is calculated using a motion capture system with calibrated cameras.
URSO~\cite{proencca2020deep} provides three datasets of satellite images at a resolution of $1080 \times 960$ pixels. One dataset is for the \emph{`Dragon'} spacecraft and two datasets for the \emph{`Soyuz'} spacecraft with different operating ranges. Each of these datasets contains 5k images, of which 10\% are dedicated to testing and 10\% to validation. The images were randomly generated and sampled around the day side of the Earth from low Earth orbit altitude. The Earth rotation, camera orientation, and target object pose are all randomized. The target object is placed randomly within the camera field of view, at an operating range between 10~m and 40~m. All images are labelled with the corresponding target pose with respect to the virtual vision sensor.
While the two datasets, SPEED and URSO, are dedicated for the task of 6D pose estimation of a satellite, the proposed SPARK dataset is designed for a different SSA task; that of space object detection, including \emph{localization, segmentation and classification}. Indeed, our dataset covers a larger number of satellite models and space debris forming \emph{11 object classes} with their annotated 2D images, depth images, and segmentation masks. This is detailed in Section~\ref{SPARK} below.
\begin{table}[t!]
\centering
\vspace{-0.8 cm}
\begin{tabular}{l}
\includegraphics[width=\linewidth]{images/SPARK_table.png}
\end{tabular}
\caption{Comparison of SSA related learning datasets.}
\label{tab:Table1}
\vspace{-0.5 cm}
\end{table}
\section{Proposed SPARK dataset} \label{SPARK}
The proposed SPARK dataset was generated using the Unity3D game engine as a simulation environment~\cite{unity}. The simulation consists of the following main components:\\
\textbf{Earth model:} It consists of a high resolution textured and realistic Earth model composed of 16k polygons, obtained from a third party Unity asset \cite{Asset}, based on the NASA Blue Marble collection~\cite{blue-marble}. It presents clouds, clouds' shadows, and atmospheric outer scattering effects. This model is located at the center of the 3D environment with respect to all orbital scenarios. The surrounding space background uses a high resolution panoramic photo of the Milky way galaxy from the European Southern Observatory (ESO)\cite{eso}. \\
\textbf{Target spacecraft:} Ten different realistic models of spacecrafts were used (\textit{`AcrimSat', `Aquarius', `Aura', `Calipso', `CloudSat', `Jason', `Terra', `TRMM', `Sentinel-6'}, and the \textit{`1RU Generic CubeSat'}). They were obtained from NASA 3D resources~\cite{nasa}. The debris objects are parts of satellites and rockets, with corrupted textures added to simulate damaged conditions (\emph{`space shuttle external tank', `orbital docking system', `damaged communication dish', `thermal protection tiles',} and \textit{`connector ring'}). They were placed around the Earth and within the low Earth orbit (LEO) altitude.\\
\textbf{Chaser spacecraft:} It represents the observer equipped with different vision sensors used to acquire data.\\
\textbf{Camera:} A pinhole camera model was used with known intrinsic camera parameters and optical sensor specifications, as well as a depth camera for range imaging.
The SPARK dataset is generated by randomly placing the target satellite or spacecraft within the field of view of a camera mounted on a chaser. We consider different ranges and orientations of the chaser model. Furthermore, the Sun and Earth are randomly rotated around their respective axes in every frame. This allows us to obtain high-resolution photo-realistic RGB images, depth maps, and the corresponding segmentation masks in multiple and different environmental conditions. The quality of spaceborne imaging is highly dependent on many factors such as varying illumination conditions, signal-to-noise ratio, and high contrast. Therefore, the SPARK dataset has been created in a way that covers a wide range of cases, including extreme and challenging ones. The whole dataset is spanned by sampling the following axes:\\
\textbf{Scene illumination:} We model the prominence of the Sun flares, rays, and reflections on Earth from the space in order to represent the different illumination conditions and the high contrast of the spaceborne images. Extreme illumination cases are considered where the sunlight directly faces the optical navigation sensors or reflects from the target surface or the Earth, hence causing lens flare and optical sensor blooming effect (see example distribution in Fig.~\ref{fig:cubesatIllumination} (top)).\\
\textbf{Scene background:} In different orbital scenarios, the target spacecraft can be oriented towards the Earth or towards dark space, which means that the target spacecraft's background can change with respect to different positions and orientations. When the Earth is in the background, the scene will have additional features of the planet's surface and the high reflectivity of oceans and clouds. In contrast, a dark-space background leads to a nearly featureless scene with only sparsely illuminated stars.\\
\textbf{Distance between the camera and the target:} We model different ranges between the target spacecraft and the optical sensor mounted on the chaser spacecraft. Range is inversely proportional to the percentage of target occupation of the scene (see example distribution in Fig.~\ref{fig:cubesatIllumination} (bottom)). \\
\textbf{Optical sensor noise:} Spaceborne images suffer from high noise levels due to small sensor sizes and high dynamic range imaging \cite{sharma2018pose}. Accordingly, varying levels of zero-mean white Gaussian noise were added to the generated synthetic images to simulate the noise in real spaceborne images.
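
A sketch of this noise model in Python is given below; the noise level
$\sigma$ is an illustrative example value, not the exact setting used to
generate SPARK.
\begin{verbatim}
# Illustrative sketch: zero-mean white Gaussian noise added to an 8-bit
# image, with the PSNR of the noisy frame computed against the clean one.
import numpy as np

def add_gaussian_noise(img, sigma, rng):
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def psnr(clean, noisy, peak=255.0):
    mse = np.mean((clean.astype(np.float64)
                   - noisy.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)
noisy = add_gaussian_noise(clean, sigma=10.0, rng=rng)  # sigma assumed
print(f"PSNR: {psnr(clean, noisy):.1f} dB")
\end{verbatim}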
\begin{figure}[t!]
\includegraphics[width=\linewidth , height=3.5cm]{images/Sentinel_L.png}
\includegraphics[width=\linewidth , height=3.5cm]{images/Sentinel_R.png}
\caption{Data distribution of the \emph{`Sentinel-6'} satellite with respect to: illumination (top) and range or target occupation of the scene (bottom).}
\vspace{-0.5 cm}
\label{fig:cubesatIllumination}
\end{figure}
The final SPARK dataset consists of $\sim$150k high-resolution photo-realistic RGB images with bounding box annotation for the target object in each image, $\sim$150k depth images, and the corresponding $\sim$150k segmentation masks in multiple and different space environmental conditions. It includes 10 different satellites, with 12.5k images each, and 5 debris objects, with 5k images each, all debris being combined into a single class.
\begin{figure*}[!th]
\centering
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=\linewidth,height=3.5cm]{images/plot_l.png}
(a)
\end{minipage}%
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth,height=3.5cm]{images/plot_R.png}
(b)
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=\linewidth,height=3.5cm]{images/f_lplot.png}
(c)
\end{minipage}
\caption{Accuracy of ResNet18 considering the three cases of \textbf{\texttt{Random Initialization}}, \textbf{\texttt{Feature Extraction}}, and \textbf{\texttt{Fine Tuning}}. (a) Different illumination levels (High, Mid, and Low). (b) Different ranges (Close, Mid, and Far). (c) Training using the full dataset versus training using only challenging cases, i.e., far-range, low-illumination images.}
\label{fig:plot_results_illumination_range}
\vspace{-0.5 cm}
\end{figure*}
\section{Experiments} \label{sec:results}
In order to analyze the features of the SPARK dataset and highlight interesting research questions this novel dataset opens, we conduct multiple space object classification experiments using SoA approaches.
We run two categories of experiments to look into two aspects: 1) impact of the scene and sensor, analyzing the effect of scene illumination, object occupation and noise contamination; and 2) potential impact of multi-modal spacecraft recognition.\\
\textbf{Impact of scene and sensor:}
We tested different variants of ResNet~\cite{He_2016_CVPR} and EfficientNet~\cite{tan2020efficientnet}. Specifically, we tested three different models, i.e., ResNet18, ResNet34, and EfficientNet B0, and for each model we considered three cases: \\
(a) \textbf{\texttt{Random Initialization}}: where training is done from scratch without any pre-trained weights, with a classifier on top of the feature extraction backbone; \\
(b) \textbf{\texttt{Feature Extraction}}: where we apply transfer learning on the model pre-trained on ImageNet~\cite{ILSVRC15} then freeze the feature extraction and train a classifier on top of it; \\
(c) \textbf{\texttt{Fine Tuning}}: where we use a model pre-trained on ImageNet and add a classifier on top of it. Then, we retrain both on the dataset under consideration.
\\
All the tested networks were trained using the ADAM optimizer with a learning rate of $10^{-4}$ and a batch size of 11 images, with cross entropy loss for classification. For ResNet18 and EfficientNet B0, we trained for 20 epochs, and for ResNet34 we trained for 50 epochs. The dataset has been split as follows: 80\% for training and 20\% for validation. For practicality, all images were resized to $512 \times 512$ pixels.
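
A minimal PyTorch sketch of the three schemes is given below; data loading
is omitted, the \texttt{pretrained} flag loads ImageNet weights, and the
hyperparameters follow the text above.
\begin{verbatim}
# Sketch of the three initialization schemes for ResNet18 (11 classes).
import torch
import torch.nn as nn
from torchvision import models

def build_model(scheme, num_classes=11):
    if scheme == "random_init":             # (a) train from scratch
        model = models.resnet18(pretrained=False)
    else:                                   # (b), (c): ImageNet weights
        model = models.resnet18(pretrained=True)
        if scheme == "feature_extraction":  # (b) freeze the backbone
            for p in model.parameters():
                p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head
    return model

model = build_model("fine_tuning")          # (c) retrains backbone + head
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
\end{verbatim}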
\begin{figure}
\includegraphics[width=\linewidth , height=3.5cm]{images/noise_psnr.png}
\caption{Validation accuracy of a fine-tuned ResNet18 with respect to the average peak signal-to-noise ratio (PSNR) due to the added Gaussian noise.}
\label{fig:NosieLevel}
\vspace{-0.5 cm}
\end{figure}
\noindent We found that the scene illumination level and range have a direct impact on the performance of DL algorithms regardless of the model architecture\footnote{We plot the results for ResNet18, but similar results have been obtained using ResNet34 and EfficientNet B0.}. For instance, we can see in Fig.\ref{fig:plot_results_illumination_range}(a) that training using a subset of the dataset with high illumination leads to higher validation accuracy as compared to mid and low illumination levels, regardless of the tested models and initialization schemes. Similar behavior was observed when training with images at different ranges, as reported in Fig.\ref{fig:plot_results_illumination_range}(b), where the accuracy generally drops as the distance between the camera and the target object increases. \\
In order to address extreme cases, we considered a subset of our dataset that contains only images with low illumination and far ranges. This is a very challenging scenario for SSA tasks using a monocular camera. We observed a large drop in accuracy in the case of \textbf{\texttt{Random Initialization}}, and a drop of around $7\%$ for the two other initialization schemes (see Fig.\ref{fig:plot_results_illumination_range}(c)). \\
Overall, we found that the \textbf{\texttt{Fine Tuning}} initialization scheme provides the best results across all the conducted experiments, while using the \textbf{\texttt{Feature Extraction}} scheme leads to lower performance as compared to \textbf{\texttt{Random Initialization}} (see Fig.\ref{fig:plot_results_illumination_range} (a) and (b)). \\
Finally, we tested the impact of sensor noise on classification accuracy by progressively increasing the level of noise contamination. We observe a systematic decrease in performance, as reported in Fig.\ref{fig:NosieLevel}. \\
\textbf{Multi-modal spacecraft recognition:}
In order to show the potential of multi-modal spacecraft classification, we tested the multi-modal classification approach proposed in~\cite{oyedotun2019learning}. We retrain from scratch on the SPARK dataset using both the provided RGB and depth data modalities. The validation accuracy was boosted to 90.05\% when fusing both modalities, as opposed to 75\% and 88.01\% when learning from RGB or depth data alone, respectively.
Note that these preliminary results have been obtained using heavily downsampled input images, i.e., $64 \times 64$ pixels, during training. The objective of this experiment is mainly to show the advantage that multi-modal data bring to the task of SSA. A more thorough evaluation with larger image resolutions will be reported in a dedicated paper.
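
For illustration, a generic late-fusion baseline can be sketched as follows;
this is not the exact architecture of~\cite{oyedotun2019learning}, and the
depth maps are assumed to be replicated to three channels to reuse the RGB
stem.
\begin{verbatim}
# Generic RGB-Depth late-fusion sketch: two CNN branches whose features
# are concatenated before a linear classifier.
import torch
import torch.nn as nn
from torchvision import models

class RGBDFusionNet(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        self.rgb = models.resnet18(pretrained=False)
        self.depth = models.resnet18(pretrained=False)
        feat = self.rgb.fc.in_features
        self.rgb.fc = nn.Identity()      # keep 512-d features
        self.depth.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat, num_classes)

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb(rgb), self.depth(depth)], dim=1)
        return self.classifier(f)

net = RGBDFusionNet()
logits = net(torch.randn(2, 3, 64, 64),    # RGB batch
             torch.randn(2, 3, 64, 64))    # depth replicated to 3 ch.
\end{verbatim}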
\section{Conclusion} \label{sec:conclusion}
We proposed the SPARK dataset for spacecraft recognition, localization, and segmentation. This dataset was generated under a realistic space simulation environment, providing a large range of diversity in sensing conditions and multiple spacecraft classes.
This dataset is proposed to pave the way for developing dedicated deep learning approaches for debris and spacecraft recognition and detection in the context of SSA missions.
First experiments on SPARK identified the most challenging sensing conditions that are important to focus on, i.e., far range, low illumination, and large contamination with noise. Furthermore, multi-modal RGB-Depth spacecraft recognition is identified as a relevant research direction. A competition using the SPARK dataset will follow to foster efforts on data-driven SSA.
\label{sec:ref}
\bibliographystyle{IEEEbib}
\section{Introduction}
High-redshift quasars are among the most luminous sources in the distant Universe.
Their large luminosities ($L \sim 10^{47}$ erg/s) suggest that the powering mechanism of the strong
radiative emission is the accretion of gas onto a Super Massive Black Hole (SMBH), with a mass
$M_{\rm BH} \gtrsim 10^9 M_{\odot}$ settled in the center of the host galaxy \citep[e.g.][]{Fan01, Fan03, Willott07}.
This accretion process, in fact, can convert up to $30\%$ of the rest-mass energy of the infalling gas into radiation, explaining the nature of
this powerful emission \citep{Shakura73}.
The most distant quasars are observed up to redshifts $z \sim 7$ \citep{Mortlock11}, corresponding to a
Universe less than 1 Gyr old. How these SMBHs form and grow in such a short time is still an open
question.
In the hierarchical scenario of structure formation, SMBHs are expected to grow via mergers with other
BHs and gas accretion, starting from a less massive BH, generally referred to as BH seed.
Hence, the formation and accretion history of SMBHs depend on the initial mass of BH seeds and on their
formation epoch.
The nature of the first BH seeds is still uncertain and different formation mechanisms have been proposed in the literature \citep[see e.g.][and references therein]{Volonteri10}:
\begin{enumerate}
\item primordial black holes, with masses ranging from the Planck mass up to $10^5 M_{\odot}$ could have
formed in the early Universe, well before galaxy formation \citep{Khoplov05};
\item remnants of the first generation of metal-poor stars, the so-called population III (Pop III) stars (see e.g. Bromm 2013 for a review), which can collapse into black holes of $\sim 100 \, M_{\odot}$ at $z \sim 20$ \citep{Madau01,Abel02, Bromm02, Oshea07, Turk09, Tanaka09, Greif12,Valiante16};
\item gas-dynamical processes in massive environments can lead to the direct collapse of gas into a massive BH of [$10^4 - 10^6$] M$_\odot$ \citep{Bromm03, Koushiappas04, Begelman06, Lodato06, Ferrara14, Pacucci15,Valiante16};
\item stellar-dynamical processes allow BHs to form in nuclear clusters of second generation stars with masses $\sim [10^2 - 10^5] \, M_{\odot}$ \citep{Devecchi09, Devecchi10, Devecchi12};
\item gas-driven core-collapse of dense stellar clusters due to the rapid infall of gas with a mass comparable to that of the stellar cluster can lead to the formation of BHs of $\sim 10^3 M_\odot$ or larger
\citep{Davies11, Lupi14}.
\end{enumerate}
In order to grow up to billions of solar masses at $z \sim 6$, seed BHs must accrete gas at the Eddington rate almost uninterruptedly for several hundred Myr, even if they start as ``heavy seeds'' of $[10^5 - 10^6] \, M_{\odot}$. Alternatively, short episodes of super-Eddington accretion have been suggested as a viable way to allow the efficient growth of SMBHs, especially if these start
from ``light seeds'' of $\sim 100 \, M_\odot$ \citep{Haiman04,Yoo04, Shapiro05, Volonteri05, Volonteri06, Pelupessy07, Tanaka09, Madau14, Volonteri15}.
In a recent numerical study, \citet{Lupi15} show that, if a large reservoir of dense cold gas is available, a BH of $M_{\rm BH} \sim 10^5 M_{\odot}$ can grow on a $\sim \rm Myr$ timescale
starting from a seed mass of $\sim 20-100 \, M_{\odot}$, under the assumption of a slim accretion disk solution (Abramowicz et al. 1988).
The slim disk solution, which we describe in more detail in Section \ref{Sec:bh growth}, represents an advective, optically thick
flow that generalises the standard Shakura $\&$ Sunyaev solution \citep{Shakura73}. In this model, the radiative efficiencies, which depend on the accretion rate, are low: the radiation is trapped and advected inward by the accretion flow (see however the recent simulations by \citealt{Sadowski16}).
In this scenario, the outflow has a negligible effect and the BH can accrete up to $80\% - 100\%$ of the gas mass available \citep{Pacucci15}.
Indeed, there is observational evidence of mildly super-critical
accretion \citep{Kelly13, Page14} in quasars at redshift up to $\sim$ 7.
In addition, recent numerical simulations aimed at studying super-Eddington accretion onto a rapidly rotating BH \citep{McKinney14} and the energy, momentum and mass outflow rates from radiatively inefficient accretion discs \citep{Sadowski13} predict Eddington ratios $\eta_{\rm Edd} = L/L_{\rm Edd}$ up to 10, where $L_{\rm Edd}$ is the Eddington luminosity, defined as:
\begin{equation}
L_{\rm Edd} = \frac{4\pi G M_{\rm BH} m_p c}{\sigma_T} \simeq 3.3 \times 10^{10} \left( \frac{M_{\rm BH}}{10^6 M_{\odot}} \right) L_{\odot}
\end{equation}
\noindent
with $M_{\rm BH}$ the central BH mass, $m_p$ the proton mass, $c$ the speed of light and $\sigma_T$ the Thomson scattering cross section.
Such a high ratio has also been invoked to explain the nature of ultraluminous X-ray sources \citep[e.g.][]{Middleton13}.
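
As a quick numerical sanity check of the normalization above (illustrative,
in SI units):
\begin{verbatim}
# Eddington luminosity for M_BH = 1e6 Msun, from the equation above.
import math

G, c = 6.674e-11, 2.998e8              # m^3 kg^-1 s^-2, m/s
m_p, sigma_T = 1.673e-27, 6.652e-29    # kg, m^2
M_sun, L_sun = 1.989e30, 3.828e26      # kg, W

L_edd = 4 * math.pi * G * (1e6 * M_sun) * m_p * c / sigma_T
print(f"L_Edd = {L_edd / L_sun:.2e} L_sun")   # ~3.3e10 L_sun
\end{verbatim}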
In this paper, we investigate the role of super-Eddington accretion in the formation of the first
SMBHs at redshift $z\sim 6$, with the aim of understanding in which environments it can occur, and discuss the implications for the coevolution of the SMBHs and their host galaxies at high redshifts. We base our analysis on the data-constrained semi-analytical model \textsc{GAMETE/QSOdust}, which allows us to simulate a large number of hierarchical histories of a quasar host dark matter halo, following the star formation history, chemical evolution and nuclear black hole growth in all its progenitor galaxies. The model was first successfully used to investigate the properties of the $z = 6.4$ quasar SDSS J1148+5251 by \citet{Valiante11,Valiante12}, applied to a sample of quasars at $5 < z < 6.4$ by \citet{Valiante14} and more recently used to investigate the relative importance of light and heavy
seeds in the early growth of high-z SMBHs under the assumption of Eddington-limited accretion \citep{Valiante16}. Here we present an improved version
of the model, that has been modified to follow gas cooling, disk and bulge formation, and BH gas accretion in all the progenitor systems of a $z = 6.4$ quasar, using
SDSS J1148+5251 (hereafter J1148) as a prototype for the general class of luminous high-redshift quasars.
The paper is organized as follows.
In Section \ref{sec:model} we introduce the model, describing assumptions and physical
prescriptions. In Section \ref{sec:results} we present the results. Finally, a discussion and the main conclusions are given in Section \ref{sec:discussion}.
\begin{table*}
\begin{center}
\caption{Observed and inferred properties of the quasar SDSS J1148+5251. The black hole mass, $M_{\rm BH}$, is estimated from the $\rm Mg_{\rm II}$ doublet and the $\lambda = 3000 \, \AA$ continuum \citep{Derosa11}. The mass of molecular gas, $M_{\rm H_2}$, and the dynamical mass, $M_{\rm dyn}\sin^2 i$, have been estimated from CO observations
(see \citealt{Valiante14} for more details). The star formation rate, SFR, has been computed from the far-infrared (FIR) luminosity using the Kennicutt relation (see Section \ref{sec:results} for further details).
The value of $L_{\rm FIR}$ and $M_{\rm dust}$ have been computed by \citet{Valiante11,Valiante14}. The bolometric luminosity $L_{\rm bol}$ is estimated from the observed flux at $1450 \, {\AA}$ \citep{Fan03}
using the bolometric correction by \citet{Richards06}.}\label{Tab1}
\scalebox{0.86}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
{$z$} & {$M_{\rm BH} \, [10^{9}M_{\odot}$]} &{$M_{\rm H2} \, [10^{10} M_{\odot}$ ]} & {$M_{\rm dyn} \sin^2 i \, [10^{10} M_{\odot}$]} & {$L_{\rm FIR} \, [10^{13} L_{\odot}$]} & {$L_{\rm bol} \, [10^{14} L_{\odot}$]} & {$\rm SFR \, [10^3 M_{\odot}/\rm yr$]} & {$M_{\rm dust}\, [10^8 M_{\odot}$]} \\
\hline
6.42 & $4.9 \pm 2.5 $ & $2.3 \pm 1.9 $ & $3.4 \pm 1.3$ & $2.2 \pm 0.33$ & $1.36 \pm 0.74$ & $2.0 \pm 0.5$ & $3.4^{+1.38}_{-1.54}$\\
\hline
\end{tabular}
}
\end{center}
\end{table*}
\section{The model}
\label{sec:model}
In this section we provide a brief summary of the original \textsc{GAMETE/QSOdust} model (referring the readers to \citealt{Valiante11, Valiante12, Valiante14} for a more detailed description) and we present the new features that have been implemented for the present study.
We reconstruct 30 independent merger histories of a dark matter halo at redshift 6.4 assumed to be the host of J1148. We adopt a Navarro Frenk $\&$ White (1995, NFW) density profile with a mass of $M_{\rm h} = 10^{13} M_{\odot}$, within the range supposed to host high-$z$ bright quasars \citep{Volonteri06,Fan04} and simulate its hierarchical history using a binary Monte Carlo merger tree algorithm based on the Extended Press-Schechter theory \citep{Lacey93}.\\
The code follows the time evolution of the mass of gas, stars, metals and dust in a 2-phase ISM inside each progenitor galaxy \citep[see also][]{deBennassuti14}, taking into account chemical enrichment from Asymptotic Giant Branch (AGB) stars and Supernovae (SNe), which inject dust and metals into the ISM, grain destruction by SN shocks and grain growth in dense molecular clouds. \\
Energy-driven outflows, powered by both AGN and SN feedback, are considered in the model: the energy released by the BH accretion process and SN explosions couples with the gas and can unbind a huge amount of interstellar gas \citep{Silk98}. Although the physical mechanisms that trigger these galaxy-scale winds are still controversial, the model predicts mass ejection rates comparable to the observed ones \citep{Maiolino12, Valiante12, Cicone15}.
Following \citet{Valiante11, Valiante16} we focus our study on one of the most distant and best studied quasar, J1148, discovered at redshift $z \simeq 6.4$ \citep{Fan03}. The observationally inferred properties of this
quasar are reported in Table \ref{Tab1}. These are used to calibrate the model by fixing the adjustable free parameters shown in Table \ref{Tab:free}, as
described below.
In what follows, we discuss the new features of the code, namely: ({\it a}) the formation of the disk via gas cooling;
({\it b}) the formation of the bulge via major mergers; ({\it c}) bursted and quiescent star formation both in the disk and in the bulge;
({\it d}) the BH seeding prescription; ({\it e}) the BH growth via accretion and coalescences, considering also the recoil velocities that can be generated by the product of the merging pair due to asymmetric gravitational wave emission; ({\it f}) SNe and AGN feedback, responsible of galactic-scale winds.
We adopt a Lambda cold dark matter ($\Lambda$CDM) cosmology with parameters $\Omega_m = 0.314$, $\Omega_\Lambda = 0.686$, $h=0.674 $ \citep{Planck14}, so that the Hubble time at redshift 6.4 is 851 Myr. The difference with the cosmological parameters adopted in previous works (Valiante et al. 2011, 2014) is mainly the larger value of $\sigma_8$ (Planck $\sigma_8 = 0.834 $, WMAP7 $\sigma_8 = 0.761$ ), which implies an increased power at small scales, leading to a larger number of progenitor systems at high redshifts.
\subsection{Gas cooling}
In each newly virialized dark matter halo with mass $M_h$, the initial gas mass is assumed to be the cosmic baryon fraction
$M_{\rm diff} = (\Omega_{\rm b}/\Omega_{\rm m}) \, M_{h}$.
We suppose this gas to be all in the diffuse phase, i.e. pressure-supported, and to follow an isothermal
density profile $\rho_g$ defined as:
\begin{equation}
\rho_g(r) = \frac{M_{\rm diff}}{4\pi R_{\rm vir}r^2},
\end{equation}
\noindent
where $R_{\rm vir}$ is the virial radius of the dark matter halo.
The hot diffuse gas gradually cools providing the reservoir of cold gas out of which stars form. The gas cooling processes strongly depend on the temperature and
chemical composition of the gas.
In dark matter halos with virial temperature $T_{\rm vir} < 10^4 \,K$, referred to as mini-halos, the primordial gas can cool only through $\rm H_2$
roto-vibrational transitions \citep{Haiman96}. As the gas becomes progressively enriched in heavy elements, other
molecular species can contribute to cooling, and collisionally excited metal fine-structure lines, mostly OI and CII, can provide
additional cooling pathways.
Here we only consider the contribution of $\rm H_2$, OI and CII cooling using metallicity dependent tabulated cooling functions, $\Lambda(T_{\rm vir}, Z)$,
computed as described in Appendix A of \citet{Valiante16} but we neglect the effect of $\rm H_2$ photo-dissociation by Lyman-Werner photons. We will
return to this point in Section \ref{sec:results}.
In dark matter halos with virial temperatures $\rm T_{\rm vir} \geq 10^4 K$, referred to as Lyman-$\alpha$ cooling halos, the temperature is high enough to excite atomic transitions, allowing the primordial
gas to cool through hydrogen Lyman-$\alpha$ line emission. In this regime, we use metallicity-dependent tabulated cooling functions presented by
\citet{Sutherland93}.
The time scale for gas cooling, $\tau_{\rm cool}$, is defined as:
\begin{equation}
\tau_{\rm cool} = \frac{3}{2}\frac{ \mu m_p \kappa_B T_{\rm vir}}{\rho_g(r_{\rm cool}) \Lambda(T_{\rm vir},Z)},
\end{equation}
where $\kappa_B$ is the Boltzmann constant, $\mu$ is the mean molecular weight and $r_{\rm cool}$ is the cooling radius, obtained by assuming that the cooling time is
equal to the halo dynamical time $t_{\rm dyn} = R_{\rm vir}/v_{\rm DM}$, where $v_{\rm DM}$ is the dark matter (DM) halo circular velocity:
\begin{equation}
r_{\rm cool} = \left[ \frac{t_{\rm dyn} \, M_{\rm diff} \, \Lambda(T_{\rm vir}, Z)}{6 \pi \, \mu m_p\, \kappa_B T_{\rm vir} \,R_{\rm vir}}\right]^{1/2},
\end{equation}
\noindent
Then, the gas cooling rate can be computed\footnote{Note that if $r_{\rm cool} > R_{\rm vir}$ we assume that the gas never reaches
hydrostatic equilibrium and it is immediately available to star formation \citep{Delucia10}.} as:
\begin{equation}
\dot{M}_{\rm cool} = 4 \pi \rho_g (r_{\rm cool}) r_{\rm cool}^2 \frac{dr_{\rm cool}}{dt} = \frac{M_{\rm diff}}{2R_{\rm vir}}\frac{r_{\rm cool}}{t_{\rm dyn}}.
\end{equation}
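
The above expressions for $\tau_{\rm cool}$, $r_{\rm cool}$ and
$\dot{M}_{\rm cool}$ can be sketched in a few lines of Python; the value of
the cooling function and the halo parameters below are illustrative
placeholders, not the tabulated functions used in the model.
\begin{verbatim}
# Sketch of the cooling chain.  Lambda (W m^3) and the halo parameters
# are assumed example values; the model interpolates tabulated Lambda.
import math

k_B, m_p = 1.381e-23, 1.673e-27               # SI units
M_sun, kpc, yr = 1.989e30, 3.086e19, 3.156e7

def cooling_rate(M_diff, R_vir, v_DM, T_vir, Lam, mu=0.59):
    t_dyn = R_vir / v_DM
    r_cool = math.sqrt(t_dyn * M_diff * Lam /
                       (6 * math.pi * mu * m_p * k_B * T_vir * R_vir))
    r_cool = min(r_cool, R_vir)   # beyond R_vir the gas cools at once
    return M_diff / (2 * R_vir) * r_cool / t_dyn

Mdot = cooling_rate(M_diff=1.6e7 * M_sun, R_vir=1.5 * kpc,
                    v_DM=17e3, T_vir=1.5e4, Lam=1e-39)
print(f"Mdot_cool ~ {Mdot * yr / M_sun:.2f} Msun/yr")   # ~0.04
\end{verbatim}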
\subsection{Disk and bulge formation}
\label{diskBulge}
Along the hierarchical history of the final DM halo, we define major (minor) halo-halo merger events as those with halo mass ratio $\mu = M_{\rm halo,1}/M_{\rm halo,2}$ (with
$M_{\rm halo,1} \leq M_{\rm halo,2}$) larger (lower) than $\mu_{\rm thr} = 1/4$ \citep{Barausse12}. In quiescent evolution (i.e. no encounters with other galaxies), the cold gas settles on a rotationally-supported disk, because of the conservation of angular momentum, and can start to form stars.
The disk, composed of gas and stars,
can be disrupted by a major merger and a spherical bulge is expected to form in this event.
Minor mergers, instead, are not expected
to destroy the disk but may help the growth of the bulge by disk instabilities \citep{Naab03,Bournaud05}.
In our model, major mergers are supposed to destroy both the gaseous and stellar disk components of the newly-formed galaxy, adding the stars and gas to the central bulge. Minor mergers do not contribute to the transfer of matter between the disk and bulge, and thus lead to the formation of a new galaxy with disk and bulge masses that are the sums of those of the two progenitors.
We consider a self-gravitating disk, with an exponential gas surface density profile, $\Sigma_{\rm d}$, defined as \citep{Mo98}:
\begin{equation}\label{sigma}
\Sigma_{\rm d}(r) = \Sigma_{\rm d}(0) \, e^{-r/R_{\rm d}} ,
\end{equation}
\noindent
where $R_{\rm d}$ is the scale radius of the gaseous disk and $\Sigma_{\rm d}(0)$ is the
central surface density of the gas. For the stellar component of the disk, we adopt the same
density profile with the same scale radius $R_{\rm d}$.
Following \citet{Mo98} we define the scale radius as,
\begin{equation}\label{rd}
R_{\rm d} = \frac{1}{\sqrt{2}}\left( \frac{j_{\rm d}}{m_{\rm d}} \right) \lambda R_{\rm vir} \frac{1}{\sqrt{f_{\rm c}}} f_{\rm R}(\lambda, c, m_{\rm d}, j_{\rm d}),
\end{equation}
\noindent
where $j_{\rm d} = J_{\rm d}/J$ is the ratio between the disk angular momentum and that of the halo, $m_{\rm d} = M_{\rm d}/M_{\rm h}$ is the disk mass (stars+gas) fraction over the halo mass. From the conservation of the specific angular momentum we assume $j_{\rm d} / m_{\rm d} = 1$. The spin parameter $\lambda$ is considered to be constant and equal to $0.05$, the mean value adopted by \citet{Mo98}.
The factors $f_{\rm c}$ and $f_{\rm R}$ take into account the correction to the total energy of the halo resulting from the NFW density profile and the gravitational effect of the disk, and are computed following the prescription given by \citet{Mo98}. The factor $f_{\rm c}$ depends on the concentration parameter $c$, that we assume to be constant and equal to $c=1$\footnote{Unfortunately,
numerical studies of the concentration parameter of dark matter halos spanning the mass and redshift range relevant for the present study are not available. Extrapolating the results of \citet{Munoz11},
we adopt a constant value of $c = 1$. At a fixed halo mass, BH growth would be favoured in more concentrated halos, that are characterized by a larger mass and circular velocity in the inner regions \citep{Mo98}.}:
\begin{equation}
f_{\rm c} = \frac{c}{2}\frac{1 - 1/(1+c)^2 - 2\ln(1+c)/(1+c)}{[c/(1+c) - \ln(1+c)]^2}.
\end{equation}
\noindent
The factor $f_{\rm R}$ is computed as,
\begin{equation}\label{fr}
f_{\rm R} = 2\left[ \int_0^{\infty} e^{-u}u^2\frac{v_c(R_{\rm d} u)}{v_c(R_{\rm vir})}\,{\rm d}u\right]^{-1},
\end{equation}
\noindent
where $v_c(r)$ is the total rotation velocity of the system,
\begin{equation}
v_c^2(r) = v_d^2(r) + v_b^2(r) + v^2_{\rm DM}(r).
\label{eq:totcirc}
\end{equation}
\noindent
Here $v_b$ is the circular velocity of the bulge, $v_{\rm DM}$ is the circular velocity of the DM halo and $v_d$ is the circular velocity of the thin, exponential disk,
\begin{equation}
v^2_{d}(r) = \pi \, G\, \Sigma_0 \, R_{\rm d} \, x^2 [I_0(x/2)K_0(x/2) - I_1(x/2)K_1(x/2)],
\end{equation}
\noindent
where $x = r/R_{\rm d}$ and $I_{\alpha},K_{\alpha}$ are the modified Bessel functions of the first and second
type, respectively, and $\Sigma_0 = \Sigma_{\rm d}(0) + \Sigma^{\star}_{\rm d}(0)$ is the sum of the gas and stellar
central ($r=0$) surface densities.
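
For reference, $v_d(r)$ can be evaluated directly with standard
special-function libraries; the surface density and scale radius in the
sketch below are assumed example values.
\begin{verbatim}
# Rotation curve of the thin exponential disk (sketch, SI units).
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 6.674e-11
M_sun, kpc = 1.989e30, 3.086e19

def v_disk(r, Sigma0, R_d):
    x = r / R_d
    return np.sqrt(np.pi * G * Sigma0 * R_d * x**2 *
                   (i0(x / 2) * k0(x / 2) - i1(x / 2) * k1(x / 2)))

Sigma0 = 1e9 * M_sun / kpc**2     # assumed central surface density
R_d = 0.5 * kpc                   # assumed scale radius
r = np.linspace(0.1, 5.0, 50) * R_d
print(f"peak v_d ~ {v_disk(r, Sigma0, R_d).max() / 1e3:.0f} km/s")  # ~75
\end{verbatim}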
For the bulge component, we assume that the
gas density profile $\rho_{\rm b}(r)$ is described as \citep{Hernquist90},
\begin{equation}\label{rho}
\rho_{\rm b}(r) = \frac{M_{\rm b}}{2\pi}\frac{r_{\rm b}}{r(r+r_{\rm b})^3},
\end{equation}
\noindent
where the scale radius, $r_{\rm b}$, is computed as $r_{\rm b} = R_{\rm eff}/1.8153$ (Hernquist 1990), and the effective radius $R_{\rm eff}$\footnote{$R_{\rm eff}$ is the effective radius of the isophote enclosing half the light.}, depends on the gas and stellar masses in the bulge \citep{Shen03}:
\begin{equation}\label{reff}
\log(R_{\rm eff}/{\rm kpc}) = 0.56\log(M_{\rm b} + M^{\star}_{\rm b}) - 5.54.
\end{equation}
\noindent
We adopt the same density profile for the stellar component in the bulge.
The velocity profile of the bulge, computed through the Poisson equation, is
\begin{equation}
v^2_{b} = \frac{Gr(M_{\rm b} + M^{\star}_{\rm b})}{(r_{\rm b}+r)^2}.
\end{equation}
\noindent
We assume that the halo responds adiabatically to the gradual build up of the disk and bulge, maintaining the spherical symmetry during the contraction. Thus, the angular momentum is conserved during the collapse from a mean initial radius $r_i$ to a radius $r$ ($< r_i$), so that:
\begin{equation}
M_f(r)r = M(r_i)r_i,
\end{equation}
\noindent
where $M(r_i)$ is the mass of DM enclosed in $r_i$ obtained
integrating the NFW density profile and $M_f(r)$ is the total final mass within a radius r:
\begin{equation}
M_f(r) = M_{\rm d,t}(r) + M_{\rm b,t}(r) + (1-f_{\rm gal})M(r_i),
\end{equation}
\noindent
where $M_{\rm d,t}(r)$ and $M_{\rm b,t}(r)$ are the total disk and bulge masses (star and gas) enclosed within a
radius $r$, obtained by
integrating eqs.~(\ref{sigma}) and~(\ref{rho}), and $f_{\rm gal} = [M_{\rm d,t} + M_{\rm b,t}]/M_{\rm h}$ is the
fraction of the total mass in the disk and bulge.
The velocity curve of the perturbed DM halo is then,
\begin{equation}
v^2_{\rm DM}(r) = G\left[M_f(r)-M_{\rm d,t}(r)-M_{\rm b,t}(r)\right]/r.
\end{equation}
\noindent
Following these prescriptions we model the formation and evolution of disk and bulge components in each halo along the reconstructed merger histories.
\begin{table}
\begin{center}
\caption{The calibrated values of the adjustable parameters of the reference model.}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Free parameters}} & \textbf{values} \\
\hline
$\epsilon_{\rm d}^{\star} $ & quiescent star formation efficiency & $0.083$ \\
\hline
$\beta$ & BH accretion efficiency & $0.03$ \\
\hline
$\epsilon_{\rm AGN} $ & AGN-feedback efficiency & $1.5 \times 10^{-3}$ \\
\hline
\end{tabular}\label{Tab:free}
\end{center}
\end{table}
\subsubsection{Star formation rate}
Hydrodynamical simulations suggest that merging events, major mergers in particular, can trigger
bursts of star formation in the central regions as a consequence of the tidal forces produced by
galaxy-galaxy interactions \citep{Mihos94,Springel00,Cox08}.
Since starbursts are confined in the very central region of the galaxy, we assume a quiescent mode
of star formation in the disk while bursts are triggered in the bulge when a major merger occurs.
The star formation rate (SFR) in the disk is described as,
\begin{equation}\label{SFR}
\dot{M}^{\star}_{\rm d} = \epsilon_{\rm d}^{\star} \frac{M_{\rm d}}{\tau_{\rm d}},
\end{equation}
\noindent
where $M_{\rm d}$ is the gas mass in the disk, $\tau_{\rm d}~=~3 R_{\rm d}/v_c(3R_{\rm d})$ is the dynamical time of the
disk evaluated at the peak of the circular velocity profile \citep{Mo98}, $R_{\rm d}$ is the disk scale radius defined in eq. \ref{rd} and $\epsilon_{\rm d}^{\star}$ is an adjustable free parameter representing the star formation efficiency in the disk. In our reference model, $\epsilon_{\rm d}^{\star} =0.083$ (see Table \ref{Tab:free}).
Similarly, the SFR in the bulge is computed as,
\begin{equation}
\dot{M}^{\star}_{\rm b}= \epsilon_{\rm b}^{\star} \frac{M_{\rm b}}{\tau_{\rm b}},
\end{equation}
\noindent
where $M_{\rm b}$ is the gas mass in the bulge, $\tau_{\rm b} = R_{\rm eff}/v_c(R_{\rm eff})$ is the dynamical time of the bulge and the effective radius $R_{\rm eff}$ is defined in eq. \ref{reff} above.
We assume that in absence of merger events, the star formation efficiency in the bulge is equal to that of the disk,
$\epsilon_{\rm b}^{\star} = \epsilon_{\rm d}^{\star}$. When a merger event occurs, the star formation efficiency increases as a consequence of the destabilizing effect of the interaction, and we adopt the following scaling relation:
\begin{equation}
\epsilon_{\rm b}^{\star} = \epsilon_{\rm d}^{\star} \, f(\mu),
\label{eq:bulge_eff}
\end{equation}
\noindent
with $f(\mu) = \max[1,1+ 2.5 \, (\mu-0.1)]$, so that mergers with $\mu \le 0.1$ do not trigger starbursts. With the adopted scaling
relation, the starburst efficiency in the reference model is $0.083 \le \epsilon_{\rm b}^{\star} \le 0.27$, consistent
with the range of values found by means of hydrodynamical simulations of merging galaxy pairs \citep{Cox08}
and adopted by other studies \citep{Menci04,Valiante11}.
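
For concreteness, the adopted scaling can be tabulated directly (a trivial
sketch using the reference-model efficiency):
\begin{verbatim}
# Merger-driven boost of the star formation efficiency:
# eps_b = eps_d * max[1, 1 + 2.5 (mu - 0.1)], with eps_d = 0.083.
def starburst_efficiency(mu, eps_d=0.083):
    return eps_d * max(1.0, 1.0 + 2.5 * (mu - 0.1))

for mu in (0.05, 0.25, 1.0):
    print(f"mu = {mu:.2f}: eps_b = {starburst_efficiency(mu):.3f}")
# -> 0.083, 0.114, 0.270: the quoted range 0.083-0.27
\end{verbatim}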
\subsection{Black hole growth and feedback}
\label{Sec:bh growth}
\subsubsection{BH seeds}
We assume BH seeds to form only as remnants of first (Pop III) stars. In fact, our main aim is to investigate if SMBHs can form by super-Eddington accretion starting from ``light'' seeds at high redshift. Although the initial mass function of Pop III stars is still very uncertain, the most recent numerical simulations suggest a characteristic mass of a few hundreds of solar masses
at $z \sim 25$, that progressively shifts to a few tens of solar masses at lower redshifts \citep{Hirano15}. For simplicity, here we do not consider the redshift modulation of the characteristic mass
and we plant a BH seed with a mass of $M_{\rm seed} = 100 \, M_{\odot}$ in each newly-virialized halo with a metallicity $Z < Z_{\rm cr} = 10^{-4} Z_\odot$, above which the effects of dust and metal
line cooling allow the gas to fragment, reducing the characteristic mass to values comparable to those found in local stellar populations \citep{Schneider02,Schneider03,Schneider12a,Omukai05}.
\subsubsection{BH accretion}
Once formed, the BH accretes gas from the surrounding medium. The correlation between the mass of central SMBH and the bulge mass or velocity dispersion (\citealt{Magorrian98, Richstone98}, see \citealt{Kormendy13} and references therein) and the small scale on which the accretion takes place, suggest that the accretion onto the central black hole should be fuelled by the cold gas present in the bulge.
The collapse of material onto the central BH in a galaxy is triggered by both merger-driven infall of cold gas, which loses angular momentum due to galaxy encounters, and quiescent accretion, assuming that the
accretion rate is proportional to the cold gas mass in the bulge,
\begin{equation}
\dot{M}_{\rm accr} = \frac{f_{\rm accr} M_{\rm b}}{\tau_{\rm b}},
\end{equation}
\noindent
where, similarly to eq.~(\ref{eq:bulge_eff}), the accretion efficiency is expressed as,
\begin{equation}\label{accretion}
f_{\rm accr} = \beta \, f(\mu),
\end{equation}
\noindent
where $\beta$ is an adjustable free parameter. In our reference model, $\beta = 0.03$ (see Table \ref{Tab:free}),
so that the efficiency of BH accretion is about $1/3$ of the efficiency of star formation in the bulge.
Thus, the mass growth rate is,
\begin{equation}
\dot{M}_{\rm BH} = (1 - \epsilon_r) \dot{M}_{\rm accr},
\end{equation}
\noindent
where the radiative efficiency $\epsilon_r$ is defined as,
\begin{equation}
\epsilon_r = \frac{L_{\rm bol}}{\dot{M}_{\rm accr}\, c^2},
\end{equation}
with $L_{\rm bol}$ being the bolometric luminosity emitted by the accretion process.
At high accretion rates, the \citet{Shakura73} model of BH growth through a thin disk, where all
the heat generated by viscosity is immediately
radiated away, is incorrect. Instead, it is possible to use the optically thick, slim accretion disk solution, that is characterized by low radiative efficiencies \citep{Abramowicz88}. \\
The bolometric luminosity, $L_{\rm bol}$, is computed starting from the numerical solutions of the relativistic slim accretion disk equations obtained by \citet{Sadowski09}, adopting the fit presented by \citet{Madau14}:
\begin{equation}
\frac{L_{\rm bol}}{L_{\rm Edd}} = \ A(a) \left[ \frac{0.985}{\dot{M}_{\rm Edd}/\dot{M}_{\rm accr} + B(a)} + \frac{0.015}{\dot{M}_{\rm Edd}/\dot{M}_{\rm accr} + C(a)} \right] ,
\label{eq:slimdisk}
\end{equation}
\noindent
where the Eddington accretion rate is defined as $ \dot{M}_{\rm Edd} \equiv 16 \, L_{\rm Edd} / c^2 $ and $A(a), B(a)$ and $C(a)$ are functions of the BH spin parameter $a$,
\begin{eqnarray}
A(a) & = & (0.9663 - 0.9292 a)^{-0.5639} ,\\
B(a) & = & (4.627 - 4.445 a)^{-0.5524} ,\\
C(a) & = & (827.3 - 718.1 a)^{-0.7060}.
\end{eqnarray}
\noindent
The slim accretion disk model represented by eq.~(\ref{eq:slimdisk}) predicts that even when the accretion rate is super-Eddington, with $1 \lesssim \dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \lesssim 100$, the disk luminosity remains only mildly super-Eddington, with $L_{\rm bol} \lesssim (2 - 4) \, L_{\rm Edd}$. In fact, in this regime a large fraction of the energy generated by viscosity does not have the time to be radiated away and is instead
advected into the black hole. As a result, the radiative efficiency is very small, with $0.002 \lesssim \epsilon_r \lesssim 0.05$, almost independently of the value of the BH spin parameter (see Figure 1 in \citealt{Madau14}).
Conversely, when the accretion rate is sub-Eddington, the radiative efficiency increases, reaching an almost constant value which depends on the BH spin, as in the standard thin disk solution, with $\epsilon_r \lesssim 0.05$
for $a=0$ and $\epsilon_r \lesssim 0.3$ for $a = 0.98$.
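
For concreteness, eq.~(\ref{eq:slimdisk}) is straightforward to evaluate;
the Python sketch below (illustrative only, shown for $a=0$) reproduces the
limiting behaviours quoted above.
\begin{verbatim}
# Slim-disk fit and implied radiative efficiency
# eps_r = (L/L_Edd) / (16 mdot), with mdot = Mdot_accr / Mdot_Edd.
def slim_disk_luminosity(mdot, a=0.0):
    A = (0.9663 - 0.9292 * a) ** -0.5639
    B = (4.627 - 4.445 * a) ** -0.5524
    C = (827.3 - 718.1 * a) ** -0.7060
    return A * (0.985 / (1.0 / mdot + B) + 0.015 / (1.0 / mdot + C))

for mdot in (0.1, 1.0, 10.0, 100.0):
    l = slim_disk_luminosity(mdot)        # L_bol / L_Edd for a = 0
    print(f"mdot = {mdot:6.1f}: L/L_Edd = {l:5.2f}, "
          f"eps_r = {l / (16.0 * mdot):.3f}")
\end{verbatim}
At $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} = 100$ this gives $L_{\rm bol} \simeq 3.1 \, L_{\rm Edd}$ and $\epsilon_r \simeq 0.002$, while at $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} = 1$ it gives $\epsilon_r \simeq 0.045$.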
Here we do not describe the time evolution of the BH spin parameter and we simply assume that the modulus of the spin vector $a$ is randomly extracted from a uniform distribution
\citep{Tanaka09, Barausse12}.
\subsubsection{BH mergers}
In halo merging events, we assume that the two nuclear BHs coalesce with the
same timescale as their host halos. However, in minor mergers (with $\mu < \mu_{\rm thr} = 1/4$, see Section \ref{diskBulge}) only the larger of the two progenitor BHs can settle in the centre of the new halo potential well,
surviving as a nuclear BH, while the smaller one ends up as a satellite.
During the BH merger, the newly formed BH receives a large center-of-mass recoil due to the net linear momentum carried by the asymmetric gravitational waves emission \citep{Campanelli07, Schnittman08, Baker08}.
The recoil (or kick) velocity of the coalesced binary depends on the mass ratio of the merging pair and on the amplitude and orientation of the spin vectors of the two BHs. Here we follow the parametrization presented
by \citet{Tanaka09} and - for each merger event - we compute the kick velocity as a function of the BH mass ratio assuming the spin vectors to be randomly oriented. The average kick velocities increase
with the mass ratio of the merging pair, $q = M_{\rm BH,1}/M_{\rm BH, 2}$ (with $M_{\rm BH,1} \leq M_{\rm BH, 2}$). For strongly unequal mass mergers, with $0.01 \lesssim q \lesssim 0.1$, we find $\left\langle v_{\rm kick} \right\rangle = 1 - 100$ km/s, whereas
for larger mass ratios, with $0.1 \lesssim q \lesssim 1$, the kicks can be very strong, with velocities $\left\langle v_{\rm kick} \right\rangle = 100 - 1000$ km/s.
We then compare the kick velocity with the circular velocity at the radius of influence of the BH, $R_{\rm BH} = GM_{\rm BH}/v_c^2(R_{\rm BH})$ with $v_c(r)$ given by eq.~(\ref{eq:totcirc}),
and we retain the BH only when $v_{\rm kick} < v_c(R_{\rm BH})$. For $M_{\rm BH}/M_{\rm h} = 10^{-3}$, the retention velocity is $v_c(R_{\rm BH}) \sim 2 v_{\rm vir}$, where $v_{\rm vir}$ is the escape velocity at the virial radius \citep{Yoo04}.
\subsubsection{BH feedback}
There is now strong observational evidence that the energy released by the quasar can drive powerful galaxy-scale outflows
(for recent works see \citealt{Feruglio15, Carniani15, Cresci15} and references therein).
Outflowing gas at velocities up to $v \sim 1400$ km/s traced by [CII] emission has been detected in SDSS J1148
\citep{Maiolino12} with an estimated total mass outflow rate of $1400 \pm 300 \, M_\odot/$yr
that decreases with distance from the quasar,
ranging from a peak value of $\sim 500 \, M_\odot/$yr at $\sim 3$~kpc to $\lesssim 100 \, M_\odot/$yr at $\sim 20$~kpc
\citep{Cicone15}.
In \citet{Valiante12} we show that the quasar-driven mass outflow rate predicted by \textsc{GAMETE/QSOdust},
on the basis of a simple energy-driven wind, is in good agreement with the observations.
Here we follow a similar approach, adopting the so-called ``blast wave'' model,
in which the AGN radiation field can accelerate the gas generating fast supersonic winds which propagates
outwards through an expanding blast wave, pushing out the surrounding medium
(see e.g. \citealt{Cavaliere02, King03, King05, King10, Lapi05, Menci05, Menci08,
Zubovas12, Zubovas14, Costa14} and references therein).
In this framework, the energy released by the AGN that couples with the interstellar gas is estimated as,
\begin{equation}\label{feedback}
\dot{E}_{\rm AGN} = \epsilon_{\rm AGN} \, \epsilon_r \, \dot{M}_{\rm accr} c^2,
\end{equation}
\noindent
where the coupling efficiency $\rm \epsilon_{AGN}$ is an adjustable free parameter. In our reference model $\rm \epsilon_{AGN} = 1.5 \times 10^{-3}$ (see Table \ref{Tab:free}).
If the post shock material does not cool efficiently,
the bubble expands adiabatically and the outflow is energy-driven.
As the blast wave propagates from the center of the halo, it first interacts with the gas of the disk and bulge,
reheating a fraction of cold gas and transferring mass to the diffuse hot phase.
When the shock has propagated beyond the bulge and disk radius, part of the gas mass is ejected from the galaxy, if the binding energy is not enough to hold the material.
The mass outflow rate at a given radius $r$ can be estimated as:
\begin{equation}
\dot{M}_{\rm w, AGN} (r) = 2 \, \epsilon_{\rm AGN} \, \epsilon_r \, \left(\frac{c}{v_c(r)}\right)^2 \dot{M}_{\rm accr},
\label{eq:outflow}
\end{equation}
\noindent
where $v_c$ is the circular velocity of the system given by eq.~(\ref{eq:totcirc}), and we evaluate the above
equation at the bulge, disk and DM halo virial radius.
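
As an illustration, eq.~(\ref{eq:outflow}) can be evaluated as follows; the
accretion rate, circular velocity and radiative efficiency in the example
are assumed values, not model outputs.
\begin{verbatim}
# AGN energy-driven outflow rate:
# Mdot_w = 2 eps_AGN eps_r (c / v_c)^2 Mdot_accr.
c = 2.998e5                     # speed of light in km/s

def outflow_rate(mdot_accr, v_c, eps_r, eps_agn=1.5e-3):
    # mdot_accr in Msun/yr, v_c in km/s
    return 2.0 * eps_agn * eps_r * (c / v_c) ** 2 * mdot_accr

# e.g. 100 Msun/yr accreted at eps_r = 0.005 in a v_c = 300 km/s system
print(f"Mdot_w ~ {outflow_rate(100.0, 300.0, 0.005):.0f} Msun/yr")
# -> ~1500 Msun/yr, the order of the outflow observed in J1148
\end{verbatim}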
A similar description is used to describe the effects of SN-driven winds.
The mass outflow rate beyond a given radius $r$ is given by:
\begin{equation}
\dot{M}_{\rm w, SN} (r) = \frac{2 \, \epsilon_{\rm SN} \, E_{\rm SN}}{v_c(r)^2} \, R_{\rm SN}
\end{equation}
\noindent
where $R_{\rm SN}$ is the rate of SN explosions, $E_{\rm SN}$ is the average SN explosion energy, and $\epsilon_{\rm SN} = 1.6 \times 10^{-3}$
is the SN wind efficiency \citep{Valiante12}. The time-dependent SN rate and explosion energy are computed for each galaxy along the merger tree according to the star
formation rate, age and initial mass function of its stellar population. A detailed description of the chemical evolution model can be found in
\citet{Valiante11, Valiante14} and \citet{deBennassuti14}.
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig1_sfr}
\caption{Redshift evolution of the total SFR (black line) and of Pop III stars (orange line), averaged over the 29 realizations. Shaded areas represent 1-$\sigma$ dispersions and the red arrow indicates the upper limit on the SFR inferred from the IR luminosity (see text for further details).}
\label{figure:sfr}
\end{figure}
\section{Results}
\label{sec:results}
In this section, we present the predicted evolution of the hierarchical assembly of the SMBH and its host galaxy. To explore the dependence of
the results on the population of progenitors and their merger rate, for the same model parameters we have run 30 independent merger trees.
In one merger tree we find that a merger occurs at $z = 6.43$ between two black holes of $M_{\rm 1, BH} = 1.7 \times 10^9 M_{\odot}$ and $M_{\rm 2, BH} = 1.6 \times 10^9 M_{\odot}$,
producing a recoil velocity $\sim 2$ times higher than the retention speed, $v_c(R_{\rm BH})$. The newly formed BH is displaced from the center and it stops accreting gas. For this
reason, we do not consider this to be a viable formation route for a bright quasar like J1148, and we exclude this merger tree from the sample average.
\subsection{The formation of stars and BH seeds}
In Fig.~\ref{figure:sfr}, we show the redshift evolution of the total SFR (summed over all the progenitor galaxies in each simulation) and the separate contribution of Pop~III stars.
We also show the upper limit on the SFR of $\sim 2000 \, M_{\odot}/$yr (Table \ref{Tab1}) inferred from the observed FIR luminosity using the relation
$L_{\rm FIR}/L_{\odot} = 10.84 \times 10^9 \, {\rm SFR}/(M_{\odot}\,{\rm yr}^{-1})$ \citep{Valiante14}. This relation\footnote{The conversion factor between the FIR luminosity and the SFR has been
obtained assuming a 10 - 200 Myr burst of stars with solar metallicity and a Larson IMF with $m_{\rm ch} = 0.35 M_\odot$
\citep{Valiante14}.} is based on the assumption of
starburst dominated dust heating and it provides only an upper limit to the real SFR, due to the non-negligible contribution from the AGN. According to a recent detailed radiative transfer analysis,
the AGN can provide up to 60\% of the total FIR luminosity \citep{Schneider15}, decreasing the SFR by a factor 1.4 - 2.5, in agreement with the average value of $\sim 800 \, M_\odot$/yr predicted
by the reference model.
Due to efficient metal enrichment, Pop~III star formation becomes negligible below $z \sim 20$ and no more BH seeds are formed, consistent with other studies \citep{Madau01,Haiman01,Heger03,Volonteri03,Madau04,Valiante16}. The mass distribution of DM halos which host BH seeds ranges between
$\sim 3 \times 10^6 M_{\odot}$ and $\sim 10^8 M_{\odot}$ with a peak at $M_{\rm h} \sim 10^7 M_{\odot}$, as shown in Fig.~\ref{seed_m}.
Thus, we find that a major fraction ($\sim 90\%$, on average) of BH seeds are formed in DM mini-halos,
where gas cooling could be easily suppressed due to H$_2$ photo-dissociation by Lyman-Werner photons.
The inclusion of this additional feedback effect slows down metal enrichment and extends BH seeds formation to lower
redshifts ($z \geq 15$) and larger DM halos ($\sim 5 \times 10^7 - 10^9 M_{\odot}$). While the evolution of the total BH
mass and BH accretion rate at $z < 15$ is only mildly affected, the birth environment of late-forming seed BHs
(gas rich Ly-$\alpha$ cooling halos) may be more favorable to super-Eddington accretion. Here we do not consider the effect of H$_2$ photo-dissociation,
which we defer to a future study, and we assume that the formation rate of Pop~III stars is limited only by metal enrichment.
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig2_BHseeds}
\caption{Mass distribution of halos hosting a newly formed 100 $M_{\odot}$ BH seed, averaged over the 29 realizations with 1-$\sigma$ error bars.}
\label{seed_m}
\end{figure}
\subsection{BH evolution}
\begin{figure*}
\includegraphics[width=8.4cm]{Fig3_bhevo.eps}
\includegraphics[width=8.4cm]{Fig3_bhar.eps}
\caption{\small{Redshift evolution of the total and mean BH masses and BHARs, averaged over 29 independent merger trees. Shaded areas are 1-$\sigma$ dispersions.
\textit{Top, left panel}: total BH mass (summed over all
BH progenitors at each redshift in each simulation, black line) and the BH mass grown by means of sub-Eddington (magenta line) and super-Eddington (cyan line) accretion events.
\textit{Top, right panel}: total BHAR (black line) and BHAR obtained considering only sub- (magenta line) and super- (cyan line) Eddington accreting BHs.
The mean BH mass and BHAR (averaged over all BH progenitors at each redshift in each simulation) are shown in the bottom panels (left and right, respectively).}}
\label{fig:BHevo}
\end{figure*}
In Fig.~\ref{fig:BHevo} we show the redshift evolution of the BH mass and black hole accretion rate (BHAR) predicted by our reference model. In the top panels, the values are obtained summing over all BH
progenitors present at each redshift in each simulation and then averaged over the 29 realizations. The different lines allow us to separate the contribution to the BH mass and accretion rate achieved by means of sub-Eddington
($\le 16 \, L_{\rm Edd} / c^2$) and super-Eddington ($> 16 \, L_{\rm Edd} / c^2$)
accretion events. By construction, the final BH mass predicted by the reference model is $\sim (3.6 \pm 1.6)\times 10^9 M_\odot$, in agreement with the value inferred from observations
of J1148 (see Table 1). We find that, on average, $\sim 75\%$ of the final SMBH mass grows
by means of super-Eddington gas accretion. This provides the dominant contribution to the total BHAR at all but the smallest redshifts. Although the quantities shown in all panels have been
averaged over 29 merger trees, the redshift evolution of the BHAR appears to be very intermittent, a consequence of rapid depletion/replenishment of the bulge gas reservoir out of which the
BHs accrete.
To gain a better idea of the typical values of BH mass and BHAR predicted by the reference model, in the bottom panels of Fig.~\ref{fig:BHevo} we also show the mean
quantities, averaged over all BH progenitors present at each redshift in each simulation. It is clear that at $20 \lesssim z \lesssim 25$ the mean BH mass rapidly grows from $\sim 100 \, M_\odot$
to $\sim 10^4 \, M_\odot$ by means of super-Eddington gas accretion rates of $10^{-5} M_\odot/{\rm yr} \lesssim {\rm BHAR} \lesssim 10^{-3} M_\odot/{\rm yr}$. Hence, due to early efficient
super-Eddington accretion, the mean BH progenitors at $z \sim 20$ have already achieved a mass comparable to the BH mass predicted by the direct collapse scenario.
This is consistent with recent findings by \citet{Lupi15}, whose high-resolution numerical simulations show that stellar-mass black holes can increase their
mass by 3 orders of magnitude within a few million years while accreting gas at super-Eddington rates in the dense cores of high-$z$ galaxies.
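As a rough consistency check (an order-of-magnitude estimate of ours, not a quantity quoted by the model): the $\sim 50$~Myr of cosmic time elapsing between $z \sim 25$ and $z \sim 20$, combined with the quoted accretion rates, indeed accounts for the mass gained,
\begin{equation*}
\Delta M_{\rm BH} \sim \langle {\rm BHAR} \rangle \, \Delta t \sim 2 \times 10^{-4} \, M_\odot/{\rm yr} \, \times \, 5 \times 10^{7} \, {\rm yr} \sim 10^{4} \, M_\odot .
\end{equation*}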
\begin{figure*}
\includegraphics[width=8.4cm]{Fig4_BHmass_distri}
\includegraphics[width=8.4cm]{Fig4_accr}
\caption{\small{Number of accreting BHs as a function of the black hole mass ({\it left panel}) and the accretion ratio ({\it right panel}), averaged over 29 realizations with 1-$\sigma$ error bars.
The histograms show the number of super- (cyan) and sub- (magenta) Eddington accreting BHs. In each figure, we separately show 4 different redshift intervals
and we give the corresponding number fraction of super-Eddington accreting BHs over the total, $f_{\rm s}$.}}
\label{fig:histo}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm]{Fig5_Edd.eps}
\caption{\small{Redshift evolution of the total BH mass ({\it upper panel}) and BHAR ({\it lower panel}), averaged over 29 independent merger trees. Shaded areas are 1-$\sigma$ dispersions.
In each panel, the orange line indicates the predicted evolution assuming $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd} = 320 \, L_{\rm Edd}/c^2$ and the black line shows the evolution
assuming conventional Eddington-limited accretion, $\dot{M}_{\rm accr} \leq L_{\rm Edd}/c^2$ (see text).}}
\label{fig:BHevoEdd}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{Fig6_BHkicks}
\caption{\small{The average redshift distribution of major mergers (black triangles) and of kicked BHs during BH-BH coalescences in the
model where $\dot{M}_{\rm accr} \leq L_{\rm Edd}/c^2$ (orange points). Each point has been obtained averaging over 29 different merger tree realizations and the
errorbars correspond to the 1-$\sigma$ dispersion.}}
\label{fig:BHkicks}
\end{figure}
\begin{figure*}
\centering
\begin{minipage}{.45\textwidth}
\includegraphics[width=\textwidth]{Fig7_accretionratio}
\end{minipage}
\hspace{0mm}
\begin{minipage}{.45\textwidth}
\includegraphics[width=\textwidth]{Fig7_accretiontau}
\end{minipage}
\caption{\small{Eddington accretion ratio, $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd}$, (\textit{left panel}) and dynamical timescale of the bulge, $\tau_{\rm b}$, (\textit{right panel}) as a
function of the bulge gas - BH mass ratio, $M_{\rm b}/M_{\rm BH}$. Each point represents an accreting BH in any of the 29 merger histories. Sub-Eddington accreting
BHs are shown by magenta triangles, and we separate mildly super-Eddington accreting BHs with $1 \leq \dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \leq 20$ (orange squares) and hyper-Eddington
accreting BHs with $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} > 20$ (cyan circles). The two horizontal dashed lines in the left panel allow us to visually separate these regimes.
The vertical lines in both panels give two reference values of $M_{\rm b}/M_{\rm BH} =0.1$ and $20$ (see text).}}
\label{fig:prop}
\end{figure*}
Fig.~\ref{fig:histo} shows the average distribution of BHs accreting at super- and sub-Eddington rates as a function of the BH mass and Eddington accretion ratio for different redshift intervals.
The reference model predicts that, at $15 \le z \le 25$, almost all BH progenitors accrete at super-Eddington rates. Since the BH masses are still
relatively small, $10^2 \, M_\odot \le M_{\rm BH} \le 10^6 \, M_\odot$, BH accretion rates of $10^{-5} M_\odot/{\rm yr} \lesssim {\rm BHAR} \lesssim 5 \times 10^{-3} M_\odot/{\rm yr}$,
which characterize the early mass growth (see the bottom right panel of Fig.~\ref{fig:BHevo}), correspond to very large accretion ratios, $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \sim 10^2 - 10^4$.
The mass of BH progenitors increases with time and the fractional number of super-Eddington accreting BHs decreases, being $f_{\rm s} \sim 60\%$ at $z \sim 10-15$ and
dropping to $f_{\rm s} \sim 20\%$ at $z < 10$. Because of the larger BH masses, the accretion ratios are smaller and $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} <500$ at $ z < 10$.
For most of the evolution, we find that BH progenitors accrete at highly super-Eddington rates, with $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \gg 10$.
At these large Eddington accretion ratios the
applicability of the adopted slim disk solution is highly debated. In fact, recent general-relativistic magneto-hydrodynamical simulations show that BHs accreting at
$20 < \dot{M}_{\rm accr}/\dot{M}_{\rm Edd} < 200$ develop a disk structure that is still radiatively inefficient, with total luminosities that do not exceed $\sim 10 \, L_{\rm Edd}$, but
the total energy escaping the system can be very large, mostly in the form of thermal and kinetic energy of outflowing gas and Poynting flux \citep{McKinney14,Sadowski13}.
However, \citet{Inayoshi15} have shown that there exist regimes where steady accretion rates larger than 3000 times the Eddington rate can be sustained.
To better assess the impact of these extreme hyper-Eddington accretion events on our results, we have run the same set of simulations discussed so far but artificially imposing an
upper limit of $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd} = 320 \, L_{\rm Edd}/c^2$ to the gas accretion rate. The results are shown in Fig.~\ref{fig:BHevoEdd}. In the same figure, we also show, for comparison, the evolution
predicted assuming Eddington-limited accretion. In order to better compare with previous results, this model has been run
assuming $\dot{M}_{\rm accr} \leq L_{\rm Edd}/c^2$ (a factor of 16 smaller than the definition adopted in the present study, see
Eq.~\ref{eq:slimdisk}), as conventionally adopted in the literature.
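For concreteness, the following minimal Python sketch (our own; it only assumes the definition $\dot{M}_{\rm Edd} = 16 \, L_{\rm Edd}/c^2$ adopted in Eq.~\ref{eq:slimdisk}, and the example BH mass is illustrative) evaluates the accretion caps compared here:
\begin{verbatim}
import math

G      = 6.674e-8   # cm^3 g^-1 s^-2
c      = 2.998e10   # cm/s
m_p    = 1.673e-24  # proton mass [g]
sigmaT = 6.652e-25  # Thomson cross section [cm^2]
M_sun  = 1.989e33   # g
yr     = 3.156e7    # s

def L_edd(M_bh):
    """Eddington luminosity [erg/s] for M_bh in M_sun."""
    return 4 * math.pi * G * (M_bh * M_sun) * m_p * c / sigmaT

def mdot_edd(M_bh):
    """Eddington rate [M_sun/yr], defined as 16 L_Edd / c^2."""
    return 16 * L_edd(M_bh) / c**2 / M_sun * yr

M_bh = 1e6  # illustrative progenitor mass [M_sun]
print(f"Mdot_Edd        = {mdot_edd(M_bh):.3f} M_sun/yr")
print(f"cap 20 Mdot_Edd = {20 * mdot_edd(M_bh):.2f} M_sun/yr")
print(f"cap L_Edd/c^2   = {mdot_edd(M_bh) / 16:.4f} M_sun/yr")
\end{verbatim}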
We find that, even when the Eddington accretion ratio is $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \leq 20$, the
final SMBH mass predicted by the reference model is in good agreement with the observations. The high-redshift evolution of both the total BH mass and the total BHAR, however,
is markedly different from the results shown in Fig.~\ref{fig:BHevo}. At $z > 10$ the BHAR is several orders of magnitudes smaller and the BH mass is correspondingly affected,
being $\sim 10^6 \, M_\odot$ at $z \sim 15$ ($\sim 1/100$ of the total BH mass shown in Fig.~\ref{fig:BHevo} at the same $z$). Due to the smaller gas accretion rates at high redshifts, a larger gas fraction is retained around nuclear BHs at $z < 10$. As a result,
the BH mass has a steeper late growth rate, with short episodes of intense gas accretion reaching $\sim 10^2 \, M_\odot/{\rm yr}$ at $z \sim 7$.
On the contrary, when Eddington-limited gas accretion is assumed, the final BH mass can no longer be reproduced using the reference model. In this case, the gas accretion rates are
too small to trigger fast BH growth at high redshifts. The total BH mass is dominated by the coalescence of BH seeds and its redshift evolution is strongly affected by the lack of BH
seeds at $z < 20$ (see the behaviour of the Pop~III SFR in Fig.~\ref{figure:sfr}) and by kicks received during BH-BH coalescences in major mergers.
Fig.~\ref{fig:BHkicks} shows the evolution of the average number of major mergers and of kicked BHs predicted by
the model. While the average number of major mergers decreases with time, the number of kicked BHs increases at $20 \lesssim z \lesssim 25$ and then decreases at lower $z$.
This is due to the combination of the growing number of BH seeds formed at high $z$ and of
the shallow potential wells of their host mini-halos, which allow the kick velocity of the newly formed BH to easily exceed the retention speed.
Hence, we can conclude that super-Eddington accretion is fundamental for the formation of the first SMBHs at $z > 6$, even when extreme hyper-Eddington accretion
events are not considered.
\subsection{Environmental conditions for Super-Eddington accretion}
Our model enables us to perform a statistical study of the physical properties of the environments where BH progenitors accrete at super-Eddington rates.
The left panel of Fig.~\ref{fig:histo} shows that when both sub- and super-Eddington accreting BHs are present, their BH masses are
comparable, with a tendency of sub-Eddington accreting BHs to have larger masses at lower $z$. Similarly, the occurrence of super-Eddington
accretion is not correlated with the mass of the host dark matter halo, nor with its gas content or metallicity. At each given value of any of
these quantities, in fact, both sub- and super-Eddington accreting BHs are found in the simulations.
The different accretion regimes are more cleanly separated when we plot the Eddington gas accretion ratio as a function of the ratio between the
gaseous bulge and the BH masses (see the left panel of Fig.~\ref{fig:prop}). Most of the BHs that accrete at sub-Eddington rates
are characterized by $M_{\rm b}/M_{\rm BH} < 20$, whereas the number of super-Eddington accreting BHs is negligible when
$M_{\rm b}/M_{\rm BH} < 0.1$. However, when $0.1 \leq M_{\rm b}/M_{\rm BH} \leq 20$ (the region
between the two vertical lines in the plot), the BHs can be characterized by vastly different accretion ratios: a good fraction
of the hyper-Eddington accreting BHs are found in this region of the plot. The larger
accretion rate in these systems is due to the much shorter dynamical time of the bulge. This is shown in the right panel of Fig.~\ref{fig:prop}.
A sequence of increasing bulge dynamical times is evident, with most of the BHs found in bulges with $0.01 \, {\rm Myr} \lesssim \tau_{\rm b} < 1 \, {\rm Myr}$ in hyper-Eddington,
$0.1 \, {\rm Myr} \lesssim \tau_{\rm b} < 20 \, {\rm Myr}$ in mildly super-Eddington, and $5 \, {\rm Myr} \lesssim \tau_{\rm b} < 20 \, {\rm Myr}$ in
sub-Eddington accretion regimes. Indeed, hyper-Eddington accreting BHs are predominantly found in high-$z$ systems, with less massive and more compact
bulges. The figure also shows that super-Eddington accretion requires gas-rich bulges and that, when $M_{\rm b}/M_{\rm BH} < 0.1$, only
sub-Eddington accreting BHs in massive, gas poor bulges are found.
The environmental conditions for super-Eddington accretion that emerge from our statistical study are in good agreement with the results recently
found by \citet{Lupi15}. By means of detailed hydro-dynamical simulations, these authors show that, in order to accrete at super-Eddington
rates, BHs must be embedded in dense gas structures, with masses comparable to or larger than the masses of the accreting BHs.
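The regime boundaries used throughout this section can be condensed into a short sketch (ours; the thresholds are those of Fig.~\ref{fig:prop}, while the bulge-fed scaling $\dot{M}_{\rm accr} \sim f_{\rm accr} \, M_{\rm b}/\tau_{\rm b}$ and the efficiency $f_{\rm accr}$ are only assumptions suggested by the discussion above, not the model's actual prescription):
\begin{verbatim}
def regime(mdot_ratio):
    """Accretion regime from the Eddington ratio (Fig. 7 thresholds)."""
    if mdot_ratio < 1:
        return "sub-Eddington"
    if mdot_ratio <= 20:
        return "mildly super-Eddington"
    return "hyper-Eddington"

def mdot_accr(M_b, tau_b, f_accr=0.01):
    """Assumed bulge-fed rate [M_sun/yr]; M_b in M_sun, tau_b in yr."""
    return f_accr * M_b / tau_b

# Example: compact gas-rich bulge (tau_b ~ 0.1 Myr) around a 1e4 M_sun BH
mdot = mdot_accr(M_b=1e7, tau_b=1e5)
mdot_edd = 0.036 * (1e4 / 1e6)  # 16 L_Edd/c^2, rescaled from the sketch above
print(regime(mdot / mdot_edd))  # -> hyper-Eddington
\end{verbatim}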
\subsection{BH-driven outflow}
Outflowing cold gas in J1148, traced by [C {\small II}] emission, was first detected by \citet{Maiolino12} with the
IRAM Plateau de Bure Interferometer, and then confirmed with high-resolution follow-up observations by \citet{Cicone15}.
The outflow has a complex morphology and spatial extent, reaching a maximum projected
radius of 30~kpc. The estimated mass outflow rate and velocity are shown in Fig.~\ref{fig:outflow} as a function of the projected distance from the nucleus.
In the same figure, we also show the predictions of the reference model. Following eq.~(\ref{eq:outflow}),
the outflow velocity is computed as the circular
velocity at the corresponding radius, $v_{\rm w, AGN}(r) = v_{\rm c}(r)$, and
we estimate the mass outflow rate accounting for the delay
$\tau_{\rm dyn} = r/v_{\rm w, AGN}$ between the BH energy release and the observation.
Due to the large variability of the BH luminosity, the 1-$\sigma$ dispersion among the
different merger trees of the predicted average mass outflow rate (gray shaded region in the upper panel)
is consistent with the data. However, the average values (black solid line) are larger than
observed and show a different radial dependence, especially at $r > 20$\,kpc.
The bottom panel shows that the observed outflow travels at a velocity
consistent with the circular velocity of the host system. There are a few radii
where the observed values are larger, probably reflecting a stronger coupling between the
energy and momentum injected by the AGN and the surrounding gas. Yet, even if we take the
observed values of outflow velocities at each radius to estimate $\tau_{\rm dyn}$ and $\dot{M}_{\rm w, AGN}$
(see the blue dashed line in the upper panel with the cyan shaded region), the resulting
mean mass outflow rate is still larger than observed. Our description of an energy-driven
wind with constant coupling efficiency may not be adequate to capture the complex dynamics of this
massive outflow. However, \citet{Cicone15} stress that the data should be considered as
a lower limit on the total mass outflow rate, because it accounts only for the atomic gas phase of
the outflow, while a significant amount of the outflowing mass may be in the molecular phase.
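A compact reconstruction of this estimate (our own sketch of an energy-driven wind with constant coupling; the efficiency and the example luminosity are illustrative placeholders, not the values used in the model) reads:
\begin{verbatim}
kpc, km, Myr = 3.086e21, 1e5, 3.156e13  # cgs conversions
M_sun, yr = 1.989e33, 3.156e7

def outflow_rate(L_agn, v_w, eps_w=2.5e-3):
    """Energy-driven wind, 0.5 Mdot_w v_w^2 = eps_w L_AGN [M_sun/yr]."""
    return 2 * eps_w * L_agn / v_w**2 / M_sun * yr

r, v_w = 10 * kpc, 300 * km  # v_w set to the circular velocity at r
print(f"tau_dyn = {r / v_w / Myr:.0f} Myr")      # delay r / v_w
print(f"Mdot_w  = {outflow_rate(1e46, v_w):.0f} M_sun/yr")
\end{verbatim}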
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig8_outflow}
\caption{The mass outflow rate ({\it upper panel}) and velocity ({\it lower panel}) as a function of the projected distance from the
nucleus. \citet{Cicone15} observations are shown with red data points and the predictions of the reference model are shown by
black solid lines with shaded gray
regions. The blue dashed line in the upper panel (with the cyan dashed region) shows the predicted outflow rate that we would infer
using the BH luminosity predicted by the reference model and the observed outflow velocities (see text). The lines show the average
among the 29 merger trees and the shaded regions are the 1-$\sigma$ dispersion.}
\label{fig:outflow}
\end{figure}
\subsection{The coevolution of BHs and their host galaxies}
It is interesting to explore the implications of our results for the co-evolution of nuclear BHs and their host galaxies. In Fig.~\ref{fig:mrel} we show the evolutionary path
(from the bottom left to the top right) in the mean BH mass - stellar bulge mass ($\langle m_{\rm BH}\rangle$ - $\langle m_{\rm b}^{\star}\rangle$) plane predicted by the reference model (black solid line) and by the model with
$\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ (orange solid line). In each simulation, we consider the mean values among all the SMBH progenitors
and their hosts present at each redshift, and then we average over the 29 merger trees.
For comparison, we also show in the same figure the observational data and the empirical fit (gray data points and dashed line) for local galaxies provided by \citet{Sani11},
and the more recent scaling relation inferred for local ellipticals and classical bulges by \citet[][solid green line and shaded region]{Kormendy13}.
In the reference model, BH progenitors of the first SMBHs at $z > 6$ follow a symbiotic evolution, with a small offset with respect to the observed local scaling relation.
When $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$, the different evolution at high-$z$ is reflected in a steeper relation between the mean BH mass and the stellar
bulge, very close to that predicted by \citet{Kormendy13}. The difference between the models becomes negligible when $\langle m_{\rm BH}\rangle \, >10^7 \, M_\odot$ ($\langle m_{\rm b}^{\star}\rangle \, > 10^9 \, M_\odot$), which occurs - on average - at $ z \sim 10$.
When the average BH mass has reached its value of $(3.6 \pm 1.6) \times 10^9 M_{\odot}$ at $z = 6.4$, the host galaxy has already grown to a bulge (total) stellar mass of
$2.7 \, (3.2) \times 10^{11} M_{\odot}$. Hence, we predict a final average BH-to-bulge (total) stellar mass ratio of $M_{\rm BH}/M_{\rm star} = 0.013
\, (0.011)$, well within the scatter of the relations inferred from various observational studies of massive local galaxies \citep[][and references therein]{Marconi03, Sani11, Kormendy13}. However, this ratio is $\sim 25$ times smaller than what is inferred from observations of J1148
(red data point). Following the procedure commonly applied to high-$z$ bright QSOs, the stellar mass is computed as $M_{\rm star} = M_{\rm dyn} - M_{\rm H_2}$, with $M_{\rm dyn}$ and
$M_{\rm H_2}$ inferred from CO observations (see Table 1, \citealt{Walter04,Wang10}). Similar results obtained for a larger sample of $z > 6$ QSOs
have suggested that the first SMBHs grow faster than their host galaxies (\citealt{Wang10,Wang13,Venemans15}; see however \citealt{Willott15}).
As suggested by \citet{Valiante14}, observations of high-$z$ QSOs are sensitive to the innermost $2.5 - 3$~kpc and may be missing a significant fraction of the galaxy.
This is also supported by recent observations of J1148, which show extended [C {\small II}] 158 $\mu$m emission and far-infrared (FIR) continuum,
likely associated with cold gas and star formation on scales of $\sim 10 - 20$~kpc \citep{Cicone15}.
Indeed, the mean bulge effective radius at $z = 6.4$ predicted by the model is $R_{\rm eff} = 7.3 \pm 0.8$~kpc, in
good agreement with observations of local galaxies hosting the largest BHs (see Fig.~\ref{fig:reff}). When we restrict to the innermost 2.5 kpc, we
find a mean bulge stellar mass of $(3.9 \pm 0.2)\times 10^{10} M_\odot$, much closer to the observation (see the arrow and black data point in Fig.~\ref{fig:mrel}).
The same is true if we consider the mean gas mass within 2.5 kpc, which we predict to be $M_{\rm H_2} = (2.0 \pm 0.9) \times 10^{10} \, M_\odot$, in good agreement with
the observed value (see Table 1).
Finally, the reference model predicts a mean dust mass at $z = 6.4$ of $M_{\rm dust} = (3.6 \pm 0.9)\times 10^8\, M_\odot$, in good agreement with the value inferred
from the FIR luminosity. This result has been obtained using the chemical evolution module developed by \citet{Valiante11,Valiante14} and \citet{deBennassuti14},
which includes dust processing in a two-phase ISM. Hence, consistent with previous findings \citep{Valiante11,Valiante14}, we
find that the large dust mass that has enriched the ISM of the host galaxy is the result of a large stellar component, and that the apparent tension with the
observed dynamical mass - the so-called {\it stellar mass crisis} - is at least partly due to the small spatial extent of the observations. We refer the interested
readers to \citet{Valiante14} for an extended discussion on this point.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig9_coevolution}
\caption{Redshift evolution of the mean black hole mass as a function of the mean bulge stellar mass in SMBH progenitors for the reference model (black solid line) and
the model with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ (orange solid line). Gray circles are data for local galaxies, with the empirical fit (gray dashed line) provided by
\citet{Sani11}. The solid green line with shaded region is the scaling relation derived by \citet{Kormendy13}. The red point represents the black hole and stellar mass within a region of 2.5 kpc inferred from observations of J1148 (Table \ref{Tab1}).
The model predictions are averaged over 29 merger tree realizations and the errorbars show the 1-$\sigma$ dispersion for both mean BH and bulge stellar mass, at a few selected redshifts along the averaged merger histories.
The arrow illustrates the reduction in stellar mass if we restrict to the central 2.5 kpc region (black data point, see text).}
\label{fig:mrel}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig10_Revo}
\caption{Redshift evolution of the mean black hole mass as a function of the mean bulge effective radius of the host galaxy, averaged over 29 merger tree realizations with 1-$\sigma$ errorbars at a few selected redshifts,
for the reference model (black solid line), and the model with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ (orange solid line).
Gray circles represent data for local galaxies, with the empirical fit (gray dashed line) given by \citet{Sani11}.}
\label{fig:reff}
\end{figure}
\section{Discussion and conclusions}
\label{sec:discussion}
The data-constrained model GAMETE/QSOdust allows us to explore a large number of formation histories of a given quasar, in the present case J1148 at $z = 6.4$,
reproducing the observations of the quasar and its host galaxy. With the adjustable free parameters that we have selected, described in Table 2, the model reproduces
the physical quantities listed in Table 1.
Hence, the properties that we predict for the host galaxy of J1148 (SFR, dust mass, gas and stellar masses) are consistent with previous results
obtained by \citet{Valiante14, Valiante16} for the same quasar.
With respect to \citet{Valiante11, Valiante14, Valiante16}, the current version of \textsc{GAMETE/QSOdust} enables us to ({\it i}) follow the formation and
evolution of the disk and bulge in each progenitor galaxy, and ({\it ii}) remove the constraint of Eddington-limited BH accretion.
In particular, \citet{Valiante16} find that the formation of a few (between 3 and 30 in the
reference model) heavy BH seeds with masses $M_{\rm BH} = 10^5 \, M_\odot$ enables the Eddington-limited growth of a SMBH by $z = 6.4$.
This conclusion heavily depends on the occurrence - among the progenitors - of Lyman-$\alpha$ cooling halos where gas cooling is suppressed
by the low metallicity and the strong Lyman-Werner background \citep{Valiante16}. This ``head start'' requires favourable conditions, which are easily
erased by the joint interplay of chemical, radiative and mechanical feedback effects.
Here we have explored the alternative scenario where the BHs can grow through a radiatively inefficient slim disk at super-Eddington rates.
This condition is easily met by light BH seeds formed in gas-rich systems at high redshifts.
In the model presented in this work, we plant light BH seeds in newly virialized halos above redshift $z \sim 20$, before the effects of chemical feedback inhibit the formation of metal poor ($Z<Z_{\rm cr}$) stars.
With this seeding prescription, we find that:
\begin{itemize}
\item On average, $\sim 80\%$ of the SMBH mass of J1148 is provided by super-Eddington gas accretion ($>16 \, L_{\rm Edd} / c^2$).
This represents the dominant contribution to BH growth down to $z \sim$ 10;
\item Due to fast and efficient super-critical accretion, the mean BH mass at redshift $z \sim 20$ is $\gtrsim 10^4 \, M_\odot$, comparable to that predicted for heavy BH seeds formed by direct collapse;
\item More than $90\%$ of BH progenitors accrete at super-Eddington rates at $15<z<25$ in dense, gas-rich environments. At these redshifts, hyper-Eddington accretion events, with $\dot{M}_{\rm accr}/\dot{M}_{\rm Edd} \sim 10^2-10^4$, are common;
\item The observed SMBH mass of J1148 at $z = 6.4$ can be reproduced even adopting a maximum super-Eddington accretion rate of $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$, showing that hyper-critical accretion
is not required;
\item BH progenitors of the final SMBH evolve in symbiosis with their host galaxies. The predicted AGN-driven mass outflow rate at $z = 6.4$ shows a radial profile that is broadly consistent with the lower limits
inferred from [C {\small II}] observations by \citet{Cicone15};
\item The predicted final BH-to-bulge (total) stellar mass ratio,
$M_{\rm BH}/M_{\rm star} = 0.013 \, (0.011)$, is within the scatter of the observed local relation
and a factor of $\sim 25$ lower than inferred from dynamical mass observations of J1148.
The discrepancy is significantly reduced if we account
for the mass within 2.5\,kpc from the nucleus, the region targeted by CO data. At this radius,
the mean bulge stellar mass is $(3.9 \pm 0.2) \times 10^{10} \, M_\odot$, much closer to the
observational value.
\end{itemize}
As a consequence of the lower gas accretion rates,
the average BH mass predicted by \citet{Valiante16} is much smaller than in our reference model, at all but the latest redshifts
(see their Fig.~3).
This difference is reduced when we impose that $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$.
In this case, the average BH progenitor mass at $z \sim 15$ is comparable in the two models. However, while in \citet{Valiante16} the mass
growth is triggered by the formation of heavy seeds, in our model this is achieved by mildly super-Eddington accretion on
light BH seeds.
The progenitors of SMBHs at $z > 6$ experience the {\it strong} form of coevolution
defined by \citet{Kormendy13}, where galaxies affect BH growth by controlling BH feeding and merging,
and BHs control galaxy properties via AGN feedback.
In fact, while the small radiative efficiency of super-Eddington accreting BHs
is indispensable to limit the effects of AGN feedback \citep{Lupi15}, at $z > 10$
the BHs shine at a few Eddington luminosities with a noticeable effect
on the cold gas content of their host galaxies. At lower $z$, an increasing fraction of BH
progenitors accrete at sub-Eddington rates, but with larger radiative efficiencies. As a result of the larger BH
mass and BH accretion rates, AGN-driven winds at $z < 10$ power strong galaxy-scale outflows and
suppress star formation, leading to the down-turn of the total SFR shown in Fig.~\ref{figure:sfr}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{Fig11_Lbol}
\caption{Mean bolometric luminosity of BH progenitors as a function of the mean BH mass predicted by the
reference model (black solid line) and by the model with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ (yellow solid line).
For each model, the lines show the average among the 29 merger trees and the shaded regions are the 1-$\sigma$ dispersion.
The data points show the observational values of the two quasars SDSS J1148 (red circle) and ULAS J1120 (green square).
The diagonal dashed lines show some reference values of the luminosity in units of the Eddington luminosity.}
\label{fig:Lbol}
\end{figure}
In Fig.~\ref{fig:Lbol} we show the average bolometric luminosity as a function of the average BH mass
of SMBH progenitors for the reference model (black solid line) and for the model
with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ (yellow solid line). The model predictions are compared
with observations of SDSS J1148 ($z = 6.4$) and of the most distant quasar
currently known, ULAS J1120 at $z=7.1$ \citep{Mortlock11}. The errorbars on the bolometric
luminosities account for the observational uncertainties on the flux at $1450 \, {\AA}$ and on the bolometric
corrections \citep{Richards06}.
Some reference values of the luminosity in units of the Eddington luminosity are shown by the
diagonal dashed lines. The difference between the two models reflects the different BH accretion
history: in the model with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$ the first BH progenitors
accrete at a lower rate, saving cold gas for the latest evolutionary phases. As a result,
for BH progenitors with $M_{\rm BH} \lesssim 10^8 \, M_\odot$, the mean luminosity
predicted by the reference model is always super-Eddington (with $L_{\rm bol} > 10\, L_{\rm Edd}$
when $M_{\rm BH} \lesssim 10^6 \, M_\odot$), whereas in the model with $\dot{M}_{\rm accr} \leq 20 \, \dot{M}_{\rm Edd}$
the mean luminosity is always $0.1 \, L_{\rm Edd} < L_{\rm bol} < L_{\rm Edd}$. However, in the latest
evolutionary phases, when $M_{\rm BH} > 10^8 \, M_\odot$, this trend is reversed.
Given the observational uncertainties and the large variability among different merger trees, the luminosity of J1148
is consistent with the model predictions. Interestingly, the data point of ULAS J1120 also lies within the 1-$\sigma$
dispersion. Indeed, we find that $\sim 20\%$ of BH progenitors at $z = 7.1$ have luminosities and masses compatible
with the observed values of ULAS J1120, indicating that this quasar may be one of the progenitors of SDSS J1148 at $z = 6.4$.
\section*{Acknowledgments}
We thank Valeria Ferrari, Nicola Menci and Marta Volonteri for useful discussions and comments.
The research leading to these results has received funding from the European Research Council under the European
Union’s Seventh Framework Programme (FP/2007-2013)~/~ERC Grant Agreement n. 306476.
\section{Introduction}
\label{sec:intro}
Wireless communication has always been more vulnerable to attacks than its
wired counterpart. The fact that wireless signals are broadcast means they are
more easily eavesdropped. This weakness has been
exploited in many wireless networks~\cite{crack_bluetooth,
sheldon2012insecurity, crack_wepwpa}. Even more recent security protocols
like WPA2-PSK have been successfully compromised by snooping attacks~\cite{crack_wpa2psk, nakhila2015} via simple tools~\cite{aircrack}.
\yedit{Despite existing encryption,
one can still infer the specific sources of traffic by observing just packet sizes
and counts in data transmissions~\cite{pet-http, marc06inferhttp}.}
\yedit{While we continue to improve encryption algorithms,
an equally promising direction}
is to use wireless beamforming to defend against
eavesdroppers at the physical layer. Beamforming allows a transmitter (TX) to
send a highly focused, directional signal towards a target receiver (RX), so
that nearby attackers not directly between the two endpoints cannot capture
the transmission. The narrow beam is built by leveraging signal
cancellations among multiple antennas in a phased array\footnote{ We do not
consider horn antennas as they are bulky, expensive, and can only be
rotated mechanically. They are not suitable for our application scenarios.}, and is most easily
built on \textit{millimeter-wave} (mmWave) transmitters~\cite{mmwave_secure}.
For example, 60GHz phased arrays could fit on small devices like smartphones,
and can generate highly focused beams
({\em e.g.,\ } 3$^\circ$ using 32$\times$32 antennas) while achieving Gbps throughput.
While earlier applications focused on short-range indoor applications, {\em e.g.,\ }
home routers~\cite{tplink11ad} and wireless virtual
reality headsets~\cite{htcvive60g}, new applications of mmWave leverage its
high directionality and throughput for long-range communication. Many such
applications have already been deployed. Facebook has deployed a mesh
network using 60GHz communications in downtown San
Jose~\cite{facebook_sanjose}. Google is considering replacing wired fiber
with mmWave to reduce cost~\cite{googlefiber}. Academics have proposed
picocell networks using mmWave signals towards next-generation 5G
networks~\cite{marzi_globecom15, picocell_tracking, zhu14a}.
With a growing number of deployed networks and applications, understanding
physical properties of mmWave is critical. One under-studied
aspect of directional transmissions is the artifact of array {\em side
lobes}. Fig.~\ref{fig:sidelobe_example} shows an example
of the series of side lobes pointing in different directions.
Side lobes are the result of imperfect signal cancellation among antenna
elements. While weaker than the main lobe, side lobes carry the same
information, and can be exploited by eavesdroppers to recover the
transmission. As physical imperfections, they are very difficult to eliminate.
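As a quick illustration of where side lobes come from (a self-contained sketch of ours, not the testbed's beamforming code), the array factor of even an ideal $N$-element uniform linear array exhibits a first side lobe only about 13dB below the main lobe:
\begin{verbatim}
import numpy as np

N, d, theta0 = 16, 0.5, 0.0  # elements, spacing (wavelengths), steer angle
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
psi = 2 * np.pi * d * (np.sin(theta) - np.sin(theta0))
af = np.abs(np.exp(1j * np.outer(np.arange(N), psi)).sum(axis=0)) / N
af_db = 20 * np.log10(np.maximum(af, 1e-6))
# Peak gain outside the main lobe: the first side lobe (~ -13.3 dB)
print(f"first side-lobe level: {af_db[np.abs(theta) > 0.13].max():.1f} dB")
\end{verbatim}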
In this paper, we conduct the first empirical study of the security
properties of mmWave communications against side-lobe eavesdropping attacks.
\yedit{While theoretical studies have shown the problem of side-lobe leakage~\cite{kim2017analysis}, it has never been validated using network measurements, especially for long-range
communications.}
We use a commercial 60GHz testbed from Facebook's Terragraph project~\cite{terragraph} to evaluate the
effectiveness of side-lobe eavesdropping in both indoor and outdoor
scenarios. Specifically, we answer three key questions:
\begin{packed_itemize}
\vspace{-0.06in}
\item \para{How severe is mmWave side-lobe eavesdropping?
(\S\ref{sec:measurement})} We observe that side-lobe eavesdropping is
incredibly effective in both indoor and outdoor scenarios. An attacker
can recover transmissions in a large area with a high success rate (details
below). \yedit{Particularly for outdoor scenarios,
most eavesdropping areas
are connected, and the attacker can move freely and launch stealthy attacks.}
\begin{table}[h]
\vspace{-0.08in}
\raggedleft
\small
\begin{tabular}{|l|C{1.16cm}|C{1.16cm}|C{1.16cm}|}
\hline
\multirow{2}{*}{Eavesdropping Area ($m^2$)} &
\multicolumn{3}{c|}{Attacker's Packet Success Rate} \\
\cline{2-4}
& $>$10\% & $>$50\% & $>$95\% \\
\hline
Mesh & 79 & 64.6 & 55 \\
\hline
Picocell & 109 & 88.6 & 54 \\
\hline
Peer-to-Peer & 16.6 & 15.7 & 13.1 \\
\hline
\end{tabular}
\vspace{-0.1in}
\end{table}
\item \para{Can better mmWave hardware improve security?
(\S\ref{sec:simulation})} We find that improved hardware can only
reduce the impact of the eavesdropping attack, but not fully defend
against it. Eavesdropping side lobes is still possible even after
removing hardware artifacts from antennas and deploying more antenna
elements.
\item \para{Are existing defenses effective against side-lobe eavesdrop
attacks? (\S\ref{sec:existing})} Although existing defenses show
promising results against single-device eavesdroppers, they either impose
impractical hardware requirements, or remain vulnerable against more
advanced attackers, {\em e.g.,\ } those with multiple devices.
\end{packed_itemize}
\begin{figure*}[ht]
\begin{minipage}{0.29\textwidth}
\vspace{-0.08in}
\centering
\includegraphics[width=1\textwidth]{figs/sidelobe.eps}
\vspace{-0.28in}
\caption{Example of side lobes of a 16$\times$8 array (horizontal plane).}
\label{fig:sidelobe_example}
\end{minipage}
\hfill
\begin{minipage}{0.685\textwidth}
\centering
\includegraphics[width=1\textwidth]{figs/scenario.eps}
\vspace{-0.22in}
\caption{Illustration of the three application scenarios in which we test the
eavesdropping attack. An attacker could eavesdrop through side lobes (blue)
to decode information transmitted in the main lobe (red).}
\label{fig:scenarios}
\end{minipage}
\vspace{-0.1in}
\end{figure*}
\vspace{-0.15in}
\section{Background}
\label{sec:background}
To provide context for later study, we first describe
the adversarial model
and then our measurement methodology.
\para{Adversarial Model.} We consider {\em passive} eavesdropping, where an
attacker listens to side-lobe signals and recovers packet header or payload.
The
attacker stays hidden from its victim TX and RX, but is unable to manipulate
the communication between the victims. Without knowing the attacker's physical
location, victims cannot apply conventional defenses like
null-forming\footnote{If TX knows the attacker's location, it can change its
radiation pattern to nullify signals towards that
location to avoid attacks~\cite{nullforming}.}.
We do not consider eavesdropping attacks on the main lobe of the
transmission. Such an attack would affect the
communication between TX and RX, as the attacker has to stay inside the main
lobe or use a reflector, and thus can be
detected~\cite{steinmetzer2015eavesdropping}.
Finally, we assume the attacker has one or more synchronized devices as
powerful as the victim's hardware. The attacker knows the victim's location
and hardware configuration\footnote{This information is often publicly
available, or could be derived from simple techniques, {\em e.g.,\ } device
localization.}. The attacker and his device(s) are free to move around the
victims.
\para{Application Scenarios.}
We consider three practical scenarios where
mmWave signals are commonly used:
mesh networks~\cite{facebook_sanjose}, picocell networks~\cite{zhu14a},
and indoor peer-to-peer
transmissions~\cite{tplink11ad,htcvive60g}.
Fig.~\ref{fig:scenarios} shows an illustration of the three.
\yedit{mmWave signals are commonly considered for indoor peer-to-peer
scenarios (Fig.~\ref{fig:scenarios}(c)), {\em e.g.,\ }
virtual reality~\cite{htcvive60g, abari17enabling}
and wireless display~\cite{delldock}. Here TX and RX are within very short
range ($\leq$10m) and often at the same height ($\sim$1m).}
As mmWave signals degrade much faster than lower frequency signals
in the air, it is less known that
they can also be used outdoors for long-range communications
(20--200m). For example, Facebook has deployed a mesh network
in downtown San Jose~\cite{facebook_sanjose},
supporting up to 200m link using 60GHz phased array radios\footnote{Compared
to horn antennas, phased arrays offer robust real-time
link adaptation by eliminating mechanical steering.}. Researchers~\cite{marzi_globecom15, zhu14a} also propose picocell
networks using 60GHz signals.
In both scenarios, TX is mounted higher than human height, {\em e.g.,\ } 6m.
Depending on the scenario, RX is either mounted at a similar height or
on the ground, shown in Fig.~\ref{fig:scenarios}(a) and (b), respectively.
\begin{figure}[t]
\centering
\includegraphics[width=0.28\textwidth]{figs/testbed.eps}
\vspace{-0.1in}
\caption{Our 60GHz testbed with 16$\times$8 antenna array.}
\label{fig:testbed}
\end{figure}
\para{Measurement Hardware.}
Our testbed consists of three identical 60GHz radios. We use them as TX, RX,
and the attacker. Each radio has a 16$\times$8 rectangular phased array
(Fig.~\ref{fig:testbed}) and follows the 802.11ad single-carrier
standard for 60GHz communication~\cite{802.11ad}.
Our radios are designed for outdoor mesh network scenario with a
maximum Equivalent Isotropically Radiated Power (EIRP) of 32dBm,
supporting 1Gbps (QPSK) transmissions at 200m range (line-of-sight).
But we could re-purpose these radios for picocell and
peer-to-peer scenarios as well, by lowering the EIRP.
Each receiving radio can report received signal-to-noise-ratio (SNR)
of each packet in real time.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{
Scenario
} & \multicolumn{2}{c|}{TX} & \multicolumn{3}{c|}{RX} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Examined\\Area\\(m$^2$)\end{tabular}} \\ \cline{2-6}
& \begin{tabular}[c]{@{}c@{}}EIRP \\ (dBm)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Height \\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distance \\ to TX\\(m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Height \\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Max\\Throughput\\ (Gbps)\end{tabular} & \\
\hline
Mesh & 32 & 6 & 200 & 6 & 1.0 & 10$\times$20 \\
\hline
Picocell & 32 & 6 & 50 & 1 & 1.5 & 10$\times$20 \\
\hline
Peer-to-Peer & 23 & 1 & 10 & 1 & 1.5 & 4$\times$5 \\
\hline
\end{tabular}
}
\vspace{-0.1in}
\caption{Detailed experiment setup and configurations.}
\label{tab:setup}
\vspace{-0.1in}
\end{table}
\para{Measurement Setup.}
We place our testbed radios at different heights and distances apart to
emulate the three application scenarios. In all scenarios, TX sends
32KB TCP packets to RX at 1Gbps by default.
Equipment placement details and specifications are listed in
Table~\ref{tab:setup}.
In particular for (c) peer-to-peer,
we choose 23dBm EIRP, the same as the commodity 60GHz chipset from
Wilocity~\protect\cite{wilocity}. Given TX's EIRP and
the distance from victim RX to TX, RX can at best communicate
with TX at 1Gbps,
1.5Gbps, and 1.5Gbps with less than 5\% packet loss
in mesh, picocell, and peer-to-peer networks,
respectively. Further reducing TX power will affect RX's performance.
During transmission, we move the attacker radio around TX to eavesdrop
side lobes at different locations. We grid the area around TX
(200$m^2$ for two outdoor scenarios and 20$m^2$ for the indoor scenario)
into 816 (34$\times$24) rectangles. In each grid cell, we point the attacker
radio at TX and
eavesdrop on the transmission for 30$s$. Our testbed could record 100k packet
samples and 30 SNR values in each grid. In each application
scenario, we collected a total of 80 million packets and 24k SNR measurements.
\vspace{-0.1in}
\section{Effectiveness of Eavesdropping}
\label{sec:measurement}
From our collected measurements, we now present the severity of side-lobe
eavesdropping under three mmWave network scenarios.
We use the following two metrics to quantify
the effectiveness of side-lobe eavesdropping.
\begin{packed_itemize} \vspace{-0.1in}
\item {\em Packet success rate (PSR)}
measures the percentage of packets the attacker could successfully
retrieve from eavesdropping through side lobes, calculated from
100k packets per location. When the attacker's PSR
is no less than that of the victim RX ($>$95\% in our
experiments), we consider it to be a {\em full} attack.
\item {\em Eavesdropping area}
measures the area where the attacker can achieve PSR higher than a given
threshold by eavesdropping on side-lobe signals (a short sketch of this computation is given below).
\end{packed_itemize}
\vspace{-0.06in}
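The computation behind the second metric is straightforward (a sketch of ours using the grid layout of \S\ref{sec:background}; the random values below merely stand in for measured PSRs):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
grid_psr = rng.random((34, 24))    # placeholder for per-cell measured PSR
cell_area = (10 * 20) / (34 * 24)  # outdoor grid: 200 m^2 over 816 cells

def eavesdropping_area(grid_psr, threshold, cell_area):
    """Total area of cells where the attacker's PSR exceeds threshold."""
    return np.count_nonzero(grid_psr > threshold) * cell_area

for thr in (0.10, 0.50, 0.95):
    print(f"PSR > {thr:.0%}: "
          f"{eavesdropping_area(grid_psr, thr, cell_area):.1f} m^2")
\end{verbatim}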
\begin{figure*}[t]
\centering
\mbox{
\includegraphics[width=0.33\textwidth]{figs/map_mesh_psr.eps}
\hfill
\includegraphics[width=0.33\textwidth]{figs/map_pico_psr.eps}
\hfill
\includegraphics[width=0.33\textwidth]{figs/map_indoor_psr.eps}
}
\mbox{
\subfigure[Mesh Network]{
\includegraphics[width=0.315\textwidth]{figs/map_mesh_area_vs_psr.eps}
\label{fig:sidelobe_mesh}
}
\hspace{0.01in}
\subfigure[Picocell Network]{
\includegraphics[width=0.315\textwidth]{figs/map_pico_area_vs_psr.eps}
\label{fig:sidelobe_pico}
}
\hspace{0.01in}
\subfigure[Peer-to-Peer Network]{
\includegraphics[width=0.315\textwidth]{figs/map_indoor_area_vs_psr.eps}
\label{fig:sidelobe_indoor}
}
}
\vspace{-0.1in}
\caption{Effectiveness of side-lobe eavesdropping under three mmWave
scenarios.
For each scenario, the top plot shows attacker's packet success rate (PSR) at 1Gbps at different \yedit{side-lobe} locations.
TX is at (0, 0) and beams towards
RX along x-axis. The bottom one shows how the eavesdropping area changes with PSR thresholds at different link rates.
}
\label{fig:sidelobe}
\vspace{-0.1in}
\end{figure*}
\vspace{-0.08in}
\subsection{Mesh Network}
We begin by showing the effectiveness of eavesdropping in an outdoor mesh network.
During transmission, the main lobe points towards RX and side lobes point
towards the ground. The eavesdropper moves freely on the ground
and searches for locations where he could hear side-lobe signals.
Fig.~\ref{fig:sidelobe_mesh} shows the attacker's PSR at different
locations, and how the eavesdropping area changes.
From the heatmap in the figure,
we observe that the attack proves very effective. In 79$m^2$
out of the 200$m^2$ examined area, the attacker could successfully decode
\textit{at least one} packet.
\jedit{
Aggregated, the eavesdropping area accounts for 39.5\% of the entire area.
Large connected portions allow an attacker to act more stealthily
by moving freely through areas as large as 23$m^2$
rather than staying in one location.
}
Note that all eavesdropping areas center along TX's transmission
direction (along the x-axis). This allows the attacker to easily predict
vulnerable areas to launch attacks, because side lobes
along the x-axis are strong enough for eavesdropping, while other side lobes
pointing away from the x-axis suffer higher signal loss ($>$13$dB$)
and become too weak.
We further investigated how the eavesdropping area changes given different PSR
thresholds. As shown in the lower figure in Fig.~\ref{fig:sidelobe_mesh}, the
eavesdropping area reduces very slowly as we increase the PSR threshold.
When requiring the attacker to achieve $>$50\% PSR, the eavesdropping area
reduces only by 19\%, down to 64$m^2$. Moreover, in a 55$m^2$ area
(69.6\% of the total eavesdropping area), the attacker could achieve
$>$95\% PSR, the same performance achieved by RX. This further
shows the incredible effectiveness of an eavesdropping attack in a mesh
network scenario.
Our measurements for the mesh network cover a 200$m^2$ area and already show
the severity of side-lobe eavesdropping. Although not shown in the figure, we found
that more eavesdropping locations also exist outside the examined area,
{\em e.g.,\ } $>$20$m$ away from TX. We leave further investigation of these areas
to future work.
\subsection{Picocell Network}
Fig.~\ref{fig:sidelobe_pico} shows the eavesdropping results in a picocell network
scenario.
Similar to the mesh network scenario, the attacker could successfully
eavesdrop the transmission in a large area. Within 109$m^2$, the attacker
could decode at least one packet, which is 54.5\% of the entire examined area.
An area of 54$m^2$ within this 109$m^2$ allows the attacker to eavesdrop
with $>$95\% PSR, thus fully recovering the victim's transmission.
The ratio of
eavesdropping area to the entire examined area is comparable to the mesh
network scenario, which indicates similar levels of effectiveness of the
eavesdropping attack.
Interestingly, in both the mesh and picocell networks,
the area of connected eavesdropping locations
grows larger as the attacker moves away from TX.
This seems counter-intuitive, since signals become weaker
at farther distances due to propagation loss. However,
in outdoor scenarios, the projection of side lobes on
the ground grows larger at farther distances. Despite the
propagation loss, the side lobes remain strong enough
for the attacker to successfully decode the transmission, given
the sufficiently high TX power for transmissions at distances over
100$m$. This finding appears more obvious in the picocell network
because TX's beams point downwards, causing less
propagation loss through side lobes.
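A simple geometric argument (ours) makes this concrete: a side lobe leaving TX at height $h$ with depression angle $\phi$ below the horizontal reaches the ground at distance $r = h/\tan\phi$, so a lobe of angular width $\Delta\phi$ illuminates an annulus of radial extent $\approx h\,\Delta\phi/\sin^2\phi$, which grows rapidly as $\phi$ decreases, i.e., farther away from TX.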
\para{Increasing link rate reduces eavesdropping area.}
Different from the mesh network, where RX remains stationary and
achieves at most 1Gbps throughput, the victim RX in picocell is
mobile. As RX moves closer to TX, RX could achieve higher SNR and increase
data rate up to 1.5Gbps, while maintaining $>$95\% PSR.
We re-configured the testbed to transmit at 1.5Gbps, and measured the
corresponding PSR at different locations.
The lower figure in Fig.~\ref{fig:sidelobe_pico} shows a smaller eavesdropping area
when TX increases data rate from 1Gbps to 1.5Gbps. On average,
this reduces the eavesdropping area by 24$m^2$.
In particular, when requiring the
attacker to achieve $>$95\% PSR, increasing throughput reduces the
eavesdropping area from 54$m^2$ to 31$m^2$.
\yedit{The area reduces because increasing the legitimate transmission rate also raises the channel quality requirement at the eavesdropper, thus
mitigating the attack to some extent.}
Yet it does not fully solve the problem, as the attacker could still
successfully decode packets in a large area.
\vspace{-0.05in}
\subsection{Peer-to-Peer Network}
Fig.~\ref{fig:sidelobe_indoor} shows the eavesdropping performance in
a peer-to-peer scenario.
In a connected area of 16.6$m^2$ the attacker could decode at least one packet.
This area covers a large fraction (83\%)
of the 20$m^2$ total examined area.
When requiring $>$95\% PSR, the attacker could still decode the transmission
in 65\% (13.1$m^2$) of the total area.
Similar to the picocell scenario, both RX and TX can move freely,
causing different distances between RX and TX.
This allows higher SNR and higher link rate
without degrading RX's PSR, but again, it cannot remove the eavesdropping
area completely. Still,
in an area of 7$m^2$, the attacker could decode transmissions with $>$95\% PSR.
Note that the shape of eavesdropping area in the peer-to-peer scenario differs
from those in the other two scenarios.
This is mainly because TX sits at a much
lower height than in the other two scenarios. The attacker
resides on the same plane of TX and RX, and captures the side-lobe
signals on the horizontal plane. As such, the eavesdropping area
follows a similar shape of the side-lobe beam pattern
(Fig.~\ref{fig:sidelobe_example}), rather than the circular ones
observed in mesh and picocell networks.
This observation of different shapes within eavesdropping areas
could better guide the attacker's predictions for
where to launch attacks based on a targeted scenario.
Although the eavesdropping area in an indoor scenario
accounts for a larger portion of the examined area than the
outdoor scenarios, its absolute size is significantly smaller,
and thus poses a smaller threat.
Moreover, 60GHz signals can hardly penetrate walls, so the eavesdropping area
for the indoor scenario
remains bounded by its room size, further restricting
the attacker's mobility and effectiveness of eavesdropping. Therefore,
side-lobe eavesdropping proves much more severe in the prior two outdoor
scenarios.
\vspace{-0.09in}
\subsection{Summary}
In all scenarios, we find that a passive attacker could
effectively eavesdrop transmissions with very high PSR in a large area.
This shows that despite the directional nature of mmWave beamforming
transmission, side lobes still expose a significant amount of information.
Increasing transmission rate slightly mitigates
the threat, but cannot effectively defend against the eavesdropping attack.
\section{Impact of Radio Hardware}
\label{sec:simulation}
\begin{figure}[t]
\centering
\includegraphics[width=0.42\textwidth]{figs/sim_distortion.eps}
\vspace{-0.1in}
\caption{Antenna artifacts cause side lobe distortions.}
\label{fig:sim_distortion}
\vspace{-0.15in}
\end{figure}
\begin{figure*}[ht]
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{figs/factor_perfecthw.eps}
\vspace{-0.1in}
\caption{Perfect antennas
help mitigate eavesdropping but
not avoid it.}
\label{fig:sim_perfecthw}
\end{minipage}
\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{figs/factor_size_optimal.eps}
\vspace{-0.1in}
\caption{Increasing number of antennas helps reduce
eavesdropping area.}
\label{fig:sim_antennasize}
\end{minipage}
\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{figs/factor_optimal_height_pico.eps}
\vspace{-0.1in}
\caption{Attacker can raise device high
to enlarge eavesdropping area.}
\label{fig:sim_raiseheight}
\end{minipage}
\end{figure*}
So far we have empirically measured the eavesdropping area using
off-the-shelf 60GHz devices (16$\times$8 phased arrays). In this
section, we further explore whether upgrading array hardware can help
reduce the impact of
eavesdropping attacks. Specifically, there are two immediate ways to improve
mmWave array
hardware and reduce side-lobe emissions: (1) removing implementation artifacts from the antenna
radiation pattern, (2) increasing the number
of antenna elements. Fig.~\ref{fig:sim_distortion} compares the ideal antenna
radiation pattern and that of our current hardware. While the current
hardware faces distortions on side-lobe emissions, the ideal array implementation
would produce weaker, less detectable side lobes. Similarly, increasing the
number of antennas can also reduce the emission power of side
lobes~\cite{balanis2016antenna}, thereby reducing the performance of an eavesdropping attack.
In the following, we study how upgrading radio hardware would reduce the
eavesdropping effectiveness. To emulate hardware configurations different
from our testbed, we used trace-driven simulations.
Specifically,
we apply the Friis free-space propagation model~\cite{friis} to compute an attacker's SNR
at different locations.
All of our testbed measurements,
along with prior works~\cite{zhu14a, zhou12}, show that
this model can accurately estimate the SNR in line-of-sight
with very small errors ($\pm 1dB$).
At each location, we map
simulated SNR to PSR using the empirical correlation derived from
previous testbed experiments. We verified that this correlation
remains stable and accurate across different application scenarios
and link rates. Our simulations follow the same configuration
in~\S\ref{sec:background}, with altered hardware aspects.
We also expanded the experiments by varying the height of the eavesdropping
device and RX's locations.
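The core of this trace-driven simulation fits in a few lines (a sketch of ours; the bandwidth, noise figure, receive gain, and side-lobe attenuation below are illustrative assumptions rather than calibrated testbed parameters, and the final SNR-to-PSR mapping would come from the measured traces):
\begin{verbatim}
import math

c, f = 3e8, 60e9
lam = c / f

def snr_db(eirp_dbm, rx_gain_dbi, dist_m, bw_hz=1.76e9, nf_db=7.0):
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / lam)  # Friis loss
    noise_dbm = -174 + 10 * math.log10(bw_hz) + nf_db      # noise floor
    return eirp_dbm + rx_gain_dbi - fspl_db - noise_dbm

# Example: picocell TX (32 dBm EIRP), attacker 20 m away on a side lobe
# assumed ~25 dB below the main beam, with a 20 dBi receive array.
print(f"SNR ~ {snr_db(32 - 25, 20, 20.0):.1f} dB")
\end{verbatim}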
\vspace{-0.05in}
\subsection{Perfect Antennas without Artifacts}
First we simulated eavesdropping attacks on three application scenarios,
using perfect antennas without artifacts. Fig.~\ref{fig:sim_perfecthw}
shows the eavesdropping areas for different scenarios, compared with our testbed
measurements. We only present results with 1Gbps, and omit
those from other data rates as they show similar findings.
Comparing eavesdropping areas using perfect antennas and our testbed, we found
eliminating hardware artifacts reduces the eavesdropping area.
In the mesh and picocell network scenarios, the eavesdropping area reduced
by 43\% and 52\% respectively. However, the area for the indoor peer-to-peer scenario
reduced by only 4\%, as for short-range indoor communications,
TX's power (with 23$dBm$ EIRP)
at side lobes is high enough to allow eavesdropping.
Despite the reduced eavesdropping area, we find the remaining area is
still large enough for attackers to move around while achieving high PSR.
In mesh, picocell, and peer-to-peer scenarios, an attacker could achieve
full recovery of the transmission ($>$95\% PSR) in 45$m^2$, 52$m^2$,
and 15$m^2$ respectively. Thus,
\textbf{\textit{ removing hardware artifacts cannot fully defend against eavesdropping}}.
\vspace{-0.05in}
\subsection{Increasing Number of Antenna Elements}
In addition to removing artifacts from hardware, we increased the number
of antennas, and tested if the combination of these two techniques could
defend against the eavesdropping attacks. Fig.~\ref{fig:sim_antennasize} shows
how the eavesdropping area (with PSR $>$0) changes as we increase the
number of antennas in the horizontal plane (our testbed uses 16
antennas in this plane). We find that in all our application scenarios,
eavesdropping area decreases monotonically as we add more antennas.
For example, in the picocell network scenario, using 64 antennas
(compared to 16 in our testbed) effectively reduces the eavesdropping
area from 52.39$m^2$ down to 3.91$m^2$.
This confirms the theory that more antenna elements reduce
side lobes' beam width and emission power, resulting in shrinking the area
where an attacker could receive the side-lobe signals.
Besides incurring larger hardware
implementation cost and size,
\textbf{\textit{increasing the number of antennas does not fully prevent
an eavesdropping attack}}. For instance, in both mesh and picocell
scenarios, a simple yet effective method for the attacker is to raise
the eavesdropping device to get closer to TX and receive stronger signals.
This results in higher SNR
than eavesdropping on the ground, and the attacker could achieve better
eavesdropping results. Fig.~\ref{fig:sim_raiseheight} shows its effect
in the picocell scenario. Even though TX uses 64 perfect
antennas (in the horizontal plane), an attacker could increase the eavesdropping area from
3.91$m^2$ to 15.2$m^2$ by moving the device from
in-hand position (1$m$) to above-head (2$m$). If the attacker uses drones to
further raise the device height, the eavesdropping area
increases to 30.8$m^2$. We observed similar improvement in mesh
networks. As such, even after reconfiguring hardware with significant cost,
an attacker could still successfully eavesdrop in large area.
This poses a serious security threat, as simple methods like holding the device higher
allow attackers to stay ahead of hardware upgrades.
New defense mechanisms are therefore needed.
\section{Analysis of Existing Defenses}
\label{sec:existing}
Existing defenses
focus on adding \textit{artificial noise} to
side-lobe signals to prevent attackers from decoding
useful information~\cite{alotaibi_tcom16, heath_enhancing16,
juying17,ramadan_icc16, heath_tcom13, mckay13, heath_adhoc16}.
They fall under two categories, depending on
how the noise is generated: (1) antenna-based defenses and
(2) RF-chain-based defenses\footnote{
An RF (radio-frequency) chain refers to a set of physical
hardware components for wireless signal processing, bridging
between the antenna array and radio baseband.
}.
In this section, we analyze these defenses to study whether they
are practical and effective defending against side-lobe eavesdropping.
We summarize them in Table~\ref{table:defense}.
\para{Antenna-Based Defenses.}
This defense creates noisy side lobes by changing the radiated
signals from a subset of antenna elements. During transmission,
TX either disables~\cite{alotaibi_tcom16, heath_tcom13}
or flips the phase~\cite{heath_enhancing16} of a random subset of antennas.
This produces randomized radiation patterns on
side lobes, with minimal impact on normal transmissions\footnote{
Due to space limit, we omit details about this
defense. We refer interested readers to related work for more information.}.
Antenna-based defenses require TX to change the selected antenna subset very
frequently, often on a per-symbol basis, {\em i.e.\ }
at the time scale of {\em sub-nanoseconds}.
Less frequent switching keeps signals within a packet
highly correlated with each other. This could allow the attacker
to simply estimate the wireless channel, or guess the correlation constant
to recover the transmission.
Despite its effectiveness, switching at a per-symbol frequency
incurs extremely high hardware and power costs.
For the same reason, today's hardware can only support packet-level
switching (10s of nanoseconds), making antenna-based defenses impractical.
Despite the impracticality, we implemented these defenses in simulation.
We found that they effectively defend against single-device side-lobe
eavesdroppers, regardless of where the attack is launched.
However, they remain vulnerable to advanced attacks.
For instance, an attacker can use multiple \textit{synchronized}
devices to measure
side-lobe signals at different angles,
undo the effects of antenna randomization on
a per-symbol basis, and recover the packets.
The key is to decode the antenna selections used for transmission
from these measurements, as there is only a limited number of antenna subset selections.
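To sketch this derandomization idea, the snippet below (hypothetical: the
geometry, the noiseless single-path channel model, and all names are our
assumptions, not an implementation of any specific prior attack) matches one
symbol's measurements at several known angles against every candidate
antenna subset via least squares; the correct subset is the one that
explains all measurements with a single common symbol.
\begin{verbatim}
import itertools
import numpy as np

N, M = 8, 4                      # TX antennas, synchronized RXs
rng = np.random.default_rng(0)
angles = np.deg2rad([20.0, 35.0, 50.0, 65.0])  # known RX angles
steer = np.exp(1j * np.pi *
               np.outer(np.sin(angles), np.arange(N)))

def response(subset):
    # complex gain toward each RX when only `subset` transmits
    mask = np.zeros(N)
    mask[list(subset)] = 1.0
    return steer @ mask

subsets = list(itertools.combinations(range(N), N // 2))
true_subset = subsets[rng.integers(len(subsets))]
symbol = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
y = response(true_subset) * symbol   # one symbol, noiseless

def residual(subset):
    g = response(subset)
    s_hat = np.vdot(g, y) / np.vdot(g, g)  # LS symbol estimate
    return np.linalg.norm(y - g * s_hat)

best = min(subsets, key=residual)
print(best == true_subset)  # generically True: subset decoded
\end{verbatim}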
\para{RF-Chain-Based Defenses.}
Unlike antenna-based defenses,
these defenses add \textit{additional} RF chains
to generate noise and do not require randomizing TX's radiation pattern.
They ``jam'' the eavesdropper at TX's side lobes,
so the attacker can only receive a mixture of transmitted signals
and noise signals.
For mmWave hardware, this adds significant complexity
and cost in RF signal processing components, increasing the
hardware cost and power requirements.
Although previous work~\cite{rfchain_high_power,
heath_enhancing16} reduces the hardware requirement,
these defenses~\cite{heath_enhancing16,
juying17, mckay13, heath_adhoc16} remain costly and
power-demanding.
We found in simulations that RF-chain-based defenses effectively
defend against single-device eavesdroppers.
However, TX's side lobes have gaps in between,
where the transmitted signals are nulled. An advanced attacker
can exploit this and search for directions containing only noise.
He could then perform noise cancellation with only two synchronized receivers:
one listening to the noise alone
and the other eavesdropping on the mixture of noise and legitimate signals.
The attack becomes more difficult when
TX uses more than two RF chains to generate noise:
noise from different RF chains mixes together and becomes
difficult to isolate. Still, this countermeasure comes at an
even higher cost in mmWave hardware and device power.
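The following sketch illustrates the two-receiver cancellation step under a
simplified, flat single-tap channel model; the channel coefficients, QPSK
modulation, and sample count are illustrative assumptions rather than
measured values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4096
data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], n)
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)

h_noise_rx1 = 0.9 * np.exp(1j * 0.7)  # assumed flat channels
h_noise_rx2 = 0.4 * np.exp(1j * 2.1)
h_data_rx2 = 0.8 * np.exp(1j * 0.3)

rx1 = h_noise_rx1 * noise                      # noise only
rx2 = h_data_rx2 * data + h_noise_rx2 * noise  # mixture

# least-squares estimate of the relative noise channel:
alpha = np.vdot(rx1, rx2) / np.vdot(rx1, rx1)
cleaned = rx2 - alpha * rx1                    # cancel noise
decided = np.sign(cleaned.real) + 1j * np.sign(cleaned.imag)
print("symbol error rate:", np.mean(decided != data))
\end{verbatim}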
\begin{table}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Category\end{tabular}}
&
\multicolumn{2}{c|}{Defense Requirement} &
\multicolumn{2}{c|}{Vulnerability} \\
\cline{2-5}
&
\begin{tabular}[c]{@{}c@{}}\# of RF\\Chains\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Antenna\\Switching\\Frequency\end{tabular} &
\begin{tabular}[c]{@{}c@{}}\# of Sync.\\Devices\\to Attack\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Info. Required \\ for Attack\end{tabular} \\
\hline
No Defense & 1 & - & 1 & side-lobe signals \\
\hline
Antenna-Based & 1 & per-symbol & N & \begin{tabular}[c]{@{}c@{}}signals at\\N locations\end{tabular} \\
\hline
RF-Chain-Based & $>$2 & - & 2 & \begin{tabular}[c]{@{}c@{}}noise signals \\ at N locations\end{tabular} \\
\hline
\end{tabular}
}
\vspace{-0.05in}
\caption{Summary and vulnerabilities of different defense mechanisms.
$N$ is the number of TX antennas.}
\label{table:defense}
\vspace{-0.0in}
\end{table}
\vspace{-0.08in}
\section{Related Work}
\label{sec:related}
\para{Security Analysis in mmWave Eavesdropping.}
Existing works to study mmWave eavesdropping either perform
simulations~\cite{alotaibi_tcom16,
dai2011exploring, dai2013eavesdropping, heath_enhancing16, juying17,
kim2017analysis, heath_tcom13, wang_twcom16, heath_adhoc16}
or use horn antennas~\cite{steinmetzer2015eavesdropping}, which
have no side lobes.
Differing from these,
we are the first to study mmWave side-lobe eavesdropping
from actual measurements, using commercial 60GHz phased arrays
in real-world application scenarios.
Many of the proposed defenses against mmWave eavesdropping, {\em e.g.,\ }
antenna-based~\cite{alotaibi_tcom16,
heath_enhancing16, heath_tcom13}
or RF-chain-based designs~\cite{
heath_enhancing16, juying17, wang_twcom16, heath_adhoc16},
assume a naive single-device attacker.
Our work analyzes these proposals and finds that they are
either vulnerable to advanced attackers
with multiple synchronized devices or introduce significant
hardware overhead and cost. Thus, these defenses are not well suited to
mmWave transmissions.
\para{Eavesdropping in Low-Frequency RF Bands.}
Eavesdropping is more prevalent and easier in lower-frequency
bands, {\em e.g.,\ } Wi-Fi and cellular, due to their omni-directional signals.
Many previous works propose defense mechanisms based on jamming, which injects
artificial noise towards the
attackers~\cite{cooperative_jam09, relay_jamming08,
mckay13}. Although different techniques are used, {\em e.g.,\ } a separate jammer
synchronized with the transmitter~\cite{ijam10} or cooperative devices
and relays~\cite{cooperative_jam09, relay_jamming08}, these defense
mechanisms all require a high number of RF chains. While this hardware
cost is acceptable for commodity Wi-Fi and cellular
devices, it remains extremely high in the context of mmWave.
\section{Conclusion and Future Work}
\label{sec:discussion}
As an initial step to investigate mmWave side-lobe
eavesdropping with real measurements, our study already shows that it is
a much greater threat than expected.
We hope our results
draw the attention of the community and shed light on the future
development of mmWave communications.
Moving forward, many open challenges remain.
\para{Potential Defenses.} Despite existing proposals,
we lack a practical and secure solution against side-lobe
eavesdropping. Other than reducing the RF chain cost in mmWave
communications, a possible alternative could leverage
the antenna artifacts. Designing specific artifacts into the hardware
could resist the attack, since we saw earlier that artifacts
may alter the shape of side lobes. Such artifacts should be carefully
designed so that normal transmission remains unaffected.
\para{Empirical Validation of Advanced Attacks.}
We briefly described and simulated two types of advanced attacks,
{\em i.e.\ } the antenna randomization attack and the noise cancellation attack.
While other advanced attacks remain possible,
current mmWave hardware is not flexible enough to implement them.
Also, our device does not report the bit error rate (BER), which could shed
light on more fine-grained insights, as~\cite{imc10mesh} did.
We hope more flexible hardware becomes available soon, so that we can
empirically validate the attacks while accounting for antenna artifacts,
which may affect the attacks' performance.
\bibliographystyle{acm}
\section*{\centering Abstract}
\textit{
This paper proposes an explicit way to optimize the super-resolution network for generating visually pleasing images.
Previous approaches use several loss functions that are hard to interpret and have only an implicit relationship to the perceptual score they aim to improve.
We show how to exploit a machine-learning-based model that is directly trained to predict the perceptual score of generated images.
Such models can be used to optimize the super-resolution network in a way that is easier to interpret.
We further analyze the characteristics of the existing losses and our proposed explicit perceptual loss for better interpretation.
The experimental results show that the explicit approach achieves a better perceptual score than the other approaches.
Finally, we demonstrate the relation between the explicit perceptual loss and visually pleasing images using subjective evaluation.
}
\section{Introduction}
\label{section:introduction}
Single-image Super-resolution (SR)~\cite{Irani:1991:IRI:108693.108696}
enlarges a low-resolution image (LR) so that its image quality is maintained.
As with many computer vision technologies, SR has been improved
significantly with convolutional neural networks, CNNs (e.g.,
DBPN~\cite{DBLP:conf/cvpr/HarisSU18,DBLP:journals/corr/abs-1904-05677},
WDST~\cite{DBLP:conf/iccv/0002YXD19},
and
SRFlow~\cite{eccv2020sisr}).
Its performance is improved every year as demonstrated in public
challenges~\cite{DBLP:conf/cvpr/TimofteGWG18,DBLP:conf/iccvw/GuLZXYZYSTDLDLG19,DBLP:conf/cvpr/ZhangGTSDZYGJYK20}.
In the common SR methods using CNNs as well as those without CNNs, SR models are
trained so that the mean square error (MSE) is minimized. The MSE is
computed from the difference between a reconstructed SR image and its
high-resolution (ground-truth) image.
However, it is revealed that the MSE minimization leads to
perceptually-discomfortable SR images
\cite{DBLP:conf/iccv/SajjadiSH17, DBLP:conf/cvpr/LedigTHCCAATTWS17}.
In these works, perceptually-comfortable images are
reconstructed using additional loss functions such as perceptual loss~\cite{johnson2016perceptual}, adversarial loss~\cite{DBLP:conf/cvpr/LedigTHCCAATTWS17}, and style
loss~\cite{DBLP:conf/iccv/SajjadiSH17}.
In~\cite{DBLP:conf/cvpr/BlauM18}, it is demonstrated that there exists
a trade-off between the image-distortion quality evaluated by the MSE
and the perceptual quality.
In these approaches~\cite{DBLP:conf/iccv/SajjadiSH17,
DBLP:conf/cvpr/LedigTHCCAATTWS17}, the perceptual quality is
improved implicitly by several loss functions whose relationship with
the perceptual score is difficult to interpret. The difficulty of
this interpretation is increased by the deep networks in the SR
methods~\cite{DBLP:conf/iccv/SajjadiSH17,
DBLP:conf/cvpr/LedigTHCCAATTWS17} described above.
On the other hand, we can explicitly improve the perceptual quality of
machine learning (ML) based SR models by simpler ways.
The most straightforward way may be to manually provide subjective
perceptual scores to all possible SR images that are generated and
evaluated during the supervised training stage. Unfortunately, that
is impossible in reality.
Such explicit perceptual scores, however, can be predicted by
perceptual-quality-aware features~\cite{DBLP:journals/spl/MittalSB13}
and ML models that are trained directly by the subjective perceptual
scores~\cite{DBLP:journals/cviu/MaYY017}.
These features and models can be utilized for perceptual loss
functions that explicitly improve the perceptual quality.
In this paper, we evaluate the effectiveness of the aforementioned loss
functions for implicit and explicit improvement of the perceptual SR
quality, as briefly shown in Fig.~\ref{fig:effect}.
The explicit perceptual loss is able to improve the perceptual score compared with the other approaches.
\begin{figure*}[!t]
\begin{center}
\begin{tabular}[c]{ccccc}
\includegraphics[width=.17\textwidth]{89_HR}
&
\includegraphics[width=.17\textwidth]{89_LR_original_size}
&
\includegraphics[width=.17\textwidth]{89_LR_x4_bicubic}
&
\includegraphics[width=.17\textwidth]{89_idx11_implicit}
&
\includegraphics[width=.17\textwidth]{89_idx31_explicit}\vspace{-0.2em}\\
{\small (a) GT}
&{\small (b) LR}
&{\small (c) Bicubic}
&{\small (d) Implicit}
&{\small (e) Explicit}\vspace{0.2em}\\
&&Perc.: 7.36 &Perc.: 3.90 &Perc.: 3.77
\end{tabular}\vspace{1em}
\caption{Effect of explicit perceptual loss functions on SR. Note: Perc. denotes the \texttt{Perceptual score}, where a lower score indicates a better result.}
\label{fig:effect}\vspace{-1em}
\end{center}
\end{figure*}
\section{Related Work}
\label{section:related}
Image restoration and enhancement including image SR require
appropriate quality assessment metrics for evaluation. Such metrics
are important also for training objectives, if the quality assessment
is given by ML with training data.
PSNR and SSIM~\cite{DBLP:journals/tip/WangBSS04} are widely used as
such metrics, focusing on comparing a reconstructed image with its
ground truth image.
There exist methods for quality assessment that do not require a
reference ground truth image~\cite{DBLP:conf/icip/Luo04,
DBLP:conf/cvpr/TangJK14, DBLP:journals/spl/MittalSB13}, including
some that use deep neural networks to learn the metrics
\cite{DBLP:conf/cvpr/KangYLD14, DBLP:journals/tip/MaLZDWZ18}.
Several quality assessment metrics~\cite{DBLP:conf/icip/ReibmanBG06,
DBLP:conf/icip/YeganehRW12, DBLP:conf/icip/FangLZLG16} have been
evaluated specifically for SR, including no-reference metrics (a.k.a
blind metrics)~\cite{DBLP:journals/cviu/MaYY017}.
For the goal of this paper (i.e., SR model learning), no-reference
metrics are required because any SR images reconstructed by any
intermediate learning results of model parameters must be evaluated.
Among the aforementioned no-reference metrics, NIQE
\cite{DBLP:journals/spl/MittalSB13} and Ma's algorithm
\cite{DBLP:journals/cviu/MaYY017} are regarded as the representatives
of explicit perceptual metrics
based on hand-crafted features and ML, respectively, and
utilized in the perceptual-aware SR competition
\cite{DBLP:journals/corr/abs-1809-07517}.
These metrics~\cite{DBLP:journals/spl/MittalSB13,
DBLP:journals/cviu/MaYY017} are explained briefly as our explicit
perceptual loss functions at the beginning of the next section.
\section{SR Image Reconstruction based on Explicit Perceptual Loss}
\label{section:method}
\subsection{No-reference Perceptual Quality Assessment Metrics for Explicit Perceptual Metrics}
\label{subsection:metrics}
\paragraph{NIQE}
NIQE~\cite{DBLP:journals/spl/MittalSB13}, which is one of the explicit
perceptual loss used in our experiments, uses a collection of
quality-aware statistical features based on a simple and successful
space domain natural scene statistic (NSS) model. The NSS model is
appropriate for representing the explicit perceptual loss because this
model describes the statistics to which the visual apparatus has
adapted in natural images.
For NIQE, the NSS feature~\cite{bib:NSS},
$\frac{I (i, j) - \mu (i, j)}{\sigma (i, j) + 1},$ is computed in all
pixels of an image,
where $I(i, j)$, $\mu (i, j)$, and $\sigma (i, j)$ denote a pixel
value in $(i, j)$ and the mean and variance of pixels around $(i, j)$,
respectively. In NIQE, $7 \times 7$ pixels are used for computing
$\mu(i, j)$ and $\sigma(i, j)$.
Only patches where the sum of $\sigma(i ,j)$ is above a predefined
threshold are used in the following process.
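A minimal sketch of this normalization, assuming a grayscale floating-point
image and approximating NIQE's Gaussian-weighted local moments by uniform
$7 \times 7$ averaging for brevity, is:
\begin{verbatim}
import numpy as np
from scipy.ndimage import uniform_filter

def mscn(image, window=7):
    # local mean and standard deviation of the image
    mu = uniform_filter(image, size=window)
    var = uniform_filter(image * image, size=window) - mu * mu
    sigma = np.sqrt(np.maximum(var, 0.0))
    # NSS feature (I - mu) / (sigma + 1); sigma is also
    # returned so patches can be filtered by sharpness
    return (image - mu) / (sigma + 1.0), sigma
\end{verbatim}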
In each patch, parameters in the following two Gaussian distributions
of pixel values $x$ are computed:
\begin{itemize}
\item {\bf Generalized Gaussian distribution (GGD):}
\begin{equation}
f_{g} \left( x; \alpha, \beta \right) = \frac{\alpha}{2 \beta
\Gamma(1/\alpha)} \exp \left( - \left( \frac{|x|}{\beta}
\right)^{\alpha} \right),
\label{eq:ggd}
\end{equation}
where $\Gamma$ is the gamma function. $\alpha$ and $\beta$ are
estimated as the parameters.
\item {\bf Asymmetric generalized Gaussian distribution (AGGD):}
\footnotesize
\begin{equation}
f_{a} \left( x; \gamma, \beta_{l}, \beta_{r} \right) =
\frac{\gamma}{(\beta_{l} + \beta_{r})
\Gamma(1/\gamma)} \exp \left( - \left( \frac{|x|}{\beta'} \right)^{\gamma} \right),
\label{eq:aggd}
\end{equation}
\normalsize
where $\beta'$ is $\beta_{l}$ (when $x \leq 0$) or $\beta_{r}$ (when
$x \geq 0$). $\gamma$, $\beta_{l}$, and $\beta_{r}$ are estimated as
the parameters. The mean of the distribution, $\eta$, is also
parameterized:
\begin{equation}
\eta = \left( \beta_{l} - \beta_{r} \right) \frac{\Gamma(\frac{2}{\gamma})}{\Gamma(\frac{1}{\gamma})}
\label{eq:eta}
\end{equation}
\end{itemize}
$\gamma$, $\beta_{l}$, $\beta_{r}$, and $\eta$ in Eqs. (\ref{eq:aggd})
and (\ref{eq:eta}) are estimated along the four orientations and used
in conjunction with $\alpha$ and $\beta$ in Eq. (\ref{eq:ggd}). In
total, 18 parameters are given in each patch.
The multivariate distribution of the estimated 18 features is
represented by the multivariate Gaussian (MVG) model.
With the mean and variance of the MVG, the quality of the test image
is evaluated by the following distance between the MVG models of
natural images and the test image: \footnotesize
\begin{equation}
D(\nu_{n}, \nu_{t}, \Sigma_{n}, \Sigma_{t}) = \sqrt{\left( (\nu_{n} - \nu_{t})^{T} \left( \frac{\Sigma_{n} + \Sigma_{t}}{2} \right)^{-1} (\nu_{n} - \nu_{t}) \right)},
\label{eq:D}
\end{equation}
\normalsize
where $\nu_{n}, \nu_{t}$ and $\Sigma_{n}, \Sigma_{t}$ are the mean
vectors and covariance matrices of the MVG models of the natural
images and the test image, respectively. In NIQE, the natural images
were selected from Flickr and the Berkeley image segmentation database.
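A direct transcription of this distance could look as follows; the
pseudo-inverse guards against a singular pooled covariance, and the random
stand-in features are purely illustrative.
\begin{verbatim}
import numpy as np

def niqe_distance(nu_n, sigma_n, nu_t, sigma_t):
    diff = nu_n - nu_t
    pooled = 0.5 * (sigma_n + sigma_t)
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 18))  # stand-in patch features
nu_t = feats.mean(axis=0)
sigma_t = np.cov(feats, rowvar=False)   # 18 x 18 covariance
print(niqe_distance(np.zeros(18), np.eye(18), nu_t, sigma_t))
\end{verbatim}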
\paragraph{Ma's Algorithm}
Ma's algorithm~\cite{DBLP:journals/cviu/MaYY017} evaluates the
perceptual quality of an image based on three types of low-level
statistical features in both spatial and frequency domains, namely the
local frequency features computed by the discrete cosine transform,
the global frequency features represented by the wavelet
coefficients, and the spatial discontinuity feature based on the
singular values.
For each of the three statistical features, a random forest regressor
is trained to predict the human-subjective score of each training
image.
While the original algorithm~\cite{DBLP:journals/cviu/MaYY017}
predicts the perceptual score by a weighted-linear regression using
the aforementioned three features, the computational cost of the local
frequency feature is very high: it is 20 times slower to compute than the
other two features. This high computational cost makes it inappropriate
for a loss function in ML.
Furthermore, the contribution of the global frequency feature is
small compared with the spatial discontinuity feature, as
demonstrated in~\cite{DBLP:journals/cviu/MaYY017}.
In our experiments, therefore, the spatial discontinuity feature is
evaluated as one of the explicit perceptual losses in order to avoid a
combinatorial explosion in the evaluation with NIQE and the three implicit
perceptual loss functions~\cite{DBLP:conf/iccv/SajjadiSH17,
DBLP:conf/cvpr/LedigTHCCAATTWS17}; while the five loss
functions we use result in $2^{5}=32$ combinations in our experiments
(Section~\ref{subssec:multi_loss}), $2^{7}=128$ tests would be required if
all seven loss functions, including the local and global
frequency features, were evaluated.
Singular values of images with smooth contents decay to zero more
rapidly than those of images with sharp contents, as validated in~\cite{DBLP:journals/cviu/MaYY017}.
This property suggests using the singular values for evaluating
the spatial discontinuity.
In Ma's spatial discontinuity (MSD) metric, the singular values
are computed from a set of patches extracted from the image
without overlap. The concatenation of the singular values is used as
an MSD feature vector and fed into the random forest regressor.
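A sketch of this feature extraction (the patch size and scan order are
illustrative assumptions) is:
\begin{verbatim}
import numpy as np

def msd_feature(image, patch=64):
    # concatenated singular values of non-overlapping patches
    h, w = image.shape
    values = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = image[i:i + patch, j:j + patch]
            values.append(np.linalg.svd(block, compute_uv=False))
    return np.concatenate(values)  # fed to the regressor
\end{verbatim}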
\subsection{SR Loss Functions using Explicit Perceptual Metrics}
\label{subsection:method}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{network}
\vspace*{1mm}
\caption{Deep SR network with the MSE loss ($L_{MSE}$) and our
explicit perceptual loss ($L_{PS}$). ``EPS'' in the figure
outputs the explicit perceptual score of each reconstructed SR
image, $S(x,y)$. $\theta_{sr}$ denotes the parameters of the
network.}
\label{fig:network}
\end{center}
\vspace*{-5mm}
\end{figure}
In this section, we propose how to use the two explicit perceptual
metrics, NIQE and MSD, as loss functions that are applicable to deep
SR networks.
A simple illustration of the SR network is shown in
Figure~\ref{fig:network}. Basic SR networks such as SRCNN and DBPN
employ only the MSE loss, which is
$L_{MSE} = \frac{1}{XY} \sum^{X,Y}_{x,y} \left( S(x,y) - H(x,y)
\right)^{2}$
in this figure where $S(x,y)$ and $H(x,y)$ denote the SR and
ground-truth high-resolution (HR) images, each of whose size is
$(X, Y)$, respectively.
Since NIQE outputs a scalar score (i.e.,
$D(\nu_{n}, \nu_{t}, \Sigma_{n}, \Sigma_{t})$ in (\ref{eq:D})) only
based on statistic calculation,
$D(\nu_{n}, \nu_{t}, \Sigma_{n}, \Sigma_{t})$ can be directly used as
a loss function for deep SR networks.
In this case, ``EPS'' in Figure~\ref{fig:network} consists of
Equations (\ref{eq:ggd}), (\ref{eq:aggd}), (\ref{eq:eta}), and
(\ref{eq:D}).
In MSD, on the other hand, an ML-based regressor is
employed at the final stage.
Any type of regressors can be employed, including deep networks,
random forests, support vector machines, and so on. For example, the
Ma's algorithm uses random forest regressors for all metrics.
Depending on the type of the regressor, we propose the following two
options for using MSD as a loss function:
\begin{description}
\item[Deep networks:] As with NIQE, the MSD regressor using a deep
network that outputs a scalar MSD score can be straightforwardly
combined with deep SR networks.
In this case, ``EPS'' in Figure~\ref{fig:network} indicates the MSD
regressor using the deep network.
\item[Other ML algorithms:] We have difficulty incorporating other ML
algorithms into a deep network for both efficient training and
inference. Our proposed method resolves this problem by employing the
MSD feature vector instead of the MSD score. The MSD feature vector can be
computed from any image. In the training stage, we compute the MSE
loss between the MSD feature vectors of each reconstructed image and
its original HR image in ``EPS'' in Figure~\ref{fig:network}.
This loss is used for evaluating the perceptual quality of each
reconstructed SR image. While the MSD feature vector is computed from
every reconstructed SR image, that of the HR image is computed
only once at the beginning of the training stage.
\end{description}
Finally, the loss function with NIQE and MSD is defined as follows:
\begin{equation}
L_{PS} = \epsilon \left( D(\nu_{n}, \nu_{t}, \Sigma_{n},
\Sigma_{t}) \right)^{2}
+ \zeta L_{Ma},
\label{eq:loss}
\end{equation}
where $\epsilon$ and $\zeta$ denote the weights of NIQE and MSD,
respectively, and $L_{Ma}$ is either of the following ones:
\begin{description}
\item[Deep networks:]
\begin{equation}
L_{Ma} = PSNN(M^{S}),
\label{eq:dn}
\end{equation}
where $PSNN(M^{S})$ denotes the deep network that regresses the
perceptual score from $M^{S}$, which is the MSD feature vector of the
reconstructed SR image. $PSNN(M^{S})$ is trained so that a lower score
corresponds to better perceptual quality.
\item[Other ML algorithms:]
\begin{equation}
L_{Ma} = \sum_{i} \left( M^{S}_{i} - M^{H}_{i} \right)^{2},
\label{eq:ml}
\end{equation}
where $M^{H}$ denotes the MSD feature vector of the original HR image.
\end{description}
These two options, Eqs.~(\ref{eq:dn}) and (\ref{eq:ml}), are evaluated
in Section~\ref{subssec:explicit_loss}.
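As an illustration, a PyTorch-style sketch of $L_{PS}$ with the option of
Eq.~(\ref{eq:ml}) might read as follows; the module names, the weights, and
the assumption that NIQE is available as a differentiable module are ours,
and the HR feature vector is precomputed once as described above.
\begin{verbatim}
import torch

def explicit_perceptual_loss(sr, msd_feature, msd_feature_hr,
                             niqe, epsilon=0.01, zeta=0.001):
    loss_niqe = niqe(sr).pow(2)                # D(...)^2 term
    loss_ma = (msd_feature(sr)
               - msd_feature_hr).pow(2).sum()  # Eq. (ml)
    return epsilon * loss_niqe + zeta * loss_ma
\end{verbatim}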
\section{Experimental Results}
\label{section:experiments}
\subsection{Implementation and training details}
The networks consist of two blocks: a generator ($G$) and a discriminator ($D$)~\cite{goodfellow2014generative}.
The $G$ network generates (SR) images and the $D$ network differentiates between real (HR) and fake (SR) images.
DBPN~\cite{DBLP:conf/cvpr/HarisSU18}, the winner of the SR competition held in
2018~\cite{DBLP:conf/cvpr/TimofteGWG18, pirm2018}, is used as the $G$ network.
Meanwhile, the $D$ network consists of five hidden layers with batch normalization, and the last layer is a fully connected layer.
The training mechanism is illustrated in Fig.~\ref{fig:train}.
We trained the networks using images from DIV2K \cite{Agustsson_2017_CVPR_Workshops} with online augmentation (scaling, rotating, flipping, and random cropping).
To produce LR images, we downscale the HR images at particular scaling factors using bicubic interpolation.
We use the validation set from PIRM2018~\cite{pirm2018} as the test set which consists of 100 images.
The experiment focuses on 4$\times$ scaling factor.
We use DBPN-S which is the variant of DBPN with shallower depth.
On DBPN-S, we use an $8 \times 8$ kernel with stride = 4 and padding of 2 pixels, with T = 2.
All convolutional and transposed convolutional layers are followed by parametric rectified linear units (PReLUs), except the final reconstruction layer.
We initialize the weights based on~\cite{he2015delving}.
We use batch size of 4 with size $576 \times 576$ for HR image, while LR image size is 144 $\times$ 144.
We intentionally use large patches, assuming the explicit perceptual loss works better on bigger patches than on smaller ones, as shown in Fig.~\ref{fig:train}.
The learning rate is initialized to $1e-4$ for all layers and decreased by a factor of 10 every 50 epochs, for a total of 100 epochs.
We use Adam with momentum set to $0.9$.
All experiments were conducted using PyTorch 0.4.1 and Python 3.5 on NVIDIA TITAN X GPUs.
\begin{figure}
\begin{center}
\includegraphics[width=.5\textwidth]{architecture}
\end{center}\vspace{-1em}
\caption{The overview of training mechanism.}
\label{fig:train}
\end{figure}
To evaluate the performance, we use the \texttt{Perceptual score} proposed in PIRM2018~\cite{pirm2018}, the perceptual super-resolution challenge. It is divided into three categories defined by thresholds on the RMSE. The three regions are defined by Region 1: RMSE $\le 11.5$, Region 2: $11.5 <$ RMSE $\le 12.5$, and Region 3: $12.5 <$ RMSE $\le 16$. The \texttt{Perceptual score} is computed by combining the quality measures of Ma~\cite{DBLP:journals/cviu/MaYY017} and NIQE~\cite{DBLP:journals/spl/MittalSB13} as below. A lower score means better perceptual quality.
\begin{equation}
\texttt{Perceptual score} = 1/2((10-Ma)+NIQE)
\end{equation}
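In code form, this metric is a one-line transcription:
\begin{verbatim}
def perceptual_score(ma, niqe):
    # lower is better: Ma's score is inverted, NIQE added
    return 0.5 * ((10.0 - ma) + niqe)
\end{verbatim}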
\subsection{Combination on multiple loss functions}
\label{subssec:multi_loss}
Here, we evaluate combinations of five loss functions added to the $G$ network objective to show the characteristics of each loss, producing 32 combinations.
On the $G$ network, we implement six losses (MSE, VGG, Style, Adversarial, NIQE, and Ma), which are explained below; a sketch of how these terms are combined follows the list.
\begin{equation}\begin{split}
L_{G} &= 10*L_{mse} + w_1*L_{vgg} + w_2*L_{adv} + \\
&w_3*L_{style} + w_4*L_{NIQE} + w_5*L_{Ma}
\end{split}
\end{equation}
\begin{description}
\item[(a) $L_{mse}$] is pixel-wise loss $L_{mse} = || I^h - I^{sr} ||^2_2$.
\item[(b) $L_{vgg}$] is calculated in the feature space using pretrained VGG19~\cite{simonyan2014very} on multiple layers. This loss was originally proposed by~\cite{johnson2016perceptual, dosovitskiy2016generating}. Both $I^h$ and $I^{sr}$ are first mapped into a feature space by differentiable functions $f_{i}$ from VGG multiple max-pool layers ${(i = {2, 3, 4, 5})}$ then sum up each layer distances. $L_{vgg} = \sum\limits_{i={2}}^5 || f_i(I^h) - f_i(I^{sr}) ||^2_2$.
\item[(c) $L_{adv}$] $= -\texttt{log} (D(G(I^{l})))$~\cite{goodfellow2014generative}
\item[(d) $L_{style}$] is used to generate high quality textures~\cite{gatys2016image}.
\item[(e) $L_{NIQE}$] (Eqs.~\ref{eq:D})
\item[(f) $L_{Ma}$] (Eqs.~\ref{eq:loss})
\end{description}
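As referenced above, the following sketch assembles $L_G$ from these terms;
each loss is assumed to be a callable returning a scalar tensor, and
\texttt{d\_fake} denotes $D(G(I^{l}))$.
\begin{verbatim}
import torch

def generator_loss(sr, hr, d_fake, losses, w):
    l = 10.0 * losses["mse"](sr, hr)
    l = l + w[0] * losses["vgg"](sr, hr)
    l = l + w[1] * (-torch.log(d_fake + 1e-8)).mean()
    l = l + w[2] * losses["style"](sr, hr)
    l = l + w[3] * losses["niqe"](sr)
    l = l + w[4] * losses["ma"](sr, hr)
    return l
\end{verbatim}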
For $D$ network~\cite{goodfellow2014generative}, it is optimized by
\begin{equation}\begin{split}
L_D = -\texttt{log} (D(I^{h})) - \texttt{log}(1-D(G(I^{l})))
\end{split}
\end{equation}
Table~\ref{tab:multi_loss} shows the results for the 32 combinations.
It is hard to interpret the behavior of each loss function.
However, some results can be highlighted to broadly understand the characteristics of each loss, especially the differences between implicit and explicit perceptual losses.
Note that the weight for each loss is chosen based on preliminary experiments.
Among the single-loss settings (nos.~1--6), $L_{NIQE}$ provides the best result in Region 2.
Further improvement can be achieved when the explicit perceptual loss is combined with $L_{adv}$, as shown in no.~12.
However, we start observing diminishing returns on $L_{NIQE}$ even when it is combined with other implicit perceptual losses.
Meanwhile, $L_{Ma}$ shows good performance only when combined with $L_{vgg}$ and $L_{adv}$.
The best combination is no.~19, which uses four loss functions: $L_{mse}$, $L_{vgg}$, $L_{adv}$, and $L_{Ma}$.
It is also interesting that the second best result is no.~17, which also uses four loss functions but replaces $L_{Ma}$ with $L_{style}$.
Therefore, we can conclude that $L_{Ma}$ is able to replace $L_{style}$ with a marginal improvement.
The best result using both explicit perceptual losses is no.~31.
However, it is important to note that $L_{adv}$ is crucial to the performance of this combination.
We can see this clearly by comparing nos.~31 and 30, where no.~30 performs much worse without $L_{adv}$.
\begin{table}[t!]
\scriptsize
\caption{The comparison of six losses on 32 combinations.}
\begin{center}
\scalebox{0.95}{
\begin{tabular}{|*1c|*1c|*1c|*1c|*1c|*1c|*1c|*1c|*1c|}
\hline
No. & $w_1$ & $w_2$ & $w_3$ & $w_4$ & $w_5$ & \texttt{Perc.} & RMSE & Region \\
\hline\hline
1&0&0&0&0&0&5.692&11.86&2\\\hline
2&0.1&0&0&0&0&5.654&11.82&2\\\hline
3&0&0.1&0&0&0&2.540&14.12&3\\\hline
4&0&0&10&0&0&5.699&11.76&2\\\hline
5&0&0&0&0.01&0&5.397&11.86&2\\\hline
6&0&0&0&0&0.001&5.666&11.89&2\\\hline
7&0.1&0.1&0&0&0&2.751&13.86&3\\\hline
8&0.1&0&10&0&0&5.784&12.1&2\\\hline
9&0.1&0&0&0.01&0&5.587&12.13&2\\\hline
10&0.1&0&0&0&0.001&5.713&11.81&2\\\hline
11&0&0.1&10&0&0&2.580&13.9&3\\\hline
12&0&0.1&0&0.01&0&2.575&13.73&3\\\hline
13&0&0.1&0&0&0.001&5.745&11.79&2\\\hline
14&0&0&10&0.01&0&5.557&11.98&2\\\hline
15&0&0&10&0&0.001&5.685&11.89&2\\\hline
16&0&0&0&0.01&0.001&5.506&11.94&2\\\hline
17&0.1&0.1&10&0&0&2.479&13.86&3\\\hline
18&0.1&0.1&0&0.01&0&2.562&14.44&3\\\hline
19&0.1&0.1&0&0&0.001&2.471&14.07&3\\\hline
20&0.1&0&10&0.01&0&5.733&11.82&2\\\hline
21&0.1&0&10&0&0.001&5.657&11.77&2\\\hline
22&0.1&0&0&0.01&0.001&5.533&11.85&2\\\hline
23&0&0.1&10&0.01&0&2.580&13.98&3\\\hline
24&0&0&10&0.01&0.001&5.459&11.85&2\\\hline
25&0&0.1&0&0.01&0.001&2.626&14.27&3\\\hline
26&0&0.1&10&0&0.001&3.089&13.80&3\\\hline
27&0.1&0.1&10&0.01&0&2.724&13.78&3\\\hline
28&0.1&0.1&10&0&0.001&2.549&13.78&3\\\hline
29&0.1&0.1&0&0.01&0.001&2.507&13.84&3\\\hline
30&0.1&0&10&0.01&0.001&5.614&11.86&2\\\hline
31&0&0.1&10&0.01&0.001&2.497&13.81&3\\\hline
32&0.1&0.1&10&0.01&0.001&2.537&14.03&3\\
\hline
\end{tabular}}
\label{tab:multi_loss}
\end{center}
\end{table}
\subsection{Different types of regressor for the explicit perceptual loss}
\label{subssec:explicit_loss}
We conduct an experiment to evaluate the two types of regressor for the explicit perceptual loss.
Here, the network is optimized by only one loss, either the NN~(\ref{eq:dn}) or the other-ML~(\ref{eq:ml}) variant.
The results are shown in Table~\ref{tab:explicit_loss}.
The NN approach~(\ref{eq:dn}) shows a marginal decline compared with (\ref{eq:ml}),
so (\ref{eq:ml}) can be assumed to perform slightly better.
Furthermore, (\ref{eq:ml}) has a lower computational cost and requires fewer hyperparameters, which eases the optimization process.
\begin{table}[t!]
\small
\caption{The comparison of different types of regressor for the explicit perceptual loss.}
\begin{center}
\begin{tabular}{*1c|*1c|*1c}
\hline
Method & $\texttt{Perceptual Score}$ & RMSE\\
\hline
NN~(\ref{eq:dn}) & 5.729 & 11.83\\
other ML~(\ref{eq:ml}) & 5.666 & 11.90\\
\hline
\end{tabular}
\label{tab:explicit_loss}
\end{center}
\end{table}
\subsection{Subjective evaluation}
We performed a subjective test to quantify the performance of different combinations of loss functions. Specifically, we asked 30 raters to assign a score from 1 (bad quality) to 10 (best quality) to each image.
The raters rated 7 combinations of loss functions: $L_{mse}$, $L_{mse}+L_{vgg}$, $L_{mse}+L_{adv}$, $L_{mse}+L_{style}$, $L_{mse}+L_{NIQE}$, $L_{mse}+L_{Ma}$, and $L_{mse}+L_{vgg}+L_{adv}+L_{style}+L_{NIQE}+L_{Ma}$. In total, each rater rated 700 instances (7 combinations of 100 images).
The result of subjective evaluation is shown in Fig.~\ref{fig:subjective}.
Most of the raters give a lower score for ``MSE+Adv'' and ``All'', while there are slight differences between other methods.
The best subjective score is achieved by ``MSE+Style''.
Section~\ref{subssec:multi_loss} showed that the explicit perceptual loss is able to improve the \texttt{perceptual score}.
However, the subjective evaluation shows that a better \texttt{perceptual score} does not imply better perceived visual quality.
This result yields at least two observations.
First, there is no strong correlation between the existing \texttt{perceptual score} and subjective evaluation.
Second, the explicit perceptual loss tends to generate high-frequency artifacts, which human observers perceive as noise, as shown in Fig.~\ref{fig:subject}.
\begin{figure}[!t]
\begin{center}
\begin{tabular}[c]{cc}
\includegraphics[width=.17\textwidth]{hr}
&
\includegraphics[width=.17\textwidth]{lr}\\
{\small (a) GT}
&{\small (b) LR}\\
\includegraphics[width=.17\textwidth]{mse}
&
\includegraphics[width=.17\textwidth]{all}\\
{\small (c) MSE}
&{\small (d) Implicit}\vspace{0.2em}\\
\end{tabular}\vspace{1.5em}
\caption{The comparison of different loss functions. The implicit perceptual loss tends to create high-frequency artifacts, which human observers may perceive as noise.}
\label{fig:subject}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.5\textwidth]{subjective}
\end{center}\vspace{-1em}
\caption{The result of subjective test.}
\label{fig:subjective}
\end{figure}
\section{Concluding Remarks}
\label{section:conclusion}
We proposed an explicit way to utilize a machine learning model trained to predict the perceptual score of generated SR images.
The experimental results show that the proposed approach improves the perceptual score more than the implicit approaches.
We further characterized the implicit and explicit perceptual losses for easier interpretation.
We also demonstrated, using subjective evaluation, that the existing perceptual score does not correlate well with human perception.
These results pose the open challenge of creating better image quality metrics that can be used explicitly to optimize SR networks.
Future work includes an extension of this work to other SR problems
(i.e., video SR~\cite{DBLP:conf/cvpr/NahTGBHMSL19,DBLP:conf/cvpr/HarisSU19,eccv2020vsr}, time
SR~\cite{DBLP:conf/cvpr/JiangSJ0LK18,eccv2020tsr}, and space-time
SR~\cite{DBLP:conf/cvpr/HarisSU20,DBLP:conf/cvpr/XiangTZ0AX20}).
This work is partly supported by
JSPS KAKENHI Grant Number 19K12129.
\bibliographystyle{ieee}
\section{Introduction}
\label{sec:intro}
In general relativity, the asymptotic symmetries of asymptotically-flat spacetimes at both past and future null infinity are elements of the infinite-dimensional Bondi-Metzner-Sachs (BMS) group \cite{BBM, Sachs1} (see \cite{Ashtekar:2014zsa, AK} for recent reviews). It has been conjectured by Strominger \cite{Stro-CK-match} that the (a priori independent) BMS groups at past and future null infinity are related via an antipodal reflection near spatial infinity. This matching relation gives a \emph{global} ``diagonal'' asymptotic symmetry group for general relativity. If similar matching conditions relate the gravitational fields, then there exist infinitely many conservation laws in classical gravitational scattering between the incoming fluxes associated with the BMS group at past null infinity and the outgoing fluxes of the corresponding (antipodally identified) BMS group at future null infinity. These conservation laws are also related to soft graviton theorems \cite{He:2014laa,Strominger:2014pwa,Kapec:2015vwa,Avery:2015gxa,Campiglia:2015kxa,Campoleoni:2017mbt}, gravitational memory effects \cite{He:2014laa,Strominger:2014pwa,Pasterski:2015tva,HIW,Mao:2017wvx,Pate:2017fgt,Chatterjee:2017zeb} and the black hole information paradox \cite{Hawking:2016msc,Strominger:2017aeh,Hawking:2016sgy} (see \cite{Strominger:2017zoo} for a detailed review of recent developments and a complete list of references).
Such matching conditions on the asymptotic symmetries and fields have been shown in Maxwell theory on a background Minkowski spacetime \cite{CE} and in general asymptotically-flat spacetimes \cite{KP-EM-match}. In the gravitational case, the matching of the supertranslation symmetries and supermomentum charges has also been proven for linearized perturbations on a Minkowski background \cite{Tro} and in general asymptotically-flat spacetimes \cite{KP-GR-match}. For the translation symmetries, these reduce to the much older result of \cite{Ash-Mag-Ash} which shows that the Bondi \(4\)-momentum on future and past null infinity matches the \(4\)-momentum at spatial infinity.
The main technique used in \cite{CE,KP-EM-match,Tro,KP-GR-match} to prove these matching conditions is to ``interpolate'' between the symmetries and charges at past and future null infinities using the field equations and the asymptotic symmetries and charges defined near spatial infinity. In a background Minkowski spacetime this analysis can be done using asymptotic Bondi-Sachs coordinates near each null infinity and asymptotic Beig-Schmidt coordinates near spatial infinity. Using the explicit transformations between these coordinate systems the matching conditions can be shown to hold for Maxwell fields and linearized gravity on Minkowski spacetime \cite{CE,Tro}. But in general asymptotically-flat spacetimes the transformations between the asymptotic coordinates is not known explicitly. In this case the covariant formulation of asymptotic-flatness given by Ashtekar and Hansen \cite{AH}, which treats both null and spatial infinities in a unified spacetime-covariant manner, has proven fruitful to analyze the matching of the symmetries and charges \cite{KP-EM-match,KP-GR-match}.
However, for the charges associated with the Lorentz symmetries such matching conditions between past and future null infinity have not yet been proven, except for the case of stationary spacetimes \cite{AS-ang-mom}. With an eye towards establishing these conjectured matching conditions for Lorentz symmetries and charges we revisit the formulation of the asymptotic symmetries and charges at spatial infinity.
The asymptotic behaviour at spatial infinity can be studied using many different (but related) formalisms. Since our primary motivation is to, ultimately, make contact with null infinity, it will be more useful to use a spacetime covariant formalism without using any \((3+1)\) decomposition of the spacetime by spacelike hypersurfaces \cite{ADM, ADMG, RT-parity}. Such a \(4\)-dimensional formulation of asymptotic-flatness at spatial infinity can be given using suitable asymptotic coordinates as formulated by Beig and Schmidt \cite{Beig-Schmidt}. The asymptotic symmetries and charges using the asymptotic expansion of the metric in these coordinates have been worked out in detail \cite{Beig-Schmidt, CDV-Lorentz, CD}. But as mentioned above, the relation between the Beig-Schmidt coordinates and the coordinates adapted to null infinity (like the Bondi-Sachs coordinates) is not known in general spacetimes. Thus, we will use the coordinate independent formalism of Ashtekar and Hansen \cite{AH,Ash-in-Held} (\cref{def:AH}) to investigate the symmetries and their associated charges at spatial infinity.\footnote{The relation between the Ashtekar-Hansen formalism and the Beig-Schmidt coordinates is summarized in \cref{sec:coord}.}
The asymptotic behaviour of the gravitational field for any asymptotically-flat spacetime is most conveniently described in a conformally-related unphysical spacetime, the Penrose conformal-completion. In the unphysical spacetime, null infinities \(\ms I^\pm\) are smooth null boundaries while spatial infinity is a boundary point \(i^0\) which is the vertex of ``the light cone at infinity'' formed by \(\ms I^\pm\). For Minkowski spacetime the unphysical spacetime is smooth (in fact, analytic) at \(i^0\). However, in more general spacetimes, the unphysical metric is not even once-differentiable at spatial infinity unless the ADM mass of the spacetime vanishes \cite{AH}, and the unphysical spacetime manifold \emph{does not} have a smooth differential structure at \(i^0\). Thus, in the Ashtekar-Hansen formalism, instead of working directly at the point \(i^0\) where sufficiently smooth structure is unavailable, one works on a ``blowup'' --- the space of spatial directions at \(i^0\) --- given by a timelike-unit-hyperboloid \(\ms H\) in the tangent space at \(i^0\). Suitably conformally rescaled fields, whose limits to \(i^0\) depend on the direction of approach, induce \emph{smooth} fields on \(\ms H\) and we can study these smooth limiting fields using standard differential calculus on \(\ms H\). For instance, in Maxwell theory the rescaled field tensor \(\Omega F_{ab}\) and in general relativity the rescaled (unphysical) Weyl tensor \(\Omh C_{abcd}\) (where \(\Omega\) is the conformal factor used in the Penrose conformal completion) admit regular direction-dependent limits to \(i^0\), and these fields induce smooth tensor fields on \(\ms H\). Similarly, the Maxwell gauge transformations and vector fields in the physical spacetime (suitably rescaled) admit regular direction-dependent limits which generate the asymptotic symmetries at \(i^0\) (see \cref{sec:spi-symm}).
The asymptotic symmetries in general relativity at spatial infinity have also been studied in detail in the Ashtekar-Hansen formalism \cite{AH,Ash-in-Held}. However, in deriving the charges associated with these symmetries, Ashtekar and Hansen reduced the asymptotic symmetry algebra from the infinite-dimensional \(\mf{spi}\) algebra to the Poincar\'e algebra consisting only of translations and Lorentz transformations. This reduction was accomplished by demanding that the ``leading order'' magnetic part of the Weyl tensor, given by a tensor \(\dd B_{ab}\) on \(\ms H\) (see \cref{eq:EB-defn}), vanish and additionally choosing the conformal factor near \(i^0\) so that the tensor potential \(\dd K_{ab}\) for \(\dd B_{ab}\) also vanishes (see \cref{rem:GR-gauge-choice}). This restriction was also imposed in \cite{MMMV-Lorentz,CDV-Lorentz}. In the work of Comp\`ere and Dehouck \cite{CD}, the condition \(\dd B_{ab} = 0\) was not imposed; however, they also specialized to a conformal factor where the trace \(\dd h^{ab} \dd K_{ab}\) (where $\dd{h}^{ab}$ denotes the inverse of the metric on $\ms H$) was set to vanish. As we will show below (see \cref{sec:conf-charges}), the charges of the Lorentz symmetries at spatial infinity are not conformally-invariant but shift by the charge of a supertranslation. This is entirely analogous to the supertranslation ambiguities in the Lorentz charges at null infinity. Thus, when matching the Lorentz charges at spatial infinity to those at past and future null infinity, one would need to perform this matching in the ``same'' choice of conformal factor in all three regions. A priori, it is not clear what the special choices of conformal factor chosen in the above-mentioned analyses imply at null infinity. Thus, we will not impose any such restrictions on the conformal factor and not impose any conditions on \(\dd K_{ab}\) (apart from its equations of motion arising from the Einstein equation) in our analysis. As we will show, one peculiar consequence of keeping a completely unrestricted conformal factor will be that our charges will not be exactly conserved but will have a non-vanishing flux through regions of \(\ms H\) (except for pure translations). Thus, these charges are not associated with the point \(i^0\) at spatial infinity, but with cross-sections of the ``blowup'' \(\ms H\). This is not a serious drawback; as shown in \cite{KP-EM-match,KP-GR-match} for matching the symmetries and charges at null infinity, one only requires that the \emph{total} flux of the charges through all of \(\ms H\) vanish but there can be a non-vanishing flux through local regions of \(\ms H\). Thus, our main goal in this work is to analyze the symmetries and charges in general relativity without imposing any restrictions on the choice of conformal factor near spatial infinity.
\begin{center}* * *\end{center}
In our analysis of the asymptotic charges we will use the \emph{covariant phase space} formalism described below. Since the relevant quantities in the covariant phase space are defined in terms of the physical metric and their perturbations, we first analyze the conditions on the corresponding unphysical quantities so that they preserve the asymptotic-flatness conditions and the universal structure at \(i^0\) (\cref{sec:pert}). To derive the asymptotic symmetry algebra we then consider a physical metric perturbation \(\Lie_\xi \hat g_{ab}\) generated by an infinitesimal diffeomorphism and demand that it preserve the asymptotic conditions in the unphysical spacetime in the limit to \(i^0\). This will provide us with the following description of the asymptotic symmetries at \(i^0\) (\cref{sec:spi-symm}). The asymptotic symmetry algebra \(\mf{spi}\) is parametrized by a pair \((\dd f, \dd X^a)\) where \(\dd f\) is any smooth function and \(\dd X^a\) is a Killing field on \(\ms H\). The function \(\dd f\) parametrizes the supertranslations and \(\dd X^a\) parametrize the Lorentz symmetries. The \(\mf{spi}\) algebra is then a semi-direct sum of the Lorentz algebra with the infinite-dimensional abelian subalgebra of supertranslations. Note that this is the same as the asymptotic symmetries derived in \cite{AH,Ash-in-Held}. The only difference in our analysis is that we obtain the symmetries by analyzing the conditions on diffeomorphisms in the physical spacetime instead of using the unphysical spacetime directly as in \cite{AH,Ash-in-Held}.
To obtain the charges associated with these symmetries, the primary quantity of interest is the \emph{symplectic current} derived from the Lagrangian of a theory (see \cite{LW,WZ} for details). The symplectic current \(\df\omega(\hat g; \delta_1 \hat g, \delta_2\hat g)\) is a local and covariant \(3\)-form which is antisymmetric and bilinear in the two metric perturbations \(\delta \hat g_{ab}\) on the physical spacetime. It can be shown that when the second perturbation \(\delta \hat g_{ab} = \Lie_\xi \hat g_{ab}\) is the perturbation corresponding to an infinitesimal diffeomorphism generated by a vector field \(\xi^a\), we have
\begin{equation} \label{eq:sympcurrent-charge}
\omega(\hat{g}; \delta \hat{g}, \lie_{\xi} \hat{g}) = d[\delta Q_{\xi} - \xi \cdot \theta(\delta \hat g)]\,,
\end{equation}
where we have assumed that \(\hat g_{ab}\) satisfies the equations of motion and \(\delta \hat g_{ab}\) satisfies the linearized equations of motion. The \(2\)-form \(Q_\xi\) is the \emph{Noether charge} associated with the vector field \(\xi^a\) and the \(3\)-form \(\theta(\delta \hat g)\) is the \emph{symplectic potential} \cite{LW,WZ}. If we integrate \cref{eq:sympcurrent-charge} over a \(3\)-dimensional surface \(\Sigma\) with boundary \(\partial\Sigma\) we get
\begin{equation} \label{eq:pertcharge}
\int_{\Sigma}\omega[\hat{g};\delta \hat{g}, \lie_{\xi}\hat{g}] = \int_{\partial \Sigma} \delta Q_{\xi} - \xi \cdot \theta(\delta \hat g) \,.
\end{equation}
To define the asymptotic charges at spatial infinity, we would like to evaluate \cref{eq:pertcharge} when the surface \(\Sigma\) extends to a suitably regular \(3\)-surface at \(i^0\) in the unphysical spacetime. Given the low amount of differentiability at \(i^0\) the appropriate condition is that \(\Sigma\) extends to a \(C^{>1}\) surface at \(i^0\). The limit of the boundary \(\partial\Sigma\) to \(i^0\) corresponds to a \(2\)-sphere cross-section \(S\) of the unit-hyperboloid \(\ms H\) in the Ashtekar-Hansen formalism. Then, the limiting integral on the right-hand-side of \cref{eq:pertcharge} (with the asymptotic conditions imposed on the metric perturbations as well as the symmetries) will define a perturbed charge on \(S\) associated with the asymptotic symmetry generated by \(\xi^a\). However, even though the explicit expressions for the integrand on the right-hand-side of \cref{eq:pertcharge} are well-known (see for instance \cite{WZ}), computing this limiting integral is difficult. So we will use an alternative strategy described next.
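For reference, in vacuum general relativity these explicit expressions take the form (in the conventions of \cite{WZ}, up to an overall normalization)
\begin{equation}
\theta_{abc}(\hat g; \delta \hat g) = \frac{1}{16\pi}\, \hat\varepsilon_{dabc}\, \hat g^{de} \hat g^{fg} \left( \hat\nabla_g \delta \hat g_{ef} - \hat\nabla_e \delta \hat g_{fg} \right) \,, \quad (Q_\xi)_{ab} = - \frac{1}{16\pi}\, \hat\varepsilon_{abcd} \hat\nabla^c \xi^d \,,
\end{equation}
with the symplectic current given by the antisymmetrized variation \(\omega(\hat g; \delta_1 \hat g, \delta_2 \hat g) = \delta_1 \theta(\hat g; \delta_2 \hat g) - \delta_2 \theta(\hat g; \delta_1 \hat g)\); here \(\hat\varepsilon_{abcd}\) and \(\hat\nabla_a\) denote the volume element and covariant derivative of the physical metric \(\hat g_{ab}\).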
We will show that with the appropriate asymptotic-flatness conditions at \(i^0\), the symplectic current \(3\)-form \(\omega \equiv \omega_{abc}\) is such that \(\Omega^{\nfrac{3}{2}} \omega_{abc}\) has a direction-dependent limit to \(i^0\). The pullback of this limit to \(\ms H\), which we denote by \(\pb{\dd\omega}\), defines a symplectic current on \(\ms H\). We show that when one of the perturbations in this symplectic current is generated by an asymptotic \(\mf{spi}\) symmetry \((\dd f, \dd X^a)\), we have
\begin{equation}
\pb{\dd\omega}(g; \delta g, \delta_{(\dd f, \dd X)} g) = - \dd\varepsilon_3 \dd D^a \dd Q_a (g; \delta g, (\dd f, \dd X))\,,
\end{equation}
where \(\dd\varepsilon_3\) and \(\dd D\) are the volume element and covariant derivative on \(\ms H\). The covector \(\dd Q_a (g; \delta g, (\dd f, \dd X))\) is a local and covariant functional of the background fields corresponding to the asymptotic (unphysical) metric \(g_{ab}\), and linear in the asymptotic (unphysical) metric perturbations \(\delta g_{ab}\) and the asymptotic symmetry parametrized by \((\dd f, \dd X^a)\). Thus, we can write the symplectic current, with one perturbation generated by an asymptotic symmetry, as a total derivative on \(\ms H\). Then, in analogy to \cref{eq:pertcharge}, we define the perturbed charge on a cross-section \(S\) of \(\ms H\) by the integral
\begin{equation}\label{eq:pertcharge-hyp}
\int_S \dd\varepsilon_2 \dd u^a \dd Q_a (g; \delta g, (\dd f, \dd X))\,,
\end{equation}
where \(\dd\varepsilon_2\) is the area element and \(\dd u^a\) is a unit-timelike normal to the cross-section \(S\) within \(\ms H\). We then show that when the asymptotic symmetry is a supertranslation \(\dd f\), the quantity \(\dd Q_a (g; \delta g, \dd f)\) is integrable, i.e, it can be written as the \(\delta\) of some covector which is itself a local and covariant functional of the asymptotic fields and supertranslation symmetries. Then ``integrating'' \cref{eq:pertcharge-hyp} in the space of asymptotic fields, we can define a charge associated with the supertranslations on any cross-section \(S\) of \(\ms H\) (see \cref{sec:st}). When the asymptotic symmetry is a Lorentz symmetry parameterized by a Killing vector field \(\dd X^a\) on \(\ms H\), \cref{eq:pertcharge-hyp} \emph{cannot} be written as the \(\delta\) of some quantity (unless we restrict to the choice of conformal factor where \(\dd h^{ab} \dd K_{ab} = 0\) as described above). In this case, we will adapt the prescription by Wald and Zoupas \cite{WZ} to define an integrable charge for Lorentz symmetries (\cref{sec:lorentz-charge}). Then the change of these charges over a region \(\Delta\ms H\) bounded by two cross-sections provides a flux formula for these charges. In general, these fluxes will be non-vanishing (except for translation symmetries) unless we again restrict to the conformal factor where \(\dd h^{ab} \dd K_{ab} = 0\). However, as mentioned above, from the point of view of matching these charges to those on null infinity, the special conformal choices might not be convenient and it is not necessary to have exactly conserved charges on \(\ms H\). Thus, we will not restrict the conformal factor in any way and work with charges which can have non-trivial fluxes through some region of \(\ms H\).
\begin{center}* * *\end{center}
The rest of this paper is organized as follows. In \cref{sec:AH} we recall the definition of asymptotic-flatness at spatial infinity in terms of an Ashtekar-Hansen structure. To illustrate our approach outlined above we first study the simpler case of Maxwell fields at spatial infinity, and derive the associated symmetries and charges in \cref{sec:Maxwell}. In \cref{sec:GR-eom} we then consider the asymptotic gravitational fields and Einstein equations at spatial infinity. We also describe the universal structure, that is the structure that is common to all spacetimes which are asymptotically-flat at \(i^0\), in \cref{sec:univ-str}. In \cref{sec:pert} we analyze the conditions on metric perturbations which preserve asymptotic flatness and obtain the limiting form of the symplectic current of general relativity on the space of directions \(\ms H\). In \cref{sec:spi-symm}, using the analysis of the preceding section, we derive the asymptotic symmetry algebra (the \(\mf{spi}\) algebra) by considering infinitesimal metric perturbations generated by diffeomorphisms which preserve the asymptotic flatness conditions. In \cref{sec:spi-charges} we derive the charges and fluxes corresponding to these \(\mf{spi}\) symmetries. We end with a summary and describe possible future directions in \cref{sec:disc}.
We collect some useful results and asides in the appendices. In \cref{sec:coord} we construct a useful coordinate system near \(i^0\) using the asymptotic flatness conditions on the unphysical metric and relate it to the Beig-Schmidt coordinates in the physical spacetime. \cref{sec:useful-H} collects useful results on the unit-hyperboloid \(\ms H\) concerning Killing vector fields, symmetric tensor fields, and a theorem by Wald showing that (with suitable conditions) closed differential forms are exact. Computations detailing the change in the Lorentz charge under conformal transformations are presented in \cref{sec:conf-lorentz-charge}. In \cref{sec:amb} we show that our charges are unambiguously defined by the symplectic current of vacuum general relativity. In \cref{sec:new-beta} we generalize the Lorentz charges derived in \cref{sec:lorentz-charge} to include spacetimes where the ``leading order'' magnetic part of the Weyl tensor \(\dd B_{ab}\) is allowed to be non-vanishing.
\begin{center}* * *\end{center}
We use an abstract index notation with indices \(a,b,c,\ldots\) for tensor fields. Quantities defined on the physical spacetime will be denoted by a ``hat'', while the ones on the conformally-completed unphysical spacetime are without the ``hat'' e.g. \(\hat g_{ab}\) is the physical metric while \(g_{ab}\) is the unphysical metric on the conformal-completion. We denote the spatial directions at \(i^0\) by \(\vec\eta\). Regular direction-dependent limits of tensor fields, which we will denote to be $C^{>-1}$, will be represented by a boldface symbol e.g. \(\dd C_{abcd}(\vec\eta)\) is the limit of the (rescaled) unphysical Weyl tensor along spatial directions at \(i^0\). The rest of our conventions follow those of Wald \cite{Wald-book}.
\section{Asymptotic-flatness at spatial infinity: Ashtekar-Hansen structure}
\label{sec:AH}
We define spacetimes which are asymptotically-flat at null and spatial infinity using an Ashtekar-Hansen structure \cite{AH, Ash-in-Held}. We use the following notation for causal structures from \cite{Hawking-Ellis}: \(J(i^0)\) is the causal future of a point \(i^0\) in \(M\), \(\bar J(i^0)\) is its closure, \(\dot J(i^0) \) is its boundary and \(\ms I \defn \dot J(i^0) - i^0\). We also use the definition and notation for direction-dependent tensors from \cite{Herb-dd}, see also Appendix B of \cite{KP-GR-match}.
\begin{definition}[Ashtekar-Hansen structure \cite{Ash-in-Held}]\label{def:AH}
A \emph{physical} spacetime \((\hat M, \hat g_{ab})\) has an \emph{Ashtekar-Hansen structure} if there exists another \emph{unphysical} spacetime \((M, g_{ab})\), such that
\begin{condlist}
\item \(M\) is \(C^\infty\) everywhere except at a point \(i^0\) where it is \(C^{>1}\),
\item the metric \(g_{ab}\) is \(C^\infty\) on \(M-i^0\), and \(C^0\) at \(i^0\) and \(C^{>0}\) along spatial directions at \(i^0\),
\item there is an embedding of \(\hat M\) into \(M\) such that \(\bar J(i^0) = M - \hat M\),
\item there exists a function \(\Omega\) on \(M\), which is \(C^\infty\) on \(M-i^0\) and \(C^2\) at \(i^0\) so that \(g_{ab} = \Omega^2 \hat g_{ab}\) on \(\hat M\) and
\begin{condlist}
\item \(\Omega = 0\) on \(\dot J(i^0)\),
\item \(\nabla_a \Omega \neq 0\) on \(\ms I\),
\item at \(i^0\), \(\nabla_a \Omega = 0\), \(\nabla_a \nabla_b \Omega = 2 g_{ab}\). \label{cond:Omega-at-i0}
\end{condlist}
\item There exists a neighbourhood \(N\) of \(\dot J(i^0)\) such that \((N, g_{ab})\) is strongly causal and time orientable, and in \(N \inter \hat M\) the physical metric \(\hat g_{ab}\) satisfies the vacuum Einstein equation \(\hat R_{ab} = 0\),
\item The space of integral curves of \(n^a = g^{ab}\nabla_b \Omega\) on \(\dot J(i^0)\) is diffeomorphic to the space of null directions at \(i^0\), \label{cond:int-curves}
\item The vector field \(\varpi^{-1} n^a\) is complete on \(\ms I\) for any smooth function \(\varpi\) on \(M - i^0\) such that \(\varpi > 0\) on \(\hat M \union \ms I\) and \(\nabla_a(\varpi^4 n^a) = 0\) on \(\ms I\). \label{cond:complete}
\end{condlist}
\end{definition}
The physical role of the conditions in \cref{def:AH} is to ensure that the point \(i^0\) is spacelike related to all points in the physical spacetime \(\hat M\), and represents \emph{spatial infinity}, and that null infinity \(\ms I \defn \dot J(i^0) - i^0\) has the usual structure.
Note that the metric \(g_{ab}\) is only \(C^{>0}\) at \(i^0\) along spatial directions, that is, the metric is continuous but the metric connection is allowed to have limits which depend on the direction of approach to \(i^0\). This low differentiability structure is essential to allow spacetimes with non-vanishing ADM mass \cite{AH, Ash-in-Held}. In the following we will only consider the behaviour of the spacetime approaching \(i^0\) along spatial directions, and we will not need the conditions corresponding to null infinity.
\begin{center}* * *\end{center}
For spacetimes satisfying \cref{def:AH} we have the following limiting structures at \(i^0\) when approached along spatial directions.
Along spatial directions \(\eta_a \defn \nabla_a \Omh\) is \(C^{>-1}\) at \(i^0\) and
\begin{equation}\label{eq:eta-defn}
\dd\eta^a \defn \lim_{\to i^0} \nabla^a \Omh\,,
\end{equation}
determines a \(C^{>-1}\) spatial unit vector field at \(i^0\) representing the spatial directions \(\vec\eta\) at \(i^0\). The space of directions \(\vec\eta\) in \(Ti^0\) is a unit-hyperboloid \(\ms H\).
If \(T^{a \ldots}{}_{b \ldots}\) is a \(C^{>-1}\) tensor field at \(i^0\) in spatial directions then, \(\lim_{\to i^0} T^{a \ldots}{}_{b \ldots} = \dd T^{a \ldots}{}_{b \ldots}(\vec\eta)\) is a smooth tensor field on \(\ms H\). Further, the derivatives of \(\dd T^{a \ldots}{}_{b \ldots}(\vec\eta)\) to all orders with respect to the direction \(\vec\eta\) satisfy\footnote{The factors of \(\Omh\) on the right-hand-side of \cref{eq:dd-der-spatial} convert between \(\nabla_a\) and the derivatives with respect to the directions; see \cite{Ash-in-Held,Geroch-asymp}.}
\begin{equation}\label{eq:dd-der-spatial}
\dd \partial_c \cdots \dd \partial_d \dd T^{a \ldots}{}_{b \ldots}(\vec\eta) = \lim_{\to i^0} \Omh \nabla_c \cdots \Omh \nabla_d T^{a \ldots}{}_{b \ldots}\,,
\end{equation}
where \(\dd \partial_a\) is the derivative with respect to the directions \(\vec \eta\) defined by
\begin{equation}\label{eq:dd-derivative-spatial}\begin{split}
\dd v^c \dd \partial_c \dd T^{a \ldots}{}_{b \ldots}(\vec\eta) & \defn \lim_{\epsilon \to 0} \frac{1}{\epsilon} \big[ \dd T^{a \ldots}{}_{b \ldots}(\vec\eta + \epsilon \vec v) - \dd T^{a \ldots}{}_{b \ldots}(\vec\eta) \big] \quad \text{for all } \dd v^a \in T\ms H \,,\\
\dd \eta^c \dd \partial_c \dd T^{a \ldots}{}_{b \ldots}(\vec\eta) & \defn 0\,.
\end{split}\end{equation}
The metric \(\dd h_{ab}\) induced on \(\ms H\) by the universal metric \(\dd g_{ab}\) at \(i^0\) satisfies
\begin{equation}\label{eq:d-eta-h}
\dd h_{ab} \defn \dd g_{ab} - \dd \eta_a \dd \eta_b = \dd \partial_a \dd \eta_b\,.
\end{equation}
Further, if \(\dd T^{a \ldots}{}_{b \ldots}(\vec\eta)\) is orthogonal to \(\dd\eta^a\) in all its indices then it defines a tensor field \(\dd T^{a \ldots}{}_{b \ldots}\) intrinsic to \(\ms H\). In this case, it follows from \cref{eq:d-eta-h} and \(\dd\partial_c \dd g_{ab} = 0\) (since \(\dd g_{ab}\) is direction-independent at \(i^0\)) that projecting \emph{all} the indices in \cref{eq:dd-der-spatial} using \(\dd h_{ab}\) defines a derivative operator \(\dd D_a\) intrinsic to \(\ms H\) which is also the covariant derivative operator associated with \(\dd h_{ab}\). We also define
\begin{equation}\label{eq:volume-hyp}
\dd\varepsilon_{abc} \defn - \dd\eta^d \dd\varepsilon_{dabc} \eqsp \dd\varepsilon_{ab} \defn \dd u^c \dd\varepsilon_{cab}\,,
\end{equation}
where \(\dd\varepsilon_{abcd}\) is the volume element at \(i^0\) corresponding to the metric \(\dd g_{ab}\), \(\dd\varepsilon_{abc}\) is the induced volume element on \(\ms H\), and \(\dd\varepsilon_{ab}\) is the induced area element on some cross-section \(S\) of \(\ms H\) with a future-pointing timelike normal \(\dd u^a\) such that \(\dd h_{ab} \dd u^a \dd u^b = -1\).
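For concreteness, we note a minimal coordinate sketch (these hyperboloidal coordinates \((\tau,\theta,\phi)\) are purely illustrative and are not needed for any of the general arguments below): the line element of \(\dd h_{ab}\) on \(\ms H\) can be written as
\begin{equation}
- d\tau^2 + \cosh^2\tau \left( d\theta^2 + \sin^2\theta\, d\phi^2 \right)\,,
\end{equation}
so that \(\dd\varepsilon_3 = \cosh^2\tau \sin\theta\, d\tau \wedge d\theta \wedge d\phi\), the surfaces \(S_\tau\) of constant \(\tau\) are cross-sections with future-pointing unit normal \(\dd u^a = (\partial_\tau)^a\) and area \(4\pi\cosh^2\tau\), and the reflection \(\vec\eta \mapsto -\vec\eta\) used later acts as \(\tau \mapsto -\tau\) combined with the antipodal map on the sphere.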
\begin{remark}[Conformal freedom]\label{rem:conf}
It follows from the conditions in \cref{def:AH} that the allowed conformal freedom \(\Omega \mapsto \omega \Omega\) is such that \(\omega > 0\) is smooth in \(M - i^0\), is \(C^{>0}\) at \(i^0\) and \(\omega\vert_{i^0} = 1\). From these conditions it follows that
\begin{equation} \label{eq:conf-tr-defn}
\omega = 1 + \Omh \alpha \,,
\end{equation}
where \(\alpha\) is \(C^{>-1}\) at \(i^0\). Let \(\dd\alpha(\vec\eta) \defn \lim_{\to i^0} \alpha\); then from \cref{eq:conf-tr-defn} we also get
\begin{equation}
\lim_{\to i^0} \nabla_a \omega = \dd\alpha \dd\eta_a + \dd D_a \dd\alpha\,.
\end{equation}
Note, in particular, that the unphysical metric \(\dd g_{ab}\) at \(i^0\) is invariant under conformal transformations, while \(\eta^a\) transforms as
\begin{equation}
\eta^a \mapsto \omega^{-2}[\omega^\nfrac{1}{2} \eta^a + \tfrac{1}{2} \omega^{-\nfrac{1}{2}} \Omh \nabla^a \omega ] \implies \dd\eta^a \mapsto \dd\eta^a\,.
\end{equation}
Since \(\omega\vert_{i^0} = 1\) and \(\Omh \nabla^a \omega \to 0\) in the limit to \(i^0\) (the limit of \(\nabla_a\omega\) being finite), the unit spatial directions \(\vec\eta\), the space of directions \(\ms H\), and the induced metric \(\dd h_{ab}\) on it are also invariant.
\end{remark}
\section{Maxwell fields: symmetries and charges at \(i^0\)}
\label{sec:Maxwell}
To illustrate our general strategy, we first consider the simpler case of Maxwell fields on any fixed background spacetime satisfying \cref{def:AH}.
In the physical spacetime \(\hat M\), let \(\hat F_{ab}\) be the Maxwell field tensor satisfying the Maxwell equations
\begin{equation}\label{eq:Max-phys}
\hat g^{ac} \hat g^{bd}\hat\nabla_b \hat F_{dc} = 0 \eqsp \hat\nabla_{[a} \hat F_{bc]} = 0\,.
\end{equation}
In the unphysical spacetime \(M\) with \(F_{ab} \defn \hat F_{ab}\) we have
\begin{equation}\label{eq:maxwell}
\nabla_b F^{ba} = 0 \eqsp \nabla_{[a} F_{bc]} = 0\,.
\end{equation}
The Maxwell tensor $F_{ab}$ is smooth everywhere in the unphysical spacetime except at $i^{0}$. Analyzing the behaviour of \(F_{ab}\) in the simple case of a static point charge in Minkowski spacetime, it can be seen that \(F_{ab}\) diverges in the limit to $i^{0}$, but $\Omega F_{ab}$ admits a direction-dependent limit.\footnote{Note that this diverging behaviour of \(F_{ab}\) refers to the tensor in the unphysical spacetime with the chosen \(C^{>1}\) differential structure at \(i^0\). In an asymptotically Cartesian coordinate system of the physical spacetime, this behaviour reproduces the standard \(1/r^2\) falloff for \(F_{ab}\) and \(\dd F_{ab}(\vec\eta)\) is the ``leading order'' piece at \(O(1/r^2)\).} Hence we assume as our asymptotic condition that
\begin{equation} \label{eq:dd-F}
\lim_{\to i^{0}}\Omega F_{ab} = \dd{F}_{a b} (\vec\eta) \text{ is } C^{>-1}\,.
\end{equation}
The direction-dependent limit of the Maxwell tensor, $\dd{F}_{ab}$, induces smooth tensor fields on $\ms H$. These are given by the ``electric'' and ``magnetic'' parts of the Maxwell tensor defined by
\begin{equation}\label{eq:EB-F-defn}
\dd{E}_{a}(\vec\eta) = \dd{F}_{ab}(\vec\eta) \dd{\eta}^{b} \eqsp \dd{B}_{a}(\vec\eta) = * \dd{F}_{ab}(\vec\eta) \dd{\eta}^{b}\,,
\end{equation}
where \(* \dd{F}_{ab}(\vec\eta) \defn \tfrac{1}{2} \dd\varepsilon_{ab}{}^{cd} \dd F_{cd}(\vec\eta) \) is the Hodge dual with respect to the unphysical volume element \(\dd\varepsilon_{abcd}\) at \(i^0\). The electric and magnetic fields are orthogonal to \(\dd\eta^a\) and thus induce intrinsic fields \(\dd E_a\) and \(\dd B_a\) on \(\ms H\). Note that $\dd{F}_{ab}$ can be reconstructed from $\dd{E}_{a}$ and $\dd{B}_{a}$ using
\begin{equation}
\dd{F}_{ab} = 2 \dd{E}_{[a} \dd{\eta}_{b]} + \dd{\varepsilon}_{abcd} \dd{\eta}^{c} \dd{B}^{d}\,.
\end{equation}
The asymptotic Maxwell equations are obtained by multiplying \cref{eq:maxwell} by \(\Omega^{\nfrac{3}{2}}\) and taking the limit to \(i^0\) in spatial directions (see \cite{AH} for details)
\begin{equation}\label{eq:Max-eqn-asymp}\begin{aligned}
\dd D^{a}\dd{E}_{a} &=0 \eqsp \dd D_{[a} \dd{E}_{b]} =0 \,, \\
\dd D^{a} \dd{B}_{a} &=0 \eqsp \dd D_{[a}\dd{B}_{b]} =0\,.
\end{aligned}\end{equation}\\
To use the symplectic formalism for Maxwell theory, we will need to introduce the vector potential as the basic dynamical field. Let \(\hat A_a\) be a vector potential for \(\hat F_{ab}\) so that \(\hat F_{ab} = 2 \hat\nabla_{[a} \hat A_{b]}\) in the physical spacetime. Then, \(A_a \defn \hat A_a\) is a vector potential for \(F_{ab}\) in the unphysical spacetime. We further assume that the vector potential \(A_a\) for \(F_{ab}\) is chosen so that \(\Omh A_a\) is \(C^{>-1}\) at \(i^0\). Then define the asymptotic potentials
\begin{equation}\label{eq:EM-potentials}
\dd V(\vec\eta) \defn \dd\eta^a \lim_{\to i^0} \Omh A_a \eqsp \dd A_{a}(\vec\eta) \defn \dd h_a{}^b \lim_{\to i^0} \Omh A_b\,.
\end{equation}
The corresponding smooth fields \(\dd V\) and \(\dd A_a\) induced on \(\ms H\) then act as potentials for the electric and magnetic fields through
\begin{equation}\label{eq:Max-EB-potentials}
\dd E_a = \dd D_a \dd V \eqsp \dd B_a = \tfrac{1}{2} \dd\varepsilon_a{}^{bc} \dd D_b \dd A_c\,.
\end{equation}
Even though we do not need this form, for completeness, we note that the Maxwell equations on \(\ms H\) (\cref{eq:Max-eqn-asymp}) can be written in terms of the potentials \(\dd V\) and \(\dd A_a\) as
\begin{equation}
\dd D^2 \dd V = 0 \eqsp \dd D^2 \dd A_a = \dd D_a \dd D^b \dd A_b + 2 \dd A_a\,.
\end{equation}\\
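As an elementary illustration (in the illustrative hyperboloidal coordinates \((\tau,\theta,\phi)\) of \cref{sec:AH}; the constants \(a\) and \(b\) below are arbitrary), the angle-independent solutions for \(\dd V\) can be found in closed form:
\begin{equation}
\dd D^2 \dd V = - \frac{1}{\cosh^2\tau}\, \partial_\tau \left( \cosh^2\tau\, \partial_\tau \dd V \right) = 0 \implies \dd V = a + b \tanh\tau\,,
\end{equation}
for which \(\dd E_a = \dd D_a \dd V\) has the single non-vanishing component \(\dd E_\tau = b/\cosh^2\tau\). We will use this solution below to illustrate the Coulomb charge.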
Now consider a gauge transformation of the vector potential
\begin{equation}\label{eq:EM-gauge}
A_a \mapsto A_a + \nabla_a \lambda\,,
\end{equation}
where \(\lambda\) is \(C^{>-1}\) at \(i^0\). Then with \(\dd\lambda(\vec\eta) \defn \lim_{\to i^0} \lambda\), the gauge transformations of the asymptotic potentials (\cref{eq:EM-potentials}) on \(\ms H\) are given by
\begin{equation}\label{eq:Max-symm}
\dd V \mapsto \dd V \eqsp \dd A_a \mapsto \dd A_a + \dd D_a \dd\lambda\,.
\end{equation}
Thus, the asymptotic symmetries of Maxwell fields at \(i^0\) are given by the functions \(\dd\lambda\) on \(\ms H\).
\begin{remark}[Special choices of gauge]\label{rem:EM-gauge-choice}
The gauge freedom in the Maxwell vector potential can be used to impose further restrictions on the potential \(\dd A_a\) on \(\ms H\). We illustrate the following two gauge conditions which will have analogues in the gravitational case (see \cref{rem:GR-gauge-choice}).
\begin{enumerate}
\item Consider the Lorenz gauge condition \(\hat g^{ab} \hat\nabla_a \hat A_b = 0\) on the physical vector potential \(\hat A_a\) in the physical spacetime as used in \cite{CE,HT-em}. Multiplying this condition by \(\Omega^{-1}\) and taking the limit to \(i^0\), using \cref{eq:EM-potentials} we get the asymptotic gauge condition
\begin{equation}\label{eq:Lorenz-gauge-H}
\dd D^a \dd A_a = 2 \dd V\,.
\end{equation}
Alternatively, from \cref{eq:Max-symm} we see that
\begin{equation}
\dd D^a \dd A_a \mapsto \dd D^a \dd A_a + \dd D^2 \dd\lambda\,.
\end{equation}
By solving a linear hyperbolic equation for \(\dd\lambda\) we can choose a new gauge in which
\begin{equation}
\dd D^a \dd A_a = 0\,.
\end{equation}
Both these gauge conditions reduce the allowed asymptotic symmetries to
\begin{equation}
\dd D^2 \dd\lambda = 0\,.
\end{equation}
\item If we impose the restriction \(\dd B_a = 0\) then \(\dd D_{[a}\dd A_{b]} = 0\) and thus there exists a function \(\dd A\) so that \(\dd A_a = \dd D_a \dd A\).\footnote{This follows from the fact that every loop in \(\ms H\) is contractible to a point and hence the first de Rham cohomology group of \(\ms H\) is trivial.} Then using the transformation \cref{eq:Max-symm} we can set \(\dd A_a = 0\). The remaining asymptotic symmetries are just the Coulomb symmetries \(\dd \lambda = \text{constant}\). This is analogous to the condition used by Ashtekar and Hansen in the gravitational case to reduce the asymptotic symmetries to the Poincar\'e algebra \cite{AH}.
\end{enumerate}
In what follows we will not need to impose any gauge condition on the potential \(\dd A_a\) and our analysis will be completely gauge invariant.
\end{remark}
\begin{remark}[Logarithmic gauge transformations]
\label{rem:log-gauge}
Note that above we only considered gauge transformations \cref{eq:EM-gauge} where the gauge parameter \(\lambda\) was \(C^{>-1}\) at \(i^0\). However, there is an additional ambiguity in the choice of gauge given by the \emph{logarithmic gauge transformations} of the form
\begin{equation}
A_a \mapsto A_a + \nabla_a (\Lambda \ln\Omh)\,,
\end{equation}
where \(\Lambda\) is \(C^{>0}\) at \(i^0\). Under this gauge transformation \(\Omh A_a\) is still \(C^{>-1}\) at \(i^0\), and from \cref{eq:EM-potentials} we have the transformations
\begin{equation}
\dd V \mapsto \dd V + \dd\Lambda \eqsp \dd A_a \mapsto \dd A_a \,,
\end{equation}
where \(\dd\Lambda \defn \lim_{\to i^0} \Lambda\) which is direction-independent at \(i^0\) and induces a constant function on \(\ms H\). From \cref{eq:Max-EB-potentials} we see that the fields \(\dd E_a\) and \(\dd B_a\) are invariant under this transformation. Since our charges and fluxes, derived below, will be expressed in terms of \(\dd E_a\) we will not need to fix this logarithmic gauge ambiguity in the potentials for electromagnetism. However, there is an analogous logarithmic translation ambiguity in the gravitational case which we will need to fix (see \cref{rem:log-trans}). Thus we now illustrate how this logarithmic gauge ambiguity can be fixed even in electromagnetism.
Since the metric \(\dd g_{ab}\) in the tangent space \(Ti^0\) is universal and isometric to the Minkowski metric, it is invariant under the reflection of the spatial directions \(\vec\eta \mapsto - \vec\eta\). This gives rise to a reflection isometry of the metric \(\dd h_{ab}\) on the space of directions \(\ms H\). It was shown in \cite{KP-EM-match} that the Maxwell fields on \(\ms H\) which ``match'' on to asymptotically-flat Maxwell fields on null infinity are the ones where the electric field \(\dd E_a\) is reflection-odd, i.e.
\begin{equation}
\dd E_a (\vec\eta) = - \dd E_a(-\vec\eta)\,.
\end{equation}
Further, since the logarithmic gauge parameter \(\dd\Lambda\) is \emph{direction-independent} we have that \(\dd\Lambda\) is reflection-even
\begin{equation}
\dd \Lambda(\vec\eta) = \dd\Lambda(-\vec\eta)\,.
\end{equation}
For a reflection-odd \(\dd E_a\), it follows from \cref{eq:Max-EB-potentials} that a logarithmic gauge transformation can be used to demand that the potential \(\dd V\) is also reflection-odd, so that
\begin{equation}
\dd V (\vec\eta) = - \dd V(-\vec\eta)\,.
\end{equation}
This fixes the logarithmic gauge ambiguity in the potentials.
\end{remark}
\begin{center}* * *\end{center}
Let us now analyze the charges and fluxes for this theory. To do this, we start by studying the symplectic current. In vacuum electromagnetism, this is given by:
\begin{equation}\label{eq:Maxwell-symp}
\omega_{abc}(\delta_{1} A, \delta_{2}A) = \hat\varepsilon_{abcd} \left( \delta_{1} \hat{F}^{de} \delta_{2} \hat A_{e} - \delta_{2}\hat {F}^{de} \delta_{1} \hat A_{e} \right)\,,
\end{equation}
where the indices on \(\delta \hat F_{ab}\) have been raised with the physical metric \(\hat g^{ab}\). In terms of quantities in the unphysical spacetime we have
\begin{equation}\label{eq:Maxwell-symp-unphys}
\omega_{abc}(\delta_{1} A, \delta_{2} A) = \varepsilon_{abcd} \left( \delta_{1} {F}^{de} \delta_{2} A_{e} - \delta_{2} {F}^{de} \delta_{1} A_{e} \right) \,,
\end{equation}
where we have used \(\hat{\varepsilon}_{abcd} = \Omega^{-4} \varepsilon_{abcd}\) and \(\hat g^{ab} = \Omega^2 g^{ab}\).
To obtain the limit to \(i^0\) we rewrite this in terms of direction-dependent quantities from \cref{eq:dd-F,eq:EM-potentials}. We see that \(\Omega^{\nfrac{3}{2}}\omega_{abc}\) is \(C^{>-1}\) at \(i^0\). The pullback of this direction-dependent limit to \(\ms H\) is then given by
\begin{equation}
\pb{\dd\omega}(\delta_{1} A, \delta_{2} A) = - \dd\varepsilon_3 \left( \delta_1 \dd E^a \delta_2 \dd A_a - \delta_2 \dd E^a \delta_1 \dd A_a \right)\,,
\end{equation}
where $\dd{\varepsilon}_{3} \equiv \dd{\varepsilon}_{abc}$ is the volume element on $\ms H$.
We now take $\delta_{2}$ to correspond to a gauge transformation as in \cref{eq:Max-symm} to get
\begin{align}\label{maxwell-symp-current}
\pb{\dd\omega}(\delta A,\delta_{\dd{\lambda}} A)= - \dd\varepsilon_{3} \delta \dd E^a \dd D_a \dd{\lambda}
= - \dd\varepsilon_{3} \dd D^{a}(\delta \dd{E}_{a} \dd{\lambda})\,,
\end{align}
where in the last step we have used the linearized Maxwell equation $\dd D_{a} \delta \dd{E}^{a}=0$ (see \cref{eq:Max-eqn-asymp}). That is, the symplectic current (with one of the perturbations being generated by a gauge transformation) can be written as a total derivative of \(\delta\dd E_a \dd\lambda\). Thus we define the perturbed charge \(\delta \mathcal Q[\dd{\lambda}; S]\) on a cross-section \(S\) of \(\ms H\) by
\begin{equation}
\delta \mathcal Q[\dd{\lambda}; S] = \int_S \dd{\varepsilon}_{2}\, \dd u^{a}\, \delta\dd{E}_{a}\, \dd{\lambda}\,,
\end{equation}
where $\dd{\varepsilon}_{2} \equiv \dd{\varepsilon}_{ab}$ is the area element on $S$ and $\dd u^{a}$ is the future-directed normal to it. Note that this expression is manifestly integrable and defines the unperturbed charge once we choose a reference solution on which \(\mathcal Q[\dd{\lambda}; S] = 0 \) for all \(\dd\lambda\) and all \(S\). For the reference solution we choose the trivial solution \(F_{ab} = 0\) so that \(\dd E_a = 0\). Then the unperturbed charge is given by
\begin{equation} \label{eq:Maxwell-charge}
\mathcal Q[\dd{\lambda}; S] = \int_S \dd{\varepsilon}_{2}\, \dd u^{a}\, \dd{E}_{a}\, \dd{\lambda} \,.
\end{equation}
Let \(\Delta\ms H\) be any region of \(\ms H\) bounded by the cross-sections \(S_2\) and \(S_1\) (with \(S_2\) in the future of \(S_1\)), then the flux of the charge \cref{eq:Maxwell-charge} through $\Delta\ms H$ is given by
\begin{equation} \label{Maxwell-flux}
\mathcal {F} [\dd{\lambda}; \Delta \ms H] = - \int_{\Delta\ms H} \dd{\varepsilon}_{3}\, \dd{E}_{a} \dd D^{a} \dd{\lambda} \,.
\end{equation}
Note that the flux of the charge vanishes for $\dd{\lambda} = \text{constant}$, in which case \cref{eq:Maxwell-charge} is the Coulomb charge. The charges for a general smooth $\dd{\lambda}$ are associated only with the blowup \(\ms H\) and not with $i^{0}$ itself. These additional charges are nevertheless useful to relate the charges defined on past and future null infinity and to derive the resulting conservation laws for their fluxes in a scattering process; see \cite{KP-EM-match}.
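As a consistency check, the charge and flux can be evaluated explicitly on the angle-independent solution \(\dd V = a + b\tanh\tau\) found above in the illustrative hyperboloidal coordinates. The parity condition of \cref{rem:log-gauge} sets \(a = 0\), since the constant part of \(\dd V\) is reflection-even while \(\tanh\tau\) is reflection-odd. On a cross-section \(S_\tau\) of constant \(\tau\) we then have
\begin{equation}
\mathcal Q[\dd\lambda = 1; S_\tau] = \int_{S_\tau} \dd\varepsilon_{2}\, \dd u^a \dd E_a = \left( 4\pi \cosh^2\tau \right) \frac{b}{\cosh^2\tau} = 4\pi b\,,
\end{equation}
independently of \(\tau\), in agreement with the vanishing of the flux \cref{Maxwell-flux} for \(\dd\lambda = \text{constant}\).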
\section{Gravitational fields and Einstein equations at \(i^0\)}
\label{sec:GR-eom}
Now we turn to a similar analysis of symmetries, charges and fluxes for general relativity. To set the stage, in this section we analyze the consequences of the Einstein equations and the universal structure common to all spacetimes satisfying \cref{def:AH}.
Using the conformal transformation relating the unphysical Ricci tensor \(R_{ab}\) to the physical Ricci tensor \(\hat R_{ab}\) (see Appendix~D of \cite{Wald-book}), the vacuum Einstein equation \(\hat R_{ab} = 0\) can be written as
\begin{equation}\label{eq:EE}\begin{aligned}
S_{ab} & = - 2 \Omega^{-1} \nabla_a \nabla_b \Omega + \Omega^{-2} \nabla^{c}\Omega \nabla_{c}\Omega g_{ab}\,, \\
\Omega^{\nfrac{1}{2}} S_{ab} & = -4 \nabla_a \eta_b + 4 \Omega^{-\nfrac{1}{2}} \left( g_{ab} - \tfrac{1}{\eta^{2}} \eta_{a} \eta_{b} \right)\eta_c \eta^c \,,
\end{aligned}\end{equation}
where, as before, \(\eta_a = \nabla_a \Omh\), and \(S_{ab}\) is given by
\begin{equation}\label{eq:S-defn}
S_{ab} \defn R_{ab} - \tfrac{1}{6} R g_{ab}\,.
\end{equation}
Further, the Bianchi identity \(\nabla_{[a} R_{bc]de} = 0\) on the unphysical Riemann tensor along with \cref{eq:EE} gives the following equations for the unphysical Weyl tensor \(C_{abcd}\) (see \cite{Geroch-asymp} for details).
\begin{subequations}\label{eq:Bianchi-unphys}\begin{align}
\nabla_{[e} (\Omega^{-1} C_{ab]cd}) = 0 \label{eq:curl-weyl}\,, \\
\nabla^d C_{abcd} = - \nabla_{[a} S_{b]c}\,. \label{eq:Weyl-S}
\end{align}\end{subequations}
Since the physical Ricci tensor $\hat{R}_{ab}$ vanishes, the gravitational field is completely described by the physical Weyl tensor $\hat{C}_{abcd}$. The unphysical Weyl tensor is then \(C_{abcd} = \Omega^2 \hat C_{abcd}\). Since the unphysical metric \(g_{ab}\) is \(C^{>0}\) at \(i^0\), \(\Omh C_{abcd}\) is \(C^{>-1}\) at \(i^0\) \cite{AH}, and we define
\begin{equation}
\dd{C}_{abcd}(\vec\eta) \defn \lim_{\to i^{0}} \Omega^{\nfrac{1}{2}} C_{abcd}\,.
\end{equation}
The \emph{electric} and \emph{magnetic} parts of \(\dd C_{abcd}(\vec\eta)\) are, respectively, defined by
\begin{equation} \label{eq:EB-defn}
\dd{E}_{ab}(\vec\eta) \defn \dd{C}_{acbd} (\vec\eta) \dd{\eta}^{c} \dd{\eta}^{d} \eqsp \dd{B}_{ab}(\vec\eta) \defn * \dd{C}_{acbd} (\vec\eta) \dd{\eta}^{c}\dd{\eta}^{d}\,,
\end{equation}
where \(*\dd C_{abcd}(\vec\eta) \defn \tfrac{1}{2} \dd \varepsilon_{ab}{}^{ef} \dd C_{efcd} (\vec\eta)\). It follows from the symmetries of the Weyl tensor that both \(\dd E_{ab}(\vec\eta)\) and \(\dd B_{ab}(\vec\eta)\) are orthogonal to \(\dd\eta^a\), symmetric, and traceless with respect to the metric \(\dd h_{ab}\) on $\ms H$, and thus define smooth tensor fields \(\dd E_{ab}\) and \(\dd B_{ab}\) on \(\ms H\), respectively. The limiting Weyl tensor can be obtained from these fields using
\begin{equation}\label{eq:E-B-decomp}
\dd{C}^{ab}{}_{cd}(\vec\eta) = 4 \dd{\eta}^{[a} \dd{\eta}_{[c} \dd{E}^{b]}{}_{d]} - 4 \dd{h}^{[a}{}_{[c}\dd{E}^{b]}{}_{d]} + 2 \dd{\varepsilon}^{abe} \dd{\eta}_{[c}\dd{B}_{d]e} + 2 \dd{\varepsilon}_{cde}\dd{\eta}^{[a}\dd{B}^{b]e} \,.
\end{equation}
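As a quick check on \cref{eq:E-B-decomp}, contracting it with \(\dd\eta_b \dd\eta^d\) recovers the electric part: using \(\dd\eta_a \dd\eta^a = 1\) and the orthogonality of \(\dd E_{ab}\), \(\dd h_{ab}\) and \(\dd\varepsilon_{abc}\) to \(\dd\eta^a\), only the first term survives and
\begin{equation}
\dd{C}^{ab}{}_{cd}(\vec\eta)\, \dd\eta_b \dd\eta^d = 4\, \dd{\eta}^{[a} \dd{\eta}_{[c} \dd{E}^{b]}{}_{d]}\, \dd\eta_b \dd\eta^d = \dd E^{a}{}_{c}\,,
\end{equation}
consistent with \cref{eq:EB-defn}; the magnetic part is recovered analogously after taking a Hodge dual.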
Further, as shown in \cite{AH}, multiplying \cref{eq:curl-weyl} by \(\Omega\) and taking the limit to \(i^0\) gives the equations of motion
\begin{equation} \label{eq:EB-curl}
\dd D_{[a} \dd E_{b]c} =0 \eqsp \dd D_{[a} \dd B_{b]c}=0\,.
\end{equation}
These are the asymptotic Einstein equations at spatial infinity. Taking the trace over the indices \(a\) and \(c\) and using the fact that $\dd{E}_{ab}$ and $\dd{B}_{ab}$ are traceless, it also follows that
\begin{equation}\label{eq:EB-div}
\dd D^b \dd{E}_{ab} = \dd D^b \dd{B}_{ab}= 0 \,.
\end{equation}
To apply the symplectic formalism to general relativity, we will need to consider metric perturbations instead of just perturbations of the Weyl tensor. As we will show below (\cref{eq:gamma-E-K}), suitably rescaled limits of the unphysical metric perturbations can be expressed in terms of perturbations of certain potentials for \(\dd E_{ab}\) and \(\dd B_{ab}\) provided by the tensor \(S_{ab}\) in \cref{eq:S-defn}. These potentials are obtained as follows: since \(g_{ab}\) is \(C^{>0}\), \(\Omh S_{ab}\) is \(C^{>-1}\); let \(\dd S_{ab} (\vec\eta) \defn \lim_{\to i^0} \Omh S_{ab}\) and define
\begin{equation} \label{eq:potentials-defn}
\dd{E}(\vec\eta) \defn \dd{S}_{ab}(\vec\eta) \dd{\eta}^{a}\dd{\eta}^{b} \eqsp \dd{K}_{ab}(\vec\eta) \defn \dd{h}_a{}^{c} \dd{h}_b{}^{d} \dd{S}_{cd}(\vec\eta) - \dd{h}_{ab} \dd{E}(\vec\eta)\,,
\end{equation}
which induce the fields \(\dd E\) and \(\dd K_{ab}\) intrinsic to \(\ms H\). Following \cite{AH}, multiplying \cref{eq:Weyl-S} by \(\Omega\) and taking the limit to \(i^0\), along with \cref{eq:EB-curl} implies that
\begin{equation}\label{eq:h-eta-S}
\dd h_a{}^b \dd \eta^c \dd S_{bc}(\vec\eta) = \dd D_a \dd E\,,
\end{equation}
and
\begin{equation} \label{eq:EB-potentials}
\dd{E}_{ab} = -\tfrac{1}{4} (\dd D_{a}\dd D_{b}\dd{E} + \dd h_{ab} \dd{E}) \eqsp \dd{B}_{ab} = -\tfrac{1}{4}\dd{\varepsilon}_{cda}\dd D^{c}\dd{K}^{d}{}_{b}\,.
\end{equation}
Thus, \(\dd E\) is a scalar potential for \(\dd E_{ab}\) while \(\dd K_{ab}\) is a tensor potential for \(\dd B_{ab}\).\footnote{Since \(\dd B_{ab}\) is curl-free (\cref{eq:EB-curl}), there also exists a scalar potential for \(\dd B_{ab}\) (see \cref{lem:scalar-pot}). However this scalar potential cannot be obtained as the limit of some tensor field on spacetime.}
The potentials \(\dd E\) and \(\dd K_{ab}\) are not free fields on \(\ms H\). Suitably commuting the derivatives and using \cref{eq:Riem-hyp} one can verify that \(\dd E_{ab}\) identically satisfies \cref{eq:EB-curl} when written in terms of the potential \(\dd E\) while \(\dd h^{ab} \dd E_{ab} = 0\) gives
\begin{equation} \label{eq:box-E}
\dd D^{2}\dd E + 3 \dd{E} = 0\,.
\end{equation}
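As an illustration (again in the illustrative hyperboloidal coordinates of \cref{sec:AH}, with arbitrary constants \(c_1, c_2\)), the angle-independent solutions of \cref{eq:box-E} can be written in closed form,
\begin{equation}
\dd E = c_1 \sinh\tau + c_2 \left( 2\cosh\tau - \frac{1}{\cosh\tau} \right)\,,
\end{equation}
of which the first is reflection-odd and the second reflection-even; for angle-independent \(\dd E\), only the even branch survives the parity condition (\cref{eq:parity-E}) imposed in \cref{rem:log-trans} below.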
On the other hand, since \(\dd K_{ab}\) is symmetric the magnetic field \(\dd B_{ab}\) in \cref{eq:EB-potentials} is identically traceless. Since \(\dd B_{ab}\) is symmetric and satisfies \cref{eq:EB-curl}, we get that
\begin{subequations}\label{eq:K-eom}\begin{align}
\dd{\varepsilon}_a{}^{bc} \dd{B}_{bc} = 0 \implies \dd D^{b}\dd{K}_{ab} & = \dd D_{a} \dd{K} \label{eq:div-K}\,, \\
\dd\varepsilon_a{}^{cd} \dd D_c \dd{B}_{db} = 0 \implies \dd D^{2}\dd{K}_{ab} & = \dd D_{a}\dd D_{b} \dd{K} + 3 \dd{K}_{ab} - \dd{h}_{ab} \dd{K} \label{eq:box-K}\,,
\end{align}\end{subequations}
where \(\dd K \defn \dd h^{ab} \dd K_{ab}\), and to get \cref{eq:box-K} we have commuted derivatives using \cref{eq:Riem-hyp} and used \cref{eq:div-K}. Considering the potentials \(\dd E\) and \(\dd K_{ab}\) as the basic fields, the asymptotic Einstein equations are given by \cref{eq:box-E,eq:K-eom}, while the Weyl tensors \(\dd E_{ab}\) and \(\dd B_{ab}\) are derived quantities through \cref{eq:EB-potentials}.\\
To define the charge for asymptotic Lorentz symmetries, e.g. angular momentum in \cref{sec:lorentz-charge}, we will need the ``subleading'' part of the magnetic Weyl tensor. Following Ashtekar and Hansen \cite{AH}, we will restrict to the class of spacetimes satisfying the additional condition \(\dd B_{ab} = 0\). We also require that the ``subleading'' magnetic field defined by
\begin{equation} \label{eq:beta-defn}
\dd{\beta}_{ab} \defn \lim_{\to i^{0}} * C_{acbd} \eta^{c} \eta^{d}\,,
\end{equation}
exists as a \(C^{>-1}\) tensor field at \(i^0\). The condition \(\dd B_{ab} = 0\) is satisfied in any spacetime which is \emph{either} stationary \emph{or} axisymmetric \cite{B-zero}. In \cref{sec:new-beta} we show how one can define a ``subleading'' magnetic Weyl tensor and the Lorentz charges even when \(\dd B_{ab} \neq 0\). Since those computations are more tedious we impose the above restriction in the main body of the paper.
The consequences of this restriction are as follows. Since \(\dd B_{ab} = 0\), from \cref{eq:EB-potentials} the ``curl'' of $\dd{K}_{ab}$ vanishes
\begin{equation} \label{eq:vanishing-curl-K}
\dd D_{[a} \dd{K}_{b]c} = 0\,.
\end{equation}
It follows from \cref{lem:scalar-pot} that there exists a scalar potential \(\dd k\) such that
\begin{equation}\label{eq:K-potential}
\dd{K}_{ab} = \dd D_{a}\dd D_{b} \dd{k} + \dd{h}_{ab}\dd{k}\,.
\end{equation}
The scalar potential \(\dd k\) is a free function on \(\ms H\) since the equations of motion \cref{eq:K-eom} are identically satisfied after using \cref{eq:K-potential}. Using the freedom in the conformal factor one can now set \(\dd K_{ab} = 0\) (see \cite{AH} and \cref{rem:GR-gauge-choice}). Since we do not wish to impose any restrictions on the conformal factor, we will \emph{not} demand that \(\dd K_{ab}\) vanishes.
Note that from \cref{eq:beta-defn} it follows that \(\dd\beta_{ab}\) is symmetric, tangent to \(\ms H\) and traceless. In the following we shall also need an equation of motion for \(\dd\beta_{ab}\), which is obtained as follows: contract the indices \(e\) and \(d\) in \cref{eq:curl-weyl} and multiply by \(3 \Omega\) to get
\begin{equation}
\nabla^d C_{abcd} = \Omega^{-1} C_{abcd}\nabla^d \Omega = 2 \Omega^{-\nfrac{1}{2}} C_{abcd}\eta^d\,.
\end{equation}
Using the Hodge dual of the above equation we obtain
\begin{equation}
\Omh \nabla^b (*C_{acbd} \eta^c \eta^d) = - 2 *C_{acbd}\eta^b \eta^c\eta^d + 2 \Omh *C_{acbd} \nabla^b \eta^{(c} \eta^{d)}\,.
\end{equation}
The first term on the right-hand-side vanishes due to the symmetries of the Weyl tensor. In the second term on the right-hand-side we substitute for the derivative of \(\eta_a\) using \cref{eq:EE} to get
\begin{equation}
\Omh \nabla^b (*C_{acbd} \eta^c \eta^d) = - \tfrac{1}{4} (\Omh *C_{acbd}) (\Omh S^{bc}) \eta^d\,.
\end{equation}
Taking the limit to \(i^0\), writing the tensor \(\dd S_{ab}\) in terms of the gravitational potentials through \cref{eq:potentials-defn,eq:h-eta-S}, and using \(\dd B_{ab} = 0\) along with \cref{eq:E-B-decomp}, we get the equation of motion
\begin{equation}\label{eq:div-beta}
\dd D^b \dd\beta_{ab} = \tfrac{1}{4} \dd\varepsilon_{cda} \dd E^c{}_b \dd K^{bd}\,.
\end{equation}
\begin{remark}[Conformal transformations of the asymptotic fields]\label{rem:conf-GR-fields}
Under changes of the conformal factor \(\Omega \mapsto \omega\Omega\) we have
\begin{equation}\begin{aligned}
S_{ab} & \mapsto S_{ab} - 2 \omega^{-1} \nabla_a \nabla_b \omega + 4 \omega^{-2} \nabla_a \omega \nabla_b \omega - \omega^{-2} g_{ab} \nabla^c \omega \nabla_c \omega\,, \\
C_{abcd} & \mapsto \omega^2 C_{abcd}\,.
\end{aligned}\end{equation}
From the conditions in \cref{rem:conf} it follows that \(\dd E_{ab}\), \(\dd B_{ab}\) and \(\dd E\) are invariant while
\begin{equation}\label{eq:conf-K}
\dd K_{ab} \mapsto \dd K_{ab} - 2 (\dd D_a \dd D_b \dd\alpha + \dd h_{ab}\dd\alpha)\,.
\end{equation}
Further, when \(\dd B_{ab} = 0\) we also have the transformation of the ``subleading'' magnetic Weyl tensor \(\dd\beta_{ab}\) given by
\begin{equation}\label{eq:conf-beta}
\dd\beta_{ab} \mapsto \dd\beta_{ab} - \dd\varepsilon_{cd(a} \dd E^c{}_{b)} \dd D^d \dd\alpha\,.
\end{equation}
\end{remark}
\subsection{The universal structure at \(i^0\)}
\label{sec:univ-str}
In this section we summarize the \emph{universal structure} at \(i^0\), that is, the structure common to all spacetimes which are asymptotically-flat in the sense of \cref{def:AH} and which is thus independent of the choice of the physical spacetime under consideration.
Consider any two unphysical spacetimes \((M, g_{ab}, \Omega)\) and \((M', g'_{ab},\Omega')\) with their respective \(C^{>1}\) differential structures at their spatial infinities corresponding to two different physical spacetimes. Using a \(C^1\) diffeomorphism we can identify the points representing the spatial infinities and their tangent spaces without any loss of generality. Each of the metrics \(g_{ab}\) and \(g'_{ab}\) induces a metric in the tangent space \(Ti^0\) which is isometric to the Minkowski metric. Thus, the metric \(\dd g_{ab}\) at \(i^0\) is also universal. This also implies that the spatial directions \(\vec\eta\), the space of directions \(\ms H\) and the induced metric \(\dd h_{ab}\) are universal.
So far we have only used the \(C^1\) differential structure. However, since the differential structure at \(i^0\) is slightly better, being \(C^{>1}\), we can identify the spacetimes at the ``next order''. In \cite{AH} this structure was imposed by suitably identifying spacelike geodesics in the \emph{physical} spacetimes. But as pointed out in \cite{Porrill}, this identification cannot be performed except in very special cases. Below we argue that a similar identification of the spacetimes can be done using equivalence classes of \(C^{>1}\) curves in the unphysical spacetimes. The proof is based on constructing a suitable \(C^{>1}\) coordinate system at \(i^0\) and is deferred to \cref{sec:coord}; we summarize the main construction below.
Consider the unphysical spacetime \((M, g_{ab}, \Omega)\), and a spacelike \(C^{>1}\) curve \(\Gamma_v\) in \(M\) passing through \(i^0\) with tangent \(v^a\). Since the curve is \(C^{>1}\) its tangent vector \(v^a\) is \(C^{>0}\). Using the universal metric \(\dd g_{ab}\) at \(i^0\) we can then demand that \(v^a\) be unit-normalized at \(i^0\) and thus along the curve \(\Gamma_v\)
\begin{equation}
\lim_{\to i^0} v^a = \dd\eta^a\,,
\end{equation}
that is, the curve \(\Gamma_v\) points in some spatial direction \(\vec\eta\) at \(i^0\). Further, since \(\Gamma_v\) is \(C^{>1}\), \(v^b \nabla_b v^a\) is a \(C^{>-1}\) vector. Thus, we define the \emph{acceleration} of \(\Gamma_v\) at \(i^0\) by the projection of this vector on to \(\ms H\)
\begin{equation}\label{eq:acc-defn}
\dd A^a[\Gamma_v] \defn \dd h^a{}_b \lim_{\to i^0} v^c \nabla_c v^b\,.
\end{equation}
Now we define the curves \(\Gamma_v\) (with tangent \(v^a\)) and \(\Gamma_\eta\) (with tangent \(\eta^a\)) to be equivalent if their accelerations are equal at \(i^0\). To see what this entails, note that since \(v^a\) is \(C^{>0}\) and equals \(\dd\eta^a\) in the limit to \(i^0\) we have that \(v^a = \eta^a + \Omh w^a\) for some \(w^a\) which is \(C^{>-1}\) at \(i^0\). Then, from \cref{eq:acc-defn} we have
\begin{equation}
\dd A^a[\Gamma_v] = \dd A^a[\Gamma_\eta] \iff \dd h_{ab} \lim_{\to i^0} w^b = 0\,.
\end{equation}
Thus, we have an equivalence class of curves through \(i^0\) pointing in each direction \(\vec\eta\) defined by\footnote{These equivalence classes of curves form a principal bundle over \(\ms H\), called \emph{Spi} in \cite{AH}.}
\begin{equation}\label{eq:equiv-curves}
\Gamma_v \sim \Gamma_\eta \iff \dd h_{ab}\lim_{\to i^0} \Omega^{-\nfrac{1}{2}} (v^b - \eta^b) = 0\,.
\end{equation}
We will show in \cref{sec:coord} that using a \(C^{>1}\) diffeomorphism one can identify these equivalence classes of curves between any two spacetimes \((M, g_{ab}, \Omega)\) and \((M', g'_{ab}, \Omega')\). Further, we show that the conformal factors \(\Omega\) and \(\Omega'\) can also be identified in a neighbourhood of \(i^0\).
Thus, the universal structure at \(i^0\) consists of the point \(i^0\), the tangent space \(Ti^0\), the metric \(\dd g_{ab}\) at \(i^0\) and the equivalence classes of \(C^{>1}\) curves given by \cref{eq:equiv-curves}. In addition, the conformal factor \(\Omega\) can also be chosen to be universal.
\begin{remark}[Logarithmic translations]
\label{rem:log-trans}
So far we have worked with a fixed \(C^{>1}\) differential structure in the unphysical spacetime at \(i^0\). But given a physical spacetime, the unphysical spacetime is ambiguous up to a \(4\)-parameter family of \emph{logarithmic translations} at \(i^0\) which simultaneously change the \(C^{>1}\) differential structure and the conformal factor at \(i^0\); see \cite{Ash-log} or Remark~B.1 of \cite{KP-GR-match} for details. The logarithmic translations at \(i^0\) are parameterized by a \emph{direction-independent} vector \(\dd\Lambda^a\) at \(i^0\). Any such vector can be written as
\begin{equation}
\dd \Lambda^a = \dd\Lambda \dd\eta^a + \dd D^a \dd\Lambda\,,
\end{equation}
where \(\dd\Lambda(\vec\eta) = \dd\eta_a \dd\Lambda^a\) is a function on \(\ms H\) satisfying
\begin{equation}\label{eq:log-trans-eqn}
\dd D_a \dd D_b \dd\Lambda + \dd h_{ab} \dd\Lambda = 0\,.
\end{equation}
Under such logarithmic translations the potentials \cref{eq:potentials-defn} transform as \cite{Ash-log}
\begin{equation}\label{eq:log-trans}
\dd E \mapsto \dd E + 4\dd\Lambda \eqsp \dd K_{ab} \mapsto \dd K_{ab}\,,
\end{equation}
while \(\dd E_{ab}\) and \(\dd B_{ab}\) are invariant. The presence of these logarithmic translations leads to the following issue when we define the charges for supertranslations in \cref{sec:st}. For general supertranslations (which are not translations) our charges will depend on the potential \(\dd E\) instead of just the electric field \(\dd E_{ab}\). Thus, even if we take the physical spacetime to be Minkowski spacetime, our charges will not vanish, due to the logarithmic translation ambiguity \cref{eq:log-trans} in \(\dd E\). We therefore fix these logarithmic translations following the argument in \cite{Ash-log}.
Since the metric \(\dd g_{ab}\) in the tangent space \(Ti^0\) is universal and isometric to the Minkowski metric it is invariant under the reflection of the spatial directions \(\vec\eta \mapsto - \vec\eta\). This gives rise to a reflection isometry of the metric \(\dd h_{ab}\) on the space of directions \(\ms H\). Now it was shown in \cite{KP-GR-match} that the only spacetimes which are asymptotically-flat at spatial infinity and which ``match'' on to asymptotically-flat spacetimes on null infinity are the ones where \(\dd E_{ab}\) is reflection-even, i.e.
\begin{equation}
\dd E_{ab} (\vec\eta) = \dd E_{ab}(-\vec\eta)\,.
\end{equation}
Further, since \(\dd\Lambda = \dd\eta_a \dd\Lambda^a\) for the \emph{direction-independent} vector \(\dd\Lambda^a\) we have that \(\dd\Lambda\) is reflection-odd
\begin{equation}
\dd \Lambda(\vec\eta) = - \dd\Lambda(-\vec\eta)\,.
\end{equation}
For a reflection-even \(\dd E_{ab}\), from \cref{eq:EB-potentials,eq:log-trans-eqn}, it follows that using a logarithmic translation we can demand that the potential \(\dd E\) is also reflection-even, so that
\begin{equation}\label{eq:parity-E}
\dd E (\vec\eta) = \dd E(-\vec\eta)\,.
\end{equation}
Having fixed the logarithmic translations in this way, \(\dd E_{ab} = 0\) then implies that \(\dd E = 0\). In particular, for Minkowski spacetime we have
\begin{equation}\label{eq:Mink-stuff}
\dd E = 0 \eqsp \dd B_{ab} = 0 \eqsp \dd \beta_{ab} = 0 \quad \text{(on Minkowski spacetime)}\,.
\end{equation}
Note that when \(\dd E_{ab} = 0\), \(\dd \beta_{ab}\) is conformally-invariant (see \cref{eq:conf-beta}) and the conditions \cref{eq:Mink-stuff} do not depend on the conformal factor chosen for Minkowski spacetime. These conditions will ensure that all our charges vanish on Minkowski spacetime. Thus, from here on we will assume that the logarithmic translations have been fixed as above, that is, we work with the choice of \(C^{>1}\) differential structure at \(i^0\) for which the parity condition \cref{eq:parity-E} is satisfied.
\end{remark}
\section{Metric perturbations and symplectic current at \(i^0\)}
\label{sec:pert}
Now consider a one-parameter family of asymptotically-flat physical metrics \(\hat g_{ab}(\lambda)\) where \(\hat g_{ab} = \hat g_{ab}(\lambda = 0)\) is some chosen background spacetime. Define the physical metric perturbation \(\hat \gamma_{ab}\) around the background \(\hat g_{ab}\) by
\begin{equation}\label{eq:phys-pert}
\hat \gamma_{ab} = \delta \hat g_{ab} \defn \left. \frac{d}{d\lambda} \hat g_{ab}(\lambda) \right\vert_{\lambda = 0}\,.
\end{equation}
We will use ``\(\delta\)'' to denote perturbations of other quantities defined in a similar way.
As discussed above, the conformal factor \(\Omega\) can be chosen universally, i.e., independently of the choice of the physical metric. The unphysical metric perturbation is
\begin{equation}\label{ond}
\delta g_{ab} = \gamma_{ab} = \Omega^2 \hat \gamma_{ab}\,,
\end{equation}
and we also have
\begin{equation} \label{eq:variations-1}
\delta \eta_{a} = \delta \nabla_a \Omh = 0 \eqsp \delta \eta^{a} = \delta(g^{ab}\eta_b) = -\gamma^{ab} \eta_{b}\,.
\end{equation}
Now we investigate the conditions on the unphysical perturbation \(\gamma_{ab}\) which preserve asymptotic flatness and the universal structure at \(i^0\) described in \cref{sec:univ-str}. First recall that since the unphysical metric \(g_{ab}\) is $C^{>0}$ and universal at $i^{0}$, it follows that the unphysical metric perturbation \(\gamma_{ab}\) is \(C^{>0}\) and \(\gamma_{ab}\vert_{i^0} = 0\). Therefore
\begin{equation} \label{eq:lim-gamma}
\dd\gamma_{ab}(\vec\eta) \defn \lim_{\to i^0} \Omega^{-\nfrac{1}{2}} \gamma_{ab} \text{ is } C^{>-1}\,.
\end{equation}
With \cref{eq:variations-1,eq:lim-gamma} we also see that \(\delta \dd\eta^a = 0\). Thus, the metric perturbation also preserves the spatial directions \(\vec\eta\) at \(i^0\), the space of directions \(\ms H\) and the metric \(\dd h_{ab}\) on it.
Now consider the universal structure given by the equivalence classes of \(C^{>1}\) curves through \(i^0\) as described in \cref{sec:univ-str}. Consider the equivalence class of a fixed curve \(\Gamma_v\) with tangent \(v^a\). For this equivalence class to be preserved, the perturbation of \cref{eq:equiv-curves} must vanish. Evaluating this condition using \cref{eq:variations-1,eq:lim-gamma} we obtain the condition
\begin{equation}\label{eq:gamma-eta-h}
\dd h_a{}^b \dd \eta^c \dd \gamma_{bc}(\vec\eta) = 0\,.
\end{equation}
In summary, \cref{eq:lim-gamma,eq:gamma-eta-h} are the asymptotic conditions on the unphysical metric perturbations which preserve the asymptotic flatness and the universal structure at \(i^0\).
The metric perturbation \(\dd\gamma_{ab}\) can be directly related to the perturbations of the gravitational potentials \(\dd E\) and \(\dd K_{ab}\) defined in \cref{eq:potentials-defn}. Perturbing \cref{eq:EE} to evaluate $\Omega^{\nfrac{1}{2}} \delta S_{ab}$ and taking the limit to \(i^0\) using \cref{eq:variations-1,eq:lim-gamma} we get
\begin{equation}
\delta\dd{S}_{ab} = \lim_{\to i^{0}} \Omega^{\nfrac{1}{2}}\delta S_{ab} = 4 \dd{\partial}_{(a} \dd{\gamma}_{b)c} \dd{\eta}^{c} + 4\dd{\eta}_{(a} \dd{\gamma}_{b)c}\dd{\eta}^{c} + 2\dd{\gamma}_{ab} - 4 \dd{\gamma}_{cd} \dd{\eta}^{c}\dd{\eta}^{d} \dd g_{ab}\,.
\end{equation}
Using the definition of the gravitational potentials \cref{eq:potentials-defn} and \cref{eq:gamma-eta-h} we obtain
\begin{subequations}\label{eq:deltaEK}\begin{align}
\delta \dd{E} & = 2 \dd{\gamma}_{ab}\dd{\eta}^{a} \dd{\eta}^{b} \label{eq:deltaE}\,, \\
\delta \dd{K}_{ab} & = -2 \dd{h}_a{}^{c} \dd{h}_b{}^{d} \dd{\gamma}_{cd} - \dd{h}_{ab}\delta \dd{E} \label{eq:deltaK}\,.
\end{align}\end{subequations}
Using \cref{eq:gamma-eta-h,eq:deltaEK} we can reconstruct the metric perturbation \(\dd\gamma_{ab}(\vec\eta)\) in terms of the perturbed gravitational potentials on \(\ms H\) as
\begin{equation} \label{eq:gamma-E-K}
\dd\gamma_{ab}(\vec\eta) = \tfrac{1}{2} \left[ \delta \dd{E} (\dd{\eta}_{a}\dd{\eta}_{b} - \dd h_{ab}) - \delta \dd{K}_{ab} \right]\,.
\end{equation}
The linearized Einstein equations for \(\dd \gamma_{ab}\) in the form \cref{eq:gamma-E-K} are then equivalent to the linearizations of \cref{eq:box-E,eq:K-eom}.\\
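As a quick consistency check of \cref{eq:gamma-E-K}: contracting it with \(\dd\eta^a \dd\eta^b\) and using \(\dd h_{ab} \dd\eta^b = 0\), along with the orthogonality of \(\delta\dd K_{ab}\) to \(\dd\eta^a\), returns \cref{eq:deltaE}, while projecting both indices with \(\dd h_a{}^b\) returns \cref{eq:deltaK},
\begin{equation}
\dd\gamma_{ab} \dd\eta^a \dd\eta^b = \tfrac{1}{2} \delta\dd E \eqsp \dd h_a{}^c \dd h_b{}^d \dd\gamma_{cd} = - \tfrac{1}{2} \left( \delta\dd E\, \dd h_{ab} + \delta\dd K_{ab} \right)\,.
\end{equation}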
Next we consider the behaviour of the symplectic current of vacuum general relativity near \(i^0\). The symplectic current is given by (see \cite{WZ})
\begin{equation}\label{eq:sympcurrent}
\omega_{abc} = - \tfrac{1}{16 \pi}\hat\varepsilon_{abcd} \hat w^d \quad\text{with}\quad \hat w^a = \hat P^{abcdef} \hat\gamma_{2bc} \hat\nabla_d \hat\gamma_{1ef} - [1 \leftrightarrow 2]\,,
\end{equation}
where ``\([1 \leftrightarrow 2]\)'' denotes the preceding expression with the \(1\) and \(2\), labeling the perturbations, interchanged and the tensor \(\hat P^{abcdef}\) is given by
\begin{equation}\label{eq:P-defn}
\hat P^{abcdef} = \hat{g}^{ae}\hat{g}^{fb}\hat{g}^{cd} - \tfrac{1}{2}\hat{g}^{ad}\hat{g}^{be}\hat{g}^{fc} - \tfrac{1}{2}\hat{g}^{ab}\hat{g}^{cd}\hat{g}^{ef} - \tfrac{1}{2}\hat{g}^{bc}\hat{g}^{ae}\hat{g}^{fd} + \tfrac{1}{2}\hat{g}^{bc}\hat{g}^{ad}\hat{g}^{ef}\,.
\end{equation}
To analyze the behaviour of the symplectic current in the limit to \(i^0\) we first express it in terms of quantities in the unphysical spacetime using
\begin{equation}
\varepsilon_{abcd} = \Omega^{4} \hat{\varepsilon}_{abcd} \eqsp P^{abcdef} = \Omega^{-6} \hat{P}^{abcdef} \eqsp \gamma_{ab} = \Omega^{2} \hat{\gamma}_{ab}\,,
\end{equation}
where \(P^{abcdef}\) is defined through the unphysical metric by the same expression as \cref{eq:P-defn}. Using these, and converting the physical derivative operator \(\hat\nabla\) to the unphysical one \(\nabla\) as
\begin{equation}
\hat{\nabla}_{d} \hat{\gamma}_{1 ef} = \nabla_{d} \hat{\gamma}_{1 ef} + \Omega^{-1} [\hat{\nabla}_{d} \Omega \hat{\gamma}_{1 ef} + \hat{\nabla}_{e} \Omega \hat{\gamma}_{1 df} - g_{ed} \hat{\nabla}^{a} \Omega \hat{\gamma}_{1 af} + (e \leftrightarrow f)]\,,
\end{equation}
we obtain
\begin{equation}\begin{aligned}\label{eq:sympcurrent-unphys}
\omega_{abc} & = - \tfrac{1}{16 \pi}\varepsilon_{abcd} w^d \,, \quad \\[1.5ex]
\text{with}\quad
w^a & = \Omega^{-2} P^{abcdef} \gamma_2{}_{bc} \nabla_d \gamma_1{}_{ef} + \Omega^{-3} \gamma_1^{ab} \nabla_b \Omega \gamma_{2 c}{}^c - [1 \leftrightarrow 2] \,.
\end{aligned}\end{equation}
Converting to quantities which are direction-dependent at \(i^0\) and using \cref{eq:lim-gamma} we see that \(\Omega^{\nfrac{3}{2}} \omega_{abc}\) is \(C^{>-1}\). The pullback \(\pb{\dd\omega}\) to \(\ms H\) of \(\lim_{\to i^0}\Omega^{\nfrac{3}{2}} \omega_{abc}\) is given by
\begin{equation}\label{eq:sympcurrent-H}
\pb{\dd\omega} = - \frac{1}{16\pi} \dd\varepsilon_{3}~ \dd\eta^a \left( 2 \dd{\eta}^{b} \dd\gamma_{2 ab} \dd\gamma_{1} - \tfrac{1}{2} \dd\gamma_{1 ab} \dd{\partial}^{b} \dd\gamma_{2} + \dd\gamma_{1}^{bc} \dd{\partial}_{c} \dd\gamma_{2 ab} - \tfrac{1}{2} \dd\gamma_{1} \dd{\partial}^{b} \dd\gamma_{2 ab} \right) - [1 \leftrightarrow 2]\,.
\end{equation}
This expression can be considerably simplified by rewriting it in terms of the perturbed gravitational potentials \(\delta \dd E\) and \(\delta \dd K_{ab}\) using \cref{eq:gamma-E-K}. An easy but long computation gives
\begin{equation}\label{eq:sympcurrent-H-simplified}
\pb{\dd\omega} = \frac{1}{64 \pi}\dd\varepsilon_3~ (\delta_{1} \dd{K} \delta_{2} \dd{E} - \delta_{2} \dd{K} \delta_{1} \dd{E})\,,
\end{equation}
where, as before, $\dd{K}\defn \dd{h}^{ab} \dd{K}_{ab}$.
\section{Asymptotic symmetries at \(i^0\): The \(\mf{spi}\) algebra}
\label{sec:spi-symm}
In this section we analyze the asymptotic symmetries at \(i^0\). We show that the diffeomorphisms of the physical spacetime which preserve the asymptotic flatness of the spacetime (defined by \cref{def:AH}) generate an infinite-dimensional algebra \(\mf{spi}\). This asymptotic symmetry algebra was obtained in \cite{AH,Ash-in-Held} by analyzing the infinitesimal diffeomorphisms which preserve the universal structure at \(i^0\). Here we provide an alternative derivation by considering the physical perturbations generated by such infinitesimal diffeomorphisms and demanding that the corresponding unphysical perturbations satisfy the asymptotic conditions \cref{eq:lim-gamma,eq:gamma-eta-h}.
Consider an infinitesimal diffeomorphism generated by a vector field \(\hat \xi^a\) in the physical spacetime, and let \(\xi^a = \hat\xi^a\) be the corresponding vector field in the unphysical spacetime. For \(\xi^a\) to be a representative of an asymptotic symmetry at \(i^0\) the infinitesimal diffeomorphism generated by \(\xi^a\) must preserve the universal structure at $i^{0}$. Firstly, the infinitesimal diffeomorphism must keep the point $i^{0}$ fixed and preserve the $C^{>1}$ differential structure at $i^{0}$. Thus, \(\xi^a\) must be $C^{>0}$ at \(i^0\) and \(\xi^a \vert_{i^0} = 0\). This implies that \(\Omega^{-\nfrac{1}{2}} \xi^a\) is \(C^{>-1}\) at \(i^0\), and we define
\begin{equation} \label{eq:X-defn}
\dd X^a(\vec\eta) \defn \lim_{\to i^{0}}\Omega^{-\nfrac{1}{2}} \xi^{a}\,.
\end{equation}
Now consider the physical metric perturbation $ \hat\gamma^{(\xi)}_{ab} = \delta_\xi \hat{g}_{ab} \defn \lie_{\xi} \hat{g}_{ab}$ corresponding to an infinitesimal diffeomorphism generated by $\xi^{a}$. The corresponding unphysical metric perturbation is given by
\begin{align} \label{eq:lin-diffeo}
\gamma^{(\xi)}_{ab} = \Omega^{2} \lie_{\xi} \hat g_{ab} = \lie_{\xi} g_{ab} - 4\Omega^{-\nfrac{1}{2}} \xi^{c} \eta_{c} g_{ab}\,.
\end{align}
Since \(\gamma^{(\xi)}_{ab}\) must satisfy the asymptotic conditions at \(i^0\) in \cref{eq:lim-gamma,eq:gamma-eta-h}, we have that \(\gamma^{(\xi)}_{ab}\) is \(C^{>0}\) at \(i^0\) and \(\gamma^{(\xi)}_{ab} \vert_{i^0} = 0\). To see the implications of these conditions first evaluate the condition \(\gamma^{(\xi)}_{ab} \vert_{i^0} = 0\) using \cref{eq:lin-diffeo,eq:X-defn} which gives
\begin{equation}\label{eq:lorentz-cond}
\dd{\eta}_{a}\dd{X}^{a}(\vec\eta) = 0 \eqsp \dd D_{(a} \dd X_{b)} = 0\,,
\end{equation}
that is, the vector field \(\dd X^a\) is tangent to \(\ms H\) and is a Killing vector field on it. Thus, \(\dd X^a\) is an element of the Lorentz algebra \(\mathfrak{so}(1,3)\). Some useful properties of these Killing vectors and their relationship to infinitesimal Lorentz transformations in the tangent space \(Ti^0\) are collected in \cref{sec:Killing-H}.
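For instance, in the illustrative hyperboloidal coordinates of \cref{sec:AH}, the rotation about the \(\theta = 0\) axis is generated by \(\dd X^a = (\partial_\phi)^a\), while the boost adapted to the same axis is
\begin{equation}
\dd X^a = \cos\theta\, (\partial_\tau)^a - \tanh\tau \sin\theta\, (\partial_\theta)^a\,,
\end{equation}
and one can check directly that both satisfy \(\dd D_{(a} \dd X_{b)} = 0\) for the metric \(\dd h_{ab}\) written there.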
Further, since both \(\gamma^{(\xi)}_{ab}\) and \(\lie_\xi g_{ab}\) are \(C^{>0}\) we must have that \(\Omega^{-\nfrac{1}{2}} \xi^a \eta_a\) is also \(C^{>0}\). Since \(\Omega^{-\nfrac{1}{2}} \xi^a \eta_a \vert_{i^0} = 0\) (which follows from \cref{eq:X-defn,eq:lorentz-cond}), \(\Omega^{-1}\xi^a \eta_a\) is \(C^{>-1}\) at \(i^0\), so we define
\begin{equation}\label{eq:f-defn}
\dd f(\vec\eta) \defn \lim_{\to i^0} \Omega^{-1}\xi^a \eta_a\,.
\end{equation}
The function \(\dd f\) on \(\ms H\) then parametrizes the \emph{supertranslations}. The vector field generating a supertranslation can be obtained as follows. Consider \(\xi^a\) such that the corresponding \(\dd X^a\) (\cref{eq:X-defn}) vanishes and \(\dd\chi^a \defn \lim_{\to i^0} \Omega^{-1}\xi^a\) is \(C^{>-1}\) so that \(\dd f = \dd \chi^a \dd \eta_a\). Now consider the metric perturbation \cref{eq:lin-diffeo} corresponding to such a vector field. From \cref{eq:gamma-eta-h} we must have
\begin{equation}
\dd h_a{}^b \dd \eta^c \dd\gamma^{(\xi)}_{bc} = 0\,,
\end{equation}
where, as before, \(\dd\gamma^{(\xi)}_{ab} = \lim_{\to i^0} \Omega^{-\nfrac{1}{2}} \gamma^{(\xi)}_{ab}\). Evaluating this condition using \cref{eq:lin-diffeo} and \(\dd\chi^a = \lim_{\to i^0} \Omega^{-1}\xi^a\) we get
\begin{equation}
\dd h_{ab} \dd\chi^b = - \dd D_a \dd f\,.
\end{equation}
Thus a pure supertranslation \(\dd f\) is represented by a vector field \(\xi^a\) such that
\begin{equation} \label{eq:supetr-vec-field}
\lim_{\to i^{0}}\Omega^{-1} \xi^{a} = \dd{f} \dd\eta^{a} - \dd D^{a} \dd{f}\,.
\end{equation}
In summary, the asymptotic symmetries at \(i^0\) are parameterized by a pair \((\dd f, \dd X^a)\) where \(\dd f\) is a smooth function and \(\dd X^a \in \mathfrak{so}(1,3)\) is a smooth Killing vector field on \(\ms H\).\\
The Lie algebra structure of these symmetries can be obtained as follows. Let \(\xi^a_1\) and \(\xi^a_2\) be the vector fields representing the asymptotic Spi-symmetries \((\dd f_1, \dd X^a_1)\) and \((\dd f_2, \dd X^a_2)\) respectively. Then the Lie bracket \([\xi_1, \xi_2]^a = \xi^b_1 \nabla_b \xi^a_2 - \xi^b_2 \nabla_b \xi^a_1 \) of the representatives induces a Lie bracket on the Spi-symmetries. Using \cref{eq:X-defn,eq:lorentz-cond,eq:f-defn} the induced Lie bracket on the Spi-symmetries can be computed to be
\begin{equation}\label{eq:spi-bracket}\begin{aligned}
(\dd f, \dd X^a) &= [ (\dd f_1, \dd X^a_1), (\dd f_2, \dd X^a_2) ]\,, \\
\text{with}\quad
\dd f & = \dd X_1^b \dd D_b \dd f_2 - \dd X_2^b \dd D_b \dd f_1\,, \\
\dd X^a &= \dd X_1^b \dd D_b \dd X_2^a - \dd X_2^b \dd D_b \dd X_1^a\,.
\end{aligned}\end{equation}
Thus, the Spi symmetries form a Lie algebra \(\mf{spi}\) with the above Lie bracket structure. Note that if \(\dd X^a_1 = \dd X^a_2 = 0\) then \(\dd f = \dd X^a = 0\) --- the supertranslations form an infinite-dimensional abelian subalgebra \(\mathfrak s\). Further if \(\dd X^a_1 = 0\) and \(\dd X^a_2 \neq 0\) then \(\dd X^a = 0\), thus the supertranslations \(\mathfrak s\) are a Lie ideal in \(\mf{spi}\). The quotient algebra \(\mf{spi}/\mathfrak s\) is then isomorphic to the algebra of Killing fields on \(\ms H\) i.e. the Lorentz algebra \(\mathfrak{so}(1,3)\). Thus the Spi symmetry algebra has the structure of a semi-direct sum
\begin{equation}\label{eq:spi-semi-direct}
\mf{spi} \cong \mathfrak{so}(1,3) \ltimes \mathfrak s\,.
\end{equation}
The \(\mf{spi}\) algebra also has a preferred \(4\)-dimensional subalgebra \(\mathfrak t \) of \emph{translations}. These are obtained as the supertranslations \(\dd f\) satisfying the additional condition
\begin{equation}\label{eq:trans-cond}
\dd D_a \dd D_b \dd f + \dd h_{ab} \dd f = 0\,.
\end{equation}
The space of solutions to the above condition is indeed \(4\)-dimensional --- this can be seen from the argument in \cref{rem:trans-vectors} below, or by solving the equation in a suitable coordinate system on \(\ms H\); see Eqs.~D.204 and D.205 of \cite{CD} or Eq.~C.12 of \cite{KP-GR-match}. Further from \cref{eq:spi-bracket} it can be verified that the Lie bracket of a translation with any other element of \(\mf{spi}\) is again a translation, that is, the translations \(\mathfrak t\) are a \(4\)-dimensional Lie ideal of \(\mf{spi}\).
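A minimal sketch of this count in the illustrative hyperboloidal coordinates of \cref{sec:AH}: one can check directly that the solutions of \cref{eq:trans-cond} are spanned by
\begin{equation}
\dd f = c_0 \sinh\tau + \cosh\tau \left( c_1 \sin\theta\cos\phi + c_2 \sin\theta\sin\phi + c_3 \cos\theta \right)\,,
\end{equation}
that is, by \(\sinh\tau\) and \(\cosh\tau\) times the three \(\ell = 1\) spherical harmonics, which is indeed a \(4\)-parameter family.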
\begin{remark}[Translation vectors at \(i^0\)]\label{rem:trans-vectors}
Let \(\dd v^a\) be a direction-independent vector at \(i^0\), and write \(\dd v^a = \dd f \dd \eta^a + \dd f^a\) where \(\dd \eta_a \dd f^a = 0\). Then, since \(\dd v^a\) is direction-independent we have
\begin{equation}
0 = \dd\partial_a \dd v_b = \dd D_a \dd f_b + \dd h_{ab} \dd f + \dd\eta_b (\dd D_a \dd f - \dd f_a)\,,
\end{equation}
which then implies \(\dd f_a = \dd D_a \dd f\) and that \(\dd f\) satisfies \cref{eq:trans-cond}. Thus, any vector \(\dd v^a \in Ti^0\) gives rise to a Spi-translation in \(\mathfrak t\). Conversely, given any translation \(\dd f \in \mathfrak t\), the vector at \(i^0\) defined by (note the sign difference in the hyperboloidal component relative to \cref{eq:supetr-vec-field})
\begin{equation}\label{eq:trans-vector-defn}
\dd v^a \defn \dd f \dd \eta^a + \dd D^a \dd f\,,
\end{equation}
is direction-independent i.e., \(\dd v^a \in Ti^0\). Thus, the Spi-translations \(\mathfrak t\) can be represented by vectors in \(Ti^0\).
\end{remark}
\begin{remark}[Conformal transformation of Spi symmetries]\label{rem:conf-symm}
Let \((\dd f, \dd X^a)\) be a Spi symmetry defined by a vector field \(\xi^a\) as above, i.e.,
\begin{equation}
\dd X^a \defn \lim_{\to i^0} \Omega^{-\nfrac{1}{2}} \xi^a \eqsp \dd f \defn \lim_{\to i^0} \Omega^{-1} \xi^a \eta_a\,.
\end{equation}
For a fixed \(\xi^a\), consider the change in the conformal factor \(\Omega \mapsto \omega \Omega\). Then, from \cref{rem:conf} we have the transformations
\begin{equation}
\dd X^a \mapsto \dd X^a \eqsp \dd f \mapsto \dd f + \tfrac{1}{2} \lie_{\dd X}\dd \alpha\,.
\end{equation}
Note that a pure supertranslation \((\dd f, \dd X^a = 0)\) is conformally-invariant, while a ``pure Lorentz'' symmetry \((\dd f = 0, \dd X^a)\) is not invariant but shifts by a supertranslation given by \(\tfrac{1}{2} \lie_{\dd X}\dd \alpha\). This further reflects the semi-direct structure of the \(\mf{spi}\) algebra given in \cref{eq:spi-semi-direct}.
\end{remark}
\begin{center}* * *\end{center}
To find the charge corresponding to the Spi-symmetries we need to evaluate the symplectic current \cref{eq:sympcurrent-H-simplified} when the perturbation denoted by \(\delta_2\) is generated by a Spi-symmetry. So we now calculate the perturbations $\delta_{(\dd f, \dd X)} \dd{E}$ and $\delta_{(\dd f, \dd X)} \dd{K}$ in the gravitational potentials corresponding to the metric perturbation \cref{eq:lin-diffeo}.
The potentials \(\dd E\) and \(\dd K_{ab}\) are defined in terms of (a rescaled) limit of \(S_{ab} \) by \cref{eq:potentials-defn}. Consider then the change in $S_{ab}$ under the perturbation \cref{eq:lin-diffeo}. The second term on the right-hand-side of \cref{eq:lin-diffeo} is a linearized conformal transformation (see \cref{rem:conf}) with \(\dd \alpha = - 2 \dd f\). Thus, the change in \(\dd E\) and \(\dd K_{ab}\) induced by this linearized conformal transformation is given by (see \cref{rem:conf-GR-fields})
\begin{equation} \label{eq:supetr-deltaE-deltaK}
\delta_{\dd{f}} \dd{E} = 0 \eqsp \delta _{\dd{f}} \dd{K}_{ab} = 4 (\dd D_{a}\dd D_{b} \dd{f} + \dd{f} \dd{h}_{ab})\,.
\end{equation}
The first term on the right-hand-side of \cref{eq:lin-diffeo} is a linearized diffeomorphism and, since \(S_{ab}\) is a local and covariant functional of \(g_{ab}\), the corresponding perturbation in \(S_{ab}\) is \(\lie_\xi S_{ab}\). Explicitly computing the Lie derivative, using \cref{eq:X-defn,eq:lorentz-cond}, gives
\begin{equation}
\delta_{\dd X} \dd S_{ab} = \lim_{\to i^{0} } \Omh \lie_{\xi} S_{ab}=\dd X^{c} \dd{\partial}_{c} \dd{S}_{ab} +2 \dd{S}_{c(a}\dd{\eta}_{b)}\dd X^{c}+ 2 \dd{S}_{c(a}\dd{\partial}_{b)}\dd X^{c}\,.
\end{equation}
Then, from the definition of the gravitational potentials \cref{eq:potentials-defn} we have
\begin{equation}
\delta_{\dd{X}} \dd{E} = \lie_{\dd X} \dd{E} \eqsp \delta_{\dd{X}} \dd{K}_{ab} = \lie_{\dd{X}} \dd{K}_{ab}\,.
\end{equation}
As a result, under a general Spi symmetry parametrized by $(\dd{f}, \dd{X}^{a})$ we have
\begin{equation} \label{eq:spi-changes}
\delta_{(\dd f, \dd X)} \dd{E} = \lie_{\dd X}\dd E \eqsp \delta_{(\dd f, \dd X)} \dd{K}_{ab} = \lie_{\dd X} \dd{K}_{ab} + 4 (\dd D_{a}\dd D_{b} \dd{f} + \dd{h}_{ab} \dd{f})\,.
\end{equation}
Note that our parity condition \cref{eq:parity-E} does not place any further restrictions on these symmetries.\\
\begin{remark}[Special choices of conformal factor]\label{rem:GR-gauge-choice}
The freedom in the conformal factor can be used to impose further restrictions on the potential \(\dd K_{ab}\). We note the following two conditions that have been used in prior work.
\begin{enumerate}
\item From \cref{eq:conf-K} we see that \(\dd K \defn \dd h^{ab}\dd K_{ab}\) transforms as
\begin{equation}
\dd K \mapsto \dd K - 2 (\dd D^2 \dd\alpha + 3 \dd\alpha)\,.
\end{equation}
Now, given a choice of conformal factor so that \(\dd K \neq 0\), we can always solve a linear hyperbolic equation for \(\dd\alpha\) on \(\ms H\) and choose a new conformal factor (as in \cref{rem:conf}) so that in the new conformal completion \(\dd K = 0\). This is the choice made in \cite{CD,CDV-Lorentz,Tro}. With this restriction on \(\dd K\) we see from \cref{eq:spi-changes} that the allowed supertranslations are reduced to functions \(\dd f\) which satisfy
\begin{equation}\label{eq:st-CD}
\dd D^2 \dd f + 3 \dd f = 0\,.
\end{equation}
\item Consider the restricted class of spacetimes where \(\dd B_{ab} = 0\). Then, the tensor \(\dd K_{ab}\) can be written in terms of a scalar potential \(\dd k\) as in \cref{eq:K-potential}. Comparing \cref{eq:K-potential} with \cref{eq:conf-K} we see that we can choose \(\dd \alpha = \nfrac{1}{2} \dd k\). Then, we can choose a new conformal factor (as in \cref{rem:conf}) so that in the new conformal completion \(\dd K_{ab} = 0\). This is the choice made in \cite{AH,Ash-in-Held}. With this restriction we see from \cref{eq:spi-changes} that the allowed supertranslations are reduced to the translation algebra (\cref{eq:trans-cond}), and the full asymptotic symmetry algebra reduces to the Poincar\'e algebra.
\end{enumerate}
\end{remark}
It is not clear, a priori, what such special choices of conformal factor imply at null infinity. From the point of view of matching the Spi symmetries and charges to the ones defined on null infinity, such choices of conformal factors might not be convenient. So we will \emph{not} impose any such conditions on the conformal factor in our analysis and work with the full \(\mf{spi}\) algebra. However, we will argue that our results reduce to those of \cite{AH,CD} when the corresponding restrictions are imposed.
\section{Spi-charges}
\label{sec:spi-charges}
In this section we now compute the charges associated with the Spi-symmetries. Following our strategy we consider the symplectic current \(\pb{\dd\omega}\) where one of the perturbations, \(\delta_2\), is a perturbation generated by an asymptotic Spi-symmetry represented by \((\dd f, \dd X^a)\). Using \cref{eq:sympcurrent-H-simplified,eq:spi-changes} we have
\begin{equation}\label{eq:omega-symm}
\pb{\dd\omega}(\delta g, \delta_{(\dd f, \dd X)} g) = \frac{1} {64\pi} \dd\varepsilon_3 \left[ \delta\dd K \lie_{\dd X} \dd E - \delta \dd E \lie_{\dd X} \dd K - 4 \delta \dd E (\dd D^2 \dd f + 3 \dd f) \right]\,.
\end{equation}
We show next that, under suitable conditions, the above expression can be written as a total derivative on \(\ms H\), that is,
\begin{equation}\label{eq:omega-Q}
\pb{\dd\omega}(\delta g, \delta_{(\dd f, \dd X)} g) = - \dd\varepsilon_3~ \dd D^a \dd Q_a(g; \delta g; (\dd f, \dd X))\,,
\end{equation}
where \(\dd Q_a\) is a local and covariant functional of its arguments on \(\ms H\).
It will be convenient to do this separately for supertranslations and Lorentz symmetries. In \cref{sec:st}, we will find that for supertranslations the functional \(\dd Q_a\) is integrable, and defines the \emph{supermomentum} charges on cross-sections \(S\) of \(\ms H\). Then we show in \cref{sec:lorentz-charge} that for Lorentz symmetries \(\dd Q_a\) is not integrable, in general. In this case we will adopt the prescription of Wald and Zoupas with suitable modifications to define an integrable charge for Lorentz symmetries. Finally, as noted in \cref{rem:conf-symm}, a ``pure Lorentz'' symmetry is not conformally-invariant but shifts by a supertranslation. Similarly, we show in \cref{sec:conf-charges} that the Lorentz charge shifts by a supertranslation charge under conformal transformations, in accord with the semi-direct structure of the \(\mf{spi}\) algebra (\cref{eq:spi-semi-direct}).
\subsection{Charges for supertranslations: Spi-supermomentum}
\label{sec:st}
To define the charge for the supertranslations consider \cref{eq:omega-symm} for a pure supertranslation \((\dd f, \dd X^a = 0)\)
\begin{equation}\begin{aligned}
\pb{\dd\omega}(\delta g, \delta_{\dd f} g) & = - \frac{1}{16\pi} \dd\varepsilon_3~ \delta \dd E (\dd D^2 \dd f + 3 \dd f)\,, \\
& = - \frac{1}{16\pi} \dd\varepsilon_3 \dd D^{a} \delta (\dd{E} \dd D_{a} \dd{f} - \dd{f}\dd D_{a}\dd{E})\,,
\end{aligned}\end{equation}
where the second line uses \cref{eq:box-E}, which gives \(\dd D^2\, \delta\dd E = -3\, \delta\dd E\), so that \(\dd D^{a} \delta(\dd{E} \dd D_{a} \dd{f} - \dd{f}\dd D_{a}\dd{E}) = \delta\dd E\, \dd D^2 \dd f - \dd f\, \dd D^2 \delta\dd E = \delta\dd E\, (\dd D^2 \dd f + 3 \dd f)\). In this case, the symplectic current can be written in the form \cref{eq:omega-Q} where \(\dd Q_a\) is manifestly integrable. Thus, we define the Spi \emph{supermomentum} charge at a cross-section \(S\) of \(\ms H\) by
\begin{equation} \label{eq:st-charge}
\mathcal{Q}[\dd{f}; S] = \frac{1}{16\pi} \int_S \dd\varepsilon_2~ \dd u^a (\dd{E} \dd D_{a} \dd{f} - \dd{f}\dd D_{a}\dd{E})\,.
\end{equation}
Here we have chosen the charge to vanish on Minkowski spacetime where \(\dd E = 0\) (see \cref{eq:Mink-stuff}). The corresponding flux is given by (using \cref{eq:box-E})
\begin{equation}\label{eq:st-flux}
\mathcal{F}[\dd{f};\Delta \ms H] \defn \mathcal{Q}[\dd{f}; S_2] - \mathcal{Q}[\dd{f}; S_1] = - \frac{1}{16\pi}\int_{\Delta \ms H} \dd{\varepsilon}_{3}~ \dd{E} (\dd D^{2} \dd{f} + 3\dd{f})\,.
\end{equation}
When \(\dd f \in \mathfrak t\) is a Spi-translation the charge \cref{eq:st-charge} can be written in an alternative form as follows: Using \cref{eq:EB-potentials,eq:box-E} we have the identity
\begin{equation}\label{eq:st-tr-conversion}\begin{aligned}
-\dd{f} \dd D_{a} \dd{E} + \dd{E} \dd D_{a} \dd{f} & = 2 \dd{E}_{ab} \dd D^{b} \dd{f} + \dd D^{b} \left( \dd D_{[a} \dd{E} \dd D_{b]} \dd{f} \right) \\
&\quad - \tfrac{1}{2} \left[ \dd D_a \dd E (\dd D^2 \dd f + 3 \dd f ) - \dd D^b \dd E (\dd D_a \dd D_b \dd f + \dd h_{ab} \dd f) \right] \,.
\end{aligned}\end{equation}
The second term on the right-hand-side corresponds to an exact \(2\)-form and vanishes upon integrating on \(S\), while the last line vanishes for translations due to \cref{eq:trans-cond}. Hence, the charge for any translation \(\dd f \in \mathfrak t\) can be written as
\begin{equation} \label{eq:tr-charge}
\mathcal Q[\dd{f}; S] = \frac{1}{8\pi} \int_S \dd\varepsilon_2~ \dd u^{a} \dd{E}_{ab} \dd D^{b} \dd{f}\,,
\end{equation}
which reproduces the charge for translations given in \cite{AH}. Using \cref{eq:trans-cond}, the flux of translations vanishes across any region \(\Delta\ms H\) and thus the translation charge is independent of the choice of cross-section \(S\). Using the isomorphism between Spi-translations \(\dd f\) and vectors \(\dd v^a\) in \(Ti^0\) (see \cref{rem:trans-vectors}), the translation charge in \cref{eq:tr-charge} defines a \(4\)-momentum vector \(\dd P^a\) at \(i^0\) such that
\begin{equation}
\dd P^a \dd v_a = \mathcal Q[\dd{f}; S]\,.
\end{equation}
Note that this relation is well-defined at \(i^0\) since the translation charge is independent of the cross-section \(S\). The vector \(\dd P^a\) is precisely the ADM \(4\)-momentum at \(i^0\) \cite{AMA-spi-3+1} and also coincides with the limit to \(i^0\) of the Bondi \(4\)-momentum on null infinity \cite{Ash-Mag-Ash} (the corresponding result for all the supertranslation charges was proven in \cite{KP-GR-match}).
The charge expression \cref{eq:st-charge} agrees with the results of Comp\`ere and Dehouck \cite{CD}. Note that when the conformal factor is chosen so that \(\dd K = 0\) the supertranslation algebra is reduced to the subalgebra satisfying \cref{eq:st-CD} and the flux corresponding to such supertranslations vanishes across any region \(\Delta\ms H\). As was shown in \cite{KP-GR-match}, to relate the supertranslation symmetries and charges at spatial infinity to the ones on null infinity, it is sufficient that the \emph{total} flux of these charges vanishes on \emph{all} of \(\ms H\),\footnote{To make this rigorous it is necessary to additionally complete \(\ms H\) to include the null directions at \(i^0\). This construction is detailed in \cite{KP-EM-match,KP-GR-match}.} and the flux need not vanish across some local region \(\Delta\ms H\). Thus the restriction on the conformal factor imposing \(\dd K = 0\) is not necessary.
Note that in \cite{KP-GR-match} the supermomentum charges at spatial infinity were related to those on null infinity using the Ashtekar-Hansen expression \cref{eq:tr-charge} for \emph{all} supertranslations (even those which are not translations), instead of the expression \cref{eq:st-charge}. On \(\ms H\), these charge expressions differ by the integral of the last line of \cref{eq:st-tr-conversion} over some cross-section \(S\). However, the regularity conditions on \(\dd E\) and \(\dd f\) used in \cite{KP-GR-match} as the spatial directions \(\vec\eta\) limit to null directions at \(i^0\) ensure that the additional terms vanish (see, for instance, Appendix~D of \cite{KP-GR-match}) and both expressions yield the same \emph{finite} supermomenta in null directions which further equal the supermomenta at null infinity. Thus, the result of \cite{KP-GR-match} can also be derived using the expression \cref{eq:st-charge} for the supertranslation charges.
\subsection{Lorentz charges with \(\dd B_{ab} = 0\)}
\label{sec:lorentz-charge}
Next we will obtain a charge formula for the Lorentz symmetries. As emphasized in \cite{AH,Ash-in-Held}, to obtain such a charge formula one needs to consider the ``subleading'' piece of the magnetic part of the Weyl tensor. Thus, in the following we will make the additional assumption that \(\dd B_{ab} = 0\) and that the ``subleading'' magnetic part \(\dd \beta_{ab}\) defined in \cref{eq:beta-defn} exists. However, in \cref{sec:new-beta} we show how the restriction that \(\dd B_{ab}\) vanishes can be lifted to obtain a charge for the Lorentz symmetries.
For a ``pure Lorentz'' symmetry \((\dd f = 0, \dd X^a)\) we have from \cref{eq:omega-symm}
\begin{equation} \label{symp-lorentz}
\pb{\dd\omega}(\delta g, \delta_{\dd X}g) = \frac{1}{64 \pi} \dd{\varepsilon}_{3} ( \lie_{\dd X} \dd{E} \delta \dd{K} - \lie_{\dd X} \dd{K} \delta \dd{E})\,.
\end{equation}
We now want to write this as a total derivative of the form \cref{eq:omega-Q}. To do so, consider the following tensor
\begin{equation}\label{eq:W-defn}
\dd{W}_{ab} \defn \dd{\beta}_{ab} + \tfrac{1}{8} \dd{\varepsilon}_{cd(a} \dd D^{c} \dd{E} \dd K^d{}_{b)} - \tfrac{1}{16} \dd{\varepsilon}_{abc}\dd{K}\dd D^{c}\dd{E}\,.
\end{equation}
Using \cref{eq:div-beta,eq:vanishing-curl-K,eq:div-K}, we obtain
\begin{equation}
\dd D^a \dd W_{ab} = 0 \eqsp \dd h^{ab} \dd W_{ab} = 0\,.
\end{equation}
Note that \(\dd W_{ab}\) is not a symmetric tensor. Further using \cref{eq:W-defn,eq:box-X} we have
\begin{equation} \label{div-eq}
\dd D^{a}[\dd{W_{ab}}\dv{\dd{X}}^{b}] = \tfrac{1}{8} \dd{X}^{a} \dd D_{a}\dd{E} \dd{K}\,,
\end{equation}
where $\dv{\dd{X}}^{a} \defn \f{1}{2} \dd{\varepsilon}^{abc}\dd D_{b} \dd{X}_{c}$ is the ``dual'' Killing vector field to \(\dd X^a\) (see \cref{eq:dual-X-defn}). Therefore, \cref{symp-lorentz} can be written as
\begin{equation}\label{eq:symp-lorentz-W}
\pb{\dd\omega}(\delta g, \delta_{\dd X}g) = \frac{1}{8 \pi} \dd{\varepsilon}_3 \dd D^a \left[ \delta \dd W_{ab} \dv X^b - \tfrac{1}{8} \delta \dd E \dd K \dd X_{a} \right]\,,
\end{equation}
which is again of the form \cref{eq:omega-Q}. However, the functional \(\dd Q_a\) in this case is not integrable, in general. To see this, consider
\begin{equation}\label{eq:Q-lor}
\int_S \dd\varepsilon_2~ \dd u^a \dd Q_a [\delta g ; \dd X] = - \frac{1}{8 \pi} \int_S \dd\varepsilon_2~ \dd u^a \left[ \delta \dd W_{ab} \dv X^b - \tfrac{1}{8} \delta \dd E \dd K \dd X_{a} \right]\,,
\end{equation}
and compute an antisymmetrized second variation to get
\begin{equation}\label{eq:integrability}\begin{aligned}
\int_S \dd\varepsilon_2 \dd u^a \big(\delta_1 \dd Q_a [\delta_2 g ; \dd X] - \delta_2 \dd Q_a [\delta_1 g ; \dd X] \big) & = \tfrac{1}{64\pi} \int_S \dd\varepsilon_2 \dd u^a \dd X_a \left(\delta_1 \dd K \delta_2 \dd E - \delta_2 \dd K \delta_1 \dd E \right) \\
& = - \int_S \dd X \cdot \pb{\dd\omega}(\delta_1 g, \delta_2 g)\,.
\end{aligned}\end{equation}
If \cref{eq:Q-lor} were integrable then the above antisymmetrized second variation would vanish for all perturbations and all cross-sections \(S\). However, since we allow arbitrary perturbations of both \(\dd E\) and \(\dd K_{ab}\), the expression on the right-hand-side vanishes if and only if the Lorentz vector field happens to be tangent to the cross-section \(S\). But a general Lorentz vector field is not tangent to any cross-section of \(\ms H\); in particular, Lorentz boosts do not preserve any cross-section of \(\ms H\). Thus, the expression \cref{eq:Q-lor} is not integrable and cannot be used to define the charge of Lorentz symmetries.
To remedy this, note that \cref{eq:integrability} is similar to the integrability criterion derived by Wald and Zoupas (see Eq.~16 of \cite{WZ}). Wald and Zoupas further developed a general prescription to define an integrable charge (``conserved quantity'') which we now adapt to our case. Let \(\dd\Theta(g;\delta g)\) be a \(3\)-form on \(\ms H\) which is a symplectic potential for the pullback of the symplectic current (\cref{eq:sympcurrent-H-simplified}) to $\ms H$, that is,
\begin{equation}\label{eq:Theta-symppot}
\pb{\dd\omega}(g; \delta_{1} g, \delta_{2} g) =\delta_{1} \dd{\Theta}(g; \delta_{2}g) - \delta_{2} \dd{\Theta}(g; \delta_{1}g)\,,
\end{equation}
for all backgrounds and all perturbations. We also require that the choice of \(\dd\Theta\) satisfy the following conditions
\begin{enumerate}
\item \(\dd\Theta\) is locally and covariantly constructed out of the dynamical fields \((\dd E, \dd K_{ab})\), their perturbations, and finitely many of their derivatives, along with the ``universal background structure'' \(\dd h_{ab}\) present on \(\ms H\).
\item \(\dd\Theta\) is independent of any arbitrary choices made in specifying the background structure, in particular, \(\dd\Theta\) is conformally-invariant.
\item $\dd\Theta(g;\delta g) = 0$ for Minkowski spacetime for \emph{all} perturbations $\delta g$.
\end{enumerate}
In analogy to the Wald-Zoupas prescription we define the charge \(\mathcal Q[\dd X^a; S]\) associated with a Lorentz symmetry through
\begin{equation}\label{eq:WZ-charge}
\delta \mathcal Q[\dd X^a ; S] \defn \int_S \dd\varepsilon_2 \dd u^a \dd Q_a(\delta g; \dd X^a) + \int_S \dd X \cdot \dd\Theta(\delta g)\,.
\end{equation}
From \cref{eq:integrability,eq:Theta-symppot} it follows that the above defining relation is integrable and thus defines a charge \(\mathcal Q[\dd X^a ; S]\) once we pick a reference solution where the charge vanishes.\\
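Explicitly, taking the antisymmetrized second variation of the right-hand-side of \cref{eq:WZ-charge} and using \cref{eq:integrability,eq:Theta-symppot} we have
\begin{equation*}\begin{aligned}
(\delta_1 \delta_2 - \delta_2 \delta_1) \mathcal Q[\dd X^a; S] & = \int_S \dd\varepsilon_2 \dd u^a \big( \delta_1 \dd Q_a[\delta_2 g; \dd X] - \delta_2 \dd Q_a[\delta_1 g; \dd X] \big) + \int_S \dd X \cdot \big( \delta_1 \dd\Theta(\delta_2 g) - \delta_2 \dd\Theta(\delta_1 g) \big) \\
& = - \int_S \dd X \cdot \pb{\dd\omega}(\delta_1 g, \delta_2 g) + \int_S \dd X \cdot \pb{\dd\omega}(\delta_1 g, \delta_2 g) = 0\,.
\end{aligned}\end{equation*}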
For the \(3\)-form \(\dd\Theta\) we choose
\begin{equation} \label{eq:Theta-choice}
\dd{\Theta}(g;\delta g) \defn -\frac{1}{64\pi}\dd{\varepsilon}_{3} \dd{E} \delta \dd{K} \,.
\end{equation}
It can be verified that this choice satisfies all the criteria listed below \cref{eq:Theta-symppot}. In particular, \(\dd\Theta\) is conformally-invariant, and for Minkowski spacetime \(\dd E = 0\) (\cref{eq:Mink-stuff}) and so \(\dd\Theta = 0\) on Minkowski spacetime for \emph{all} perturbations. This choice for \(\dd\Theta\) is not unique, but we will argue in \cref{sec:amb} that the ambiguity in the choice of \(\dd\Theta\) does not affect our final charge expression.
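Indeed, \cref{eq:omega-symm,eq:integrability} show that the pullback \cref{eq:sympcurrent-H-simplified} can be written as \(\pb{\dd\omega}(\delta_1 g, \delta_2 g) = \tfrac{1}{64\pi} \dd\varepsilon_3 ( \delta_1 \dd K\, \delta_2 \dd E - \delta_1 \dd E\, \delta_2 \dd K )\), so for the choice \cref{eq:Theta-choice} one checks directly that
\begin{equation*}
\delta_1 \dd\Theta(g; \delta_2 g) - \delta_2 \dd\Theta(g; \delta_1 g) = - \frac{1}{64\pi} \dd\varepsilon_3 \big( \delta_1 \dd E\, \delta_2 \dd K - \delta_2 \dd E\, \delta_1 \dd K \big) = \pb{\dd\omega}(\delta_1 g, \delta_2 g)\,,
\end{equation*}
as required by \cref{eq:Theta-symppot}.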
With the choice \cref{eq:Theta-choice} and \cref{eq:Q-lor,eq:WZ-charge}, we have
\begin{equation}
\delta \mathcal Q[\dd{X}^{a};S] = - \frac{1}{8 \pi} \int_S\dd{\varepsilon}_{2}~ \dd u^{a} \delta [ \dd{W}_{ab} \dv{\dd{X}}^{b} - \tfrac{1}{8} \dd{K} \dd{E} \dd{X}_{a}]\,.
\end{equation}
We define the unperturbed charge by picking the reference solution to be Minkowski spacetime which satisfies \(\dd E = 0\) and \(\dd \beta_{ab} = 0\) (\cref{eq:Mink-stuff}). Thus, we have the charge
\begin{equation}\label{eq:lorentz-charge}
\mathcal Q[\dd{X}^{a};S] = - \frac{1}{8 \pi} \int_S\dd{\varepsilon}_{2}~ \dd u^{a} [ \dd{W}_{ab} \dv{\dd{X}}^{b} - \tfrac{1}{8} \dd{K} \dd{E} \dd{X}_{a}]\,.
\end{equation}
The corresponding flux of the Lorentz charges is given by
\begin{equation}\label{eq:lorentz-flux}
\mathcal{F}[\dd{X}^{a}, \Delta \ms H] = - \frac{1}{64\pi}\int_{\Delta \ms H} \dd\varepsilon_3~ \dd{E} \lie_{\dd X}\dd{K}\,.
\end{equation}
Note that the flux is essentially given by \(\mathcal{F}[\dd{X}^{a}, \Delta \ms H] = \int_{\Delta\ms H} \dd\Theta(g;\delta_{\dd X}g)\) in analogy to the Wald-Zoupas prescription (see Eq.~32 of \cite{WZ}).\\
When the conformal factor is chosen so that \(\dd K_{ab} = 0\), the Lorentz charge reduces to
\begin{equation}\label{eq:lorentz-charge-AH}
\mathcal Q [\dd X^a; S] = - \f{1}{8 \pi } \int_S \dd\varepsilon_{2}~ \dd u^{a} \dd\beta_{ab} \dv X^b\,,
\end{equation}
which is the expression given by \cite{AH}. Note that when the conformal factor is chosen such that \(\dd K = 0\), the expression \cref{eq:Q-lor} is manifestly integrable and our ``correction term'' \(\dd\Theta\) (\cref{eq:Theta-choice}) vanishes. In both these cases, the flux of the Lorentz charges vanishes across any region \(\Delta\ms H\), i.e., the Lorentz charges are identically conserved. Further, since the vector fields \(\dd X^a\) correspond precisely to infinitesimal Lorentz transformations \(\dd\Lambda_{ab}\) in \(Ti^0\) (see \cref{eq:X-Lambda}), the charge defines an ``angular momentum'' tensor \(\dd J^{ab}\) at \(i^0\) through
\begin{equation}
\dd J^{ab} \dd\Lambda_{ab} = \mathcal Q [\dd X^a; S]\,,
\end{equation}
where the right-hand-side is independent of the cross-section since the charge is conserved.
\subsection{Transformation of charges under conformal changes}
\label{sec:conf-charges}
We now consider the transformation of the charges and fluxes for a Spi symmetry under changes of the choice of conformal factor as discussed in \cref{rem:conf}.
Consider a pure supertranslation symmetry \((\dd f, \dd X^a = 0)\). As shown in \cref{rem:conf-symm}, a pure supertranslation is conformally-invariant. Further, from \cref{rem:conf-GR-fields} the potential \(\dd E\) is also conformally-invariant. Thus, the charge and flux of supertranslations in \cref{eq:st-charge,eq:st-flux} are also conformally-invariant.
However, a ``pure Lorentz'' symmetry \((\dd f = 0, \dd X^a)\) is not conformally-invariant (see \cref{rem:conf-symm}), and hence we expect that the charge and flux of a Lorentz symmetry must transform nontrivially under changes of the conformal factor. Consider first the flux of Lorentz charges given by \cref{eq:lorentz-flux}. Using the transformation of \(\dd K_{ab}\) (\cref{eq:conf-K}) we see that this flux expression transforms as
\begin{equation}
\mathcal F[\dd X^a; \Delta \ms H ] \mapsto \mathcal F[\dd X^a; \Delta \ms H ] + \frac{1}{32\pi} \int_{\Delta\ms H} \dd\varepsilon_3 \dd E (\dd D^2 \Lie_{\dd X} \dd\alpha + 3 \Lie_{\dd X} \dd\alpha)\,.
\end{equation}
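Here we have used that \(\lie_{\dd X}\) commutes with the derivative operator \(\dd D_a\) on \(\ms H\), since \(\dd X^a\) is a Killing field of \(\dd h_{ab}\), so that \(\lie_{\dd X} (\dd D^2 \dd\alpha + 3 \dd\alpha) = \dd D^2 \lie_{\dd X} \dd\alpha + 3 \lie_{\dd X} \dd\alpha\).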
Comparing the second term on the right-hand-side to \cref{eq:st-flux}, we see that it is precisely the flux of a supertranslation given by \((-\nfrac{1}{2} \Lie_{\dd X} \dd\alpha)\). Thus, under a change of conformal factor the Lorentz flux shifts by the flux of a supertranslation
\begin{equation}\label{eq:conf-lorentz-flux}
\mathcal F[\dd X^a; \Delta \ms H ] \mapsto \mathcal F[\dd X^a; \Delta \ms H ] + \mathcal F[-\nfrac{1}{2} \Lie_{\dd X}\dd\alpha; \Delta\ms H]\,.
\end{equation}
One can similarly verify that the Lorentz charge \cref{eq:lorentz-charge} also shifts by the charge of a supertranslation. The explicit computation is a bit tedious and is presented in \cref{sec:conf-lorentz-charge}. However, we can derive the transformation of the Lorentz charge by a more general argument which we present below. This argument also holds in the more general case when \(\dd B_{ab} \neq 0\) considered in \cref{sec:new-beta} below.
From the transformation of the flux \cref{eq:conf-lorentz-flux}, we can deduce that the Lorentz charge expression \cref{eq:lorentz-charge} must transform as
\begin{equation}
\mathcal{Q}[\dd X^a; S] \mapsto \mathcal Q[\dd X^a; S] + \mathcal Q[-\nfrac{1}{2}\lie_{\dd X}\dd \alpha; S] + \int_S \dd\varepsilon_2 \dd u^a \dd \mu_a[\dd\alpha]\,,
\end{equation}
where the second term on the right-hand-side is the charge of a supertranslation \((-\nfrac{1}{2} \Lie_{\dd X}\dd\alpha)\) and the third term is a possible additional term determined by a covector \(\dd\mu_a\) which depends linearly on \(\dd\alpha\) and is divergence-free, \(\dd D^a \dd\mu_a[\dd\alpha] = 0\) for \emph{all} \(\dd\alpha\). Since \(\dd\alpha\) is a free function on \(\ms H\), we can apply \cref{thm:Wald} with \(\dd\alpha\) as the ``dynamical field''. Thus, from \cref{eq:Wald-hyp} we conclude that the final integral above vanishes, and that the Lorentz charge shifts by the charge of a supertranslation \((-\nfrac{1}{2} \Lie_{\dd X}\dd\alpha)\):
\begin{equation}\label{eq:lor-charge-trans-summ}
\mathcal{Q}[\dd X^a; S] \mapsto \mathcal Q[\dd X^a; S] + \mathcal Q[-\nfrac{1}{2}\lie_{\dd X}\dd \alpha; S]\,.
\end{equation}
If we restrict to the choice of conformal factor where \(\dd K_{ab} = 0\), so that the asymptotic symmetries are reduced to the Poincar\'e algebra and \(\dd\alpha\) is a Spi-translation satisfying \cref{eq:trans-cond}, then \cref{eq:lor-charge-trans-summ} reproduces the transformation law given in Eq.~29 of \cite{AH} and Eq.~6.8 of \cite{Ash-in-Held}.
Consider the charge of any Spi-symmetry represented by \((\dd f, \dd X^a)\); under a conformal transformation the same Spi-symmetry is now represented by \((\dd f + \nfrac{1}{2} \lie_{\dd X}\dd\alpha, \dd X^a)\) (see \cref{rem:conf-symm}). The total charge of the Spi-symmetry transforms as
\begin{equation}\label{eq:total-charge-inv}\begin{aligned}
\mathcal{Q}[\dd f; S] + \mathcal{Q}[\dd X^a; S] \mapsto & \mathcal{Q}[\dd f + \nfrac{1}{2} \lie_{\dd X}\dd\alpha; S] + \mathcal{Q}[\dd X^a; S] + \mathcal Q[-\nfrac{1}{2}\lie_{\dd X}\dd \alpha; S]\,, \\
& = \mathcal{Q}[\dd f; S] + \mathcal{Q}[\dd X^a; S]\,,
\end{aligned}\end{equation}
that is, the charge of any Spi-symmetry is independent of the choice of conformal factor --- the change in the function \(\dd f\) representing the symmetry is exactly compensated by the change in the Lorentz charge given in \cref{eq:lor-charge-trans-summ}.
\section{Discussion}
\label{sec:disc}
In this paper, we analyzed the asymptotic symmetries and the corresponding charges for asymptotically-flat spacetimes at spatial infinity \(i^0\) using the Ashtekar-Hansen formalism, without any restrictions on the choice of the conformal factor at spatial infinity, which were imposed in previous analyses. Using the covariant phase space, we considered the direction-dependent limit of the symplectic current of vacuum general relativity to spatial infinity. Using the pullback of this limit of the symplectic current to the space of spatial directions \(\ms H\) at spatial infinity, we obtained expressions for charges corresponding to all asymptotic symmetries. We rederived the known expressions for the supertranslation charges, and obtained a more general expression for the Lorentz charge when the conformal factor is completely unrestricted. In this case, we used a Wald-Zoupas type correction to make the Lorentz charge integrable, which also ensures that this charge transforms correctly under the action of a supertranslation, or equivalently, that the charge of a general Spi-symmetry is conformally-invariant.
The main motivation behind our analysis is to eventually relate the Lorentz charges at spatial infinity to the ones defined on null infinity. In this context, the Lorentz charge expressions would have to be matched in the ``same'' choice of conformal factor at both null infinity and spatial infinity, and it is not clear what the restrictions on the conformal factor at spatial infinity placed in previous works imply at null infinity. Thus, we hope that our more general expression for the Lorentz charge at spatial infinity will be more useful to repeat the matching analysis for the case of Lorentz symmetries that was done previously for Maxwell theory \cite{KP-EM-match} and supertranslations in general relativity \cite{KP-GR-match}. If this works out as expected, this would imply that the full BMS group at past null infinity is matched to the full BMS group at future null infinity and, moreover, that the incoming fluxes of all BMS symmetries through past null infinity are equal to the outgoing fluxes of the antipodally identified BMS symmetries through future null infinity. This would then prove the existence of infinitely many conservation laws, one for each generator of the BMS group, in classical gravitational scattering in asymptotically-flat spacetimes, as anticipated by Strominger \cite{Stro-CK-match}.
Another avenue for future investigation would be to quantize the asymptotic fields on \(\ms H\) in the spirit of the asymptotic quantization program on null infinity \cite{Ashtekar:1987tt}, see also \cite{Alexander1984}. This could lead to the possibility of relating the asymptotic ``in-states'' on past null infinity to the ``out-states'' on future null infinity, similar to the matching conditions in the classical theory, and provide further insight into the structure of quantum scattering.
We also note that the asymptotic fields at spatial infinity in both Maxwell theory and general relativity are described by smooth tensor fields living on a unit hyperboloid $\ms H$. As is well known, \(\ms H\) is precisely the \(3\)-dimensional de Sitter spacetime. To prove the matching conditions for Maxwell and gravitational fields on \(\ms H\) with those on null infinity, \(\ms H\) was conformally-completed into a cylinder in the analysis of \cite{KP-GR-match,KP-EM-match}. It would be interesting to see if insights from the de Sitter/CFT correspondence \cite{dS-CFT} can be applied to develop a holographic understanding of electromagnetism and general relativity in asymptotically-flat spacetimes at spatial infinity, perhaps similar to \cite{Mink-CFT}.
\section*{Acknowledgements}
We thank \'Eanna \'E. Flanagan for helpful discussions and constant encouragement over the course of this work. IS would also like to thank D. Iozzo for help with \textsc{xAct}. This work is supported in part by the NSF grant PHY-1707800 to Cornell University. Some calculations used the computer algebra system \textsc{Mathematica} \cite{Mathematica}, in combination with the \textsc{xAct/xTensor} suite~\cite{JMM:xAct,MARTINGARCIA2008597}, and the Riemannian Geometry and Tensor Calculus package \cite{RGTC}.
We begin by showing how SCAMP clusters the iris data set of \cite{fisher1936use}.
The data matrix contains measurements of sepal length, sepal width,
petal length, and petal width for 150 irises of three types: setosa, versicolor, and virginica.
There are multiple ties due to rounding: only 43 petal length measurements are unique,
the maximum of the four variables.
Because of the numerous ties, SCAMP's noise phase has a substantial effect on the clustering.
Before running SCAMP, we first look at the distribution of dip test p-values for the data matrix.
Figure \ref{figure:irisPvalue} shows this distribution.
Based on this distribution, we see that choosing smaller values of $\alpha$ will not change
which columns are annotated: petal length and petal width have dip test p-values below $0.01$
every iteration.
We note, however, that more conservative $\alpha$ values will lead
to a sparser annotation forest, since the same $\alpha$ value is used to split a coordinate across
all depths of a partition tree.
We proceed with our default value of $\alpha = 0.25$.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./irisPvalueElbow.png}
\caption[Pvalues]{The iris data set features, ranked according to their mean
dip test p-value across 1000 noise applications. We see that
the p-values of sepal length and sepal width are sensitive to SCAMP's noise phase.
The plot shows that classical thresholds such as $\alpha =0.01$ (the dashed blue line) and $\alpha = 0.05$
(the dotted green line) will label the same two features as the more liberal threshold of $\alpha = 0.25$ (the solid red line).}\label{figure:irisPvalue}
\end{figure}
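The ranking in figure \ref{figure:irisPvalue} is straightforward to reproduce. The following Python sketch (not the reference SCAMP implementation) ranks the columns of a data matrix by their mean dip test p-value across repeated noise applications; it assumes the third-party \texttt{diptest} package, whose function \texttt{diptest.diptest} returns the dip statistic and its p-value, and it substitutes a simple uniform jitter for SCAMP's actual noise phase.
\begin{verbatim}
import numpy as np
import diptest  # assumed third-party package: pip install diptest

def rank_features_by_dip(X, n_noise_iters=1000, seed=0):
    # Mean dip-test p-value per column across noise applications.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pvals = np.empty((n_noise_iters, p))
    scale = 1e-2 * X.std(axis=0)  # per-column jitter scale
    for it in range(n_noise_iters):
        Xn = X + rng.uniform(-0.5, 0.5, size=X.shape) * scale
        for j in range(p):
            pvals[it, j] = diptest.diptest(Xn[:, j])[1]
    mean_pvals = pvals.mean(axis=0)
    return np.argsort(mean_pvals), mean_pvals
\end{verbatim}
Columns whose mean p-value falls below the chosen $\alpha$ are the ones SCAMP will annotate.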
Figure \ref{figure:irisFigure} summarizes the clusterings found across 500 SCAMP iterations with
$\alpha=0.25$, $m=25$, and the Gaussian noise parameter \eqref{scamp:GaussianNoise}
$\gamma = 4$.
Each iteration, SCAMP randomly samples $200$ candidate clusters from the annotation forest.
The cluster labels only describe an observation's petal width and petal length,
since sepal width and sepal length appear unimodal to SCAMP at level $\alpha=0.25$. The labels
determined by SCAMP appear useful in characterizing the differences between the three iris
species.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./iris_comparison}
\caption[Iris data]{True labels and SCAMP labels of Fisher's iris data according to the maximum
vote across 500 iterations.
The maximum vote SCAMP clustering has an ARI of $0.886$
and an unadjusted VI distance [\cite{meilua2007comparing}] of $0.410$.
SCAMP determines its labels automatically.}\label{figure:irisFigure}
\end{figure}
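For reference, the two agreement measures reported in figure \ref{figure:irisFigure} can be computed as in the following Python sketch (the function name is ours): the ARI via \texttt{scikit-learn}, and the unadjusted VI distance of \cite{meilua2007comparing} directly from the contingency table of the two labelings.
\begin{verbatim}
import numpy as np
from sklearn.metrics import adjusted_rand_score

def variation_of_information(labels_a, labels_b):
    # Unadjusted VI = 2 H(A,B) - H(A) - H(B), computed in nats.
    a = np.unique(labels_a, return_inverse=True)[1]
    b = np.unique(labels_b, return_inverse=True)[1]
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1.0
    joint /= len(a)
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return 2.0 * h(joint) - h(pa) - h(pb)

# ari = adjusted_rand_score(true_labels, scamp_labels)
# vi  = variation_of_information(true_labels, scamp_labels)
\end{verbatim}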
\subsection{Olive Oil}
We next apply SCAMP to the olive oil dataset of \cite{forina1983classification}.
This dataset has been previously analyzed by \cite{tantrum2003assessment}, \cite{stuetzle2003estimating}, and
\cite{chen2016comprehensive}.
We include this example to show how the descriptive labels produced by SCAMP can be useful for data
analysis, since the derived cluster labels explain differences between observations in different SCAMP clusters.
A description of these data is provided in the R package \pkg{classifly} [\cite{wickham2014classifly}],
from which we quote:
\begin{displayquote}
The olive oil data consists of the percentage composition of 8 fatty acids
(palmitic, palmitoleic, stearic, oleic, linoleic, linolenic, arachidic, eicosenoic)
found in the lipid fraction of 572 Italian olive oils.
There are 9 collection areas, 4 from southern Italy (North and South Apulia, Calabria, Sicily),
two from Sardinia (Inland and Coastal) and 3 from northern Italy (Umbria, East and West Liguria).
\end{displayquote}
The 9 collection areas constitute 3 production regions in this dataset:
southern Italy, Sardinia, and northern Italy.
Before running SCAMP, we again check our default choice of $\alpha$ against the
data matrix: figure \ref{figure:olivePvalue} shows the distribution of dip test
p-values of the features in the data matrix.
We see that the top 6 features always have a dip test p-value below our default $\alpha=0.25$
across $1000$ noise iterations.
We observe, however, that the stearic fatty acid feature has a dip test p-value slightly above
$0.25$ in almost $35\%$ of the noise iterations.
This poses a problem for SCAMP's annotation phase, since it indicates the stearic fatty acid feature
will often drop out of the cluster annotation strings.
This in turn will bias the maximum vote annotation heuristic.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./olivePvalueElbow.png}
\caption[Pvalues]{The olive oil data set features, ranked according to their mean
dip test p-value.
The plot shows that classical thresholds such as $\alpha =0.01$ (the dashed blue line) and $\alpha = 0.05$
(the dotted green line) will label the same five features. Here, the default threshold of $\alpha = 0.25$ (the solid red line)
will inconsistently set labels for all eight features.}\label{figure:olivePvalue}
\end{figure}
Because of this, we decide to increase our default $\alpha$: we
run $5000$ SCAMP iterations with $\alpha=0.30$, $m=25$, and $\gamma=4$.
Each iteration, SCAMP randomly samples $400$ candidate clusters from the annotation forest.
The maximum-label clustering heuristic determines $11$ clusters after $5000$ iterations.
Of these $11$ clusters, only $8$ have more than $25$ observations, our value for $m$.
We emphasize SCAMP can produce clusters with fewer than $m$ elements since the final
clustering is determined by the most frequently assigned
label for each observation across the $5000$ SCAMP iterations.
The parameter $m$ only restricts cluster size in
the search for candidate clusters for a single iteration.
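A minimal Python sketch of this maximum-vote heuristic (with hypothetical variable names, not SCAMP's internal representation): given the annotation strings assigned by each iteration, every observation receives its most frequent label.
\begin{verbatim}
from collections import Counter
import numpy as np

def max_vote_labels(iteration_labels):
    # iteration_labels: (n_iterations, n_observations) array of
    # annotation strings; returns the per-observation majority label.
    L = np.asarray(iteration_labels, dtype=object)
    return np.array(
        [Counter(L[:, i]).most_common(1)[0][0] for i in range(L.shape[1])],
        dtype=object)
\end{verbatim}
Because the vote is taken per observation, the resulting clusters need not respect the minimum size $m$ enforced within any single iteration.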
Using the 9 Italian collection areas as the true cluster assignment,
the maximum-label
SCAMP clustering has an ARI of $0.734$ and unadjusted VI distance of $1.377$.
Each cluster is labeled according to the relative quantities of the $8$
measured fatty acids.
For example, one cluster of $190$ observations is labeled
``palmitic highest, palmitoleic highest, stearic lowest, oleic lowest, linoleic highest, linolenic
highest, arachidic highest, eicosenoic highest''. A second cluster of $57$ observations is labeled
``palmitic highest, palmitoleic highest, stearic medium-high, oleic lowest, linoleic lowest, linolenic
highest, arachidic highest, eicosenoic highest''. A third cluster of $34$ observations is
labeled ``palmitic lowest, palmitoleic lowest, stearic medium-high, oleic highest, linoleic lowest,
linolenic highest, arachidic highest, eicosenoic highest''.
These clusters broadly correspond to the southern Italy production region, which
contains the collection areas South-Apulia (206 observations), North-Apulia (25 observations), Sicily (36 observations), and Calabria (56 observations).
We visualize these clusters in figure \ref{figure:oliveoil} using t-SNE [\cite{maaten2008visualizing}] to map the eight dimensions to two.
By coloring the t-SNE map according to the relative magnitude of each olive
oil's fatty acid content, we see that the SCAMP labels reflect differences in the
underlying fatty acid measurements. For example, olive oils in the cluster ``palmitic
highest, palmitoleic highest, stearic lowest, oleic lowest, linoleic highest, linolenic
highest, arachidic highest, eicosenoic highest'' are mostly concentrated in a region of
the t-SNE map with high and low measured values of these fatty-acids, respectively (relative to the
data set). A similar observation holds for the other clusters determined by SCAMP.
From a data analysis standpoint, this allows an analyst to immediately understand why
a given olive oil ends up in a particular SCAMP cluster.
For example, within the South Italy production region, the three main
SCAMP clusters all contain olive oils with
relatively high amounts of linolenic, arachidic, and eicosenoic fatty acids.
The two SCAMP clusters that correspond (largely) to South-Apulia and Calabria also have
olive oils with
high amounts of palmitic and palmitoleic acid, while the SCAMP cluster corresponding
(largely) to North-Apulia and Sicily has olive oils with
low amounts of these same fatty acids. On the other hand,
the olive oils in
the SCAMP clusters corresponding to
North-Apulia and Calabria have high amounts of stearic and
linoleic acids, while the olive oils in the
South-Apulia/Sicily cluster have low amounts of these same fatty acids.
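The maps in figure \ref{figure:oliveoil} below can be generated along the following lines; this is an illustrative Python sketch using \texttt{scikit-learn}, and the t-SNE parameters shown are ours rather than those used for the figure.
\begin{verbatim}
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

def tsne_map(X, labels, perplexity=30.0, seed=0):
    # Embed X in two dimensions and color the map by cluster label.
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=seed).fit_transform(X)
    labels = np.asarray(labels)
    for lab in np.unique(labels):
        m = labels == lab
        plt.scatter(emb[m, 0], emb[m, 1], s=8, label=str(lab))
    plt.legend(markerscale=2, fontsize=6)
    plt.show()
    return emb
\end{verbatim}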
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,height=0.5\textheight]{./tsneOliveOilPaper.pdf}
\caption[Olive oil visualization]{Each panel displays the same t-SNE map of the olive oil dataset.
Panel 1, titled ``Italy Area'', colors the points according to their region of production in Italy.
Bounding boxes are drawn around four regions:
the dotted-blue box bounds observations from South-Apulia;
the solid-orange box bounds North-Apulia;
the dashed-pink box bounds Sicily;
the final dotted-green box bounds Calabria.
Panel 2, titled ``SCAMP'', colors the same t-SNE map according to an observation's membership in a
SCAMP run-off cluster.
Bounding boxes are drawn around three clusters:
``palmitic highest, palmitoleic highest, stearic lowest, oleic lowest, linoleic highest, linolenic highest, arachidic highest, eicosenoic highest'' is bounded by
the dashed-blue box;
``palmitic highest, palmitoleic highest, stearic medium-high, oleic lowest, linoleic lowest, linolenic highest, arachidic highest, eicosenoic highest'' is bounded by the solid-pink box;
``palmitic lowest, palmitoleic lowest, stearic medium-high, oleic highest, linoleic lowest, linolenic highest, arachidic highest, eicosenoic highest'' is bounded by
the long-dashed-green box.
The remaining panels show the t-SNE map colored by the relative magnitude of the fatty-acid measurement:
the darker the point, the higher the measurement.
}\label{figure:oliveoil}
\end{figure}
\subsection{Separated mixtures}
Here we conduct a simulation study to model the following setting: an experimenter has
collected $n$ observations for $p$ variables. The experimenter wishes to cluster the data
in order to find sub-collections of rows that exhibit a common signal along some sub-collection
of the $p$ columns.
To generate a matrix with such a property, we sample each sub-collection
of rows from a multivariate distribution.
In the simplest case, the multivariate distribution is a Gaussian.
Both the mean vector and covariance matrix differ from cluster to cluster.
In this simplest case, ``no signal'' corresponds to a mean vector entry $0$ and an
activation ``signal'' corresponds to a mean vector entry larger than $0$.
We examine three general scenarios.
\begin{enumerate}
\item{Scenario 1: Moderate sample size, moderate number of clusters.}
\item{Scenario 2: Large sample size, moderate number of clusters.}
\item{Scenario 3: Large sample size, large number of clusters.}
\end{enumerate}
There are five binary parameters that can be used to modify a scenario.
Together they create $32$ distinct simulation settings within a scenario.
For each setting, we run multiple iterations of the simulation.
We provide a graphical overview in figure \ref{figure:simulationSettings} to illustrate how the simulation parameters change the underlying
data matrix.
The simulation parameters, and the modifications they determine, are described in detail in appendix
\ref{section:appendixSettings}.
\begin{figure}[H]
\centering
\[
\raisebox{4\baselineskip+.5\myoffset}
\def\stackalignment{l}
\stackunder[0pt]{\colblock[blue!0]{1}{3.5}{\text{\#\ Obs}}}
{
\stackunder[0pt]{\colblock[graphNode2]{5}{3.5}{\mathbf{1000}}}
{\stackunder[0pt]{\colblock[graphNode4]{2.5}{3.5}{\mathbf{500}}}
{\stackunder[0pt]{\colblock[graphNode6]{1.25}{3.5}{\mathbf{250}}}%
{\stackunder[0pt]{\colblock[graphNode7]{1.25}{3.5}{\mathbf{250}}}
{\stackunder[0pt]{\colblock[graphNode2]{1.25}{3.5}{\mathbf{250}}}
{\stackunder[0pt]{\colblock[graphNode4]{1.25}{3.5}{\mathbf{250}}}
{\stackunder[0pt]{\colblock[graphNode6]{1}{3.5}{\mathbf{125}}}
{\stackunder[0pt]{\colblock[graphNode7]{1}{3.5}{\mathbf{125}}}
{\stackunder[0pt]{\colblock[graphNode2]{1}{3.5}{\mathbf{125}}}
{\colblock[graphNode4]{1}{3.5}{\mathbf{125}}}}}}}}}}}
}
{
\stackunder[0pt]{\colblock[blue!0]{1}{10}{\text{Baseline\ 20\ Dim}}}
{\stackunder[0pt]{\colblock[graphNode2]{5}{10}{\mathbf{MVG/MVT}}}
{\stackunder[0pt]{\colblock[graphNode4]{2.5}{10}{\mathbf{MVG}}}
{\stackunder[0pt]{\colblock[graphNode6]{1.25}{10}{\mathbf{MVG/MVT}}}%
{\stackunder[0pt]{\colblock[graphNode7]{1.25}{10}{\mathbf{MVG}}}
{\stackunder[0pt]{\colblock[graphNode2]{1.25}{10}{\mathbf{MVG/MVT}}}
{\stackunder[0pt]{\colblock[graphNode4]{1.25}{10}{\mathbf{MVG}}}
{\stackunder[0pt]{\colblock[graphNode6]{1}{10}{\mathbf{MVG/MVT}}}
{\stackunder[0pt]{\colblock[graphNode7]{1}{10}{\mathbf{MVG}}}
{\stackunder[0pt]{\colblock[graphNode2]{1}{10}{\mathbf{MVG/MVT}}}
{\colblock[graphNode4]{1}{10}{\mathbf{MVG}}}}}}}}}}}}
}
{
\stackunder[0pt]{\colblock[blue!0]{1}{10}{\text{Parameter\ 20\ Dim\ Block}}}
{\stackunder[0pt]{\colblock[graphNode2]{1.25}{10}{\mathbf{MVG/MVT\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode4]{1}{10}{\mathbf{MVG\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode6]{1.25}{10}{\mathbf{MVG/MVT\ 2/3}}}%
{\stackunder[0pt]{\colblock[graphNode7]{1}{10}{\mathbf{MVG\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode2]{1}{10}{\mathbf{MVG/MVT\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode4]{1.25}{10}{\mathbf{MVG\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode6]{2.5}{10}{\mathbf{MVG/MVT\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode7]{1}{10}{\mathbf{MVG\ 2/3}}}
{\stackunder[0pt]{\colblock[graphNode2]{1.25}{10}{\mathbf{MVG/MVT\ 2/3}}}
{\colblock[graphNode4]{5}{10}{\mathbf{MVG\ 2/3}}}}}}}}}}}}
}
{
\stackunder[0pt]{\colblock[blue!0]{1}{10}{\text{Parameter \ 20\ Dim\ Noise}}}
{\colblock[light-gray]{16.5}{10}{\mathbf{MVT}}}
}
}
\]
\caption[Simulation Overview]{This graphical overview of simulation settings shows how different parameters affect the block-structure of a sampled data matrix.
The number of observations refers to the baseline Scenario 1 in which there are 10 clusters.
Each cluster in the baseline setting is sampled from a 20 dimensional multivariate Gaussian with randomly chosen mean vector and covariance matrix.
Entries of the mean vector are constrained to be $0$ or $6$: an entry of $6$ indicates the signal is present in that cluster.
Variances are bounded above by $3$.
Simulation parameters govern the following:
the clusters can be alternatively sampled from multivariate Gaussian and multivariate T;
clusters can be transformed coordinate-wise by one of four maps;
a second 20 dimensional block with additional cluster structure can be sampled;
the mean vector of the second block can contain $2$
components in $\{0,6\}$ or $3$ components in $\{0,3,6\}$;
a third 20 dimensional block of $3000$ observations can be sampled
from a multivariate T with 5 degrees of freedom to model the inclusion of noise variables.
This creates $32$ distinct scenarios,
creating a data matrix with variable numbers of columns depicted here.
Color coding indicates which transformation affects the block.
The label on each block indicates the generating distribution and possible number of components.
}\label{figure:simulationSettings}
\end{figure}
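To make the baseline setting concrete, the following Python sketch samples one such block-structured matrix: each cluster is a multivariate Gaussian draw whose mean entries lie in $\{0, 6\}$ and whose variances are bounded above by $3$. This illustrates the structure in figure \ref{figure:simulationSettings} only; the exact sampling scheme is described in appendix \ref{section:appendixSettings}.
\begin{verbatim}
import numpy as np

def sample_cluster(n_obs, dim, rng):
    # One cluster: Gaussian with mean entries in {0, 6} and a random
    # SPD covariance rescaled so that all variances are at most 3.
    mean = rng.choice([0.0, 6.0], size=dim)
    A = rng.normal(size=(dim, dim))
    cov = A @ A.T
    cov *= 3.0 / cov.diagonal().max()
    return rng.multivariate_normal(mean, cov, size=n_obs)

def sample_matrix(cluster_sizes, dim=20, seed=0):
    rng = np.random.default_rng(seed)
    X = np.vstack([sample_cluster(n, dim, rng) for n in cluster_sizes])
    y = np.repeat(np.arange(len(cluster_sizes)), cluster_sizes)
    return X, y

# Baseline Scenario 1 composition:
# X, y = sample_matrix([1000, 500, 250, 250, 250, 250, 125, 125, 125, 125])
\end{verbatim}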
In addition to the graphical depiction of the simulation parameters provided in figure \ref{figure:simulationSettings},
we also provide t-SNE visualizations of the cluster structure produced in
data matrices under different parameter settings in figure
\ref{figure:simulationTsneMatrixVix}.
t-SNE is particularly well suited to visualizing these data since many of the clusters
are sampled from multivariate Gaussian or multivariate t distributions.
The t-SNE maps show significant overlap when the observations in clusters
are taken through the maps described in the appendix \ref{section:appendixSettings}.
In our experience, overlapping clusters are commonly observed in t-SNE maps produced
from biological datasets.
Looking ahead, we see such overlap in our analysis of the GTEx data, visualized in figure \ref{figure:gtexTsne}.
\begin{figure}[H]
\centering
\includegraphics[height=0.35\textheight,keepaspectratio]{./simSettingsDemoTsne.pdf}
\caption[Simulation Matrix t-SNE]{t-SNE visualization of data matrices produced under eight different
simulation settings.
The number of observations in the data matrix, true number of clusters, and number of observations
sub-sampled for visualization are given in the label.
Points are colored according to their cluster membership.
Each column in the display shows the cluster structure of a data matrix generated with identical
parameter settings, with one exception:
the displays in the first row are untransformed,
while the clusters in the second row are taken through the maps described in appendix \ref{section:appendixSettings}.}\label{figure:simulationTsneMatrixVix}
\end{figure}
\subsubsection{Scenario 1: Moderate sample size, moderate number of clusters}
The graphical overview of figure \ref{figure:simulationSettings} is specialized to this scenario.
In it, the data matrix contains $3000$ observations for all $32$ parameter settings.
There are either $10$ clusters in $20$ dimensions, or $17$ clusters in $40$ dimensions.
Cluster sizes range between $125$ and $1000$ observations in the $10$ cluster scenario.
The clustering ground truth is taken to be the label of the mixture component which produces an observation of the data matrix.
We sample data matrices $10$ times under each simulation setting.
Each iteration, we apply the following clustering methods.
Notice that several methods require the user to set the number of clusters as a parameter.
In the current simulation scenario, we provide all methods except SCAMP with the number of clusters.
\begin{enumerate}
\item{(Oracle) Affinity Propagation (AP) [\cite{frey2007clustering,bodenhofer2011apcluster}]: number of clusters provided to the procedure ``apclusterK'' each iteration.
Data matrix scaled to mean 0 and unit variance.
We set the parameters ``maxits=2000'',``convits=2000'', and ``bimaxit=100''.}
\item{(Oracle) K-Means [\cite{hartigan1979algorithm}]: number of clusters set to the truth each iteration. Data matrix scaled to mean 0 and unit variance.}
\item{(Oracle) K-medoid [\cite{calinski1974dendrite,hennig2013find,pamkCite,clusterPAMCite}]: number of clusters set to the truth each iteration.
Data matrix scaled to mean 0 and unit variance.}
\item{(Oracle) Model based clustering (Mclust) [\cite{fraley2002model,mclustcite2}]: number of components set to the truth each iteration.}
\item{SCAMP: we randomly search for candidate clusters, stopping after we find $50 \cdot \text{(number of columns)}$. We pre-set $\alpha=0.25$, $m=25$, and $\gamma=4$ across all simulations. We conduct a single SCAMP iteration per simulation setting and iteration. SCAMP
is provided $16$ threads to parallelize the search for candidate clusters.}
\item{SCAMP20: twenty iterations of the SCAMP procedure are performed and the maximum label heuristic used to cluster the data. Each iteration set to the same parameters as the single SCAMP run.}
\end{enumerate}
Observe that methods marked with the prefix ``(Oracle)'' have had the number of clusters
provided to them.
Looking ahead, the subsequent comparison of run-times omits the computational
cost of estimating the number of clusters.
If the method uses the number of clusters as a tuning parameter, as with k-means,
we provide the number directly.
In the case of affinity propagation,
parameters are modified to get the method close to the true number.
Model based clustering is capable of making its own estimate of the number
of components present in a data matrix,
but must be given a range of possible components to try.
A variety of methods could be used to estimate
the number of clusters for k-means and k-medoids
[see, for example, \cite{celeux1996entropy}, \cite{tibshirani2001estimating}, and
\cite{dudoit2002prediction}].
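For concreteness, a minimal Python sketch of one such estimate, in the spirit of the criterion of \cite{calinski1974dendrite}: choose the $k$ maximizing the Calinski-Harabasz index over a grid. This is only one of many possible choices and is not the procedure used by any of the compared implementations.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def estimate_k(X, k_grid=range(2, 31), seed=0):
    # Pick the k maximizing the Calinski-Harabasz index over a grid.
    scores = {}
    for k in k_grid:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=seed).fit_predict(X)
        scores[k] = calinski_harabasz_score(X, labels)
    return max(scores, key=scores.get)
\end{verbatim}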
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./sim01}
\caption[Simulation Study 1]{32 different simulations are run according to the parameters in table \ref{simulation:mixtureSettings}
in appendix section \ref{section:appendixSettings}.
50 iterations per setting. Mean of displayed statistics computed across iterations.
Dataset has 3000 rows, and 20/40/60 columns depending on setting.
True number of clusters either 10 or 17, depending on setting.
Oracle prefix in method description indicates method given number of clusters or method parameters are adjusted to obtain that number.
The logarithm of the true number of clusters is indicated by the dashed red line in each facet of the final panel.
\label{figure:simulation01Figure}}
\end{figure}
Results from the simulation study are summarized in figure \ref{figure:simulation01Figure}.
The first panel displays the distribution of mean ARI over the $50$ iterations
for each of the $32$ simulation settings.
There we can see that the methods with an oracle providing the number of clusters
perform quite well when the simulation parameters
match their implicit distribution assumptions:
K-means, K-medoids, and Mclust are able to recover cluster assignments almost perfectly in several
of the multivariate normal settings.
However, there is serious downside risk evident for these same methods: when the parameters of the
simulation introduce nuisance features and transform observations (through the maps
\eqref{scamp:sim_settings}, described in the appendix),
the cluster assignments can differ substantially from the underlying mixture despite being provided
the true number of mixture components.
On the other hand, we see that SCAMP clusterings usually perform well in terms of ARI,
no matter the simulation settings: it is robust to distributional assumptions.
We also note that twenty iterations of SCAMP improve the mean ARI under all parameter settings.
For each simulation setting, the method with maximum mean ARI is taken as a baseline value.
The difference between this best baseline ARI and each method's ARI is recorded.
The second panel of figure \ref{figure:simulation01Figure} shows the distribution of these
differences across the $32$ simulation settings, with a zero value indicating the method performed
best (according to adjusted rand index) in that setting.
This panel shows that a single SCAMP iteration has ARI within $0.1$ of the best performing method
almost $50\%$ of the time.
The panel also shows that the twenty SCAMP iterations almost always perform as well as the
best performing method,
and perform best in terms of adjusted rand index about $25\%$ of the time.
The third panel of figure \ref{figure:simulation01Figure} shows the distribution of the logarithm
of mean run-time across methods.
We see that on these simulated data, SCAMP is relatively affordable when given $16$ threads:
a single SCAMP iteration, which estimates the number of clusters,
is usually finished after $20$ seconds.
The run-time comparison may be biased due to implementations: after reviewing the documentation
for the compared methods, we did not see the ability to provide them with additional execution
threads.
We also see that multiple SCAMP iterations incur additional cost in terms of run-time.
The fourth panel shows the logarithm of the number of clusters estimated by each method
in the top inset (the methods with an oracle are flat because they are given the truth).
The bottom inset shows the logarithm of the number of clusters with more than $25$ observations
in them.
SCAMP usually overestimates the truth in both the $10$ and $17$ cluster scenarios.
Iterating twenty times improves the estimate at the cost of producing an increased number
of smaller clusters.
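The summary in the second panel is simple to compute. A sketch, with \texttt{ari} a (settings $\times$ methods) array of mean adjusted rand indices:
\begin{verbatim}
import numpy as np

def ari_shortfall(ari):
    # Per-setting difference between the best method's mean ARI and each
    # method's mean ARI; zero marks the best method in that setting.
    return ari.max(axis=1, keepdims=True) - ari
\end{verbatim}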
\subsubsection{Scenario 2: Large sample size, moderate number of clusters}
Our second simulation scenario explores settings with larger data matrices: our data matrix increases to $30000$ observations.
The graphical overview of figure \ref{figure:simulationSettings}
can be modified to describe this setting by multiplying each cluster size by $10$.
We continue to use the 32 settings defined by the parameters described there.
We run a modified collection of methods in this setting due to the computational expense. They are:
\begin{enumerate}
\item{Leveraged Affinity Propagation (AP)[\cite{frey2007clustering,bodenhofer2011apcluster}]:
Using the ``apclusterL'' implementation in the R package \pkg{apcluster},
we set ``maxits=2000'',``convits=2000'', ``frac=0.1'' (corresponding to $10\%$ of the data), ``sweeps=5'' and ``q=0''.}
\item{(Oracle) Clustering Large Applications [\cite{rousseeuw1990finding}]: The R implementation \textit{clara} in the package \pkg{cluster} is used, with $k$ set to the truth, the ``samples'' parameter set to $50$, and the ``sampsize'' parameter set to $3000$, which is $10\%$ of the rows of the data set.}
\item{(Oracle) K-Means: no change.}
\item{(Oracle) K-medoid: no change.}
\item{(Oracle) Model based clustering (Mclust): we still provide the function ``Mclust'' with the number of clusters. We now initialize EM with $10\%$ of the data randomly selected.}
\item{SCAMP: no change.}
\item{SCAMP20: no change.}
\end{enumerate}
For each of the $32$ simulation settings, we again run $10$ iterations.
The results of the simulation are displayed in figure \ref{figure:simulation02Figure}.
We interpret each panel of the figure as we did in figure \ref{figure:simulation01Figure}.
In the large matrix setting, we see that SCAMP performs well.
Iterating SCAMP twenty times continues to improve performance at the cost of additional run-time.
On the other hand, the downside risk of alternative methods is more pronounced in this setting.
For some methods, this deterioration in performance is in part explained by their reliance on
sub-sampling. Additionally, modifying the parameter settings for Leveraged Affinity Propagation
could potentially improve performance in terms of ARI -- we kept the current settings
for comparability to scenario 1.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./sim02}
\caption[Simulation Study 2]{32 different simulations are run according to the parameters in table \ref{simulation:mixtureSettings}
in appendix section \ref{section:appendixSettings}.
10 iterations per setting. Mean of displayed statistics computed across iterations.
Dataset has 30000 rows, and 20/40/60 columns depending on setting.
True number of clusters either 10 or 17, depending on setting.
Oracle prefix in method description indicates method given number of clusters or method parameters are adjusted to obtain that number.
The logarithm of the true number of clusters is indicated by the dashed red line in each facet of the final panel.
\label{figure:simulation02Figure}}
\end{figure}
\subsubsection{Scenario 3: Large sample size, large number of clusters}
We conclude by simulating data with a large number of
observations and a large number of clusters of different sizes.
We continue to generate a data matrix with $30000$ rows.
In this scenario, the basic mixture has
30 components in $20$ dimensions, and $56$ components in the larger $40$-dimensional setting.
In the $30$ cluster composition, cluster sizes range between $100$ and $10000$ observations.
In the $56$ cluster composition, cluster sizes range between $50$ and $3950$.
Once again, for each of the $32$ simulation settings,
we run $10$ iterations.
We refer the reader to the appendix for specific details.
In this scenario, we see that SCAMP continues to perform well.
Since it does not have to sub-sample,
it is able to recover much of the component structure of the underlying mixture.
In this scenario, the alternative methods almost all perform worse in terms of ARI than SCAMP.
SCAMP run for twenty iterations almost always performs best in terms of its adjusted rand index.
A single SCAMP iteration compares well to other methods in terms of run-time,
and slightly over-estimates the true number of clusters on average.
\begin{figure}[!tbh]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./sim03}
\caption[Simulation Study 3]{32 different simulations are run according to the parameters in table \ref{simulation:mixtureSettings}
in appendix section \ref{section:appendixSettings}.
10 iterations per setting. Mean of displayed statistics computed across iterations.
Dataset has 30000 rows, and 20/40/60 columns depending on setting.
True number of clusters either 30 or 56, depending on setting.
Oracle prefix in method description indicates method given number of clusters or method parameters are adjusted to obtain that number.
The logarithm of the true number of clusters is indicated by the dashed red line in each facet of the final panel.
\label{figure:simulation03Figure}}
\end{figure}
\subsection{Circular Data and Rotated Data}
The previous simulation of well-separated mixtures along the measurement axes is the ideal use case for SCAMP: in such cases, it is possible for SCAMP to provide interpretable clusterings of the data matrix.
Here, we simulate two scenarios where SCAMP will perform poorly.
Figure \ref{figure:scampJoint} illustrates two examples in $\mathbb{R}^2$.
In the first case, the data have circular structure.
SCAMP over partitions these data since the coordinate projections appear highly multimodal, and so the connected structure of the data is lost.
In the second case, the data are simulated from a mixture of two multivariate Gaussians with mean vectors $\mu_1 = (1.5,0)$, $\mu_2 = (0,1.5)$ and common covariance matrix
$\Sigma \equiv
\begin{pmatrix}
1.0 & 0.7 \\
0.7 & 1.0 \\
\end{pmatrix}
$.
Such data have coordinate projections which appear unimodal.
As a result, SCAMP clusters the data matrix by producing a single cluster consisting of the entire data matrix.
The adjusted rand scores reflect SCAMP's difficulty with such data.
If one is willing to sacrifice interpretable clusterings, SCAMP can still be used to cluster
these kinds of data matrices.
In the first case, SCAMP can be applied to the data matrix taken through
the map $\phi(x,y) \equiv x^2 + y^2$.
In the second case, SCAMP can be applied to the principal components.
Figure \ref{figure:scampJoint} shows how SCAMP clusters the data under these transformations.
Since the clusterings are determined on transformed data matrices, the labels produced by SCAMP
are not immediately interpretable on the original measurement scale.
However, these clusterings are still useful: SCAMP recovers the underlying number of circular
components in the first case, and the number of mixture components in the second.
The adjusted Rand scores are consequently improved.
In many clustering tasks, a method, such as k-means, is used as a clustering tool after the data
have been transformed: PCA and kernel methods are two such examples.
These simulations show that in such work-flows SCAMP might be a viable alternative. We conclude
with a practical demonstration of such a work-flow.
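Before doing so, a minimal Python sketch makes the transform-then-cluster work-flow concrete.
This is only an illustration, not the \pkg{scamp} implementation; \texttt{run\_scamp} is a hypothetical stand-in for a call to the SCAMP procedure.
\begin{verbatim}
import numpy as np

def radius_feature(xy):
    # Map each (x, y) row to x^2 + y^2; circular structure becomes a
    # (roughly unimodal) one-dimensional feature.
    return (xy ** 2).sum(axis=1, keepdims=True)

def pca_rotation(data):
    # Rotate the centered data onto its principal components so that
    # group structure becomes visible along coordinate projections.
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

# labels_1 = run_scamp(radius_feature(circular_data))       # hypothetical
# labels_2 = run_scamp(pca_rotation(rotated_mixture_data))  # hypothetical
\end{verbatim}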
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./final07_joint}
\caption[SCAMP Failures]{SCAMP clusterings of two simulated datasets.
In the case of circular data, SCAMP over-partitions the data because the coordinate projections are highly multimodal.
After transforming the raw data through the map $\phi(x,y) \equiv x^2 + y^2$, SCAMP is able to capture the circular structure.
In the case of data with cluster structure that is not visible along coordinate projections,
SCAMP under-partitions and lumps all observations into a single cluster.
After rotation through the principal components, SCAMP is able to recover group structure.}\label{figure:scampJoint}
\end{figure}
\subsection{GTEx Data}
As a practical demonstration, we apply SCAMP to bulk RNA seq gene read count data from
the Genotype-Tissue Expression (GTEx) Project.
This project is supported by the Common Fund of the Office of the Director of the National Institutes of Health,
and by NCI, NHGRI, NHLBI, NIDA, NIMH, and NINDS.
The data used for the demonstration in this paper were obtained from the GTEx Portal
(\textcolor{blue}{\href{https://www.gtexportal.org/}{https://www.gtexportal.org/}}),
v6p release, file ``GTEx\_Analysis\_v6p\_RNA-seq\_RNA-SeQCv1.1.8\_gene\_reads.gct.gz'',
on April 19, 2018.
An alternative analysis of the v6 data is given in \cite{dey2017visualizing}.
This dataset contains per-gene read counts for $56238$ genes across $8555$ samples.
The samples were collected from $450$ human donors.
Each of the $8555$ samples is taken from one of $31$ primary tissue types from a given donor.
The primary tissue type labels are further refined to $53$ tissue labels
that describe the sample location.
For example, $1259$ samples are given the primary tissue label ``Brain''.
These $1259$ samples have $13$ corresponding
tissue labels ranging from ``Brain - Amygdala'' to ``Brain - Substantia nigra''.
Here, we will apply several clustering methods to the unlabeled count data (after transformation).
We will use these two label sets as versions of ground truth for clustering.
These count data are zero inflated and, for a given gene,
differ by orders of magnitude across the samples.
To apply SCAMP to these data, we first remove $1558$ genes
which have zero counts across all $8555$ samples.
This produces a data matrix of integer counts with $8555$
rows and $54680$ columns. We transform the counts for
each remaining gene by the map $\log_2(1+x)$.
We then normalize each gene across samples by the total sum of the transformed counts.
To apply SCAMP, we next compute the top $50$ right-singular
vectors of the data matrix and then use them to rotate the transformed data matrix.
We then apply SCAMP to the rotated data matrix.
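The pre-processing pipeline just described can be summarized in a short Python sketch.
This is a hedged illustration of the steps listed above, not the code used for the analysis; \texttt{counts} is assumed to hold the $8555 \times 56238$ matrix of per-gene read counts.
\begin{verbatim}
import numpy as np

def preprocess_gtex(counts, n_vectors=50):
    # Remove genes (columns) with zero counts across all samples.
    x = counts[:, counts.sum(axis=0) > 0].astype(float)
    # Transform each count by log2(1 + x).
    x = np.log2(1.0 + x)
    # Normalize each gene by the total sum of its transformed counts.
    x = x / x.sum(axis=0)
    # Compute the top right-singular vectors and rotate the data.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_vectors].T

# rotated = preprocess_gtex(counts)   # SCAMP is then applied to `rotated`
\end{verbatim}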
Once again,
the selection of $\alpha$ can be guided empirically before running SCAMP:
figure \ref{figure:gtexPvalue} suggests our default value of $\alpha = 0.25$
will label clusters with all SVs exhibiting multimodality (according to the dip test)
in these data.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./pvalueElbow.png}
\caption[P-values]{The top 50 singular values for the pre-processed GTEx data, ranked according to their mean dip test p-value.
The plot shows how the three thresholds for $\alpha$ select different numbers of SVs for
annotation: $0.01$ (the dashed blue line) selects $12$ column vectors; $0.05$
(the dotted green line) selects $13$; $0.25$ (the solid red) selects $14$.}\label{figure:gtexPvalue}
\end{figure}
For the purpose of demonstration, we will cluster the data with SCAMP
using all three choices of $\alpha$ since the $\alpha$ values $\{0.01,0.05,0.25\}$
correspond to labeling clusters with $\{12, 13, 14\}$ SVs.
For each choice of $\alpha$, we run SCAMP for 100 iterations.
In each search phase of each iteration,
we randomly sample $5000$ candidate clusters from the annotation forest.
The final clustering is found using the maximum-vote heuristic across the $100$ iterations.
We also cluster the dataset using affinity propagation, Mclust, k-means, and k-medoids.
We modify the parameters for affinity propagation to try to achieve convergence, setting
``maxits'' to $5000$, ``convits'' to $5000$, and ``lam'' to $0.95$.
For Mclust, we let the number of mixture components
range from $2$ to $100$, and select the final mixture using BIC: in this dataset,
this selects 24 mixture components.
We use the estimate $24$ as the k parameter in k-means and k-medoids.
\begin{figure}[H]
\centering
\begin{tabular}{rlllllll}
\hline
& SCAMP 01 & SCAMP 05 & SCAMP 25 & AP & Mclust & k-means & k-medoid \\
\hline
Primary Tissue: VI & 1.296 & 1.362 & 1.453 & 1.383 & 1.003 & 1.205 & 1.017 \\
Primary Tissue: ARI & 0.633 & 0.632 & 0.592 & 0.559 & 0.705 & 0.643 & 0.749 \\
\# Primary Tissue & 31 & 31 & 31 & 31 & 31 & 31 & 31 \\
Tissue: VI & 1.415 & 1.473 & 1.542 & 1.277 & 1.344 & 1.582 & 1.464 \\
Tissue: ARI & 0.649 & 0.657 & 0.654 & 0.69 & 0.606 & 0.557 & 0.564 \\
\# Tissue & 53 & 53 & 53 & 53 & 53 & 53 & 53 \\
Method \# of Clusters & 54 & 61 & 75 & 41 & 24 & 24 & 24 \\
\hline
\end{tabular}
\caption[Score]{The VI distance and ARI for methods using the primary tissue label
and tissue label as ground truth.
The number trailing each SCAMP column indicates the value of $\alpha$ used to cluster the data. }\label{figure:gtexScores}
\end{figure}
Numerical summaries of the clusterings are listed in the table in figure \ref{figure:gtexScores}.
The different methods perform comparably in terms of VI distance no matter
which set of labels is used as the truth.
k-medoid has the best performance in terms of ARI when the primary
tissue labels are taken as the truth.
Affinity propagation has the best performance in terms of ARI when the tissue labels are taken as
the truth.
Visualizing the data can help explain this difference: figure \ref{figure:gtexTsne}
provides a visualization of these data through their t-SNE map.
In figure \ref{figure:gtexTsne}, we see that both SCAMP
and affinity propagation have a tendency to partition primary tissue types into sub-clusters.
As the value of $\alpha$ increases from $0.01$ to $0.25$, the number of clusters found by SCAMP
increases. This can be seen in the t-SNE visualization, where homogeneous
islands are split into overlapping clusters across the SCAMP panels.
It would require further analysis to determine if SCAMP and affinity propagation
are detecting groups of biological interest that are not detected by other methods (or each other).
However, since the goal of this section is to show how
SCAMP can be productively applied to a modern dataset, we defer such analysis to subsequent work.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./altGtexTsne.png}
\caption[gtexTsne]{A t-SNE visualization of the GTEx data.
Panel 1 shows points colored by the $53$ GTEx tissue labels.
Generally, the ``islands'' in the t-SNE map correspond to a tissue type.
However, separation of ``islands'' occurs within tissues:
the blue bounding boxes surround four ``islands'' with primary tissue label ``Brain''
and thirteen tissue labels
corresponding to specific regions of the brain.
The pink bounding boxes surround two ``islands'' with primary tissue label ``Skin''
and three tissue labels ``Cells - Transformed fibroblasts'',
``Skin - Not Sun Exposed (Suprapubic)'',
and ``Skin - Sun Exposed (Lower leg)''.
The remaining panels show the tissues colored according to SCAMP, affinity propagation,
Mclust, k-means, and k-medoid clusterings of the data.
Colors are manually selected to match a method's cluster with
the most similar ``true'' cluster (tissue label).
When a method cluster has been matched to a ``true'' cluster,
subsequent method clusters that also match
the same ``true'' cluster are colored by a separate palette.}\label{figure:gtexTsne}
\end{figure}
\subsection{Outline of Paper}
The remainder of this paper is organized as follows.
In section \ref{section:doubledip}, we develop an extended version of the dip test of \cite{hartigan1985dip} that is used by SCAMP to search for candidate clusters containing a small number of observations.
Section \ref{section:scamp} provides the details of the SCAMP clustering algorithm.
The performance of SCAMP is examined in section \ref{section:casestudes}, where we apply it to simulated and real data sets.
Proofs of several claims made in section \ref{section:doubledip} are contained in the appendix.
As mentioned earlier, SCAMP often uses the taut string of \cite{davies2001local,davies2004densities} to induce modal groups.
We decided to use the taut string since, as noted by \cite{davies2004densities} page 1099, observation (iii), it has the minimum modality over a large collection of density estimators.
We found this modal sparsity appealing given SCAMP's search strategy.
However, SCAMP always uses the default value of $\kappa=19$ (discussed and set by the authors in \cite{davies2004densities}) to solve the $\kappa$-Kuiper problem.
For data matrices with fewer than $400$ observations,
the splitting is done with an extended version of the dip test in a sequential procedure designed to estimate the number of modes $k$ present in the coordinate of the sub-matrix under consideration.
In part, our extension of the dip test grew out of a desire to minimize SCAMP's dependence on bandwidth selection.
Once estimated, we use $\hat{k}$ in conjunction with the one-dimensional dynamic programming version of $k$-medoids by \cite{wang2011ckmeans}
to induce modal sub-collections (and so ensure that exactly $\hat{k}$ modal sub-collections are produced).
To our knowledge, this extended version of the dip test is new.
Other testing procedures, such as the excess mass test, could have been used in its place:
\cite{ameijeiras2016mode} discuss additional testing procedures.
However, as noted by \cite{hartigan2000testing} as well as \cite{hall2004attributing}, the excess mass test concludes the presence of two modes in situations like those depicted by figure \ref{fig:excessmassfail}.
SCAMP is designed with the goal of detecting such small groups and selecting them if they have high preference scores: using our extension makes this outcome possible.
We also considered using the multiscale procedure of
\cite{dumbgen2008multiscale}
and the jaws-based method of
\cite{hartigan2000testing} to test for multimodality in small samples.
We decided against using these tests since they both required developing automated procedures to make decisions on the basis of interval collections:
in the former, the collections $\mathcal{D}^{+}(\alpha)$ and $\mathcal{D}^{-}(\alpha)$;
in the latter, $W$- and $M$-components contained within larger candidate antimodal sections of the empirical distribution.
Thinking about how to develop such automated procedures led us to consider our extension to the dip, which we discuss in the next section.
\begin{figure}[tbh]
\centering
\input{./figure_1.tex}
\caption[Excess mass will not detect multimodality]{As observed in \cite{hall2004attributing}, the excess mass difference $\Delta_3=0$ iff the smallest mode of the distribution lies below the base of the valley of the larger two modes.
In the depicted Gaussian mixture, the excess mass difference does not detect trimodality since $E_3(\lambda)=E_2(\lambda)$ for values of $\lambda$, including those above the horizontal line. }
\label{fig:excessmassfail}
\end{figure}
\subsection{Summary of Contributions}
In this paper, we have developed a new clustering algorithm called selective clustering annotation using modes of projections (SCAMP). SCAMP is designed to be:
\begin{itemize}
\item{\textbf{Interpretable.} Clusters produced by SCAMP are automatically assigned labels in terms of the columns of the underlying dataset.
When the measurements in the underlying dataset are themselves meaningful, this produces clusters that human users can understand simply by reading their label.}
\item{\textbf{Robust.} SCAMP defines clusters in terms of the empirical distributions of their coordinate projections: they must be unimodal.
SCAMP is consequently able to produce clusters with long tails that accommodate outliers.}
\item{\textbf{Heterogeneous.} SCAMP is able to produce clusterings of a data matrix consisting of clusters of different sizes.
}
\item{\textbf{Simple to use.} SCAMP determines the number of clusters in a dataset as a consequence of three tuning parameters: a level $\alpha$,
against which p-values from dip test are compared; a lower-bound $m$,
which defines the smallest number of observations that can constitute a cluster; and a
univariate Gaussian variance parameter $\gamma$.
The parameter $\alpha$ describes a univariate quantity, no matter the dimension
of the data matrix.
Given these parameters, SCAMP will cluster the dataset:
estimating the number of clusters in a data matrix is an implicit part of its clustering
strategy.
This is in contrast to procedures such as $k$-means and $k$-medoids,
which either require the user to set the number of clusters $k$ themselves, or to estimate
this parameter prior to clustering.}
\item{\textbf{Composable.} The most computationally demanding task in the SCAMP procedure is the
search for candidate clusters.
Each tree grown in the search requires computing the dip test and taut string density
estimator numerous times.
Fortunately, $O(n)$ algorithms exist for both computations.
In addition, the search can be parallelized: a parallel {C\nolinebreak[4]\hspace{-.05em}\raisebox{.4ex}{\tiny\bf ++}} \ implementation of this
procedure with R interface is available.
When combined with a random-sampling scheme, it is possible to use SCAMP to cluster datasets
with a large number of observations
and moderate number of features. We demonstrate this possibility on both real and simulated
data in section \ref{section:casestudes}.
In modern datasets, where the number of features is much larger than the number of
observations,
SCAMP can be used productively
in conjunction with dimensionality reduction methods such as
PCA.
We demonstrate this possibility on bulk RNA seq data in section \ref{section:casestudes}.}
\end{itemize}
\subsection{Single Iteration}
Before SCAMP is applied to a data matrix $X$, there is a pre-processing step: the data matrix is normalized so each column vector has mean zero and unit variance.
After normalization, the data matrix is passed to SCAMP.
There are four phases in a single SCAMP iteration.
In the first phase, two forms of structured noise are added to the data matrix.
This is done so that ties within column vectors of a data matrix are broken and the relative order of observations within a column are perturbed.
In the second phase, the data matrix is recursively searched for $\alpha$-$m$-clusters.
In the third phase, a partial clustering of the data matrix is determined by selecting $\alpha$-$m$-clusters with high preference scores.
If any observations are not assigned to a cluster after phase three,
the second and third phase repeat on residual sub-matrices until a complete partitional clustering of the data matrix has been found.
In the fourth and final phase, the selected clusters are assigned descriptive labels.
This labeling provides a name for every row of the data matrix.
Figure \ref{figure:scampOutline} visualizes these four phases, when SCAMP is applied to the famous iris data set of \cite{fisher1936use}.
\begin{figure}[H]
\centering
\includegraphics[width=0.95\textwidth,keepaspectratio]{./scampOutline}
\caption[SCAMP outline]{A single SCAMP iteration applied to the iris data set of \cite{fisher1936use}.
The data set contains measurements of sepal length, sepal width, petal length, and petal width of three iris species.
Measurements are taken to two decimal places, leading to multiple ties.
The structured noise of phase 1 eliminates ties.
The annotation forest depicted in phase 2 is exhaustive: all partition trees found by SCAMP are displayed.
The candidate clusters are the leaves of the two partition trees.
These candidate clusters are used to induce the graph of phase 3: an edge connects two candidate clusters if they share a common observation.
The candidate clusters are then scored with a preference function.
Preference scores are reported on each node.
Selected clusters, bounded by red rectangles, are chosen to maximize the total sum of preference scores (candidate clusters with the same score are identical subsets of observations).
In phase 4, the selected clusters are annotated by comparing them to annotation boundaries derived from the annotation forest.}\label{figure:scampOutline}
\end{figure}
We now look at the details of each phase.
\subsubsection{Add noise to the data matrix}\label{scamp:noiseStep}
For $1 \leq i \leq k$, denote the rows of a data matrix $X_{k \times p}$ by
\[
r_i \equiv (x_{(i,1)},\ldots,x_{(i,p)})\ ,
\]
for $1 \leq j \leq p$, the columns by
\[
c_j \equiv (x_{(1,j)},\ldots,x_{(k,j)})' \ ,
\]
and specific entries as $x_{(i,j)}$.
A single SCAMP iteration begins by adding noise to each column $c_j$ of the data matrix.
The noise procedure is the same for each column $c_j$.
First, the order statistics of $c_j$
\[
O_{(1:d_1)}, O_{(2:d_2)},\ldots,O_{(q:d_q)}
\]
are determined.
Here $O_{(1:d_1)}$ indicates the minimum value observed in column $c_j$ with $d_1$ repetitions, $O_{(q:d_q)}$ the maximum with $d_q$ repetitions, and $O_{(n:d_n)}$
for $1 < n < q$ the values between. Once determined, duplicate entries of $c_j$ are replaced by samples from a uniform distribution.
Suppose an entry $x_{(i,j)}$ corresponds to the order statistic $O_{(n:d_n)}$. If $d_n=1$, $x_{(i,j)}$ is unmodified. Otherwise $x_{(i,j)}$ is replaced by
\begin{align}
u_{(i,j)} \sim \text{Uniform}\left(\frac{O_{(n:d_n)}+O_{(n-1:d_{n-1})}}{2},\frac{O_{(n:d_n)}+O_{(n+1:d_{n+1})}}{2}\right)\ . \label{scamp:uniformNoise}
\end{align}
Notice that $d_n$ entries of $c_j$ are replaced by samples from this specific uniform distribution.
In the case that the minimum value of $c_j$ is not unique, the lower bound of \eqref{scamp:uniformNoise} is replaced by that minimum value.
The same holds if the maximum value of $c_j$ has copies, though with the upper bound of \eqref{scamp:uniformNoise} replaced by the maximum value instead.
The choice of adding uniform noise is motivated by SCAMP's reliance on the dip test.
As discussed in section \ref{section:doubledip}, the dip test is calibrated against the uniform distribution.
By replacing tied observations with draws from a uniform distribution that is supported on an interval bounded by neighboring order statistics,
we have replaced a jump of $d_n/k$ in $c_j$'s empirical distribution with $d_n$ jumps of $1/k$.
From the point of view of the dip test, the empirical distribution of $c_j$ now behaves like a unimodal distribution on the interval
$([O_{(n:d_n)}+O_{(n-1:d_{n-1})}]/2,[O_{(n:d_n)}+O_{(n+1:d_{n+1})}]/2)$.
Once uniform noise has been added to $c_j$, the order statistics
\[
N_{(1:1)}, N_{(2:1)},\ldots,N_{(k:1)}
\]
are again determined. Notice each order statistic is now unique.
The noise-step concludes by associating each entry
$u_{(i,j)}$ with its new order statistic $N_{(n:1)}$, and replacing it by
\begin{align}
n_{(i,j)} \sim \text{Normal}\left(\mu = u_{(i,j)} , \sigma = \frac{N_{(n+1:1)}-N_{(n-1:1)}}{2\cdot \gamma}\right)\ , \label{scamp:GaussianNoise}
\end{align}
with $\gamma$ a tuning parameter (by default we set $\gamma \equiv 4$).
As almost all the mass is concentrated within three standard deviations of the mean, the addition of Gaussian noise \eqref{scamp:GaussianNoise} provides
observations with the chance to swap relative position with some of their neighbors.
For observations with values near modes of $c_j$, this step is largely inconsequential in their ultimate cluster assignment.
But for observations near antimodes of $c_j$, this step increases the chance they change cluster assignment on successive SCAMP iterations.
Over multiple SCAMP iterations, changes in cluster assignments provide an uncertainty measure of each observation's SCAMP label.
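A minimal Python sketch of this noise step for a single column follows; it mirrors \eqref{scamp:uniformNoise} and \eqref{scamp:GaussianNoise} but simplifies the handling of the boundary order statistics, so it should be read as an illustration rather than as the \pkg{scamp} implementation.
\begin{verbatim}
import numpy as np

def add_structured_noise(col, gamma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    col = np.asarray(col, dtype=float).copy()
    vals, counts = np.unique(col, return_counts=True)
    # Uniform noise: spread tied entries over the interval bounded by
    # midpoints to neighboring order statistics; at the extremes the
    # bound is the extreme value itself.
    for n, (v, d) in enumerate(zip(vals, counts)):
        if d == 1:
            continue
        lo = v if n == 0 else (v + vals[n - 1]) / 2.0
        hi = v if n == len(vals) - 1 else (v + vals[n + 1]) / 2.0
        col[col == v] = rng.uniform(lo, hi, size=d)
    if col.size < 3:
        return col                     # too few rows to perturb order
    # Gaussian noise: sd taken from the gap between the flanking order
    # statistics, letting neighbors swap relative position.
    order = np.argsort(col)
    s = col[order]
    sd = np.empty_like(s)
    sd[1:-1] = (s[2:] - s[:-2]) / (2.0 * gamma)
    sd[0], sd[-1] = sd[1], sd[-2]      # simplified boundary handling
    out = np.empty_like(s)
    out[order] = s + rng.normal(0.0, sd)
    return out
\end{verbatim}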
\subsubsection{Search for candidate clusters}\label{scamp:phase2}
After structured noise has been added to the data matrix, SCAMP searches the data matrix for subsets of rows that follow the definition of an $\alpha$-$m$-cluster.
To conduct this search, two parameters must be set: the significance level for the dip test of \cite{hartigan1985dip}, $\alpha$, and the lower bound on cluster size, $m$.
Once set, SCAMP begins by testing each coordinate projection of the dataset for multimodality using the dip test.
For each column in the data matrix, when multimodality is detected by the dip test
modal groups are induced by estimating the density of the column vector and then separating observations into groups relative to the antimodes of the density.
This is done using the taut string density estimator of \cite{davies2004densities}.
The cut-points that separate the modal groups are determined by calculating the average coordinate value along each antimodal component of the taut string.
Sub-matrices are then created for all observations that fall between each neighboring pair of cut-points (the minimum and maximum value along the coordinate sub-collection are treated as cut-points).
Within each sub-matrix, projections of the remaining $p-1$ coordinates are recursively tested for multimodality (again using the dip test at level $\alpha$).
If multimodality is detected in a coordinate of a sub-matrix containing fewer than $400$ observations, the taut-string is not used to induce modal groups.
Instead, bimodality is tested using the $2$-dip of section \ref{section:doubledip} at level-$\alpha/2$.
Failure to reject the hypothesis of bimodality causes SCAMP to conclude $\hat{k}=2$ modal sub-collections are present in the coordinate.
If bimodality is rejected, trimodality is tested using the $3$-dip at level-$\alpha/3$.
Now, a failure to reject leads SCAMP to conclude $\hat{k}=3$ modal sub-collections are present.
In principle, this sequential testing procedure can be carried out until it arrives at an estimate of $n$-modality.
However if tri-modality is rejected, SCAMP simply concludes $\hat{k}=4$ modal sub-collections are present due to computational limitations.
SCAMP then separates the projection into modal sub-collections using
the one-dimensional version of $k$-medoids by \cite{wang2011ckmeans} with $\hat{k}$ groups.
We introduce this secondary sequential testing procedure in sub-matrices with fewer than $400$ observations for several reasons. The first is computational. As the number of observations increases, the cost of exhaustive splitting grows with the falling-factorial of the number of modes of the $N$-dip. Because of this computational cost, we have not yet been able to compute tables of critical values for the $N$-dip when $N > 3$.
Ideally, we would like to apply this sequential procedure until the $N$-dip fails to reject its null hypothesis for some $N$, in place of the current combination of the dip test and subsequent density estimate. This is because the sequential procedure relies only on the choice of $\alpha$. It does not introduce dependence on bandwidth into the SCAMP search procedure, implicitly or explicitly. The taut string, for example, requires setting the $\kappa$ parameter, or accepting its default value of $\kappa = 19$.
In our experience, when applying SCAMP to real datasets the default $\kappa$-value of $19$ causes the taut string to produce very useful density estimates the vast majority of the time. However, in rare data sets we have found that the taut string can find a very large number of antimodes when applied to certain features. In these cases, exhaustive search of the annotation forest becomes impossible unless the feature is modified or removed from the analysis.
To account for this rare circumstance, our implementation in the \pkg{scamp} package has a parameter to
limit the number of antimodes SCAMP will search along any coordinate in a given partition tree. In the
event the taut string estimate exceeds the parameter value, the search terminates along that branch. In
our typical use cases, we set this parameter to $100$, though we expect this parameter should be
modified depending on the problem domain.
If we view an estimated density having a very large number of antimodes when in truth there are very few antimodes in the underlying distribution as an instance of type I error, we can think of switching to the sequential testing procedure as explicitly imposing a preference for type II error into the SCAMP search procedure in small sub-collections. The current implementation guarantees that SCAMP will never produce greater than $4$ modal sub-collections when examining small subsets of the data matrix, even in cases where there truly are more than $4$. Should more efficient implementations of the $N$-dip become available, this restriction can be reduced for larger number of modes in the $N$-dip and more than $400$ observations in a sub-matrix.
The recursive search terminates along a branch when it encounters a sub-matrix whose coordinates all have dip test $p$-values exceeding $\alpha$,
or the sub-matrix under consideration contains fewer than $m$ observations.
In the former case, SCAMP has found a candidate cluster and the indices of the rows in the sub-matrix are recorded.
In the latter case, stopping is viewed as uninformative and the indices are deleted.
Ideally this search step is exhaustive.
In practice, exhaustive search is impractical for data matrices with even a moderate number of
features.
Sampling is used to address this limitation: our implementation in the package \pkg{scamp} is able
to search uniformly at random among the space of partition trees in the annotation forest until
finding a specified number of candidate clusters. We empirically justify this sampling approach with
our simulated data experiment in section \ref{section:casestudes}.
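The following Python sketch outlines the recursive search under simplifying assumptions: it re-tests all coordinates at every level rather than only the remaining $p-1$, it searches exhaustively rather than by sampling, and \texttt{dip\_pvalue} and \texttt{cut\_points} are hypothetical stand-ins for the dip test and the taut-string antimode cut-points.
\begin{verbatim}
import numpy as np

def search(x, rows, alpha, m, dip_pvalue, cut_points, found):
    # x: data matrix; rows: indices of the current sub-matrix;
    # found: list collecting candidate clusters (arrays of row indices).
    if rows.size < m:
        return                 # uninformative stop: indices discarded
    pvals = [dip_pvalue(x[rows, j]) for j in range(x.shape[1])]
    multimodal = [j for j, p in enumerate(pvals) if p <= alpha]
    if not multimodal:
        found.append(rows)     # all projections unimodal: candidate
        return
    for j in multimodal:
        col = x[rows, j]
        # Cut-points separating the modal groups of this projection.
        edges = np.concatenate(([col.min()], np.sort(cut_points(col)),
                                [col.max() + 1e-9]))
        for lo, hi in zip(edges[:-1], edges[1:]):
            search(x, rows[(col >= lo) & (col < hi)], alpha, m,
                   dip_pvalue, cut_points, found)

# found = []
# search(X, np.arange(X.shape[0]), 0.25, 25, dip_pvalue, cut_points, found)
\end{verbatim}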
\subsubsection{Select Candidate Clusters}\label{scamp:phase3}
Clustering of the dataset begins with SCAMP imposing \textit{preference values} on the set of
candidate clusters collected in phase 2.
As discussed in section \ref{section:introduction}, the preference function reflects the user's beliefs
about components in the mixture \eqref{scamp:cluster_distribution}.
For its default, SCAMP prefers candidate clusters that are unimodal, symmetric, and have low variance
along each coordinate projection.
If SCAMP is applied to a data matrix produced in a domain where other cluster shapes are desirable, the
preference function can be modified to select accordingly.
SCAMP uses
the sample L-moments of \cite{hosking1990moments,hosking2007some} to assign preference
scores to candidate clusters.
Recalling definition \ref{section1:def:amcluster} from section \ref{section:introduction},
denote the set of $s$ candidate clusters found in SCAMP phase 2 by
\[
\mathcal{C} \equiv \left\{X_{\mathcal{I}\times p}^1,\ldots,X_{\mathcal{I}\times p}^r,\ldots,X_{\mathcal{I}\times p}^s \right\}\ ,
\]
with $1 \leq r \leq s$ and with the indices into the original data matrix different for each element of $\mathcal{C}$.
Denote Hosking's trimmed second, third, and fourth order sample L-moments
for the $i^{th}$ coordinate of $X_{\mathcal{I}\times p}^r$ by $\sigma_i^r, \tau_i^r$, and $\phi_i^r$ respectively, with $1 \leq i \leq p$.
Additionally, denote the p-value of the dip statistic $\hat{D}_1$ for each of the $p$ coordinates of $X_{\mathcal{I}\times p}^r$ by $\delta_i^r$.
For each fixed candidate cluster $X_{\mathcal{I}\times p}^r$ in $\mathcal{C}$, SCAMP assigns it a preference value by first computing
\begin{align}
&\delta^r \equiv \min_{1 \leq i \leq p} \delta_i^r \ ,\ \sigma_{\text{max}}^r \equiv \max_{1 \leq i \leq p} \abs{\sigma_i^r} \ ,\ \sigma_{\text{sum}}^r \equiv \sum_{1 \leq i \leq p} \abs{\sigma_i^r} \ ,\nonumber \\
&\tau_{\text{max}}^r \equiv \max_{1 \leq i \leq p} \abs{\tau_i^r} \ , \ \tau_{\text{sum}}^r \equiv \sum_{1 \leq i \leq p} \abs{\tau_i^r} \ ,\ \phi_{\text{max}}^r \equiv \max_{1 \leq i \leq p} \abs{\phi_i^r} \ ,\
\text{ and } \phi_{\text{sum}}^r \equiv \sum_{1 \leq i \leq p} \abs{\phi_i^r} \ .\nonumber
\end{align}
With these quantities computed for each element of $\mathcal{C}$, SCAMP next computes
\begin{align}
&\sigma_{\text{max}} \equiv \max_{1 \leq r \leq s} \sigma_{\text{max}}^r \ ,\ \sigma_{\text{sum}} \equiv \max_{1 \leq r \leq s} \sigma_{\text{sum}}^r \ ,
\tau_{\text{max}} \equiv \max_{1 \leq r \leq s} \tau_{\text{max}}^r \ , \nonumber \\
&\ \tau_{\text{sum}} \equiv \max_{1 \leq r \leq s} \tau_{\text{sum}}^r \ ,
\phi_{\text{max}} \equiv \max_{1 \leq r \leq s} \phi_{\text{max}}^r \ , \text{ and } \ \phi_{\text{sum}} \equiv \max_{1 \leq r \leq s} \phi_{\text{sum}}^r \ .\nonumber
\end{align}
Using these maximums across $\mathcal{C}$, normalized values of the sample L-moments are then computed for each $X_{\mathcal{I}\times p}^r$:
\begin{align}
\sigma_{nm}^r \equiv 1 - \frac{\sigma_{\text{max}}^r}{\sigma_{\text{max}}} \ ,\ \sigma_{ns}^r \equiv 1 - \frac{\sigma_{\text{sum}}^r}{\sigma_{\text{sum}}} \ , \label{section:scamp:normstep}
\end{align}
along with the analogous quantities $\tau_{nm}^r, \tau_{ns}^r, \phi_{nm}^r,$ and $\phi_{ns}^r$.
The preference score of cluster $X_{\mathcal{I}\times p}^r$ is then defined to be
\begin{align}
\mathcal{P}\left(X_{\mathcal{I}\times p}^r\right) \equiv \delta^r + \frac{\sigma_{nm}^r+\sigma_{ns}^r}{2}+ \frac{\tau_{nm}^r+\tau_{ns}^r}{2}+ \frac{\phi_{nm}^r+\phi_{ns}^r}{2}\ .\label{section:scamp:preference}
\end{align}
The normalization step \eqref{section:scamp:normstep} allows the following interpretation of the preference score \eqref{section:scamp:preference}: the higher the better.
Notice that $X_{\mathcal{I}\times p}^{r}$ enters $\mathcal{C}$ because $\delta^r > \alpha$.
By using $\delta^r$ as a signal of unimodality in \eqref{section:scamp:preference}, SCAMP is encouraged to discriminate between degrees of unimodality.
Suppose SCAMP is applied to a data matrix with $\alpha = 0.05$, and
two hypothetical candidate clusters $X_{\mathcal{I}\times p}^{r_1}$ and $X_{\mathcal{I}\times p}^{r_2}$ enter $\mathcal{C}$ with $\delta^{r_1} = 0.06$ and $\delta^{r_2} = 0.94$.
On the basis of the dip alone, $\mathcal{P}$ is defined to prefer $X_{\mathcal{I}\times p}^{r_2}$.
The preference score \eqref{section:scamp:preference} averages the trimmed sample L-moments to balance between the worst shape and average shape of the coordinates of a candidate cluster $X_{\mathcal{I}\times p}^{r}$.
For example, $(\sigma_{nm}^r+ \sigma_{ns}^r)/2$ is meant to score a candidate cluster so that if either a single coordinate has extremely high spread or many coordinates have moderate spread,
then SCAMP will not find the candidate cluster particularly appealing. Similar reasoning motivates $(\tau_{nm}^r+\tau_{ns}^r)/2$ and $(\phi_{nm}^r+\phi_{ns}^r)/2$.
The sample L-moments are trimmed in order to moderate the preference score penalty incurred by a candidate cluster when it contains an outlying observation along any of its coordinates.
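Given the per-cluster summaries above (the trimmed sample L-moment computations themselves are elided), the normalization \eqref{section:scamp:normstep} and score \eqref{section:scamp:preference} amount to the following Python sketch.
\begin{verbatim}
import numpy as np

def preference_scores(delta, sig_max, sig_sum,
                      tau_max, tau_sum, phi_max, phi_sum):
    # Each argument is a length-s vector of the per-cluster quantities
    # defined above (e.g. delta[r] is the minimum dip p-value of
    # candidate cluster r across its coordinates).
    def shape_term(stat_max, stat_sum):
        nm = 1.0 - stat_max / stat_max.max()   # normalized max statistic
        ns = 1.0 - stat_sum / stat_sum.max()   # normalized sum statistic
        return (nm + ns) / 2.0
    return (np.asarray(delta)
            + shape_term(np.asarray(sig_max), np.asarray(sig_sum))
            + shape_term(np.asarray(tau_max), np.asarray(tau_sum))
            + shape_term(np.asarray(phi_max), np.asarray(phi_sum)))
\end{verbatim}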
By scoring each candidate cluster, SCAMP has created a set of weights
\[
\mathcal{P}
\equiv \left\{
\mathcal{P}\left(X_{\mathcal{I}\times p}^1\right),
\ldots,
\mathcal{P}\left(X_{\mathcal{I}\times p}^r\right),
\ldots,
\mathcal{P}\left(X_{\mathcal{I}\times p}^s\right) \right\}
\]
associated with the set of candidate clusters $\mathcal{C}$. A graph can now be induced by viewing each
element of $\mathcal{C}$ as a node, and inducing the adjacency matrix with edge set
\[
\mathcal{E} \equiv \left[
\mathbb{E}\left(X_{\mathcal{I}\times p}^{r_1},X_{\mathcal{I}\times p}^{r_2}\right)
\ \vline\
\mathbb{E}\left(X_{\mathcal{I}\times p}^{r_1},X_{\mathcal{I}\times p}^{r_2}\right) =
\begin{cases}
0 \text{ if } X_{\mathcal{I}\times p}^{r_1} \cap X_{\mathcal{I}\times p}^{r_2} = \varnothing \\
1 \text{ otherwise} \\
\end{cases}
\right]\ .
\]
That is, two candidate clusters are connected by an edge if they share a common observation from the data matrix.
Clustering of the data matrix proceeds by selecting a subset of $\mathcal{C}$ that maximizes the sum of the associated preferences $\mathcal{P}$, subject to the constraint that no two selected clusters are connected in $\mathcal{E}$.
This is an instance of the maximum weight independent set (MWIS) integer program.
The MWIS problem is NP-hard; guarantees on the performance of approximate solutions are not known except in special cases [\cite{brendel2010segmentation}].
SCAMP performs a greedy search: it sequentially picks the available candidate cluster with the highest preference score,
and eliminates all candidate clusters connected in $\mathcal{E}$ from contention in subsequent selection steps.
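Because two candidate clusters are adjacent in $\mathcal{E}$ exactly when they share a row, the greedy step can be implemented without materializing the graph: it suffices to track the union of rows already covered. A minimal Python sketch:
\begin{verbatim}
def greedy_select(candidates, scores):
    # candidates: list of sets of row indices; scores: preference values.
    order = sorted(range(len(candidates)), key=lambda r: -scores[r])
    selected, covered = [], set()
    for r in order:
        if candidates[r].isdisjoint(covered):
            selected.append(r)          # highest-score available cluster
            covered |= candidates[r]    # eliminates all overlapping ones
    return selected
\end{verbatim}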
\subsubsection{Cluster residual observations}
The initial set of selected clusters, those candidate clusters picked by the greedy selection procedure of phase 3, often determine a partial clustering of the data matrix.
This is because the set of selected clusters consists of leaves of different partition trees in the annotation forest.
The residual observations that are not elements of any selected clusters do not necessarily obey the definition of an $\alpha$-$m$-cluster.
When these residual observations exhibit multimodality at level $\alpha$ along some of their coordinate projections, they are thought to represent samples from components of the mixture \eqref{scamp:cluster_distribution} that were not found in the initial search.
Therefore, a SCAMP iteration does not terminate after a single search-and-selection pass.
To complete the clustering of the data matrix, SCAMP conducts the search phase 2 and the selection phase 3 on the subset of rows of the data matrix corresponding to the residual observations.
This creates a new set of candidate clusters $\mathcal{C}$. The new set is scored according to the preference function $\mathcal{P}$.
New clusters are then selected and appended to the clusters already selected in the earlier selection round.
Once again, the secondary selection procedure may produce residual observations.
So, the procedure recurses, applying the search and selection phases to smaller and smaller subsets of the original data matrix until either no residual observations appear,
the residual observations comprise an $\alpha$-$m$-cluster,
or there are fewer than $m$ residual observations.
At the completion of this process, a set of selected clusters $\mathcal{S}$ has been created.
By its construction, $\mathcal{S}$ partitions the data matrix into $\alpha$-$m$-clusters.
This is why SCAMP does not require the user to select the number of clusters as a tuning parameter: the number of clusters follows instead from the choice of $\alpha$ and $m$.
$\alpha$ may initially seem as difficult a parameter to set as the number of clusters $k$.
In practice, however, we have found it relatively simple to set since $\alpha$ describes
a one-dimensional characteristic of the data matrix.
Our standard procedure is the following.
Before we apply SCAMP to a new data matrix for a given application, we first
decide on an absolute upper bound on $\alpha$.
Intuitively, this corresponds to the minimum amount of evidence we require to split
a coordinate into modal sub-collections.
Based on our experience of applying the dip test to a wide-range of datasets, we set this
upper bound to a default value of $\alpha = 0.25$.
Next, we refine this upper bound on $\alpha$ by
adding the structured noise of phase 1 to the columns of the data matrix, then computing the
dip test p-values for each column vector, and finally plotting their distribution.
If there is a clear point of separation in the distribution of p-values that
is slightly larger than our default choice of $\alpha=0.25$,
we can modify our choice upwards to respect the observed separation.
Our case studies in section \ref{section:casestudes} provide several examples
of these plots and how we use them.
When there is no clear point of separation in the p-values but a subset of the p-values fall below
our default $\alpha$ value, we normally use the default.
However, if we want a sparser clustering of the data matrix, we
next look at histograms of the features below our bound,
and pick an $\alpha < 0.25$ value large enough to ensure those features which appear
multimodal from inspection are entered into the search phase.
In our experience, classical default values, such as $\alpha = 0.05$, can also work well for
high-dimensional datasets with a large number of observations,
so long as the signal of interest is well-separated:
the separated mixture simulation of section \ref{section:casestudes} is one example of
such a scenario.
To summarize: while the choice of $\alpha$ can vary from problem to problem,
inspecting the distribution of dip-test p-values for the available features provides a simple
data-driven approach for modifying this parameter from a default setting of $\alpha = 0.25$.
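This empirical check is easy to script. In the Python sketch below, \texttt{dip\_pvalue} is a hypothetical stand-in for a dip test implementation, and \texttt{noisy\_x} is assumed to be the data matrix after the phase 1 noise has been added.
\begin{verbatim}
import numpy as np

def sorted_dip_pvalues(noisy_x, dip_pvalue):
    # One dip-test p-value per column; a clear gap in the sorted values
    # slightly above the default alpha = 0.25 suggests raising alpha.
    return np.sort([dip_pvalue(noisy_x[:, j])
                    for j in range(noisy_x.shape[1])])
\end{verbatim}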
As for the parameter $m$, if prior knowledge informs the expected cluster size, we can use this
knowledge to set a value. If no prior knowledge informs the analysis, we simply pick as small a
value as computational resources allow: our default value is $m=25$.
Beyond the computational limit, there is an absolute lower bound of $m = 4$ due to SCAMP's
reliance on the dip test.
\subsubsection{Annotate selected clusters}
A SCAMP iteration concludes by assigning labels to the selected clusters $\mathcal{S}$.
Since each selected cluster is an $\alpha$-$m$-cluster, we expect the distribution of observations within a cluster to be somewhat unimodal along any given coordinate.
Empirically we have observed that while the hypothesis of unimodality generally holds for clusters selected early in the selection process,
clusters selected near the end of the selection process can be asymmetric with shoulders.
Nevertheless, our annotation procedure behaves as if the assumption of unimodality holds equally well for all selected clusters.
To begin, annotation boundaries are determined for each of the coordinates of the data matrix $X_{k \times p}$ that are multimodal at level $\alpha$.
The annotation boundaries are computed using data collected in the annotation forest search of phase 2.
For each of the $p$ coordinates of the data matrix, $1 \leq j \leq p$, SCAMP records the location of all cut-points found in all trees of the annotation forest grown during the \textit{initial}
candidate cluster search. It records each collection of cut-points in a separate set. Define
\begin{align}
K_i^j \equiv \left\{(\kappa_1,\ldots,\kappa_i) \ \vline \
\begin{matrix}
\text{SCAMP induces } i+1\text{ modal groups in coordinate }\ j\\
\ \text{ with } i \text{ cut points } \kappa_1 < \ldots < \kappa_i\\
\end{matrix}
\right\}\ , \label{scamp:cutPointDist}
\end{align}
and the cardinality of each set by $\mathcal{K}_i^j = \abs{K_i^j}$. Define $m^j$ to be the largest index $i$ such that $\mathcal{K}_i^j > 0$.
The annotation set for the $j^{th}$ coordinate is $K_a^j$, with $a$ defined to be the smallest index $1 \leq a \leq m^j$ such that $\mathcal{K}_a^j \geq \mathcal{K}_i^j$ for all $1 \leq i \leq m^j$.
Supposing the annotation set $K_a^j$ has cardinality $\mathcal{K}_a^j = n$,
the annotation boundaries $K^j$ are derived for the $j^{th}$ coordinate by computing the median cut-point
for each of the $a$ cut-point positions:
\begin{align}
K^j \equiv \left\{(k_1,\ldots,k_a) \ \vline \ k_b \equiv \text{median}_{1 \leq i \leq n} \ \kappa_{(b,i)} \text{ for } 1 \leq b \leq a \right\}\ . \label{scamp:annotationBoundaries}
\end{align}
To annotate a specific coordinate in a selected cluster, three sample percentiles of the selected cluster are computed.
By default the percentiles $q_l \equiv (50 - \epsilon)^{th}$, $q_m \equiv 50^{th}$,
and $q_u \equiv (50 + \epsilon)^{th}$ are used,
leading us to annotate clusters relative to their median coordinate values.
These values can be adjusted by the user, depending on how rigidly they want the final labels
assigned to a cluster to respect the annotation boundaries along all labeled coordinates.
In our experience, this varies from problem to problem.
The label provided to each cluster is a proportion describing the relative position of the sample
quantiles to the annotation boundaries \eqref{scamp:annotationBoundaries},
thereby associating the cluster with a mode of the coordinate projection.
In the special case $a=1$, the following occurs: if $q_l \geq k_1$,
the cluster is given the label $1/1$; if $q_u \leq k_1$, the cluster is given the label $0/1$;
otherwise, the cluster is split, with points in the cluster below $k_1$ given the label $0/1$,
and those above $k_1$ the label $1/1$.
If $a > 1$, then $q_m$ is compared to each of the boundaries $k_1,\ldots,k_a$.
If $q_m < k_1$, the cluster is labeled $0/a$; otherwise the cluster is labeled $b/a$, where
$k_b$ is the largest annotation boundary that $q_m$ exceeds.
To summarize: for each multimodal coordinate of the data matrix $X$, SCAMP clusters are assigned
labels with fractions describing the relative position of the cluster's column
sample quantiles to the annotation boundaries.
In the \pkg{scamp} package,
these fractions (up to $8$) are then mapped to a dictionary to give each
observation an interpretable label for the particular fraction.
For example, if $a=1$, SCAMP reports $0/1$ as ``lowest'', and $1/1$ as ``highest''; if $a=2$,
$1/2$ is ``medium''; etc.
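The labeling rules for a single coordinate can likewise be written compactly. In this hedged Python sketch, labels are returned as (numerator, denominator) pairs, and the $a=1$ case in which the cluster itself must be split is signaled with \texttt{None}.
\begin{verbatim}
def annotate_coordinate(q_l, q_m, q_u, boundaries):
    # q_l, q_m, q_u: the three sample percentiles of the cluster;
    # boundaries: the sorted annotation boundaries k_1 < ... < k_a.
    a = len(boundaries)
    if a == 1:
        if q_l >= boundaries[0]:
            return (1, 1)
        if q_u <= boundaries[0]:
            return (0, 1)
        return None          # cluster is split at the single boundary
    b = sum(q_m >= k for k in boundaries)   # largest boundary exceeded
    return (b, a)            # (0, a) when q_m < k_1
\end{verbatim}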
\subsection{Multiple Iterations}
As noted at the start of this section, a single SCAMP iteration determines a partitional clustering of a data matrix $X_{k \times p}$.
The clustering produced by that iteration is always dependent on the realization of the random noise in section \ref{scamp:noiseStep}.
Further uncertainty is added if the search of section \ref{scamp:phase2} finds only a random subset of the annotation forest.
SCAMP is able to quantify the effect of these stochastic sections of the procedure by being run
multiple times on the same data matrix $X_{k \times p}$.
By providing a name for each selected cluster in a given iteration's annotation step, SCAMP also
provides names for each individual observation in the data matrix.
Because of this, we can track the naming history of each observation in the data matrix across
multiple iterations.
We emphasize the computational efficiency:
keeping this record only requires updating a map for each observation across the iterations,
rather than recording which observations are co-clustered as SCAMP iterates.
Labels assigned to a given observation define the keys for each map; the values are integer counts
of the number of times the observation is labeled a particular way.
We view the name determined by a single SCAMP iteration
as a vote for how a particular row of a data matrix should be described.
For some rows, the vote is consistent: every iteration SCAMP gives a specific
observation the same name.
In such a case, we have high confidence the observation is named appropriately.
For other rows, the vote is split, and two or three names appear with some regularity.
In these cases, it seems likely the appropriate name might be some combination
of those names that appear regularly.
And for still other rows, the vote is inconclusive, with no collection of names appearing to be
more than fluctuations of the SCAMP procedure itself.
Because of this, the output of the \textit{scamp} procedure in the package \pkg{scamp} is not one
but \textit{two} clusterings of a data matrix $X_{k \times p}$.
The first is the clustering found by labeling each observation according to the label it is given most
often across the iterations.
The second is the clustering found by a run-off heuristic, which we now describe.
If, across multiple SCAMP iterations, an observation receives the same label in more than half of
them, the observation is named according to that label.
However, if an observation is given a label less than half but more than $30\%$ of the time, and
the second most frequent label appears more than $20\%$ of the time,
then the two labels are combined according to the following heuristic. If the most frequent label
describes a coordinate, but the second most frequent label does not,
the final label is given the most frequent label's description. However, if the most frequent
label describes a coordinate by the ratio $a/b$ and the second most frequent label
describes it as $c/d$, that coordinate is then called $(ad + bc)/(2bd)$. For example, if the most
frequent label is $0/1$ (corresponding to the annotation ``lowest''), and the second most frequent
label is $1/2$ (corresponding to the annotation ``medium''), the combined label would become
$(0\cdot 2 + 1\cdot 1)/(2\cdot 1 \cdot 2) = 1/4$ (corresponding to the annotation ``medium-low'').
A similar combination heuristic is carried out in the case where the most frequent label for an observation occurs between $20\%$ to $30\%$ of the time, and the second and third
most frequent more than $15\%$.
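The fraction arithmetic in this heuristic is just the average of the two fraction labels, since $(a/b + c/d)/2 = (ad+bc)/(2bd)$; the following Python snippet reproduces it.
\begin{verbatim}
from fractions import Fraction

def combine_labels(top, second):
    # top, second: (numerator, denominator) labels for one coordinate,
    # e.g. (0, 1) = "lowest" and (1, 2) = "medium".
    (a, b), (c, d) = top, second
    return Fraction(a * d + b * c, 2 * b * d)

# combine_labels((0, 1), (1, 2)) -> Fraction(1, 4), i.e. "medium-low"
\end{verbatim}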
Of course, other voting schemes could be considered in place of these heuristics, and they carry
interesting questions of their own: is any one voting scheme optimal, or does it depend on problem
domain? We note that instead of running SCAMP for a pre-specified number of iterations, SCAMP
could instead run until subsequent iterations produce
no change (larger than $\epsilon$) in the clustering determined by the selected voting scheme.
In our own use of SCAMP, we most-often use the result of the maximum-vote heuristic after setting
the number of iterations to as large a value as computation time allows.
\subsubsection{Simulation parameters}
This section of the appendix describes our settings for the separated mixtures simulation.
\begin{table}[ht]
\centering
\begin{tabular}{lrr}
\hline
& Default & Elaboration \\
\hline
Second Mixture Dimension: & 0 & 20 \\
Second Mixture Mean Different: & False & True \\
Noise Component Dimension: & 0 & 20 \\
Transform Coordinates: & False & True \\
Sampling Regime: & Gaussian & T plus Gaussian \\
\hline
\end{tabular}
\caption{Simulation Parameters}\label{simulation:mixtureSettings}
\end{table}
When all parameters are set to their default, the mixture components of \eqref{section4:mixtureDesc} are all multivariate Gaussian.
The mean vector of each component is randomly selected so its entries lie in $\left\{0,6\right\}$.
The variance-covariance matrix is randomly selected from the space of positive definite matrices with variances bounded above by $3$.
We now describe how parameter settings affect the simulation.
When the ``Second Mixture Dimension'' parameter is set to $20$, a random sample from a
\textit{second mixture} of either size $3000$ or $30000$ is taken. The number of rows
depends on the simulation scenario.
This leads to a data matrix of size $3000 \times 40$ or $30000 \times 40$, depending on the
simulation scenario.
The clustering ground truth is taken to be the cross-product of the labels of the mixture
components that produce a row of the data matrix.
This creates $17$ distinct clusters ranging in size between $125$ and $375$ observations.
The ``Second Mixture Mean Different'' parameter only affects the simulation if the ``Second Mixture
Dimension'' parameter is set to $20$.
If the ``Second Mixture Mean Different'' parameter is set to ``False'', the mean
vectors of components in the simulated mixture have entries randomly chosen from
$\left\{0,6\right\}$.
If the ``Second Mixture Mean Different'' parameter is set to ``True'', the
mean vectors of components in the simulated mixture have entries randomly chosen from
$\left\{0,3,6\right\}$.
If the ``Noise Component Dimension'' parameter is set to $20$, a sample of $3000$
or $30000$ observations is taken from a multivariate t distribution with $5$ degrees of freedom.
The non-centrality parameter is set to $2.5$ for each component.
The scale matrix is sampled from the space of positive definite matrices with diagonal elements
bounded above by $3$.
This sample is adjoined to the samples from the mixture distribution(s). It is meant to represent
the case when measurements are taken on variables unrelated to the signal of interest.
Since the sample is from a single distribution, it does not affect the clustering ground truth.
If the ``Transform Coordinates'' parameter is set to ``True'', rows sampled from the mixtures are taken through the following maps:
\begin{align}
f_1(x) &= \left(\sqrt{|x_1|},\ldots,\sqrt{|x_p|}\right)\ . \nonumber \\
f_2(x) &= \left(\exp(x_1),\ldots,\exp(x_p)\right)\ . \nonumber \\
f_3(x) &= \left(x_1^2,\ldots,x_p^2\right)\ . \nonumber \\
f_4(x) &= \left(x_1,\ldots,x_p\right)\ . \label{scamp:sim_settings}
\end{align}
The maps are sequentially applied to the components of the mixture, starting with $f_1$.
Once all four maps have been applied to different mixture components, the process repeats
from $f_1$, until all clusters have been taken through a map.
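A short Python sketch of this cyclic application, with the four maps of \eqref{scamp:sim_settings}:
\begin{verbatim}
import numpy as np

MAPS = [lambda x: np.sqrt(np.abs(x)),   # f1
        np.exp,                         # f2
        np.square,                      # f3
        lambda x: x]                    # f4 (identity)

def transform_components(components):
    # components: list of arrays, one per mixture component;
    # f1..f4 are applied cyclically, starting from f1.
    return [MAPS[i % 4](c) for i, c in enumerate(components)]
\end{verbatim}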
Finally, if the ``Sampling Regime'' parameter is set to ``T plus Gaussian'', components of the mixture alternate between multivariate normal and multivariate t with $5$ degrees of freedom.
For the multivariate t, the non-centrality parameter and scale matrix are sampled from the same space as its Gaussian counterpart.
If the ``Sampling Regime'' parameter is set to ``Gaussian'', components of the mixture are all initially sampled from multivariate normal distributions.
\subsubsection{Scenario 1: Moderate sample size, moderate number of mixture components}
To simulate this scenario, we begin by taking a sample of size $3000$ from a mixture in $20$ dimensions.
The mixture components have the following sizes, and are sampled in the stated order:
\begin{align}
\text{Mixture Component Sizes } \equiv\ \left\{1000,500,250,250,250,250,125,125,125,125\right\}\ .\label{section4:mixtureDesc}
\end{align}
In this setting when the ``Second Mixture Dimension'' parameter is set to $20$, a random sample from a \textit{second mixture} of size $3000$ is taken.
The mixture components have the following sizes; samples are taken in the stated order:
\begin{align}
\text{Second Mixture Component Sizes } \equiv\ \left\{250,125,250,125,125,250,500,125,250,1000\right\}\ .\label{section4:mixtureDesc2}
\end{align}
As before, this creates $17$ distinct clusters.
\subsubsection{Scenario 2: Large sample size, moderate number of mixture components}
The mixture components have the following sizes, and are sampled in the stated order:
\begin{align}
\text{Mixture Component Sizes } \equiv\ \left\{10000,5000,2500,2500,2500,2500,1250,1250,1250,1250\right\}\ .\label{section4:mixtureDescBigSimple}
\end{align}
In this setting when the ``Second Mixture Dimension'' parameter is set to $20$, a random sample from a \textit{second mixture} of size $30000$ is taken.
The mixture components have the following sizes; samples are taken in the stated order:
\begin{align}
\text{Second Mixture Component Sizes } \equiv\ \left\{2500,1250,2500,1250,1250,2500,5000,1250,2500,10000\right\}\ .\label{section4:mixtureDesc2BigSimple}
\end{align}
As before, this creates $17$ distinct clusters.
\subsubsection{Scenario 3: Large sample size, large number of mixture components}
The mixture components have the following sizes, and are sampled in the stated order:
\begin{align}
\text{Large Sample Mixture Component Sizes } \equiv\
\left\{
\begin{matrix}
&10000,5000,2500,2500,1250,1250\\
&1250,1250,500,500,500,500\\
&250,250,250,250,250,250\\
&250,250,100,100,100,100\\
&100,100,100,100,100,100\\
\end{matrix}
\right\}\ .\label{section4:mixtureDescBig}
\end{align}
In this setting when the ``Second Mixture Dimension'' parameter is set to $20$, a random sample from a \textit{second mixture} of size $30000$ is taken.
The mixture components have the following sizes; samples are taken in the stated order:
\begin{align}
\text{Large Sample Second Mixture Component Sizes } \equiv\
\left\{
\begin{matrix}
&250,100,250,100,100,250\\
&500,100,100,100,1250,250\\
&500,100,250,1250,100,500\\
&5000,250,10000,2500,250,1250\\
&100,100,500,2500,1250,250\\
\end{matrix}
\right\}\ .\label{section4:mixtureDescBig2}
\end{align}
When the mixtures \eqref{section4:mixtureDescBig} and \eqref{section4:mixtureDescBig2} are combined, they
produce a mixture with $56$ components, with sample sizes
ranging between $50$ and $3950$ observations.
\section{Introduction}\label{section:introduction}
\input{./jmlr_newIntro.tex}
\section{Double Dipping}\label{section:doubledip}
\input{./jmlr_doubledip.tex}
\section{Selective Clustering Annotated using Modes of Projections}\label{section:scamp}
\input{./jmlr_scamp.tex}
\section{Simulations and Examples}\label{section:casestudes}
\input{./alt_jmlr_newCaseStudies.tex}
\section{Concluding Discussion}\label{section:conclusion}
In this paper, we developed a new clustering algorithm called selective clustering annotated using
modes of projections (SCAMP).
SCAMP relies heavily on the dip test of \cite{hartigan1985dip}; while developing SCAMP, we also
developed an extension
of the dip test, described in section \ref{section:doubledip}, to test univariate distributions for
multiple modes.
In section \ref{section:scamp}, we discussed the details of the SCAMP clustering algorithm.
In section \ref{section:casestudes}, we showed
that the SCAMP algorithm can be used to produce interesting clusterings of many different types of data.
Over the course of the paper, we showed that SCAMP makes minimal distributional assumptions,
has tuning parameters that are relatively easy to set, can produce clusters with interpretable
labels,
and is able to directly cluster datasets with a large number of observations.
We also discussed one of SCAMP's main limitations:
SCAMP can only detect clusters that are separable along the axes of measurement.
Additionally, we
proposed some work-arounds to this limitation: to apply SCAMP to the GTEx data set, we first
rotated the data through its principal components.
As our analysis of the GTEx data shows, many clustering algorithms, including SCAMP,
can be used to generate useful clusterings of data matrices with a moderate number of observations.
However, many of these approaches also require the user to provide the number of clusters in the
data as
a parameter. This requirement necessitates either a separate estimation step, a model comparison
step, or choosing a large number
for this parameter and then merging clusters. By only requiring that the user set a level $\alpha$
for the dip test,
we think SCAMP has simplified the clustering task.
In terms of future research, we are interested in improving SCAMP's annotation step.
Currently SCAMP only annotates clusters relative to the columns in a data set with dip test
p-values below level $\alpha$.
This is effectively a variable selection step.
We think this step causes the SCAMP labels to omit some useful information.
For example, labels do not currently reflect a cluster's position relative to a unimodal variable
with wide shoulders;
it would be useful to improve the annotations to account for such a scenario.
We are also interested in the consistency of phase 2 and phase 3 of the SCAMP algorithm.
While there are many hierarchical clustering algorithms, SCAMP's approach of recursively growing
multiple partition trees and
then selecting leaves across different trees is, to our knowledge, new.
It is of interest to know if there are natural parametric conditions
(or, more generally, separation conditions) for the mixture components
\eqref{scamp:cluster_distribution},
under which SCAMP can be shown to recover the number of components in the mixture and to
correctly assign observations to their generating component.
\section{Acknowledgements}
EG wishes to thank both K. K. Dey and V. Voillet for their separate discussions of the GTEx data.
This work was funded by a grant from the Bill and Melinda Gates Foundation to RG [OPP1032317]
and a grant from the NIGMS to GF [R01 GM118417-01A1].
\newpage
\section{Defining and identifying treatment effects}
\section{Introduction}
\label{intro}
When designing a randomized trial, it is sometimes necessary and/or desirable to assign the intervention to groups of participants rather than to individuals \cite{hayes_cluster_2017, campbell_how_2014, donner_design_2010, eldridge_practical_2012}. For example, it would be impractical to test the impact of a new teaching method if the method were delivered to randomized students in the same classroom, but much more feasible if randomization happens at the classroom level. In cluster randomized trials (CRTs), correlation between the outcomes of individuals in a given cluster may arise due to, for example, spillover effects between individuals or shared environments or characteristics of individuals in the cluster. This dependence violates the common regression assumption that all observations are independent and identically distributed (i.i.d.), complicating statistical estimation and inference. Longitudinal data can also be considered clustered; with repeated measurements on the same individuals, each individual is their own ``cluster'' and the correlation between outcomes could be due to measured and/or unmeasured characteristics of the individual \cite{fitzmaurice_applied_2012}.
A number of well-established methods can account for the dependence of observations in a cluster \cite{liang_longitudinal_1986, fitzmaurice_applied_2012, hayes_cluster_2017}. However, not all methods can address practical challenges that may arise in CRT analysis. First, outcomes may not be measured on all participants in each cluster. This could occur by design if, for example, measurement of a rare or expensive outcome only occurred in a sub-sample of participants. Failing to adjust for sampling can result in biased point estimates and/or misleading inference \cite{laan_targeted_2011, horvitz_generalization_1952, mhs_gordis_2018}. Additionally, incomplete ascertainment of outcomes among all (or the selected subset of) participants can bias results if the outcomes are not missing completely at random (MCAR) \cite{rubin_inference_1976, robins_analysis_1995, national_research_council_prevention_2010}. Individuals whose outcomes are not measured are likely different than those who were fully observed; for example, students who are absent on an exam day may be systematically different than those present. If this systematic missingness is influenced by the intervention (for example, a new teaching technique improves motivation and attendance, influencing exam scores and the probability of measurement), the risk of bias is even larger. This is a common problem: a 2016 review found that missing data were present in 93\% of CRTs, 55\% of which simply performed a complete-case analysis \cite{fiero_statistical_2016}.
Second, logistical and fiscal constraints often limit the number of clusters in CRTs. Indeed, a review of 100 CRTs found 37\% with fewer than 20 clusters \cite{kahan_increased_2016} and another review of 100 CRTs found a median of 33 clusters \cite{selvaraj_characteristics_2013}. Further, in CRTs with many clusters, key subgroup analyses might be conducted within strata defined by cluster-level covariates (e.g., region), limiting the number of randomized units included in that analysis. As the number of clusters shrinks, chance imbalance on covariates that influence the outcome becomes more likely. Accounting for these covariates and other outcome predictors can increase the precision of the estimator and thereby the statistical power (e.g. \cite{tsiatis_covariate_2008, hayes_cluster_2017, moore_covariate_2009, fisher_statistical_1932}). However, in analyses with few clusters, including too many covariates can lead to overfitting, and it is often not clear which covariates (or their form) to select for optimal performance \cite{balzer_adaptive_2016}.
Third, statistical inference often relies on (i) tests with known finite sample properties that may be inefficient or (ii) the asymptotic behavior of estimators that may not hold in CRT analyses with a limited number of clusters. For example, generalized estimating equations (GEE) and generalized linear mixed models (GLMMs), two common approaches for analyzing CRTs \cite{laird_random-effects_1982, liang_longitudinal_1986}, both rely on having a ``sufficient'' number of clusters. The exact recommendation varies, with some suggesting GEE can be used with as few as 10 clusters \cite{pan_small-sample_2002}, while others suggest that these approaches (without small-sample corrections) should be avoided without $\geq$30 clusters \cite{kreft_introducing_1998, hayes_cluster_2017, murray_design_2018}. Altogether, inference based on a small number of clusters may be unreliable, creating conservative or anti-conservative confidence interval coverage depending on the situation \cite{leyrat_cluster_2018}. For an overview and comparison of existing CRT analysis methods, we refer the reader to \cite{hayes_cluster_2017, benitez_defining_2022}.
Here, we address these challenges by combining \textit{Two-Stage targeted minimum loss-based estimation} (TMLE) to account for sub-sampling and missing individual-level outcomes \cite{balzer_two-stage_2021} with carefully considered \textit{conditional independence assumptions} to address the limited numbers of clusters \cite{laan_estimating_2013}. The novel contributions of this work include the following. First, we extend Two-Stage TMLE to handle differential measurement of an outcome among a closed cohort, where cohort membership is defined by sub-sampling and also subject to differential measurement. Second, we detail the assumptions required to increase the effective sample size by considering a sub-unit of the cluster to be the conditionally independent unit. Since the cluster remains the unit of randomization, this process results in the CRT behaving more like an observational study. As a consequence, we extend the prior asymptotic results and practical implementation of Two-Stage TMLE to address the challenges in this setting. Finally, to the best of our knowledge, this is the first work to demonstrate the real-life consequences of various analytic choices, using real-world data from a community randomized trial.
The rest of the paper proceeds as follows. Section \ref{motivation} describes the motivating example for this work. Section \ref{stage1} describes the strategy for estimating a cluster-level endpoint, adjusting for sub-sampling, missing outcomes at baseline among those sampled, and missing outcomes at follow-up among those at risk at baseline. Section \ref{stage2} presents a cluster-level strategy to estimate the intervention effect, while optimizing precision with few independent units. Section \ref{2comm} presents several causal models and the assumptions required to increase the number of independent units by partitioning the clusters into sub-units. Section \ref{Sec:Est_partition} describes the impact of re-defining the independent unit on statistical estimation and inference. Comparative results from the real data example are presented in Section \ref{results}, and we conclude with a brief discussion in Section \ref{discussion}.
\section{Motivating example}
\label{motivation}
Our novel methodology is motivated by a sub-study of the Sustainable East Africa Research in Community Health (SEARCH) trial, a 32-community CRT to evaluate the population-level effects of a community-based approach to ``Universal HIV Test and Treat" as compared to an enhanced standard-of-care in rural Kenya and Uganda (NCT01864603) \cite{havlir_hiv_2019}. In intervention communities, participants were offered (i) multi-disease testing through annual, community-based health fairs, (ii) universal treatment eligibility for people with HIV, and (iii) patient-centered and streamlined care \cite{chamie_hybrid_2016, kwarisiima_high_2017}. In control communities, participants were also offered multi-disease testing through the same mechanism at baseline and endline, while treatment eligibility and care followed the country standard. The applied results evaluating the SEARCH intervention effect on several endpoints have been previously published (see, e.g., \cite{petersen_association_2017, havlir_hiv_2019,hickey_effect_2021, kamya_search_2021}), while the data analysis is ongoing for several secondary outcomes.
An estimated 1.7 billion people (roughly a quarter of the world’s population) are infected with tuberculosis (TB), and this vast reservoir of latent infections fuels TB disease and death via reactivation or rapid progression to disease once infected \cite{houben_global_2016, macpherson_mortality_2009}. The SEARCH intervention was found to reduce active TB disease among people with HIV \cite{havlir_hiv_2019}, but the impact on TB infection and community-wide TB transmission in the wider population is unknown. Understanding TB transmission dynamics and then implementing effective public health interventions is difficult. First, transmissions are airborne and likely occur both inside and outside the household in community-based settings \cite{martinez_paediatric_2019, carbone_active_2015, wood_tuberculosis_2010}. Second, the majority of transmission events result in latent infection, which can much later progress to active TB (i.e., TB disease). Finally, measurement of TB infection is imperfect and expensive. To estimate the population-level prevalence and incidence of TB (both latent and active) as well as the intervention effect on incidence, SEARCH conducted the following sub-study in 9 communities in eastern Uganda. First, in each community, a sample of households was selected from an enumerated list, generated from a rapid household census at study baseline \cite{jakubowski_universal_2022}. Selection was via stratified random sampling, and the sub-study was purposefully enriched for households where at least 1 adult (aged $\geq$ 15 years) had HIV; the goal was 100 households of each type per community. In all selected households, tuberculin skin tests (TSTs) and sociodemographic surveys were administered to household members aged $\geq$ 5 years. The sub-study participants who were TST-negative at baseline formed a closed cohort, in which a follow-up TST survey was done one year later. The primary outcome of the sub-study was the one-year incidence of TB infections among those at risk at baseline. The applied results have been previously presented \cite{marquez_impact_2022}; here we focus on the methods used to generate those results.
Estimating the effect of the SEARCH intervention on incident TB infection presented several challenges and thus opportunities for the development and application of novel methods. First, the sample was enriched for persons with HIV. It is well known that the risk and incidence of TB differs by HIV serostatus \cite{macpherson_mortality_2009}. Thus, ignoring the sampling scheme could bias estimates of the TB burden and the intervention effect. Second, while multiple visits were made to selected households to locate all household members and administer TSTs, baseline measurement of TB status was incomplete, raising concerns that the TST-negative cohort was not representative of all persons at risk in the sub-sample. Likewise, despite best efforts, measurement of TB status at the end of follow-up was also incomplete, again raising concerns about differential capture of incident infections among the TST-negative cohort. Finally, despite thousands of participants, there were only 9 communities in the sub-study, limiting statistical power and motivating the consideration of the parish, a sub-community unit, as the conditionally independent unit.
Altogether, estimation of the SEARCH intervention effect required adjustments for purposefully differential sampling, potentially differential outcome measurement, and few independent units. Full discussion of the choices made in the application is given in Section~\ref{results}; we now present our analytic approach more generally.
\section{Two-Stage TMLE for sampling and missing outcomes in CRTs}
\label{sec:Two-stage}
\textit{Two-Stage TMLE} was developed to reduce bias and improve efficiency of CRTs by optimally adjusting for baseline cluster-level covariates, after controlling for missing individual-level outcomes \cite{balzer_two-stage_2021}. In the first stage, we identify and estimate a cluster-specific endpoint, accounting for potentially differential measurement of individual-level outcomes. To do so, we stratify on each cluster, allowing the relationships between the individual-level covariates, measurements, and outcomes to be cluster-specific. For example, the relationship between age and missingness might be different in a more urbanized cluster vis-a-vis a more rural one, and this strategy allows for that flexibility. In the second stage, we use the cluster-level endpoint estimates to evaluate the intervention effect, optimally adjusting for cluster-level covariates to increase precision. Two-Stage TMLE compares favorably to competing CRT methods, especially when there are post-baseline causes of missingness \cite{balzer_two-stage_2021}, and is detailed below. We now extend the approach to account for missingness in outcome status at both baseline and endline, and we incorporate covariate adjustment to support the assumptions needed to increase the number of conditionally independent units.
\subsection{Stage 1: Identifying and estimating the cluster-level endpoint}
\label{stage1}
When the individual-level outcomes are not MCAR, estimating the cluster-specific endpoint with the simple mean among those measured can create several hazards. First, failing to account for over-sampling of certain subgroups and under-sampling of others can bias estimates for the population of interest.
Second, in longitudinal studies, failing to account for incomplete measurement of baseline status can skew estimates of risk and thereby estimates of intervention effectiveness. As an extreme example, suppose only participants at very low risk of the outcome were tested at baseline; then estimates of the baseline proportion who are at risk would be biased upwards, and the resulting incidence cohort would be a poor representation of the larger population. Likewise, failing to account for incomplete measurement of final endpoint status among the incidence cohort can also bias estimates of risk and intervention effectiveness. As another extreme example, suppose all high-risk cohort members did not have their endpoint measured; then cluster-level estimates of incidence would be biased downwards. If missingness is present at both baseline and follow-up, these biases could compound. Further, if missingness is differential by arm - say, the high-risk participants were more likely to be measured at follow-up in the intervention arm - the potential for bias is even greater.
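As a toy numerical illustration of these hazards, consider the following Python sketch (all parameter values and variable names are hypothetical, chosen only to make the bias visible): when measurement and the underlying outcome share a common cause $W$, the complete-case mean is biased for the true baseline prevalence, while a $W$-stratified (G-computation) estimator recovers it.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2023)
n = 5000
W = rng.binomial(1, 0.3, n)                  # e.g., an indicator of mobility
Y0_star = rng.binomial(1, 0.10 + 0.25 * W)   # true baseline outcome status
D0 = rng.binomial(1, 0.80 - 0.40 * W)        # mobile people measured less often
Y0 = D0 * Y0_star                            # observed outcome

complete_case = Y0[D0 == 1].mean()
gcomp = sum(Y0[(D0 == 1) & (W == w)].mean() * (W == w).mean()
            for w in (0, 1))
print(f"truth={Y0_star.mean():.3f}  "
      f"complete-case={complete_case:.3f}  stratified={gcomp:.3f}")
\end{verbatim}
Here the complete-case mean is biased downward (roughly $0.14$ versus a truth of roughly $0.175$), because the high-risk, mobile participants are under-represented among those measured.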
In our motivating study, all of these dangers were present. The households sampled for the sub-study were enriched for persons with HIV, baseline TST measurement was potentially differential among members of sampled households, and measurement of incident TB infection was also potentially differential among participants who were TST-negative at baseline. In the following subsection, we discuss our definition of the cluster-level endpoint and describe methods for estimating it, along with relevant assumptions. We follow a similar strategy to that set out by Balzer et al. \cite{balzer_far_2020, balzer_two-stage_2021}.
\subsubsection{Notation}
For an individual in a given cluster, let $E^c$ represent the cluster-level covariates (e.g., baseline HIV prevalence) and $W$ the set of individual-level covariates (e.g., age and whether a person with HIV lives in their household). These are either measured prior to intervention implementation or, at minimum, not impacted by the intervention. Let $A^c$ represent whether the cluster was in the intervention arm ($A^c=1$) or the control ($A^c = 0$) and $S$ indicate that an individual was sampled for the sub-study. Next, define $Y_0^*\in \{0,1\}$ as a participant's underlying (possibly unmeasured) outcome status at baseline - specifically, $Y_0^*=1$ if the participant has the outcome (e.g., TB infection) at baseline and 0 if not. Likewise, define $\Delta_0$ as an indicator that their outcome was measured at baseline; hence, $\Delta_0$ is deterministically 0 if the participant was not sampled ($S=0$) for the sub-study. Then define the observed outcome at baseline as $Y_0 = \Delta_0 \times Y^*_0$, equaling 1 if the participant was measured and had the outcome at baseline. Participants known to be at risk at baseline (i.e., those with $\Delta_0=1$ and $Y_0=0$) form a closed cohort for incidence measurement. Variables $Y^*_1$, $\Delta_1$, and $Y_1$ are the follow-up timepoint analogues. Follow-up measurement only occurs among members of the incidence cohort; thereby, $\Delta_1 = 0$ if either $\Delta_0 = 0$ or $Y_0 = 1$. Thus, the observed data on a participant are $O = (E^c, W, A^c, S, \Delta_0, Y_0, \Delta_1, Y_1)$. However, since $E^c$ and $A^c$ are constant in a cluster, we can simplify the participant-level data to $O = (W, S, \Delta_0, Y_0, \Delta_1, Y_1)$ \cite{balzer_two-stage_2021}. A simplified directed acyclic graph (DAG) showing the relationships between variables in our applied example is shown in Figure \ref{oneclustdag}.
\subsubsection{Cluster-level causal parameter}
In Stage 1, our interest is in the true cluster-specific incidence of the outcome, which could be directly observed in the counterfactual scenario where (i) all cluster members were included in the sub-study and assessed for the outcome at baseline, and (ii) all who tested negative at baseline were assessed again at follow-up. If such a scenario were possible, we would know the baseline status $Y_0^*$ on all cluster members and the endline status $Y_1^*$ on all cluster members who were at risk at baseline: $Y^*_1 \mid Y^*_0=0$. While this scenario is impossible, it helps us to define and later identify our target cluster-level causal parameter $Y^{c*}$ as:
\begin{equation}
\label{eq:targparam}
\begin{aligned}
Y^{c*} \equiv \mathbb{P}(Y_1^* = 1 \mid Y_0^* = 0) = \frac{\mathbb{P}(Y_1^* = 1, Y_0^* = 0)}{\mathbb{P}(Y_0^* = 0)}
\end{aligned}
\end{equation}
Within each cluster separately, we will identify and estimate the numerator and denominator and then take their ratio to obtain an estimate of the corresponding cluster-level statistical parameter.
\subsubsection{Identification of the cluster-level endpoint}
Using the identity $\mathbb{P}(Y_0^*=0) = 1 - \mathbb{P}(Y_0^*=1)$, we can estimate the prevalence of the outcome ($\mathbb{P}(Y_0^*=1)$) at baseline and then subtract from 1 to obtain the proportion at risk. As previously discussed, we would be able to directly calculate the baseline outcome prevalence if all cluster members were included in the sub-study ($S=1$) and assessed for the outcome ($\Delta_0 = 1$). The true underlying parameter $\mathbb{P}(Y^*_0=1)$ can be represented as a statistical parameter (i.e., function) of the observed data distribution, given the following assumptions: i) baseline outcome status is missing at random (MAR): $Y^*_0 \perp \!\!\! \perp S \mid W$ and $Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W$; and ii) positivity: $\mathbb{P}(S=1 \mid W = w) > 0$ and $\mathbb{P}(\Delta_0 = 1 \mid S = 1, W = w) > 0$ for all values $w \in W$.
The reasonableness of the identification assumptions in our motivating example is discussed in Section \ref{results}. In brief, MAR would hold if sub-study sampling was done randomly within values of $W$ \emph{and} if the only common causes of the outcome and its measurement (among those sampled) were also captured in $W$. Additionally, the positivity assumption would hold if there were no values of $W$ in which sub-sampling or measurement (among those sampled) were impossible. Together, these assumptions allow identification of the target causal parameter $\mathbb{P}(Y^*_0 = 1)$ using the observed data as an iterated expectation over the adjustment set $W$ (derivation in the supplemental materials), denoted $\psi_{den} = \mathbb{E} [\mathbb{E} \{Y_0 \mid \Delta_0 = 1, S = 1, W \}]$.
To guide our identification of the numerator, the proportion of individuals who are positive at follow-up and at risk at baseline $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$, we consider a longitudinal dynamic intervention (e.g. \cite{hernan_comparison_2006, laan_causal_2007, robins_estimation_2008}). First, as with the denominator, we `set' $S = 1$ and $\Delta_0=1$; that is, all cluster members are sampled for the sub-study and all have their outcome measured at baseline. Second, among those at risk at baseline $(Y_0=0, \Delta_0=1)$, we `set' $\Delta_1 = 1$ to ensure complete measurement of the outcome at follow-up among those in the incidence cohort. Identification of $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$ is possible under the sequential randomization assumptions \cite{robins_new_1986} and corresponding positivity assumptions (details and derivation in supplemental materials):
$$
\psi_{num} = \mathbb{E} \left[ \mathbb{E} \{ \mathbb{E} (Y_1 \mid \Delta_1 = 1, Y_0=0,\Delta_0 = 1, S = 1, W) \mid \Delta_0 = 1, S = 1, W\} \right]
$$
If the adjustment variables are discrete, this can be equivalently expressed as
$$
\psi_{num}=\sum_w \mathbb{P}(Y_1=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1 , W=w) \mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)
$$
In words, the statistical parameter for the numerator is the adjusted probability of having the outcome at follow-up among those at risk at baseline, scaled by the adjusted probability of being at risk at baseline, and standardized with respect to the adjustment set. We discuss the reasonableness of the identification assumptions in our motivating example in Section \ref{results}.
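For intuition, the following Python sketch computes this plug-in quantity for a single cluster with one discrete adjustment covariate; the column names (\texttt{W, S, D0, Y0, D1, Y1}, with \texttt{D0, D1} denoting $\Delta_0, \Delta_1$) are hypothetical, and this is an illustration of the formula rather than the estimator ultimately used in the paper.
\begin{verbatim}
import pandas as pd

def psi_num_plugin(df: pd.DataFrame) -> float:
    """Plug-in estimate of psi_num with a single discrete covariate W."""
    psi = 0.0
    for w, grp in df.groupby("W"):
        base = grp[(grp.S == 1) & (grp.D0 == 1)]
        cohort = base[(base.Y0 == 0) & (base.D1 == 1)]
        if len(base) == 0 or len(cohort) == 0:
            raise ValueError(f"positivity concern in stratum W={w}")
        # P(Y1=1 | D1=1, Y0=0, D0=1, S=1, W=w)
        #   * P(Y0=0 | D0=1, S=1, W=w) * P(W=w)
        psi += cohort.Y1.mean() * (base.Y0 == 0).mean() * len(grp) / len(df)
    return psi
\end{verbatim}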
Once the numerator and denominator are identified, they are combined to provide a cluster-level statistical parameter $Y^c = \psi_{num}/(1-\psi_{den})$, equaling the target causal parameter $Y^{c*}=\mathbb{P}(Y_1^*=1 | Y_0^*=0)$ under the above identifiability assumptions. (Recall we parameterized $\psi_{den}$ in terms of the baseline prevalence.)
\subsubsection{Estimating the cluster-level statistical parameter}
\label{est}
Several options exist to estimate the statistical parameters corresponding to the denominator $\psi_{den}$, numerator $\psi_{num}$, and, thus, the cluster-level endpoint $Y^c$. For the denominator, if the strata of the adjustment covariates $W$ are discrete and not too numerous, a weighted average of strata-specific mean outcomes can be taken among those sampled and measured at baseline, corresponding to the non-parametric G-computation result of \cite{robins_new_1986}. Equivalently, one could estimate the propensity scores $\mathbb{P}(\Delta_0 = 1, S = 1 \mid W)$ for each stratum of $W$, calculate the inverse probability weights (IPW), and take the empirical mean of the weighted outcomes \cite{horvitz_generalization_1952}. (Of course, the propensity score can be factorized as $\mathbb{P}(\Delta_0 = 1, S = 1 \mid W)= \mathbb{P}(S = 1 \mid W)\times \mathbb{P}(\Delta_0 = 1 \mid S = 1, W)$.) When continuous covariates are included and/or values of $W$ have weak support, the conditional outcome expectation or propensity score must be estimated via other methods, such as logistic regression, before computing the G-computation or IPW result. Similar challenges apply to estimation of the numerator $\psi_{num}$.
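To make the equivalence concrete, here is a minimal Python sketch of both estimators of $\psi_{den}$ for a single cluster, again with hypothetical column names and a single discrete covariate; with non-parametric (saturated) estimation of the stratum means and propensity scores, the two return identical values.
\begin{verbatim}
import numpy as np

def psi_den_gcomp(df):
    """Stratum-specific means of Y0 among the sampled-and-measured,
    averaged over the empirical distribution of W."""
    strata_means = df[(df.S == 1) & (df.D0 == 1)].groupby("W").Y0.mean()
    weights = df.W.value_counts(normalize=True)
    return float((strata_means * weights).sum())

def psi_den_ipw(df):
    """Observed outcomes weighted by the inverse estimated
    probability of being sampled and measured, given W."""
    p_obs = (df.S * df.D0).groupby(df.W).transform("mean")
    return float(np.mean(df.S * df.D0 * df.Y0 / p_obs))
\end{verbatim}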
We chose instead to estimate $\psi_{den}$ and $\psi_{num}$ using TMLE \cite{laan_targeted_2011}. Briefly, TMLE is a framework for constructing asymptotically linear, substitution estimators of statistical parameters by combining estimates of the outcome regression and the propensity score. TMLE is ``double-robust'' in that it will be consistent if either of those is estimated consistently, and if both are estimated consistently and at appropriate convergence rates, it will achieve the non-parametric efficiency bound, providing the smallest asymptotic variance among a wide class of estimators. For estimation of the outcome regression and propensity score, we use the ensemble machine learning method Super Learner \cite{laan_super_2007} with a diverse set of candidate learners fit via cross-validation, allowing for flexible relationships between variables. Using the ensemble allows us to avoid relying on (possibly mis-specified) parametric models and leverage advances in non-parametric machine learning. Step-by-step implementation of TMLE for $\psi_{den}$ and $\psi_{num}$ in Stage 1 is given in the supplemental materials.
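As a rough, non-authoritative sketch of the targeting step for the denominator (simple logistic regressions stand in for the Super Learner library, and all function and variable names below are our own):
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

def tmle_psi_den(X, s, d0, y0, bound=0.025):
    """TMLE for psi_den = E[E(Y0 | D0=1, S=1, W)] in one cluster.
    X: (n, p) covariate matrix; s, d0, y0: binary arrays, with y0
    coded 0 wherever unobserved (those entries are never used)."""
    obs = (s * d0) == 1
    # Initial outcome regression among the sampled-and-measured
    Qbar = LogisticRegression(max_iter=1000).fit(
        X[obs], y0[obs]).predict_proba(X)[:, 1]
    # Propensity of being sampled and measured, given W (bounded)
    g = LogisticRegression(max_iter=1000).fit(
        X, obs.astype(int)).predict_proba(X)[:, 1]
    H = 1.0 / np.clip(g, bound, 1.0)   # "clever covariate"
    # Targeting: logistic fluctuation of Qbar along H
    eps = sm.GLM(y0[obs], H[obs].reshape(-1, 1),
                 family=sm.families.Binomial(),
                 offset=logit(Qbar[obs])).fit().params[0]
    Qstar = expit(logit(Qbar) + eps * H)
    psi = Qstar.mean()
    # Estimated influence curve, kept for delta-method inference
    ic = obs * H * (y0 - Qstar) + Qstar - psi
    return psi, ic
\end{verbatim}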
We then obtain a point estimate of the cluster-specific endpoint as $\hat{Y}^c=\hat{\psi}_{num}/(1-\hat{\psi}_{den})$. Additionally, since the estimators of the numerator and denominator are asymptotically linear with known influence curve, we can use the delta method to generate confidence intervals for each cluster-specific endpoint. As described next, the estimated cluster-specific endpoints $\hat{Y}^c$ are also used to evaluate the intervention effect in Stage 2 of Two-Stage TMLE.
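For example, writing $\hat{Y}^c = f(\hat{\psi}_{num}, \hat{\psi}_{den})$ with $f(a,b) = a/(1-b)$ and gradient $\left(1/(1-b),\; a/(1-b)^2\right)$, a delta-method interval can be computed from the participant-level influence curves, as in the following sketch (continuing the hypothetical code above):
\begin{verbatim}
import numpy as np
from scipy import stats

def ratio_ci(psi_num, psi_den, ic_num, ic_den, alpha=0.05):
    """Delta-method CI for Yc = psi_num / (1 - psi_den)."""
    n = len(ic_num)
    yc = psi_num / (1 - psi_den)
    grad = np.array([1 / (1 - psi_den),
                     psi_num / (1 - psi_den) ** 2])
    Sigma = np.cov(np.vstack([ic_num, ic_den])) / n
    se = float(np.sqrt(grad @ Sigma @ grad))
    z = stats.norm.ppf(1 - alpha / 2)
    return yc, (yc - z * se, yc + z * se)
\end{verbatim}
In practice, one might instead construct the interval on the log scale and exponentiate; the sketch above is the simplest version.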
\subsection{Stage 2: Estimating and obtaining inference for the treatment effect}
\label{stage2}
With estimates of the cluster-level endpoint $\hat{Y}^c$ accounting for sub-sampling and missingness, we turn our attention to how to optimally estimate the treatment effect. In Stage 2, our observed cluster-level data consist of $O^c = (E^c, A^c, \hat{Y}^c)$, where $E^c$ refers to a set of general cluster-level characteristics (e.g., urban vs. rural) as well as aggregated individual-level covariates $W$ (e.g., proportion of people with HIV). Using these data, we can estimate a variety of causal effects. For example, our Stage 2 statistical estimand can be defined on the relative scale by the ratio of the treatment-specific means $\psi^c(a^c) = \mathbb{E}[ \mathbb{E}[Y^c | A^c=a^c, E^c]]$. This is a cluster-level analog of the G-computation identifiability result \cite{robins_new_1986, balzer_targeted_2016, balzer_new_2019}. In CRTs, the adjustment variables $E^c$ are included for improved precision - not to control for confounding or missingness. Since the cluster-level intervention $A^c$ is randomized, there is no confounding and the positivity assumption holds by design.
Therefore, Stage 2 estimation of the treatment effect can proceed by implementing a cluster-level TMLE, as detailed in \cite{balzer_two-stage_2021}. The key challenge to Stage 2 is \emph{a priori} specification of the optimal covariate adjustment set $E^c$. One solution to this challenge is \textit{Adaptive Pre-specification} (APS) \cite{balzer_adaptive_2016}, which flexibly selects the combination of estimators for the outcome regression and for the propensity score that maximizes empirical efficiency. Briefly, APS pre-specifies a candidate set of working generalized linear models (GLMs) for the outcome regression and for the propensity score, each adjusting for different baseline covariates. Next, to select the optimal combination of estimators, we pre-specify as loss function the squared influence curve of the TMLE for the target statistical parameter. Finally, using cross-validation, we estimate the expected loss (a.k.a., the risk) of candidate GLMs, and select the combination (and hence adjustment set) that has the smallest cross-validated variance estimate. Finite sample simulations and real-world applications have demonstrated precision gains over alternative approaches \cite{balzer_adaptive_2016, balzer_two-stage_2021, benitez_defining_2022}.
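A bare-bones illustration of the selection logic (not the full procedure of \cite{balzer_adaptive_2016}) is sketched below for the additive effect with a known randomization probability of $0.5$: each candidate adjustment set is scored by the cross-validated variance of the estimated influence curve, here via leave-one-out.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

def aps_select(E, A, Y, candidates):
    """Pick the adjustment set (tuple of columns of E) minimizing
    the leave-one-out variance of the estimated influence curve."""
    n, idx = len(Y), np.arange(len(Y))
    best, best_var = None, np.inf
    for cols in candidates:              # e.g. [(), (0,), (1,)]
        ic = np.zeros(n)
        for i in range(n):
            tr = idx != i
            X = np.column_stack([A[tr]] + [E[tr, c] for c in cols])
            fit = LinearRegression().fit(X, Y[tr])
            q = lambda a: fit.predict(
                np.array([[a] + [E[i, c] for c in cols]]))[0]
            # IC: 2(2A-1)(Y - Q(A,E)) + Q(1,E) - Q(0,E) - psi
            ic[i] = (2 * (2 * A[i] - 1) * (Y[i] - q(A[i]))
                     + q(1) - q(0))
        if ic.var() < best_var:
            best, best_var = cols, ic.var()
    return best
\end{verbatim}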
Under conditions detailed in \cite{balzer_two-stage_2021}, Two-Stage TMLE will be normally distributed in the large data limit, allowing for the construction of confidence intervals and hypothesis tests.
In particular, we need each of the cluster-level endpoints $Y^c$ to be consistently estimated in Stage 1. Biased estimators of the cluster-specific endpoints can result in biased estimates of and misleading inference for the treatment effect. Indeed, the Two-Stage approach is most effective when the cluster size is relatively large, allowing for flexible and well-supported estimation of the cluster-level endpoint. The regularity conditions on Stage 2 estimators of the cluster-level outcome regression and known propensity score hold, by design, when using APS to select from working GLMs. In CRTs with fewer than 40 clusters randomized ($N<40$), we recommend using the Student's $t$ distribution with $N-2$ degrees of freedom as a finite sample approximation of the asymptotic normal distribution. Alternatively, permutation tests or other randomization-based approaches can be used for statistical inference, acknowledging that these approaches are testing a different null hypothesis.
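For instance, given one estimated influence-curve value per cluster, a $t$-based interval might be computed as follows (a sketch under our notation, not a prescribed implementation):
\begin{verbatim}
import numpy as np
from scipy import stats

def t_ci(est, ic, alpha=0.05):
    """CI using Student's t with N-2 degrees of freedom,
    where ic holds one influence-curve value per cluster."""
    n = len(ic)
    se = np.std(ic, ddof=1) / np.sqrt(n)
    tq = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return est - tq * se, est + tq * se
\end{verbatim}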
\section{(Re-)Defining the independent unit}
\label{2comm}
A fundamental premise of CRTs is that individual-level outcomes are dependent within a cluster. Sources of dependence could include shared cluster-level factors, including the intervention, as well as social interactions between participants within a cluster. In contrast, the clusters themselves are assumed to be independent, providing the basis for statistical inference, as described in the previous section. However, as previously discussed, CRTs tend to randomize few clusters, limiting the statistical power. For example, while the main SEARCH study randomized 32 communities, measurement of incident TB infection occurred in only 9 communities in eastern Uganda. Additionally, in CRTs with many clusters, subgroup analyses to understand effect heterogeneity may be conducted among limited numbers of clusters. The extreme case of estimating a causal effect with only two clusters was treated in depth by van der Laan, Petersen, and Zheng \cite{laan_estimating_2013}, and we will draw upon much of their framework below.
In this section, our goals are to carefully define a hierarchical causal model, accurately reflecting the data generating process for a CRT \cite{balzer_new_2019, benitez_defining_2022}, detail the assumptions needed to consider a sub-cluster component to be the conditionally independent unit \cite{laan_estimating_2013}, and then present the consequences of these assumptions for statistical estimation and inference with Two-Stage TMLE. In SEARCH, for example, participants are nested within households, villages, parishes, and ultimately communities. Under different assumptions, explicitly stated below, any level of this clustering could be treated as the conditionally independent unit.
For simplicity, we focus on CRTs where individuals are grouped into sub-cluster ``partitions'', and these partitions are grouped into a cluster. For ease of presentation, we also focus on CRTs with two partitions per cluster; however, our results naturally generalize to other settings. We index partitions by $j = \{1,2\}$ within clusters, which remain the unit of randomization. As before, $E^c$ is the set of cluster-level characteristics, and $A^c$ is an indicator of the cluster being randomized to the intervention arm. Now let $W^p$ be the set of partition-level covariates, which could include general characteristics of the partition (e.g., urban vs. rural) as well as aggregates of individual-level covariates. Likewise, let $Y^p$ be the partition-level endpoint, defined analogously to $Y^c$ in Stage 1.
\subsection{Hierarchical structural causal model}
\label{ex0}
Following \cite{balzer_new_2019,benitez_defining_2022}, we use the structural causal model (SCM) of Pearl \cite{pearl_causality_2009} to formalize the hierarchical data-generating process for a CRT with two partitions. (Again, our presentation and results generalize to CRTs with more than two partitions.) We start with a simple SCM reflecting independence between clusters; the corresponding DAG is in Figure~\ref{s2_u}:
\begin{equation}
\begin{aligned}
E^c &= f_{E^c}(U_{E^c})\\
W^p_1 &= f_{W^p_1}(E^c, U_{W^p_1})\\
W^p_2 &= f_{W^p_2}(E^c, U_{W^p_2})\\
A^c &= f_{A^c}(U_{A^c})\\
Y^p_1 &= f_{Y^p_1}(E^c, W^p_1, W^p_2, A^c, U_{Y_1^p})\\
Y^p_2 &= f_{Y^p_2}(E^c, W^p_1, W^p_2, A^c, U_{Y_2^p})
\end{aligned}
\label{scm0}
\end{equation}
The dependence structure of the unmeasured factors $U$ may be complex and cluster-specific; for example, the unobserved factors influencing the outcomes $(U_{Y^p_1}, U_{Y^p_2})$ might be correlated with and/or in some way related to unmeasured factors at the cluster level $U_{E^c}$. For more, we direct the reader to \cite{laan_estimating_2013}. Beyond the unmeasured factors, there are several sources of dependence between partition-level outcomes in this model. For example, the outcome for the $j^{th}$ partition $Y^p_j$ may depend on the characteristics of the other partition $W^p_{-j}$. This general model only allows for independence at the cluster level, not the partition level (yet).
\subsection{Assumptions for partition-level independence}
To treat the partition (sub-cluster) as the conditionally independent unit, we need to make several assumptions, similar to those detailed in \cite{laan_estimating_2013}.
\begin{enumerate}
\item Any effect of the cluster-level covariates $E^c$ on the partition-level outcome $Y_j^p$ is only through their effect on partition-level covariates $W^p_j$.
\item The $j^{th}$ partition's outcome $Y_j^p$ is not influenced by the characteristics of the other partition in its cluster $W^p_{-j}$.
\item There are no unmeasured common causes of the cluster-level characteristics and the partition-level characteristics or outcomes: $U_{E^c} \perp \!\!\! \perp U_{Y^p_j}, U_{W^p_j}$.
\item There are no unmeasured common causes of characteristics or outcomes between partitions in a cluster: $U_{Y^p_j}, U_{W^p_j} \perp\!\!\!\perp U_{Y^p_{-j}}, U_{W^p_{-j}}$.
\end{enumerate}
Whether or not these assumptions are reasonable depends on the study context. To maximize the effective sample size, it might be tempting to define the ``partitions'' as the $J$ individuals in a cluster. However, this would entail very strong assumptions, which may be unrealistic.
In the SEARCH study, for example, the baseline HIV status of one household member likely influences the outcome risk of another, as people with HIV may be at higher risk of acquiring or spreading TB (violating assumption \#2), and a household's ventilation may be a residual (i.e., unmeasured) source of correlation between individual-level outcomes (violating assumption \#4); hence, we were unable to justify considering individuals as the unit of conditional independence. Further discussion of the motivating study is provided in Section~\ref{results}. In other settings, however, considering individuals as conditionally independent may be reasonable; for a further discussion of this option, see \cite{laan_estimating_2013}.
In general, these assumptions should be considered with caution, as they result in meaningful changes to the causal graph (Figure~\ref{s2_indep}) and model:
\begin{equation}
\begin{aligned}
E^c &= f_{E^c}(U_{E^c})\\
W^p_1 &= f_{W^p_1}(E^c, U_{W^p_1})\\
W^p_2 &= f_{W^p_2}(E^c, U_{W^p_2})\\
A^c &= f_{A^c}(U_{A^c})\\
Y^p_1 &= f_{Y^p_1}(W^p_1, A^c, U_{Y^p_1})\\
Y^p_2 &= f_{Y^p_2}(W^p_2, A^c, U_{Y^p_2})
\end{aligned}
\label{scm000}
\end{equation}
where the following independence assumptions on the joint distribution of unmeasured factors $\mathbb{P}_U$ hold by design and by assumption, respectively:
$U_{E^c}, U_{W^p_1}, U_{W^p_2}, U_{Y_1^p}, U_{Y_2^p} \perp \!\!\! \perp U_{A^c}$;
$U_{E^c} \perp \!\!\! \perp U_{Y^p_j}, U_{W^p_j}$; and
$U_{Y^p_j}, U_{W^p_j} \perp \!\!\! \perp U_{Y^p_{-j}}, U_{W^p_{-j}}$.
\subsection{Estimation and inference with partition-level independence}
\label{Sec:Est_partition}
The assumptions encoded in the restrictive causal model (Eq.~\ref{scm000}) have important implications for our two-stage estimation approach. Previously, when considering the cluster to be the independent unit, we identified and estimated a cluster-specific endpoint $Y^c$ that accounted for sub-sampling of individuals, missingness on baseline outcome status of sampled individuals, and missingness on final outcome status of individuals known to be at risk at baseline. Under the more restrictive model, we now need to identify and estimate a partition-specific endpoint $Y^p$ in Stage 1. Additionally, during effect estimation in Stage 2, we previously adjusted for cluster-level covariates $E^c$ to increase precision. Now, however, adjustment for the partition-level covariates $W^p$ is \textit{required} to block the effect of the cluster-level factors $E^c$, which are no longer included in the adjustment set. Therefore, the Stage 2 statistical estimand is now defined in terms of contrasts of treatment-specific means of the partition-level endpoint, such as $\psi^p(a^c) = \mathbb{E}[\mathbb{E}\{Y^p | A^c=a^c, W^p\}]$. Here, $\psi^p(a^c)$ is the conditional expectation of the partition-level outcome given the cluster-level intervention and partition-level covariates, averaged over the distribution of the partition-level covariates. Blurring the lines between randomized trials and observational data, we now must adjust for confounders $W^p$ to identify a causal effect
and support the conditional independence assumptions.
Importantly, the revised statistical estimand $\psi^p(a^c)$ is in terms of the expected partition-level outcome and has a subtly different interpretation than the original statistical estimand $\psi^c(a^c)$, which was in terms of the expected cluster-level outcome. Additionally, when the number of partitions per cluster varies, the value of these two estimands could differ. However, we can apply weights to recover either estimand; for a detailed discussion and worked examples, see \cite{benitez_defining_2022}.
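As a schematic illustration (and not necessarily the exact weighting of the cited work): letting $m_i$ denote the number of partitions in cluster $i = 1, \ldots, N$, weighting each partition-level estimate by $1/m_i$ gives every cluster equal influence and thus targets a cluster-level summary,
\begin{equation*}
\hat{\Psi}^c = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{m_i} \sum_{j=1}^{m_i} \hat{Y}^p_{ij},
\end{equation*}
whereas weighting all partitions equally, $\hat{\Psi}^p = (\sum_i m_i)^{-1} \sum_{i,j} \hat{Y}^p_{ij}$, gives more influence to clusters with more partitions; the two coincide when $m_i$ is constant across clusters.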
As before, the revised statistical estimand $\psi^p(a^c)$ could be estimated with a variety of algorithms. We again recommend TMLE, given its double robustness property. Recall the adjustment for $W^p$ is now required to support the independence assumptions. For flexible adjustment of those covariates, we can again use Super Learner as in Stage 1.
Importantly, treating the partition as the conditionally independent unit also changes the assumptions needed for statistical inference. Specifically, we now have stronger conditions for Two-Stage TMLE to be asymptotically linear. As before, we need Stage 1 estimators of the partition-level endpoint $Y^p$ to be consistent. Now, however, the regularity conditions on effect estimation in Stage 2 do not hold by design. Instead, we need estimators of the partition-level outcome regression and propensity score to converge to the truth at quick enough rates and to avoid overfitting \cite{laan_targeted_2011}. To satisfy these Stage 2 conditions, we recommend implementing a partition-level TMLE with Super Learner with a diverse set of candidate algorithms. If these conditions hold, the TMLE will be consistent and asymptotically normal, with asymptotic variance given by the variance of its influence curve.
\section{Application to the SEARCH sub-study for incident TB infection}
\label{results}
Many factors influence the TB infection risk. Examples include an individual's susceptibility (e.g., age and HIV status) and their level of exposure to TB. The latter is influenced by air quality/ventilation and social mixing patterns (e.g., the prevalence of TB in a given location and the infectiousness of persons with TB in a given location). In particular, transmission is known to occur both within and outside households \cite{martinez_paediatric_2019, carbone_active_2015, wood_tuberculosis_2010}. Given these considerations, we immediately eliminated the individual and the household as possible candidates for the conditionally independent unit. However, given the geographic distribution of the study communities, the mobility patterns of community residents, and the distribution of gathering sites such as churches, we expected that TB was far less likely to spread across \textit{parishes}, a sub-community administrative unit. Therefore, with the larger sub-study team, we critically evaluated the assumptions in Section \ref{2comm} and concluded that after adjustment for the parish-level prevalence of HIV and the parish-level proportion of adults who drink alcohol, it was reasonable to treat the parish as the conditionally independent unit. In brief, HIV and TB are linked epidemics, and outside of the household, bars and other drinking locations are the key hotspots for TB transmission. In Section~\ref{comparison}, we demonstrate the sensitivity of our study results to treating either the parish (two parishes per community) or the community as the (conditionally) independent unit. Before doing so, we first summarize the assumptions and implementation of Two-Stage TMLE in SEARCH.
\subsection{Assumptions \& estimator implementation}
In Two-Stage TMLE, our first step is to identify and estimate a partition-level endpoint, appropriately accounting for missingness at the individual-level. In SEARCH, there were three ways complete-case data would not be representative of the underlying population: (1) the sampling scheme for the sub-study; (2) measurement of baseline TB status among those sampled, and (3) measurement of final TB status among those in the incidence cohort (i.e., persons known to be at risk at baseline). Specifically, sampling was enriched for adult members with HIV: of the 17,858 households in the 9 eastern Ugandan communities, 1,435 were selected and 688 (47.9\%) of those households had at least one adult with HIV. The adult (aged $\geq 15$) prevalence of HIV in the sub-sample was 19.6\%, a sharp contrast to adult HIV prevalence in the overall study communities of 3.6\% \cite{havlir_hiv_2019}. Nonetheless, sampling $S$ was random within household HIV status $H$, satisfying the following assumptions by design: $Y^*_0 \perp \!\!\! \perp S \mid H$ and $\mathbb{P}(S=1 \mid H=h)>0$ for $h\in\{0,1\}$.
Despite up to three visits to the sampled households, including weekends and after hours, only 4884 (58\%) of the 8420 household members were administered a TST at baseline. With the larger sub-study team, we determined that the key risk factors for prevalent TB and its measurement were age and increased mobility, largely due to school or work. Consider, for example, an older participant who has an occupation that requires travel; they may be less likely to be measured, but also at a higher risk of TB, due to their age and travel. Nonetheless, if in addition to household HIV status $H$, age and mobility were the only common causes of prevalent TB and its measurement, then the missing data assumption $Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W$ would hold, where, for ease of notation, $W$ includes household HIV status, participant age, and their mobility. Put differently, we were willing to assume that for sampled individuals and within values of $W$, the prevalence of TB among those with a baseline TST was representative of the prevalence of TB among those without a baseline TST. Additionally, we assumed there was a positive probability of administering a TST within all possible values of $W$. These assumptions, together with the sampling design, allowed for the identification of the baseline prevalence of TB in each partition $\mathbb{P}(Y_0^*=1)$ as $\psi_{den}=\mathbb{E}[\mathbb{E}(Y_0 \mid \Delta_0=1, S=1, W)]$.
Among the 4884 participants administered a TST at baseline, 3871 (78\%) were TST-negative, forming a closed cohort for incidence measurement. The assumptions on measurement of TB status among this cohort were similar. In brief, we again assumed that within strata defined by household HIV status, age, and mobility, the risk of incident TB infection among the 3003 (78\%) cohort members with a follow-up TST was representative of the risk of incident TB infection among the 828 (22\%) cohort members without a final TST. We also assumed a positive probability of receiving a follow-up TST (among the incidence cohort) within all strata defined by $W$. These assumptions were again supported by the study design, including the repeat visits to households, and allowed for identification of $\mathbb{P}(Y_1^*=1, Y_0^*=0)$ as $\psi_{num} = \mathbb{E}\left[ \mathbb{E}\{ \mathbb{E}( Y_1 \mid \Delta_1 = 1, Y_0=0,\Delta_0 = 1, S = 1, W) \mid \Delta_0 = 1, S = 1, W\} \right]$.
For implementation, we estimated the one-year incidence of TB infection by stratifying on parish and taking the ratio of $\hat{\psi}_{num}$ and $(1-\hat{\psi}_{den})$, each estimated with TMLE.
Within TMLE, we used Super Learner to combine estimates from main-terms GLM, multivariate adaptive regression splines, and a simple mean. Figure \ref{s1} in the supplemental materials provides estimates of the parish-specific one-year incidence of TB infection, along with 95\% confidence intervals.
Recall in Two-Stage TMLE, the second step is to use the partition-level endpoint estimates $\hat{Y}^p$ to evaluate the effect of the cluster-level intervention. In the application, we evaluated the SEARCH effect on incident TB infection with a parish-level TMLE, a procedure outlined in \cite{balzer_two-stage_2021}. To support our previously discussed independence assumptions, the adjustment set $W^p$ included the baseline parish-level prevalence of HIV and the baseline parish-level prevalence of alcohol use. For flexible adjustment, we used the Super Learner with the same library of prediction algorithms. Computing code is available at [blinded for review].
\subsection{Comparative results}
\label{comparison}
The results of the SEARCH sub-study on incident TB infection have been previously presented in \cite{marquez_impact_2022}. The primary prespecified analysis, using Two-Stage TMLE with the parishes as the conditionally independent unit, suggested that the SEARCH intervention resulted in a 27\% reduction in incident TB infection in eastern Uganda; the adjusted relative risk (aRR) was $0.73$ $(95\%$CI: $0.57-0.92;$ one-sided $p$-value $0.005)$. Plausible mechanisms for this substantial reduction in incidence are detailed in \cite{marquez_impact_2022}.
We now explore the practical impact of varying the identifiability assumptions on estimation and inference. The results of our comparison are summarized in Figure~\ref{results_fig} and Table~\ref{tablecomp} in the supplemental materials. First, we relaxed the assumption that parishes were conditionally independent and, instead, took a more traditional approach treating the randomized unit (i.e., the cluster) as the independent unit. As expected, when we moved from an effective sample size of $N = 18$ in the parish-level analysis to a sample size of $N = 9$ in the community-level analysis, the effect estimate shifted slightly and substantial precision was lost: aRR $= 0.86$ $(95\%$CI: $0.66-1.13;$ $p = 0.115)$. In this secondary analysis, we used Two-Stage TMLE as described in Section~\ref{sec:Two-stage}. Stage 1 was implemented analogously to obtain community-level estimates of TB incidence, accounting for the 3 sources of missingness. However, Stage 2 was implemented using a community-level TMLE with Adaptive Pre-specification to select the adjustment covariates that maximized empirical efficiency \cite{balzer_adaptive_2016}.
To further explore the impact of our assumption that parishes were conditionally independent, we conducted a sensitivity analysis where Stage 1 accounted for missingness (as before), but Stage 2 was implemented without adjustment. This approach corresponds to the very strong and unreasonable assumption that the only source of dependence between parishes was the shared community-level intervention $A^c$. In other words, this analysis assumed no community-level covariates (measured or not) directly or indirectly influenced the incidence of TB infection. Estimates from this approach were again in a similar direction, but even less precise: aRR $= 0.91$ $(95\%$CI: $0.63 - 1.32$; $p=0.304)$.
Next, we explored the impact of our missing data assumptions. Specifically, we conducted a sensitivity analysis where Stage 1 estimates of incidence were unadjusted, but Stage 2 was adjusted (as before). This approach corresponds to the very strong and unreasonable assumption that individual-level outcomes were missing completely at random (MCAR). In fact, we know this assumption was violated: the sub-sample was enriched for persons with HIV, and HIV is a known risk factor for TB. Age and mobility are additional risk factors for TB and for not having a TST placed at baseline or follow-up. Perhaps unsurprisingly, estimates from this approach were markedly different and in the opposite direction of the primary analysis: aRR $=1.04$ $(95\%$CI: $0.80 - 1.37$; $p=0.633)$. In other words, conducting a complete-case analysis would lead to the erroneous conclusion that the SEARCH intervention increased the incidence of TB infection by 4\%.
Finally and as an extreme example of strong assumptions on measurement and independence, we conducted a fully unadjusted analysis. In Stage 1, we estimated the parish-level incidence of TB infection with the raw proportion among those measured. Then in Stage 2, we compared parish-level incidence estimates by arm without further adjustment. This approach is not recommended in practice and suggested the SEARCH intervention increased the incidence of TB infection by 18\%: aRR $= 1.18$ $(95\%$CI: $0.84-1.63$; $p=0.843)$.
\section{Discussion}
\label{discussion}
CRTs allow for the rigorous evaluation of interventions delivered at the group-level. Within CRTs, rare or expensive outcomes may only be measured in a subset of clusters and, within those clusters, on a sub-sample of participants. Missing outcomes among participants are another common issue, which can bias estimates of baseline prevalence, the incidence of the outcome, and the intervention effect. To address these challenges, we extended Two-Stage TMLE to account for sub-sampling of participants and differential measurement of their outcomes at baseline and follow-up. Additionally, we detailed the assumptions needed to consider a sub-cluster partition as the conditionally independent unit. We also extended Two-Stage TMLE to this novel setting, which blurs the lines between CRTs and observational studies. Our application to real data from the SEARCH community randomized trial demonstrated the real-world impact of varying assumptions and analytic choices. For example, ignoring the sampling scheme and assuming the outcomes were missing completely at random reversed the direction of the estimated intervention effect.
When estimating the endpoint in Stage 1 and evaluating the intervention effect in Stage 2, we used TMLE with Super Learner to minimize parametric assumptions and allow for flexible estimation of the relationships between variables.
In the absence of missing data, a single-stage approach, such as GLMMs or GEE, could be used to estimate the intervention effect. These methods account for the dependence of participants within a cluster and can incorporate adjustment for partition-level variables $W^p$, needed to support the independence assumptions. However, when adjusting for covariates, these alternative estimators are often limited in their ability to estimate marginal effects \cite{benitez_defining_2022}. For example, when using the logit-link in GLMM and GEE, the conditional odds ratio is estimated \cite{laird_random-effects_1982, hubbard_gee_2010}. Additionally, as previously discussed, even after considering the sub-cluster partition to be the conditionally independent unit, the effective sample size may still be too small to support use of these approaches without finite sample corrections.
Finally and perhaps most importantly, these methods rely on strong modeling assumptions and do not share the double robustness properties of TMLE.
Nonetheless, our approach does require real assumptions on missing data and the dependence structure within a cluster.
These assumptions have implications for trial design. First, all the shared causes of missingness and outcomes must be measured. Second, fairly large cluster (or sub-cluster) sizes are needed for stable and consistent estimation of the endpoints in Stage 1 \cite{balzer_two-stage_2021}. Finally, to support any conditional independence assumptions in Stage 2, a rich set of partition-level covariates should be collected.
In all cases, these assumptions should be carefully considered and transparently stated.
Even when these assumptions are reasonable, there may be additional finite sample concerns for estimation and inference. Specifically, there can arise a tension between adjusting for too many covariates (with the potential of overfitting, even with cross-validation) and including too few (not supporting the identifiability assumptions). As illustrated with the real-data example, in-depth discussion with the study team is imperative for identifying the minimal set of adjustment variables needed to support our assumptions. Additionally, we recommend conducting a simulation study, informed by the real-data application, to provide guidance on potential sources of bias. Finally, future work could consider implementation of a `Collaborative' Two-Stage TMLE \cite{laan_collaborative_2010}, where the propensity score is fit in response to adjustment conducted in the outcome regression.
\printbibliography
\section*{Tables and figures}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.79\textwidth]{dag_one_clust.png}
\caption{A simplified directed acyclic graph (DAG) for a participant from a given cluster in the TB sub-study of SEARCH. For simplicity, the graph is shown without any dependence between unmeasured variables, which are, thus, omitted. Abbreviations: TST = tuberculin skin test, TB = tuberculosis.}
\label{oneclustdag}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.75\textwidth]{s2_unrestricted.png}
\caption{DAG visualizing the assumed relationships between variables in the SCM, assuming two partitions in each cluster.}
\label{s2_u}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.75\textwidth]{s2_indep.png}
\caption{Restricted DAG reflecting the assumptions needed for the partitions to be conditionally independent. Note the independence of partition- and cluster-level $U$s, the fact that all effects of $E^c$ on the outcomes occur only through their effect on $W^p_j$, and the independence of outcome $Y^p_j$ from $W^p_{-j}$ after conditioning on $W^p_j$.}
\label{s2_indep}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.85\textwidth]{results_plot_BW.png}
\caption{Graphical comparison of the results under different sets of assumptions using real data from the SEARCH sub-study on incident TB infection. The primary analysis, a parish-level analysis with adjustment in Stage 1 and Stage 2, is shown first.}
\label{results_fig}
\end{center}
\end{figure}
\section*{Acknowledgements and funding}
We extend many thanks to: the Ministries of Health of Uganda and Kenya; our research and administrative teams in San Francisco, Uganda, and Kenya; collaborators and advisory boards; and, especially, all the communities and participants involved. Funding support for this work was provided by The National Institutes of Health (U01AI099959, UM1AI068636, K23AI118592, and R01AI151209) and the President's Emergency Plan for AIDS Relief.
\appendix
\section{Derivations of the identifiability results}
\label{appendix:derivations}
\subsection{Stage 1 denominator}
\label{appendix:derivations:denominator}
Recall that $S$ is an indicator of being sampled for participation in the sub-study, $W$ are individual-level covariates, $\Delta_0$ is an indicator of individual measurement at baseline, $Y^*_0$ is the true underlying outcome status of the individual at baseline, and $Y_0 = \Delta_0 \times Y^*_0$ is an indicator that the individual was measured and had the outcome at baseline. Our goal is to estimate the baseline outcome prevalence $\mathbb{P}(Y^*_0 = 1)$ under a hypothetical intervention where all participants were included $(S = 1)$ and all had their outcome measured at baseline $(\Delta_0 = 1)$. Under the following assumptions, together with sufficient data support (i.e., positivity), we can identify the target causal parameter $\mathbb{P}(Y^*_0 = 1)$ as
\begin{equation}
\label{eq:one}
\begin{aligned}
\mathbb{P}(Y^*_0 = 1) &= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid W = w) \mathbb{P}(W = w) \\
&\text{by } Y_0^* \perp \!\!\! \perp S \mid W \text{:}\\
&= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid S = 1, W = w) \mathbb{P}(W = w) \\
&\text{by } Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W \text{:}\\
&= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid \Delta_0 = 1, S = 1, W = w) \mathbb{P}(W = w)\\
& \text{since }Y_0^* = Y_0 \text{ when } \Delta_0 = 1, S = 1 \text{:}\\
&= \sum_{w} \mathbb{P}(Y_0 = 1 \mid \Delta_0 = 1, S = 1, W = w) \mathbb{P}(W = w)\\
&=\mathbb{E} \left[ \mathbb{E} \left(Y_0 \mid \Delta_0 = 1, S = 1, W \right) \right].
\end{aligned}
\end{equation}
Throughout, the summation generalizes to an integral for continuous-valued variables $W$.
\subsection{Stage 1 numerator}
\label{appendix:derivations:numerator}
In addition to the notation in Appendix \ref{appendix:derivations:denominator}, recall that $\Delta_1$ is an indicator of individual measurement at follow-up, $Y^*_1$ is the underlying indicator that the individual had the outcome at follow-up, and $Y_1 = \Delta_1 \times Y^*_1$ is an indicator that the individual was measured and had the outcome at follow-up. Our goal is to identify the proportion of individuals who have the outcome at follow-up and were at risk at baseline: $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$. To do so, we utilize and extend the approach of Balzer et al. \cite{balzer_far_2020}. For simplicity of presentation, we define the underlying indicator of the outcome of interest as
$Z^* = \mathbb{I}(Y^*_1 = 1, Y^*_0 = 0)$ and
its observed analog as $Z = \mathbb{I}(Y_1 = 1, Y_0 = 0)$.
To address missingness on baseline outcome status and follow-up outcome status, we consider a longitudinal dynamic regime \cite{hernan_comparison_2006, laan_causal_2007, robins_estimation_2008}. First, we `set' $S = 1$ and $\Delta_0=1$; that is, all individuals are included in the sub-sample and measured at baseline. Second, among those known to be at risk at baseline ($Y_0=0$ and $\Delta_0=1$), we `set' $\Delta_1 = 1$ to ensure complete measurement of the outcome at follow-up.
Identification of $\mathbb{P}(Z^*=1)$ from the observed data distribution is possible under the sequential randomization assumption \cite{robins_new_1986}, which is similar to the `missing at random' assumption, and sufficient data support (i.e., the relevant positivity assumptions), shown in the steps of the derivation below:
\begin{align*}
\mathbb{P} (Z^* & = 1) \\
=& \sum_w \mathbb{P}(Z^*=1 \mid W=w)\mathbb{P}(W=w)\\
\text{by }& Z^* \perp \!\!\! \perp S \mid W \text{ and } Z^* \perp \!\!\! \perp \Delta_0 \mid S = 1, W \\
=& \sum_w \mathbb{P}(Z^*=1 \mid \Delta_0=1, S=1, W=w)\mathbb{P}(W=w)\\
=& \sum_w \sum_{y_0} \mathbb{P}(Z^*=1 \mid Y_0=y_0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= y_0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{by } & Z^* = 0 \text{ when } Y_0 = 1 \text{ and } \Delta_0 = 1
\\
=& \sum_w \mathbb{P}(Z^*=1 \mid Y_0=0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{by } & Z^* \perp \!\!\! \perp \Delta_1 \mid Y_0 = 0, \Delta_0 = 1, S=1, W \text{ and } Z^* = Y^*_1 \text{ when } Y_0 = 0, \Delta_0 = 1 \text{:} \\
=& \sum_w \mathbb{P}(Y_1^*=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{since } & Y_1^* = Y_1 \text{ when } \Delta_1 = 1 \text{:} \\
=& \sum_w \mathbb{P}(Y_1=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w).
\end{align*}
As before, the summation generalizes to an integral for continuous-valued variables $W$.
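Analogously, and again with purely hypothetical data, the sketch below computes the sequential plug-in estimate of $\mathbb{P}(Z^*=1)$ corresponding to the final line of the derivation; all data-generating values are illustrative assumptions under which the sequential randomization conditions hold by construction.
\begin{verbatim}
import numpy as np
import pandas as pd

# Extend the denominator example with follow-up measurement among those
# known to be at risk at baseline (Delta0=1, Y0=0).
rng = np.random.default_rng(1)
n = 100_000
W = rng.binomial(1, 0.4, n)
S = rng.binomial(1, np.where(W == 1, 0.8, 0.3))
Delta0 = S * rng.binomial(1, np.where(W == 1, 0.7, 0.5))
Y0_star = rng.binomial(1, np.where(W == 1, 0.35, 0.15))
Y0 = Delta0 * Y0_star
at_risk = (Delta0 == 1) & (Y0 == 0)
Delta1 = at_risk * rng.binomial(1, np.where(W == 1, 0.8, 0.6))
Y1_star = rng.binomial(1, np.where(W == 1, 0.20, 0.10))
Y1 = Delta1 * Y1_star
df = pd.DataFrame({"W": W, "S": S, "Delta0": Delta0, "Y0": Y0,
                   "Delta1": Delta1, "Y1": Y1})

pW = df["W"].value_counts(normalize=True)
inner = df[df.Delta1 == 1].groupby("W")["Y1"].mean()   # P(Y1=1 | Delta1=1, ...)
middle = 1 - df[(df.S == 1) & (df.Delta0 == 1)].groupby("W")["Y0"].mean()
psi_num = (inner * middle * pW).sum()
truth = ((Y1_star == 1) & (Y0_star == 0)).mean()
print(f"plug-in: {psi_num:.3f}; truth: {truth:.3f}")
\end{verbatim}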
\section{Additional results from SEARCH and comparative results under different identification assumptions}
\label{app:sens}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.85\textwidth]{_s1_BW.png}
\caption{Adjusted Stage 1 estimates of the parish-level incidence of TB infection with 95\% confidence intervals, sorted by point estimate and study arm. Recall there are 2 parishes per community.}
\label{s1}
\end{center}
\end{figure}
\begin{table}[!hb]
\begin{tabular}{l|l|l}
& Control mean (min-max) & Intervention mean (min-max)\\
\hline
\hline
Total population & 4767 (2453-6913) & 5391 (3102-7459)\\
\hline
Sampled & 466 (186-679) & 470 (210-692)\\
\hline
Tested at baseline & 264 (112-448) & 280 (116-428)\\
\hline
TST-negative at baseline, tested at follow-up & 171 (49-289) & 162 (66-268)\\
\hline
\end{tabular}
\caption{Parish-level characteristics by arm}
\label{tablecomp}
\end{table}
\begin{table}[!ht]
\centering
\def\arraystretch{1.5}
\begin{tabularx}{\textwidth}{Y Y l}
\textbf{Estimator} & \textbf{Key assumptions} & \makecell{\textbf{Risk ratio}\\\textbf{estimate (95\% CI)}}\\
\hline
Stage 1 and Stage 2 adjusted, parish-level (Primary analysis) &
Individual-level outcomes are missing at random (MAR) given household HIV status, age and mobility. Parish-level outcomes are conditionally independent given the prevalence of HIV and prevalence of alcohol use. & 0.73 (0.57 - 0.92)\\
Stage 1 and Stage 2 adjusted, community-level$^*$ &
Individual-level outcomes are MAR given household HIV status, age and mobility.
& 0.86 (0.66 - 1.13)\\
Stage 1 adjustment only, parish-level &
Individual-level outcomes are MAR given household HIV status, age and mobility. Parish-level outcomes are (marginally) independent.
& 0.91 (0.63 - 1.32)\\
Stage 2 adjustment only, parish-level &
Individual-level outcomes are missing completely at random (MCAR). Parish-level outcomes are conditionally independent given the prevalence of HIV and prevalence of alcohol use.
& 1.04 (0.80 - 1.37)\\
Unadjusted, parish-level &
Individual-level outcomes are MCAR. Parish-level outcomes are (marginally) independent.
& 1.18 (0.84 - 1.63)\\
\hline
\multicolumn{3}{l}{\begin{footnotesize} $^*$Stage 2 adjustment covariates selected through Adaptive Prespecification to maximize empirical efficiency \cite{balzer_adaptive_2016}. \end{footnotesize}}
\end{tabularx}
\caption{Comparison of the results under different sets of assumptions using real data from the SEARCH sub-study on incident TB infection.}
\label{tab:sens}
\end{table}
\section{Introduction}
\label{intro}
When designing a randomized trial, it is sometimes necessary and/or desirable to assign the intervention to groups of participants rather than to individuals \cite{hayes_cluster_2017, campbell_how_2014, donner_design_2010, eldridge_practical_2012}. For example, it would be impractical to test the impact of a new teaching method if the method were delivered to randomized students in the same classroom, but much more feasible if randomization happens at the classroom level. In cluster randomized trials (CRTs), correlation between the outcomes of individuals in a given cluster may arise due to, for example, spillover effects between individuals or shared environments or characteristics of individuals in the cluster. This dependence violates the common regression assumption that all observations are independent and identically distributed (i.i.d.), complicating statistical estimation and inference. Longitudinal data can also be considered clustered; with repeated measurements on the same individuals, each individual is their own ``cluster'' and the correlation between outcomes could be due to measured and/or unmeasured characteristics of the individual \cite{fitzmaurice_applied_2012}.
A number of well-established methods can account for the dependence of observations in a cluster \cite{liang_longitudinal_1986, fitzmaurice_applied_2012, hayes_cluster_2017}. However, not all methods can address practical challenges that may arise in CRT analysis. First, outcomes may not be measured on all participants in each cluster. This could occur by design if, for example, measurement of a rare or expensive outcome only occurred in a sub-sample of participants. Failing to adjust for sampling can result in biased point estimates and/or misleading inference \cite{laan_targeted_2011, horvitz_generalization_1952, mhs_gordis_2018}. Additionally, incomplete ascertainment of outcomes among all (or the selected subset of) participants can bias results if the outcomes are not missing completely at random (MCAR) \cite{rubin_inference_1976, robins_analysis_1995, national_research_council_prevention_2010}. Individuals whose outcomes are not measured are likely different than those who were fully observed; for example, students who are absent on an exam day may be systematically different than those present. If this systematic missingness is influenced by the intervention (for example, a new teaching technique improves motivation and attendance, influencing exam scores and the probability of measurement), the risk of bias is even larger. This is a common problem: a 2016 review found that missing data were present in 93\% of CRTs, 55\% of which simply performed a complete-case analysis \cite{fiero_statistical_2016}.
Second, logistical and fiscal constraints often limit the number of clusters in CRTs. Indeed, a review of 100 CRTs found 37\% with fewer than 20 clusters \cite{kahan_increased_2016} and another review of 100 CRTs found a median of 33 clusters \cite{selvaraj_characteristics_2013}. Further, in CRTs with many clusters, key subgroup analyses might be conducted within strata defined by cluster-level covariates (e.g., region), limiting the number of randomized units included in that analysis. As the number of clusters shrinks, chance imbalance on covariates that influence the outcome becomes more likely. Accounting for these covariates and other outcome predictors can increase the precision of the estimator and thereby the statistical power (e.g. \cite{tsiatis_covariate_2008, hayes_cluster_2017, moore_covariate_2009, fisher_statistical_1932}). However, in analyses with few clusters, including too many covariates can lead to overfitting, and it is often not clear which covariates (or their form) to select for optimal performance \cite{balzer_adaptive_2016}.
Third, statistical inference often relies on (i) tests with known finite sample properties that may be inefficient or (ii) the asymptotic behavior of estimators that may not hold in CRT analyses with a limited number of clusters. For example, generalized estimating equations (GEE) and generalized linear mixed models (GLMMs), two common approaches for analyzing CRTs \cite{laird_random-effects_1982, liang_longitudinal_1986}, both rely on having a ``sufficient'' number of clusters. The exact recommendation varies, with some suggesting GEE can be used with as few as 10 clusters \cite{pan_small-sample_2002}, while others suggest that these approaches (without small-sample corrections) should be avoided without $\geq$30 clusters \cite{kreft_introducing_1998, hayes_cluster_2017, murray_design_2018}. Altogether, inference based on a small number of clusters may be unreliable, creating conservative or anti-conservative confidence interval coverage depending on the situation \cite{leyrat_cluster_2018}. For an overview and comparison of existing CRT analysis methods, we refer the reader to \cite{hayes_cluster_2017, benitez_defining_2022}.
Here, we address these challenges by combining \textit{Two-Stage targeted minimum loss-based estimation} (TMLE) to account for sub-sampling and missing individual-level outcomes \cite{balzer_two-stage_2021} with carefully considered \textit{conditional independence assumptions} to address the limited numbers of clusters \cite{laan_estimating_2013}. The novel contributions of this work include the following. First, we extend Two-Stage TMLE to handle differential measurement of an outcome among a closed cohort, where cohort membership is defined by sub-sampling and also subject to differential measurement. Second, we detail the assumptions required to increase the effective sample size by considering a sub-unit of the cluster to be the conditionally independent unit. Since the cluster remains the unit of randomization, this process results in the CRT behaving more like an observational study. As a consequence, we extend the prior asymptotic results and practical implementation of Two-Stage TMLE to address the challenges in this setting. Finally, to the best of our knowledge, this is the first work to demonstrate the real-life consequences of various analytic choices, using real-world data from a community randomized trial.
The rest of the paper proceeds as follows. Section \ref{motivation} describes the motivating example for this work. Section \ref{stage1} describes the strategy for estimating a cluster-level endpoint, adjusting for sub-sampling, missing outcomes at baseline among those sampled, and missing outcomes at follow-up among those at risk at baseline. Section \ref{stage2} presents a cluster-level strategy to estimate the intervention effect, while optimizing precision with few independent units. Section \ref{2comm} presents several causal models and the assumptions required to increase the number of independent units by partitioning the clusters into sub-units. Section \ref{Sec:Est_partition} describes the impact of re-defining the independent unit on statistical estimation and inference. Comparative results from the real data example are presented in Section \ref{results}, and we conclude with a brief discussion in Section \ref{discussion}.
\section{Motivating example}
\label{motivation}
Our novel methodology is motivated by a sub-study of the Sustainable East Africa Research in Community Health (SEARCH) trial, a 32-community CRT to evaluate the population-level effects of a community-based approach to ``Universal HIV Test and Treat" as compared to an enhanced standard-of-care in rural Kenya and Uganda (NCT01864603) \cite{havlir_hiv_2019}. In intervention communities, participants were offered (i) multi-disease testing through annual, community-based health fairs, (ii) universal treatment eligibility for people with HIV, and (iii) patient-centered and streamlined care \cite{chamie_hybrid_2016, kwarisiima_high_2017}. In control communities, participants were also offered multi-disease testing through the same mechanism at baseline and endline, while treatment eligibility and care followed the country standard. The applied results evaluating the SEARCH intervention effect on several endpoints have been previously published (see, e.g., \cite{petersen_association_2017, havlir_hiv_2019,hickey_effect_2021, kamya_search_2021}), while the data analysis is ongoing for several secondary outcomes.
An estimated 1.7 billion people (roughly a quarter of the world’s population) are infected with tuberculosis (TB), and this vast reservoir of latent infections fuels TB disease and death via reactivation or rapid progression to disease once infected \cite{houben_global_2016, macpherson_mortality_2009}. The SEARCH intervention was found to reduce active TB disease among people with HIV \cite{havlir_hiv_2019}, but the impact on TB infection and community-wide TB transmission in the wider population is unknown. Understanding TB transmission dynamics and then implementing effective public health interventions is difficult. First, transmissions are airborne and likely occur both inside and outside the household in community-based settings \cite{martinez_paediatric_2019, carbone_active_2015, wood_tuberculosis_2010}. Second, the majority of transmission events result in latent infection, which can much later progress to active TB (i.e., TB disease). Finally, measurement of TB infection is imperfect and expensive. To estimate the population-level prevalence and incidence of TB (both latent and active) as well as the intervention effect on incidence, SEARCH conducted the following sub-study in 9 communities in eastern Uganda. First, in each community, a sample of households was selected from an enumerated list, generated from a rapid household census at study baseline \cite{jakubowski_universal_2022}. Selection followed a stratified random sampling design in which the sub-study was purposefully enriched for households where at least 1 adult (aged $\geq$ 15 years) had HIV; the goal was 100 households of each type per community. In all selected households, tuberculin skin tests (TSTs) and sociodemographic surveys were administered to household members aged $\geq$ 5 years. The sub-study participants who were TST-negative at baseline formed a closed cohort, in which a follow-up TST survey was done one year later. The primary outcome of the sub-study was the one-year incidence of TB infections among those at risk at baseline. The applied results have been previously presented \cite{marquez_impact_2022}; here we focus on the methods used to generate those results.
Estimating the effect of the SEARCH intervention on incident TB infection presented several challenges and thus opportunities for the development and application of novel methods. First, the sample was enriched for persons with HIV. It is well known that the risk and incidence of TB differs by HIV serostatus \cite{macpherson_mortality_2009}. Thus, ignoring the sampling scheme could bias estimates of the TB burden and the intervention effect. Second, while multiple visits were made to selected households to locate all household members and administer TSTs, baseline measurement of TB status was incomplete, raising concerns that the TST-negative cohort was not representative of all persons at risk in the sub-sample. Likewise, despite best efforts, measurement of TB status at the end of follow-up was also incomplete, again raising concerns about differential capture of incident infections among the TST-negative cohort. Finally, despite thousands of participants, there were only 9 communities in the sub-study, limiting statistical power and motivating the consideration of the parish, a sub-community unit, as the conditionally independent unit.
Altogether, estimation of the SEARCH intervention effect required adjustments for purposefully differential sampling, potentially differential outcome measurement, and few independent units. Full discussion of the choices made in the application is given in Section~\ref{results}; we now present our analytic approach more generally.
\section{Two-Stage TMLE for sampling and missing outcomes in CRTs}
\label{sec:Two-stage}
\textit{Two-Stage TMLE} was developed to reduce bias and improve efficiency of CRTs by optimally adjusting for baseline cluster-level covariates, after controlling for missing individual-level outcomes \cite{balzer_two-stage_2021}. In the first stage, we identify and estimate a cluster-specific endpoint, accounting for potentially differential measurement of individual-level outcomes. To do so, we stratify on each cluster, allowing the relationships between the individual-level covariates, measurements, and outcomes to be cluster-specific. For example, the relationship between age and missingness might be different in a more urbanized cluster vis-a-vis a more rural one, and this strategy allows for that flexibility. In the second stage, we use the cluster-level endpoint estimates to evaluate the intervention effect, optimally adjusting for cluster-level covariates to increase precision. Two-Stage TMLE compares favorably to competing CRT methods, especially when there are post-baseline causes of missingness \cite{balzer_two-stage_2021}, and is detailed below. We now extend the approach to account for baseline and endline outcome status missingness, and incorporate covariate adjustment to support the assumptions needed to increase the number of conditionally independent units.
\subsection{Stage 1: Identifying and estimating the cluster-level endpoint}
\label{stage1}
When the individual-level outcomes are not MCAR, estimating the cluster-specific endpoint with the simple mean among those measured can create several hazards. First, failing to account for over-sampling of certain subgroups and under-sampling of others can bias estimates for the population of interest.
Second, in longitudinal studies, failing to account for incomplete measurement of baseline status can skew estimates of risk and thereby estimates of intervention effectiveness. As an extreme example, suppose only participants at very low risk of the outcome were tested at baseline; then estimates of the baseline proportion who are at risk would be biased upwards, and the resulting incidence cohort would be a poor representation of the larger population. Likewise, failing to account for incomplete measurement of final endpoint status among the incidence cohort can also bias estimates of risk and intervention effectiveness. As another extreme example, suppose all high-risk cohort members did not have their endpoint measured; then cluster-level estimates of incidence would be biased downwards. If missingness is present at both baseline and follow-up, these biases could compound. Further, if missingness is differential by arm - say, the high-risk participants were more likely to be measured at follow-up in the intervention arm - the potential for bias is even greater.
In our motivating study, all of these dangers were present. The households sampled for the sub-study were enriched for persons with HIV, baseline TST measurement was potentially differential among members of sampled households, and measurement of incident TB infection was also potentially differential among participants who were TST-negative at baseline. In the following subsection, we discuss our definition of the cluster-level endpoint and describe methods for estimating it, along with relevant assumptions. We follow a similar strategy to that set out by Balzer et al. \cite{balzer_far_2020, balzer_two-stage_2021}.
\subsubsection{Notation}
For an individual in a given cluster, let $E^c$ represent the cluster-level covariates (e.g., baseline HIV prevalence) and $W$ the set of individual-level covariates (e.g., age and whether a person with HIV lives in their household). These are either measured prior to intervention implementation or, at minimum, not impacted by the intervention. Let $A^c$ represent whether the cluster was in the intervention arm ($A^c=1$) or the control ($A^c = 0$) and $S$ indicate that an individual was sampled for the sub-study. Next, define $Y_0^*\in \{0,1\}$ as a participant's underlying (possibly unmeasured) outcome status at baseline - specifically, $Y_0^*=1$ if the participant has the outcome (e.g., TB infection) at baseline and 0 if not. Likewise, define $\Delta_0$ as an indicator that their outcome was measured at baseline; hence, $\Delta_0$ is deterministically 0 if the participant was not sampled ($S=0$) for the sub-study. Then define the observed outcome at baseline as $Y_0 = \Delta_0 \times Y^*_0$, equaling 1 if the participant was measured and had the outcome at baseline. Participants known to be at risk at baseline (i.e., those with $\Delta_0=1$ and $Y_0=0$) form a closed cohort for incidence measurement. Variables $Y^*_1$, $\Delta_1$, and $Y_1$ are the follow-up timepoint analogues. Follow-up measurement only occurs among members of the incidence cohort; therefore, $\Delta_1 = 0$ if either $\Delta_0 = 0$ or $Y_0 = 1$. Thus, the observed data on a participant are $O = (E^c, W, A^c, S, \Delta_0, Y_0, \Delta_1, Y_1)$. However, since $E^c$ and $A^c$ are constant within a cluster, we can simplify the participant-level data to $O = (W, S, \Delta_0, Y_0, \Delta_1, Y_1)$ \cite{balzer_two-stage_2021}. A simplified directed acyclic graph (DAG) showing the relationships between variables in our applied example is shown in Figure \ref{oneclustdag}.
\subsubsection{Cluster-level causal parameter}
In Stage 1, our interest is in the true cluster-specific incidence of the outcome, which could be directly observed in the counterfactual scenario where (i) all cluster members were included in the sub-study and assessed for the outcome at baseline, and (ii) all who tested negative at baseline were assessed again at follow-up. If such a scenario were possible, we would know the baseline status $Y_0^*$ on all cluster members and the endline status $Y_1^*$ on all cluster members who were at risk at baseline: $Y^*_1 \mid Y^*_0=0$. While this scenario is impossible, it helps us to define and later identify our target, cluster-level causal parameter $Y^{c*}$ as:
\begin{equation}
\label{eq:targparam}
\begin{aligned}
Y^{c*} \equiv \mathbb{P}(Y_1^* = 1 \mid Y_0^* = 0) = \frac{\mathbb{P}(Y_1^* = 1, Y_0^* = 0)}{\mathbb{P}(Y_0^* = 0)}
\end{aligned}
\end{equation}
Within each cluster separately, we will identify and estimate the numerator and denominator and then take their ratio to obtain an estimate of the corresponding cluster-level statistical parameter.
\subsubsection{Identification of the cluster-level endpoint}
Using the identity $\mathbb{P}(Y_0^*=0) = 1 - \mathbb{P}(Y_0^*=1)$, we can estimate the prevalence of the outcome ($\mathbb{P}(Y_0^*=1)$) at baseline and then subtract from 1 to obtain the proportion at risk. As previously discussed, we would be able to directly calculate the baseline outcome prevalence if all cluster members were included in the sub-study ($S=1$) and assessed for the outcome ($\Delta_0 = 1$). The true underlying parameter $\mathbb{P}(Y^*_0=1)$ can be represented as a statistical parameter (i.e., function) of the observed data distribution, given the following assumptions: i) baseline outcome status is missing at random (MAR): $Y^*_0 \perp \!\!\! \perp S \mid W$ and $Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W$; and ii) positivity: $\mathbb{P}(S=1 \mid W = w) > 0$ and $\mathbb{P}(\Delta_0 = 1 \mid S = 1, W = w) > 0$ for all values $w \in W$.
The reasonableness of the identification assumptions in our motivating example is discussed in Section \ref{results}. In brief, MAR would hold if sub-study sampling was done randomly within values of $W$ \emph{and} if the only common causes of the outcome and its measurement (among those sampled) were also captured in $W$. Additionally, the positivity assumption would hold if there were no values of $W$ in which sub-sampling or measurement (among those sampled) were impossible. Together, these assumptions allow identification of the target causal parameter $\mathbb{P}(Y^*_0 = 1)$ using the observed data as an iterated expectation over the adjustment set $W$ (derivation in the supplemental materials), denoted $\psi_{den} =$ $\mathbb{E} [\mathbb{E} \{Y_0 \mid \Delta_0 = 1$, $S = 1, W \}]$.
To guide our identification of the numerator, the proportion of individuals who are positive at follow-up and at risk at baseline $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$, we consider a longitudinal dynamic intervention (e.g. \cite{hernan_comparison_2006, laan_causal_2007, robins_estimation_2008}). First, as with the denominator, we `set' $S = 1$ and $\Delta_0=1$; that is, all cluster members are sampled for the sub-study and all have their outcome measured at baseline. Second, among those at risk at baseline $(Y_0=0, \Delta_0=1)$, we `set' $\Delta_1 = 1$ to ensure complete measurement of the outcome at follow-up among those in the incidence cohort. Identification of $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$ is possible under the sequential randomization assumptions \cite{robins_new_1986} and corresponding positivity assumptions (details and derivation in supplemental materials):
$$
\psi_{num} = \mathbb{E} \left[ \mathbb{E} \{ \mathbb{I}(Y_0 = 0) \, \mathbb{E} (Y_1 \mid \Delta_1 = 1, Y_0=0,\Delta_0 = 1, S = 1, W) \mid \Delta_0 = 1, S = 1, W\} \right]
$$
If the adjustment variables are discrete, this can be equivalently expressed as
$$
\psi_{num}=\sum_w \mathbb{P}(Y_1=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1 , W=w) \mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)
$$
In words, the statistical parameter for the numerator is the adjusted probability of having the outcome at follow-up among those at risk at baseline, scaled by the adjusted probability of being at risk at baseline, and standardized with respect to the adjustment set. We discuss the reasonableness of the identification assumptions in our motivating example in Section \ref{results}.
Once the numerator and denominator are identified, they are combined to provide a cluster-level statistical parameter $Y^c = \psi_{num}/(1-\psi_{den})$, equaling the target causal parameter $Y^{c*}=\mathbb{P}(Y_1^*=1 | Y_0^*=0)$ under the above identifiability assumptions. (Recall we parameterized $\psi_{den}$ in terms of the baseline prevalence.)
\subsubsection{Estimating the cluster-level statistical parameter}
\label{est}
Several options exist to estimate the statistical parameters corresponding to the denominator $\psi_{den}$, numerator $\psi_{num}$, and, thus, the cluster-level endpoint $Y^c$. For the denominator, if the strata of the adjustment covariates $W$ are discrete and not too numerous, a weighted average of strata-specific mean outcomes can be taken among those sampled and measured at baseline, corresponding to the non-parametric G-computation result of \cite{robins_new_1986}. Equivalently, one could estimate the propensity scores $\mathbb{P}(\Delta_0 = 1, S = 1 \mid W)$ within each stratum of $W$, calculate the inverse probability weights (IPW), and take the empirical mean of the weighted outcomes \cite{horvitz_generalization_1952}. (Of course, the propensity score can be factorized as $\mathbb{P}(\Delta_0 = 1, S = 1 \mid W)= \mathbb{P}(S = 1 \mid W)\times \mathbb{P}(\Delta_0 = 1 \mid S = 1, W)$.) When continuous covariates are included and/or values of $W$ have weak support, the conditional outcome expectation or propensity score must be estimated via other methods, such as logistic regression, before computing the G-computation or IPW result. Similar challenges apply to estimation of the numerator $\psi_{num}$.
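For intuition, the sketch below verifies on hypothetical data that, with a single discrete covariate and saturated (fully stratified) estimates, the G-computation and IPW constructions of $\psi_{den}$ coincide exactly; here \texttt{obs} abbreviates the joint indicator $\mathbb{I}(S=1, \Delta_0=1)$, and all data values are illustrative assumptions.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5_000
W = rng.binomial(1, 0.5, n)
obs = rng.binomial(1, np.where(W == 1, 0.75, 0.40))   # I(S=1, Delta0=1)
Y0 = obs * rng.binomial(1, np.where(W == 1, 0.3, 0.1))
df = pd.DataFrame({"W": W, "obs": obs, "Y0": Y0})

# G-computation: strata means among the observed, averaged over P(W).
gcomp = (df[df.obs == 1].groupby("W")["Y0"].mean()
         * df["W"].value_counts(normalize=True)).sum()

# IPW: weight observed outcomes by the inverse empirical propensity score.
ps = df.groupby("W")["obs"].transform("mean")         # P(obs=1 | W), saturated
ipw = np.mean(df["obs"] * df["Y0"] / ps)

print(f"G-computation: {gcomp:.6f}  IPW: {ipw:.6f}")  # numerically identical
\end{verbatim}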
We chose instead to estimate $\psi_{den}$ and $\psi_{num}$ using TMLE \cite{laan_targeted_2011}. Briefly, TMLE is a framework for constructing asymptotically linear, substitution estimators of statistical parameters by combining estimates of the outcome regression and the propensity score. TMLE is ``double-robust'' in that it will be consistent if either of those is estimated consistently, and if both are estimated consistently and at appropriate convergence rates, it will achieve the non-parametric efficiency bound, providing the smallest asymptotic variance among a wide class of estimators. For estimation of the outcome regression and propensity score, we use the ensemble machine learning method Super Learner \cite{laan_super_2007} with a diverse set of candidate learners fit via cross-validation, allowing for flexible relationships between variables. Using the ensemble allows us to avoid relying on (possibly mis-specified) parametric models and leverage advances in non-parametric machine learning. Step-by-step implementation of TMLE for $\psi_{den}$ and $\psi_{num}$ in Stage 1 is given in the supplemental materials.
We then obtain a point estimate of the cluster-specific endpoint as $\hat{Y}^c=\hat{\psi}_{num}/(1-\hat{\psi}_{den})$. Additionally, since the estimators of the numerator and denominator are asymptotically linear with known influence curve, we can use the delta method to generate confidence intervals for each cluster-specific endpoint. As described next, the estimated cluster-specific endpoints $\hat{Y}^c$ are also used to evaluate the intervention effect in Stage 2 of Two-Stage TMLE.
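To illustrate the delta-method step, the sketch below assumes point estimates and participant-level influence-curve values for the numerator and denominator are already in hand (the values shown are made up), and builds a confidence interval on the natural scale for simplicity; the influence curve of the ratio follows from the gradient of $f(n,d) = n/(1-d)$.
\begin{verbatim}
import numpy as np
from scipy import stats

def ratio_ci(psi_num, psi_den, ic_num, ic_den, alpha=0.05):
    """Delta-method CI for Y^c = psi_num / (1 - psi_den).

    ic_num, ic_den: arrays of estimated influence-curve values (one per
    participant) for the numerator and denominator estimators."""
    est = psi_num / (1 - psi_den)
    # gradient of f(n, d) = n/(1-d) is (1/(1-d), n/(1-d)^2)
    ic = ic_num / (1 - psi_den) + psi_num / (1 - psi_den) ** 2 * ic_den
    se = ic.std(ddof=1) / np.sqrt(len(ic))
    z = stats.norm.ppf(1 - alpha / 2)
    return est, (est - z * se, est + z * se)

# toy usage with made-up influence-curve values
rng = np.random.default_rng(3)
est, ci = ratio_ci(0.08, 0.23, rng.normal(0, 0.3, 400), rng.normal(0, 0.4, 400))
print(f"incidence {est:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
\end{verbatim}
Because the combined influence curve is a single linear combination, its sample variance automatically accounts for the covariance between the numerator and denominator estimators.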
\subsection{Stage 2: Estimating and obtaining inference for the treatment effect}
\label{stage2}
With estimates of the cluster-level endpoint $\hat{Y}^c$ accounting for sub-sampling and missingness, we turn our attention to how to optimally estimate the treatment effect. In Stage 2, our observed cluster-level data consist of $O^c = (E^c, A^c, \hat{Y}^c)$, where $E^c$ refers to a set of general cluster-level characteristics (e.g., urban vs. rural) as well as aggregated individual-level covariates $W$ (e.g., proportion of people with HIV). Using these data, we can estimate a variety of causal effects. For example, our Stage 2 statistical estimand can be defined on the relative scale by the ratio of the treatment-specific means $\psi^c(a^c) = \mathbb{E}[ \mathbb{E}[Y^c | A^c=a^c, E^c]]$. This is a cluster-level analog of the G-computation identifiability result \cite{robins_new_1986, balzer_targeted_2016, balzer_new_2019}. In CRTs, the adjustment variables $E^c$ are included for improved precision - not to control for confounding or missingness. Since the cluster-level intervention $A^c$ is randomized, there is no confounding and the positivity assumption holds by design.
Therefore, Stage 2 estimation of the treatment effect can proceed by implementing a cluster-level TMLE, as detailed in \cite{balzer_two-stage_2021}. The key challenge to Stage 2 is \emph{a priori} specification of the optimal covariate adjustment set $E^c$. One solution to this challenge is \textit{Adaptive Pre-specification} (APS) \cite{balzer_adaptive_2016}, which flexibly selects the combination of estimators for the outcome regression and for the propensity score that maximizes empirical efficiency. Briefly, APS pre-specifies a candidate set of working generalized linear models (GLMs) for the outcome regression and for the propensity score, each adjusting for different baseline covariates. Next, to select the optimal combination of estimators, we pre-specify as loss function the squared influence curve of the TMLE for the target statistical parameter. Finally, using cross-validation, we estimate the expected loss (a.k.a., the risk) of candidate GLMs, and select the combination (and hence adjustment set) that has the smallest cross-validated variance estimate. Finite sample simulations and real-world applications have demonstrated precision gains over alternative approaches \cite{balzer_adaptive_2016, balzer_two-stage_2021, benitez_defining_2022}.
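The sketch below conveys the selection step of APS in a simplified form, assuming a known randomization probability of $0.5$ and candidate working linear models for the outcome regression only; the full procedure in \cite{balzer_adaptive_2016} also considers candidates for the propensity score, and all data here are simulated for illustration.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cv_ic_variance(Y, A, X, n_splits=5, seed=0):
    """Cross-validated variance of the estimated influence curve of a TMLE
    for E[Y(1)] - E[Y(0)] with known g = P(A=1) = 0.5 and a working linear
    outcome model in (A, X); X may have zero columns (unadjusted)."""
    ic = np.empty(len(Y))
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(Y):
        fit = LinearRegression().fit(np.column_stack([A[tr], X[tr]]), Y[tr])
        Q1 = fit.predict(np.column_stack([np.ones(len(te)), X[te]]))
        Q0 = fit.predict(np.column_stack([np.zeros(len(te)), X[te]]))
        QA = np.where(A[te] == 1, Q1, Q0)
        h = (2 * A[te] - 1) / 0.5                  # clever covariate, g = 0.5
        ic[te] = h * (Y[te] - QA) + Q1 - Q0 - (Q1 - Q0).mean()
    return ic.var(ddof=1)

# toy usage: the candidate with the smallest CV variance is selected
rng = np.random.default_rng(4); n = 30
E1, E2 = rng.normal(size=n), rng.normal(size=n)
A = rng.binomial(1, 0.5, n)
Y = 0.5 * A + 2 * E1 + rng.normal(0, 0.5, n)
for name, X in {"unadjusted": np.empty((n, 0)),
                "adjust E1": E1[:, None], "adjust E2": E2[:, None]}.items():
    print(name, round(cv_ic_variance(Y, A, X), 3))
\end{verbatim}
In this toy example, adjusting for the genuinely predictive covariate \texttt{E1} yields the smallest cross-validated influence-curve variance, so it would be selected.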
Under conditions detailed in \cite{balzer_two-stage_2021}, the Two-Stage TMLE estimator will be asymptotically normally distributed, allowing for the construction of confidence intervals and hypothesis tests.
In particular, we need each of the cluster-level endpoints $Y^c$ to be consistently estimated in Stage 1. Biased estimators of the cluster-specific endpoints can result in biased estimates of and misleading inference for the treatment effect. Indeed, the Two-Stage approach is most effective when the cluster size is relatively large, allowing for flexible and well-supported estimation of the cluster-level endpoint. The regularity conditions on Stage 2 estimators of the cluster-level outcome regression and known propensity score hold, by design, when using APS to select from working GLMs. In CRTs with fewer than 40 clusters randomized ($N<40$), we recommend using the Student's $t$ distribution with $N-2$ degrees of freedom as a finite sample approximation of the asymptotic normal distribution. Alternatively, permutation tests or other randomization-based approaches can be used for statistical inference, acknowledging that these approaches are testing a different null hypothesis.
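For example, with $N = 9$ randomized clusters, a two-sided $95\%$ interval on the log scale would use the $97.5$th percentile of a $t_{7}$ distribution in place of the normal quantile; the estimate and standard error below are placeholders, not study results.
\begin{verbatim}
import numpy as np
from scipy import stats

N = 9                                  # randomized clusters
log_rr, se = np.log(0.86), 0.14        # placeholder estimate and SE (log scale)
q = stats.t.ppf(0.975, df=N - 2)       # ~2.36, versus the normal's 1.96
lo, hi = np.exp(log_rr - q * se), np.exp(log_rr + q * se)
print(f"RR {np.exp(log_rr):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
\end{verbatim}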
\section{(Re-)Defining the independent unit}
\label{2comm}
A fundamental premise of CRTs is that individual-level outcomes are dependent within a cluster. Sources of dependence could include shared cluster-level factors, including the intervention, as well as social interactions between participants within a cluster. Instead, clusters are assumed to be independent, providing the basis for statistical inference, as described in the prior subsection. However, as previously discussed, CRTs tend to randomize few clusters, limiting the statistical power. For example, while the main SEARCH study randomized 32 communities, measurement of incident TB infection occurred in only 9 communities in eastern Uganda. Additionally, in CRTs with many clusters, subgroup analyses to understand effect heterogeneity may be conducted among limited numbers of clusters. The extreme case of estimating a causal effect with only two clusters was treated in depth by van der Laan, Petersen, and Zheng \cite{laan_estimating_2013}, and we will draw upon much of their framework below.
In this section, our goals are to carefully define a hierarchical causal model, accurately reflecting the data generating process for a CRT \cite{balzer_new_2019, benitez_defining_2022}, detail the assumptions needed to consider a sub-cluster component to be the conditionally independent unit \cite{laan_estimating_2013}, and then present the consequences of these assumptions for statistical estimation and inference with Two-Stage TMLE. In SEARCH, for example, participants are nested within households, villages, parishes, and ultimately communities. Under different assumptions, explicitly stated below, any level of this clustering could be treated as the conditionally independent unit.
For simplicity, we focus on CRTs where individuals are grouped into sub-cluster ``partitions'', and these partitions are grouped into a cluster. For ease of presentation, we also focus on CRTs with two partitions per cluster; however, our results naturally generalize to other settings. We index partitions by $j = \{1,2\}$ within clusters, which remain the unit of randomization. As before, $E^c$ is the set of cluster-level characteristics, and $A^c$ is an indicator of the cluster being randomized to the intervention arm. Now let $W^p$ be the set of partition-level covariates, which could include general characteristics of the partition (e.g., urban vs. rural) as well as aggregates of individual-level covariates. Likewise, let $Y^p$ be the partition-level endpoint, defined analogously to $Y^c$ in Stage 1.
\subsection{Hierarchical structural causal model}
\label{ex0}
Following \cite{balzer_new_2019,benitez_defining_2022}, we use the structural causal model (SCM) of Pearl \cite{pearl_causality_2009} to formalize the hierarchical data-generating process for a CRT with two partitions. (Again, our presentation and results generalize to CRTs with more than two partitions.) We start with a simple SCM reflecting independence between clusters; the corresponding DAG is in Figure~\ref{s2_u}:
\begin{equation}
\begin{aligned}
E^c &= f_{E^c}(U_{E^c})\\
W^p_1 &= f_{W^p_1}(E^c, U_{W^p_1})\\
W^p_2 &= f_{W^p_2}(E^c, U_{W^p_2})\\
A^c &= f_{A^c}(U_{A^c})\\
Y^p_1 &= f_{Y^p_1}(E^c, W^p_1, W^p_2, A^c, U_{Y_1^p})\\
Y^p_2 &= f_{Y^p_2}(E^c, W^p_1, W^p_2, A^c, U_{Y_2^p})
\end{aligned}
\label{scm0}
\end{equation}
Apart from $U_{A^c}$, which is independent of the other unmeasured factors by randomization, the structure of the $U$s may be complex and cluster-specific; for example, the unobserved factors influencing the outcomes $(U_{Y^p_1}, U_{Y^p_2})$ might be correlated with and/or in some way related to unmeasured factors at the cluster level $U_{E^c}$. For more, we direct the reader to \cite{laan_estimating_2013}. Beyond the unmeasured factors, there are several sources of dependence between partition-level outcomes in this model. For example, the outcome for the $j^{th}$ partition $Y^p_j$ may depend on the characteristics of the other partition $W^p_{-j}$. This general model only allows for independence at the cluster level, not (yet) at the partition level.
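To make these sources of dependence concrete, the short simulation below draws from one parametric instance of this SCM; all coefficients, the dependence of $Y^p_j$ on $W^p_{-j}$, and the correlation of $(U_{Y^p_1}, U_{Y^p_2})$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n = 50_000                                        # clusters, 2 partitions each
Ec = rng.normal(size=n)                           # cluster-level covariate
Wp = Ec[:, None] + rng.normal(size=(n, 2))        # W^p_j = E^c + noise
Ac = rng.binomial(1, 0.5, n)                      # randomized at cluster level
Uy = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n)  # correlated U_Y
# Y^p_j depends on E^c, both partitions' covariates, A^c, and U_{Y_j}:
Yp = 0.3 * Ec[:, None] + 0.4 * Wp + 0.2 * Wp[:, ::-1] - 0.5 * Ac[:, None] + Uy
print(np.corrcoef(Yp[:, 0], Yp[:, 1])[0, 1])      # strong within-cluster corr.
\end{verbatim}
Even though $A^c$ is randomized, the two partition-level outcomes in a cluster remain strongly correlated, both through $E^c$ and through the correlated unmeasured factors.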
\subsection{Assumptions for partition-level independence}
To treat the partition (sub-cluster) as the conditionally independent unit, we need to make several assumptions, similar to those detailed in \cite{laan_estimating_2013}.
\begin{enumerate}
\item Any effect of the cluster-level covariates $E^c$ on the partition-level outcome $Y_j^p$ is only through their effect on partition-level covariates $W^p_j$.
\item The $j^{th}$ partition's outcome $Y_j^p$ is not influenced by the characteristics of the other partition in its cluster $W^p_{-j}$.
\item There are no unmeasured common causes of the cluster-level characteristics and the partition-level characteristics or outcomes: $U_{E^c} \perp \!\!\! \perp U_{Y^p_j}, U_{W^p_j}$.
\item There are no unmeasured common causes of characteristics or outcomes between partitions in a cluster: $U_{Y^p_j}, U_{W^p_j} \perp\!\!\!\perp U_{Y^p_{-j}}, U_{W^p_{-j}}$.
\end{enumerate}
Whether or not these assumptions are reasonable depends on the study context. To maximize the effective sample size, it might be tempting to define the ``partitions'' as the $J$ individuals in a cluster. However, this would entail very strong assumptions, which may be unrealistic.
In the SEARCH study, for example, the baseline HIV status of one household member likely influences the outcome risk of another, as people with HIV may be at higher risk of acquiring or spreading TB (violating assumption \#2), and a household's ventilation may be a residual (i.e., unmeasured) source of correlation between individual-level outcomes (violating assumption \#4); hence, we were unable to justify considering individuals as the unit of conditional independence. Further discussion of the motivating study is provided in Section~\ref{results}. In other settings, however, considering individuals as conditionally independent may be reasonable; for a further discussion of this option, see \cite{laan_estimating_2013}.
In general, these assumptions should be considered with caution, as they result in meaningful changes to the causal graph (Figure~\ref{s2_indep}) and model:
\begin{equation}
\begin{aligned}
E^c &= f_{E^c}(U_{E^c})\\
W^p_1 &= f_{W^p_1}(E^c, U_{W^p_1})\\
W^p_2 &= f_{W^p_2}(E^c, U_{W^p_2})\\
A^c &= f_{A^c}(U_{A^c})\\
Y^p_1 &= f_{Y^p_1}(W^p_1, A^c, U_{Y^p_1})\\
Y^p_2 &= f_{Y^p_2}(W^p_2, A^c, U_{Y^p_2})
\end{aligned}
\label{scm000}
\end{equation}
where the following independence assumptions on the joint distribution of unmeasured factors $\mathbb{P}_U$ hold by design and by assumption, respectively:
$U_{W^p_1}, U_{W^p_2}, U_{E^c}, U_{Y_1^p}, U_{Y_2^p} \perp \!\!\! \perp U_{A^c}$;
$U_{E^c} \perp \!\!\! \perp U_{Y^p_j}, U_{W^p_j}$, and
$U_{Y^p_j}, U_{W^p_j} \perp \!\!\! \perp U_{Y^p_{-j}}, U_{W^p_{-j}}$.
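A quick simulation check of the restricted model (again with illustrative functional forms) shows what these assumptions buy: the partition-level outcomes remain marginally correlated through $E^c$, but become uncorrelated once the systematic part in $(W^p_j, A^c)$ is removed.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
Ec = rng.normal(size=n)
Wp = Ec[:, None] + rng.normal(size=(n, 2))   # E^c acts only through W^p_j
Ac = rng.binomial(1, 0.5, n)
Uy = rng.normal(size=(n, 2))                 # independent across partitions
Yp = 0.4 * Wp - 0.5 * Ac[:, None] + Uy
resid = Yp - (0.4 * Wp - 0.5 * Ac[:, None])  # condition on W^p_j and A^c
print(np.corrcoef(Yp[:, 0], Yp[:, 1])[0, 1],        # marginal: correlated
      np.corrcoef(resid[:, 0], resid[:, 1])[0, 1])  # conditional: approx. 0
\end{verbatim}
This is exactly why adjustment for $W^p$ is required, rather than optional, when treating the partition as the conditionally independent unit.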
\subsection{Estimation and inference with partition-level independence}
\label{Sec:Est_partition}
The assumptions encoded in the restrictive causal model (Eq.~\ref{scm000}) have important implications for our two-stage estimation approach. Previously, when considering the cluster to be the independent unit, we identified and estimated a cluster-specific endpoint $Y^c$ that accounted for sub-sampling of individuals, missingness on baseline outcome status of sampled individuals, and missingness on final outcome status of individuals known to be at risk at baseline. Under the more restrictive model, we now need to identify and estimate a partition-specific endpoint $Y^p$ in Stage 1. Additionally, during effect estimation in Stage 2, we previously adjusted for cluster-level covariates $E^c$ to increase precision. Now, however, adjustment for the partition-level covariates $W^p$ is \textit{required} to block the effect of the cluster-level factors $E^c$, which are no longer included in the adjustment set. Therefore, the Stage 2 statistical estimand is now defined in terms of contrasts of treatment-specific means of the partition-level endpoint, such as $\psi^p(a^c) = \mathbb{E}[\mathbb{E}\{Y^p | A^c=a^c, W^p\}]$. Here, $\psi^p(a^c)$ is the conditional expectation of the partition-level outcome given the cluster-level intervention and partition-level covariates, averaged over the distribution of the partition-level covariates. Blurring the lines between randomized trials and observational data, we now must adjust for confounders $W^p$ to identify a causal effect
and support the conditional independence assumptions.
Importantly, the revised statistical estimand $\psi^p(a^c)$ is in terms of the expected partition-level outcome and has a subtly different interpretation than the original statistical estimand $\psi^c(a^c)$, which was in terms of the expected cluster-level outcome. Additionally, when the number of partitions per cluster varies, the values of these two estimands could differ. However, we can apply weights to recover either estimand; for a detailed discussion and worked examples, see \cite{benitez_defining_2022}.
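The following toy calculation illustrates the weighting for a hypothetical trial of three clusters containing one, two, and three partitions: weighting each partition by the inverse of the number of partitions in its cluster recovers the cluster-level mean from partition-level data.
\begin{verbatim}
import numpy as np
import pandas as pd

df = pd.DataFrame({"cluster": [1, 2, 2, 3, 3, 3],
                   "Yp":      [0.10, 0.20, 0.40, 0.30, 0.60, 0.90]})
partition_level = df["Yp"].mean()                          # partitions equal
cluster_level = df.groupby("cluster")["Yp"].mean().mean()  # clusters equal
# weight each partition by 1/(partitions in its cluster)
w = 1 / df.groupby("cluster")["Yp"].transform("size")
weighted = np.average(df["Yp"], weights=w)
print(partition_level, cluster_level, weighted)  # weighted == cluster_level
\end{verbatim}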
As before, the revised statistical estimand $\psi^p(a^c)$ could be estimated with a variety of algorithms. We again recommend TMLE, given its double robustness property. Recall the adjustment for $W^p$ is now required to support the independence assumptions. For flexible adjustment of those covariates, we can again use Super Learner as in Stage 1.
Importantly, treating the partition as the conditionally independent unit also changes the assumptions needed for statistical inference. Specifically, stronger conditions are now required for Two-Stage TMLE to be asymptotically linear. As before, we need Stage 1 estimators of the partition-level endpoint $Y^p$ to be consistent. Now, however, the regularity conditions on effect estimation in Stage 2 do not hold by design. Instead, we need estimators of the partition-level outcome regression and propensity score to converge to the truth at quick enough rates and to avoid overfitting \cite{laan_targeted_2011}. To satisfy these Stage 2 conditions, we recommend implementing a partition-level TMLE with Super Learner with a diverse set of candidate algorithms. If these conditions hold, the TMLE will be asymptotically normal, with asymptotic variance given by the variance of its influence curve.
\section{Application to the SEARCH sub-study for incident TB infection}
\label{results}
Many factors influence the TB infection risk. Examples include an individual's susceptibility (e.g., age and HIV status) and their level of exposure to TB. The latter is influenced by air quality/ventilation and social mixing patterns (e.g., the prevalence of TB in a given location and the infectiousness of persons with TB in a given location). In particular, transmission is known to occur both within and outside households \cite{martinez_paediatric_2019, carbone_active_2015, wood_tuberculosis_2010}. Given these considerations, we immediately eliminated the individual and the household as possible candidates for the conditionally independent unit. However, given the geographic distribution of the study communities, the mobility patterns of community residents, and the distribution of gathering sites such as churches, we expected that TB was far less likely to spread across \textit{parishes}, a sub-community administrative unit. Therefore, with the larger sub-study team, we critically evaluated the assumptions in Section \ref{2comm} and concluded that after adjustment for the parish-level prevalence of HIV and the parish-level proportion of adults who drink alcohol, it was reasonable to treat the parish as the conditionally independent unit. In brief, HIV and TB are linked epidemics, and outside of the household, bars and other drinking locations are the key hotspots for TB transmission. In Section~\ref{comparison}, we demonstrate the sensitivity of our study results to treating either the parish (two parishes per community) or the community as the (conditionally) independent unit. Before doing so, we first summarize the assumptions and implementation of Two-Stage TMLE in SEARCH.
\subsection{Assumptions \& estimator implementation}
In Two-Stage TMLE, our first step is to identify and estimate a partition-level endpoint, appropriately accounting for missingness at the individual level. In SEARCH, there were three ways complete-case data would not be representative of the underlying population: (1) the sampling scheme for the sub-study; (2) measurement of baseline TB status among those sampled; and (3) measurement of final TB status among those in the incidence cohort (i.e., persons known to be at risk at baseline). Specifically, sampling was enriched for adult members with HIV: of the 17,858 households in the 9 eastern Ugandan communities, 1,435 were selected and 688 (47.9\%) of those households had at least one adult with HIV. The adult (aged $\geq 15$) prevalence of HIV in the sub-sample was 19.6\%, a sharp contrast to adult HIV prevalence in the overall study communities of 3.6\% \cite{havlir_hiv_2019}. Nonetheless, sampling $S$ was random within household HIV status $H$, satisfying the following assumptions by design: $Y^*_0 \perp \!\!\! \perp S \mid H$ and $\mathbb{P}(S=1 \mid H=h)>0$ for $h\in\{0,1\}$.
Despite up to three visits to the sampled households, including weekends and after hours, only 4884 (58\%) of the 8420 household members were administered a TST at baseline. With the larger sub-study team, we determined that the key risk factors for prevalent TB and its measurement were age and increased mobility, largely due to school or work. Consider, for example, an older participant who has an occupation that requires travel; they may be less likely to be measured, but also at a higher risk of TB, due to their age and travel. Nonetheless, if in addition to household HIV status $H$, age and mobility were the only common causes of prevalent TB and its measurement, then the missing data assumption $Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W$ would hold, where, for ease of notation, $W$ includes household HIV status, participant age, and their mobility. Put differently, we were willing to assume that for sampled individuals and within values of $W$, the prevalence of TB among those with a baseline TST was representative of the prevalence of TB among those without a baseline TST. Additionally, we assumed there was a positive probability of administering a TST within all possible values of $W$. These assumptions, together with the sampling design, allowed for the identification of baseline prevalence of TB in each partition $\mathbb{P}(Y_0^*=1)$ as $\psi_{den}=\mathbb{E}[\mathbb{E}\{Y_0 \mid \Delta_0=1, S=1, W\}]$.
Among the 4884 participants administered a TST at baseline, 3871 (78\%) were TST-negative, forming a closed cohort for incidence measurement. The assumptions on measurement of TB status among this cohort were similar. In brief, we again assumed that within strata defined by household HIV status, age, and mobility, the risk of incident TB infection among the 3003 cohort members (78\%) with a follow-up TST was representative of the risk of incident TB infection among the 828 cohort members (22\%) without a final TST. We also assumed a positive probability of receiving a follow-up TST (among the incidence cohort) within all strata defined by $W$. These assumptions were again supported by the study design, including the repeat visits to households, and allowed for identification of $\mathbb{P}(Y_1^*=1, Y_0^*=0)$ as $\psi_{num} = \mathbb{E}\left[ \mathbb{E}\{ \mathbb{I}(Y_0 = 0)\, \mathbb{E}( Y_1 \mid \Delta_1 = 1, Y_0=0,\Delta_0 = 1, S = 1, W) \mid \Delta_0 = 1, S = 1, W\} \right]$.
For implementation, we estimated the one-year incidence of TB infection by stratifying on parish and taking the ratio of $\hat{\psi}_{num}$ and $(1-\hat{\psi}_{den})$, each estimated with TMLE.
Within TMLE, we used Super Learner to combine estimates from main-terms GLM, multivariate adaptive regression splines, and a simple mean. Figure \ref{s1} in the supplemental materials provides estimates of the parish-specific one-year incidence of TB infection, along with 95\% confidence intervals.
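As a rough illustration of the ensemble step (the study itself used the Super Learner framework of \cite{laan_super_2007}), the sketch below fits a cross-validated stacked ensemble in Python with scikit-learn; the learner library here is a stand-in (a main-terms logistic regression and a simple mean, with the splines learner omitted for lack of a standard Python implementation), and all data are simulated.
\begin{verbatim}
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression

# Simulated stand-in data: 3 covariates, outcome depends on the first.
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

# Cross-validated stacking: base learners' out-of-fold predicted
# probabilities are combined by a logistic meta-learner.
sl = StackingClassifier(
    estimators=[("glm", LogisticRegression()),
                ("mean", DummyClassifier(strategy="prior"))],
    final_estimator=LogisticRegression(),
    cv=5, stack_method="predict_proba")
sl.fit(X, y)
print(sl.predict_proba(X[:3, :])[:, 1])  # ensemble probabilities
\end{verbatim}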
Recall that in Two-Stage TMLE, the second step is to use the partition-level endpoint estimates $\hat{Y}^p$ to evaluate the effect of the cluster-level intervention. In the application, we evaluated the SEARCH effect on incident TB infection with a parish-level TMLE, a procedure outlined in \cite{balzer_two-stage_2021}. To support our previously discussed independence assumptions, the adjustment set $W^p$ included the baseline parish-level prevalence of HIV and the baseline parish-level prevalence of alcohol use. For flexible adjustment, we used the Super Learner with the same library of prediction algorithms. Computing code is available at [blinded for review].
\subsection{Comparative results}
\label{comparison}
The results of the SEARCH sub-study on incident TB infection have been previously presented in \cite{marquez_impact_2022}. The primary prespecified analysis, using Two-Stage TMLE with the parishes as the conditionally independent unit, suggested that the SEARCH intervention resulted in a 27\% reduction in incident TB infection in eastern Uganda; the adjusted relative risk (aRR) was $0.73$ $(95\%$CI: $0.57-0.92;$ one-sided $p$-value $0.005)$. Plausible mechanisms for this substantial reduction in incidence are detailed in \cite{marquez_impact_2022}.
We now explore the practical impact of varying the identifiability assumptions on estimation and inference. The results of our comparison are summarized in Figure~\ref{results_fig} and Table~\ref{tab:sens} in the supplemental materials. First, we relaxed the assumption that parishes were conditionally independent and, instead, took a more traditional approach treating the randomized unit (i.e., the cluster) as the independent unit. As expected, when we moved from an effective sample size of $N = 18$ in the parish-level analysis to a sample size of $N = 9$ in the community-level analysis, the effect estimate shifted slightly and substantial precision was lost: aRR $= 0.86$ $(95\%$CI: $0.66-1.13;$ $p = 0.115)$. In this secondary analysis, we used Two-Stage TMLE as described in Section~\ref{sec:Two-stage}. Stage 1 was implemented analogously to obtain community-level estimates of TB incidence, accounting for the 3 sources of missingness. However, Stage 2 was implemented using a community-level TMLE with Adaptive Pre-specification to select the adjustment covariates that maximized empirical efficiency \cite{balzer_adaptive_2016}.
To further explore the impact of our assumption that parishes were conditionally independent, we conducted a sensitivity analysis where Stage 1 accounted for missingness (as before), but Stage 2 was implemented without adjustment. This approach corresponds to the very strong and unreasonable assumption that the only source of dependence between parishes was the shared community-level intervention $A^c$. In other words, this analysis assumed no community-level covariates (measured or not) directly or indirectly influenced the incidence of TB infection. Estimates from this approach were again in a similar direction, but even less precise: aRR $= 0.91$ $(95\%$CI: $0.63 - 1.32$; $p=0.304)$.
Next, we explored the impact of our missing data assumptions. Specifically, we conducted a sensitivity analysis where Stage 1 estimates of incidence were unadjusted, but Stage 2 was adjusted (as before). This approach corresponds to the very strong and unreasonable assumption that individual-level outcomes were missing completely at random (MCAR). In fact, we know this assumption was violated: the sub-sample was enriched for persons with HIV, and HIV is a known risk factor for TB. Age and mobility are additional risk factors for TB and for not having a TST placed at baseline or follow-up. Perhaps unsurprisingly, estimates from this approach were markedly different and in the opposite direction of the primary analysis: aRR $=1.04$ $(95\%$CI: $0.80 - 1.37$; $p=0.633)$. In other words, conducting a complete-case analysis would lead to the erroneous conclusion that the SEARCH intervention increased the incidence of TB infection by 4\%.
Finally and as an extreme example of strong assumptions on measurement and independence, we conducted a fully unadjusted analysis. In Stage 1, we estimated the parish-level incidence of TB infection with the raw proportion among those measured. Then in Stage 2, we compared parish-level incidence estimates by arm without further adjustment. This approach is not recommended in practice and suggested the SEARCH intervention increased the incidence of TB infection by 18\%: aRR $= 1.18$ $(95\%$CI: $0.84-1.63$; $p=0.843)$.
\section{Discussion}
\label{discussion}
CRTs allow for the rigorous evaluation of interventions delivered at the group level. Within CRTs, rare or expensive outcomes may only be measured in a subset of clusters and, within those clusters, on a sub-sample of participants. Missing outcomes among participants are another common issue, which can bias estimates of baseline prevalence, the incidence of the outcome, and the intervention effect. To address these challenges, we extended Two-Stage TMLE to account for sub-sampling of participants and differential measurement of their outcomes at baseline and follow-up. Additionally, we detailed the assumptions needed to consider a sub-cluster partition as the conditionally independent unit. We also extended Two-Stage TMLE to this novel setting, which blurs the lines between CRTs and observational studies. Our application to real data from the SEARCH community randomized trial demonstrated the real-world impact of varying assumptions and analytic choices. For example, ignoring the sampling scheme and assuming the outcomes were missing completely at random reversed the direction of the estimated intervention effect.
When estimating the endpoint in Stage 1 and evaluating the intervention effect in Stage 2, we used TMLE with Super Learner to minimize parametric assumptions and allow for flexible estimation of the relationships between variables.
In the absence of missing data, a single-stage approach, such as GLMMs or GEE, could be used to estimate the intervention effect. These methods account for the dependence of participants within a cluster and can incorporate adjustment for partition-level variables $W^p$, needed to support the independence assumptions. However, when adjusting for covariates, these alternative estimators are often limited in their ability to estimate marginal effects \cite{benitez_defining_2022}. For example, when using the logit-link in GLMM and GEE, the conditional odds ratio is estimated \cite{laird_random-effects_1982, hubbard_gee_2010}. Additionally, as previously discussed, even after considering the sub-cluster partition to be the conditionally independent unit, the effective sample size may still be too small to support use of these approaches without finite sample corrections.
Finally and perhaps most importantly, these methods rely on strong modeling assumptions and do not share the double robustness properties of TMLE.
Nonetheless, our approach does require real assumptions on missing data and the dependence structure within a cluster.
These assumptions have implications for trial design. First, all the shared causes of missingness and outcomes must be measured. Second, fairly large cluster (or sub-cluster) sizes are needed for stable and consistent estimation of the endpoints in Stage 1 \cite{balzer_two-stage_2021}. Finally, to support any conditional independence assumptions in Stage 2, a rich set of partition-level covariates should be collected.
In all cases, these assumptions should be carefully considered and transparently stated.
Even when these assumptions are reasonable, there may be additional finite sample concerns for estimation and inference. Specifically, a tension can arise between adjusting for too many covariates (with the potential of overfitting, even with cross-validation) and including too few (not supporting the identifiability assumptions). As illustrated with the real-data example, in-depth discussion with the study team is essential for identifying the minimal set of adjustment variables needed to support our assumptions. Additionally, we recommend conducting a simulation study, informed by the real-data application, to provide guidance on potential sources of bias. Finally, future work could consider implementation of a `Collaborative' Two-Stage TMLE \cite{laan_collaborative_2010}, where the propensity score is fit in response to adjustment conducted in the outcome regression.
\printbibliography
\section*{Tables and figures}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.79\textwidth]{dag_one_clust.png}
\caption{A simplified directed acyclic graph (DAG) for a participant from a given cluster in the TB sub-study of SEARCH. For simplicity, the graph is shown without any dependence between unmeasured variables, which are, thus, omitted. Abbreviations: TST = tuberculin skin test, TB = tuberculosis.}
\label{oneclustdag}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.75\textwidth]{s2_unrestricted.png}
\caption{DAG visualizing the assumed relationships between variables in the SCM, assuming two partitions in each cluster.}
\label{s2_u}
\end{center}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=.75\textwidth]{s2_indep.png}
\caption{Restricted DAG reflecting the assumptions needed for the partitions to be conditionally independent. Note the independence of partition- and cluster-level $U$s, the fact that all effects of $E^c$ on the outcomes occur only through their effect on $W^p_j$, and the independence of outcome $Y^p_j$ from $W^p_{-j}$ after conditioning on $W^p_j$.}
\label{s2_indep}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.85\textwidth]{results_plot_BW.png}
\caption{Graphical comparison of the results under different sets of assumptions using real data from the SEARCH sub-study on incident TB infection. The primary analysis, a parish-level analysis with adjustment in Stage 1 and Stage 2, is shown first.}
\label{results_fig}
\end{center}
\end{figure}
\section*{Acknowledgements and funding}
We extend many thanks to: the Ministries of Health of Uganda and Kenya; our research and administrative teams in San Francisco, Uganda, and Kenya; collaborators and advisory boards; and, especially, all the communities and participants involved. Funding support for this work was provided by The National Institutes of Health (U01AI099959, UM1AI068636, K23AI118592, and R01AI151209) and the President's Emergency Plan for AIDS Relief.
\appendix
\section{Derivations of the identifiability results}
\label{appendix:derivations}
\subsection{Stage 1 denominator}
\label{appendix:derivations:denominator}
Recall that $S$ is an indicator of being sampled for participation in the sub-study, $W$ are individual-level covariates, $\Delta_0$ is an indicator of individual measurement at baseline, $Y^*_0$ is the true underlying outcome status of the individual at baseline, and $Y_0 = \Delta_0 \times Y^*_0$ is an indicator that the individual was measured and had the outcome at baseline. Our goal is to estimate the baseline outcome prevalence $\mathbb{P}(Y^*_0 = 1)$ under a hypothetical intervention where all participants were included $(S = 1)$ and all had their outcome measured at baseline $(\Delta_0 = 1)$. Under the following assumptions, together with sufficient data support (i.e., positivity), we can identify the target causal parameter $\mathbb{P}(Y^*_0 = 1)$ as
\begin{equation}
\label{eq:one}
\begin{aligned}
\mathbb{P}(Y^*_0 = 1) &= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid W = w) \mathbb{P}(W = w) \\
&\text{by } Y_0^* \perp \!\!\! \perp S \mid W \text{:}\\
&= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid S = 1, W = w) \mathbb{P}(W = w) \\
&\text{by } Y_0^* \perp \!\!\! \perp \Delta_0 \mid S=1, W \text{:}\\
&= \sum_{w} \mathbb{P}(Y^*_0 = 1 \mid \Delta_0 = 1, S = 1, W= w) \mathbb{P}(W = w)\\
& \text{since }Y_0^* = Y_0 \text{ when } \Delta_0 = 1, S = 1 \text{:}\\
&= \sum_{w} \mathbb{P}(Y_0 = 1 \mid \Delta_0 = 1, S = 1, W = w) \mathbb{P}(W = w)\\
&=\mathbb{E} \left[ \mathbb{E} \left(Y_0 \mid \Delta_0 = 1, S = 1, W \right) \right].
\end{aligned}
\end{equation}
Throughout, the summation generalizes to an integral for continuous-valued variables $W$.
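For concreteness, the identification result in Eqn. \eqref{eq:one} suggests a simple plug-in (G-computation) estimator: fit the outcome regression among sampled and measured participants, predict for all participants, and average. The Python sketch below illustrates this logic, with an off-the-shelf logistic regression standing in for the Super Learner used in our analyses; all variable names are illustrative.
\begin{verbatim}
from sklearn.linear_model import LogisticRegression

def baseline_prevalence(W, S, D0, Y0):
    """Plug-in estimate of P(Y*_0 = 1) following Eqn. (1).

    W: (n, p) numpy array of covariates for all participants;
    S, D0, Y0: 0/1 numpy arrays (sampling, baseline measurement, outcome).
    """
    obs = (S == 1) & (D0 == 1)                    # sampled and tested at baseline
    fit = LogisticRegression(max_iter=1000).fit(W[obs], Y0[obs])
    # E[ E(Y0 | Delta0 = 1, S = 1, W) ]: predict for everyone, then average
    return fit.predict_proba(W)[:, 1].mean()
\end{verbatim}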
\subsection{Stage 1 numerator}
\label{appendix:derivations:numerator}
In addition to the notation in Appendix \ref{appendix:derivations:denominator}, recall that $\Delta_1$ is an indicator of individual measurement at follow-up, $Y^*_1$ is the underlying indicator that the individual had the outcome at follow-up, and $Y_1 = \Delta_1 \times Y^*_1$ is an indicator that the individual was measured and had the outcome at follow-up. Our goal is to identify the proportion of individuals who have the outcome at follow-up and were at risk at baseline: $\mathbb{P}(Y_1^* = 1, Y_0^* = 0)$. To do so, we utilize and extend the approach of Balzer et al. \cite{balzer_far_2020}. For simplicity of presentation, we define the underlying indicator of the outcome of interest as
$Z^* = \mathbb{I}(Y^*_1 = 1, Y^*_0 = 0)$ and
its observed analog as $Z = \mathbb{I}(Y_1 = 1, Y_0 = 0)$.
To address missingness on baseline outcome status and follow-up outcome status, we consider a longitudinal dynamic regime \cite{hernan_comparison_2006, laan_causal_2007, robins_estimation_2008}. First, we `set' $S = 1$ and $\Delta_0=1$; that is, all individuals are included in the sub-sample and measured at baseline. Second, among those known to be at risk at baseline ($Y_0=0$ and $\Delta_0=1$), we `set' $\Delta_1 = 1$ to ensure complete measurement of the outcome at follow-up.
Identification of $\mathbb{P}(Z^*=1)$ from the observed data distribution is possible under the sequential randomization assumption \cite{robins_new_1986}, which is similar to the `missing at random' assumption, and sufficient data support (i.e., the relevant positivity assumptions), shown in the steps of the derivation below:
\begin{align*}
\mathbb{P} (Z^* & = 1) \\
=& \sum_w \mathbb{P}(Z^*=1 \mid W=w)\mathbb{P}(W=w)\\
\text{by }& Z^* \perp \!\!\! \perp S \mid W \text{ and } Z^* \perp \!\!\! \perp \Delta_0 \mid S = 1, W \\
=& \sum_w \mathbb{P}(Z^*=1 \mid \Delta_0=1, S=1, W=w)\mathbb{P}(W=w)\\
=& \sum_w \sum_{y_0} \mathbb{P}(Z^*=1 \mid Y_0=y_0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= y_0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{by } & Z^* = 0 \text{ when } Y_0 = 1 \text{ and } \Delta_0 = 1
\\
=& \sum_w \mathbb{P}(Z^*=1 \mid Y_0=0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{by } & Z^* \perp \!\!\! \perp \Delta_1 \mid Y_0 = 0, \Delta_0 = 1,S=1,W \\
=& \sum_w \mathbb{P}(Y_1^*=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1, W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)\\
\text{since } & Y_1^* = Y_1 \text{ when } \Delta_1 = 1 \text{:}\\
=& \sum_w \mathbb{P}(Y_1=1 \mid \Delta_1=1, Y_0=0, \Delta_0=1, S=1 , W=w)
\mathbb{P}(Y_0= 0 \mid \Delta_0=1, S=1, W=w) \mathbb{P}(W=w)
\end{align*}
As before, the summation generalizes to an integral for continuous-valued variables $W$.
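Analogously, the final expression maps onto a plug-in estimator with two regressions: one for the follow-up outcome among those at risk and measured, and one for being at risk at baseline. A hedged Python sketch (again with logistic regressions in place of Super Learner; names illustrative):
\begin{verbatim}
from sklearn.linear_model import LogisticRegression

def incidence_numerator(W, S, D0, Y0, D1, Y1):
    """Plug-in estimate of P(Y*_1 = 1, Y*_0 = 0) via the derivation above."""
    base = (S == 1) & (D0 == 1)              # sampled and tested at baseline
    seen = base & (Y0 == 0) & (D1 == 1)      # at risk and tested at follow-up
    m1 = LogisticRegression(max_iter=1000).fit(W[seen], Y1[seen])
    m0 = LogisticRegression(max_iter=1000).fit(W[base], Y0[base])
    p1 = m1.predict_proba(W)[:, 1]           # P(Y1=1 | D1=1, Y0=0, D0=1, S=1, W)
    p0 = 1.0 - m0.predict_proba(W)[:, 1]     # P(Y0=0 | D0=1, S=1, W)
    return (p1 * p0).mean()
\end{verbatim}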
\section{Additional results from SEARCH and comparative results under different identification assumptions}
\label{app:sens}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.85\textwidth]{_s1_BW.png}
\caption{Adjusted Stage 1 estimates of the parish-level incidence of TB infection with 95\% confidence intervals, sorted by point estimate and study arm. Recall there are 2 parishes per community.}
\label{s1}
\end{center}
\end{figure}
\begin{table}[!hb]
\begin{tabular}{l|l|l}
& Control mean (min-max) & Intervention mean (min-max)\\
\hline
\hline
Total population & 4767 (2453-6913) & 5391 (3102-7459)\\
\hline
Sampled & 466 (186-679) & 470 (210-692)\\
\hline
Tested at baseline & 264 (112-448) & 280 (116-428)\\
\hline
TST-negative at baseline, tested at follow-up & 171 (49-289) & 162 (66-268)\\
\hline
\end{tabular}
\caption{Parish-level characteristics by arm}
\label{tablecomp}
\end{table}
\begin{table}[]
\centering
\def\arraystretch{1.5}%
\begin{tabularx}{\textwidth}{Y Y l}
\textbf{Estimator} & \textbf{Key assumptions} & \makecell{\textbf{Risk ratio}\\\textbf{estimate (95\% CI)}}\\
\hline
Stage 1 and Stage 2 adjusted, parish-level (Primary analysis) &
Individual-level outcomes are missing at random (MAR) given household HIV status, age and mobility. Parish-level outcomes are conditionally independent given the prevalence of HIV and prevalence of alcohol use. & 0.73 (0.57 - 0.92)\\
Stage 1 and Stage 2 adjusted, community-level$^*$ &
Individual-level outcomes are MAR given household HIV status, age and mobility.
& 0.86 (0.66 - 1.13)\\
Stage 1 adjustment only, parish-level &
Individual-level outcomes are MAR given household HIV status, age and mobility. Parish-level outcomes are (marginally) independent.
& 0.91 (0.63 - 1.32)\\
Stage 2 adjustment only, parish-level &
Individual-level outcomes are missing completely at random (MCAR). Parish-level outcomes are conditionally independent given the prevalence of HIV and prevalence of alcohol use.
& 1.04 (0.80 - 1.37)\\
Unadjusted, parish-level &
Individual-level outcomes are MCAR. Parish-level outcomes are (marginally) independent.
& 1.18 (0.84 - 1.63)\\
\hline
\multicolumn{3}{l}{\begin{footnotesize} $^*$Stage 2 adjustment covariates selected through Adaptive Prespecification to maximize empirical efficiency \cite{balzer_adaptive_2016}. \end{footnotesize}}
\end{tabularx}
\caption{Comparison of the results under different sets of assumptions using real data from the SEARCH sub-study on incident TB infection.}
\label{tab:sens}
\end{table}
\end{document}
\section{Introduction}
Nonequilibrium phase transitions are considered a key feature of a countless number of phenomena, such as
magnetic systems, biological and ecological models, water-like anomalies,
and many others \cite{marr99,hinrichsen,odor04,henkel}.
Recently, considerable interest has been
devoted to the inclusion of more realistic
ingredients in order to describe (or mimic) the effects
of impurities or external
fluctuations, as well as their influence
on the phase transition \cite{buendia, bustos, liu, buono, paula,voronoi}.
Commonly, these ingredients are introduced
by allowing the control parameter to assume distinct
values in space and/or time.
The former case, regarded as quenched
disorder, drastically affects the phase transitions,
leading to the existence of
new universality classes and of local regions in the absorbing phase
characterized by large activity with a slow decay towards
extinction. These rare
regions typically arise
when the activation rate $\lambda$ lies between the clean value
$\lambda_c^0$ (without disorder) and the dirty (disordered)
critical point $\lambda_c$; i.e., $\lambda_c^0<\lambda<\lambda_c$.
Moreover, in these regions the system may exhibit non-universal exponents in the decay toward
full extinction \cite{igloi,oliveira,vojta}.
Heuristically, the Harris criterion
\cite{harris} establishes that quenched
disorder is a relevant perturbation
if $d\nu_{\perp} <2$, where $d$ is the system
dimensionality and $\nu_{\perp}$ is the spatial correlation
length exponent.
For models belonging to
the directed percolation (DP) universality class $\nu_{\perp}=1.096854(4), 0.734(4)$ and $0.581(5)$
in $d=1,2$ and $3$, respectively. Consequently, the Harris criterion indicates
that spatial disorder
is a relevant perturbation for continuous absorbing phase transitions in all dimensions.
Conversely, the Imry-Ma
\cite{imry} and Aizenman-Wehr \cite{wehl} criteria establish that
quenched disorder suppresses the phase coexistence
in equilibrium systems for $d \le 2$.
Afterwards, it was shown \cite{hovi,corte,hoenicke} that
the discontinuous transition in the
Ziff-Gulari-Barshad
(ZGB) model becomes continuous when the disorder strength is large enough.
More recently, Villa-Mart\'in et al. \cite{paula} have suggested that
the Imry-Ma-Aizenman-Wehr conjecture should be extended to
discontinuous absorbing phase transitions for $d \le 2$,
irrespective of the disorder magnitude.
Although less studied than spatial disorder, the
influence of temporal disorder has also
been considered in some cases \cite{munoz2011,martinez,jensen96}.
In contrast to quenched disorder, here
the control parameter becomes time-dependent,
resulting in temporarily active (ordered)
as well as absorbing (disordered) phases;
the effect of this variability becomes pronounced at the
emergence of the phase transition.
In particular, the available results
have shown that temporal
disorder is a highly relevant perturbation \cite{vojta-hoyos},
suppressing the DP phase
transitions in all dimensions. For systems with up-down symmetry
it is relevant only for $d\ge 3$.
{\em Temporal Griffiths phases} (TGPs), a region in the active
phase characterized by power-law spatial scaling and generic
divergences of the susceptibility,
have also been reported for absorbing phase transitions
\cite{munoz2011,vojta-hoyos,neto2,solano},
but not found in low dimensional systems
with up-down symmetry \cite{martinez}.
On the other hand, the effect of temporal disorder for
{\em discontinuous} absorbing phase transitions is still unknown.
In order to shed some light in this direction, here we investigate
the effects of temporal disorder in discontinuous
absorbing phase transition. Our study aims
to answer three fundamental questions: (i) is the occurrence of
phase coexistence forbidden in the presence of temporal disorder? (ii) if not, what changes does
it provoke with respect to the pure (without disorder) version?
(iii) Does the temporal disorder induce temporal
Griffiths phases around these phase transitions?
These ideas will be tested in three models which are known to yield discontinuous absorbing phase transitions in two- and infinite-dimensional systems,
namely the ZGB model for CO oxidation \cite{zgb}
and two lattice versions of the second Schl\"ogl model (SSM) \cite{schlogol72,oliveira}.
As we will show, in all cases the phase transition is characterized by a behavior similar to their pure (without disorder) counterparts,
including bistability around the coexistence point and
common finite size scaling behavior with the inverse of the system volume, as
recently proposed in \cite{martins-fiore}.
This paper is organized as follows: In Sec. II we review the models studied
and the simulation methods employed. Results and discussion are shown in Sec. III and conclusions are presented
in Sec. IV.
\section{Models and methods}
The SSM is a single-species autocatalytic reaction model defined
by the reactions $2A \to 3A$ and $A\to 0$,
which occur with transition rates
$1$ and $\alpha$, respectively.
Such a system displays a discontinuous phase transition that can be qualitatively
reproduced under a mean-field treatment.
The first reaction predicts
particle growth following a quadratic
dependence on the density, which makes the
low-density (active) state unstable, and thus
a jump to a nonzero (large) density
arises as the creation probability
$1/(1+\alpha)$ increases beyond
the value set by the threshold $\alpha_0=1/4$ \cite{marr99}.
Nonetheless, distinct works have claimed that
these reaction rules are not sufficient to produce a discontinuity on a
regular
lattice \cite{durret}. In particular, the system dimensionality
and the geometrical constraint of requiring the
presence of a pair of adjacent particles surrounding an
empty site (in order to fulfill the reaction $2A \to 3A$)
are essential ingredients for the emergence of phase coexistence
\cite{fiore14,foot2}.
Here, we consider two square lattice versions of the SSM. The first one (SSM1),
proposed by Windus and Jensen \cite{jensen} and afterwards reconsidered
in Ref. \cite{paula}, is defined as follows:
A given particle $i$ is
chosen (with equal probability) from a list of currently occupied sites
and is annihilated with probability $p_a=\alpha/(1+\alpha)$. Otherwise, with
probability $(1-p_a)/4$, a nearest-neighbor
site of $i$, called site $j$, is also chosen
at random. If
site $j$ is empty, particle $i$ diffuses to it.
If $j$ is occupied by a particle, an offspring
will be created at one of the neighboring sites of
$i$ and $j$ (chosen with equal probability)
with probability $p_b$, provided it is empty; otherwise
nothing happens. The value $p_{b} = 0.5$ has been considered
to directly compare our results with previous
studies \cite{jensen,paula,martins-fiore}.
After the above dynamics, the time is incremented by $1/N$, where $N$ is
the number of occupied sites.
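To make the dynamics concrete, a minimal Python sketch of a single SSM1 update is shown below; boundary conditions and the handling of occupied target sites follow our reading of the rules above, and the code actually used for the simulations may differ in details.
\begin{verbatim}
import numpy as np

def nbrs(s, L):
    x, y = s
    return [((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L)]

def ssm1_step(occ, alpha, p_b, rng):
    """One SSM1 update; occ is an LxL 0/1 array, rng a numpy Generator."""
    L = occ.shape[0]
    xs, ys = np.nonzero(occ)
    N = len(xs)
    k = rng.integers(N)
    i = (int(xs[k]), int(ys[k]))                 # random occupied site
    if rng.random() < alpha / (1.0 + alpha):     # annihilation, A -> 0
        occ[i] = 0
    else:
        j = nbrs(i, L)[rng.integers(4)]          # random nearest neighbor of i
        if occ[j] == 0:                          # empty: the particle diffuses
            occ[i], occ[j] = 0, 1
        else:                                    # occupied: 2A -> 3A attempt
            cand = [s for s in nbrs(i, L) + nbrs(j, L) if s not in (i, j)]
            target = cand[rng.integers(len(cand))]
            if occ[target] == 0 and rng.random() < p_b:
                occ[target] = 1
    return 1.0 / N                               # time increment 1/N
\end{verbatim}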
For the second version, SSM2,
the selection of particle $i$, its annihilation probability
and the choice of the nearest
neighbor site $j$ are identical to SSM1.
However, in the SSM2
when a neighboring site $j$ is chosen,
its number of nearest-neighbor occupied sites $nn$ will be evaluated. A new
offspring will be created at $j$ with probability $nn/4$, provided $nn \ge 2$
and $j$ is empty. More specifically, if $nn=1$ no particle will be
created at the vacant site. On the contrary,
if $nn=2$, $3$ or $4$, the creation will occur
with probability $nn/4$.
It is worth mentioning that in the SSM1,
the discontinuous transition is caused by both the diffusion and the creation of offspring in the presence of two particles.
Conversely, in the SSM2 it is caused by the creation of offspring in the presence of at least two particles.
The third system we investigate is the ZGB model \cite{zgb},
which qualitatively reproduces some features of the
oxidation of carbon monoxide on a catalytic
surface. The surface is modeled as a square lattice,
in which each site can be empty ($*$), or occupied by an
oxygen (O$_{ads}$) or a carbon monoxide (CO$_{ads}$).
It is summarized by the following reactions:
\begin{eqnarray}
\mbox{CO}_{gas}+*\to \mbox{CO}_{ads} \nonumber \\
\mbox{O}_{2 gas}+2* \to 2\mbox{O}_{ads} \nonumber \\
\mbox{CO}_{ads}+\mbox{O}_{ads}\to \mbox{CO}_{2}+ 2*. \nonumber
\end{eqnarray}
\noindent
In practice, molecules of CO$_{gas}$ and O$_{2 gas}$ hit the surface with
complementary probabilities $Y$ and $(1-Y)$, respectively, whenever
the chosen site is empty. At the surface, the O$_2$ molecule dissociates into two
independent O atoms, which occupy two adjacent empty
sites. If a CO$_{ads}$-O$_{ads}$ pair is placed at neighboring sites on
the surface, a CO$_{2}$ molecule will be formed, desorbing instantaneously and leaving both sites empty.
As in the SSM models, after the above dynamics
is implemented, time is incremented by $1/N$ where $N$ is the total number of empty sites.
By changing the parameter $Y$, the model exhibits two phase transitions
between an active steady state and one of two absorbing (``poisoned'') states, in which the
surface is saturated either by O or by CO. The O-poisoned transition
is found to be continuous. On the other hand, the CO-poisoned
transition is discontinuous, and in this work we will focus on this specific case.
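For illustration, one adsorption attempt of the ZGB dynamics can be sketched in Python as follows (a minimal reading of the above rules, with periodic boundaries and reaction partners resolved at random; not necessarily the exact implementation employed here).
\begin{verbatim}
def neighbors(site, L):
    i, j = site
    return [((i+1) % L, j), ((i-1) % L, j), (i, (j+1) % L), (i, (j-1) % L)]

def react(lat, site, partner, rng):
    """Desorb site together with a random nearest neighbor holding partner."""
    hits = [n for n in neighbors(site, lat.shape[0]) if lat[n] == partner]
    if hits:
        lat[site] = 0
        lat[hits[rng.integers(len(hits))]] = 0

def zgb_event(lat, Y, rng):
    """One adsorption attempt; 0 = empty, 1 = CO_ads, 2 = O_ads."""
    L = lat.shape[0]
    site = (int(rng.integers(L)), int(rng.integers(L)))
    if lat[site] != 0:
        return
    if rng.random() < Y:                      # CO_gas arrives
        lat[site] = 1
        react(lat, site, 2, rng)              # CO_ads + O_ads -> CO2 + 2*
    else:                                     # O2_gas needs two adjacent empties
        empty = [n for n in neighbors(site, L) if lat[n] == 0]
        if not empty:
            return
        other = empty[rng.integers(len(empty))]
        lat[site], lat[other] = 2, 2
        for s in (site, other):               # each O atom may react with a CO
            if lat[s] == 2:
                react(lat, s, 1, rng)
\end{verbatim}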
For the SSMs, the order parameter $\phi$ is the system density $\rho$ and
the transitions take place at
$\alpha_0=0.0824(1)$ (SSM1) \cite{martins-fiore,foot}
and $\alpha_0=0.2007(6)$ (SSM2) \cite{oliveira2}.
For the ZGB, $\phi$ is the density of CO and the transition occurs
at $Y_0=0.5250(6)$\cite{martins-fiore}.
The temporal disorder is introduced so that at
each time interval $t_i\le t\le t_i+\Delta t$, a generic control parameter
$p$ assumes a value extracted from a uniform
distribution with mean ${\bar p}$ and width $\sigma$. More specifically,
$p$ is evaluated using the formula $p={\bar p}+(2\xi-1)\sigma$,
where $\xi$ is a random number drawn at each time interval $\Delta t$ from the standard uniform distribution in
$ [0,1]$.
For the SSMs, ${\bar p}$ corresponds to
the creation probability ${\bar p}=1-p_a=\frac{1}{1+\alpha}$,
with a similar formula holding for the ZGB model with $1-p_a$ replaced by $Y$.
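In practice, this protocol amounts to drawing a piecewise-constant signal for the control parameter, one value per interval $\Delta t$; a short Python sketch (parameter values illustrative):
\begin{verbatim}
import numpy as np

def disorder_signal(p_bar, sigma, n_intervals, rng):
    """One uniform draw p = p_bar + (2*xi - 1)*sigma per time interval."""
    xi = rng.random(n_intervals)             # xi ~ U[0, 1], one per interval
    return p_bar + (2.0 * xi - 1.0) * sigma  # p in [p_bar-sigma, p_bar+sigma]

rng = np.random.default_rng(1)
p_t = disorder_signal(p_bar=1.0 / (1.0 + 0.08), sigma=0.15,
                      n_intervals=10, rng=rng)
\end{verbatim}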
In order to locate the transition point and the nature
of the phase transition, we consider three alternative procedures.
First, we follow the time behavior of the order-parameter
$\phi(t)$, starting from
a fully active initial configuration.
In the active phase, it converges to a constant value,
signaling the
permanent creation and annihilation of particles.
On the other hand, $\phi(t)$ decays exponentially
toward extinction (full poisoned state for the ZGB) in the absorbing phase.
In the case of a typical (without disorder)
continuous phase transitions, the above regimes are separated
by a power-law decay $\phi(t) \sim t^{-\theta}$, with
$\theta$ being the associated critical exponent. For the DP universality
class, $\theta=0.4505(10)$ in two dimensions \cite{henkel}. In the presence
of temporal disorder, the above critical behavior is replaced
by $\phi(t) \sim (\ln t)^{-1}$ \cite{neto2,solano}.
Additionally, one does not expect
similar behaviors at the emergence of a discontinuous transition.
The coexistence point can be estimated through a
threshold value ${\tilde \alpha}$, which separates the saturation toward a definite value from an exponential
decay \cite{fiore14,fioresal,fioresal2}.
Alternatively, a more reliable procedure is achieved by
performing a finite-size analysis, as
recently proposed in Ref.~\cite{martins-fiore}. According to it, the
difference between the pseudo-transition point $\alpha_L$ and
the transition point $\alpha_0$ scales with $L^{-2}$, where $L^2$ denotes
the system volume (in two dimensions).
The estimation of $\alpha_L$ can be done in a variety of ways.
For instance, as corresponding to the peak of the system's order-parameter variance
$\chi=L^{2}(\langle \phi^2\rangle-\langle \phi \rangle^2)$, or even
through the value in which the bimodal order parameter distribution presents
two equal areas \cite{martins-fiore}.
However, such scaling behavior is verified only
by considering some kind of quasi-stationary (QS) ensemble, i.e.
an ensemble of states accessed
by the original dynamics at long times {\it conditioned on survival}
(and restricted to those which are not trapped into an absorbing state).
Here we employ an efficient numerical scheme
given in Ref. \cite{qssim}, in which configurations are stored and gradually updated
during the evolution of the stochastic process.
Whenever the transition to the absorbing state is imminent, the system is ``relocated'' to a
saved configuration. This accurately reproduces the results
from the much longer procedure of performing averages only
on samples that have not been trapped in the absorbing state at the end of their
respective runs.
The intensive quantities in a QS ensemble must converge to the
stationary ones when $L \to \infty$.
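Schematically, the bookkeeping of the QS method can be organized as follows (Python sketch; the buffer size and replacement rate are illustrative parameters, and Ref. \cite{qssim} should be consulted for the actual prescription):
\begin{verbatim}
class QSEnsemble:
    """Stored configurations for quasi-stationary simulations."""
    def __init__(self, n_stored=100, p_replace=0.01):
        self.buffer = []
        self.n_stored, self.p_replace = n_stored, p_replace

    def update(self, config, dt, rng):
        # store/refresh configurations during the evolution
        # (rng is a numpy Generator)
        if len(self.buffer) < self.n_stored:
            self.buffer.append(config.copy())
        elif rng.random() < self.p_replace * dt:
            self.buffer[rng.integers(self.n_stored)] = config.copy()

    def restart(self, rng):
        # called whenever the dynamics would enter the absorbing state
        return self.buffer[rng.integers(len(self.buffer))].copy()
\end{verbatim}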
Finally, in the third procedure, the mean survival time $\tau$ is considered for different system sizes.
According to Refs. \cite{paula,ref30}, the coexistence point is the separatrix between
an unlimited exponential growth of $\tau$ with the system size and an increase only up to a characteristic size $L_c$,
followed by a decreasing behavior for $L>L_c$. Here, we shall also
quantify it, in order to compare with the pure (not disordered) cases.
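The corresponding finite-size extrapolation is then straightforward, e.g. (Python sketch; array contents illustrative):
\begin{verbatim}
import numpy as np

def coexistence_point(alphas, chi_by_L, Ls):
    """alpha_L from the peak of chi for each L, fitted linearly in 1/L^2."""
    alpha_L = np.array([alphas[np.argmax(chi)] for chi in chi_by_L])
    x = 1.0 / np.asarray(Ls, dtype=float) ** 2
    slope, alpha_0 = np.polyfit(x, alpha_L, 1)  # intercept: L -> infinity
    return alpha_0
\end{verbatim}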
\section{Results and discussion}
\subsection{Models in a square lattice}
The first analysis of the influence of temporal disorder
is achieved by inspecting the time decay of the order parameter $\phi(t)$
starting from a fully active initial configuration at
$t=0$. For the SSM1, Figs.
\ref{fig1} and \ref{fig3} (panels $(a)$)
show $\rho(t)$ for $\rho(0)=1$ for the pure version and for
$\sigma=0.05$ (not shown) and $\sigma=0.15$, with $\Delta t=1$.
\begin{figure}[h]
\centering
\includegraphics[scale=0.345]{gra_pure.eps}
\caption{({\bf Color online}): Results for the pure SSM1.
Panel $(a)$ shows the time decay of $\rho(t)$ for
$\rho(0)=1$ and distinct values of $\alpha$. Panel
$(b)$ shows the bistable behavior of $\rho(t)$ close to the
separatrix point ${\tilde \alpha} \sim 0.0815$ for distinct initial
densities ranging
from $10^{-2}$ to $1$.
Panels $(c)$ and $(d)$ show the order-parameter variance
$\chi$ versus $\alpha$ and the value $\alpha_L$, for which $\chi$ is maximum, versus $1/L^2$.}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.345]{gra_s0.15.eps}
\caption{({\bf Color online}): Results for $\sigma=0.15$.
Panel $(a)$ shows the time decay of $\rho(t)$ for
$\rho(0)=1$ and distinct values of $\alpha$. Panel
$(b)$ shows the bistable behavior of $\rho$ at ${\tilde \alpha} \sim 0.035$ for distinct initial
densities ranging from $10^{-2}$ to $1$. Panels $(c)$ and $(d)$ show the order-parameter variance
$\chi$ versus $\alpha$ and the value $\alpha_L$, for which $\chi$ is maximum, versus $1/L^2$.}
\label{fig3}
\end{figure}
In all cases there is a threshold value ${\tilde \alpha}$
separating indefinite activity from exponential
decay toward particle extinction. These thresholds
are strongly dependent on $\sigma$ and occur
at ${\tilde \alpha}=0.0812, 0.076$ (not shown)
and $0.035$ for the pure case, $\sigma=0.05$ and $0.15$, respectively.
No indication of a power law has been verified, nor
a behavior of the type $\rho \sim (\ln t)^{-1}$.
By repeating the above analysis for distinct initial
configurations (panels $(b)$)
with distinct densities ($10^{-2} \le \rho(0)\le 1$), the curves
converge to two well-defined stationary states,
with $\rho \ll 1$ and $\rho \sim \rho^*$,
signaling the bistability of active and absorbing phases, thus
suggesting in all cases a first-order phase transition.
For the pure, $\sigma=0.05$ and $0.15$,
$\rho^*$ read $0.637(2)$, $0.63(2)$ and $0.77(2)$ respectively.
Inspection of quasi-stationary properties for distinct $L$'s reveals
that the $\alpha_{L}$'s (panels $(c)$ and $(d)$),
in which the order parameter variance
$\chi$ is maximum, scale with $1/L^{2}$ and give
$\alpha_{0} = 0.0824(2)$, $0.0823(2)$ (not shown) and $0.0680(2)$
for the pure, $\sigma=0.05$ (not shown) and $0.15$, respectively.
In particular for $L=100$, the peak in $\chi$ occurs at $0.0827(1)$,
$0.0826(1)$ and $0.0684(1)$, respectively.
Therefore, both previous analyses suggest
that temporal disorder does not forbid a discontinuous
phase transition. However, it increases the metastable region
at the emergence of the phase coexistence, i.e. $\alpha_L-{\tilde \alpha}$
increases with $\sigma$. This feature shares similarities with
some procedures studied for characterizing the first-order transition
in the ZGB and allied models close to the coexistence by taking different
initial configurations \cite{evans}.
An important point is that above the transition the number of points
decreases substantially
with increasing $\sigma$, revealing a suppression (absence) of
a phase transition for $\sigma >0.22$, which is a rather small
disorder weight.
In order to strengthen (i.e., to increase) the influence of disorder in the SSM1, we perform two changes.
First we
increase the disorder duration $\Delta t$.
Since no larger values of $\sigma$ are possible for this model, we change to a bimodal disorder distribution,
where, at each $\Delta t$, the creation probability is chosen from two values, $p=1/(1+\alpha)$ and
$p_l=1/(1+20\alpha)$, with rates
$1-q$ and $q$, respectively. The results are presented
in Fig. \ref{fig2}(a), (b) for
$q=0.2$, $L=200$ and $\Delta t=6$.
Second, the analysis
of the SSM2 for a larger value $\sigma=0.4$ (with $\Delta t=1$) is
considered. Since its pure
version yields a larger
transition point, it is possible to increase substantially the
value of $\sigma$ (in contrast to the SSM1).
These results are presented in Fig. \ref{fig2} (panels $(c)$ and $(d)$).
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{jensen_str.eps}
\includegraphics[scale=0.35]{gra_thre_s0.4.eps}
\caption{({\bf Color online}) The SSM1 with
bimodal temporal disorder distribution.
$(a)$ log-log time
decay of $\rho$ for distinct
$\alpha$'s and $\Delta t=6$. $(b)$ Bistable behavior
of $\rho$ at $\alpha=0.007$ (very
close to the separatrix point ${\tilde \alpha} \sim 0.008$) for
distinct initial densities ranging from $10^{-3}$ to $1$. Panel $(c)$
shows the same analysis in $(a)$, but for the SSM2 with uniform
distribution for $\sigma=0.4$ and $\Delta t=1$. In $(d)$,
the bistable behavior
of $\rho$ at $\alpha=0.001$,
close to the transition point ${\tilde \alpha} \sim 0.005$, for
distinct initial densities ranging from $10^{-3}$ to $1$.}
\label{fig2}
\end{figure}
In both systems, the phase transitions obey the same pattern as the previous cases:
separatrix points at ${\tilde \alpha}\sim 0.008$ (${\tilde \alpha}\sim 0.005$)
and bistable behaviors of $\rho(t)$ in
the vicinity of the transition points [exemplified here for $\alpha=0.007$ (and $\alpha=0.001$)].
In both cases, the ${\tilde \alpha}$'s are
very small, highlighting the relevance
of disorder.
As for the SSM1, the phase transition is suppressed for sufficiently large $\sigma$'s, whose
results reveal the absence of a phase transition for $\sigma >0.4$.
We now turn our attention to the ZGB model. In Figs. \ref{zgb2} and \ref{zgb3} we show the
results for different disorder strengths, $\sigma = 0.05$ and
$\sigma=0.10$, respectively. In particular, we have considered rather small disorder strengths,
in order not to ``mix'' both phase transitions. In both cases,
results similar to the SSMs have been obtained.
Panels $(a)$ show once again the onset point ${\tilde Y}$ separating activity
from an exponential growth toward full carbon monoxide poisoning.
The values of ${\tilde Y}$ decrease by raising the disorder parameter $\sigma$,
and read ${\tilde Y}=0.527(1)$, $0.523(1)$, $0.516(1)$ and $0.500(2)$
for the pure, $\sigma=0.05$, $0.1$ and $0.2$ (not shown), respectively.
In addition, the $Y_{L}$'s, obtained from the maxima of the order-parameter variance
$\chi$, scale with $1/L^{2}$ as in the pure version \cite{martins-fiore}.
For the pure, $\sigma=0.05$, $0.1$ and $0.2$ (not shown) we obtain
$Y_0=0.5253(3)$, $0.524(1)$, $0.520(1)$ and $0.509(2)$, respectively.
Although less pronounced than for the previous
example, note that the difference $Y_0-{\tilde Y}$
increases with $\sigma$, reinforcing that
disorder increases the spinodal region around the phase coexistence.
\begin{figure}[!hbt]
\includegraphics[scale=0.35]{zgb.05.eps}
\caption{({\bf Color online}): Results for the ZGB model for $\sigma=0.05$.
Panel $(a)$ shows the time decay of $\rho_{CO}$ for $\rho_{CO}(0)=0$
and distinct values of $Y$. Panel
$(b)$ shows the bistable behavior of $\rho_{CO}$ for $Y = 0.522$ for distinct initial
densities equi-spaced in the interval $[0.1,0.9]$ (linear system size: $L=800$). Panels $(c)$ and $(d)$ show the order-parameter variance
$\chi$ versus $Y$ and the $Y_L$,
for which $\chi$ is maximum, versus $1/L^2$.}
\label{zgb2}
\end{figure}
\begin{figure}[!hbt]
\includegraphics[scale=0.35]{zgb_s0.1.eps}
\caption{({\bf Color online}): Results for the ZGB model for $\sigma=0.10$.
Panel $(a)$ shows the time decay of $\rho_{CO}$ for $\rho_{CO}(0)=0$
and distinct values of $Y$. Panel
$(b)$ shows the bistable behavior of $\rho_{CO}$ for $Y = 0.520$ for distinct initial
densities equi-spaced in the interval $[0.1,0.9]$ (linear system size: $L=800$). Panels $(c)$ and $(d)$ show the order-parameter variance
$\chi$ versus $Y$ and the $Y_L$,
for which $\chi$ is maximum, versus $1/L^2$.}
\label{zgb3}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.35]{tau1.eps}
\caption{({\bf Color online}): For the SSM1
(ZGB) model, panels $(a)$[$(c)$] and $(b)$[$(d)$] show the QS lifetime
for the pure and disordered versions, respectively. We
take $\sigma=0.15$ and $0.1$ for the SSM1
and ZGB models, respectively.}
\label{tau1}
\end{figure}
Fig. \ref{tau1} shows the mean lifetime of the QS state (defined as the time between
two absorbing attempts during the QS regime),
for the pure and disordered systems.
We observe in all
cases the same behavior (in similarity with Ref. \cite{paula}): a threshold
value separating an unlimited exponential growth of $\tau$ from a growth only up to a characteristic system size $L_c$, followed by a decrease
of $\tau$ for $L>L_c$. For the pure cases, from such
analysis the coexistence points are located within the interval
$0.0805<\alpha<0.081$ (SSM1) and $0.5256<Y<0.5258$ (ZGB). In
the presence of temporal disorder, they
are in the interval $0.066<\alpha<0.067$ (SSM1 for $\sigma=0.15$)
and $0.515<Y<0.518$ (ZGB for $\sigma=0.1$), which
agrees with previous estimates obtained from the maxima of $\chi$.
Thus the above findings suggest that in contrast with critical
transitions, $\tau$ does not
grow algebraically in a region within the active phase. These
results are similar to those obtained for the generalized voter model
\cite{martinez}, suggesting
that TGPs do not manifest at discontinuous absorbing transitions, but only
at critical ones \cite{vojta-hoyos, neto2, solano}. However,
this point still deserves further studies.
We close this section by remarking that the active-CO poisoned
transition exhibits a behavior consistent with a continuous transition
for $\sigma>0.3$ (not shown). Thus, in contrast to the SSMs (at least up to $\sigma \le 0.4$),
numerical results indicate that the increase of $\sigma$ suppresses the phase coexistence.
\subsection{Models in a complete graph}
With the purpose of investigating the effects of temporal disorder
in infinite-dimensional structures, the last analysis considers a
mean-field-like description of the above models, through
a complete graph (CG) treatment.
In the CG approach, each site interacts with all others, so that
an exact analysis is allowed. For the SSM, besides the reactions
$A \rightarrow 0$ and $2A \rightarrow 3A$,
one takes the coagulation process $2A \rightarrow A$
occurring with rate $\nu$ \cite{paula,dickman-vidigal02}. The
discontinuous transitions take place at the exact points
$\alpha_0=1/(2\sqrt{\nu})$ and $Y_0=2/3$ \cite{zgbqs,dickman-vidigal02},
for the SSM and ZGB, respectively.
Due to the prohibition against O$-$CO
occupying nearest-neighbor pairs, only one species (CO or O) may be present
at any moment in the ZGB analysis.
Let $\rho=\rho_{CO}-\rho_{O}$ with $\rho_{CO}$ and $\rho_O$ denoting the
fraction of sites bearing a CO and O,
respectively. This quantity allows one to describe a system of
$N$ sites completely by a single variable, with
$\rho =-1$ representing the O-poisoned state and $\rho =1$ the
CO-poisoned state
(see more details in Ref. \cite{zgbqs}).
In particular, we take $\nu=1$
for the SSM and in all cases
the temporal disorder was introduced in a similar fashion as in Sec. II.
\begin{figure}[t]
\includegraphics[scale=0.35]{schCG2.eps}
\caption{({\bf Color online}): For the SSM on a complete graph,
the QS density $\rho$ for the pure model $(a)$ and with temporal disorder
strength $\sigma =0.1$ $(b)$. In $(c)$, the
QS probability distributions for the pure model, with $\alpha$ ranging from $0.475$ to $0.525$.
In $(d)$, the same as $(c)$ but for $\sigma=0.1$, and $\alpha$ ranging from $0.4750$
to $0.5625$. System size: $N=10000$ in $(c)$ and $(d)$. }
\label{schCG}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.35]{zgbCG.eps}
\caption{({\bf Color online}): For the ZGB model on a complete graph,
the QS order-parameter $\rho$ for the pure model $(a)$ and with temporal disorder
strength $\sigma=0.1$ $(b)$. In $(c)$, the
QS probability distributions for the pure model, with $Y$ ranging from $0.60$ to $0.70$.
In $(d)$, the same as $(c)$ but for $\sigma=0.1$, and $Y$ ranging from $0.60$
to $0.70$. System size: $N=10000$ in panels $(c)$ and $(d)$. }
\label{zgbCG}
\end{figure}
Our results for $\rho$ for the SSM and ZGB models are shown in panels $(a)$
of Figs. \ref{schCG} and \ref{zgbCG}, respectively.
In both cases, the analysis on the complete graph predicts behaviors which are similar to the numerical studies: a
reduction of the active region when
compared to the pure counterparts,
and the occurrence of bimodal probability
distributions [see e.g. panels (c)-(d) in Figs. \ref{schCG} and \ref{zgbCG}].
In particular, for disorder strength $\sigma=0.1$, the transition
points are shifted from $\alpha_0=0.5$
to $\alpha_0=0.526$ (SSM),
and from $Y_0=2/3$ to $Y_0=0.635$ (ZGB). Thus, the inclusion
of low disorder maintains the phase coexistence. However, by
increasing $\sigma$ the active phase
peaks become broader, suggesting the appearance of a continuous transition
as shown in Fig. \ref{CG1} (a) and (b). Despite this, there are
some differences
when compared to their low-dimensional
counterparts. There is a region
in the active phase (see e.g. Fig. \ref{CG1} (c) and (d))
in which $\tau$ grows more slowly than exponentially
and then saturates at a finite value. This behavior is
related to the abrupt transition that occurs when the noise takes the
control parameter to a value that drives the system to
the absorbing state.
Since configurations with intermediate densities are unstable in these
systems, one observes a bimodal QS probability distribution
in this region.
This behavior is remarkably
distinct from TGPs, in which $\tau$ increases algebraically
with the system size $L$, and which have been observed
only in continuous (absorbing) phase transitions.
\begin{figure}[t]
\includegraphics[scale=0.35]{CG1b2.eps}
\caption{({\bf Color online}): For the complete graph,
panels $(a)$ and $(b)$ show the QS order-parameter $\rho$ for the
SSM and ZGB models,
respectively for distinct $\sigma$'s and $N=10000$. Panel $(c)$
shows $\tau$ versus $N$ for the SSM model on
a complete graph for $\sigma=0.1$ and $\alpha$ ranging from
$\alpha=0.458, 0.467, 0.472, 0.476, 0.481, 0.490$, and $0.500$
(from top to bottom). In $(d)$, the same in $(c)$ but for the ZGB
with $\sigma = 0.1$ and $Y$ ranging from 0.55 to 0.61
(equi-spaced from top to bottom). }
\label{CG1}
\end{figure}
\section{Conclusions}
We studied the influence of temporal disorder in the
context of discontinuous absorbing phase transitions. We investigated extensively three models
by means of distinct numerical procedures. Our results strongly suggest that in contrast
to the spatial disorder, discontinuous absorbing transitions are not forbidden by the presence of
temporal disorder in low-dimensional systems. In particular,
the behavior of the relevant quantities is similar to that of their pure
counterparts. However, the temporal disorder increases the metastable region close
to phase coexistence.
Our results also suggest the absence of
temporal Griffiths phases (TGPs).
Some remarks on their existence are in order:
Earlier results for different systems have shown that
the inclusion of temporal disorder does not
necessarily lead to the presence of TGPs \cite{martinez}.
Although it suppresses the DP universality class in all dimensions,
the appearance of TGPs depends on $\sigma$
and/or $\Delta t$ \cite{neto2,solano}. Similar conclusions
continue to be valid for distinct up-down
systems, in which TGPs are observed
only for $d\ge 3$. Recent results for a one-dimensional example \cite{fiore17} confirm
the absence of TGPs
when the phase transition is discontinuous.
For the complete graph versions
($d \rightarrow \infty$), we observe the maintenance
of the phase coexistence for small disorder. However, in contrast
to the lattice versions,
there is a region in the active phase in which the lifetime
grows more slowly than exponentially and then
saturates at a finite value.
It is worth emphasizing that our results do not exclude
a discontinuous transition becoming continuous beyond a disorder
threshold $\sigma_c$.
While for the SSMs the transition points decrease substantially as
$\sigma$ increases,
results for the ZGB model indicate
the suppression of phase coexistence
for $\sigma>0.3$. Again, the CG
approach and the above-mentioned one-dimensional
case also reveal similar trends. This last case shows that the
crossover to criticality is also followed by the
appearance of TGPs within the active phase \cite{fiore17}.
Possible extensions of this work include the study of the effect of
temporal correlated disorder and the more general case of spatio-temporal disorder, i.e.
how the discontinuous phase transition is affected by an external perturbation that
fluctuates in both space and time \cite{dickman-vojta}.
Both cases appear
to be of particular interest in the context
of ecosystems, where the effects of noise on the
extinction of a population
due to environmental changes have been attracting considerable
attention recently \cite{meerson}.
Also, extensions of both models to higher dimensions
are intended to be investigated, in order to confirm the above hypotheses.
\section*{ACKNOWLEDGMENT}
We acknowledge Gabriel T. Landi and
J. A. Hoyos for fruitful discussions.
The financial supports from CNPq and FAPESP, under grants 15/04451-2
and 307620/2015-8, are also acknowledged.
\bibliographystyle{apsrev}
\section{Introduction}
Self-supervised representation learning experienced tremendous advancements in the past few years in many fields. In terms of the quality of the learned features, unsupervised learning has caught up with supervised learning or even surpassed the latter in many cases. This trend promises unparalleled scalability for data-driven machine learning in the future. One of the most successful paradigms in image self-supervised representation learning is based on instance-augmentation-invariant contrastive learning \citep{wu2018unsupervised, chen2020simple, chen2020improved}. This style of learning methods achieves the following general goal: 1) It brings the representations of two different views (augmentations) of the same instance (image) closer. 2) It keeps the representation informative of the input; in other words, it avoids collapse. Several recent non-contrastive methods achieve competitive performance by explicitly achieving those two goals \citep{bardes2021vicreg, li2022neural}. While we celebrate the empirical success of SSL in a wide range of benchmarks, our understanding and knowledge of this learning process are still very limited. In this work, {\bf \em{we seek the principle behind the instance-based SSL methods and argue that the success largely comes from learning a representation of image patches based on their co-occurrence statistics in the images.}} To demonstrate this, we simplify the current SSL method to using a single crop scale to learn a representation of image patches of fixed size and establish a formal connection between our formulation and co-occurrence statistics modeling. The patch representation can be linearly aggregated (bag-of-words) to form the representation of the image. The learned representation achieves similar or better performance than the baseline representation, which is based on the entire image. In particular, even a kNN classifier works surprisingly well with the aggregated patch features. These findings also resonate with recent works in supervised learning based on patch features \citep{brendel2019approximating, dosovitskiy2020image, trockman2022patches}.
We also show that for baseline SSL methods pretrained with multi-scale crops, the whole-image representation is essentially an aggregation of different patch representations from the same instance. Further, given various SOTA baseline SSL models, we show that the same aggregation process can further improve the representation quality. Then we provide a cosine-similarity-based visualization of image patch representations on both the ImageNet and CIFAR-10 datasets. Particularly, we find that while the projection space has achieved significant invariance, the embedding space, frequently used for representation evaluation, tends to preserve more locality and equivariance.
Our discoveries may provide useful explanations and understanding for the success of the instance-augmentation-invariant SSL methods. The co-occurrence statistics modeling formulation and equivariance preserving property in the embedding space both supplement the current prevailing invariance perspective. Finally, these results motivate an interesting discussion of several potential future directions.
\section{Related Works}
\subsection{Instance-Based Self-Supervised Learning: Invariance without Collapse}
The instance contrastive learning \citep{wu2018unsupervised} views each of the images as a different class and uses data augmentation \citep{DosovitskiyFSRB16} to generate different views from the same image. As the number of classes is equal to the number of images, it is formulated as a massive classification problem, which may require a huge buffer or memory bank. Later, SimCLR \citep{chen2020simple} simplifies the technique significantly and uses an InfoNCE-based formulation to restrict the classification within an individual batch. While it is widely perceived that contrastive learning needs a ``bag of tricks,'' e.g., large batches, hyperparameter tuning, momentum encoding, memory queues, etc., later works \citep{chen2021simsiam,yeh2021decoupled, haochen2021provable} show that many of these issues can be easily fixed. Recently, several even simpler non-contrastive learning methods \citep{bardes2021vicreg, zbontar2021barlow,li2022neural} were proposed, where one directly pushes the representations of different views from the same instance closer while maintaining a non-collapsing representation space. Image SSL methods mostly differ in their means to achieve a non-collapsing solution. These include classification against negative samples \citep{chen2020simple}, Siamese networks \citep{he2020MoCo,grill2020BYOL} and, more recently, covariance regularization \citep{ermolov2021whitening, zbontar2021barlow, bardes2021vicreg, haochen2021provable,li2022neural}.
The covariance regularization has also long been used in many classical unsupervised learning methods \citep{roweis2000nonlinear, tenenbaum2000global, wiskott2002slow, chen2018sparse}, likewise to enforce a non-collapsing solution. In fact, there is a duality between the spectral contrastive loss \citep{haochen2021provable} and the non-contrastive loss, which we prove in Appendix \ref{app:duality_short}.
All previously mentioned instance-based SSL methods pull together representations of different views of the same instance. Intuitively, the representation would eventually be invariant to the transformations that generate those views. We would like to provide further insight into this learning process: the learning objective can be understood as using the inner product to capture the co-occurrence statistics of those image patches. We also provide visualizations to study whether the learned representation truly has this invariance property.
\subsection{Patch-Based Representation}
Many works have explored the effectiveness of patch-based image features. In the supervised setting, BagNet \citep{brendel2018BagNet} and \citet{thiry2021patchinconvkernel} showed that aggregation of patch-based features can achieve most of the performance of supervised learning on image datasets. In the unsupervised setting, \citet{gidaris2020SSLbypredBOW} performs SSL by requiring a bag-of-patches representation to be invariant between different views. Due to architectural constraints, image-transformer-based methods naturally use a patch-based representation \citep{he2021MAESSL,bao2021ImageBert}.
\subsection{Learning Representation by Modeling the Co-Occurrence Statistics}
The use of word vector representations has a long history in NLP, dating back to the 80s \citep{rumelhart1986learning, dumais2004latent}. Perhaps one of the most famous word embedding results, the word vector arithmetic operation, was introduced in \citet{mikolov2013efficient}. Particularly, to learn this embedding, a task called ``skip-gram'' was used, where one uses the latent embedding of a word to predict the latent embeddings of the word vectors in a context. A refinement was proposed in \citet{mikolov2013distributed}, where a simplified variant of Noise Contrastive Estimation (NCE) was introduced for training the ``skip-gram'' model. The task and loss are deeply connected to SimCLR and its InfoNCE loss. Later, a matrix factorization formulation was proposed in \citet{pennington2014glove}, which uses a carefully reprocessed co-occurrence matrix compared to latent semantic analysis. While the tasks in Word2Vec and SimCLR are relatively similar, the underlying interpretations are quite different. In instance-based SSL methods, one pervasive perception is that the encoding network is trying to build invariance, i.e., different views of the same instance shall be mapped to the same latent embedding. This work supplements this classical opinion and shows that, similar to Word2Vec, instance-based SSL methods can be understood as building a distributed representation of image patches by modeling their co-occurrence statistics.
Although there are image SSL methods inspired by word embedding learning \citep{gidaris2020SSLbypredBOW}, the proposed method still uses the invariance view and aims to learn the whole-image feature, while we focus on patch representations. Due to their network architecture, many vision-transformer-based SSL methods also inherently learn a patch-based representation. For example, MAE \citep{he2021MAESSL} generates masked image patches conditioned on other image patches, and ImageBERT \citep{bao2021ImageBert} predicts vector-quantized tokens based on nearby tokens in a context. Consistent with the co-occurrence interpretation, their success suggests that capturing the correlation between image patches is fundamental to learning image representations.
\section{Self-Supervised Image Patch Embedding and Co-Occurrence Statistics Modeling}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=\textwidth]{figures/DRIP_Pipeline_4x.png}
\caption{\textbf{The pipeline of I$^2$ VICReg.} From the same instance, fixed-size image patches are extracted, color-augmented, encoded to embedding and projection space. During training, different image patch projections from the same instance are pulled together while an anti-collapse regularization is applied. After training, different patch embeddings from the same instance are averaged to reach the image representation.}
\label{fig:drip_figure}
\end{center}
\end{figure*}
As mentioned earlier, in contrast to the typical multi-scale augmentation used in instance-based SSL methods, we use fixed-scale crops to learn a representation for fixed-size image patches. As will be shown in Section~\ref{sec:experiments}, any SSL objective can be used, as long as it learns a non-collapsed representation in which different image patches from the same context\footnote{In this work, the context refers to an image. But context could be generalized in straightforward ways.} are close in the projection space, as shown in Figure~\ref{fig:drip_figure}. In this work we mostly use covariance-regularization-based techniques \citep{bardes2021vicreg,zbontar2021barlow,li2022neural,haochen2021provable}, for which we present a general formulation:
\begin{definition}
Intra-instance variance-invariance-covariance regularization (I$^2$ VICReg):
\begin{align}
\underset{\theta}{\text{min}}\, -\mathbb{E}_{p(x_1,x_2)} \left[z_1^T z_2\right],\ \text{s.t.}\ \mathbb{E}_{p(x)}\left[ z z^T\right] = \frac{1}{d_{emb}}\cdot I
\label{opt:DRIP_formal}
\end{align}
\end{definition}
where $z=g(h)$ and $h = f(x;\theta)$. We call $h$ the {\it embedding} and $z$ the {\it projection} of an image patch, $x$. All patches $\{x\}$ have the same size. The parametric function $f(\cdot;\theta)$ is a deep neural network with parameters $\theta$, and $g$ is typically a much simpler neural network with only one or a few fully connected layers. $d_{emb}$ is the dimension of an embedding vector, $z$. This general idea is shown in Figure~\ref{fig:drip_figure}. For an image, we extract fixed-size image patches, which are color-augmented before the embedding\footnote{In several related references, this is also called representation.} $f$ and projection $g$. Given an image patch $x_i$, which is in the red dashed box in Figure~\ref{fig:drip_figure}, the objective tries to make its projection $z_i$ invariant to the projections of the other image patches within the instance. Further, the regularization tries to decorrelate the different projection dimensions of $z$ while maintaining the variance of each dimension. VICReg was proposed in \citet{bardes2021vicreg}, and one concrete example of such a VICReg objective is the following soft-constrained loss function proposed in \citet{li2022neural}:
\begin{align}
\underset{\theta}{\text{min}}\, \mathbb{E}\left[ -\Tr \left(Z_1^T Z_2\right) - \lambda \operatorname{logdet}\left(I + \frac{d_{emb}}{2B\epsilon^2}ZZ^{T}\right) \right]
\label{opt:DRIP_loss}
\end{align}
where $Z = \left[Z_1, Z_2\right]$, the columns of $Z_1$ and $Z_2$ are the projections of the two sets of patch views, $B$ is the batch size, and $\epsilon$ is chosen such that $\epsilon^2 \ll 1$. In this objective function, covariance regularization is achieved by maximizing the Total Coding Rate (TCR) \citep{ma2007segmentation}.
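For reference, a direct transcription of Eqn. (\ref{opt:DRIP_loss}) reads as follows (a minimal PyTorch sketch; the batching and normalization conventions of the actual implementation may differ). Minimizing the negative logdet term maximizes the TCR and thereby prevents collapse.
\begin{verbatim}
import torch

def i2_vicreg_tcr_loss(z1, z2, lam=30.0, eps=0.2):
    """z1, z2: (B, d) projections of two patches from the same instances."""
    B, d = z1.shape
    sim = -(z1 * z2).sum()                     # -Tr(Z1^T Z2), alignment term
    z = torch.cat([z1, z2], dim=0)             # Z = [Z1, Z2]
    cov = z.t() @ z                            # d x d matrix Z Z^T
    eye = torch.eye(d, device=z.device)
    tcr = torch.logdet(eye + d / (2 * B * eps ** 2) * cov)
    return sim - lam * tcr                     # align + anti-collapse (TCR)
\end{verbatim}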
{\bf Relationship to Co-Occurrence Statistics Modeling.}
Assume $x_1$ and $x_2$ are two color-augmented patches sampled from the same image. We denote their marginal distributions by $p(x_1)$ and $p(x_2)$, which include variation due to sampling different locations within an image, random color augmentation, as well as variation due to sampling images from the dataset. We also denote by $p(x_1,x_2)$ their joint distribution, which assumes that $x_1$ and $x_2$ are sampled from the same image. We show that contrastive learning can be understood through the following objective, which approximates the normalized co-occurrence statistics by the inner product of the two projections $z_1$ and $z_2$ generated by $x_1$ and $x_2$:
\begin{align}
\text{min}\ \int p(x_1)p(x_2) \left[ wz_1^Tz_2 - \frac{p(x_1, x_2)}{p(x_1)p(x_2)} \right]^2 dx_1 dx_2
\label{opt:cooccurrence}
\end{align}
where $w$ is a fixed weight used to compensate for scale differences.
\begin{prop}
\label{prop:cooccurrence}
The above optimization problem can be rewritten as the following spectral contrastive form:
\begin{align}
\text{min}\ \mathbb{E}_{p(x_1, x_2)}\left[- z_1^Tz_2\right] + \lambda \mathbb{E}_{p(x_1)p(x_2)} \left(z_1^Tz_2\right)^2
\label{opt:spectral_reg}
\end{align}
\end{prop}
where $\lambda=\frac{w}{2}$. The proof is rather straightforward and is presented in Appendix~\ref{app:proof_1}. As we can see, the first term resembles the similarity term in Eqn~\ref{opt:DRIP_formal}, and the second, spectral contrastive term \citep{haochen2021provable} minimizes the inner product between two independent patch embeddings, which has the effect of orthogonalizing them. As we mentioned earlier, there exists a duality between the spectral contrastive regularization and the covariance regularization term in Eqn~\ref{opt:DRIP_formal}. Please refer to Appendix \ref{app:duality_short} for a more in-depth discussion.
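Within a batch, the objective of Eqn. (\ref{opt:spectral_reg}) can be approximated as below (Python sketch), with non-matching within-batch pairs standing in for samples from $p(x_1)p(x_2)$:
\begin{verbatim}
import torch

def spectral_patch_loss(z1, z2, lam=1.0):
    """z1, z2: (B, d) projections of co-occurring patch pairs."""
    B = z1.shape[0]
    align = -(z1 * z2).sum(dim=1).mean()        # -E_{p(x1,x2)}[z1^T z2]
    gram = z1 @ z2.t()                          # all B x B inner products
    mask = ~torch.eye(B, dtype=torch.bool, device=z1.device)
    reg = (gram[mask] ** 2).mean()              # E_{p(x1)p(x2)}(z1^T z2)^2
    return align + lam * reg
\end{verbatim}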
{\bf Bag-of-Feature Model.} After we have learned an embedding for the fixed-scale image patches, we can embed all of the image patches $\{x_{11},\dots,x_{HW}\}$ within an instance into the embedding space, $\{h_{11},\dots,h_{HW}\}$. Then the whole-image representation $R_{\text{img}}$ is a linear aggregation of all the patches' embeddings, as shown in Figure~\ref{fig:drip_figure}. Depending on the size of the image patches, aggregating a small subset of the patches from the same instance may suffice in practice. E.g., for scale$=0.2$, we find that aggregating $16$ patches achieves similar performance to aggregating all of the patches. We may also aggregate the projections to get the whole-image representation, but the embedding typically contains more equivariance and locality, which leads to better performance. We will show this result in Section~\ref{sec:visualization}.
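The aggregation step itself is trivial; e.g., in PyTorch (sketch, with $f$ denoting the trained patch encoder):
\begin{verbatim}
import torch

@torch.no_grad()
def image_representation(f, patches):
    """patches: (N, C, H, W) fixed-size crops from one image -> (d,) vector."""
    h = f(patches)           # (N, d) patch embeddings
    return h.mean(dim=0)     # bag-of-words style linear aggregation
\end{verbatim}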
\section{Quantitative Empirical Results}
\label{sec:experiments}
Through experiments, we demonstrate that representations learned by self-supervised learning methods trained with fixed-size patches are nearly as strong as those learned with multi-scale crops.
In several cases, pretraining with multi-scale crops and evaluating on the fixed central crop is equivalent in terms of performance to pretraining with fixed-size small patches and evaluating by averaging the embeddings across the image. We further show that for a multi-scale pretrained model, averaging the embeddings of fixed-scale small image patches converges to the embedding generated by the center-cropped image as the number of aggregated patches increases. Thus, the standard practice of using multi-scale pretraining and center-crop evaluation can be viewed as an efficient way to obtain the averaged patch embeddings. Further, we show that the patch-aggregation evaluation can further improve the representation of the baseline models by a significant margin.
Our experiments used the CIFAR-10, CIFAR-100, and the more challenging ImageNet-100 dataset. We also provide a short-epoch ImageNet pretraining to show that with small image patches, the training tends to have lower learning efficiency. In the last section, we will dive into the invariance and equivariance analysis of the patch embedding.
\subsection{CIFAR}
We first provide experimental results on the standard CIFAR-10 and CIFAR-100 datasets \citep{krizhevsky2020cifar}, which contain 10 and 100 classes, respectively. Both contain 50000 training and 10000 testing images. The results are shown in Figure~\ref{fig:cifar_linear_knn}, Tables~\ref{tab:cifar10_other_methods} and \ref{tab:cifar100_other_methods}. We show results obtained using the linear evaluation protocol and the kNN evaluation protocol, which give results consistent with each other. The evaluation is conducted in two different ways. The standard evaluation method generates the embedding using the full image, both during training of the linear classifier and at final evaluation (\textit{Central} in the figures and the tables). Alternatively, an image embedding is generated by inputting a certain number of patches (same scale as at training time and upsampled) into the neural network and aggregating the patch embeddings by averaging. This is denoted as 1, 16, and 256 patches in the figure.
The main observation we make is that pretraining on small patches and evaluating with the averaged embedding performs as well as or better than pretraining with random-scale patches and evaluating with the full-image representation. On CIFAR-10 with the TCR method, the 256-patch evaluation with a fixed pretraining scale of 0.2 outperforms the full-image evaluation with a random pretraining scale between 0.08 and 1, which is the standard scale range. When averaging only 16 patches, the same model performs on par with full-image evaluation.
On the k-NN evaluation, pretraining with random-scale patches that do not span the full range 0.08 to 1.0 gives comparatively much worse performance than under linear evaluation. However, the aggregated embedding does not suffer this degradation, and can still outperform the full-image evaluation. Using the results from Tables \ref{tab:cifar10_other_methods}, \ref{tab:cifar100_other_methods} and \ref{tab:IN100_other_methods}, we can draw the same conclusion on other datasets and other self-supervised methods (VICReg \citep{bardes2021vicreg} and BYOL \citep{grill2020BYOL}).
\textbf{Implementation Details.} For all the experiments, we pretrain a ResNet-34 for 600 epochs. We use a batch size of 1024, the LARS optimizer, and a weight decay of $1e-04$. The learning rate is set to 0.3, and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, $\lambda$ is set to 30.0, and $\epsilon$ is set to 0.2. The projector network consists of 2 linear layers with 4096 hidden units and 128 output units for the CIFAR-10 experiments and 512 output units for the CIFAR-100 experiments. All the layers are separated by a ReLU and a BatchNorm layer. The data augmentations used are identical to those of BYOL.
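For reference, the two patch-sampling regimes compared above can be written with \texttt{torchvision} transforms as follows (a sketch; the output size is set to $32$ for CIFAR, and the remaining BYOL-style augmentations are omitted):
\begin{verbatim}
from torchvision import transforms

# Patch-based pretraining: every crop covers a fixed fraction
# (here 0.2) of the image area.
patch_crop = transforms.RandomResizedCrop(32, scale=(0.2, 0.2))

# Standard pretraining: the crop scale is sampled uniformly
# in the usual range (0.08, 1.0).
multi_scale_crop = transforms.RandomResizedCrop(32, scale=(0.08, 1.0))
\end{verbatim}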
\begin{figure}[t]
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics[trim=0.4cm 0.0cm 0.3cm 0cm, clip,width=1.0\textwidth]{figures/drip_cifar_linear.pdf}\\
(a) Linear Evaluation
\end{minipage}
\hfill
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics[trim=0.4cm 0.0cm 0.3cm 0cm,width=\textwidth]{figures/drip_cifar_knn.pdf}\\
(b) k-NN
\end{minipage}
\caption{\textbf{Evaluation on CIFAR-10 for various \code{RandomResizedCrop} scales.} We evaluate the performance of a linear classifier (a) and a k-NN classifier (b) for pretraining with various patch sizes and various evaluation setups. During pretraining, the patches are sampled using \code{RandomResizedCrop(scale, scale)} for single values, and \code{RandomResizedCrop(min\_scale, max\_scale)} for scale values sampled uniformly between min\_scale and max\_scale. The ``Central'' evaluation is the standard evaluation protocol where the classifier is trained and evaluated on single fixed central patches of the image, which is the entire image for CIFAR-10. For the $n$-patch evaluation, the classifier is trained and evaluated on the linearly-aggregated embedding of $n$ patches, sampled with the same scale factor as during pretraining. Scale 0.08, 0.1, 0.13, 0.2, 0.25 correspond to $9\times 9$, $10\times 10$, $13\times 13$, $14\times 14$, $16\times 16$ image patches respectively. Please note that it is expected that ``Central'' evaluation performs poorly under fixed-scale pretraining, as the model has never seen the entire image during pretraining.}
\label{fig:cifar_linear_knn}
\vspace{-0.1in}
\end{figure}
\begin{table}[t]
\caption{\textbf{Performance on CIFAR-10 for patch-based and standard self-supervised pretraining methods.} We evaluate the performance of a linear classifier for various pretraining methods, both with \textit{Patch-based training}, where patches of scale 0.2 are sampled during pretraining, and \textit{Standard training}, where the patch scale is uniformly sampled between scale 0.08 and 1.0 during pretraining. The ``Central'' evaluation is the standard evaluation protocol where the linear classifier is trained and evaluated on single fixed central patches of the image, which is the whole image for the CIFAR datasets. For the $n$-patch evaluation, the classifier is trained and evaluated on the linearly-aggregated embedding of $n$ patches, sampled with the same scale factor as during pretraining. Scale 0.2 and 0.08 correspond to $14\times 14$ and $9\times 9$ image patches respectively.}
\centering
\vspace{2mm}
\label{tab:cifar10_other_methods}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lcccccccc}
\toprule
Method & \multicolumn{4}{c}{\em Patch-based training} & \multicolumn{4}{c}{\em Standard training} \\
& Central & 1 patch & 16 patches & 256 patches & Central & 1 patch & 16 patches & 256 patches \\
\midrule
TCR & 46.0 & 82.2 & 90.4 & 90.8 & 90.1 & 86.5 & 91.5 & 91.8 \\
VICReg & 47.1 & 83.1 & 90.9 & 91.2 & 90.7 & 87.3 & 91.9 & 92.0 \\
BYOL & 47.3 & 83.6 & 91.3 & 91.5 & 90.9 & 87.8 & 92.3 & 92.4 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\caption{\textbf{Performance on CIFAR-100 for patch-based and standard self-supervised pretraining methods.} We evaluate the performance of a linear classifier for various pretraining methods, both with the \textit{Patch-based training}, where patches of scale 0.2 are sampled, and \textit{Standard training}, where the patch scale is uniformly sampled between scale 0.08 and 1.0. Scale 0.2 and 0.08 correspond to $14\times 14$ and $9\times 9$ image patches respectively.}
\centering
\vspace{2mm}
\label{tab:cifar100_other_methods}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lcccccccc}
\toprule
Method & \multicolumn{4}{c}{\em Patch-based training} & \multicolumn{4}{c}{\em Standard training} \\
& Central & 1 patch & 16 patches & 256 patches & Central & 1 patch & 16 patches & 256 patches \\
\midrule
TCR & 34.6 & 59.2 & 67.1 & 67.3 & 66.8 & 60.5 & 68.1 & 68.3 \\
VICReg & 35.5 & 60.1 & 68.0 & 68.3 & 67.6 & 61.4 & 69.0 & 69.3 \\
BYOL & 37.4 & 60.9 & 68.9 & 69.2 & 68.8 & 62.3 & 69.7 & 69.9 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[t]
\caption{\textbf{Performance on ImageNet-100 for patch-based and standard self-supervised pretraining methods.} We evaluate the performance of a linear classifier with I$^2$ VICReg-TCR, both with \textit{Patch-based training}, where patches of scale 0.2 are sampled during pretraining, and \textit{Standard training}, where the patch scale is uniformly sampled between scale 0.08 and 1.0. Scale 0.2 and 0.08 correspond to $100\times 100$ and $64\times 64$ image patches respectively.}
\centering
\vspace{2mm}
\label{tab:IN100_other_methods}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lcccccccc}
\toprule
Method & \multicolumn{4}{c}{\em Patch-based training} & \multicolumn{4}{c}{\em Standard training} \\
& Central & 1 patch & 16 patches & 48 patches & Central & 1 patch & 16 patches & 48 patches \\
\midrule
TCR & 41.3 & 45.6 & 76.1 & 76.3 & 77.3 & 70.1 & 78.5 & 78.8 \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{ImageNet-100 and ImageNet}
We provide experimental results on the ImageNet-100 and ImageNet datasets \citep{deng2009imagenet}. We present our results using the linear evaluation protocol in Table~\ref{tab:IN100_other_methods} and Figure~\ref{fig:drip_imagenet_linear}. The behavior observed on CIFAR-10 generalizes to ImageNet-100. Averaging the embeddings of 16 small patches produced by the patch-based pretrained model performs almost as well as the ``central'' evaluation of the embedding produced by the baseline model on the ImageNet-100 dataset, as shown in Table~\ref{tab:IN100_other_methods}. In Figure~\ref{fig:drip_imagenet_linear}(b), we show short-epoch pretrained models on ImageNet. As the patch-based pretrained model sees much less information than the baseline multi-scale pretraining, there is a $4.5\%$ gap between the patch-based model and the baseline model.
\textbf{Implementation Details.} For all the experiments, we pretrain a ResNet-50 with the TCR loss for 400 epochs on ImageNet-100, and for 100 epochs on ImageNet. We use a batch size of 1024, the LARS optimizer, and a weight decay of $1e-04$. The learning rate is set to 0.1, and follows a cosine decay schedule, with 10 epochs of warmup and a final value of 0. In the TCR loss, $\lambda$ is set to 1920.0, and $\epsilon$ is set to 0.2. The projector network consists of 3 linear layers with 8192 units each, separated by a ReLU and a BatchNorm layer. The data augmentations used are identical to those of BYOL.
\begin{figure}[t]
\begin{center}
\includegraphics[trim=0.15cm 0.0cm 0.15cm 0cm,width=\textwidth]{figures/ImageNet_Figure_4x.png}
\caption{\textbf{(a) Patch embedding convergence to the instance embedding.} For a baseline multi-scale pretrained VICReg model, the aggregation of $N$ patch embeddings converges to the whole-image embedding as $N$ increases: we plot the cosine similarity between the aggregation of $N$ patch embeddings and the instance embedding, which is the aggregation of all possible patches in the image. \textbf{(b) Linear evaluation on ImageNet for various \code{RandomResizedCrop} scales.} Performance of a linear classifier for various pretraining patch sizes, on the Central, $1$- and $16$-patch evaluation setups. Scale 0.02, 0.08, 0.2 and 1.0 correspond to $32\times 32$, $64\times 64$, $100\times 100$ and $224\times 224$ image patches respectively.}
\label{fig:drip_imagenet_linear}
\end{center}
\vspace{-0.2in}
\end{figure}
\subsection{Patched-Aggregation Based Evaluation with Multi-Scale Pretrained Model}
Our results in the last section show that the best performance is obtained when the pretraining step is done using patches of various sizes and the evaluation step is done using the aggregated patch embeddings. It is therefore interesting to evaluate the embeddings of models pretrained with other self-supervised learning methods, to investigate whether this evaluation protocol provides a uniform performance boost. We perform this evaluation on a VICReg model pretrained for 1000 epochs and a SwAV model pretrained for 800 epochs. All models are downloaded from their original repositories. Table \ref{tab:drip_other_methods} shows the linear evaluation performance on the validation set of ImageNet using the full image and the aggregated embedding. On all the models, the aggregated embedding outperforms full-image evaluation, often by more than 1\%. Increasing the number of patches averaged in the aggregation process further increases the performance. We do not go beyond 48 patches because of memory and runtime constraints, but we hypothesize that a further increase in the number of patches will improve the performance further, as we have demonstrated on CIFAR-10, where 256 patches outperform 16 patches.
\begin{table}[t]
\caption{\textbf{Linear evaluation with aggregated embedding on ImageNet with models trained with state-of-the-art SSL methods.} Using the aggregated embedding outperforms the embedding from the center crop. Central: the embedding from the center-cropped image is used in training and testing, following the standard linear evaluation protocol. 1, 16, and 48 patches: the linear classifier is trained and evaluated on the aggregated embedding of 1, 16, and 48 patches respectively, sampled with the same scale factor range as during pretraining (0.08, 1.0).}
\centering
\vspace{2mm}
\label{tab:drip_other_methods}
\begin{tabular}{lcccc}
\toprule
Method & Central & 1 patch & 16 patches & 48 patches \\
\midrule
VICReg & 73.2 & 57.6 & 74.2 & 74.4\\
BYOL & 74.3 & 59.3 & 75.4 & 75.6\\
SwAV & 75.3 & 60.8 & 75.9 & 76.0\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Convergence of Patch-Based Embedding to Whole-Instance Embedding.}
In this experiment, we show that for a multi-scale pretrained SSL model, the linear aggregation of patch embeddings converges to the instance embedding. We take a multi-scale pretrained VICReg baseline model and use 512 randomly selected images from the ImageNet dataset. For each image, we first compute the embedding of the $224\times 224$ center crop. Then we randomly aggregate $N$ embeddings of different $100\times 100$ image patches and calculate the cosine similarity between the patch-aggregated embedding and the center crop embedding. Figure~\ref{fig:drip_imagenet_linear}(a) shows that the aggregated representation converges to the instance embedding as $N$ increases from $1$ to $16$ to all the image patches\footnote{``All'': extracting overlapping patches with stride $4$ and aggregating the embeddings of about 1000 patches in total.}.
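The following PyTorch-style sketch outlines this measurement (the \texttt{encoder} is the pretrained backbone; function names and sampling details are ours):
\begin{verbatim}
import torch
import torch.nn.functional as F

def aggregation_similarity(encoder, image, n_patches, patch_size=100):
    # image: (C, 224, 224) tensor, already center cropped.
    crops = []
    for _ in range(n_patches):
        i = torch.randint(0, image.shape[1] - patch_size + 1, (1,)).item()
        j = torch.randint(0, image.shape[2] - patch_size + 1, (1,)).item()
        patch = image[:, i:i + patch_size, j:j + patch_size]
        crops.append(F.interpolate(patch[None], size=224,
                                   mode="bilinear",
                                   align_corners=False)[0])
    with torch.no_grad():
        h_agg = encoder(torch.stack(crops)).mean(dim=0)
        h_center = encoder(image[None])[0]
    return F.cosine_similarity(h_agg, h_center, dim=0)
\end{verbatim}
Averaging this quantity over the 512 sampled images for increasing $N$ traces the convergence curve of Figure~\ref{fig:drip_imagenet_linear}(a).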
\section{Patch Embedding Visualization: Invariance or Equivariance?}
\label{sec:visualization}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\textwidth]{figures/main_projection_embedding_knn_4x.png}
\caption{\textbf{Visualization of kNN in the projection space and the embedding space for CIFAR-10.} Distance is calculated by cosine similarity. The query patch is in the top left corner, encircled by a red dashed box; green boxes indicate patches from other images of the same class. Patches without a surrounding box are from the same image as the query. While the nearest neighbors are all from same-category instances, we can see that the embedding space tends to preserve the local part information, whereas the projection space may collapse different parts of the same category.}
\label{fig:cifar_knn_main}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[trim=0.13cm 0.0cm 0.2cm 0cm,width=\textwidth]{figures/Imagenet_heatmap_4x.png}
\caption{\textbf{Visualization of cosine similarity in the projection space and the embedding space.} The query patch is indicated by a red dashed box. The projection and embedding cosine-similarity heatmaps use the same color scaling. The projection vectors are significantly more invariant than the embedding ones, and the embedding space contains localized information that is shared among similar patches when the size of the patches is small enough. We can see that the embedding space tends to preserve more locality than the projection space.}
\label{fig:IN_heatmap_main}
\end{center}
\end{figure}
The instance-augmentation-invariant SSL methods are primarily motivated from an invariance perspective. In this section, we provide CIFAR-10 nearest neighbor and ImageNet cosine-similarity heatmap visualization to further understand the learned representation.
In the CIFAR-10 experiment, we take a model pretrained with $14\times 14$ image patches on CIFAR-10 and calculate the projection and embedding vectors of all different image patches from the training set. Then, for a given $14\times 14$ image patch (e.g., the ones encircled by red dashed boxes in Figure~\ref{fig:cifar_knn_main}), we visualize its $k$ nearest neighbors in terms of cosine similarity in both the projection and the embedding space. Figure~\ref{fig:cifar_knn_main} shows the results for two different image patches. The patches encircled by green boxes are image patches from another instance of the same category, whereas the unmarked patches are from the same instance.
In the ImageNet experiment, we take a multi-scale pretrained VICReg model; then, for a given image patch (e.g., encircled by a red dashed box in Figure~\ref{fig:IN_heatmap_main}), we visualize the cosine similarity between the embedding from this patch and that from the other patches of the same instance. In this experiment, we use two different image patch scales, $71\times 71$ and $100\times 100$. The heatmap visualization is normalized to the same scale.
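Both visualizations reduce to the same primitive: ranking a bank of patch vectors by cosine similarity to a query. A minimal sketch (names are ours) is:
\begin{verbatim}
import torch
import torch.nn.functional as F

def nearest_patches(query, bank, k=8):
    # query: (D,) vector; bank: (N, D) matrix of patch vectors,
    # either embeddings h or projections z = g(h).
    sims = F.cosine_similarity(bank, query[None], dim=1)  # (N,)
    return sims.topk(k).indices
\end{verbatim}
Running it once with an embedding bank and once with a projection bank yields the two spaces compared in Figure~\ref{fig:cifar_knn_main}; reshaping \texttt{sims} over the patch grid of a single image yields the heatmaps of Figure~\ref{fig:IN_heatmap_main}.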
Overall, we observe that the projection vectors are significantly more invariant than the embedding vectors. This is apparent from both Figure~\ref{fig:cifar_knn_main} and Figure~\ref{fig:IN_heatmap_main}. For the CIFAR kNN patches, nearest neighbors in the embedding space are visually much more similar than those in the projection space. In fact, in the embedding space, the nearest neighbors are mostly locally shifted patches with similar ``part'' information. In the projection space, however, many nearest neighbors are patches of different ``part'' information from the same class. E.g., we can see in Figure~\ref{fig:cifar_knn_main} that a nearest neighbor of a ``wheel'' in the projection space might be a ``door'' or a ``window'', whereas the nearest neighbors in the embedding space all contain ``wheel'' information. In the second example, the nearest neighbors of a ``horse legs'' patch in the projection space may show different ``horse'' body parts, whereas the nearest neighbors in the embedding space are all ``horse legs''.
The heatmap visualization on ImageNet, computed with a multi-scale pretrained VICReg model, illustrates the same phenomenon. The projection vector from a patch has a high similarity to that of the query patch whenever the patch has enough information to infer the class of the image. For embedding vectors, the high-similarity area is much more localized around the query patch, or around other patches with similar features (the other leg of the dog in Figure~\ref{fig:IN_heatmap_main}). This general observation is consistent with the visualizations in \citet{bordes2021HDvisualizationofSSLrepre}. We slightly abuse the terminology and call this property of the embedding vectors {\it equivariance}, in contrast to the {\it invariance} possessed by the projection vectors. A more thorough visualization is provided in Appendix~\ref{app:visualization}.
\section{Discussion}
In this paper, we seek to provide an understanding of the success of instance-augmentation-invariant SSL methods. We demonstrate that learning an embedding for fixed-size image patches (I$^2$ VICReg) and linearly aggregating the patch embeddings from the same instance can achieve on-par or even better performance than multi-scale pretraining. On the other hand, with a multi-scale pretrained model, we show that the whole-image embedding is essentially the average of the patch embeddings. Conceptually, we establish the close connection between I$^2$ VICReg and modeling the co-occurrence statistics of patches.
Through visualizing nearest neighbors and cosine-similarity heatmaps, we find that the projection vector is relatively invariant while the embedding vector is instead equivariant, which may explain its higher discriminative performance. This result suggests that the SSL objective, which learns the co-occurrence statistics, encourages an invariant solution, while the more favorable property of equivariance is achieved by the implicit bias introduced by the projector. In the future, it is interesting to explore whether it is possible to directly encourage equivariance in the objective function in a more principled manner, instead of relying on the projector head. For this, prior works in NLP may provide useful guidance. In \citet{pennington2014glove}, word embeddings are learned by fitting the log co-occurrence matrix, which avoids the problem of being dominated by large elements and allows the embedding to carry richer information. Similarly, an SSL objective that implicitly fits the log co-occurrence matrix may learn a more equivariant embedding, which may be an interesting direction for future work.
Many open questions remain in the quest to understand image SSL. For example, it is still unclear why the projector $g$ makes the embedding $h$ more equivariant than the projection $z$. For this, we hypothesize that the role of the projector can be understood as learning a feature representation for a kernel function in the embedding space, since for any $h_1$, $h_2$, the dot product of $g(h_1)$ and $g(h_2)$ always represents a positive semi-definite kernel on the original space: $k(h_1,h_2) = g(h_1)^{T}g(h_2)$. It is possible that the flexible kernel function on the embedding alleviates the excess invariance problem caused by the objective on the projection vectors, which allows the embedding to be more equivariant and perform better. We leave further analysis of this hypothesis to future work.
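The positive semi-definiteness claimed above is easy to check numerically for any concrete projector; the following toy sketch (with a randomly initialized stand-in for $g$) verifies that the Gram matrix $K_{ij} = g(h_i)^{T}g(h_j)$ has non-negative eigenvalues:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(64, 512))               # sampled embeddings h_i
W = rng.normal(size=(512, 128))
g = lambda h: np.tanh(h @ W)                 # toy stand-in projector
G = g(H)                                     # (64, 128)
K = G @ G.T                                  # Gram matrix K_ij
assert np.linalg.eigvalsh(K).min() > -1e-8   # PSD up to round-off
\end{verbatim}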
\clearpage
\section{Introduction} \label{sec:intro}
The \emph{cospark} of a matrix $A \in \mathbb{R}^{m \times n}, m>n$\footnote{We note that the results in this paper can be straightforwardly generalized to complex numbers.}, denoted by $\textit{cospark($A$)}$, is defined to be the \emph{cardinality of the sparsest vector in the column space of $A$} \cite{candes2005decoding}. In other words, $\textit{cospark($A$)}$ is the optimum value of the following $l_0$-minimization problem:
\begin{align}
\underset{x}{\text{minimize}} &~~ ||Ax||_0, \label {cosparkprob}\\
\text{subject to} &~~ x \neq 0, \nonumber
\end{align}
where $||Ax||_{0}$ is the number of nonzero elements in the vector $Ax$.
It is well known that solving \eqref{cosparkprob} is an NP-hard problem. Indeed, it is equivalent to computing the \emph{spark} of an orthogonal complement of $A$ \cite{candes2005decoding}, where the spark of a matrix is defined to be the \emph{smallest number of linearly dependent columns of it} \cite{donoho2003optimally}. Specifically, for $A$ with full column rank, we can find an orthogonal complement $A^{\bot}$ of it,
and \eqref{cosparkprob} is equivalent to
\begin{align}
\underset{x}{\text{minimize}} &~~ ||x||_0, \label{sparkprob}\\
\text{subject to} &~~ A^{\bot}x = 0, ~x \ne 0, \nonumber
\end{align}
and the optimal value of \eqref{sparkprob} is the spark of $A^{\bot}$, denoted by $\textit{spark($A^{\bot}$)}$. Computing the spark is known to be NP-hard \cite{tillmann2014computational}.
The role of $cospark(A)$ has been studied in decoding under sparse measurement errors where $A$ is the coding matrix \cite{candes2005decoding}. In particular, $\frac{cospark(A)}{2}$ gives the maximum number of errors that an ideal decoder can tolerate for exact recovery. Closely related to this is the role of $spark(A^{\bot})$ in characterizing the ability to perform compressed sensing \cite{candes2005decoding} \cite{donoho2003optimally}. The spark is also related to notions such as mutual coherence \cite{donoho2003optimally}\cite{gribonval2003sparse} and the Restricted Isometry Property (RIP) \cite{candes2005decoding} \cite{candes2006stable},
which provide conditions under which sparse recovery can be performed using $l_1$ relaxation.
Last but not least, in addition to its role in the sparse recovery literature, cospark \eqref{cosparkprob} also plays a central role in security problems in cyber-physical systems (see \cite{zhao2016minimum} among others).
In this paper, we study the problem of computing the cospark of a matrix. Although $\eqref{cosparkprob}$ is proven to be an NP-hard problem, we show that the cospark that a matrix ``generically'' has can in fact be computed in polynomial time. Specifically, given the ``sparsity pattern'' (i.e., the locations of all the non-zero entries of $A$), $cospark(A)$ equals, \emph{with probability one}, a particular number which we term the \emph{generic cospark} of $A$, if the non-zero entries of $A$ are drawn from independent continuous probability distributions. We then develop an efficient algorithm that computes the generic cospark in \emph{polynomial time}.
\section{Preliminaries} \label{sec:prelim}
\subsection{Generic Rank of a Matrix}
For a matrix $A\in \mathbb{R}^{m\times n}$, we define its $\textit{sparsity pattern}$ $S = \{ (i,j) | A_{ij} \ne 0, 1 \le i \le m, 1 \le j \le n \}$. Given a sparsity pattern $S$, we denote by $A^{S}$ the set of all matrices with sparsity pattern $S$ over the field $\mathbb{R}$. Since there is a one-to-one mapping between $S$ and $A^{S}$, we use $S$ and $A^{S}$ interchangeably to denote a sparsity pattern in the remainder of the paper.
The generic rank of a matrix with sparsity pattern $S$ is defined as follows.
\begin{definition}[Generic Rank]\label{def:grank}
Given $S$, the \textit{generic rank} of $A^{S}$ is $sprank(A^{S}) \triangleq \sup_{A \in A^{S}} rank(A)$.
\end{definition}
Clearly, if $sprank(A^S) < n$, the optimal value of \eqref{cosparkprob} is zero.
We will thus focus on the case $sprank(A^S) = n$ for the remainder of the paper.
The following lemma states that the generic rank indeed ``generically'' equals to the rank of a matrix \cite{sprankbook}.
\begin{lemma} \label{lem:grank}
Given $S$, $rank(A) = sprank(A^S)$ with probability one, if the non-zero entries of $A$ are drawn from independent continuous probability distributions.
\end{lemma}
\subsection{Matching Theory Basics}
We now introduce some basics from classical matching theory \cite{diestel2016graph} which are necessary for us to introduce the results in the remainder of the paper.
For a bipartite graph $G(X,Y,E)$, a subset of edges $\mathcal{N} \subseteq E$ is a $\textit{matching}$ if all the edges in $\mathcal{N}$ are vertex disjoint. A $\textit{max matching}$ from $X$ onto $Y$
is a matching with the maximum cardinality. A \emph{perfect matching from $X$ onto $Y$} is a max matching where every vertex in $Y$ is incident to an edge in the matching.
Consider a (not necessarily maximum) matching $\mathcal{N}$. A vertex is called \textit{matched} if it is incident to some edge in $\mathcal{N}$, and \textit{unmatched} otherwise. An $\textit{alternating path}$ with respect to $\mathcal{N}$ is a path which alternates between using edges in $E \setminus \mathcal{N}$ and edges in $\mathcal{N}$, or vice versa. An $\textit{augmenting path}$ w.r.t $\mathcal{N}$ is an alternating path w.r.t. $\mathcal{N}$ which starts and ends at unmatched vertices.
With an augmenting path $P$,
it can be easily shown that the symmetric difference\footnote{The \emph{symmetric difference} of two sets $S_1$ and $S_2$ is defined as $S_1\oplus S_2 = \left(S_1 \cup S_2\right) \setminus \left(S_1\cap S_2\right)$.}
$\mathcal{N} \oplus P$ gives a matching with size $|\mathcal{N}| + 1$.
\subsection{Generic Rank as Max Matching}
We now introduce an equivalent definition of generic rank via matching theory.
A sparsity pattern $A^S$ can be represented as a \emph{bipartite graph} as follows \cite{sprankbook}. Let $G(X,Y,E)$ be a bipartite graph whose a) vertices $X = \{1,2,\ldots,m\}$ correspond to all the row indices of $A^{S}$, b) vertices $Y = \{1,2,\ldots,n\}$ correspond to all the column indices of $A^{S}$, and c) edges in $E=S$ correspond to all the non-zero entries of $A^S$. Accordingly, we also denote the bipartite graph for a sparsity pattern $S$ by $G(X, Y, S)$.
The following lemma states the equality between $sprank(A^{S})$ and the max matching of $G(X,Y,S)$ \cite{sprankbook}.
\begin{lemma} \label{lem:matchgrank}
Given $G(X,Y,S)$, the generic rank $sprank(A^{S})$ equals the cardinality of a maximum bipartite matching of $G$.
\end{lemma}
Accordingly, finding a max matching on this graph using the Hopcroft-Karp algorithm allows us to find the generic rank with $\mathcal{O}(|S|\sqrt{m+n})$ complexity \cite{hopcroft1973n}.
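For instance, using the \texttt{networkx} implementation of the Hopcroft--Karp algorithm, the generic rank of a sparsity pattern can be computed along the following lines (a sketch; $S$ is given as a set of index pairs with 0-based indices):
\begin{verbatim}
import networkx as nx
from networkx.algorithms import bipartite

def sprank(S, m, n):
    # S: set of (i, j) positions of the nonzero entries of an m x n
    # sparsity pattern; row i -> node ('r', i), column j -> ('c', j).
    G = nx.Graph()
    G.add_nodes_from(('r', i) for i in range(m))
    G.add_nodes_from(('c', j) for j in range(n))
    G.add_edges_from((('r', i), ('c', j)) for (i, j) in S)
    rows = {('r', i) for i in range(m)}
    M = bipartite.hopcroft_karp_matching(G, top_nodes=rows)
    return len(M) // 2   # each matched pair appears twice in M
\end{verbatim}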
\section{Generic Cospark}
Similarly to the supremum definition of generic rank (cf. Definition \ref{def:grank}), given the sparsity pattern of a matrix, we define \emph{generic cospark} as follows.
\begin{definition} [Generic Cospark]
Given $S$, the $\textit{generic cospark}$ of $A^{S}$ is $spcospark(A^{S}) \triangleq \sup_{A \in A^{S}} cospark(A)$.
\end{definition}
In a spirit similar to the multiple interpretations of generic rank as in Section \ref{sec:prelim}, we provide a \emph{probabilistic} view and a \emph{matching theory} based view of generic cospark as follows.
\subsection{Cospark Equals to Generic Cospark With Probability One}
For any $T \subset [m]$, let $A_T$ and $A^{S}_{T}$ denote the matrix $A$ and the set of matrices $A^{S}$ restricted to the rows in $T$, respectively. A class of matrices whose cospark equals the generic cospark consists of those satisfying the following property:
\begin{lemma} \label{lem:matppt}
Given any sparsity pattern $S$ so that $sprank(A^{S}) = n$ for $A^{S} \subset \mathbb{R}^{m \times n}$,
for any $A\in A^S$,
if $rank(A_T) = sprank(A^{S}_{T}), \forall T \subseteq [m]$, then $cospark(A) = spcospark(A^S)$.
\end{lemma}
\begin{proof}
Let $x^* = \argmin_{x \ne 0} ||Ax||_0$, and suppose $U = \{ i | a_i x^{*} = 0 \}$, where $a_i$ is the $i$th row of $A$. Since $A_Ux^{*} = 0$, $rank(A_U) < n$. Now consider another matrix $C \in \mathbb{R}^{m \times n}$ with sparsity pattern $S$.
Since $rank(C_U) \le rank(A_U) = sprank(A^{S}_U) < n$, $\ker(C_U)$ is also nonempty, meaning there exists a nonzero vector $h \in \mathbb{R}^{n}$ such that $C_Uh = 0$. Because $A_{U^c}x^{*}$ has no zero entries, we also have $||C_{U^c}h||_0 \le ||A_{U^c}x^{*}||_0 = ||Ax^{*}||_0$. This means $||Ch||_0 = ||C_Uh||_0 + ||C_{U^c}h||_0 \le ||A_{U^c}x^{*}||_0 = ||Ax^{*}||_0$. Hence, if $\hat{x} = \argmin_{x \ne 0} ||Cx||_0$, it follows $cospark(C) = ||C\hat{x}||_0 \le ||Ch||_0 \le ||Ax^*||_0 = cospark(A)$, which proves the lemma.
\end{proof}
We note that the property $rank(A_T) = sprank(A^{S}_{T}), \forall T \subseteq [m]$ is known as the $\textit{matching property}$ of matrix $A$ \cite{mccormick1983combinatorial}.
Now, we have the following theorem showing that the generic cospark indeed ``generically'' equals to the cospark.
\begin{Theorem} \label{thm:gcospark}
Given $S$, $cospark(A) = spcospark(A^S)$ with probability one, if the non-zero entries of $A$ are drawn from independent continuous probability distributions.
\end{Theorem}
\begin{proof}
If we have a matrix $A$ with sparsity pattern $S$ whose non-zero entries are drawn from independent continuous distributions, then every row submatrix of $A$ has rank equal to its generic rank with probability one (cf.\ Lemma \ref{lem:grank}).
This immediately implies $cospark(A) = spcospark(A^S)$ w. p. 1 by Lemma \ref{lem:matppt}.
\end{proof}
\subsection{A Matching Theory based Definition of Generic Cospark}
Let $G(X,Y,S)$ be the bipartite graph corresponding to $A^{S}\subseteq \mathbb{R}^{m \times n}$.
For a subset of vertices $Z\subseteq X$, we define the induced subgraph $G(Z)$ as the bipartite graph $G(Z, N(Z), \{ (i,j) | i \in Z\ , j \in N(Z)\})$, where $N(Z)$ denotes the vertices in $Y$ adjacent to the set $Z$.
$G(Z)$ is essentially a bipartite graph corresponding to
the submatrix $A^{S}_{Z}$.
We then have the following.
\begin{lemma} \label{lem:opt}
Given G(X, Y, S), let $OPT \subset X$ be a largest subset such that the induced subgraph $G(OPT)$ has a max matching of size $n-1$. We have that
$\textit{spcospark($A^S$)} = m - |OPT|$.
\end{lemma}
The intuition behind this matching-theory-based definition of $\textit{spcospark($A^S$)}$ is the following. Finding the sparsest vector in the image of $A$ is equivalent to finding a largest set of rows of $A$, $OPT$, which spans an $(n-1)$-dimensional subspace. With such a subset $OPT$, we can find a vector $x^{*}$ that satisfies $A_{OPT}x^* = 0$, and it is clear that $x^{*} \in \argmin_{x \ne 0} {||Ax||_0}$.
Furthermore, based on the equivalence between generic rank and max matching from Lemma \ref{lem:matchgrank}, we arrive at the matching theory based definition of generic cospark in Lemma \ref{lem:opt}.
\section{Efficient Algorithm for Computing Generic Cospark}
In this section, we introduce an efficient algorithm that computes the \emph{generic} cospark.
This algorithm is based on a greedy approach motivated by Lemma \ref{lem:opt}.
Given $G(X,Y,S)$, for any size $n-1$ subset of vertices $W \subset Y$, we define $X_W = \{x \in X | N(x) \subseteq W \}$. In other words, $X_W$ is the index set of rows of $A^{S}$ with a \emph{zero} entry in the remaining coordinate $v = Y \setminus W$.
We use $X_W$ as a basis to construct a candidate solution for $OPT$. The idea is to add a maximal subset of vertices $B \subset X_{W}^{c}$ to $X_W$, such that $\overline{X_W} = X_W \cup B$ has a matching of size $n-1$ into $Y$. Specifically, we keep adding vertices $t \in X_{W}^{c}$ to $B$ as long as the submatrix corresponding to the index set $X_W \cup B \cup \{t\}$ has generic rank no greater than $n-1$.
The following lemma shows that adding a vertex to $B$ can only increase the generic rank of $X_W \cup B$ by \emph{at most one}.
\begin{lemma} \label{lem:inc1}
Given $G(X,Y,S)$, $\forall Z \subset X$ and $u \in X \setminus Z$,
$sprank(A^{S}_{Z \cup \{u\}}) \le sprank(A^{S}_{Z}) + 1$.
\end{lemma}
\begin{RK}
For a given $W$, depending on the order we visit the vertices in $X_{W}^{c}$, we could end up with different sets $B$, possibly of different sizes.
However, we will prove that the optimal solution is recovered regardless.
\end{RK}
$\overline{X_W}, \forall W$, are the candidate solutions for $OPT$, and we obtain the optimal solution by choosing the $\overline{X_W}$ with the largest cardinality, i.e., $X_f = \argmax_{W \subset Y, |W| = n-1} |\overline{X_W}|$. The generic cospark of $A^S$ then equals $m - |{X_f}|$.
The detailed algorithm is presented in Algorithm \ref{spcospark}.
\begin{algorithm}
\caption{Computing Generic Cospark}\label{spcospark}
\begin{algorithmic}[1]
\Procedure{spcospark}{$A^{S}$}
\State Initialization: Set $B = \emptyset, t = \emptyset$, and $X_f$ = $\emptyset$
\ForAll{ $W \subset Y$ of cardinality $n-1$}
\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Scan through all $m$ vertices in $X$ \\ to find $X_W$ and let $T = X_{W}^{c}$ \strut}
\State Calculate $sprank(A^{S}_{X_W})$
\While {$sprank(A^{S}_{X_W \cup B \cup \{t\}}) \le n-1$}
\State Let $B = B \cup t$ \strut
\State \parbox[t]{\dimexpr\linewidth-\algorithmicindent}{Choose any element ${t}$ from $T$, and \\ set $ T = T \setminus t$}
\EndWhile \ and let $\overline{X_W} = X_{W} \cup B$
\If{$|X_f| < |\overline{X_W}|$}
\State Set $X_f = \overline{X_W}$
\EndIf
\State Set $B = \emptyset$
\EndFor
\State Return $X_f$, and $spcospark(A^S) = m - |{X_f}|$.
\EndProcedure
\end{algorithmic}
\end{algorithm}
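A straightforward (unoptimized) Python transcription of Algorithm~\ref{spcospark} is sketched below, reusing the \texttt{sprank} helper sketched in Section~\ref{sec:prelim}. It examines every vertex of $T$ rather than stopping at the first failure; by the remark above, the optimum is recovered in either case. Recomputing \texttt{sprank} from scratch in every step is wasteful but keeps the sketch short; the incremental augmenting-path update described in the complexity analysis below gives the stated running time.
\begin{verbatim}
import itertools

def spcospark(S, m, n):
    # S: set of (i, j) nonzero positions (0-based) of an m x n pattern.
    rows = {i: {j for (r, j) in S if r == i} for i in range(m)}
    best = set()
    for W in itertools.combinations(range(n), n - 1):
        W = set(W)
        X_W = {i for i in range(m) if rows[i] <= W}
        sub = {(i, j) for (i, j) in S if i in X_W}
        B = set()
        for t in (i for i in range(m) if i not in X_W):
            cand = sub | {(t, j) for j in rows[t]}
            if sprank(cand, m, n) <= n - 1:
                sub, B = cand, B | {t}
        if len(X_W) + len(B) > len(best):
            best = X_W | B
    return m - len(best)
\end{verbatim}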
\section{Proof of Optimality of Algorithm \ref{spcospark}}
In this section, we prove that Algorithm \ref{spcospark} indeed solves the generic cospark. It is sufficient to prove that the set $X_f$ returned by the Algorithm satisfies the definition of $OPT$ in Lemma \ref{lem:opt}, i.e., $X_f$ is a subset of vertices of the largest size such that the induced subgraph $G(X_f)$ has a max matching of size $n-1$.
Since $G(X_f)$ by construction has a max matching of size $n-1$, \emph{it is sufficient to prove that $X_f$ has the largest size, i.e., $|X_f| = |OPT|$}.
To prove this, let us consider an optimal set $OPT\subset X$. We denote by $\mathcal{M}$ the set of $n-1$ edges of a max matching of $G(OPT)$. We denote by $W^{*} \subset Y$ the set of $n-1$ vertices in $Y$ corresponding to this max matching, and denote by $v = Y \setminus W^{*}$ the remaining vertex in $Y$. We will show that, starting with $W^*$, Algorithm \ref{spcospark} will return an $X_f$ such that $|X_f| \ge |OPT|$, and hence $|X_f| = |OPT|$.
As the notations for this section are quite involved, an illustrative diagram is plotted in Figure \ref{fig:proof} to help clarify the proof procedure in the following.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.60]{proof_drawing.pdf}
\caption{The above graph represents a sketch of the partition of the nodes. The black continuous line segments are unmatched edges in the bipartite graph. The red continuous line segments comprise $\mathcal{M}$, which forms a $n-1$ matching from $\mathcal{I}$ to $W^*$. The pair of pink dotted line segments denote the range of vertices $X_{W^*}$ is incident to. Finally, the pair of green dotted line segments denote the range of vertices $\mathcal{J}$ is incident to.}
\label{fig:proof}
\end{figure}
We first partition $OPT$ into $OPT = \mathcal{I} \cup \mathcal{J}$, $\mathcal{I}\cap\mathcal{J} = \emptyset$, where $\mathcal{I}$ is the set of $n-1$ vertices in $OPT$ corresponding to the max matching $\mathcal{M}$. Hence, $\mathcal{I}$ perfectly matches onto $W^*$ with $\mathcal{M}$. $\mathcal{J}$ consists of the remaining vertices in $OPT$ unmatched by $\mathcal{M}$.
We may assume that $\mathcal{J}$ is nonempty: if $\mathcal{J}$ is empty, we immediately have $|OPT| = n - 1 \le |X_f|$.
We then have the following lemma about $\mathcal{I}$ and $\mathcal{J}$.
\begin{lemma} \label{lem:JinXW}
For any such partition $OPT = \mathcal{I} \cup \mathcal{J}$, we have that $\mathcal{J} \subset X_{W^*}$, and $\mathcal{I} \cap X_{W^*}$ is nonempty.
\end{lemma}
\begin{proof}
Let $OPT$ be partitioned into $\mathcal{I} \cup \mathcal{J}$. Suppose $j \in \mathcal{J}$. If $j \notin X_{W^*}$, then $j$ is incident to $v$, which means $\mathcal{I} \cup \{j\}$ has a perfect matching onto $Y$. This contradicts the fact that $OPT$ has no perfect matching onto $Y$. Now suppose $\mathcal{I} \cap X_{W^*}$ is empty. Then every vertex in $\mathcal{I}$ is incident to $v$. Since $\mathcal{I}$ has a perfect matching onto $W^{*}$ and vertices in $\mathcal{J}$ are incident to vertices in $W^{*}$, it follows that there exists an augmenting path from any vertex in $\mathcal{J}$ to $v$, which contradicts $sprank(A^S_{OPT}) = n-1$.
\end{proof}
Accordingly, we can partition $X_{W^*} = \mathcal{C} \cup\mathcal{J}, ~ \mathcal{C}\cap\mathcal{J} = \emptyset$, with $\mathcal{C}\triangleq X_{W^*} \setminus \mathcal{J}$.
Starting from here, the general idea of proving $|X_f| \ge |OPT|$ is to lower bound
\begin{align}
|X_f| = |X_{W^*}\cup B| = |\mathcal{C}\cup\mathcal{J}\cup B| = |\mathcal{C}| + |\mathcal{J}| + |B|. \label{eq:tolowerbound}
\end{align}
We immediately have the following lower bound on $|B|$:
\begin{align}
|B| \ge (n-1) - sprank(A^{S}_{X_{W^*}}). \label{eq:ineb}
\end{align}
This is because a) Algorithm \ref{spcospark} guarantees that $sprank(A^S_{X_{W^*}\cup B}) = n-1$, and b) every time we add a new vertex $t$ to $B$ (cf. Line 7 in Algorithm \ref{spcospark}), $sprank(A^S_{X_{W^*}\cup B})$ increases by at most one (cf. Lemma \ref{lem:inc1}). Since the initial generic rank is $sprank(A^{S}_{X_{W^*}})$, we need at least $(n-1) - sprank(A^{S}_{X_{W^*}})$ vertices added to $B$ to reach $sprank(A^S_{X_{W^*}\cup B}) = n-1$.
We next devote the majority of this section to provide a lower bound on $|\mathcal{C}|$.
\subsection{Lower Bounding $|\mathcal{C}|$}
The key result we will rely on in this subsection is the following:
\begin{Theorem} \label{thm:noj}
For the induced bipartite graph $G(X_{W^*})$, there exists a max matching that does not touch any vertices in $\mathcal{J}$.
\end{Theorem}
To prove Theorem \ref{thm:noj}, we start with a partial matching $\mathcal{M}_p \subset \mathcal{M}$ consisting only edges that touch $\mathcal{I} \cap X_{W^*}$. In other words, $\mathcal{M}_p = \{(i,j) \in \mathcal{M} | i \in \mathcal{I} \cap X_{W^*}\}$.
The idea is that we will build a max matching starting from $\mathcal{M}_p$, and this max matching will not touch any vertices in $\mathcal{J}$, thus proving Theorem \ref{thm:noj}.
We have the following two lemmas.
\begin{lemma} \label{lem:njmat}
For the induced bipartite graph $G(X_{W^*})$ with $\mathcal{M}_p$ as a (not necessarily max) matching, any vertex in $N(\mathcal{J})$ is incident to some edge in $\mathcal{M}_p$, i.e., already matched.
\end{lemma}
\begin{proof}
First, note that any $j \in \mathcal{J}$ is not incident to $v$, so any vertex $k \in N(j)$ is in $W^*$. Now, for any $j \in \mathcal{J}$ and any $k \in N(j)$, we want to prove that $k$ is incident to some edge in $\mathcal{M}_p$. Since $k \in W^*$, $k$ is incident to some edge $(i,k) \in \mathcal{M}$. As $\mathcal{M}$ is the perfect matching from $\mathcal{I}$ onto $W^*$, certainly $i \in \mathcal{I}$. On the other hand, $i$ cannot be incident to $v$, since otherwise there would exist a length-$3$ augmenting path from $j$ to $k$ to $i$ to $v$. Hence, $i \in X_{W^*}$, and the claim is proven.
\end{proof}
\begin{lemma} \label{lem:nojtou}
For the induced bipartite graph $G(X_{W^*})$ with $\mathcal{M}_p$ as a (not necessarily max) matching, there exists no augmenting path starting from any $j \in \mathcal{J}$.
\end{lemma}
\begin{proof}
For any vertex $u \in N(X_{W^*}) \setminus N(\mathcal{J})$ that is unmatched w.r.t.\ $\mathcal{M}_p$, suppose there is an augmenting path from $j$ to $u$ using edges in $\mathcal{M}_p$. Since $u$ is unmatched in the induced graph $G(X_{W^*})$ w.r.t.\ $\mathcal{M}_p$ and $u \in W^*$, there exists an edge $(i,u) \in \mathcal{M} \setminus \mathcal{M}_p$ incident to $u$. Because $(i,u) \in \mathcal{M} \setminus \mathcal{M}_p$, $i$ must be incident to $v$. This means that if there exists an augmenting path from $j$ to $u$ w.r.t.\ $\mathcal{M}_p$, then there must exist an augmenting path from $j$ to $v$ w.r.t.\ $\mathcal{M}$, which contradicts the fact that vertices in $\mathcal{J}$ have no augmenting paths to $v$.
\end{proof}
Lemma \ref{lem:nojtou} implies that all augmenting paths w. r. t. the partial matching $\mathcal{M}_p$ are from unmatched vertices in $\mathcal{C} \setminus \mathcal{I}$ (where $\mathcal{C} = X_{W^*} \setminus \mathcal{J}$) to unmatched vertices in $N(X_{W^*}) \setminus N(\mathcal{J})$. A corollary which will prove useful is the following:
\begin{corollary}
Suppose $P$ is an augmenting path from $c \in \mathcal{C} \setminus \mathcal{I}$ to $u \in N(X_{W^*}) \setminus N(\mathcal{J})$ w. r. t. the matching $\mathcal{M}_p$. Then for any $j \in \mathcal{J}$, there exists no alternating path w. r. t. $\mathcal{M}_p$ from $j$ to any vertex in $P$.
\end{corollary}
\begin{proof}
Let $P$ be an augmenting path from $c$ to $u$ w.r.t.\ $\mathcal{M}_p$. Suppose there exists an alternating path $P'_{jp}$ from $j$ to a vertex $p$, where $p$ is the first vertex of $P$ encountered when traversing $P'_{jp}$. $P'_{jp}$ must have an odd number of edges, since $p$ is a matched vertex of $P$ and $j$ is unmatched. Since $P'_{jp}$ is odd, $p \in N(X_{W^*})$. Hence, if $P_{cp} \subset P$ is the restriction of $P$ from $c$ to $p$, then the alternating path $P_{cp}$ must also have odd length. The total length of $P$ must be odd since $P$ is an augmenting path, which means the length of the alternating path from $p$ to $u$ in $P$ must be even.
Since $P'_{jp}$ is an odd alternating path from $j$ to $p$, and the alternating path from $p$ to $u$ in $P$ is even, the concatenated alternating path from $j$ through $p$ to $u$ is odd. Furthermore, $j$ and $u$ are unmatched, so this path is actually an augmenting path, which immediately contradicts Lemma \ref{lem:nojtou}.
\end{proof}
From Corollary 1, any alternating path starting from $j$ w. r. t. $\mathcal{M}_p$ is \emph{vertex disjoint} to any augmenting path $P$. This implies that a) any alternating path from $j$ w. r. t. $\mathcal{M}_p \oplus P$ remains an alternating path, and b) there remains no augmenting path starting from $j$ w. r. t. $\mathcal{M}_p \oplus P$, i.e., \emph{Lemma \ref{lem:nojtou} continues to hold for $G(X_{W^*})$ with a new matching $\mathcal{M}_p \oplus P$}.
We are now ready to prove Theorem \ref{thm:noj}.
\begin{proof}[Proof of Theorem \ref{thm:noj}]
Take $\mathcal{M}_p$ to be an initial matching into $N(X_{W^*})$. By Lemma \ref{lem:njmat}, all vertices in $N(\mathcal{J})$ are now matched, and Lemma \ref{lem:nojtou} tells us that we are left with augmenting paths starting from unmatched vertices in $\mathcal{C} \setminus \mathcal{I}$ to unmatched vertices in $N(X_{W^*}) \setminus N(\mathcal{J})$. If $P_1$ is one such augmenting path, then $\mathcal{M}_p \oplus P_1$ is a matching with one greater cardinality. By Corollary 1, all alternating paths w.r.t.\ $\mathcal{M}_p$ starting from $j$ are vertex disjoint from $P_1$, which implies that alternating paths starting from $j$ remain unchanged. Furthermore, Corollary 1 tells us that $\mathcal{M}_p \oplus P_1$ does not have augmenting paths starting from $j$. Hence, the only remaining augmenting paths are still from vertices in $\mathcal{C} \setminus \mathcal{I}$ to vertices in $N(X_{W^*}) \setminus N(\mathcal{J})$. If $P_2$ is such an augmenting path, we can now repeat the above procedure and compute the matching $\mathcal{M}_p \oplus P_1 \oplus P_2$. Again, alternating paths starting from $j$ remain unchanged, and $\mathcal{M}_p \oplus P_1 \oplus P_2$ contains no augmenting paths starting from $j$. We can repeat this procedure until all augmenting paths from $\mathcal{C} \setminus \mathcal{I}$ to $N(X_{W^*}) \setminus N(\mathcal{J})$ are eliminated. Since the final matching obtained this way has no augmenting paths, this final matching is optimal, and its edges are incident to no vertices in $\mathcal{J}$.
\end{proof}
As a result of Theorem \ref{thm:noj}, there exists a max matching of the bipartite graph $G(X_{W^*})$ that, on the ``left hand side'' of the graph, only touches vertices in $\mathcal{C} = X_{W^*}\setminus \mathcal{J}$. Since the size of the max matching of $G(X_{W^*})$ equals to $sprank\left(A^S_{X_{W^*}}\right)$ (cf. Lemma \ref{lem:matchgrank}), we arrive at the following lower bound on $|\mathcal{C}|$:
\begin{align}
|\mathcal{C}| \ge sprank\left(A^S_{X_{W^*}}\right). \label{eq:inec}
\end{align}
\subsection{Proof of the Optimality of Algorithm \ref{spcospark}}
We now show that Algorithm \ref{spcospark} indeed returns the generic cospark as in the following theorem.
\begin{Theorem}
For the $X_f$ that Algorithm \ref{spcospark} returns, we have that $|X_f| = |OPT|$.
\end{Theorem}
\begin{proof}
By the definition of $OPT$, $|X_f| \le |OPT|$. To prove $|X_f| \ge |OPT|$, starting from \eqref{eq:tolowerbound},
\begin{align}
|X_f| & = |\mathcal{C}| + |\mathcal{J}| + |B| \\
& \ge sprank(A^{S}_{X_{W^*}}) + |\mathcal{J}| + |B| \label{eq:ine1} \\
& \ge sprank(A^{S}_{X_{W^*}}) + |\mathcal{J}| + (n-1) - sprank(A^{S}_{X_{W^*}}) \label{eq:ine2} \\
& = |\mathcal{J}| + (n-1) = |\mathcal{I}| + |\mathcal{J}| = |OPT|,
\end{align}
where \eqref{eq:ine1} is from \eqref{eq:inec}, and \eqref{eq:ine2} is from \eqref{eq:ineb}.
\end{proof}
\section{Algorithm Complexity}
We now show that Algorithm \ref{spcospark} is efficient, and provide an upper bound on its computational complexity.
\begin{Theorem}\label{thm:comp}
Given any $S$,
Algorithm \ref{spcospark} computes $spcospark(A^{S})$ in $\mathcal{O}(nm(1+|S|))$ time.
\end{Theorem}
\begin{proof}
Observe that in the pseudocode above, step 3 runs over $n$ iterations. In each iteration, steps 4 to 9 are the most computationally expensive. Step 4 requires an $\mathcal{O}(m)$ scan of the rows of $A^{S}$, and step 5 requires computing a maximum matching using the Hopcroft--Karp algorithm, which can be done in $\mathcal{O}(|S|\sqrt{m+n})$ time.
For the loop in steps 6 to 9, we do not need to recalculate $sprank(A^{S}_{X_{W} \cup B})$ in every iteration. Given the max matching from the previous iteration, we only need to check whether the new vertex $t$ added to $B$ has an augmenting path to an unmatched vertex in $Y$. Searching for this augmenting path requires a breadth-first search (BFS) or depth-first search (DFS), which can be done in $\mathcal{O}(|S|)$ time. Since there are $\mathcal{O}(m)$ iterations of the while loop, the total cost of steps 6 to 9 is $\mathcal{O}(m|S|)$.
Hence, for every iteration of step 3, the total cost is $\mathcal{O}(m + |S|\sqrt{m+n} + m|S|) = \mathcal{O}(m(1+|S|))$ since $n \le m$. It follows immediately that our total running time is $\mathcal{O}(nm(1+|S|))$.
\end{proof}
From Theorem \ref{thm:comp}, if $A^{S}$ is extremely sparse, the running time of Algorithm \ref{spcospark} is essentially \emph{quadratic}.
\begin{RK}
The algorithm's bottleneck lies in steps 6--9. For each row $t$ to add, we need one BFS. Since we add $\mathcal{O}(m)$ such vertices, the total complexity of these steps is $\mathcal{O}(m|S|)$, as in the above proof. To improve this complexity, we would like to detect multiple candidate rows to add to $B$ using a single BFS. Indeed, it can be shown that steps 6--9 of Algorithm \ref{spcospark} can be improved to $\mathcal{O}(\sqrt{m}|S|)$ based on an idea similar to Hopcroft--Karp matching \cite{hopcroft1973n}. This improves the total running time of Algorithm \ref{spcospark} to $\mathcal{O}(n\sqrt{m}|S|)$. Details are omitted here.
\end{RK}
\section{Experimental Results for Verification}
We compare the results of our algorithm for finding the \emph{generic cospark} with a brute-force algorithm for finding the \emph{cospark}.
Because the brute-force algorithm has a computational complexity of $\mathcal{O}(m^n)$, we limit the size of the test matrices to $m = 20$ and $n = 5$.
We run our comparison over 10 different sparsity levels spaced equally between zero and one. For each sparsity level, we generate 50 matrices, where the locations of the non-zero entries are chosen uniformly at random given the sparsity level, and the values of the non-zero entries are drawn from independent uniform distributions on $[0,1]$. For each of these 50 matrices, we compare the generic cospark given by Algorithm \ref{spcospark} with that given by the brute-force method. In every case, the solutions of the two algorithms match. These results support the fact that our algorithm not only computes the generic cospark in polynomial time, but also obtains the actual cospark with probability one if the non-zero entries are drawn from independent continuous probability distributions.
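One possible implementation of the brute-force baseline is sketched below (assuming $A$ has full column rank; for such $A$, the sparsest vector $Ax^*$ vanishes on at least $n-1$ rows, so it suffices to enumerate all $(n-1)$-subsets of rows and take a kernel vector of each; a numerical tolerance stands in for exact zero tests):
\begin{verbatim}
import itertools
import numpy as np

def cospark_bruteforce(A, tol=1e-9):
    m, n = A.shape
    best = m
    for T in itertools.combinations(range(m), n - 1):
        # A_T is (n-1) x n, so its kernel is nontrivial; the last
        # right-singular vector lies in it.
        _, _, Vt = np.linalg.svd(A[list(T), :])
        x = Vt[-1]
        best = min(best, int(np.sum(np.abs(A @ x) > tol)))
    return best
\end{verbatim}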
\section{Conclusion}
We have shown that, although computing the cospark of a matrix is an NP hard problem, computing the generic cospark can be done in polynomial time. We have shown that, given any sparsity pattern of a matrix, the cospark is always upper bounded by the generic cospark, and is equal to the generic cospark with probability one if the nonzero entries of the matrix are drawn from independent continuous probability distributions. An efficient algorithm is developed that computes generic cospark in polynomial time.
\bibliographystyle{IEEEtran}
\section{Introduction}
Libraries of objects for algorithm testing are extremely common in computational geometry. Their set-up requires particular care: If a library consists of objects `too easy to understand',
then basically any algorithm would score well on them, making it impossible for the researcher to appreciate the efficiency of the algorithms.
Of course, agreeing on what examples should be regarded as `easy' is a hard challenge, and how to quantify complicatedness is even harder.
In the present paper, we focus on computational topology, which deals with simplicial complexes in an abstract manner, i.e., without prescribing a shape, a volume, or the dimension of a Euclidean space in which they embed.
We present a possible random approach, which we call \emph{random discrete Morse theory}.
The mathematical background
relies on Forman's discrete Morse theory from 1998 \cite{Forman1998,Forman2002}, which in turn builds on
Whitehead's simple homotopy theory, developed around 1939 \cite{Whitehead1939}. (Especially important is Whitehead's notion of collapsibility, which is a combinatorial strengthening of the contractibility property.)
\enlargethispage*{8mm}
Our idea is to create a quantitative version of these two theories. For example, we would like
to be able to tell not only if a complex is collapsible or not, but also `how easy it is' to find a collapsing sequence.
To give a mathematical basis to this intuition, we consider a random model where we perform elementary
collapses completely at random. The probability of finding a complete collapsing sequence this way
measures how easy it is to collapse the complex. Although this probability is, in most cases,
too difficult to compute, we can estimate it empirically in polynomial time.
The following elementary heuristic takes also into account complexes that are not contractible.
\bigskip
\noindent
\textsc{Algorithm: Random Discrete Morse}
\medskip
\noindent
\textsc{Input}: A $d$-dimensional (abstract, finite) simplicial complex $C$, given by its list of facets.
\begin{compactitem}
\item[(0)] Initialize $c_0 = c_1 = \, \ldots \, = c_d =0$.
\item[(1)] Is the complex empty? If yes, then STOP; otherwise, go to (2).
\item[(2)] Are there free codimension-one faces? If yes, go to (3); if no, go to (4).
\item[(3)] \emph{Elementary Collapse}: Pick one free codimension-one face uniformly at random and delete it, together with
the unique face that contains it. Go back to (1).
\item[(4)] \emph{Critical Face}: Pick one of the top-dimensional faces uniformly at random and delete it from the complex. If $i$ is the dimension of the face just deleted, increment $c_i$ by 1 unit. Go back to (1).
\end{compactitem}
\noindent
\textsc{Output}: The resulting discrete Morse vector $(c_0, c_1, c_2, \ldots, c_d)$.
\bigskip
By construction, $c_i$ counts the critical faces of dimension $i$.
(We do not consider the empty face as a free face.)
According to Forman~\cite{Forman1998},
any discrete Morse vector $(c_0, c_1, c_2, \ldots, c_d)$ is also the face vector of a cell complex homotopy equivalent to $C$.
\begin{deff}
The \emph{discrete Morse spectrum} $\sigma$ of\, a (finite) simplicial complex\, $C$ is the collection of all possible resulting discrete Morse vectors produced by the algorithm
\textsc{Random Discrete Morse}
together with the distribution of the respective probabilities.
\end{deff}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5cm]{graphs}
\end{center}
\caption{The graph $A_7$.}
\label{fig:A7}
\end{figure}
\emph{Example}: Consider the graph $A_7$ of Figure~\ref{fig:A7} above. As there are no free vertices in it, \textsc{Random Discrete Morse} picks an edge uniformly at random and deletes it. If the edge chosen is the central bridge (which happens with probability $\frac{1}{7}$), the output discrete Morse vector is $(2,3)$. If any other edge than the central one is chosen, the output vector is $(1,2)$. The discrete Morse spectrum is therefore $\{\mbox{$\frac{6}{7}$-$(1,2)$, $\frac{1}{7}$-$(2,3)$}\}$;
or, in short, $\{(1,2), (2,3)\}$ if we simply want to list the vectors of the spectrum.
\bigskip
The algorithm \textsc{Random Discrete Morse} requires no backtracking, and `digests' the complex very rapidly. The output $(1,0,0, \ldots, 0)$ is a certificate of collapsibility.
If the output is different from $(1,0, \ldots, 0)$, the complex could still be collapsible with a different sequence of free-face deletions.
\textsc{Random Discrete Morse} declares a $k$-face critical only if there are no free $(k-1)$-faces available. This keeps the number of faces declared critical to a minimum, thus making it more likely for the output vector to be optimal. Unfortunately, there are complexes on which the probability to achieve the optimal discrete Morse vector can be arbitrarily small;
see Appendix A and also \cite{AdiprasitoBenedettiLutz2013pre} for a further discussion. But in case optimality is not reached, the algorithm still outputs something meaningful, namely (as already mentioned) the $f$-vector of a cell complex homotopy equivalent to the given complex.
Since the output arrives quickly, we can re-launch the program, say, 10000 times, possibly on separate computers (independently).
The distribution of the obtained outcomes yields an approximation of the discrete Morse spectrum.
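For concreteness, one run of the heuristic can be coded in a few lines of Python (a naive sketch that recomputes all coface counts in every round; an implementation based on the Hasse diagram, as described in the next section, is much faster):
\begin{verbatim}
import random
from itertools import combinations

def random_discrete_morse(facets, seed=None):
    # facets: list of vertex tuples; returns one random
    # discrete Morse vector (c_0, ..., c_d).
    rng = random.Random(seed)
    faces = set()
    for F in facets:
        for k in range(1, len(F) + 1):
            faces.update(map(frozenset, combinations(F, k)))
    c = [0] * max(len(f) for f in faces)
    while faces:
        cofaces = {f: sum(1 for g in faces if f < g) for f in faces}
        free = [f for f in faces if cofaces[f] == 1]
        if free:                                  # elementary collapse
            s = rng.choice(free)
            t = next(g for g in faces if s < g)
            faces -= {s, t}
        else:                                     # critical face
            top = max(len(f) for f in faces)
            s = rng.choice([f for f in faces if len(f) == top])
            faces.remove(s)
            c[top - 1] += 1
    return tuple(c)
\end{verbatim}
Calling this repeatedly, e.g.\ on the facet list of the graph $A_7$ above, and tallying the outcomes approximates the discrete Morse spectrum.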
By the so-called Morse inequalities, each output vector is componentwise greater than or equal
to the vector of Betti numbers $(\beta_0, \ldots, \beta_d)$. When the spectrum stays `close' to the vector of Betti numbers, we can regard the triangulation as easy. This allows an empirical analysis of how complicated the complex is.
We point out that the problem of finding \emph{optimal} discrete Morse functions (with as few critical cells as possible)
is {\cal NP}-hard \cite{JoswigPfetsch2006,LewinerLopesTavares2003a}; even the decision problem
of whether some given (connected, finite) simplicial complex is collapsible or not is {\cal NP}-complete \cite{Tancer2012pre}.
We therefore \emph{should not} expect to immediately find optimal discrete Morse vectors for any input.
Indeed, one can easily construct examples on which our (or similar) random heuristic performs poorly; see Appendix A.
However, for many triangulations even of \emph{huge} size, our elementary random heuristic produces optimal discrete Morse functions
in almost 100\% of the runs of the program.
This could be interesting in the future also for homology computations.
Discrete Morse functions (for general cell complexes) are implicitly computed in several homology algorithms
that are based on fast (co-)reduction techniques, like the packages CHomP~\cite{Chomp}, RedHom~\cite{RedHom},
and Perseus~\cite{Perseus}.
\enlargethispage*{2mm}
The paper is structured as follows: First we give details of our algorithm (Section~\ref{sec:details}) and
compare it with previous approaches (Section~\ref{sec:comparison}). Then we survey the existing topological
and combinatorial lower bounds for optimal discrete Morse vectors (Section~\ref{sec:lower bounds}).
Finally, we describe and examine a collection of examples coming from several different areas
of topology (Section~\ref{sec:computational_results}).
In our opinion, the resulting library (Appendix B) is a richer and more sensitive testing ground for implementations
based on discrete Morse theory.
\section{Details of the algorithm and computational complexity}
\label{sec:details}
\begin{figure}[t]
\begin{center}
\begin{postscript}
\psfrag{1}{1}
\psfrag{2}{2}
\psfrag{3}{3}
\psfrag{4}{4}
\psfrag{5}{5}
\includegraphics[width=3.3cm]{bipyramid}
\end{postscript}
\end{center}
\caption{The bipyramid.}
\label{fig:bipyramid}
\end{figure}
In the following, we give a more explicit description of our random heuristic.
The first thing we do is to build the Hasse diagram of the given simplicial complex $C$,
which represents the incidence structure of the face poset of $C$; see Figure~\ref{fig:hasse_bipyramid}
for an example.
It takes $O(d\cdot I\cdot T)$ steps to construct the Hasse diagram
of a simplicial complex, in case the complex is given by its
facets (or to be precise, by its vertex-facet incidences).
Here $d$ is the dimension of the input complex, $T$ the total number
of faces, and $I$ the number of vertex-facet-incidences
\cite{KaibelPfetsch2002}; cf.\ also \cite{KaibelPfetsch2003}.
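To illustrate the data that is built (though not the optimized procedure
of \cite{KaibelPfetsch2002} itself), the following naive GAP fragment of ours
enumerates all non-empty faces of a complex given by its facets and reads off
the upward Hasse diagram by brute force; the function names are hypothetical,
and the quadratic search is for illustration only.
\begin{verbatim}
# Naive sketch: all non-empty faces of a complex given by its facets.
FacesFromFacets := function( facets )
  local F, f;
  F := [];
  for f in facets do
    UniteSet( F, Combinations( f ) );   # all subsets of the facet f
  od;
  RemoveSet( F, [] );                   # discard the empty face
  return F;
end;

# Upward Hasse diagram: for each face, its codimension-one cofaces.
UpwardHasse := function( F )
  return List( F, s -> Filtered( [ 1 .. Length( F ) ],
    i -> Length( F[i] ) = Length( s ) + 1 and IsSubset( F[i], s ) ) );
end;
\end{verbatim}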
\enlargethispage*{4mm}
Once the upward Hasse diagram and the downward Hasse diagram are set up (see below),
we deconstruct a copy of the upward Hasse diagram in every run of our program by deleting (randomly picked) critical faces or pairs
of faces in case there are free faces. We illustrate this with a concrete example.
\begin{example}[The bipyramid] \rm
The $2$-dimensional boundary of the bipyramid of Figure~\ref{fig:bipyramid} has $6$~triangles, $9$~edges, and $5$~vertices.
We list the faces level-wise in lexicographic order and identify
each face by a label $k^i$ denoting the $k$-th face of dimension $i$
in the respective lexicographic list.
\begin{center}
$\emph{1}^{\emph{\,2}}$: \mbox{[1,2,4]},\, $\emph{2}^{\emph{\,2}}$: \mbox{[1,2,5]},\, $\emph{3}^{\emph{\,2}}$: \mbox{[1,3,4]},\,
$\emph{4}^{\emph{\,2}}$: \mbox{[1,3,5]},\, $\emph{5}^{\emph{\,2}}$: \mbox{[2,3,4]},\, $\emph{6}^{\emph{\,2}}$: \mbox{[2,3,5]}, \\[2mm]
$\emph{1}^{\emph{\,1}}$: \mbox{[1,2]},\, $\emph{2}^{\emph{\,1}}$: \mbox{[1,3]},\, $\emph{3}^{\emph{\,1}}$: \mbox{[1,4]},\,
$\emph{4}^{\emph{\,1}}$: \mbox{[1,5]},\, $\emph{5}^{\emph{\,1}}$: \mbox{[2,3]},\, $\emph{6}^{\emph{\,1}}$: \mbox{[2,4]},\,
$\emph{7}^{\emph{\,1}}$: \mbox{[2,5]},\, $\emph{8}^{\emph{\,1}}$: \mbox{[3,4]},\, $\emph{9}^{\emph{\,1}}$: \mbox{[3,5]}, \\[2mm]
$\emph{1}^{\emph{\,0}}$: \mbox{[1]},\, $\emph{2}^{\emph{\,0}}$: \mbox{[2]},\, $\emph{3}^{\emph{\,0}}$: \mbox{[3]},\,
$\emph{4}^{\emph{\,0}}$: \mbox{[4]},\, $\emph{5}^{\emph{\,0}}$: \mbox{[5]}.
\end{center}
Next, we initialize the Hasse diagram. For this, the graph of the Hasse diagram is stored twice.
In the \emph{upward Hasse diagram}, we level-wise list all inclusions of $i$-dimensional faces
in $(i+1)$-dimen\-sional faces,
\medskip
\noindent
$i=1$:\, $\emph{1}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{2}^{\emph{\,2}}\}$,\,
$\emph{2}^{\emph{\,1}}\!\nearrow\{\emph{3}^{\emph{\,2}},\emph{4}^{\emph{\,2}}\}$,\,
$\emph{3}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{3}^{\emph{\,2}}\}$,\,
$\emph{4}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{4}^{\emph{\,2}}\}$,\,
$\emph{5}^{\emph{\,1}}\!\nearrow\{\emph{5}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$, \\
\mbox{}\hspace{10mm}
$\emph{6}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{5}^{\emph{\,2}}\}$,\,
$\emph{7}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$,\,
$\emph{8}^{\emph{\,1}}\!\nearrow\{\emph{3}^{\emph{\,2}},\emph{5}^{\emph{\,2}}\}$,\,
$\emph{9}^{\emph{\,1}}\!\nearrow\{\emph{4}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$, \\[2mm]
$i=0$:\, $\emph{1}^{\emph{\,0}}\!\nearrow\{\emph{1}^{\emph{\,1}},\emph{2}^{\emph{\,1}},\emph{3}^{\emph{\,1}},\emph{4}^{\emph{\,1}}\}$,\,
$\emph{2}^{\emph{\,0}}\!\nearrow\{\emph{1}^{\emph{\,1}},\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{7}^{\emph{\,1}}\}$,\,
$\emph{3}^{\emph{\,0}}\!\nearrow\{\emph{2}^{\emph{\,1}},\emph{5}^{\emph{\,1}},\emph{8}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$, \\
\mbox{}\hspace{10mm}
$\emph{4}^{\emph{\,0}}\!\nearrow\{\emph{3}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{8}^{\emph{\,1}}\}$,\,
$\emph{5}^{\emph{\,0}}\!\nearrow\{\emph{4}^{\emph{\,1}},\emph{7}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$,
\medskip
\noindent
while in the \emph{downward Hasse diagram} we level-wise list the $(j-1)$-dimensional faces that are contained
in the $j$-dimensional faces,
\medskip
\noindent
$j=2$:\, $\emph{1}^{\emph{\,2}}\!\searrow\{\emph{1}^{\emph{\,1}},\emph{3}^{\emph{\,1}},\emph{6}^{\emph{\,1}}\}$,\,
$\emph{2}^{\emph{\,2}}\!\searrow\{\emph{1}^{\emph{\,1}},\emph{4}^{\emph{\,1}},\emph{7}^{\emph{\,1}}\}$,\,
$\emph{3}^{\emph{\,2}}\!\searrow\{\emph{2}^{\emph{\,1}},\emph{3}^{\emph{\,1}},\emph{8}^{\emph{\,1}}\}$,\\
\mbox{}\hspace{10mm}
$\emph{4}^{\emph{\,2}}\!\searrow\{\emph{2}^{\emph{\,1}},\emph{4}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$,\,
$\emph{5}^{\emph{\,2}}\!\searrow\{\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{8}^{\emph{\,1}}\}$,\,
$\emph{6}^{\emph{\,2}}\!\searrow\{\emph{5}^{\emph{\,1}},\emph{7}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$, \\[2mm]
$j=1$:\, $\emph{1}^{\emph{\,1}}\!\searrow\{\emph{1}^{\emph{\,0}},\emph{2}^{\emph{\,0}}\}$,\,
$\emph{2}^{\emph{\,1}}\!\searrow\{\emph{1}^{\emph{\,0}},\emph{3}^{\emph{\,0}}\}$,\,
$\emph{3}^{\emph{\,1}}\!\searrow\{\emph{1}^{\emph{\,0}},\emph{4}^{\emph{\,0}}\}$,\,
$\emph{4}^{\emph{\,1}}\!\searrow\{\emph{1}^{\emph{\,0}},\emph{5}^{\emph{\,0}}\}$,\,
$\emph{5}^{\emph{\,1}}\!\searrow\{\emph{2}^{\emph{\,0}},\emph{3}^{\emph{\,0}}\}$, \\
\mbox{}\hspace{10mm}
$\emph{6}^{\emph{\,1}}\!\searrow\{\emph{2}^{\emph{\,0}},\emph{4}^{\emph{\,0}}\}$,\,
$\emph{7}^{\emph{\,1}}\!\searrow\{\emph{2}^{\emph{\,0}},\emph{5}^{\emph{\,0}}\}$,\,
$\emph{8}^{\emph{\,1}}\!\searrow\{\emph{3}^{\emph{\,0}},\emph{4}^{\emph{\,0}}\}$,\,
$\emph{9}^{\emph{\,1}}\!\searrow\{\emph{3}^{\emph{\,0}},\emph{5}^{\emph{\,0}}\}$.\\
\noindent
Here, $\emph{3}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{3}^{\emph{\,2}}\}$
is the short notation for the inclusion of the edge $\emph{3}^{\emph{\,1}}$:~\mbox{[1,4]}
in the two triangles $\emph{1}^{\emph{\,2}}$:~\mbox{[1,2,4]} and $\emph{3}^{\emph{\,2}}$:~\mbox{[1,3,4]}.
\begin{figure}
\begin{center}
\begin{postscript}
\psfrag{1}{1}
\psfrag{2}{2}
\psfrag{3}{3}
\psfrag{4}{4}
\psfrag{5}{5}
\psfrag{12}{12}
\psfrag{13}{13}
\psfrag{14}{14}
\psfrag{15}{15}
\psfrag{23}{23}
\psfrag{24}{24}
\psfrag{25}{25}
\psfrag{34}{34}
\psfrag{35}{35}
\psfrag{124}{124}
\psfrag{125}{125}
\psfrag{134}{134}
\psfrag{135}{135}
\psfrag{234}{234}
\psfrag{235}{235}
\psfrag{e}{$\emptyset$}
\includegraphics[width=15cm]{hasse_bipyramid}
\end{postscript}
\end{center}
\caption{The Hasse diagram of the bipyramid with one critical triangle (234), one matching edge (34--134),
and four free edges (13, 14, 23, 24) highlighted.}
\label{fig:hasse_bipyramid}
\end{figure}
During each run, the downward Hasse diagram is maintained, while a
copy of the upward Hasse diagram
is updated after the removal of a critical face or of a pair consisting of a free face
and the unique face it is contained in. The sequence of updating steps for the above example
could be as follows:
\medskip
\noindent
0. \texttt{compute downward and upward Hasse diagram}\\
1. \texttt{initialize copy of the upward Hasse diagram}\\
2. \texttt{free edges}:\, none\\
3. \texttt{select random critical triangle:}\, $\emph{5}^{\emph{\,2}}$: \mbox{[2,3,4]} \\
4. \texttt{update upward Hasse diagram:}\\[2mm]
\mbox{}\hspace{5mm}
$i=1$:\, $\emph{1}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{2}^{\emph{\,2}}\}$,\,
$\emph{2}^{\emph{\,1}}\!\nearrow\{\emph{3}^{\emph{\,2}},\emph{4}^{\emph{\,2}}\}$,\,
$\emph{3}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{3}^{\emph{\,2}}\}$,\,
$\emph{4}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{4}^{\emph{\,2}}\}$,\,
$\emph{5}^{\emph{\,1}}\!\nearrow\{\emph{6}^{\emph{\,2}}\}$, \\
\mbox{}\hspace{16mm}
$\emph{6}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}}\}$,\,
$\emph{7}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$,\,
$\emph{8}^{\emph{\,1}}\!\nearrow\{\emph{3}^{\emph{\,2}}\}$,\,
$\emph{9}^{\emph{\,1}}\!\nearrow\{\emph{4}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$ \\[2mm]
5. \texttt{free edges:\, $\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{8}^{\emph{\,1}}$}\\
6. \texttt{select random free edge:}\, $\emph{8}^{\emph{\,1}}$: \mbox{[3,4]}\, \texttt{paired with}\, $\emph{3}^{\emph{\,2}}$: \mbox{[1,3,4]} \\
7. \texttt{update upward Hasse diagram:}\\[2mm]
\mbox{}\hspace{5mm}
$i=1$:\, $\emph{1}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{2}^{\emph{\,2}}\}$,\,
$\emph{2}^{\emph{\,1}}\!\nearrow\{\emph{4}^{\emph{\,2}}\}$,\,
$\emph{3}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}},\emph{3}^{\emph{\,2}}\}$,\,
$\emph{4}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{4}^{\emph{\,2}}\}$,\,
$\emph{5}^{\emph{\,1}}\!\nearrow\{\emph{6}^{\emph{\,2}}\}$, \\
\mbox{}\hspace{16mm}
$\emph{6}^{\emph{\,1}}\!\nearrow\{\emph{1}^{\emph{\,2}}\}$,\,
$\emph{7}^{\emph{\,1}}\!\nearrow\{\emph{2}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$,\,
$\emph{8}^{\emph{\,1}}\!\nearrow\{\}$,\,
$\emph{9}^{\emph{\,1}}\!\nearrow\{\emph{4}^{\emph{\,2}},\emph{6}^{\emph{\,2}}\}$, \\[2mm]
\mbox{}\hspace{5mm}
$i=0$:\, $\emph{1}^{\emph{\,0}}\!\nearrow\{\emph{1}^{\emph{\,1}},\emph{2}^{\emph{\,1}},\emph{3}^{\emph{\,1}},\emph{4}^{\emph{\,1}}\}$,\,
$\emph{2}^{\emph{\,0}}\!\nearrow\{\emph{1}^{\emph{\,1}},\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{7}^{\emph{\,1}}\}$,\,
$\emph{3}^{\emph{\,0}}\!\nearrow\{\emph{2}^{\emph{\,1}},\emph{5}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$, \\
\mbox{}\hspace{16mm}
$\emph{4}^{\emph{\,0}}\!\nearrow\{\emph{3}^{\emph{\,1}},\emph{6}^{\emph{\,1}}\}$,\,
$\emph{5}^{\emph{\,0}}\!\nearrow\{\emph{4}^{\emph{\,1}},\emph{7}^{\emph{\,1}},\emph{9}^{\emph{\,1}}\}$\\[2mm]
8. \texttt{free edges:\, $\emph{2}^{\emph{\,1}},\emph{3}^{\emph{\,1}},\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}}$}\\
9. \texttt{\dots}
\medskip
The downward Hasse diagram tells us precisely which parts of the upward Hasse diagram we have to update.
For example, the choice of the critical triangle $\emph{5}^{\emph{\,2}}$: \mbox{[2,3,4]}
forces us to update, via $\emph{5}^{\emph{\,2}}\!\searrow\{\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{8}^{\emph{\,1}}\}$,
the inclusions of the edges $\emph{5}^{\emph{\,1}},\emph{6}^{\emph{\,1}},\emph{8}^{\emph{\,1}}$
in the upward Hasse diagram (by removing the triangle $\emph{5}^{\emph{\,2}}$ as including face).
\end{example}
Triangulations of closed manifolds initially have no free faces. Thus, we start
with an empty list of free faces and immediately remove a random critical face.
For triangulations of manifolds with boundary or general simplicial complexes,
we first have to initialize the list of free faces. (This extra effort in computation time
can be seen by comparing the respective run times for the examples \texttt{knot} and \texttt{nc\_sphere}
in Table~\ref{tbl:discrete_morse_spectra} (see also Section~\ref{sec:complicated_balls_and_spheres}):
With 0.813 seconds, the $3$-ball \texttt{knot} takes slightly
longer per round than the $3$-sphere \texttt{nc\_sphere} with 0.470 seconds.)
Whenever we are done with one level of the Hasse diagram, we initialize the set of free faces for the next level below.
Besides updating the upward Hasse diagram in each round, we also keep track of
\begin{compactitem}
\item the current \emph{list of free faces} (and update this list whenever we delete a critical face or a pair
consisting of a free face and the unique face it is contained in),
\item the current \emph{discrete Morse vector} $(c_0,c_1,\dots,c_d)$
(which is initialized by $(0,0,\dots,0)$ and updated by incrementing
$c_i$ by one whenever a critical face of dimension $i$ is selected).
\end{compactitem}
At the end of every round, the resulting discrete Morse vector $(c_0,c_1,\dots,c_d)$
is stored along with its number of appearances in the various rounds.
Eventually, we output the list of all obtained discrete Morse vectors
together with their frequencies.
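The bookkeeping above can be condensed into the following GAP-style sketch
of ours for a single run (simplified; the actual implementation maintains
the free-face lists level-wise, as described above). We assume that the faces
are indexed $1,\dots,T$, that \texttt{up[k]} and \texttt{down[k]} are the index
sets of the codimension-one cofaces and faces of face \texttt{k} (with
\texttt{down[k]} empty for vertices), and that \texttt{dim[k]} is its dimension.
\begin{verbatim}
# Minimal sketch of one run of Random Discrete Morse.
RandomDiscreteMorseRun := function( up, down, dim, d )
  local morse, alive, free, top, sigma, tau, delete;
  up := StructuralCopy( up );            # deconstruct only a copy
  morse := ListWithIdenticalEntries( d + 1, 0 );
  alive := Set( [ 1 .. Length( dim ) ] );
  delete := function( t )                # remove face t, update 'up'
    local rho;
    RemoveSet( alive, t );
    for rho in down[ t ] do
      if rho in alive then RemoveSet( up[ rho ], t ); fi;
    od;
  end;
  while not IsEmpty( alive ) do
    free := Filtered( alive, k -> Length( up[ k ] ) = 1 );
    if not IsEmpty( free ) then
      sigma := Random( free );           # random free face ...
      tau := up[ sigma ][ 1 ];           # ... and its unique coface
      delete( tau );  delete( sigma );   # elementary collapse
    else
      top := Maximum( List( alive, k -> dim[ k ] ) );
      tau := Random( Filtered( alive, k -> dim[ k ] = top ) );
      delete( tau );                     # declare tau critical
      morse[ dim[ tau ] + 1 ] := morse[ dim[ tau ] + 1 ] + 1;
    fi;
  od;
  return morse;                          # ( c_0, ..., c_d )
end;
\end{verbatim}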
\subsection{Implementation in GAP}
We implemented our random heuristic in GAP~\cite{GAP4}.
In particular, we used GAP operations on lists and sets
to initialize and update Hasse diagrams and respective lists
of free faces. Our implementation is basic and has roughly
150 lines of code.
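For instance, the discrete Morse spectrum can be approximated by tallying
the vectors returned by repeated runs of the function
\texttt{RandomDiscreteMorseRun} sketched above; a hypothetical helper
(not part of our actual code) could read as follows.
\begin{verbatim}
# Approximate the discrete Morse spectrum by N independent runs.
ApproximateSpectrum := function( up, down, dim, d, N )
  local vecs, counts, v, p, i;
  vecs := [];  counts := [];
  for i in [ 1 .. N ] do
    v := RandomDiscreteMorseRun( up, down, dim, d );
    p := Position( vecs, v );
    if p = fail then
      Add( vecs, v );  Add( counts, 1 );
    else
      counts[ p ] := counts[ p ] + 1;
    fi;
  od;
  # return pairs [ vector, relative frequency ]
  return List( [ 1 .. Length( vecs ) ],
               i -> [ vecs[ i ], counts[ i ] / N ] );
end;
\end{verbatim}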
\enlargethispage*{5mm}
The largest complex (in terms of number of faces) we tested our program on has
face vector $f=(5013,72300,290944,495912,383136,110880)$.
For this triangulation \cite{AdiprasitoBenedettiLutz2013pre} of a contractible $5$-manifold different (!) from a $5$-ball,
it took in total 60:17:33\,+\,21:41:31 h:min:sec to first build
the Hasse diagram and then to run the random heuristic once.
As resulting discrete Morse vector we obtained $(1,0,0,0,0,0)$; thus, this non-trivial $5$-manifold is collapsible \cite{AdiprasitoBenedettiLutz2013pre}.
We point out that there is considerable room for improvement with respect to computation time.
First of all, the Hasse diagram of a complex can be stored
in terms of (sparse) boundary matrices on which fast elimination steps
represent elementary collapses; see Joswig~\cite{Joswig2004pre}
for a discussion. In addition, matrix operations in, say, C++
are considerably faster than elementary set operations in GAP.
However, when it comes to computing (or simplifying a
presentation of) the fundamental group
of a simplicial complex, GAP provides efficient heuristics;
cf.\ Section~\ref{sec:fundamental_groups}.
\section{Comparison with other algorithms}
\label{sec:comparison}
There are three main previous algorithmic approaches that aim to compute optimal discrete Morse functions
for simplicial complexes, one by Joswig and Pfetsch~\cite{JoswigPfetsch2006},
one by Eng\-str\"om~\cite{Engstroem2009b}, and one by Lewiner, Lopes, and Tavares~\cite{LewinerLopesTavares2003a}
(cf.\ also Lewiner~\cite{Lewiner2005}).
Tools that allow to improve discrete Morse functions were provided by Hersh~\cite{Hersh2005}.
King, Knudson, and Mramor \cite{HKingKnudsonMramor2005} discussed improving
discrete Morse functions for geometric complexes in ${\mathbb R}^3$.
A completely different random approach to discrete Morse theory was already attempted by Nicolaescu \cite{Nicolaescu2012pre}.
Essentially he tried to randomly choose edges in the Hasse diagram to obtain discrete Morse matchings,
but showed that this approach will not be successful. Indeed, choosing edges in the Hasse diagram at random
will produce bottlenecks even for complexes that are easily collapsible.
\subsection{The algorithm of Joswig and Pfetsch}
This deterministic algorithm, apart from a complete backtrack search, is currently the only available implementation
that actually determines optimal discrete Morse functions for all inputs.
In the Joswig--Pfetsch approach~\cite{JoswigPfetsch2006}, the problem of finding
an optimal discrete Morse function is translated into a maximum matching problem
for the underlying graph of the Hasse diagram with an additional acyclicity condition \cite{Chari2000,Forman1998}.
The acyclic matching problem is then solved as an integer linear program.
\enlargethispage*{4mm}
For various small instances, the Joswig--Pfetsch approach successfully produces
optimal discrete Morse functions~\cite{JoswigPfetsch2006}.
A first case, however, for which the associated integer linear program
was too large to handle is the $16$-vertex triangulation \texttt{poincare}~\cite{BjoernerLutz2000}
of the Poincar\'e homology $3$-sphere with $f=(16,106,180,90)$.
Joswig and Pfetsch interrupted the computation after one week (perhaps because they did not make use of the
fact that at least six critical cells are necessary since the fundamental group
of the Poincar\'e homology $3$-sphere is non-cyclic; cf.~Section~\ref{sec:classical}).
For the same instance our heuristic found the optimal Morse vector $(1,2,2,1)$ within 0.02 seconds.
Also for other small instances our heuristic was much faster,
e.g., for Rudin's ball \texttt{rudin} \cite{Rudin1958,Wotzlaw2005} with $f=(14,66,94,41)$,
Joswig and Pfetsch needed 103.78 seconds to achieve the optimal discrete Morse vector $(1,0,0,0)$
while our heuristic found the optimum in 0.004+0.00107 seconds; cf.~Section~\ref{sec:complicated_balls_and_spheres}.
\subsection{The approach by Engstr\"om}
The heuristic approach by Engstr\"om~\cite{Engstroem2009b} is elegant and fast. Roughly speaking, the idea is to proceed by deleting vertex stars, rather than by deleting pairs of faces.
Engstr\"om introduces what he calls `Fourier--Morse theory', a theory based on Kahn--Saks--Sturtevant's notion of non-evasiveness, much like Forman's discrete Morse theory was based on Whitehead's notion of collapsibility.
Instead of computing discrete Morse functions, Engstr\"om's heuristic computes Fourier--Morse functions, which form a particular subclass of the discrete Morse functions that need not contain the optimal ones.
In particular, obtaining an output $(1,0,\ldots,0)$ with this approach yields a certificate of non-evasiveness,
a stronger property than collapsibility; cf.~\cite{KahnSaksSturtevant1984}.
\enlargethispage*{1mm}
However, there is a $3$-ball with only $12$ vertices which is collapsible,
but not non-evasive~\cite{BenedettiLutz2013apre}.
As for other examples, Engstr\"om obtains $(1,5,5,1)$ as a discrete Fourier--Morse vector
for the $16$-vertex triangulation of the Poincar\'e homology $3$-sphere \texttt{poincare}.
In contrast, the optimal discrete Morse vector for this example is $(1,2,2,1)$. For Rudin's ball \texttt{rudin}
Engstr\"om found $(1,2,2,0)$ compared to the optimum $(1,0,0,0)$.
Engstr\"om's implementation depends on the vertex-labeling of a complex.
For a fixed labeling, the optimal discrete Morse vector is often missed, even on triangulations of relatively small size.
\subsection{The heuristic of Lewiner, Lopes, and Tavares}
The heuristic approach of Lewiner, Lopes, and Tavares~\cite{LewinerLopesTavares2003a}
(cf.\ also Lewiner~\cite{Lewiner2005}) is fast and was used to produce optimal discrete Morse vectors
for several large $2$- and $3$-dimensional complexes.
The problem of finding optimal discrete Morse vectors is reformulated in terms
of finding maximal hyperforests of hypergraphs. Then different greedy heuristics
are used to obtain large hyperforests.
It has to be remarked, though, that most of the instances listed in \cite{LewinerLopesTavares2003a}
and later in~\cite{Lewiner2005} are harmless from the point of view of discrete Morse theory;
they are mainly $2$-dimensional surfaces or shellable $3$-dimensional balls and
spheres, or products thereof --- with the exception of
the three more complicated examples \texttt{knot}, \texttt{nc\_sphere}, and \texttt{bing}
from Hachimori's simplicial complex library \cite{Hachimori_url}.
It is precisely on these three examples that the greedy heuristics of
Lewiner et al.\ produce somewhat inconsistent results.
In \cite{LewinerLopesTavares2003a}, $(1,1,1,0)$ was obtained for
\texttt{bing} and \texttt{knot}. In~\cite{Lewiner2005} on p.~92,
\texttt{bing} and \texttt{knot} appear with $(1,0,0,0)$
without mentioning the improvement with respect to \cite{LewinerLopesTavares2003a}.
Moreover, \texttt{nc\_sphere} is listed on p.~92 of~\cite{Lewiner2005} with $(1,2,2,1)$
and it is noted on p.~89:
``Trickier, the non-shellable $3$-sphere (NC Sphere) is a delicate model
since no discrete Morse function can reach the minimal number
of critical points for smooth homotopy.'' This latter statement is false
as we found (in 12 out of 10000 runs) the optimal discrete Morse
vector $(1,0,0,1)$ for \texttt{nc\_sphere}; cf.\ Table~\ref{tbl:discrete_morse_spectra}.
In fact, the $3$-sphere \texttt{nc\_sphere} with $f=(381,2309,3856,1928)$ is obtained
from the $3$-ball \texttt{knot} with $f=(380,1929,2722,1172)$ by adding a cone
over the boundary of \texttt{knot}. By this, every discrete Morse function
on \texttt{knot} with discrete Morse vector $(1,0,0,0)$ can be used to
produce a discrete Morse function with discrete Morse vector $(1,0,0,1)$
on \texttt{nc\_sphere}. In contrast, it would theoretically be possible
to have \texttt{knot} with optimal discrete Morse vector $(1,1,1,0)$,
while \texttt{nc\_sphere} has optimal discrete Morse vector $(1,0,0,1)$.
The best discrete Morse vector we found in 1000000 runs for \texttt{knot} is $(1,1,1,0)$;
see Table~\ref{tbl:discrete_morse_spectra} --- whereas, as mentioned above, Lewiner~\cite{Lewiner2005}
seemed to claim $(1,0,0,0)$ for this example, which would beat our algorithm.
\enlargethispage*{4mm}
\section{Theoretical lower bounds for discrete Morse vectors}
\label{sec:lower bounds}
In this section, we briefly recall some theoretical lower bounds for minimal discrete Morse vectors. The obstructions to the existence of discrete Morse functions with a given number of critical cells are of various kinds. Here we use four different criteria.
The first concerns ridge-facet incidences, the second follows from elementary algebraic topology (applied to the Morse complex),
the third uses knot theory, and the fourth comes from smooth Morse theory.
\subsection{Ridge-facet incidences and Euler characteristic}
In order for a collapse to start, there need to be free faces.
This yields a first obstruction: in a $d$-dimensional triangulation
in which every $(d-1)$-face is contained in two or more $d$-faces, no collapse can even start.
The most famous example
of this type is the dunce hat, a contractible $2$-complex obtained from a single triangle by identifying all three boundary edges
in a non-coherent way. In any triangulation of the dunce hat each edge belongs to
either two or three triangles; cf.~\cite{BenedettiLutz2009pre}.
Hence, the dunce hat cannot be collapsible or, in other words, it cannot have $(1,0,0)$ as discrete Morse vector.
The vectors $(1,0,1)$ and $(1,1,0)$ are also forbidden for the dunce hat.
In fact, since each elementary collapse deletes two faces of consecutive dimension,
it does not change the Euler characteristic. In particular, the alternating sum
of the entries of a discrete Morse vector always equals the
Euler characteristic of the complex, that is, $\sum_{i=0}^d\,(-1)^i c_i=\chi(C)$.
The dunce hat does, however, admit $(1,1,1)$
as discrete Morse vector, which is therefore optimal.
\subsection{The Morse complex}
Forman showed that any discrete Morse vector on a simplicial complex $C$ is also the face-vector of a \emph{model} for $C$, that is, a CW-complex homotopy equivalent to $C$.
\begin{thm}[{Forman~\cite{Forman2002}}]
Assume that some $d$-complex $C$ admits a discrete Morse function with $c_i$ critical faces of dimension $i$ ($i=0,\ldots, d$). Then $C$ has a model with $c_i$ $i$-cells, called \emph{Morse complex}.
\end{thm}
This theorem results in several obstructions. First of all, the $i$-th (rational) Betti number of an arbitrary CW-complex is always bounded above by its number of $i$-dimensional cells.
\begin{cor}[Forman's weak Morse inequalities~\cite{Forman2002}] \label{cor:weakMorse}
Assume that some $d$-complex $C$ admits a discrete Morse function with $c_i$ critical faces of dimension $i$ ($i=0,\ldots, d$). Then $c_i \ge \beta_i (C)$ for each $i$.
\end{cor}
The previous result still holds if we consider homology over a finite field.
\begin{cor} Assume some $d$-complex $C$ admits a discrete Morse function with $c_i$ critical faces of dimension $i$ ($i=0,\ldots, d$). Then $c_i \ge \dim H_i (C; \mathbb{Z}_p)$ for each $i$ and for each prime~$p$.
\end{cor}
Sometimes it is convenient to focus on homotopy groups rather than on homology groups.
Recall that the fundamental group of a CW-complex with one $0$-cell is completely determined by its $2$-skeleton; a presentation of the group can be obtained using the $1$-cells as generators and the $2$-cells as relators. In particular, if the CW-complex has no $1$-cells, its fundamental group must be trivial; and if the CW-complex has only one $1$-cell, its fundamental group must be trivial or cyclic.
\begin{cor} Assume some $d$-complex $C$ with fundamental group $G$ admits a discrete Morse function with $1$ critical face of dimension $0$ and $c_1$ critical faces of dimension $1$. Then $c_1 \ge \operatorname{rank}(G)$, the minimal number of generators in a presentation of $G$. (In particular, if $G$ is non-abelian, then $c_1 \ge 2$.)
\end{cor}
\subsection{Knot-theoretic obstructions}
Obstructions coming from short knots were first considered by Bing~\cite{Bing1964}, Goodrick~\cite{Goodrick1968},
and Lickorish~\cite{Lickorish1991}, and later investigated
by the two authors~\cite{Benedetti2012,BenedettiLutz2013apre,Lutz2004b} and others.
Recall that a knot $K$ inside a triangulation of a $3$-sphere is just a $1$-dimensional subcomplex homeomorphic to a $1$-sphere (or, in other words, a cycle in the $1$-skeleton). The \emph{knot group} is the fundamental group of the knot complement inside the sphere. Knot groups are perhaps the main invariant in knot theory.
In the simplest form (that is, for $3$-dimensional spheres) the obstructions are of the following type:
\begin{thm}[Lickorish~\cite{Lickorish1991}; cf.\ also \cite{Benedetti2012}]\label{thm:Lickorish}
Assume some triangulated $3$-sphere $S$ admits some discrete Morse function with $c_2$ critical $2$-faces. Then, for \emph{any} knot $K$ inside $S$, one has
\[ c_2 \; \ge \; \operatorname{rank}(G_K) - f_1(K),\]
where $G_K$ is the knot group of $K$ and $f_1(K)$ is the number of edges of $K$.
\end{thm}
The previous theorem is usually applied together with the following two well-known facts:
\begin{compactenum}[(1)]
\item there are knots whose groups have arbitrarily high rank; for example, the knot group of a connected sum of $m$ trefoils has rank $\ge m+1$ (Goodrick~\cite{Goodrick1968});
\item any knot can be realized with only $3$ edges in a suitably triangulated $3$-sphere (Bing~\cite{Bing1964}).
\end{compactenum}
In particular, if we consider a $3$-sphere $S$ containing the connected sum of three trefoils realized on three edges,
then $\operatorname{rank}(G_K)\geq 4$ and $f_1(K)=3$, so Theorem~\ref{thm:Lickorish} yields $c_2 \ge 1$ for all discrete Morse vectors $(1, c_1, c_2, 1)$ of $S$. Note that $c_1= c_2$ for Euler characteristic reasons.
A similar statement can be proven for $3$-dimensional balls.
\begin{thm}[\mbox{\cite[Corollary 4.25]{Benedetti2012}}]
\label{thm:benedetti_4_25}
Assume some triangulated $3$-ball $B$ admits some discrete Morse function with $c_1$ critical edges. Let $K$ be a knot in the $1$-skeleton of $B$, realized as a path of $b$ edges in the boundary of $B$ plus a path of\, $e=f_1(K)-b$ interior edges. Then
\[ c_1 \; \ge \; \operatorname{rank}(G_K) - 2e,\]
where $G_K$ is the knot group of $K$.
\end{thm}
\subsection{Morse-theoretical obstructions}
Very recently, the first author proved the following result for smooth manifolds.
\begin{thm}[{\cite{Benedetti2012pre}}]
Every smooth Morse vector is also a discrete Morse vector on some (compatible) PL triangulation.
In dimensions up to $7$, the converse holds too.
\end{thm}
The converse statement is interesting for us because it yields further obstructions. For example, we know from the work of
Boileau and Zieschang~\cite{BoileauZieschang1984} and others that for every $r>0$ there is a (smooth) $3$-manifold $M_r$ of Heegaard genus $g\geq \operatorname{rank} (M_r) + r$. It follows that \emph{for every PL triangulation $T$ of $M_r$}, every discrete Morse vector on $T$ has $c_1 \ge g\geq \operatorname{rank} (M_r)+r$ critical edges.
\section{Towards a new library of triangulations}
\label{sec:computational_results}
Table~\ref{tbl:discrete_morse_spectra} provides a library of $45$ instances
for which we sampled the discrete Morse spectrum.
We ran our random algorithm for 10000 rounds on each example, except for eight
examples for which we did fewer runs. The $45$ examples were selected
for different reasons as we will explain below. The respective examples are
listed at the beginning of each subsection.
The library of examples can be found online at \cite{BenedettiLutz_LIBRARY}.
An additional infinite series of complicated triangulations, based on a handlebody construction
of Akbulut and Kirby \cite{AkbulutKirby1985}, was recently given in \cite{TsurugaLutz2013ext}.
\subsection{`Trivial' triangulations}
\label{sec:trivial}
\texttt{Examples:} \texttt{dunce\_hat,} \texttt{d2n12g6,} \texttt{regular\_2\_21\_23\_1}
\medskip
\noindent
Discrete Morse theory is trivial on $1$-dimensional complexes (graphs)
and $2$-dimensional compact manifolds (surfaces); cf.~\cite{LewinerLopesTavares2003a}.
A simple modification of our heuristic allows us to incorporate this as follows.
Once we have reduced a simplicial complex to a $1$-dimensional complex,
we switch to a deterministic strategy: As~long as there are edges
that are contained in a cycle, delete such (critical) edges iteratively;
then collapse the remaining tree/forest to a point/a collection of points, respectively.
\begin{deff}
Let\, $C$ be a connected finite simplicial complex.
The \emph{normalized discrete Morse spectrum} $\sigma_N$ of $C$ is obtained from the discrete Morse spectrum $\sigma$ of\, $C$
by normalizing every discrete Morse vector $(c_0, c_1, c_2, \ldots, c_d)$ in the spectrum
to $(1, c_1-c_0+1, c_2, \ldots, c_d)$ and adding up the probabilities for the original vectors
that have the same normalization.
\end{deff}
\emph{Example}: The graph $A_7$ of Figure~\ref{fig:A7} has normalized discrete Morse spectrum $\{\mbox{$1$-$(1,2)$}\}$
or, for short, $\{(1,2)\}$.
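In code, the normalization amounts to the following small GAP helper
(hypothetical, for illustration only; it assumes $d\geq 1$).
\begin{verbatim}
# Normalize (c_0, c_1, ..., c_d) to (1, c_1 - c_0 + 1, c_2, ..., c_d).
NormalizeMorseVector := function( c )
  local n;
  n := ShallowCopy( c );
  n[2] := c[2] - c[1] + 1;
  n[1] := 1;
  return n;
end;
# Example: NormalizeMorseVector( [ 2, 3 ] ) returns [ 1, 2 ].
\end{verbatim}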
\bigskip
We introduce the following averages:
\begin{compactitem}
\item $c_{\sigma}$, the average number of critical cells for the vectors in the discrete Morse spectrum $\sigma$ of a simplicial complex $C$;
\item $c^N_{\sigma}$, the average number of critical cells for the vectors in the normalized discrete Morse spectrum $\sigma_N$ of $C$.
\end{compactitem}
By Corollary~\ref{cor:weakMorse} we have $$c_{\sigma}\,\geq\, c^N_{\sigma}\,\geq\, \beta_0+\beta_1+\dots+\beta_d.$$
The coefficient $c^N_{\sigma}$ (and also $c_{\sigma}$) is of some interest if we want to randomly reduce the size of a complex
as a preprocessing step for homology computations; it gives an estimate for the number of cells that we are left
with for the Smith normal form computations.
\begin{lem}
Every connected simplicial $1$-complex $K$ with $n$ vertices and $m\geq n-1$ edges has normalized discrete Morse spectrum
$\{(1,m-n+1)\}$ and\, $c^N_{\sigma}=2+m-n$.
\end{lem}
The homology vector in this case is $H_*(K)=({\mathbb Z},{\mathbb Z}^{m-n+1})$,
so the weak discrete Morse inequalities (Corollary~\ref{cor:weakMorse}) are sharp.
\begin{lem}
Every triangulation $K$ of a closed (connected) surface of Euler characteristic $\chi$ has normalized discrete Morse spectrum
$\{(1,2-\chi,1)\}$ and\, $c^N_{\sigma}=4-\chi$. More generally, the same holds for every strongly connected $2$-complex $K$
without free edges.
\end{lem}
\emph{Proof}: Triangulations of surfaces are strongly connected. Hence, after the removal of only one critical triangle
the remaining complex collapses onto a $1$-dimensional complex, i.e., $c_2=1$.
The conclusion follows from the previous lemma; the argument extends verbatim to all strongly connected $2$-complexes $K$ without free edges.\hfill $\Box$
\bigskip
\noindent
The example \texttt{d2n12g6} in Table~\ref{tbl:discrete_morse_spectra} is the unique vertex-transitive,
vertex-minimal neighborly triangulation of the orientable surface of genus $6$ \cite{AltshulerBokowskiSchuchert1996},
the example \texttt{regular\_2\_21\_23\_1} is a regular triangulation of the orientable surface of genus $15$
with $21$ vertices \cite[Ch.~5]{Lutz1999}.
Something can also be said about complexes with few vertices:
\begin{thm} \label{thm:bagchi_datta}
{\rm (Bagchi and Datta~\cite{BagchiDatta2005})}
Every ${\mathbb Z}_2$-acyclic simplicial complex with at most $7$~vertices is collapsible.
\end{thm}
\begin{cor}
Every ${\mathbb Z}_2$-acyclic simplicial complex $K$ with at most $7$ vertices is extendably collapsible
and therefore has trivial discrete Morse spectrum $\{(1,0,\ldots,0)\}$ with $c_{\sigma}=c^N_{\sigma}=1$.
\end{cor}
The $7$-vertex bound is sharp; the triangulation \texttt{dunce\_hat} (cf.~\cite{BenedettiLutz2009pre})
of the dunce hat is an $8$-vertex example of a non-collapsible contractible complex.
\subsection{3-manifolds with up to ten vertices}
\label{sec:small_3manif}
\begin{table}
\small\centering
\defaultaddspace=0.15em
\caption{Total time to find optimal discrete Morse functions (in a single run) for each of the combinatorial $3$-manifolds with up to $10$ vertices.}\label{tbl:ten3d_leq10}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}c@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{\hspace{5mm}}r@{}}
\\\toprule
\addlinespace
\addlinespace
Vertices$\backslash$Types & $S^3$ & $S^2\hbox{$\times\hspace{-1.62ex}\_\hspace{-.4ex}\_\hspace{.7ex}$}S^1$ & $S^2\!\times\!S^1$ & All & Total time \\
& & & & &(in Min:Sec.Frac) \\ \midrule
\addlinespace
\addlinespace
5 & 1 & -- & -- & 1 & 0.008 \\[-1mm]
\addlinespace
6 & 2 & -- & -- & 2 & 0.008 \\[-1mm]
\addlinespace
7 & 5 & -- & -- & 5 & 0.012 \\[-1mm]
\addlinespace
8 & 39 & -- & -- & 39 & 0.060 \\[-1mm]
\addlinespace
9 & 1\,296 & 1 & -- & 1\,297 & 3.836 \\[-1mm]
\addlinespace
10 & 247\,882 & 615 & 518 & 249\,015 & 17:35.606 \\
\bottomrule
\end{tabular*}
\end{table}
For all $250\,359$ examples in the catalog \cite{Lutz2008a} of triangulations of $3$-manifolds with up to $10$~vertices,
optimal discrete Morse vectors were found by a \emph{single run} of our program each;
see Table~\ref{tbl:ten3d_leq10}.
\begin{thm}
All\, $250\,359$ examples of triangulated $3$-manifolds with up to $10$~vertices
admit a perfect discrete Morse function.
\end{thm}
The spheres in this list are all shellable, as are all $3$-spheres
with up to $11$ vertices~\cite{SulankeLutz2009}. The smallest known non-shellable $3$-sphere \texttt{S\_3\_13\_56} (\texttt{trefoil})
has $13$ vertices~\cite{Lutz2004b}. For all the 1134 non-spherical examples the statement of the theorem is new.
\enlargethispage*{5mm}
\subsection{Polytopal spheres}
\texttt{Examples:} \texttt{S\_3\_100\_4850,} \texttt{600\_cell,} \texttt{S\_3\_1000\_2990,} \texttt{S\_5\_100\_472.}
\medskip
\noindent
We ran our program
on the $3$-dimensional boundary \texttt{S\_3\_100\_4850} of the cyclic $4$-polytope with $100$ vertices and $4850$ facets,
on the $3$-dimensional boundary \texttt{600\_cell} of the $600$-cell,
on the $3$-dimensional boundary \texttt{S\_3\_1000\_2990} of a stacked $4$-polytope with $1000$ vertices and $2990$ facets,
and on the $4$-dimensional boundary \texttt{S\_5\_100\_472} of a stacked $5$-polytope with $100$ vertices and $472$ facets.
In all these cases we obtained the optimal discrete Morse vector $(1,0,\dots,0,1)$ in 10000 out of 10000 tries.
We also tested various other examples
of simplicial polytopal spheres and we always observed a trivial spectrum in these experiments.
However, the normalized discrete Morse spectrum of simplicial polytopal spheres is not trivial in general.
\begin{thm} \label{thm:KCrowley_etal}
{\rm (Crowley, Ebin, Kahn, Reyfman, White, and Xue~\cite{KCrowleyEbinKahnReyfmanWhiteXue2003pre})}
The $7$-simplex $\Delta_7$ with $8$~vertices contains in its $2$-skeleton an $8$-vertex triangulation of the dunce hat
onto which it collapses.
\end{thm}
\enlargethispage*{5mm}
As a direct consequence of Theorem~\ref{thm:KCrowley_etal}, the $7$-simplex $\Delta_7$
is \emph{not} extendably collapsible. Therefore the spectrum of its boundary is non-trivial.
Similarly, the simplicial polytopal Gr\"unbaum--Sreed\-ha\-ran $3$-sphere No.32 on $8$ vertices,
which contains a dunce hat \cite{BenedettiLutz2009pre}, has non-trivial Morse spectrum; see
the example \texttt{dunce\_hat\_in\_3\_ball} below.
\subsection{Random spheres}
\texttt{Example:} \texttt{S\_3\_50\_1033.}
\medskip
\noindent
While random surfaces can easily be generated, we lack good random models
for $3$- or higher-dimensional manifolds; cf.~\cite{DunfieldThurston2006}.
One possible approach is to consider all triangulations of $3$-spheres
or $3$-manifolds with a fixed number $n$ of vertices, with the uniform distribution.
While this setting is very promising for performing random experiments,
we first need to get hold of the set of all the triangulations with $n$ vertices,
a task that so far has only been solved for $3$-manifolds
with up to $11$ vertices \cite{SulankeLutz2009}.
Another model can be derived by performing random walks on the set of all triangulations where each step
is represented by a single bistellar flip.
According to a theorem of Pachner~\cite{Pachner1987}, two distinct triangulations
of a manifold are PL homeomorphic if and only if they can be connected
by a sequence of bistellar flips.
An implementation of bistellar flips for exploring the space of triangulations within one PL component
is the program BISTELLAR~\cite{Lutz_BISTELLAR}; see \cite{BjoernerLutz2000}
for a program description. The bistellar flip approach for generating random triangulations
depends on the number of executed flips as well as on the way the flips are chosen.
As a consequence, triangulations with $n$ vertices are not selected according to the uniform
distribution.
For the example \texttt{S\_3\_50\_1033}, we started with the boundary
of the cyclic $4$-polytope with $50$ vertices and face vector $f=(50,1225,2350,1175)$.
We then applied $1500$ bistellar $1$-flips and reverse-$1$-flips that were chosen
randomly from all admissible flips. The resulting sphere \texttt{S\_3\_50\_1033}
has $f$-vector $(50,1083,2066,1033)$. The average number of critical cells in 10000 runs
turned out experimentally to be roughly~$3.2$ (which is considerably larger than $2$).
We can therefore conclude heuristically that random spheres tend to have a non-trivial spectrum.
\subsection{Knotted triangulations of balls and spheres}\label{sec:complicated_balls_and_spheres}
\texttt{Examples:} \texttt{dunce\_hat\_in\_3\_ball},
\texttt{Barnette\_sphere},
\texttt{B\_3\_9\_18},
\texttt{trefoil\_arc},
\texttt{trefoil}, \linebreak
\texttt{rudin},
\texttt{double\_trefoil\_arc},
\texttt{double\_trefoil},
\texttt{triple\_trefoil\_arc},
\texttt{triple\_trefoil}, \linebreak
\texttt{non\_4\_2\_colorable},
\texttt{knot},
\texttt{nc\_sphere},
\texttt{bing}.
\medskip
\noindent
The example \texttt{dunce\_hat\_in\_3\_ball} \cite{BenedettiLutz2009pre} is a triangulated $3$-ball that
contains the $8$-vertex triangulation \texttt{dunce\_hat}
in its $2$-skeleton. To indeed get stuck with \texttt{dunce\_hat},
we need to perform collapses without removing any
of the $17$ triangles of the dunce hat.
This results in a low probability of getting stuck.
Indeed, in 1\,000\,000 runs we always found $(1,0,0,0)$
as resulting discrete Morse vector.
The non-polytopal \texttt{Barnette\_sphere} \cite{Barnette1973c} with
$8$ vertices also has trivial observed spectrum: In 1\,000\,000 runs of our program
we obtained the optimal discrete Morse vector $(1,0,0,1)$.
For the non-shellable $3$-ball \texttt{B\_3\_9\_18} \cite{Lutz2004a}
with $9$ vertices and Rudin's non-shellable $3$-ball \texttt{rudin} \cite{Rudin1958,Wotzlaw2005}
with $14$ vertices we achieved the optimal discrete Morse vector $(1,0,0,0)$ in every run.
Therefore, non-polytopality and non-shellability do not necessarily cause a non-trivial observed spectrum.
If we wish to construct triangulated balls or spheres of small size with a very non-trivial observed spectrum,
we need to build in complicated substructures
of small size (like complicated knots on few edges) to get stuck at.
The triangulated $3$-sphere \texttt{trefoil} (\texttt{S\_3\_13\_56} \cite{Lutz2004b})
contains a $3$-edge trefoil knot in its $1$-skeleton
and has optimal discrete Morse vector $(1,0,0,1)$.
This vector was obtained in roughly 96\% of the runs of our heuristic.
The triangulated $3$-sphere \texttt{double\_trefoil} (\texttt{S\_3\_16\_92} \cite{BenedettiLutz2013apre})
with optimal discrete Morse vector $(1,0,0,1)$
has a $3$-edge double trefoil knot in its $1$-skeleton. Here, $(1,0,0,1)$ was achieved only in 40\%
of the runs.
The triangulated $3$-sphere \texttt{triple\_trefoil} (\texttt{S\_3\_18\_125} \cite{BenedettiLutz2013apre})
contains a $3$-edge triple trefoil knot in its $1$-skeleton
and has optimal discrete Morse vector $(1,1,1,1)$, which we found 30\% of the time.
The $3$-ball \texttt{trefoil\_arc} is obtained
from the $3$-sphere \texttt{trefoil} by deleting the
star of a vertex. It contains the trefoil knot as a spanning arc
and has optimal discrete Morse vector $(1,0,0,0)$.
The deletion of the star of a vertex from the $3$-sphere \texttt{double\_trefoil}
yields the $3$-ball \texttt{double\_trefoil\_arc}
with the double trefoil knot as spanning arc and optimal discrete Morse vector $(1,1,1,0)$.
For the triple trefoil knot the deletion of a vertex from the $3$-sphere
\texttt{triple\_trefoil} yields the $3$-ball \texttt{triple\_trefoil\_arc}
for which the optimal discrete Morse vector is $(1,2,2,0)$; see Theorem~\ref{thm:benedetti_4_25}.
We found this vector in about 60\% of the runs.
A larger $3$-ball \texttt{knot} that has the trefoil knot as spanning arc
was constructed (via a pile of cubes) by Hachimori \cite{Hachimori_url}.
The best discrete Morse vector we found for \texttt{knot} is $(1,1,1,0)$.
It might well be that \texttt{knot} admits $(1,0,0,0)$ as optimal discrete Morse vector.
The non-constructible $3$-sphere \texttt{nc\_sphere} \cite{Hachimori_url}
is obtained from \texttt{knot} by adding the cone over the boundary of \texttt{knot}.
For this example, we found $(1,0,0,1)$ as optimal discrete Morse vector,
but only in 12 out of 10000 runs.
The triangulation \texttt{bing} is a $3$-dimensional thickening of Bing's house with two rooms \cite{Bing1964}
due to Hachimori \cite{Hachimori_url} (again, via a pile of cubes). It is a $3$-ball with $480$ vertices
for which we found $(1,0,0,0)$ as optimal discrete Morse vector in
only $7$ out of 10000 runs. We therefore can regard this ball as
\emph{barely collapsible}.
A non-$(4,2)$-colorable triangulation \texttt{non\_4\_2\_colorable} of the $3$-sphere
was constructed in \cite{LutzMoller2013pre}
with 167 vertices by using 10 copies of the double trefoil knot.
The best discrete Morse vector we found for this example (once in 10000 runs)
is $(1,2,2,1)$. The average number of critical cells for \texttt{non\_4\_2\_colorable}, computed and normalized
over only 10 random runs (for the sake of simplicity) as listed in Table~\ref{tbl:discrete_morse_spectra}, is roughly 25.2.
\subsection{Barycentric subdivisions}
\begin{table}
\small\centering
\defaultaddspace=0.15em
\caption{Average number of critical cells for the knotted spheres and their barycentric subdivisions, based on 10000 random runs.}\label{tbl:barycentric}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\hspace{1mm}}r@{\hspace{10mm}}l@{\hspace{1mm}}r@{\hspace{10mm}}l@{\hspace{1mm}}r@{}}
\\\toprule
\addlinespace
\addlinespace
\texttt{trefoil} & $2.0778$ & \texttt{double\_trefoil} & $3.5338$ & \texttt{triple\_trefoil} & $5.9898$ \\[-1mm]
\addlinespace
\texttt{trefoil\_bsd} & $2.0202$ & \texttt{double\_trefoil\_bsd} & $3.3414$ & \texttt{triple\_trefoil\_bsd} & $5.7352$ \\[-1mm]
\addlinespace
\bottomrule
\end{tabular*}
\end{table}
\texttt{Examples:} \texttt{trefoil\_bsd,} \texttt{double\_trefoil\_bsd,} \texttt{triple\_trefoil\_bsd.}
\medskip
\noindent
Interestingly, the barycentric subdivisions \texttt{trefoil\_bsd},
\texttt{double\_trefoil\_bsd}, and \linebreak
\texttt{triple\_trefoil\_bsd}
of the knotted spheres \texttt{trefoil}, \texttt{double\_trefoil}, and
\texttt{triple\_trefoil}, respectively, have a lower observed spectrum
than the corresponding original spheres; compare Table~\ref{tbl:barycentric}.
\subsection{Standard and exotic PL structures on 4-manifolds}
\texttt{Examples:} \texttt{CP2,} \texttt{RP4,} \texttt{K3\_16,} \texttt{K3\_17,}
\texttt{RP4\_K3\_17,} \texttt{RP4\_11S2xS2.}
\medskip
\noindent
Freedman's classification \cite{Freedman1982} of simply connected closed topological
$4$-manifolds settled the $4$-dimensional topological Poincar\'e conjecture.
The $4$-dimensional smooth Poincar\'e conjecture, however, is still wide open:
Does the $4$-dimensional sphere $S^4$ have a unique differentiable structure
or are there exotic $4$-spheres that are homeomorphic but not diffeomorphic to~$S^4$?
The categories PL and DIFF coincide in dimension $4$ (see the survey
of Milnor \cite{Milnor2011} and the references contained therein), and the $4$-dimensional smooth
Poincar\'e conjecture therefore can be rephrased on the level of triangulations:
Is every triangulation of $S^4$ PL homeomorphic to the boundary of the $5$-simplex?
Exotic structures on simply connected $4$-manifolds have been intensively studied over the past years.
One main task has been to find smaller and smaller $k$ and $l$ such that the connected sum
$(\# k\,{\mathbb C}{\bf P}^2)\#(-\# l\,{\mathbb C}{\bf P}^2)$ has exotic structures.
While it is now known that ${\mathbb C}{\bf P}^2\#(-\#2\,{\mathbb C}{\bf P}^2)$~\cite{AkhmedovBDPark2010}
admits (infinitely many) exotic structures,
the remaining interesting open cases are ${\mathbb C}{\bf P}^2\#(-{\mathbb C}{\bf P}^2)$, ${\mathbb C}{\bf P}^2$,
and $S^4$ (the smooth Poincar\'e conjecture).
The example \texttt{CP2} in Table~\ref{tbl:discrete_morse_spectra}
is the unique vertex-minimal $9$-vertex triangulation of ${\mathbb C}{\bf P}^2$
due to K\"uhnel and Banchoff \cite{KuehnelBanchoff1983}
and it carries the standard PL structure.
The constructions of exotic structures are often delicate and it is not straightforward
to derive corresponding triangulations. A very explicit example,
though not simply connected, is due to Kreck \cite{Kreck1984b}.
\begin{thm} \label{thm:kreck}
{\rm (Kreck~\cite{Kreck1984b})}
The $4$-dimensional manifolds\, ${\mathbb R}{\bf P}^4\#K3$\, and\, ${\mathbb R}{\bf P}^4\#(S^2\times S^2)^{\#11}$\,
are homeomorphic but not diffeomorphic, with the constituent components equipped with their standard smooth structures.
\end{thm}
A $17$-vertex triangulation \texttt{K3\_17} of the K3 surface with the standard PL type
is due to Spreer and K\"uhnel \cite{SpreerKuehnel2011}.
A vertex-minimal $16$-vertex triangulation \texttt{K3\_16} of the topological K3 surface
was previously found by Casella and K\"uhnel \cite{CasellaKuehnel2001}.
It is not clear whether the two triangulations are PL homeomorphic ---
we tried bistellar flips to establish a PL homeomorphism between these two triangulations,
but without success.
A vertex-minimal $16$-vertex triangulation \texttt{RP4}
of ${\mathbb R}{\bf P}^4$ was obtained in \cite{Lutz2005bpre} by applying bistellar flips
to the $31$-vertex standard triangulation of ${\mathbb R}{\bf P}^4$ by K\"uhnel \cite{Kuehnel1987}.
\enlargethispage*{3mm}
Let $K$ and $L$ be two triangulated $4$-manifolds with $n$ and $m$ vertices,
respectively. Their connected sum $K\#L$ (or $K\#(-L)$ when the
orientation of the components matters) is obtained from $K$ and $L$ by removing a $4$-simplex
from each of the triangulations and then gluing together the remainders along the respective boundaries.
The resulting triangulation $K\#L$ then has $n+m-5$ vertices.
Triangulations of connected sums $(S^2\times S^2)^{\# k}$, $k\geq 2$,
are therefore easily constructed from
a vertex-minimal $11$-vertex triangulation of $S^2\times S^2$ \cite{Lutz2005bpre} by taking connected sums
and then applying bistellar flips to reduce the numbers of vertices.
This way, we obtained triangulations of $(S^2\times S^2)^{\# 2}$ with $12$~vertices (vertex-minimal; cf.\ \cite{Lutz2005bpre}),\linebreak
of $(S^2\times S^2)^{\# 3}$ with $14$~vertices, of $(S^2\times S^2)^{\# 5}$ with $16$~vertices,
of $(S^2\times S^2)^{\# 6}$ with $16$~vertices, of $(S^2\times S^2)^{\# 9}$ with $18$~vertices,
and of $(S^2\times S^2)^{\# 11}$ with $20$~vertices.
\begin{thm} \label{thm:non_PL_homeomorphic}
Let ${\mathbb R}{\bf P}^4$, $K3$, and $(S^2\times S^2)^{\#11}$
be equipped with their standard PL structures.
The PL $4$-manifold\, ${\mathbb R}{\bf P}^4\#K3$\, has a triangulation
\texttt{RP4\_K3\_17} with $16+17-5=28$ vertices
and the PL $4$-manifold\, ${\mathbb R}{\bf P}^4\#(S^2\times S^2)^{\#11}$\,
has a triangulation \texttt{RP4\_11S2xS2} with $16+20-5=31$ vertices.
While the underlying topological manifolds of these PL manifolds are homeomorphic,
the respective triangulations are not PL homeomorphic.
\end{thm}
By Theorem~\ref{thm:non_PL_homeomorphic} we see that homeomorphic but not PL homeomorphic triangulations
of $4$-manifolds can be constructed with only few vertices. (Most likely,
the explicit numbers of vertices in Theorem~\ref{thm:non_PL_homeomorphic}
can be further reduced with bistellar flips. However, this would require a rather extensive search, which is beyond the scope
of this article.)
\begin{thm}
The examples \texttt{CP2}, \texttt{RP4}, \texttt{K3\_16},
\texttt{K3\_17}, \texttt{RP4\_K3\_17}, and \texttt{RP4\_11S2xS2}
have perfect discrete Morse functions with $3$, $5$, $24$, $24$, $27$, and $27$ critical cells, respectively.
\end{thm}
Interestingly, the computed discrete Morse spectra of \texttt{K3\_16} and \texttt{K3\_17} look rather similar.
The same can be said for the pair \texttt{RP4\_K3\_17} and \texttt{RP4\_11S2xS2}.
\subsection{Hom complexes}\label{sec:hom_complexes}
\texttt{Examples:} \texttt{Hom\_C5\_K4,} \texttt{Hom\_n9\_655\_compl\_K4,}
\texttt{Hom\_C6\_compl\_K5\_small,} \texttt{Hom\_C6\_compl\_K5,}
\texttt{Hom\_C5\_K5.}
\medskip
\noindent
Hom complexes of certain graphs provide interesting examples of
prodsimplicial manifolds~\cite{CsorbaLutz2006}. The prodsimplicial
structure allows us to triangulate these manifolds easily without adding
new vertices.
The $3$-dimensional Hom complex \texttt{Hom\_C5\_K4} is a triangulation of the
$3$-dimensional real projective space ${\mathbb R}{\bf P}^3$,
the Hom complex \texttt{Hom\_n9\_655\_compl\_K4} triangulates $(S^2\!\times\!S^1)^{\# 13}$.
The $4$-dimensional example \texttt{Hom\_C6\_compl\_K5\_small}
with $f=(33,379,1786,2300,920)$ is obtained from \texttt{Hom\_C6\_compl\_K5}
with $f=(1920,30780,104520,126000,50400)$ via bistellar flips.
Both examples triangulate $(S^2\!\times\!S^2)^{\# 29}$,
the former with computed normalized average $63.92$, the latter with normalized average $83.0$.
In only three out of 2000 runs we found the discrete Morse vector $(1,1,59,0,1)$ for \texttt{Hom\_C6\_compl\_K5},
but never the optimum $(1,0,58,0,1)$. In contrast, both the \texttt{lex}
and the \texttt{rev\_lex} heuristics yielded $(1,0,58,0,1)$.
In order to keep the list short, Table~\ref{tbl:discrete_morse_spectra} only lists 10 random runs for \texttt{Hom\_C6\_compl\_K5}.
The Hom complex \texttt{Hom\_C5\_K5} with $f=(1020,25770,143900,307950,283200,94400)$
is a triangulation of $S^3\!\times\!S^2$
with normalized average $4.6$.
\subsection{Higher-dimensional manifolds}
\label{sec:classical}
\texttt{Examples:} \texttt{poincare,} \texttt{hyperbolic\_dodecahedral\_space,} \texttt{S2xpoincare,}
\texttt{SU2\_SO3,} \texttt{RP5\_24,}
\texttt{non\_PL,} \texttt{\_HP2.}
\medskip
\noindent
The $16$-vertex triangulation \texttt{poincare} \cite{BjoernerLutz2000,BjoernerLutz2003}
of the Poincar\'e homology $3$-sphere with $f$-vector $f=(16,106,180,90)$
has the binary icosahedral group as its fundamental group. Since this
group is non-cyclic, we have $c_2\geq 2$, and therefore
every discrete Morse vector for \texttt{poincare} must have
at least six critical cells, with $(1,2,2,1)$
being the optimal discrete Morse vector according to
Table~\ref{tbl:discrete_morse_spectra}; cf.\ also Lewiner et al.~\cite{LewinerLopesTavares2003b}.
For the $21$-vertex triangulation \texttt{hyperbolic\_dodecahedral\_space} \cite{LutzSulankeSwartz2009}
of the Weber--Seifert hyperbolic dodecahedral space \cite{WeberSeifert1933} with face vector $f=(21,193,344,172)$
the best discrete Morse vector we found is $(1,4,4,1)$. The fundamental group of this manifold
can be presented with $4$ generators; see Table~\ref{tbl:gen_fund_groups}.
The product triangulation \texttt{S2xpoincare} of $S^2$ (taken as the boundary of a tetrahedron)
with \texttt{poincare} again has the binary icosahedral group as its fundamental group,
inherited from \texttt{poincare}; for constructing product triangulations
see \cite{Lutz2003bpre} and references therein.
The best discrete Morse vector we found for this example, in 103 out of 1000 runs, is $(1,2,3,3,2,1)$.
(Table~\ref{tbl:discrete_morse_spectra} lists only 20 random runs for \texttt{S2xpoincare}
to keep the list short.)
The two $5$-manifolds $SU(2)/SO(3)$ and ${\mathbb R}{\bf P}^5$
have homology vectors $(\mathbb{Z},0,\mathbb{Z},\mathbb{Z},0,\mathbb{Z})$ and
$(\mathbb{Z},\mathbb{Z}_2,0,\mathbb{Z}_2,0,\mathbb{Z})$
and triangulations \texttt{SU2\_SO3} with $13$ vertices
and \texttt{RP5\_24} with $24$ vertices, respectively \cite{Lutz1999,Lutz2005bpre}.
The $15$-vertex triangulation \texttt{\_HP2} of an $8$-dimensional manifold
`like a quaternionic projective plane' by Brehm and K\"uhnel \cite{BrehmKuehnel1992}
has homology $(\mathbb{Z},0,0,0,\mathbb{Z},0,0,0,\mathbb{Z})$.
\begin{thm}
The triangulations \texttt{SU2\_SO3}, \texttt{RP5\_24}, and \texttt{\_HP2}
have optimal discrete Morse vectors $(1,0,1,1,0,1)$, $(1,1,1,1,1,1)$,
and $(1,0,0,0,1,0,0,0,1)$, respectively.
\end{thm}
The $18$-vertex non-PL triangulation \texttt{non\_PL} \cite{BjoernerLutz2000} of the
$5$-dimensional sphere $S^5$ admits $(1,0,0,2,2,1)$ as discrete Morse vector.
\subsection{Random 2-complexes and fundamental groups} \label{sec:fundamental_groups}
\texttt{Example:} \texttt{rand2\_n25\_p0.328}
\medskip
\noindent
In generalization of the classical Erd\H{o}s--R\'enyi model for random graphs \cite{ErdosRenyi1960},
Linial and Meshulam \cite{LinialMeshulam2006} considered random $2$-dimensional complexes
with complete $1$-skeleton on $n$~vertices; every triangle with vertices from the set $\{1,\dots,n\}$
is then added with probability $p$ independently. Let $Y(n,p)$ be the set of such complexes.
For the elements of $Y(n,p)$, Linial and Meshulam proved a sharp threshold for the vanishing of the first homology
with ${\mathbb Z}_2$-coefficients,
$$\lim_{n\rightarrow\infty}\,{\rm Prob}[\,Y\in Y(n,p)\,|\,H_1(Y,{\mathbb Z}_2)=0\,]
=\left\{\begin{array}{lll}1 & {\rm for} & p=\frac{2\log n+\omega(n)}{n},\\[2mm]
0 & {\rm for} & p=\frac{2\log n-\omega(n)}{n},
\end{array}\right.$$
for any function $\omega(n)\rightarrow\infty$ as $n\rightarrow\infty$ (as long as $p\in [0,1]$).
Replacing homological connectivity by simple connectivity, Babson, Hoffman, and Kahle \cite{BabsonHoffmanKahle2010}
showed that there is a range for $p$ for which asymptotically almost surely the complexes $Y\in Y(n,p)$ have non-trivial
fundamental groups with trivial abelianizations,
$$\lim_{n\rightarrow\infty}\,{\rm Prob}[\,Y\in Y(n,p)\,|\,\pi_1(Y)=0\,]\,=1 \quad {\rm for}\quad p\geq \displaystyle\big(\textstyle\frac{3\log n+\omega(n)}{n}\displaystyle\big)^{\frac{1}{2}},$$
with the exponent $\frac{1}{2}$ being best possible.
More recently, Cohen et al.~\cite{DCohenCostaFarberKappeler2012} showed that for
$p\ll n^{-1}$ asymptotically almost surely the complexes $Y\in Y(n,p)$ admit a discrete Morse function
with no critical $2$-cells. See also the recent results in higher dimensions by Aronshtam, Linial, {\L}uczak and Meshulam~\cite{AronshtamLinialLuczakMeshulam2013}, where they consider the case $p=c\cdot n^{-1}$.
\enlargethispage*{3mm}
The example \texttt{rand2\_n25\_p0.328} on $n=25$ vertices from Table~\ref{tbl:discrete_morse_spectra}
with homology $(\mathbb{Z},0,\mathbb{Z}^{475})$ has $751$ triangles, each picked with probability $p=0.328$.
We found the optimal discrete Morse vector $(1,0,475)$ in $275$ out of 10000 runs.
According to Seifert and Threlfall \cite[\S 44]{SeifertThrelfall1934}, a presentation of the fundamental group
of a simplicial complex can be obtained via the edge-path group. For this, a spanning tree of edges
is deleted from the 1-skeleton of the complex and each remaining edge contributes a generator to the
fundamental group, while each triangle of the 2-skeleton contributes a
relator; see \cite{Lutz_FundamentalGroup} for an implementation.
We used the GAP command \texttt{SimplifiedFpGroup} to simplify the edge-path group presentation
of the fundamental group. The heuristic \texttt{SimplifiedFpGroup}
does not necessarily output a presentation with the minimal number
of generators and relators.
Nevertheless, even in the case of huge complexes, \texttt{SimplifiedFpGroup}
succeeded in recognizing trivial, cyclic (one generator, at most one
relator), and free groups (no relators).
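As a toy illustration of this GAP functionality (on a made-up presentation,
not one of our actual edge-path groups), one can simplify a redundant
presentation of the trefoil knot group as follows.
\begin{verbatim}
F := FreeGroup( "a", "b", "c" );;
G := F / [ F.1*F.2*F.1*(F.2*F.1*F.2)^-1,   # a b a = b a b
           F.3*(F.1*F.2)^-1 ];;            # c = a b (redundant)
H := SimplifiedFpGroup( G );;
GeneratorsOfGroup( H );  RelatorsOfFpGroup( H );
# typically, two generators and a single relator remain
\end{verbatim}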
In Table~\ref{tbl:gen_fund_groups}, we list for the examples of Table~\ref{tbl:discrete_morse_spectra}
the number of generators (Ge.)~and the number of relators (Re.)~of the initial presentation
and the number of generators (SGe.)~and the number of relators (SRe.)~of the simplified group
along with the resulting fundamental group (F.~Gr.)~and the time it took for the simplification.
In Tables~\ref{tbl:gen_fund_groups}--\ref{tbl:rand_2compl_n50},
$F(k)$ denotes the free group with $k$ generators.
In the Tables~\ref{tbl:rand_2compl_n25} and~\ref{tbl:rand_2compl_n50}
we list resulting fundamental groups for random $2$-complexes with 25
and 50 vertices, respectively. In these tables, the Linial--Meshulam
threshold can be observed quite nicely. For $p=\frac{2\log 25}{25}\approx 0.26$
and $p=\frac{2\log 50}{50}\approx 0.16$, 73 and 75 out of 100 random
examples with $n=25$ and $n=50$ vertices had trivial fundamental groups, respectively.
Thus, for these values of $p$ we are right within the slope of the threshold.
While most of the examples in the Tables~\ref{tbl:rand_2compl_n25} and~\ref{tbl:rand_2compl_n50}
have free fundamental groups, we found `non-free' examples (whose
presentations could not be simplified to remove all relators) in the
range where $p$ is slightly smaller than $\frac{3}{n}$, the value for which
Linial, Meshulam, and Rosenthal~\cite{LinialMeshulamRosenthal2010}
constructed acyclic examples as sum complexes.
In our experiments we did not observe the Babson--Hoffman--Kahle examples
with non-trivial fundamental groups that have trivial abelianizations.
However, as pointed out by Kenyon, `exceptional events' can occur
for random groups when $n$ is small, while
the asymptotic behavior can be rather different; cf.~\cite[pp.~42--43]{Ollivier2005}.
\subsection{Vertex-homogeneous complexes and the Evasiveness Conjecture}
\texttt{Example:} \texttt{contractible\_vertex\_homogeneous.}
\medskip
\noindent
As remarked by Kahn, Saks, and Sturtevant \cite{KahnSaksSturtevant1984} we have
the following implications for simplicial complexes:
$$\mbox{non-evasive}\quad\Longrightarrow\quad\mbox{collapsible}\quad\Longrightarrow\quad\mbox{contractible}\quad\Longrightarrow\quad\mbox{${\mathbb Z}$-acyclic}.$$
The Evasiveness Conjecture \cite{KahnSaksSturtevant1984} for simplicial complexes states that
every vertex-homo\-ge\-neous non-evasive simplicial complex is a simplex.
The first examples of vertex-homo\-ge\-neous ${\mathbb Z}$-acyclic simplicial complexes different from simplices
were given by Oliver (cf.~\cite{KahnSaksSturtevant1984}); see~\cite{Lutz2002b} for further examples.
While join products and other constructions can be used to derive vertex-homo\-ge\-neous contractible
simplicial complexes different from simplices, non-trivial vertex-homo\-ge\-neous non-evasive examples
cannot be obtained this way~\cite{Welker1999}.
The smallest example \texttt{contractible\_vertex\_homogeneous} of a contractible vertex-homo\-ge\-neous
simplicial complex from \cite{Lutz2002b} is $11$-dimensional with
$$f=(60,1290,12380,58935,148092,220840,211740,136155,59160,16866,2880,225).$$
The best discrete Morse vector we found with the \texttt{lex}
and the \texttt{rev\_lex} heuristics for this contractible space is $(1,0,0,4,8,4,0,0,0,0,0,0)$.
We do not know whether the example is collapsible or not.
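As a quick consistency check, the alternating sum of this discrete Morse vector (the remaining entries being zero) reproduces the Euler characteristic of a contractible space,
$$1-0+0-4+8-4=1=\chi(\mathrm{pt}).$$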
\subsection*{Acknowledgments}
Thanks to Karim Adiprasito, Herbert Edelsbrunner, Alex Engstr\"om, Michael Joswig, Roy Meshulam, Konstantin Mischaikow,
Vidit Nanda and John M.~Sullivan for helpful discussions and remarks.
\section*{Appendix A: Complexes on which our heuristic fails}
In this section, we construct simplicial complexes on which our random approach
will most likely fail to find an optimal Morse vector.
On these examples, exponentially many rounds (in the number of facets)
of the program may be necessary before an optimal Morse vector shows up as output.
Such pathological examples can be produced in any positive dimension.
The crucial idea is highlighted by the following $1$-dimensional case.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{graphs_long}
\end{center}
\caption{The graph $A_{k+6}$ with $k+6$ edges.}
\label{fig:Ak6}
\end{figure}
\begin{example} \label{ex:Bad1} \rm
Let $k$ be a positive integer. Let $A_{k+6}$ be the graph consisting of two cycles
of length $3$ that are connected by a path of $k$ edges; see Figure~\ref{fig:Ak6}.
Since $A_{k+6}$ has no free vertices, our algorithm picks an edge $e$ uniformly at random and
removes it. The final outcome depends on this choice, and on this choice only:
\begin{compactitem}
\item If $e$ belongs to the $k$-edge path, it is easy to see that the program will always output the discrete Morse vector~$(2,3)$.
\item If instead $e$ belongs to one of the two triangles, then the program
will always output the Morse vector $(1,2)$.
\end{compactitem}
Hence, the algorithm finds a perfect
Morse function on $A_{k+6}$ with probability $p=\frac{6}{6+k}$.
For large $k$, the algorithm will most likely (i.e. with
probability $q=\frac{k}{6+k}$) return a Morse vector that is `off by $2$', displaying $5$ critical cells instead of $3$.
\end{example}
\enlargethispage*{12mm}
\begin{example} \label{ex:Bad2} \rm
Let $s$ be a positive integer. Let $B_{k+6} (s)$ be a bouquet of $s$ copies of
$A_{k+6}$. An optimal discrete Morse function on $B_{k+6} (s)$ has Morse vector $(1,2s)$.
Finding a discrete Morse function on $B_{k+6} (s)$ is the same as (independently) finding
$s$ discrete Morse functions on the $s$ copies of $A_{k+6}$. Therefore, the probability
of getting the optimal Morse vector on $B_{k+6} (s)$ is $p^s$, where $p=\frac{6}{6+k}$.
This corresponds to putting together $s$ optimal Morse functions on the different copies
of $A_{k+6}$, or in other words, to picking one favorable edge in each copy of $A_{k+6}$.
For $0 \le i \le s$, the probability that the program outputs the Morse vector $(1+i,2s+i)$\,
is\, $\binom{s}{i} p^{s-i} (1-p)^i$, corresponding to $i$ `bad choices' and $s-i$ `good choices'.
\end{example}
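The resulting output distribution is binomial and can be tabulated directly; a small Python sketch (the function name is ours):
\begin{verbatim}
# Output distribution on B_{k+6}(s):
# Prob[(1+i, 2s+i)] = C(s,i) * p^(s-i) * (1-p)^i  with  p = 6/(6+k).
from math import comb

def morse_vector_distribution(k, s):
    p = 6 / (6 + k)
    return {(1 + i, 2 * s + i): comb(s, i) * p**(s - i) * (1 - p)**i
            for i in range(s + 1)}

dist = morse_vector_distribution(k=10, s=4)
print(dist[(1, 8)])   # probability of the optimal vector (1,2s): (3/8)^4
\end{verbatim}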
To show that an analogous phenomenon occurs also in higher dimensions, let us recall a classical definition in PL topology.
\begin{deff} \rm
Let $C$ be a $d$-dimensional complex. A \emph{stacking operation}
on $C$ is the transition from $C$ to
$ C' = (C - \operatorname{star}(\sigma, C) \, ) \; \cup \; \hat{
\sigma } \ast \partial\sigma$,
where $\sigma$ is an arbitrary facet of $C$ and $\hat{\sigma}$ is a new vertex
(e.g., the barycenter of $\sigma$). More generally, we say that $C'$ {\em is obtained from $C$ by stacking}
if some finite sequence of stacking operations leads from $C$ to $C'$.
\end{deff}
Each stacking operation adds $d$ facets;
so, a complex obtained by performing $s$ stacking operations on a $d$-simplex has exactly $ds + 1$ facets.
\begin{lem}
If $C'$ is obtained from a simplex by stacking, then $C'$ is shellable. In particular,
it is endo-collapsible: For any facet $\sigma$ of $C'$, there is a sequence of elementary collapses
that reduces $C' - \sigma$ to $\partial C'$.
\end{lem}
In dimension $d \ge 3$, there is no guarantee that \emph{any} sequence of elementary collapses
on $C' - \sigma$ can be continued until one reaches $\partial C'$.
This is why, in the following example, probabilities have to be estimated rather than computed.
\begin{figure}[t]
\begin{center}
\includegraphics[width=4cm]{C_2_11}
\end{center}
\caption{An example $C^2_{2\cdot 5+1}$ with the central $2$-simplex $S$ (in grey)
subdivided $5$ times and the boundary edges of $S$ blocked by three empty tetrahedra.}
\label{fig:C_2_11}
\end{figure}
\begin{example} \label{ex:Bad3} \rm
Let $d, k$ be positive integers, with $k \equiv 1$ mod $d$. Take a
disjoint union of $d+1$ edges $e_0, \ldots, e_d$, and a $d$-simplex $S$
(with facets $F_0, \ldots, F_d$). For each $i$ in $\{0, \ldots, d\}$, glue
in the boundary of the join $F_i \ast e_i$. The resulting complex $C^d$ is
homotopy equivalent to a bouquet of $d+1$ spheres of dimension $d$; a
homotopy equivalence is given by contracting the central simplex $S$ to a point.
Let $C^d_k$ be a complex obtained by stacking the simplex $S$ exactly $s$~times,
so that $S$ gets subdivided into $k = d s + 1$ simplices of the same dimension.
Note first of all that $C^1_k$ coincides with the $A_{k+6}$ of Example~\ref{ex:Bad1}.
For an example $C^2_{2\cdot 5+1}$ see Figure~\ref{fig:C_2_11}.
Since $C^d_k$ has no free $(d-1)$-faces, our algorithm starts by removing some $d$-face $\sigma$ at random.
We have two possible cases:
\begin{compactitem}
\item With probability $\frac{k}{(d+2)(d+1) + k}$ we pick $\sigma$ from
the subdivision of the central simplex.
\item With probability $\frac{(d+2)(d+1)}{(d+2)(d+1) + k}$ we pick
$\sigma$ from one of the $d$-spheres.
\end{compactitem}
In the first case, \emph{some} sequence of elementary collapses reduces $C^d_k - \sigma$
onto $ C^d - S$. So our algorithm will output a Morse vector
that is either $(1, 0, \ldots, 0, 1, d+2)$ or a (componentwise) larger vector;
but certainly not the vector $(1,0, \ldots, 0, 0, d+1)$.
Thus the probability of obtaining the optimal Morse vector $(1,0, \ldots, 0, 0, d+1)$
is positive, but at most $\frac{(d+2)(d+1)}{(d+2)(d+1) + k}$.
As $k$ gets larger, this upper bound gets smaller and smaller.
\end{example}
\begin{example} \rm
By taking a bouquet of $w$ copies of Example \ref{ex:Bad3}, we obtain a
complex $B^d_k (w)$. For $d=1$, $B^d_k (w)$ coincides with the $B_{k+6} (w)$ of Example \ref{ex:Bad2}.
The probability of seeing the perfect Morse vector $(1, 0, \ldots, 0, 0, (d+1)w)$ on $B^d_k (w)$
is at most $\left( \frac{(d+2)(d+1)}{(d+2)(d+1) + k} \right)^w.$
\end{example}
For practical purposes, it is useful to understand how this probability decreases
\emph{with respect to the number $N$ of facets}. In fact, given a complex with $N$ facets,
we would like to know concretely how often we should run the algorithm before
we can expect an optimal Morse vector to appear among the outputs.
\enlargethispage*{1mm}
For the sake of brevity, we do
the calculations in dimension one --- but similar estimates can be easily
derived in all dimensions. The graph constructed in Example~\ref{ex:Bad2}
has $N=(6+k)w$ edges. To study the probability $(\frac{6}{6+k})^w$ of
finding an optimal Morse function, we should regard $N$ as a constant,
write $w$ as $\frac{N}{6+k}$, and study the function
\[ P(k) = \left( \frac{6}{6+k} \right) ^{\frac{N}{6+k}}.\]
Now, classical calculus reveals that the function $x \longmapsto x^x =
e^{x \log x}$ is strictly decreasing on the interval $(0,e^{-1})$ and
strictly increasing on $(e^{-1}, \infty)$. It achieves its minimum at
$e^{-1}$. So, given any continuous bijection $g: (0, \infty) \rightarrow (0,1)$, the
function $y \longmapsto g(y)^{g(y)}$ achieves its minimum at the (unique) point $y$
such that $g(y)=e^{-1}$. Applying this to
$g(y)=\frac{6}{6+y}$, we get
\[ \min_{y>0} \left( \frac{6}{6+y} \right) ^{\frac{N}{6+y}}
=
\min_{y>0 } \left( g(y)^{g(y) \frac{N}{6}} \right)
=
\left( \min_{y>0 } \, g(y)^{g(y) } \right)^{\frac{N}{6}} =
\left( \, (e^{-1})^{e^{-1}} \right)^{\frac{N}{6}} = \ e^{-\frac{N}{6e}}.\]
Yet we wanted to minimize the function $P(k)$ over the integers, not over
the reals. However, if we choose the integer $k$ so that $\frac{6}{6+k}$
is close to $e^{-1}$, the value of $P(k)$ is close to this real minimum.
Over the integers, the minimum is in fact achieved at $k=10$. Thus $P(k)$ can be as small
as $e^{-cN}$, where $c$ is a constant `close' to $\frac{1}{6e}$:
It is in fact $c=\frac{1}{16} (\log 8 - \log 3) \approx 0.0613018$.
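This is quickly confirmed numerically; a small Python check (names ours), writing $P(k)=e^{-f(k)N}$ and maximizing the rate $f$ over the integers:
\begin{verbatim}
# P(k) = (6/(6+k))^(N/(6+k)) = exp(-f(k)*N); the worst case for the
# heuristic maximizes the per-facet rate f(k) over the integers.
from math import log

def rate(k):
    return -log(6 / (6 + k)) / (6 + k)

k_star = max(range(1, 1000), key=rate)
print(k_star, rate(k_star))   # 10  0.0613018... = (log 8 - log 3)/16
\end{verbatim}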
\section*{Appendix B: Library and Tables}
Table~\ref{tbl:discrete_morse_spectra} lists computational results
for the examples of Section~\ref{sec:computational_results}.
Of each example we present the discrete Morse spectrum that we observed experimentally in a certain number of runs (usually 10000, when not stated otherwise; for some examples we did fewer runs because of excessive computation time or excessive variance of the spectrum).
Let $c_{\approx}$ and $c^N_{\approx}$ be the average numbers of critical cells for the vectors
in the approximated discrete Morse spectrum and the approximated normalized discrete Morse spectrum,
respectively. The longer we run \textsc{Random Discrete Morse}, the better the approximation of $c_{\sigma}$ by $c_{\approx}$
and of $c^N_{\sigma}$ by $c^N_{\approx}$ will get --- and possibly optimal discrete Morse vectors will show up.
In Table~\ref{tbl:discrete_morse_spectra}, optimal discrete Morse vectors are highlighted in bold. We wrote an output vector in italics if it is the best vector we could find with our algorithm but we do not know whether it is indeed optimal.
For Table~\ref{tbl:lex_rev_lex}, we replaced the random choices in our algorithm
with a deterministic lexicographic or reverse lexicographic choice.
The labeling of the vertices of course now plays a role; see~\cite{AdiprasitoBenedettiLutz2013pre}
for a discussion of a randomized version (by randomly renumbering vertices first) of \texttt{lex} and \texttt{rev\_lex}.
All computations were run on a cluster of 2.6 GHz processors.
\pagebreak
{\small
\defaultaddspace=.1em
\setlength{\LTleft}{0pt}
\setlength{\LTright}{0pt}
\begin{longtable}{@{}l@{\extracolsep{\fill}}l@{\extracolsep{2pt}}r@{\extracolsep{\fill}}l@{}}
\caption{\protect\parbox[t]{15cm}{Library of triangulations and discrete Morse spectra.}}\label{tbl:discrete_morse_spectra}
\\\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example/Homology/ & \multicolumn{2}{@{}l@{}}{Distribution of obtained} & Time for Hasse diagram/ \\
$f$-vector/$c^{N}_{\approx}$ & \multicolumn{2}{@{}l@{}}{discrete Morse vectors} & Time per round \\
& \multicolumn{2}{@{}l@{}}{in 10000 rounds} & (in Hour:Min:Sec.Frac) \\ \midrule
\endfirsthead
\caption{\protect\parbox[t]{15cm}{Library of triangulations and discrete Morse spectra (continued).}}
\\\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example/Homology/ & \multicolumn{2}{@{}l@{}}{Distribution of obtained} & Time for Hasse diagram/ \\
$f$-vector/$c^{N}_{\approx}$ & \multicolumn{2}{@{}l@{}}{discrete Morse vectors} & Time per round \\
& \multicolumn{2}{@{}l@{}}{in 10000 rounds} & (in Hour:Min:Sec.Frac) \\ \midrule
\endhead
\bottomrule
\endfoot
\texttt{dunce\_hat} & \textbf{(1,1,1)}: & 10000 & 0.004 \\[-1mm]
$(\mathbb{Z},0,0)$ & & & 0.00024 \\[-1.1mm]
$(8,24,17)$ & & & \\[-1.1mm]
$3.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{d2n12g6} & \textbf{(1,12,1)}: & 9722 & 0.004 \\[-1mm]
$(\mathbb{Z},\mathbb{Z}^{12},\mathbb{Z})$ & $(2,13,1)$: & 277 & 0.00076 \\[-1.1mm]
$(12,66,44)$ & $(3,14,1)$: & 1 & \\[-1.1mm]
$14.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{regular\_2\_21\_23\_1} & \textbf{(1,30,1)}: & 9337 & 0.008 \\[-1mm]
$(\mathbb{Z},\mathbb{Z}^{30},\mathbb{Z})$ & $(2,31,1)$: & 649 & 0.00201 \\[-1.1mm]
$(21,147,98)$ & $(3,32,1)$: & 14 & \\[-1.1mm]
$32.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{rand2\_n25\_p0.328} & $(1,3,478)$: & 2185 & 0.228 \\[-1mm]
$(\mathbb{Z},0,\mathbb{Z}^{475})$ & $(1,4,479)$: & 1874 & 0.00428 \\[-1.1mm]
$(25,300,751)$ & $(1,2,477)$: & 1847 & \\[-1.1mm]
$482.9032$ & $(1,5,480)$: & 1265 & \\[-1.3mm]
& $(1,1,476)$: & 1048 & \\[-1.1mm]
& $(1,6,481)$: & 704 & \\[-1.1mm]
& $(1,7,482)$: & 318 & \\[-1.1mm]
& \textbf{(1,0,475)}: & 275 & \\[-1.1mm]
& $(1,8,483)$: & 140 & \\[-1.1mm]
& $(2,4,478)$: & 66 & \\[-1.1mm]
& $(2,5,479)$: & 66 & \\[-1.1mm]
& $(2,6,480)$: & 54 & \\[-1.1mm]
& $(2,7,481)$: & 41 & \\[-1.1mm]
& $(1,9,484)$: & 40 & \\[-1.1mm]
& $(2,3,477)$: & 24 & \\[-1.1mm]
& $(2,8,482)$: & 21 & \\[-1.1mm]
& $(1,10,485)$: & 12 & \\[-1.1mm]
& $(2,9,483)$: & 8 & \\[-1.1mm]
& $(1,11,486)$: & 4 & \\[-1.1mm]
& $(2,10,484)$: & 3 & \\[-1.1mm]
& $(2,11,485)$: & 3 & \\[-1.1mm]
& $(3,6,479)$: & 1 & \\[-1.1mm]
& $(3,8,481)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{dunce\_hat\_in\_3\_ball} & \textbf{(1,0,0,0)}: & 10000 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & & & 0.00049 \\[-1.1mm]
$(8,25,30,12)$ & & & \\[-1.1mm]
$1.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{Barnette\_sphere} (non-polytopal) & \textbf{(1,0,0,1)}: & 10000 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & & & 0.00060 \\[-1.1mm]
$(8,27,38,19)$ & & & \\[-1.1mm]
$2.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{B\_3\_9\_18} (non-shellable ball) & \textbf{(1,0,0,0)}: & 10000 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & & & 0.00073 \\[-1.1mm]
$(9,33,43,18)$ & & & \\[-1.1mm]
$1.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{trefoil\_arc} & \textbf{(1,0,0,0)}: & 9529 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & $(1,1,1,0)$: & 466 & 0.00158 \\[-1.1mm]
$(12,58,85,38)$ & $(1,2,2,0)$: & 5 & \\[-1.1mm]
$1.0952$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{trefoil} & \textbf{(1,0,0,1)}: & 9617 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,1,1,1)$: & 377 & 0.00208 \\[-1.1mm]
$(13,69,112,56)$ & $(1,2,2,1)$: & 6 & \\[-1.1mm]
$2.0778$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{rudin} (Rudin's ball) & \textbf{(1,0,0,0)}: & 10000 & 0.004 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & & & 0.00107 \\[-1.1mm]
$(14,66,94,41)$ & & & \\[-1.1mm]
$1.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{double\_trefoil\_arc} & \textbf{(1,1,1,0)}: & 7080 & 0.012 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & $(1,2,2,0)$: & 2698 & 0.00329 \\[-1.1mm]
$(15,93,145,66)$ & $(1,3,3,0)$: & 197 & \\[-1.1mm]
$3.6260$ & $(2,3,2,0)$: & 18 & \\[-1.1mm]
& $(1,4,4,0)$: & 6 & \\[-1.1mm]
& $(2,4,3,0)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{poincare} & \textbf{(1,2,2,1)}: & 9073 & 0.016 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,3,3,1)$: & 864 & 0.00400 \\[-1.1mm]
$(16,106,180,90)$ & $(1,4,4,1)$: & 45 & \\[-1.1mm]
$6.1952$ & $(2,4,3,1)$: & 7 & \\[-1.1mm]
& $(2,3,2,1)$: & 6 & \\[-1.1mm]
& $(1,5,5,1)$: & 5 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{double\_trefoil} & $(1,1,1,1)$: & 4550 & 0.012 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & \textbf{(1,0,0,1)}: & 3972 & 0.00408 \\[-1.1mm]
$(16,108,184,92)$ & $(1,2,2,1)$: & 1316 & \\[-1.1mm]
$3.5338$ & $(1,3,3,1)$: & 145 & \\[-1.1mm]
& $(1,4,4,1)$: & 8 & \\[-1.1mm]
& $(2,3,2,1)$: & 7 & \\[-1.1mm]
& $(2,4,3,1)$: & 2 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{triple\_trefoil\_arc} & \textbf{(1,2,2,0)}: & 6027 & 0.024 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & $(1,3,3,0)$: & 3220 & 0.00528 \\[-1.1mm]
$(17,127,208,97)$ & $(1,4,4,0)$: & 569 & \\[-1.1mm]
$5.9352$ & $(1,5,5,0)$: & 77 & \\[-1.1mm]
& $(2,4,3,0)$: & 51 & \\[-1.1mm]
& $(2,3,2,0)$: & 42 & \\[-1.1mm]
& $(2,5,4,0)$: & 10 & \\[-1.1mm]
& $(1,6,6,0)$: & 4 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{triple\_trefoil} & $(1,2,2,1)$: & 4427 & 0.024 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & \textbf{(1,1,1,1)}: & 3080 & 0.00640 \\[-1.1mm]
$(18,143,250,125)$ & $(1,3,3,1)$: & 1911 & \\[-1.1mm]
$5.9898$ & $(1,4,4,1)$: & 430 & \\[-1.1mm]
& $(1,5,5,1)$: & 57 & \\[-1.1mm]
& $(2,3,2,1)$: & 40 & \\[-1.1mm]
& $(2,4,3,1)$: & 33 & \\[-1.1mm]
& $(2,5,4,1)$: & 15 & \\[-1.1mm]
& $(2,6,5,1)$: & 4 & \\[-1.1mm]
& $(1,6,6,1)$: & 3 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{hyperbolic\_dodecahedral\_space} & \emph{(1,4,4,1)}: & 4792 & 0.036 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_5^3,0,\mathbb{Z})$ & $(1,5,5,1)$: & 3338 & 0.01017 \\[-1.1mm]
$(21,190,338,169)$ & $(1,6,6,1)$: & 1245 & \\[-1.1mm]
$11.4672$ & $(1,7,7,1)$: & 326 & \\[-1.1mm]
& $(2,5,4,1)$: & 82 & \\[-1.1mm]
& $(2,6,5,1)$: & 80 & \\[-1.1mm]
& $(1,8,8,1)$: & 62 & \\[-1.1mm]
& $(2,7,6,1)$: & 45 & \\[-1.1mm]
& $(2,8,7,1)$: & 18 & \\[-1.1mm]
& $(1,9,9,1)$: & 8 & \\[-1.1mm]
& $(2,9,8,1)$: & 3 & \\[-1.1mm]
& $(1,10,10,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{S\_3\_50\_1033} (random) & \textbf{(1,0,0,1)}: & 7087 & 0.900 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,1,1,1)$: & 1383 & 0.153 \\[-1.1mm]
$(50,1083,2066,1033)$ & $(1,2,2,1)$: & 697 & \\[-1.1mm]
$3.1966$ & $(1,3,3,1)$: & 386 & \\[-1.1mm]
& $(1,4,4,1)$: & 189 & \\[-1.1mm]
& $(1,5,5,1)$: & 118 & \\[-1.1mm]
& $(1,6,6,1)$: & 42 & \\[-1.1mm]
& $(2,4,3,1)$: & 25 & \\[-1.1mm]
& $(2,3,2,1)$: & 18 & \\[-1.1mm]
& $(2,5,4,1)$: & 14 & \\[-1.1mm]
& $(1,7,7,1)$: & 12 & \\[.25mm]
\pagebreak
& $(2,6,5,1)$: & 9 & \\[-1.1mm]
& $(1,8,8,1)$: & 9 & \\[-1.1mm]
& $(2,7,6,1)$: & 4 & \\[-1.1mm]
& $(2,8,7,1)$: & 3 & \\[-1.1mm]
& $(1,10,10,1)$: & 2 & \\[-1.1mm]
& $(2,9,8,1)$: & 1 & \\[-1.1mm]
& $(1,9,9,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{S\_3\_100\_4850} (cyclic polytope) & \textbf{(1,0,0,1)}: & 10000 & 17.829 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & & & 1.883 \\[-1.1mm]
$(100,4950,9700,4850)$ & & & \\[-1.1mm]
$2.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{600\_cell} & \textbf{(1,0,0,1)}: & 10000 & 0.364 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & & & 0.076 \\[-1.1mm]
$(120,720,1200,600)$ & & & \\[-1.1mm]
$2.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{non\_4\_2\_colorable} & $(4,15,12,1)$: & 2 & 1.728 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,7,7,1)$: & 1 & 0.254 \\[-1.1mm]
$(167,1579,2824,1412)$ & $(2,12,11,1)$: & 1 & \mbox{[10 rounds]}\\[-1.1mm]
$25.2$ & $(2,13,12,1)$: & 1 & \\[-1.1mm]
& $(3,13,11,1)$: & 1 & \\[-1.1mm]
& $(4,16,13,1)$: & 1 & \\[-1.1mm]
& $(5,14,10,1)$: & 1 & \\[-1.1mm]
& $(5,18,14,1)$: & 1 & \\[-1.1mm]
& $(7,20,14,1)$: & 1 & \\[.5mm]
\addlinespace
\addlinespace
\texttt{Hom\_C5\_K4} ($\mathbb{R}\textbf{P}^3$)
& \textbf{(1,1,1,1)}: & 9753 & 1.864 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_2,0,\mathbb{Z})$ & $(1,2,2,1)$: & 240 & 0.379 \\[-1.1mm]
$(240,1680,2880,1440)$ & $(2,3,2,1)$: & 6 & \\[-1.1mm]
$4.0496$ & $(1,3,3,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{trefoil\_bsd} & \textbf{(1,0,0,1)}: & 9902 & 1.716 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,1,1,1)$: & 95 & 0.308 \\[-1.1mm]
$(250,1594,2688,1344)$ & $(1,2,2,1)$: & 3 & \\[-1.1mm]
$2.0202$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{knot} & \emph{(1,1,1,0)}: & 9414 & 1.576 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & $(1,2,2,0)$: & 560 & 0.813 \\[-1.1mm]
$(380,1929,2722,1172)$ & $(2,3,2,0)$: & 15 & \\[-1.1mm]
$3.1194$ & $(1,3,3,0)$: & 9 & \\[-1.1mm]
& $(2,4,3,0)$: & 2 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{nc\_sphere} & $(1,1,1,1)$: & 7902 & 3.228 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & $(1,2,2,1)$: & 1809 & 0.470 \\[-1.1mm]
$(381,2309,3856,1928)$ & $(1,3,3,1)$: & 234 & \\[-1.1mm]
$4.4760$ & $(1,4,4,1)$: & 25 & \\[-1.1mm]
& \textbf{(1,0,0,1)}: & 12 & \\[-1.1mm]
& $(2,3,2,1)$: & 9 & \\[-1.1mm]
& $(1,6,6,1)$: & 3 & \\[-1.1mm]
& $(2,4,3,1)$: & 3 & \\[-1.1mm]
& $(2,5,4,1)$: & 2 & \\[-1.1mm]
& $(1,5,5,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{double\_trefoil\_bsd} & $(1,1,1,1)$: & 4819 & 4.376 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & \textbf{(1,0,0,1)}: & 4274 & 0.811 \\[-1.1mm]
$(400,2608,4416,2208)$ & $(1,2,2,1)$: & 833 & \\[-1.1mm]
$3.3414$ & $(1,3,3,1)$: & 64 & \\[-1.1mm]
& $(1,4,4,1)$: & 4 & \\[-1.1mm]
& $(2,3,2,1)$: & 4 & \\[-1.1mm]
& $(2,4,3,1)$: & 2 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{bing} & $(1,1,1,0)$: & 9764 & 2.788 \\[-1.1mm]
$(\mathbb{Z},0,0,0)$ & $(1,2,2,0)$: & 217 & 1.398 \\[-1.1mm]
$(480,2511,3586,1554)$ & \textbf{(1,0,0,0)}: & 7 & \\[-1.1mm]
$3.0456$ & $(1,3,3,0)$: & 6 & \\[-1.1mm]
& $(2,3,2,0)$: & 6 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{triple\_trefoil\_bsd} & $(1,2,2,1)$: & 4793 & 8.024 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & \emph{(1,1,1,1)}: & 3390 & 1.456 \\[-1.1mm]
$(536,3536,6000,3000)$ & $(1,3,3,1)$: & 1543 & \\[-1.1mm]
$5.7352$ & $(1,4,4,1)$: & 208 & \\[-1.1mm]
& $(1,5,5,1)$: & 22 & \\[-1.1mm]
& $(2,3,2,1)$: & 20 & \\[-1.1mm]
& $(2,4,3,1)$: & 17 & \\[-1.1mm]
& $(1,6,6,1)$: & 3 & \\[-1.1mm]
& $(2,5,4,1)$: & 3 & \\[-1.1mm]
& $(1,8,8,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{S\_3\_1000\_2990} (stacked sphere) & \textbf{(1,0,0,1)}: & 10000 & 8.444 \\[-1.1mm]
$(\mathbb{Z},0,0,\mathbb{Z})$ & & & 1.498 \\[-1.1mm]
$(1000,3990,5980,2990)$ & & & \\[-1.1mm]
$2.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{Hom\_n9\_655\_compl\_K4} $((S^2\!\times\!S^1)^{\# 13})$
& \textbf{(1,13,13,1)}: & 67 & 5:39.809 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}^{13},\mathbb{Z}^{13},\mathbb{Z})$
& $(1,14,14,1)$: & 20 & 45.682 \\[-1.1mm]
$(3096,22104,38016,19008)$ & $(1,15,15,1)$: & 5 & \mbox{[100 rounds]} \\[-1.1mm]
$28.68$ & $(2,14,13,1)$: & 5 & \\[-1.1mm]
& $(2,15,14,1)$: & 2 & \\[-1.1mm]
& $(2,16,15,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{CP2} & \textbf{(1,0,1,0,1)}: & 9994 & 0.012 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z},0,\mathbb{Z})$ & $(1,1,2,0,1)$: & 6 & 0.00226 \\[-1.1mm]
$(9,36,84,90,36)$ & & & \\[-1.1mm]
$3.0012$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{RP4} & \textbf{(1,1,1,1,1)}: & 9765 & 0.056 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_2,0,\mathbb{Z}_2,0)$ & $(1,2,2,1,1)$: & 136 & 0.01678 \\[-1.1mm]
$(16,120,330,375,150)$ & $(1,1,2,2,1)$: & 89 & \\[-1.1mm]
$5.0490$ & $(1,3,3,1,1)$: & 5 & \\[-1.1mm]
& $(1,2,3,2,1)$: & 3 & \\[-1.1mm]
& $(1,1,3,3,1)$: & 1 & \\[-1.1mm]
& $(2,3,2,1,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{K3\_16} (unknown PL type) & \textbf{(1,0,22,0,1)}: & 6702 & 0.168 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z}^{22},0,\mathbb{Z})$ & $(1,1,23,0,1)$: & 2615 & 0.04417 \\[-1.1mm]
$(16,120,560,720,288)$ & $(1,2,24,0,1)$: & 506 & \\[-1.1mm]
$24.8218$ & $(1,3,25,0,1)$: & 60 & \\[-1.1mm]
& $(1,0,23,1,1)$: & 31 & \\[-1.1mm]
& $(1,1,24,1,1)$: & 15 & \\[-1.1mm]
& $(1,0,24,2,1)$: & 13 & \\[-1.1mm]
& $(1,0,25,3,1)$: & 9 & \\[-1.1mm]
& $(1,2,25,1,1)$: & 6 & \\[-1.1mm]
& $(2,3,24,0,1)$: & 5 & \\[-1.1mm]
& $(1,0,26,4,1)$: & 4 & \\[-1.1mm]
& $(1,1,26,3,1)$: & 4 & \\[-1.1mm]
& $(1,4,26,0,1)$: & 4 & \\[-1.1mm]
& $(1,0,27,5,1)$: & 3 & \\[-1.1mm]
& $(1,1,27,4,1)$: & 3 & \\[-1.1mm]
& $(1,2,27,3,1)$: & 2 & \\[-1.1mm]
& $(1,3,26,1,1)$: & 2 & \\[-1.1mm]
& $(1,1,28,5,1)$: & 2 & \\[-1.1mm]
& $(1,2,28,4,1)$: & 1 & \\[-1.1mm]
& $(1,3,27,2,1)$: & 1 & \\[-1.1mm]
& $(1,2,29,5,1)$: & 1 & \\[-1.1mm]
& $(1,2,26,2,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{K3\_17} (standard PL type) & \textbf{(1,0,22,0,1)}: & 6337 & 0.196 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z}^{22},0,\mathbb{Z})$ & $(1,1,23,0,1)$: & 2939 & 0.05093 \\[-1.1mm]
$(17,135,610,780,312)$ & $(1,2,24,0,1)$: & 618 & \\[-1.1mm]
$24.8978$ & $(1,3,25,0,1)$: & 78 & \\[-1.1mm]
& $(1,4,26,0,1)$: & 8 & \\[-1.1mm]
& $(1,0,23,1,1)$: & 6 & \\[-1.1mm]
& $(1,0,25,3,1)$: & 4 & \\[.25mm]
\pagebreak
& $(2,3,24,0,1)$: & 3 & \\[-1.1mm]
& $(1,0,24,2,1)$: & 2 & \\[-1.1mm]
& $(1,0,26,4,1)$: & 2 & \\[-1.1mm]
& $(1,1,24,1,1)$: & 1 & \\[-1.1mm]
& $(1,2,27,3,1)$: & 1 & \\[-1.1mm]
& $(1,5,27,0,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{RP4\_K3\_17} & \textbf{(1,1,23,1,1)}: & 55 & 0.392 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_2,\mathbb{Z}^{22},\mathbb{Z}_2,0)$
& $(1,2,24,1,1)$: & 24 & 0.0754 \\[-1.1mm]
$(28,245,930,1150,460)$ & $(1,3,25,1,1)$: & 8 & \mbox{[100 rounds]}\\[-1.1mm]
$28.56$ & $(1,1,24,2,1)$: & 3 & \\[-1.1mm]
& $(1,2,25,2,1)$: & 2 & \\[-1.1mm]
& $(1,4,26,1,1)$: & 2 & \\[-1.1mm]
& $(1,1,26,4,1)$: & 1 & \\[-1.1mm]
& $(1,3,26,2,1)$: & 1 & \\[-1.1mm]
& $(1,3,27,3,1)$: & 1 & \\[-1.1mm]
& $(1,3,28,4,1)$: & 1 & \\[-1.1mm]
& $(1,5,30,4,1)$: & 1 & \\[-1.1mm]
& $(2,4,25,1,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{RP4\_11S2xS2} & \textbf{(1,1,23,1,1)}: & 51 & 0.496 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_2,\mathbb{Z}^{22},\mathbb{Z}_2,0)$
& $(1,2,24,1,1)$: & 29 & 0.0945 \\[-1.1mm]
$(31,283,1052,1295,518)$ & $(1,3,25,1,1)$: & 14 & \mbox{[100 rounds]}\\[-1.1mm]
$28.46$ & $(1,1,24,2,1)$: & 1 & \\[-1.1mm]
& $(1,1,25,3,1)$: & 1 & \\[-1.1mm]
& $(1,1,26,4,1)$: & 1 & \\[-1.1mm]
& $(1,2,27,4,1)$: & 1 & \\[-1.1mm]
& $(1,4,26,1,1)$: & 1 & \\[-1.1mm]
& $(2,4,25,1,1)$: & 1 & \\[.5mm]
\addlinespace
\addlinespace
\texttt{Hom\_C6\_compl\_K5\_small} $((S^2\!\times\!S^2)^{\# 29})$
& $(1,1,59,0,1)$: & 33 & 1.460 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z}^{58},0,\mathbb{Z})$
& $(1,2,60,0,1)$: & 30 & 0.348 \\[-1.1mm]
$(33,379,1786,2300,920)$ & $(1,3,61,0,1)$: & 12 & \mbox{[100 rounds]} \\[-1.1mm]
$63.92$ & \textbf{(1,0,58,0,1)}: & 11 & \\[-1.1mm]
& $(1,4,62,0,1)$: & 5 & \\[-1.1mm]
& $(1,5,63,0,1)$: & 3 & \\[-1.1mm]
& $(1,6,64,0,1)$: & 3 & \\[-1.1mm]
& $(2,3,60,0,1)$: & 1 & \\[-1.1mm]
& $(1,4,63,1,1)$: & 1 & \\[-1.1mm]
& $(1,7,65,0,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{Hom\_C6\_compl\_K5} $((S^2\!\times\!S^2)^{\# 29})$
& $(1,10,68,0,1)$: & 3 & 2:18:33.603 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z}^{58},0,\mathbb{Z})$
& $(1,17,75,0,1)$: & 2 & 19:26.475 \\[-1.1mm]
$(1920,30780,104520,126000,50400)$ & $(1,7,65,0,1)$: & 1 & \mbox{[10 rounds]} \\[-1.1mm]
$83.0$ & $(1,8,66,0,1)$: & 1 & \mbox{([2000 rounds], Sec.~\ref{sec:hom_complexes})}\\[-1.1mm]
& $(1,9,67,0,1)$: & 1 & \\[-1.1mm]
& $(1,11,69,0,1)$: & 1 & \\[-1.1mm]
& $(2,16,73,0,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{SU2\_SO3} & \textbf{(1,0,1,1,0,1)}: & 9369 & 0.124 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z},\mathbb{Z},0,\mathbb{Z})$
& $(1,0,2,2,0,1)$: & 554 & 0.03250 \\[-1.1mm]
$(13,78,286,533,468,156)$ & $(1,1,2,1,0,1)$: & 35 & \\[-1.1mm]
$4.1354$ & $(1,0,3,3,0,1)$: & 32 & \\[-1.1mm]
& $(1,1,3,2,0,1)$: & 5 & \\[-1.1mm]
& $(1,0,4,4,0,1)$: & 4 & \\[-1.1mm]
& $(1,2,3,1,0,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{non\_PL} & \emph{(1,0,0,2,2,1)}: & 9383 & 0.324 \\[-1.1mm]
$(\mathbb{Z},0,0,0,0,\mathbb{Z})$ & $(1,0,0,3,3,1)$: & 441 & 0.06964 \\[-1.1mm]
$(18,139,503,904,783,261)$ & $(1,0,1,3,2,1)$: & 134 & \\[-1.1mm]
$6.1328$ & $(1,0,0,4,4,1)$: & 23 & \\[-1.1mm]
& $(1,0,1,4,3,1)$: & 12 & \\[-1.1mm]
& $(1,0,2,4,2,1)$: & 2 & \\[-1.1mm]
& $(1,0,0,5,5,1)$: & 2 & \\[-1.1mm]
& $(1,0,2,5,3,1)$: & 1 & \\[-1.1mm]
& $(1,1,2,3,2,1)$: & 1 & \\[-1.1mm]
& $(1,0,4,6,2,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{RP5\_24} & \textbf{(1,1,1,1,1,1)}: & 9181 & 1.800 \\[-1.1mm]
$(\mathbb{Z},\mathbb{Z}_2,0,\mathbb{Z}_2,0,\mathbb{Z})$
& $(1,1,2,2,1,1)$: & 344 & 0.429 \\[-1.1mm]
$(24,273,1174,2277,2028,676)$ & $(1,2,2,1,1,1)$: & 315 & \\[-1.1mm]
$6.1766$ & $(1,1,1,2,2,1)$: & 97 & \\[-1.1mm]
& $(1,2,3,2,1,1)$: & 21 & \\[-1.1mm]
& $(1,1,3,3,1,1)$: & 15 & \\[-1.1mm]
& $(1,3,3,1,1,1)$: & 9 & \\[-1.1mm]
& $(1,1,2,3,2,1)$: & 6 & \\[-1.1mm]
& $(1,2,2,2,2,1)$: & 5 & \\[-1.1mm]
& $(1,1,1,3,3,1)$: & 3 & \\[-1.1mm]
& $(1,3,3,2,2,1)$: & 1 & \\[-1.1mm]
& $(1,3,4,2,1,1)$: & 1 & \\[-1.1mm]
& $(2,3,2,1,1,1)$: & 1 & \\[-1.1mm]
& $(2,4,3,1,1,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{S2xpoincare} & $(1,3,4,3,2,1)$: & 6 & 46.611 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z},\mathbb{Z},0,\mathbb{Z})$
& $(1,4,6,4,2,1)$: & 3 & 8.460 \\[-1.1mm]
$(64,1156,5784,11892,10800,3600)$ & \emph{(1,2,3,3,2,1)}: & 2 & \mbox{[20 rounds]} \\[-1.1mm]
$15.70$ & $(1,2,4,4,2,1)$: & 2 & \mbox{([1000 rounds], Sec.~\ref{sec:classical})} \\[-1.1mm]
& $(1,3,5,4,2,1)$: & 2 & \\[-1.1mm]
& $(1,3,6,5,2,1)$: & 2 & \\[-1.1mm]
& $(1,2,5,5,2,1)$: & 1 & \\[-1.1mm]
& $(1,4,7,5,2,1)$: & 1 & \\[-1.1mm]
& $(1,3,7,6,2,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{S\_5\_100\_472} (stacked sphere) & \textbf{(1,0,0,0,0,1)}: & 10000 & 1.188 \\[-1.1mm]
$(\mathbb{Z},0,0,0,0,\mathbb{Z})$ & & & 0.309 \\[-1.1mm]
$(100,579,1430,1895,1416,472)$ & & & \\[-1.1mm]
$2.0000$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{Hom\_C5\_K5} $(S^3\!\times\!S^2)$
& \textbf{(1,0,1,1,0,1)}: & 7 & 16:16:14.156 \\[-1.1mm]
$(\mathbb{Z},0,\mathbb{Z},\mathbb{Z},0,\mathbb{Z})$
& $(1,0,2,2,0,1)$: & 2 & 2:53:37.911 \\[-1.1mm]
$(1020,25770,143900,307950,283200,$ & $(1,1,2,1,0,1)$: & 1 & \mbox{[10 rounds]} \\[-1.1mm]
\mbox{}\hspace{5mm}$94400)$ & & & \\[-1.1mm]
$4.6$ & & & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{\_HP2} & \textbf{(1,0,0,0,1,0,0,0,1)}: & 9474 & 11.348 \\[-1.1mm]
$(\mathbb{Z},0,0,0,\mathbb{Z},0,0,0,\mathbb{Z})$
& $(1,0,0,1,2,0,0,0,1)$: & 459 & 1.425 \\[-1.1mm]
$(15,105,455,1365,3003,4515,4230,$ & $(1,0,0,2,3,0,0,0,1)$: & 46 & \\[-1.1mm]
\mbox{}\hspace{5mm}$2205,490)$ & $(1,0,0,3,4,0,0,0,1)$: & 7 & \\[-1.1mm]
$3.1212$ & $(1,0,0,0,2,1,0,0,1)$: & 4 & \\[-1.1mm]
& $(1,0,1,2,2,0,0,0,1)$: & 3 & \\[-1.1mm]
& $(1,0,1,4,4,0,0,0,1)$: & 2 & \\[-1.1mm]
& $(1,0,1,1,1,0,0,0,1)$: & 1 & \\[-1.1mm]
& $(1,0,0,1,5,3,0,0,1)$: & 1 & \\[-1.1mm]
& $(1,0,2,2,1,0,0,0,1)$: & 1 & \\[-1.1mm]
& $(1,0,0,1,6,4,0,0,1)$: & 1 & \\[-1.1mm]
& $(1,0,0,4,5,0,0,0,1)$: & 1 & \\[-.25mm]
\addlinespace
\addlinespace
\texttt{contractible\_vertex\_homogeneous} & $(1,3,38,98,83,20,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & 11:42:37.994 \\[-1.1mm]
$(\mathbb{Z},0,0,0,0,0,0,0,0,0,0,0)$ & $(1,3,42,72,43,10,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & 1:39:02.745 \\[-1.1mm]
$(60,1290,12380,58935,148092,$ & $(1,4,34,80,64,14,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \mbox{[10 rounds]} \\[-1.1mm]
\mbox{}\hspace{5mm}$220840,211740,136155,59160,$
& $(1,4,53,87,56,18,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
\mbox{}\hspace{5mm}$16866,2880,225)$ & $(1,5,63,110,61,9,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
$273.6$ & $(1,5,70,139,92,18,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
& $(1,6,42,115,108,29,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
& $(1,8,75,160,113,20,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
& $(1,9,66,124,89,22,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\[-1.1mm]
& $(1,13,74,144,97,14,0,\dots,0)$: \hspace{-8mm}\mbox{}& 1 & \\
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\end{longtable}
}
{\small
\defaultaddspace=.1em
\setlength{\LTleft}{0pt}
\setlength{\LTright}{0pt}
\begin{longtable}{@{}l@{\extracolsep{\fill}}l@{\extracolsep{\fill}}l@{}}
\caption{\protect\parbox[t]{15cm}{Discrete Morse vectors with \texttt{lex} and \texttt{rev\_lex} heuristics.}}\label{tbl:lex_rev_lex}
\\[-.7mm]\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example & lex & rev\_lex \\[-.7mm] \midrule
\endfirsthead
\caption{\protect\parbox[t]{15cm}{Discrete Morse vectors with \texttt{lex} and \texttt{rev\_lex} heuristics (continued).}}
\\[-.7mm]\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example & lex & rev\_lex \\[-.7mm] \midrule
\endhead
\bottomrule
\endfoot
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\texttt{dunce\_hat} & \textbf{(1,1,1)} & \textbf{(1,1,1)} \\[-0.5mm]
\texttt{d2n12g6} & \textbf{(1,12,1)} & \textbf{(1,12,1)} \\[-0.5mm]
\texttt{regular\_2\_21\_23\_1} & \textbf{(1,30,1)} & \textbf{(1,30,1)} \\[-0.5mm]
\texttt{rand2\_n25\_p0.328} & \textbf{(1,0,475)} & \textbf{(1,0,475)} \\[-0.5mm]
\texttt{dunce\_hat\_in\_3\_ball} & \textbf{(1,0,0,0)} & \textbf{(1,0,0,0)} \\[-0.5mm]
\texttt{Barnette\_sphere} & \textbf{(1,0,0,1)} & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{B\_3\_9\_18} (non-shellable ball) & \textbf{(1,0,0,0)} & \textbf{(1,0,0,0)} \\[-0.5mm]
\texttt{trefoil\_arc} & $(1,2,2,0)$ & \textbf{(1,0,0,0)} \\[-0.5mm]
\texttt{trefoil} & $(1,2,2,1)$ & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{rudin} (Rudin's ball) & \textbf{(1,0,0,0)} & \textbf{(1,0,0,0)} \\[-0.5mm]
\texttt{double\_trefoil\_arc} & $(1,3,3,0)$ & $(1,2,2,0)$ \\[-0.5mm]
\texttt{poincare} & \textbf{(1,2,2,1)} & \textbf{(1,2,2,1)} \\[-0.5mm]
\texttt{double\_trefoil} & $(1,3,3,1)$ & $(1,1,1,1)$ \\[-0.5mm]
\texttt{triple\_trefoil\_arc} & $(1,4,4,0)$ & $(1,3,3,0)$ \\[-0.5mm]
\texttt{triple\_trefoil} & $(1,4,4,1)$ & $(1,2,2,1)$ \\[-0.5mm]
\texttt{hyperbolic\_dodecahedral\_space} & \emph{(1,4,4,1)} & $(1,5,5,1)$ \\[-0.5mm]
\texttt{S\_3\_50\_1033} (random) & \textbf{(1,0,0,1)} & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{S\_3\_100\_4850} (cyclic polytope) & \textbf{(1,0,0,1)} & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{600\_cell} & \textbf{(1,0,0,1)} & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{non\_4\_2\_colorable} & $(1,30,30,1)$ & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{Hom\_C5\_K4} ($\mathbb{R}\textbf{P}^3$) & \textbf{(1,1,1,1)} & \textbf{(1,1,1,1)} \\[-0.5mm]
\texttt{trefoil\_bsd} & $(1,2,2,1)$ & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{knot} & \emph{(1,1,1,0)} & \emph{(1,1,1,0)} \\[-0.5mm]
\texttt{nc\_sphere} & $(1,2,2,1)$ & $(1,1,1,1)$ \\[-0.5mm]
\texttt{double\_trefoil\_bsd} & $(1,3,3,1)$ & $(1,1,1,1)$ \\[-0.5mm]
\texttt{bing} & $(1,1,1,0)$ & $(1,1,1,0)$ \\[-0.5mm]
\texttt{triple\_trefoil\_bsd} & $(1,4,4,1)$ & $(1,3,3,1)$ \\[-0.5mm]
\texttt{S\_3\_1000\_2990} (stacked sphere) & \textbf{(1,0,0,1)} & \textbf{(1,0,0,1)} \\[-0.5mm]
\texttt{Hom\_n9\_655\_compl\_K4} $((S^2\!\times\!S^1)^{\# 13})$ & $(2,14,13,1)$ & \textbf{(1,13,13,1)} \\[-0.5mm]
\texttt{CP2} & \textbf{(1,0,1,0,1)} & \textbf{(1,0,1,0,1)} \\[-0.5mm]
\texttt{RP4} & \textbf{(1,1,1,1,1)} & \textbf{(1,1,1,1,1)} \\[-0.5mm]
\texttt{K3\_16} (unknown PL type) & \textbf{(1,0,22,0,1)} & \textbf{(1,0,22,0,1)} \\[-0.5mm]
\texttt{K3\_17} (standard PL type) & \textbf{(1,0,22,0,1)} & \textbf{(1,0,22,0,1)} \\[-0.5mm]
\texttt{RP4\_K3\_17} & \textbf{(1,1,23,1,1)} & \textbf{(1,1,23,1,1)} \\[-0.5mm]
\texttt{RP4\_11S2xS2} & \textbf{(1,1,23,1,1)} & \textbf{(1,1,23,1,1)} \\[-0.5mm]
\texttt{Hom\_C6\_compl\_K5\_small} $((S^2\!\times\!S^2)^{\# 29})$ & \textbf{(1,0,58,0,1)} & \textbf{(1,0,58,0,1)} \\[-0.5mm]
\texttt{Hom\_C6\_compl\_K5} $((S^2\!\times\!S^2)^{\# 29})$ & \textbf{(1,0,58,0,1)} & \textbf{(1,0,58,0,1)} \\[-0.5mm]
\texttt{SU2\_SO3} & \textbf{(1,0,1,1,0,1)} & \textbf{(1,0,1,1,0,1)} \\[-0.5mm]
\texttt{non\_PL} & \emph{(1,0,0,2,2,1)} & \emph{(1,0,0,2,2,1)} \\[-0.5mm]
\texttt{RP5\_24} & \textbf{(1,1,1,1,1,1)} & \textbf{(1,1,1,1,1,1)} \\[-0.5mm]
\texttt{S2xpoincare} & \emph{(1,2,3,3,2,1)} & \emph{(1,2,3,3,2,1)} \\[-0.5mm]
\texttt{S\_5\_100\_472} (stacked sphere) & \textbf{(1,0,0,0,0,1)} & \textbf{(1,0,0,0,0,1)} \\[-0.5mm]
\texttt{Hom\_C5\_K5} $(S^3\!\times\!S^2)$ & \textbf{(1,0,1,1,0,1)} & \textbf{(1,0,1,1,0,1)} \\[-0.5mm]
\texttt{\_HP2} & \textbf{(1,0,0,0,1,0,0,0,1)} & \textbf{(1,0,0,0,1,0,0,0,1)} \\[-0.5mm]
\texttt{contractible\_vertex\_homogeneous} & \emph{(1,0,0,4,8,4,0,0,0,0,0,0)} & \emph{(1,0,0,4,8,4,0,0,0,0,0,0)} \\
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\end{longtable}
}
\pagebreak
{\small
\defaultaddspace=.1em
\setlength{\LTleft}{0pt}
\setlength{\LTright}{0pt}
\begin{longtable}{@{}l@{\extracolsep{\fill}}r@{\extracolsep{\fill}}r@{\extracolsep{\fill}}r@{\extracolsep{\fill}}r@{\extracolsep{\fill}}r@{\extracolsep{\fill}}r@{}}
\caption{\protect\parbox[t]{15cm}{Simplified presentations of
fundamental groups with GAP.}}\label{tbl:gen_fund_groups}
\\[-.7mm]\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example & Ge.\ & Re.\ & SGe.\ & SRe.\ & F.~Gr.\ & Time \\[-.7mm] \midrule
\endfirsthead
\caption{\protect\parbox[t]{15cm}{Simplified presentations of
fundamental groups with GAP (continued).}}
\\[-.7mm]\toprule
\addlinespace
\addlinespace
\addlinespace
\addlinespace
Name of example & Ge.\ & Re.\ & SGe.\ & SRe.\ & F.~Gr.\ & Time \\[-.7mm] \midrule
\endhead
\bottomrule
\endfoot
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\texttt{dunce\_hat} & 17 & 17 & 0 & 0 & 0 & 0.036 \\[-0.5mm]
\texttt{d2n12g6} & 55 & 44 & 12 & 1 & 12 gen. & 0.132 \\[-0.5mm]
\texttt{regular\_2\_21\_23\_1} & 127 & 98 & 30 & 1 & 30 gen. & 0.304 \\[-0.5mm]
\texttt{rand2\_n25\_p0.328} & 276 & 751 & 0 & 0 & 0 & 0.876 \\[-0.5mm]
\texttt{dunce\_hat\_in\_3\_ball} & 18 & 30 & 0 & 0 & 0 & 0.044 \\[-0.5mm]
\texttt{Barnette\_sphere} & 20 & 38 & 0 & 0 & 0 & 0.048 \\[-0.5mm]
\texttt{B\_3\_9\_18} (non-shellable ball) & 25 & 43 & 0 & 0 & 0 & 0.048 \\[-0.5mm]
\texttt{trefoil\_arc} & 47 & 85 & 0 & 0 & 0 & 0.092 \\[-0.5mm]
\texttt{trefoil} & 57 & 112 & 0 & 0 & 0 & 0.084 \\[-0.5mm]
\texttt{rudin} (Rudin's ball) & 53 & 94 & 0 & 0 & 0 & 0.060 \\[-0.5mm]
\texttt{double\_trefoil\_arc} & 79 & 145 & 0 & 0 & 0 & 0.148 \\[-0.5mm]
\texttt{poincare} & 91 & 180 & 2 & 2 & 2 gen. & 0.104 \\[-0.5mm]
\texttt{double\_trefoil} & 93 & 184 & 0 & 0 & 0 & 0.160 \\[-0.5mm]
\texttt{triple\_trefoil\_arc} & 111 & 208 & 0 & 0 & 0 & 0.164 \\[-0.5mm]
\texttt{triple\_trefoil} & 126 & 250 & 0 & 0 & 0 & 0.156 \\[-0.5mm]
\texttt{hyperbolic\_dodecahedral\_space} & 170 & 338 & 4 & 5 & 4 gen. & 0.276 \\[-0.5mm]
\texttt{S\_3\_50\_1033} (random) & 1034 & 2066 & 0 & 0 & 0 & 6.372 \\[-0.5mm]
\texttt{S\_3\_100\_4850} (cyclic polytope) & 4851 & 9700 & 0 & 0 & 0 & 2:38.918 \\[-0.5mm]
\texttt{600\_cell} & 601 & 1200 & 0 & 0 & 0 & 2.220 \\[-0.5mm]
\texttt{non\_4\_2\_colorable} & 1413 & 2824 & 0 & 0 & 0 & 12.644 \\[-0.5mm]
\texttt{Hom\_C5\_K4} ($\mathbb{R}\textbf{P}^3$) & 1441 & 2880 & 1 & 1 & $\mathbb{Z}_2$ & 13.492 \\[-0.5mm]
\texttt{trefoil\_bsd} & 1345 & 2688 & 0 & 0 & 0 & 11.292 \\[-0.5mm]
\texttt{knot} & 1550 & 2722 & 0 & 0 & 0 & 12.536 \\[-0.5mm]
\texttt{nc\_sphere} & 1929 & 3856 & 0 & 0 & 0 & 23.509 \\[-0.5mm]
\texttt{double\_trefoil\_bsd} & 2209 & 4416 & 0 & 0 & 0 & 30.002 \\[-0.5mm]
\texttt{bing} & 2032 & 3586 & 0 & 0 & 0 & 22.877 \\[-0.5mm]
\texttt{triple\_trefoil\_bsd} & 3001 & 6000 & 0 & 0 & 0 & 57.947 \\[-0.5mm]
\texttt{S\_3\_1000\_2990} (stacked sphere) & 2991 & 5980 & 0 & 0 & 0 & 58.099 \\[-0.5mm]
\texttt{Hom\_n9\_655\_compl\_K4} $((S^2\!\times\!S^1)^{\# 13})$ & 19009 & 38016 & 13 & 0 & F(13) & 59:14.006 \\[-0.5mm]
\texttt{CP2} & 28 & 84 & 0 & 0 & 0 & 0.036 \\[-0.5mm]
\texttt{RP4} & 105 & 330 & 1 & 1 & $\mathbb{Z}_2$ & 0.200 \\[-0.5mm]
\texttt{K3\_16} (unknown PL type) & 105 & 560 & 0 & 0 & 0 & 0.344 \\[-0.5mm]
\texttt{K3\_17} (standard PL type) & 119 & 610 & 0 & 0 & 0 & 0.372 \\[-0.5mm]
\texttt{RP4\_K3\_17} & 218 & 930 & 0 & 0 & $\mathbb{Z}_2$ & 0.408 \\[-0.5mm]
\texttt{RP4\_11S2xS2} & 253 & 1052 & 0 & 0 & $\mathbb{Z}_2$ & 0.472 \\[-0.5mm]
\texttt{Hom\_C6\_compl\_K5\_small} $((S^2\!\times\!S^2)^{\# 29})$ & 347 & 1786 & 0 & 0 & 0 & 2.420 \\[-0.5mm]
\texttt{Hom\_C6\_compl\_K5} $((S^2\!\times\!S^2)^{\# 29})$ & 28861 & 104520 & 0 & 0 & 0 & 3:57:09.873 \\[-0.5mm]
\texttt{SU2\_SO3} & 66 & 286 & 0 & 0 & 0 & 0.180 \\[-0.5mm]
\texttt{non\_PL} & 122 & 503 & 0 & 0 & 0 & 0.368 \\[-0.5mm]
\texttt{RP5\_24} & 250 & 1174 & 1 & 1 & $\mathbb{Z}_2$ & 1.244 \\[-0.5mm]
\texttt{S2xpoincare} & 1093 & 5784 & 2 & 2 & 2 gen. & 22.493 \\[-0.5mm]
\texttt{S\_5\_100\_472} (stacked sphere) & 480 & 1430 & 0 & 0 & 0 & 2.380 \\[-0.5mm]
\texttt{Hom\_C5\_K5} $(S^3\!\times\!S^2)$ & 24751 & 143900 & 0 & 0 & 0 & 5:22:22.520 \\[-0.5mm]
\texttt{\_HP2} & 91 & 455 & 0 & 0 & 0 & 0.356 \\[-0.5mm]
\texttt{contractible\_vertex\_homogeneous} & 1231 & 12380 & 0 & 0 & 0 & 46.431 \\
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\addlinespace
\end{longtable}
}
\begin{table}
\small\centering
\defaultaddspace=0.15em
\caption{Distribution of fundamental groups for random $2$-complexes
with $25$ vertices (100~runs for each $p$).}\label{tbl:rand_2compl_n25}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\hspace{9mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{}}
\\\toprule
\addlinespace
\addlinespace
$p$ & $0$ & $F(1)$ & $F(2)$ & $F(3)$ & $F(4)$ & $F(5)$ & $F(6)$ & $F(7)$ & $F(8)$ & $F(9)$ & $F(10)$ & $F(\geq 11)$ & `non-free' \\ \midrule
\addlinespace
\addlinespace
0.40 & 100 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.39 & 100 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.38 & 99 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.37 & 100 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.36 & 98 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.35 & 96 & 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.34 & 99 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.33 & 97 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.32 & 95 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.31 & 97 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.30 & 97 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.29 & 88 & 10 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.28 & 83 & 16 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.27 & 83 & 15 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.26 & 73 & 19 & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.25 & 73 & 21 & 3 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.24 & 66 & 27 & 6 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.23 & 39 & 39 & 20 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.22 & 31 & 39 & 20 & 8 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.21 & 23 & 24 & 35 & 7 & 7 & 2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.20 & 16 & 33 & 26 & 16 & 5 & 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.19 & 8 & 18 & 24 & 16 & 18 & 7 & 7 & 1 & 0 & 1 & 0 & 0 & 0 \\[-0.5mm]
0.18 & 9 & 13 & 24 & 18 & 19 & 8 & 3 & 1 & 4 & 1 & 0 & 0 & 0 \\[-0.5mm]
0.17 & 2 & 6 & 10 & 20 & 9 & 21 & 13 & 10 & 5 & 3 & 0 & 1 & 0 \\[-0.5mm]
0.16 & 0 & 1 & 3 & 13 & 12 & 17 & 21 & 9 & 6 & 4 & 6 & 8 & 0 \\[-0.5mm]
0.15 & 0 & 0 & 0 & 4 & 16 & 5 & 10 & 14 & 10 & 10 & 7 & 24 & 0 \\[-0.5mm]
0.14 & 0 & 0 & 0 & 0 & 0 & 1 & 3 & 5 & 11 & 13 & 5 & 62 & 0 \\[-0.5mm]
0.13 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 3 & 6 & 89 & 0 \\[-0.5mm]
0.12 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 96 & 1 \\[-0.5mm]
0.11 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 92 & 8 \\[-0.5mm]
0.10 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 98 & 2 \\[-0.5mm]
0.09 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 99 & 1 \\[-0.5mm]
0.08 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.07 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.06 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\
\bottomrule
\end{tabular*}
\end{table}
\begin{table}
\small\centering
\defaultaddspace=0.15em
\caption{Distribution of fundamental groups for random $2$-complexes
with $50$ vertices (100~runs for each $p$).}\label{tbl:rand_2compl_n50}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\hspace{9mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{\hspace{2mm}}r@{}}
\\\toprule
\addlinespace
\addlinespace
$p$ & 0 & $F(1)$ & $F(2)$ & $F(3)$ & $F(4)$ & $F(5)$ & $F(6)$ & $F(7)$ & $F(8)$ & $F(9)$ & $F(10)$ & $F(\geq 11)$ & `non-free' \\\midrule
\addlinespace
\addlinespace
0.25 & 100 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.24 & 100 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.23 & 99 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.22 & 99 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.21 & 97 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.20 & 98 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.19 & 96 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.18 & 90 & 10 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.17 & 81 & 18 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.16 & 75 & 24 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.15 & 68 & 26 & 4 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.14 & 43 & 27 & 21 & 8 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.13 & 16 & 39 & 31 & 10 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.12 & 8 & 15 & 22 & 19 & 25 & 6 & 3 & 1 & 1 & 0 & 0 & 0 & 0 \\[-0.5mm]
0.11 & 1 & 4 & 9 & 10 & 16 & 20 & 21 & 8 & 7 & 1 & 1 & 2 & 0 \\[-0.5mm]
0.10 & 0 & 0 & 1 & 4 & 5 & 6 & 13 & 17 & 11 & 15 & 10 & 18 & 0 \\[-0.5mm]
0.09 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 4 & 4 & 5 & 84 & 0 \\[-0.5mm]
0.08 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.07 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.06 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 95 & 5 \\[-0.5mm]
0.05 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 86 & 14 \\[-0.5mm]
0.04 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.03 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.02 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\[-0.5mm]
0.01 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\
\bottomrule
\end{tabular*}
\end{table}
\pagebreak
\section{Introduction}
The phase diagram of QCD at finite temperature and baryon density is still largely unknown
today, because
lattice QCD suffers from a severe sign problem when the chemical potential for baryon number
is non-vanishing.
Several methods have been devised to circumvent this
obstacle (see e.g. \cite{deForcrand:2010ys} and references therein), but all of them introduce
additional approximations that are valid for small quark chemical potentials only, $\mu/T\lesssim1$.
In order to reach higher chemical potentials and/or low temperatures,
methods are required which at least potentially may solve this
problem. Among these are Complex Langevin Dynamics (CLD)
\cite{Aarts:2013bla,Aarts:2013uxa},
transformation of the degrees of freedom into so-called dual variables as demonstrated in scalar models
\cite{Gattringer:2012df,Delgado:2012tm},
and the formulation of quantum field theories on a Lefschetz thimble \cite{Cristoforetti:2012su}.
In particular, CLD has recently been applied to full QCD in a previously inaccessible
parameter range \cite{denes}.
However, even if
an approach should finally succeed in solving the sign problem, it will remain very
hard to study the regime of cold and dense matter. This is because, in order to avoid
the limiting artifact of saturation at finite lattice spacing, very fine
lattices are required for high density, which implies in turn very
large temporal lattice extents near $T=0$.
In this work we further elaborate on
an effective theory approach \cite{Langelage:2010yr,Fromm:2011qi,Fromm:2012eb,procs},
where analytic strong coupling and hopping expansion methods are used to derive
an effective lattice action whose numerical simulation is feasible also in the cold and dense regime.
The sign problem can be handled by complex Langevin simulations in a controlled way, and in certain
parameter ranges even Monte Carlo simulations are possible. Moreover, the effective action
resembles a three-dimensional spin model, such that the numerical effort is vastly
smaller than for full lattice QCD simulations. At the present stage of the project, simulations can
still be run on time scales of days on university PC clusters.
The drawback is that the effective action is
only valid in parameter ranges where the expansion converges, which is
currently restricted to the heavy mass region and the confined phase. Even there,
the effective theory is unsuitable for long range correlation functions, but it
gives accurate results for bulk thermodynamic quantities and phase transitions \cite{test}.
In particular, it has already provided
predictions with better than 10\% accuracy for the critical couplings of $SU(2)$ and $SU(3)$
Yang--Mills theory \cite{Langelage:2010yr}, the critical quark masses where the deconfinement transition
changes to a crossover \cite{Fromm:2011qi} and the tricritical point of the deconfinement transition
at imaginary chemical potential \cite{pinke}.
A similar approach is used in
\cite{Unger:2011it,Fromm:2011kq,Kawamoto:2005mq,Nakano:2010bg}
with staggered fermions.
There, the chiral regime can be studied directly but the strong coupling series is much harder to
compute and no continuum extrapolations are possible so far.
The paper is organised as follows. In section \ref{sec:efft} we summarise the derivation of the
effective action in the pure gauge sector and give a systematic calculation of the fermion determinant. In section \ref{sec:an} we analyse the effective action by analytic methods
to leading and next-to-leading order in the small effective couplings. Section \ref{sec:lang}
is devoted to a systematic study of the validity of complex Langevin simulations. Finally, section \ref{sec:phys} contains our physics results for the cold and dense
regime of QCD with heavy quarks. Readers not interested in the technical aspects of the derivation
and simulation may skip sections \ref{sec:efft}, \ref{sec:lang} and read
sections \ref{sec:an}, \ref{sec:phys} only.
\section{The effective action \label{sec:efft}}
The starting point is a $(3+1)$-dimensional lattice with Wilson's gauge and
fermion
actions for $N_f$ flavours, which after Grassmann integration may be written as
\begin{eqnarray}
Z=\int[dU_\mu]\;\exp\left[-S_g\right]\prod_{f=1}^{N_f}\det\left[Q^f\right]\;,\qquad
-S_g=\frac{\beta}{2N_c}\sum_p\left[\mathrm{Tr}\, U_p+\mathrm{Tr}\, U_p^\dagger\right]\;,
\end{eqnarray}
with elementary plaquettes $U_p$, the quark hopping matrix for the flavour $f$,
\begin{eqnarray}
&&(Q^f)^{ab}_{\alpha\beta,xy}=\delta^{ab}\delta_{\alpha\beta}\delta_{xy}\\ \hspace*{-1.5cm}
&&-\kappa_f\sum_{\nu=0}^3\left[e^{a\mu_f\delta_{\nu0}}(1+\gamma_\nu)_{\alpha\beta}U_\nu^{ab}(x)
\delta_{x,y-\hat{\nu}}+e^{-a\mu_f\delta_{\nu0}}(1-\gamma_\nu)_{\alpha\beta}U_{-\nu}^{ab}(x)
\delta_{x,y+\hat{\nu}}\right]\;,
\;\nonumber
\end{eqnarray}
and $U_{-\nu}^{ab}(x) = U_{\nu}^{\dagger ab}(x-\hat{\nu})$. The effective action is then defined by integrating out the spatial link variables
\begin{eqnarray}
Z&=&\int[dU_0]\;\exp[-S_\mathrm{eff}]\;,\nonumber\\
\exp[-S_{\mathrm{eff}}]&\equiv&\int[dU_k]\exp\left[-S_g\right]\prod_{f=1}^{N_f}\det\left[Q^f\right]\;, \nonumber\\ S_{\mathrm{eff}} &=& \sum_{i=0}^{\infty}S^s_i(\beta, \kappa_f,N_{\tau};W) + \sum_{i=1}^{\infty} S^a_i(\beta, N_{\tau}, \kappa_f, \mu_f;W) \nonumber\\
&=& \sum_{i=0}^{\infty}S^g_{i}(\beta, \kappa_f,N_{\tau};W) + \sum_{i=0}^{\infty} S^f_{i}(\beta, N_{\tau}, \kappa_f, \mu_f;W) \;.
\label{eq_defeffth}
\end{eqnarray}
In the first line we have split the action into a part which is $Z(N_c)$ centre symmetric and a part containing the symmetry breaking terms. For the present work it is more convenient to split the action into a purely gluonic part
and a fermionic part due to the determinant, which contains both symmetric and symmetry breaking
contributions.
All terms depend only
on temporal Wilson lines $W_{\vec{x}}$ or their traces, the Polyakov loops,
\begin{eqnarray}
L_{\vec{x}}\equiv\mathrm{Tr}\, W_{\vec{x}}\equiv \mathrm{Tr}\prod_{\tau=1}^{N_\tau}U_0\left(\vec{x},\tau\right)\;.
\end{eqnarray}
The effective action features an infinite tower of interaction
terms between loops to all powers and at all distances, where $S^{x}_{i}$ denotes the $i$-point interactions.
These are completely determined in terms of Wilson lines and the parameters of the original theory.
Note that, without truncations, the effective action is unique and exact.
Non-perturbative determinations of the effective action \cite{poly1,poly2,poly3,poly4,poly5}
can in principle be applied
at all parameter values. In practice they necessarily imply truncation and modelling,
which may have to be different in different parameter regimes.
In our approach we compute the effective action in a combined strong coupling and
hopping parameter expansion, which orders terms according to their leading powers in $\beta, \kappa$.
By summing up all temporal windings we
make sure that we
have the complete dependence on chemical potential, or fugacity, in each order of the hopping
parameter expansion.
\subsection{Pure gauge theory}
For the Yang-Mills part, it is advantageous to perform a character expansion
\begin{eqnarray}
\exp\left[\frac{\beta}{2N_c}\Big(\mathrm{Tr}\, U_p+\mathrm{Tr}\, U_p^\dagger\Big)\right]
=c_0(\beta)\left[1+\sum_{r\neq0}d_ra_r(\beta)\chi_r(U_p)\right]\;,
\end{eqnarray}
where the factor $c_0(\beta)$ can be neglected as it is independent of gauge links and
cancels in
expectation values. In earlier publications
\cite{Fromm:2011qi,Langelage:2010yr,Langelage:2010nj}, we have shown how to compute
the effective
gauge theory up to rather high orders in the fundamental character expansion
coefficient
$u(\beta)\equiv a_f(\beta) = \frac{\beta}{18} + \ldots$ . In leading order we have a chain of $N_\tau$ fundamental
plaquettes winding around the temporal direction and closing via periodic boundary
conditions. Therefore the leading order is a two-point interaction,
\begin{eqnarray}
S^g_{2} (\beta, N_{\tau}) =\lambda(u,N_\tau)\sum_{<\vec{x} \vec{y}>}\left(L_{\vec{x}}L_{\vec{y}}^\ast+L_{\vec{x}}^\ast
L_{\vec{y}}\right)\;,
\qquad\lambda(u,N_\tau)=u^{N_\tau}\Big[1+\ldots\Big]\;,
\label{eq_seffgauge}
\end{eqnarray}
where higher order corrections of $\lambda(u,N_\tau)$ as well as a discussion of
higher order
interaction terms can be found in \cite{Langelage:2010yr}. In the leading order
expression of eq.~(\ref{eq_seffgauge}) we already see that
$\lambda(u,N_\tau)$ is suppressed for large $N_\tau$, since $u<1$, see also
\cite{Fromm:2011qi}
for a further discussion. In this work we aim at temperatures $T\leq 10$ MeV with lattice
parameters $\beta\lesssim 6, N_\tau\geq 100$, where $\lambda\lesssim 10^{-25}$ can be safely neglected.
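As a rough numerical illustration (our own back-of-the-envelope check in Python, using only the leading-order value $u\approx\beta/18$; the resummed $u(\beta)$ is larger, but the conclusion is unchanged):
\begin{verbatim}
# Order-of-magnitude estimate of lambda = u^{N_tau} at beta = 6, N_tau = 100.
# Assumption: only the leading strong-coupling value u ~ beta/18 is used.
beta, N_tau = 6.0, 100
u = beta / 18.0
print(u ** N_tau)            # ~1.9e-48 at leading order
print(10 ** (-25 / N_tau))   # u ~ 0.562 would already give lambda ~ 1e-25
\end{verbatim}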
\subsection{Static quark determinant}
The quark determinant is expanded in a hopping expansion.
In order to keep the complete
dependence on the chemical potential, it is useful to split the quark matrix into positive and negative temporal and spatial hops,
\begin{eqnarray}
Q=1-T-S=1-T^+-T^--S^+-S^-\;.
\end{eqnarray}
The static determinant is
then given
by neglecting the spatial parts,
\begin{eqnarray}
\det[Q_{\mathrm{stat}}] &=& \det[1-T] = \det[1-T^+ - T^-] \nonumber \\
&=& \det \Big[1-\kappa e^{a\mu}(1+\gamma_0)U_0(x) \delta_{x,y-\hat{0}} \nonumber\\ &&\hspace*{1.2cm}
-\kappa e^{-a\mu}(1-\gamma_0)U^{\dagger}_0(x-\hat{0}) \delta_{x,y+\hat{0}}\Big]\;,
\end{eqnarray}
with propagation in the temporal direction only. Calculating the space and spin
determinant we get
\begin{eqnarray}
\det[Q_{\mathrm{stat}}]&=& \prod_{\vec{x}}
\det \Big[1+(2 \kappa e^{a \mu})^{N_{\tau}}W_{\vec{x}}\Big]^2
\det \Big[1+(2 \kappa e^{-a \mu})^{N_{\tau}}W^{\dagger}_{\vec{x}}\Big]^2\;.
\label{q_static}
\end{eqnarray}
Note that this includes all windings of Wilson lines around the temporal direction and thus
the full fugacity dependence.
A well-known
relation valid
for $SU(3)$ then
allows us to reformulate this in terms of Polyakov loops,
\begin{eqnarray}
\det[Q _{\mathrm{stat}}]&=& \prod_{\vec{x}}
\left[1 + c L_{\vec{x}} + c^2 L^*_{\vec{x}}+c^3\right]^2
\left[1 + \bar{c} L^*_{\vec{x}} + \bar{c}^2 L_{\vec{x}}+\bar{c}^3\right]^2,
\label{eq_qsim}
\end{eqnarray}
with the abbreviation
\begin{equation}
c(\mu)\equiv\left(2\kappa e^{a\mu}\right)^{N_\tau}= e^{\frac{\mu-m}{T}} \equiv \bar{c}(-\mu)\;,
\end{equation}
and the constituent quark mass $am=-\ln(2\kappa)=\frac{am_B}{3}$, to leading
order of eq.~(\ref{eq:hadron}). When $\det[Q_{\mathrm{stat}}]$ is exponentiated, the parameter $c$ also constitutes the effective one-point coupling constant of $S^f_1$ to leading order \cite{Fromm:2011qi},
\begin{equation}
h_1=c, \quad \bar{h}_1=\bar{c}\;.
\end{equation}
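The SU(3) relation used in eq.~(\ref{eq_qsim}) is the characteristic polynomial identity $\det(1+cW)=1+c\,\mathrm{Tr}\,W+c^2\,\mathrm{Tr}\,W^\dagger+c^3$, valid for $\det W=1$. The following Python sketch (our cross-check only; the projection of Haar-random $U(3)$ matrices to $SU(3)$ is an assumption of the sketch) verifies it numerically:
\begin{verbatim}
# Verify det(1 + c W) = 1 + c TrW + c^2 TrW^dagger + c^3 for W in SU(3).
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(1)
c = 0.3
for _ in range(5):
    U = unitary_group.rvs(3, random_state=rng)
    W = U / np.linalg.det(U) ** (1.0 / 3.0)      # project to det W = 1
    lhs = np.linalg.det(np.eye(3) + c * W)
    rhs = 1 + c * np.trace(W) + c**2 * np.trace(W.conj().T) + c**3
    assert np.isclose(lhs, rhs)
print("identity verified")
\end{verbatim}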
\subsection{Kinetic quark determinant}
In order to compute a systematic hopping expansion about the static limit, we define the kinetic quark
determinant
\begin{eqnarray}
\det[Q]&\equiv&\det[Q_{\mathrm{stat}}]\det[Q_{\mathrm{kin}}]\;,\nonumber\\
\det[Q_{\mathrm{kin}}]&=&\det[1-(1-T)^{-1}(S^++S^-)] \nonumber\\
&\equiv&\det[1-P-M]=\exp\left[\mathrm{Tr}\, \ln
(1-P-M)\right]\;,
\label{eq_detqkin}
\end{eqnarray}
which we then split into parts describing quarks moving in positive and negative
spatial
directions, $P=\sum_kP_k$ and $M=\sum_kM_k$. The reason for this is that the trace
occurring
in eq.~(\ref{eq_detqkin}) is also a trace in coordinate space. This means
that only closed loops contribute and hence
we need the same number of $P$s and $M$s in the expansion of the logarithm.
Through $\mathcal{O}\left(\kappa^4\right)$ we have
\begin{eqnarray}
\det[Q_{\mathrm{kin}}]&=&\exp\left[-\mathrm{Tr}\, PM-\mathrm{Tr}\, PPMM-
\frac{1}{2}\mathrm{Tr}\, PMPM\right]\left[1+\mathcal{O}(\kappa^6)\right] \\
&=&\left[1-\mathrm{Tr}\, PM - \mathrm{Tr}\, PPMM -
\frac{1}{2} \mathrm{Tr}\, PMPM+\frac{1}{2}\left(\mathrm{Tr}\, PM\right)^2\right]
\left[1+\mathcal{O}(\kappa^6)\right]\;. \nonumber
\label{eq_detqkin2}
\end{eqnarray}
The next step is to consider the different directions in $P$ and
$M$ which also need to come in pairs,
\begin{eqnarray}
\sum_{ij}\mathrm{Tr}\, P_iM_j&=&\sum_i\mathrm{Tr}\, P_iM_i\;,\label{eq_qdet2}\\
\sum_{ijkl}\mathrm{Tr}\, P_iP_jM_kM_l&=&\sum_{i}\mathrm{Tr}\, P_iP_iM_iM_i+
\sum_{i\neq j}\mathrm{Tr}\, P_iP_jM_iM_j \nonumber \\
&& +\sum_{i\neq
j}\mathrm{Tr}\, P_iP_jM_jM_i\label{eq_ppmm}\;,\\
\frac12 \sum_{ijkl}\mathrm{Tr}\, P_iM_jP_kM_l&=& \frac12 \sum_i \mathrm{Tr}\, P_iM_iP_iM_i+
\frac12 \sum_{i\neq j}\mathrm{Tr}\, P_iM_iP_jM_j \nonumber \\
&&+ \frac12 \sum_{i\neq
j}\mathrm{Tr}\, P_iM_jP_jM_i\label{eq_pmpm}\;,\\
\frac12 \sum_{ijkl}\mathrm{Tr}\, P_iM_j\mathrm{Tr}\, P_kM_l&=&\frac12 \sum_{i,
j}\mathrm{Tr}\, P_iM_i\mathrm{Tr}\, P_jM_j \label{eq_trpm2}\;.
\end{eqnarray}
\subsection{Static quark propagator}
We now compute the static quark
propagator
$(1-T)^{-1}$ appearing in eq.~(\ref{eq_detqkin}).
Since $(1+\gamma_\mu)(1-\gamma_\mu)=0$, hops in forward and backward time
direction
do not mix and
the full static quark propagator is given by
\begin{eqnarray}
(Q_{\mathrm{stat}})^{-1}=(Q^+_{\mathrm{stat}})^{-1}
+(Q^-_{\mathrm{stat}})^{-1}-1\;.
\end{eqnarray}
In order to compute the positive static quark propagator, we
use the series expansion
\begin{eqnarray}
(Q^+_{\mathrm{stat}})^{-1}=\left(1-T^+\right)^{-1}=\sum_{n=0}^\infty
(T^+)^n\;.
\end{eqnarray}
The inverse is then given by
\begin{eqnarray}
(Q^+_{\mathrm{stat}})^{-1}_{\tau_1\tau_2} &=& \delta_{\tau_1\tau_2}\left(1-qz^{N_\tau}W\right)
+qz^{\tau_2-\tau_1}W(\tau_1,\tau_2)\Big[\Theta(\tau_2-\tau_1)-z^{N_\tau}
\Theta(\tau_1-\tau_2)\Big]\;,\nonumber\\
q&\equiv&\frac{1}{2}(1+\gamma_0)\left(1+z^{N_\tau} W\right)^{-1}\;,\qquad z = 2\kappa e^{a\mu}\;.
\end{eqnarray}
$W(\tau_1,\tau_2)$ is a temporal Wilson line from $\tau_1$ to $\tau_2$ and we have suppressed its spatial
location. If
$\tau_1=\tau_2$, the Wilson line winds around the lattice, $W(\tau_1,\tau_1)=W$.
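As a sanity check of this closed form, one can invert $1-T^+$ numerically in a toy setting; the following Python sketch (ours) does so for one colour, i.e.~$U(1)$ links with the spin projector stripped, using the convention $\Theta(0)=0$:
\begin{verbatim}
# Toy check of the closed form for (1 - T^+)^{-1}: one colour, no spin.
import numpy as np

rng = np.random.default_rng(2)
Ntau, z = 6, 0.4
U = np.exp(1j * rng.uniform(0, 2 * np.pi, Ntau))   # links U_0(tau)

T = np.zeros((Ntau, Ntau), complex)                # T^+ hops tau -> tau+1
for t in range(Ntau - 1):
    T[t, t + 1] = z * U[t]
T[-1, 0] = -z * U[-1]                              # antiperiodic boundary
exact = np.linalg.inv(np.eye(Ntau) - T)

def Wline(t1, t2):                                 # temporal Wilson line
    return np.prod(U[t1:t2]) if t2 > t1 else np.prod(np.r_[U[t1:], U[:t2]])

w = np.prod(U)                                     # winding line, W(t,t) = w
q = 1.0 / (1 + z**Ntau * w)
closed = np.empty((Ntau, Ntau), complex)
for t1 in range(Ntau):
    for t2 in range(Ntau):
        if t1 == t2:
            closed[t1, t2] = 1 - q * z**Ntau * w
        elif t2 > t1:
            closed[t1, t2] = q * z**(t2 - t1) * Wline(t1, t2)
        else:
            closed[t1, t2] = -q * z**(Ntau + t2 - t1) * Wline(t1, t2)
print(np.allclose(exact, closed))                  # -> True
\end{verbatim}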
The contribution in negative time direction
$(Q^-_{\mathrm{stat}})^{-1}_{\tau_1\tau_2}$ can then be obtained from
$(Q^+_{\mathrm{stat}})^{-1}_{\tau_1\tau_2}$ by the following replacements
\begin{eqnarray}
\tau_1\leftrightarrow\tau_2\;,\qquad
W(\tau_1,\tau_2)\leftrightarrow W^\dagger(\tau_1,\tau_2)\;,\qquad
\mu\leftrightarrow-\mu\;,
\end{eqnarray}
and reads
\begin{eqnarray}
(Q^-_{\mathrm{stat}})^{-1}_{\tau_1\tau_2}&=&\delta_{\tau_1\tau_2}\left(1-\bar{q}\bar{z}^{N_\tau} W^\dagger
\right)+\bar{q}\bar{z}^{\tau_1-\tau_2}W^\dagger(\tau_1,\tau_2)\Big[\Theta(\tau_1-\tau_2)-\bar{z}^{N_\tau}
\Theta(\tau_2-\tau_1)\Big]\;, \nonumber \\
\bar{q}&=&\frac{1}{2}(1-\gamma_0)\left(1+\bar{z}^{N_\tau}W^\dagger\right)^{-1}\;,\qquad
\bar{z}=2\kappa e^{-a\mu}\;.
\end{eqnarray}
Finally we split the temporal quark propagator in spin space as well as in
propagation in positive
and negative temporal direction according to
\begin{equation}
\label{eq_qstat}
\left(Q_{\mathrm{stat}}\right)^{-1}= A + \gamma_0 B = A^++\gamma_0B^+ + A^--\gamma_0 B^-\;,
\end{equation}
\begin{eqnarray}
A^+_{xy}&=&\frac12 \left[1-\frac{c W}{1+c W}\right]\delta_{xy}
+\frac{1}{2}z^{\tau_y-\tau_x}\frac{W(\tau_x,\tau_y)}{1+c W}\bigg[\Theta(\tau_y-\tau_x)-c
\Theta(\tau_x-\tau_y)\bigg]\delta_{\vec{x}\vec{y}}\;,\nonumber\\
B^+_{xy}&=&-\frac{1}{2}\frac{cW}{1+cW}\delta_{xy}
+\frac{1}{2}z^{\tau_y-\tau_x}\frac{W(\tau_x,\tau_y)}{1+cW}\bigg[\Theta(\tau_y-\tau_x)-c
\Theta(\tau_x-\tau_y)\bigg]\delta_{\vec{x}\vec{y}}\;,\nonumber\\
A^-_{xy}&=&\frac12 \left[1-\frac{\bar{c}W^\dagger}{1+\bar{c}W^\dagger}\right]\delta_{xy}
+\frac{1}{2}\bar{z}^{\tau_x-\tau_y}\frac{W^\dagger(\tau_x,\tau_y)}{1+\bar{c}W^\dagger}\bigg[\Theta(\tau_x-\tau_y)-\bar{c}
\Theta(\tau_y-\tau_x)\bigg]\delta_{\vec{x}\vec{y}}\;,\nonumber\\
B^-_{xy}&=&-\frac{1}{2}\frac{\bar{c}W^\dagger}{1+\bar{c}W^\dagger}\delta_{xy}
+\frac{1}{2}\bar{z}^{\tau_x-\tau_y}\frac{W^\dagger(\tau_x,\tau_y)}{1+\bar{c}W^\dagger}\bigg[\Theta(\tau_x-\tau_y)-\bar{c}
\Theta(\tau_y-\tau_x)\bigg]\delta_{\vec{x}\vec{y}}\;.\nonumber
\end{eqnarray}
\subsection{Gauge integrals for the leading fermionic action \label{sec:gi}}
Next we compute the leading strong coupling contribution to the effective action by performing the group integrations. We will arrange the fermionic part of the effective action as
\begin{eqnarray}
\int [dU_k] \prod_{f} \det[Q^f_{\mathrm{kin}}] = e^{-\sum_{i = 1}^{\infty} S^f_{i}(\beta = 0, \kappa_f, N_{\tau},\mu_f)}\;.
\end{eqnarray}
Since it is not known how to analytically perform the gauge integral over links in the exponent, we have expanded it in a Taylor series. After the integration we shall see that it is possible to resum some terms back into an exponential.
At the order $\kappa^4$ there are zero-point contributions (or vacuum graphs) from closed hops around a plaquette.
In a strong coupling series these only contribute after being dressed with a plaquette,
$\sim \kappa^4 u$, and thus are neglected here.
The one-point contributions of the Polyakov
loops constitute the static determinant and have been computed already.
\subsubsection{Two-point interaction}
When dealing with more than one trace, as in $\Big(\sum_{i}\mathrm{Tr}\, P_iM_i \Big)^2$, it is necessary to display the spatial coordinates explicitly, i.e.
\begin{eqnarray}
\Big(\sum_i \mathrm{Tr}\, P_i M_i\Big)^2 = \sum_{\vec{x},i} (\mathrm{Tr}\, P_{\vec{x}, i}M_{\vec{x}, i}) \sum_{\vec{y},j} (\mathrm{Tr}\, P_{\vec{y},j}M_{\vec{y},j})\;.
\label{PM^2}
\end{eqnarray}
Here we have to consider three different possibilities: the two nearest-neighbour
contributions may share $0$, $1$ or $2$ sites, and only the last case contributes to the two-point interaction.
To order $\kappa^4$ the latter is
\begin{eqnarray}
e^{-S^f_{2}} \equiv\int[dU_k] \Big[
-\sum_{i}\mathrm{Tr}\, P_iM_i
- \frac12 \sum_{i}\mathrm{Tr}\, P_iM_i P_i M_i \\
+ \frac12 \sum_{\vec{x},i} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i}
\Big]\;. \nonumber
\end{eqnarray}
The first contribution to the two-point interaction is of order $\kappa^2$:
\begin{eqnarray}
&&-\int[dU_k]\sum_{i}\mathrm{Tr}\, P_iM_i=
-\sum_i\int[dU_k]\mathrm{Tr}\, \Big[Q_{\mathrm{stat}}^{-1}\,S^+_i\,Q_{\mathrm{stat}}^{-1}\,
S^-_i\Big]\\
&=&-\frac{8 \kappa^2}{N_c}\sum_{u,i} \mathrm{Tr}\, B_{u,u}
\mathrm{Tr}\,B_{u +\hat{\imath},u+\hat{\imath}} \nonumber \\
&=& -2h_2 \sum_{\vec{x},i} \Bigg[\bigg(\mathrm{Tr}\,
\frac{c W_{\vec{x}}}{1 + c W_{\vec{x}}} - \mathrm{Tr}\, \frac{\bar{c}
W^{\dagger}_{\vec{x}}}{1 + \bar{c}
W^{\dagger}_{\vec{x}}} \bigg)\bigg(
\mathrm{Tr}\,
\frac{c W_{\vec{x}+\hat{\imath}}}{1 + c W_{\vec{x}+\hat{\imath}}}
- \mathrm{Tr}\, \frac{\bar{c}
W^{\dagger}_{\vec{x}+\hat{\imath}}}{1 + \bar{c} W^{\dagger}_{\vec{x}+\hat{\imath}}}
\bigg)
\Bigg] \;.\nonumber
\end{eqnarray}
Here we have used the expressions eq.~(\ref{eq_qstat}) for $B$,
evaluated the trace over
spin and coordinate space and introduced the coupling
\begin{equation}
h_2=\frac{\kappa^2N_\tau}{N_c}\;.
\end{equation}
The group integrations have been computed via
\begin{eqnarray}
\int dU \,U_{ij}U^\dagger_{kl}=\frac{1}{N_c}\delta_{il}\delta_{jk}\;.
\end{eqnarray}
Note that this enforces the spatial link variables to be at the same
temporal location and yields a factor $N_\tau$ rather than $N_\tau^2$
from the two temporal traces. From now on we will skip the last step, where one
has to insert the definitions of $A$ and $B$ and perform the temporal sums. Explicit expressions
for all types of terms appearing in the following can be found in the appendix.
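The elementary one-link integral above (and its four-matrix generalisation below) can also be checked by brute force; a small Monte Carlo sketch in Python (ours; SU(3) sampling as in the earlier sketch):
\begin{verbatim}
# Monte Carlo check of  int dU U_{ij} U^dagger_{kl} = delta_il delta_jk / Nc.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(3)
N, samples = 3, 20000
acc = np.zeros((N, N, N, N), complex)
for _ in range(samples):
    U = unitary_group.rvs(N, random_state=rng)
    U /= np.linalg.det(U) ** (1.0 / N)             # project to SU(3)
    acc += np.einsum('ij,lk->ijkl', U, U.conj())   # U_{ij} (U^dagger)_{kl}
acc /= samples
exact = np.einsum('il,jk->ijkl', np.eye(N), np.eye(N)) / N
print(np.abs(acc - exact).max())                   # O(1/sqrt(samples))
\end{verbatim}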
The next correction to the two-point interaction is of order $\kappa^4$:
\begin{eqnarray}
&& -\frac12 \int[dU_k]\sum_{i}\mathrm{Tr}\, P_iM_i P_i M_i = \\
&&-\frac{16\kappa^4}{N_c^2}\sum_{u \neq v, i}\left[\mathrm{Tr}\,
B_{u,v}B_{v,u}
\Big(\mathrm{Tr}\, B_{u+\hat{\imath},u+\hat{\imath}}\Big)^2 +
\Big(\mathrm{Tr}\,B_{u,u}\Big)^2
\mathrm{Tr}\,B_{u+\hat{\imath},v+\hat{\imath}}B_{v+\hat{\imath},u+\hat{\imath}} \right]\nonumber\\
&&-\frac{16\kappa^4}{(N_c^2 -
1)}\sum_{u, i}\bigg\lbrace\mathrm{Tr}\,B_{u,u}B_{u,u}
\Big(\mathrm{Tr}\, B_{u+\hat{\imath},u+\hat{\imath}}\Big)^2 +
\Big(\mathrm{Tr}\,B_{u,u}\Big)^2
\mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}B_{u+\hat{\imath},u+\hat{\imath}}\nonumber\\
&&-\frac{1}{N_c}\left[\mathrm{Tr}\,B_{u,u}B_{u,u} \mathrm{Tr}\,B_{u +\hat{\imath}, u +\hat{\imath}}B_{u +\hat{\imath}, u +\hat{\imath}}
+ \Big(\mathrm{Tr}\, B_{u,u}\Big)^2 \Big(\mathrm{Tr}\,
B_{u+\hat{\imath},u+\hat{\imath}}\Big)^2\right]\bigg\rbrace\;.\nonumber \label{eq_det44}
\end{eqnarray}
In this calculation it can happen that there is a spatial
link which is occupied by four matrices and we need the group integral (see e.g.
\cite{Creutz:1978ub})
\begin{eqnarray}
\int
dU\,U_{i_1j_1}U_{i_2j_2}U^\dagger_{k_1l_1}U^\dagger_{k_2l_2}&=&\frac{1}{N_c^2-1}\Big[\delta_{i_1l_1}\delta_{i_2l_2}\delta_{j_1k_1}\delta_{j_2k_2}+\delta_{i_1l_2}\delta_{i_2l_1}\delta_{j_1k_2}\delta_{j_2k_1}\Big]\\
&-&\frac{1}{N_c(N_c^2-1)}\Big[\delta_{i_1l_2}\delta_{i_2l_1}\delta_{j_1k_1}\delta_{j_2k_2}+\delta_{i_1l_1}\delta_{i_2l_2}\delta_{j_1k_2}\delta_{j_2k_1}\Big]\;.
\nonumber
\end{eqnarray}
The next contribution of order $\kappa^4$ comes from eq.~(\ref{PM^2}), which is a two-point interaction in the case that $\vec{x} = \vec{y}$ and $i = j$:
\begin{eqnarray}
&& \frac12 \int[dU_k]\sum_{\vec{x},i} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \\
&=&\frac{32\kappa^4}{N_c^2}\sum_{u \neq v, i}\left[ \Big(\mathrm{Tr}\, B_{u,u}\Big)^2 \Big(\mathrm{Tr}\,
B_{v+\hat{\imath},v+\hat{\imath}}\Big)^2
+\mathrm{Tr}\,B_{u,v}B_{v,u} \mathrm{Tr}\,B_{u+\hat{\imath},v+\hat{\imath}}B_{v+\hat{\imath},u+\hat{\imath}}\right]\nonumber\\
&&+\frac{32\kappa^4}{N_c^2-1}\sum_{u,i}\Bigg\lbrace\Big(\mathrm{Tr}\,
B_{u,u}\Big)^2 \Big(\mathrm{Tr}\,
B_{u+\hat{\imath},u+\hat{\imath}}\Big)^2+\mathrm{Tr}\,B_{u,u}B_{u,u}\mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}
B_{u+\hat{\imath},u+\hat{\imath}}\nonumber\\
&&-\frac{1}{N_c}\bigg[\mathrm{Tr}\,B_{u,u}B_{u,u}
\Big(\mathrm{Tr}\, B_{u+\hat{\imath},u+\hat{\imath}}\Big)^2 +
\Big(\mathrm{Tr}\,B_{u,u}\Big)^2
\mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}B_{u+\hat{\imath},u+\hat{\imath}}
\bigg]\Bigg\rbrace\;.\nonumber
\end{eqnarray}
Higher corrections to the two-point interaction start with $\mathcal{O}(\kappa^6)$.
\subsubsection{Three-point interaction}
The three-point interaction starts at $\mathcal{O}(\kappa^4)$:
\begin{eqnarray}
e^{-S^f_{3}} &\equiv&\int[dU_k] \Big[
-\sum_{i}\mathrm{Tr}\, P_i P_i M_i M_i
- \sum_{i \neq j}\mathrm{Tr}\, P_i P_j M_j M_i \\
&&- \frac12 \sum_{i \neq j}\mathrm{Tr}\, P_i M_i P_j M_j
- \frac12 \sum_{i \neq j}\mathrm{Tr}\, P_i M_j P_j M_i
+ \frac12 \sum_{\vec{x},\vec{y},i,j} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{y},j} M_{\vec{y},j}
\Big]\;. \nonumber
\end{eqnarray}
The different contributions are evaluated to be
\begin{align}
-\int[dU_k] \sum_{i}\mathrm{Tr}\, P_i P_i M_i M_i &= \nonumber \\
-\frac{32 \kappa^4}{N_c^2} \sum_{u,v,i} &\mathrm{Tr}\,
B_{u,u}\mathrm{Tr}\,A_{u + \hat{\imath}, v + \hat{\imath}} A_{v + \hat{\imath}, u + \hat{\imath}} \mathrm{Tr}\,B_{u + 2 \hat{\imath}, u + 2 \hat{\imath}}\;,
\end{align}
\begin{align}
-\int[dU_k] \sum_{i \neq j}\mathrm{Tr}\, P_i P_j M_j M_i &= \nonumber \\
-\frac{16 \kappa^4}{N_c^2} \sum_{u,v,i \neq j} &\mathrm{Tr}\,
B_{u-\hat{\imath},u-\hat{\imath}} \Big[\mathrm{Tr}\,A_{u,v}A_{v,u}+\mathrm{Tr}\,B_{u,v}B_{v,u}\Big] \mathrm{Tr}\,B_{u + \hat{\jmath}, u + \hat{\jmath}}\;,
\end{align}
\begin{align}
- \frac12 \int[dU_k] \sum_{i \neq j}\mathrm{Tr}\, P_i M_i P_j M_j &= \nonumber \\
-\frac{8 \kappa^4}{N_c^2}\sum_{u,v,i \neq j}
&\mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}
\Big[\mathrm{Tr}\,A_{u,v}A_{v,u}+\mathrm{Tr}\,B_{u,v}B_{v,u}\Big]
\mathrm{Tr}\, B_{u+\hat{\jmath},u+\hat{\jmath}}\;,\\
- \frac12 \int[dU_k] \sum_{i \neq j}\mathrm{Tr}\, P_i M_j P_j M_i &= \nonumber \\
-\frac{8 \kappa^4}{N_c^2}\sum_{u,v,i \neq j}
&\mathrm{Tr}\,B_{u-\hat{\imath},u-\hat{\imath}}
\Big[\mathrm{Tr}\,A_{u,v}A_{v,u}+\mathrm{Tr}\,B_{u,v}B_{v,u}\Big]
\mathrm{Tr}\, B_{u-\hat{\jmath},u-\hat{\jmath}}\;,
\end{align}
\begin{align}
\frac12 \int[dU_k] \sum_{\vec{x}, \vec{y}, i, j} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{y},j} M_{\vec{y},j} &= \nonumber \\
\frac{32\kappa^4}{N_c^2}\sum_{u,v,i,j}
&\mathrm{Tr}\, B_{u,u} \mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}\mathrm{Tr}\,
B_{v,v} \mathrm{Tr}\,B_{v+\hat{\jmath},v+\hat{\jmath}}\;,
\end{align}
where the sum is only over terms where the two traces share one spatial point.
\subsubsection{Four-point interaction}
There are only two four-point interactions to order $\kappa^4$:
\begin{eqnarray}
e^{-S^f_{4}} \equiv\int[dU_k]
\Big[-\sum_{i \neq j}\mathrm{Tr}\, P_i P_j M_i M_j + \frac12 \sum_{\vec{x},\vec{y},i,j} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{y},j} M_{\vec{y},j}\Big]\;.
\end{eqnarray}
After integration the first contribution vanishes in the strong coupling limit and only gives a non-zero contribution if a plaquette is inserted into the fermionic loop:
\begin{eqnarray}
\int[dU_k] \sum_{i \neq j}\mathrm{Tr}\, P_i P_j M_i M_j &=&\mathcal{O}(\kappa^4u)\;.
\end{eqnarray}
Since we only calculate the action to order $\kappa^m u^n$ with $m+n=4$ we neglect this term.
The second contribution is
\begin{eqnarray}
\frac12 \int[dU_k]\sum_{\vec{x},\vec{y},i,j} \mathrm{Tr}\, P_{\vec{x},i} M_{\vec{x},i} \mathrm{Tr}\, P_{\vec{y},j} M_{\vec{y},j}
&& = \\
\frac{32\kappa^4}{N_c^2}\sum_{u,v, i , j}
&& \mathrm{Tr}\, B_{u,u} \mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}\mathrm{Tr}\,
B_{v,v} \mathrm{Tr}\,B_{v+\hat{\jmath},v+\hat{\jmath}}\;, \nonumber
\end{eqnarray}
where the sum is only over terms where the traces share no common spatial point.
\begin{figure}
\scalebox{0.5}{
\begin{tikzpicture}
\draw(16,0) -- (20,0) -- (20,2) -- (22,2) -- (22,0) -- (28,0);
\draw(20.2,0) -- (20.2,1.8) -- (21.8,1.8) -- (21.8,0) -- (20.2,0);
\draw(32,0) -- (34,0) -- (34,2) -- (36,2) -- (36,0) -- (38,0) -- (38,-2) -- (42,-2)
-- (42,0) --
(44,0);
\draw(34.2,0) -- (34.2,1.8) -- (35.8,1.8) -- (35.8,0) -- (34.2,0);
\draw(38.2,0) -- (38.2,-1.8) -- (39.9,-1.8) -- (39.9,0) -- (38.2,0);
\draw(40.1,0) -- (40.1,-1.8) -- (41.8,-1.8) -- (41.8,0) -- (40.1,0);
\end{tikzpicture}
}
\caption{Finite gauge coupling corrections to the Polyakov line. After spatial link
integration
these graphs give rise to terms $\sim \mathrm{Tr} W$.}
\label{fig_pl}
\end{figure}
\subsection{Resummations}
In order to include as many terms as possible and improve convergence we perform a
resummation. Note that in order to perform the gauge integration, we had to expand the exponential
of hopping matrices, e.g.,
\begin{eqnarray}
e^{-\sum_i\mathrm{Tr} P_i M_i} = 1 - \sum_i \mathrm{Tr} P_i M_i + \frac12 \left(\sum_i \mathrm{Tr} P_i M_i\right)^2 + \mathcal{O}(\kappa^6)\;.
\end{eqnarray}
After the integration it is possible to resum many of the resulting terms back into an exponential,
\begin{eqnarray}
\int[dU_k] e^{-\sum_i\mathrm{Tr} P_i M_i} = 1
&-& \frac{8 \kappa^2}{N_c}\sum_{u,i} \mathrm{Tr}\, B_{u,u}
\mathrm{Tr}\,B_{u +\hat{\imath},u+\hat{\imath}} \nonumber \\
&+& \frac{32\kappa^4}{N_c^2}\sum_{u,v, i,j}
\mathrm{Tr}\, B_{u,u} \mathrm{Tr}\,B_{u+\hat{\imath},u+\hat{\imath}}\mathrm{Tr}\,
B_{v,v} \mathrm{Tr}\,B_{v+\hat{\jmath},v+\hat{\jmath}} \nonumber \\
&=& e^{-\frac{8 \kappa^2}{N_c}\sum_{u,i} \mathrm{Tr}\, B_{u,u}
\mathrm{Tr}\,B_{u +\hat{\imath},u+\hat{\imath}}} + \mathcal{O}(\kappa^6) \;.
\label{PM-resum}
\end{eqnarray}
Inspection of higher order terms indicates that this should always be possible.
Note that terms that have been resummed, like the higher orders in eq.~(\ref{PM-resum}), have to be excluded in the appropriate higher order to avoid double counting.
\subsection{Leading gauge corrections to the strong coupling limit}
Leaving the strong coupling limit, i.e.~for $\beta \neq 0$, the gauge action
has to be included when performing the group integration.
As a consequence the effective coupling constants depend on the gauge coupling also.
The leading gauge corrections are of order $N_{\tau} \kappa^2 u$ coming from
attaching plaquettes
to the Wilson line, cf.~figure \ref{fig_pl}, and
\begin{eqnarray}
c\rightarrow
h_1 =(2\kappa e^{a\mu})^{N_\tau} \ \Big[1+6\kappa^2 N_{\tau} u + \mathcal{O}(\kappa^2 u^5) \Big]\;.
\end{eqnarray}
This can also be exponentiated by summing over multiple attached, disconnected plaquettes at
different locations
\begin{eqnarray}
h_1&=&
\exp\left[N_\tau\left(a\mu+\ln2\kappa+6 \kappa^2 \frac{u -
u^{N_{\tau}}}{1-u}\right)\right]\;,
\end{eqnarray}
and we see that in this way the Polyakov line receives mass corrections due to
interactions.
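For orientation, the resummed factor is nothing but a finite geometric series,
\begin{eqnarray}
6\kappa^2\,\frac{u-u^{N_\tau}}{1-u}=6\kappa^2\sum_{n=1}^{N_\tau-1}u^n\;,
\end{eqnarray}
which one may read as a sum over attached plaquette chains of length $n$; keeping only the $n=1$ term and expanding the exponential recovers the $6\kappa^2 N_\tau u$ correction quoted above.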
Note that this resummation generates overcounting in higher orders, which has to be removed again
to avoid double counting; in our opinion the improved convergence
more than compensates for the additional care this requires.
Let us finally also give the gauge correction for the prefactor of the leading order of $S^f_{2}$
\begin{eqnarray}
h_2= \frac{\kappa^2N_\tau}{N_c}\left[1+2\frac{u-u^{N_\tau}}{1-u}+\ldots\right] \;.
\end{eqnarray}
This correction does not appear to exponentiate.
\subsection{Effective action for the cold and dense regime}
The terms evaluated in the last sections and displayed in the appendix can now be added up to
provide the complete effective action. Fortunately, simplifications occur because some terms
have the same structure. Moreover, in this work we focus on the
cold and dense regime and mostly simulate with $N_\tau > 100$, for which $\lambda\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 10^{-25}$. Terms that are of subleading order in $N_{\tau}$, as well as terms proportional to $\bar{h}_1$,
are therefore neglected, since $\bar{h}_1 \rightarrow 0$ as $T \rightarrow 0$. For $N_f=1$ we then simulate the simplified action
\begin{eqnarray}
\begin{split}
-S_{\mathrm{eff}}&=
\sum_{\vec{x}} \text{log}\, (1 + h_1 \text{Tr} W_{\vec{x}} + h_1^2 \text{Tr} W_{\vec{x}}^{\dagger} + h_1^3)^2
-2 h_2 \sum_{\vec{x},i} \text{Tr} \frac{h_1 W_{\vec{x}}}{1+h_1 W_{\vec{x}}} \text{Tr} \frac{h_1 W_{\vec{x}+i}}{1+h_1 W_{\vec{x}+i}} \\
&+ 2\frac{\kappa^4 N_{\tau}^2}{N_c^2} \sum_{\vec{x},i} \text{Tr} \frac{h_1 W_{\vec{x}}}{(1+h_1 W_{\vec{x}})^2} \text{Tr} \frac{h_1 W_{\vec{x}+i}}{(1+h_1 W_{\vec{x}+i})^2} \\
&+ \frac{\kappa^4 N_{\tau}^2}{N_c^2} \sum_{\vec{x},i,j}
\text{Tr} \frac{h_1 W_{\vec{x}}}{(1+h_1 W_{\vec{x}})^2} \text{Tr} \frac{h_1 W_{\vec{x}-i}}{1+h_1 W_{\vec{x}-i}}
\text{Tr} \frac{h_1 W_{\vec{x}-j}}{1+h_1 W_{\vec{x}-j}} \\
&+ 2\frac{\kappa^4 N_{\tau}^2}{N_c^2} \sum_{\vec{x},i,j}
\text{Tr} \frac{h_1 W_{\vec{x}}}{(1+h_1 W_{\vec{x}})^2} \text{Tr} \frac{h_1 W_{\vec{x}-i}}{1+h_1 W_{\vec{x}-i}}
\text{Tr} \frac{h_1 W_{\vec{x}+j}}{1+h_1 W_{\vec{x}+j}} \\
&+ \frac{\kappa^4 N_{\tau}^2}{N_c^2} \sum_{\vec{x},i,j}
\text{Tr} \frac{h_1 W_{\vec{x}}}{(1+h_1 W_{\vec{x}})^2} \text{Tr} \frac{h_1 W_{\vec{x}+i}}{1+h_1 W_{\vec{x}+i}}
\text{Tr} \frac{h_1 W_{\vec{x}+j}}{1+h_1 W_{\vec{x}+j}} \;.
\end{split}
\end{eqnarray}
For $N_f=2$ some care has to be taken when introducing the determinant for the second flavour, which
introduces mixing terms that are not present in the above expression.
\subsection{Hadron masses in strong coupling and hopping expansion}
In order to interpret the results in the following sections, it is convenient to also have the
leading order of the meson and baryon masses,
\begin{eqnarray}
am_M&=&-2\ln(2\kappa)-6\kappa^2-24\kappa^2\frac{u}{1-u}+\ldots\;,\nonumber\\
am_B&=&-3\ln (2\kappa)-18\kappa^2\frac{u}{1-u}+\ldots\;.
\label{eq:hadron}
\end{eqnarray}
To the orders given here, these expressions are the same for $N_f=1,2$ with degenerate masses.
From the second equation
we extract the running of the hopping parameter in the strong coupling limit for later use,
\begin{equation}
\left.\frac{\partial\kappa}{\partial a}\right|_{u=0}=-\kappa\frac{ m_B}{3}+O(\kappa^2)\;.
\label{eq:rk}
\end{equation}
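This follows directly from the leading term of eq.~(\ref{eq:hadron}): keeping the physical baryon mass fixed while varying $a$,
\begin{equation}
\ln(2\kappa)=-\frac{am_B}{3}
\quad\Rightarrow\quad
\frac{\partial\kappa}{\partial a}=-\kappa\,\frac{m_B}{3}\;.
\end{equation}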
\section{Analytic analysis of the effective theory \label{sec:an}}
\subsection{NLO perturbation theory for $N_f=1$}
A lot of insight about the behaviour of the effective
theory can be gained by studying the static strong coupling limit, where the
partition function factorises into a product of one-link integrals which can be solved
analytically. For the case of $N_f=1$ we previously observed the onset transition as a step function from zero density to lattice saturation \cite{Fromm:2012eb}. Here we extend this analysis beyond the
static strong coupling limit by using perturbation theory in the small couplings $\lambda,h_2$,
permitting a clear understanding of how the nuclear liquid gas transition is driven by
interactions.
To this end we consider the partition function with nearest-neighbour interaction between a Polyakov loop and its conjugate, as well as between two Polyakov loops, i.e.~including the couplings $\lambda,h_1,h_2$.
Here we are interested in the cold and dense regime. Near the zero temperature limit and for $\mu>0$,
the anti-quark contributions vanish exponentially because $\bar{h}_{1}\rightarrow 0 \quad \mbox{for} \quad T\rightarrow 0$ and the simplified partition function is
\begin{eqnarray}
\label{zpt}
Z&=&\int[dW]\prod_{<\vec{x}, \vec{y}>}\left[1+\lambda(L_{\vec{x}}L_{\vec{y}}^*+L_{\vec{x}}^*L_{\vec{y}})\right]\prod_{\vec{x}}[1+h_1L_{\vec{x}}+h_1^2L_{\vec{x}}^*+h_1^3]^2
\\
&&\times \prod_{<\vec{x}, \vec{y}>}\left[1-2h_{2}\left(\frac{h_1L_{\vec{x}}+2h_1^2L_{\vec{x}}^*+3h_1^3}{1+h_1 L_{\vec{x}}+h_1^2L_{\vec{x}}^*+h_1^3}\right)\left(\frac{h_1L_{\vec{y}}+2h_1^2L_{\vec{y}}^*+3h_1^3}{1+h_1L_{\vec{y}}+h_1^2L_{\vec{y}}^*+h_1^3}\right)\right]\;.\nonumber
\end{eqnarray}
Note that the coupling $h_1$ parametrises $(\mu-m)$ and moreover approaches one around the onset transition. Therefore it cannot serve as an
expansion parameter. On the other hand, there are physically interesting parameter regimes where
$\lambda,h_2$ are sufficiently small to allow
for a power series expansion.
The leading orders for the partition function and pressure read
\begin{eqnarray}
Z&=&z_0^{N_s^3}+6\lambda N_s^3z_0^{N_s^3-2}z_1z_2-6h_2N_s^3z_0^{N_s^3-2}z_3^2\;,\nonumber\\
p&=&\frac{T}{V}\ln Z=\frac{1}{N_\tau N_s^3}\ln Z\nonumber\\
&=&N_\tau^{-1}\left(\ln z_0+6\lambda\frac{z_1z_2}{z_0^2}-6h_2\frac{z_3^2}{z_0^2}\right)\;,
\end{eqnarray}
with
\begin{eqnarray}
z_0&=&1+4h_1^3+h_1^6\;,\nonumber\\
z_1&=&3h_1^2+2h_1^5\;,\nonumber\\
z_2&=&2h_1+3h_1^4\;,\nonumber\\
z_3&=&6h_1^3+3h_1^6\;.
\end{eqnarray}
In the cold and dense regime we are working with $N_\tau\geq 10$ for which
$\lambda(\beta=6.0,N_\tau)<10^{-5}$ plays no quantitative role, so we neglect it from here on.
The static strong coupling limit is obtained for $\lambda=h_2=0$ and has already been discussed in
\cite{Fromm:2012eb}. In this case the partition function factorises into one-link partition functions $z_0$, i.e.~it represents a non-interacting system. We identify $z_0$ to consist of baryons, a spin
3/2 quadruplet and a spin 0 baryon made of six quarks. Note that the Pauli principle
for $N_f=1$ does not admit spin 1/2 doublets.
The quark number density and the energy density then follow as
\begin{eqnarray}
a^3n&=&\frac{1}{N_\tau N_s^3}\frac{\partial}{\partial a\mu}\ln Z\nonumber\\
&=&\frac{1}{N_\tau N_s^3}\frac{\partial h_1}{\partial a\mu}\frac{\partial}{\partial h_1}\ln Z\nonumber\\
&=&\frac{12h_1^3+6h_1^6}{z_0}-648h_2\frac{h_1^6(2+h_1^3)(1+h_1^3+h_1^6)}{z_0^3}\nonumber\\
&=&3a^3n_B\;,
\label{eq:density}
\end{eqnarray}
\begin{eqnarray}
a^4e&=&-\frac{a}{N_\tau N_s^3}\frac{\partial}{\partial a}\ln Z\bigg\vert_z\nonumber\\
&=&-\frac{a}{N_\tau N_s^3}\left(\frac{\partial h_1}{\partial a}\right)\bigg\vert_z\frac{\partial}{\partial h_1}\ln Z+\frac{6a}{N_\tau}\left(\frac{\partial h_2}{\partial a}\right)\left(\frac{z_3}{z_0}\right)^2\nonumber\\
&=&am_Ba^3n_B-4am_B\frac{h_2}{N_\tau}\left(\frac{z_3}{z_0}\right)^2\;,
\label{eq:e-density}
\end{eqnarray}
where we have made use of eq.~(\ref{eq:rk}).
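These expressions are straightforward to evaluate; a minimal Python sketch (ours; the parameter values are illustrative choices, not taken from the text) tabulates the density of eq.~(\ref{eq:density}) across the onset:
\begin{verbatim}
# Evaluate the perturbative density, eq. (eq:density); parameters illustrative.
import numpy as np

Ntau, kappa, Nc = 10, 0.01, 3
am = -np.log(2 * kappa)                  # constituent quark mass
h2 = kappa**2 * Ntau / Nc

def density(amu):
    h1 = (2 * kappa * np.exp(amu)) ** Ntau
    z0 = 1 + 4 * h1**3 + h1**6
    stat = (12 * h1**3 + 6 * h1**6) / z0
    corr = 648 * h2 * h1**6 * (2 + h1**3) * (1 + h1**3 + h1**6) / z0**3
    return stat - corr                   # a^3 n

for x in np.linspace(0.98, 1.02, 5):
    print(f"mu/m = {x:.2f}   a^3 n = {density(x * am):.3f}")
# The density rises towards the saturation value 2 Nc = 6 as mu crosses m.
\end{verbatim}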
\subsection{The nuclear liquid gas transition for $N_f=1$ \label{sec:pt}}
With these formulae at hand, it is easy to analyse the physics of the cold and dense regime. Let us
begin with the static strong coupling limit.
At high density, the lattice is populated until it is saturated with six quarks per lattice
site according to the Pauli principle,
\begin{equation}
\lim_{\mu\rightarrow\infty}(a^3n)=2\cdot N_c \equiv 2(a^3n_{B,\mathrm{sat}})\;.
\end{equation}
Note that the dominating contribution to $z_0$ is a bosonic
baryon. However, it is a composite of quarks such that the Pauli principle,
built into the partition function in the original QCD action, is still contained in $z_0$.
Another limit of interest is that of zero temperature. In this case we have
\begin{eqnarray}
\lim_{T\rightarrow 0} a^4p&=&\left\{\begin{array}{cc} 0, & \mu<m\\
2N_c (a\mu-am), & \mu>m\end{array}\right.\;,\nonumber\\
\nonumber\\
\lim_{T\rightarrow 0} a^3n&=&\left\{\begin{array}{cc} 0, & \mu<m\\
2N_c, & \mu>m\end{array}\right.\;.
\end{eqnarray}
Thus we find the so-called silver blaze property, i.e.~the thermodynamic functions stay zero as the
chemical potential is raised until it crosses the constituent quark mass. Then it is possible to excite
baryons and the onset phase transition to nuclear matter takes place. In the static strong coupling limit,
this transition is a step function from zero to saturation density. This step function gets immediately
smeared out to a smooth
crossover as soon as a finite temperature ($N_\tau<\infty$) or coupling $h_2$ is switched on, cf.~figure \ref{fig:onset}.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{onset-different-Ntau}
\includegraphics[width=0.5\textwidth]{onset-different-k}
}
\caption[]{The onset transition in lattice units, eq.~(\ref{eq:density}), for $\kappa=0.01,\beta=0$ and different $N_\tau$ (left) and for $N_\tau=10,\beta=0$ and different $\kappa$ (right).}
\label{fig:onset}
\end{figure}
We can unambiguously identify this transition as baryon condensation by also looking at the energy
density. Away from the static limit, there are non-vanishing attractive quark-quark (and hence
baryon-baryon) interactions parametrised by $h_2$. These are identified by the quantity
\begin{equation}
\epsilon\equiv\frac{e-n_Bm_B}{n_Bm_B}=\frac{e}{n_Bm_B}-1\;,
\label{eq:bind}
\end{equation}
which gives the energy per baryon minus its rest mass in units of $m_B$.
For temperatures approaching zero,
this is the binding energy per baryon.
In perturbation theory, the result is
\begin{equation}
\epsilon=-\frac{4}{3}\frac{1}{a^3n_B}\left(\frac{z_3}{z_0}\right)^2\,\kappa^2=
-\frac{1}{3}\frac{1}{a^3n_B}\left(\frac{z_3}{z_0}\right)^2\, e^{-am_M}\;,
\label{eq:bindpt}
\end{equation}
where we have used the leading order of eq.~(\ref{eq:hadron}) to express the hopping parameter
by the meson mass. This result beautifully illustrates several interesting physics points.
In the non-interacting static limit with $\kappa=h_2=0$, there is no binding energy and hence no
true phase transition for the onset to nuclear matter. For finite $\kappa$ we see from the behaviour
of $z_3,z_0$ that for $\mu<m$ and $T\rightarrow 0$ the binding energy is also zero,
another manifestation of the silver blaze phenomenon. On the other hand, for $\mu>m, T\rightarrow 0$
it is explicitly negative and its absolute value increases with decreasing meson mass.
This is in complete accord with expectations from nuclear physics models based on meson exchange.
The binding energy as a function of chemical potential is shown in figure \ref{fig:binda} (left), the
asymptotes towards larger chemical potential are due to lattice saturation.
Plotting against the number density, we obtain the equation of
state as qualitatively expected for nuclear matter, figure \ref{fig:binda} (right).
In particular, the binding energy per baryon gets more negative
as the quarks get lighter, until we see a minimum forming. Note that all curves eventually should turn upwards again, but for finite lattice spacing they are limited by the saturation density. With the explicit
mass dependence in the binding energy and without a continuum extrapolation,
quantitative predictions for physical QCD cannot be made until the physical
flavour content and masses can be controlled. Nevertheless, it is interesting to keep in mind
the physical binding energy per nucleon, $\epsilon\approx 0.017$ and the nuclear saturation density,
$n_{B0}/m_\text{proton}^3\approx 0.016$.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{binding-vs-mu}
\includegraphics[width=0.5\textwidth]{binding-vs-density-scaled}
}
\caption[]{Binding energy per nucleon in the strong coupling limit, eq.~(\ref{eq:bindpt}) with $N_\tau=10$. Quark mass decreases with growing $\kappa$.}
\label{fig:binda}
\end{figure}
\subsection{The static strong coupling limit for $N_f=2$ at finite baryon density}
For $\beta=0$, the partition function consists of the static determinant factors only
\begin{eqnarray}
Z (\beta=0)= \Big[ \int &[dW]& \prod_{\vec{x}} (1 + h_u L_{\vec{x}} + h_u^2 L_{\vec{x}}^{*} + h_u^3)^2 (1 + \bar{h}_u L_{\vec{x}}^{*} + \bar{h}_u^2 L_{\vec{x}} + \bar{h}_u^3)^2 \label{eq:ssc} \\
&&(1 + h_d L_{\vec{x}} + h_d^2 L_{\vec{x}}^{*} + h_d^3)^2 (1 + \bar{h}_d L_{\vec{x}}^{*} + \bar{h}_d^2 L_{\vec{x}} + \bar{h}_d^3)^2\Big]^V = z_0^V \;.\nonumber
\end{eqnarray}
We again consider the zero temperature limit at $\mu>0$, for which
the anti-quark contributions vanish.
After the gauge integration the result reads
\begin{eqnarray}
z_0& =& (1 + 4 h_d^3 + h_d^6)+ (6 h_d^2 + 4 h_d^5) h_u+ (6 h_d + 10 h_d^4)h_u^2+
(4 + 20 h_d^3 + 4 h_d^6)h_u^3 \nonumber \\
&& + (10 h_d^2 + 6 h_d^5) h_u^4+ ( 4 h_d + 6 h_d^4) h_u^5
+(1 + 4 h_d^3 + h_d^6)h_u^6\;.
\label{eq:freegas}
\end{eqnarray}
All powers in the contributing terms $h_u^nh_d^m$ come in multiples of three, $n+m\equiv 0\ (\mathrm{mod}\ 3)$.
Just as in the one-flavour case (with $h_d=0$), this has the form of a free baryon gas where the prefactors
give the degeneracy of the spin multiplets. Note that for $N_f=2$ we also find the standard spin 1/2
nucleons and many more combinations.
To illustrate the prefactors, consider the example $h_u^2h_d$. There is the
spin 1/2 doublet, the proton, as well as a spin 3/2 quadruplet, the $\Delta^+$, i.e.~six states altogether.
The states corresponding to $h_d^2h_u$ are the neutron and the $\Delta^0$, while
$h_u^3,h_d^3$ are the $\Delta^{++},\Delta^-$ quadruplets, respectively.
It continues with six-quark states. For example, $h_u^4h_d^2$ has
10 allowed spin-flavour combinations, corresponding to three spin 1 triplets and one spin 0 singlet.
These are consistent with an interpretation as di-baryon states built of $\Delta^{++}\Delta^0$ or $pp$.
Thus, eq.~(\ref{eq:freegas}) contains all baryonic spin-flavour multiplets that are consistent with the Pauli principle, i.e.~up to a
maximum of 12 constituent quarks.
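The coefficients in eq.~(\ref{eq:freegas}) can again be verified by brute force group integration; a Monte Carlo sketch in Python (ours; SU(3) sampling as in the earlier sketches, with illustrative coupling values):
\begin{verbatim}
# Check the one-site integral of the static two-flavour weight against
# the polynomial z_0(h_u, h_d) of eq. (eq:freegas), at T -> 0 (no anti-quarks).
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(4)
hu, hd, samples = 0.3, 0.2, 100000
vals = []
for _ in range(samples):
    U = unitary_group.rvs(3, random_state=rng)
    W = U / np.linalg.det(U) ** (1.0 / 3.0)
    L, Ls = np.trace(W), np.trace(W.conj().T)
    vals.append(((1 + hu*L + hu**2*Ls + hu**3)**2
               * (1 + hd*L + hd**2*Ls + hd**3)**2).real)
z0 = ((1 + 4*hd**3 + hd**6) + (6*hd**2 + 4*hd**5)*hu + (6*hd + 10*hd**4)*hu**2
    + (4 + 20*hd**3 + 4*hd**6)*hu**3 + (10*hd**2 + 6*hd**5)*hu**4
    + (4*hd + 6*hd**4)*hu**5 + (1 + 4*hd**3 + hd**6)*hu**6)
print(np.mean(vals), z0)   # agree up to the Monte Carlo error
\end{verbatim}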
The baryon density then reads
\begin{eqnarray}
n_B&=&\frac{T}{V}\frac{\partial}{\partial \mu_B}\ln Z \nonumber\\
&=&2\Big[h_u^3 (2 + h_u^3) + h_d h_u^2 (3 + 4 h_u^3) + h_d^5 h_u (4 + 9 h_u^3) \nonumber \\
&&+ h_d^4 h_u^2 (10 + 9 h_u^3) + h_d^2 h_u (3 + 10 h_u^3) \nonumber\\
&& + h_d^6 (1 + 6 h_u^3 + 2 h_u^6) + h_d^3 (2 + 20 h_u^3 + 6 h_u^6)\Big]\nonumber \\
&/&
\Big[1 +
4 h_u^3 + h_u^6 + 2 h_d^4 h_u^2 (5 + 3 h_u^3) + 2 h_d^2 h_u (3 + 5 h_u^3)
+ h_d^5 (4 h_u + 6 h_u^4) \nonumber\\
&&+ h_d (6 h_u^2 + 4 h_u^5) +
h_d^6 (1 + 4 h_u^3 + h_u^6) + 4 h_d^3 (1 + 5 h_u^3 + h_u^6)\Big] \;.
\end{eqnarray}
In the high density limit numerator and denominator are dominated by the term with the highest power
and we obtain
\begin{equation}
\lim_{\mu\rightarrow\infty}(a^3n)=2\cdot 2\cdot N_c \equiv 4(a^3n_{B,\mathrm{sat}})\;.
\end{equation}
This is the saturation density with two spin, two flavour
and three colour states per lattice site and fermion.
In the zero temperature limit we have again the silver blaze property followed by
a transition to lattice saturation
\begin{eqnarray}
\lim_{T\rightarrow 0} a^4p&=&\left\{\begin{array}{cc} 0, & \mu<m\\
4N_c (a\mu-am), & \mu>m\end{array}\right.\;,\nonumber\\
\nonumber\\
\lim_{T\rightarrow 0} a^3n&=&\left\{\begin{array}{cc} 0, & \mu<m\\
4N_c, & \mu>m\end{array}\right.\;.
\end{eqnarray}
\subsection{The static strong coupling limit for $N_f=2$ at finite isospin density \label{sec:iso}}
Finite isospin density is realised for $\mu_I=\mu_u=-\mu_d$ \cite{son}. Choosing $\mu_u>0$, the zero temperature limit implies
$
\bar{h}_u,h_d\rightarrow 0 \quad \mbox{for} \quad T\rightarrow 0.
$
Omitting the corresponding terms from eq.~(\ref{eq:ssc}) and performing the gauge integration we
find the expression
\begin{eqnarray}
z_0 &=& (1 + 4 \bar{h}_d^3 +
\bar{h}_d^6) + (4 \bar{h}_d + 6 \bar{h}_d^4) h_u + (10 \bar{h}_d^2 + 6 \bar{h}_d^5) h_u^2 + (4 +
20 \bar{h}_d^3 + 4 \bar{h}_d^6) h_u^3 \nonumber\\
&& + (6 \bar{h}_d + 10 \bar{h}_d^4) h_u^4 + (6 \bar{h}_d^2 +
4 \bar{h}_d^5) h_u^5 + (1 + 4 \bar{h}_d^3 + \bar{h}_d^6) h_u^6\;.
\end{eqnarray}
With isospin chemical potential, $d$-anti-quarks now play the same role as $u$-quarks,
and the partition function is a free gas of baryons, anti-baryons and mesons.
Differentiating with respect to isospin chemical potential gives the isospin density,
\begin{eqnarray}
n_I&=&\frac{T}{V}\frac{\partial}{\partial \mu_I}\ln Z\\
&=&2 \Big[3 h_u^3 (2 + h_u^3) + 5 \bar{h}_d^4 h_u (3 + 8 h_u^3) + \bar{h}_d h_u (4 + 15 h_u^3)
+\bar{h}_d^5 h_u^2 (21 + 20 h_u^3) \nonumber\\
&&+ \bar{h}_d^2 h_u^2 (20 + 21 h_u^3) + 3 \bar{h}_d^6 (1 + 6 h_u^3 + 2 h_u^6)
+ 6 \bar{h}_d^3 (1 + 10 h_u^3 + 3 h_u^6)\Big]\nonumber\\
&/&\Big[1 + 4 h_u^3 + h_u^6 +
2 \bar{h}_d^2 h_u^2 (5 + 3 h_u^3) + 2 \bar{h}_d^4 h_u (3 + 5 h_u^3) \nonumber\\
&& +
\bar{h}_d (4 h_u + 6 h_u^4) + \bar{h}_d^5 (6 h_u^2 + 4 h_u^5) +
\bar{h}_d^6 (1 + 4 h_u^3 + h_u^6) + 4 \bar{h}_d^3 (1 + 5 h_u^3 + h_u^6)\Big]\;. \nonumber
\end{eqnarray}
Also in this case, we find saturation in the high density limit,
\begin{equation}
\lim_{\mu\rightarrow\infty}(a^3n_I)=2\cdot 2\cdot N_c \equiv 4(a^3n_{I,\mathrm{sat}})\;.
\end{equation}
Just as in the case of finite baryon density, the high density expression is dominated by a bosonic
composite state which ``knows'' that it consists of constituent quarks, of which only a finite number can
be packed on one lattice site. The saturation level is hence identical to that for
large baryon chemical potential.
Similarly, in the zero temperature limit we find again the silver blaze property followed by a non-analytic
transition to isospin condensation,
\begin{eqnarray}
\lim_{T\rightarrow 0} a^4p&=&\left\{\begin{array}{cc} 0, & \mu<m\\
4N_c (a\mu-am), & \mu>m\end{array}\right.\;,\nonumber\\
\nonumber\\
\lim_{T\rightarrow 0} a^3n_I&=&\left\{\begin{array}{cc} 0, & \mu<m\\
4N_c, & \mu>m\end{array}\right.\;.
\end{eqnarray}
Note that for static quarks, $m_B/3= m_\pi/2$, and the onset transitions to nuclear and isospin matter fall on top
of each other as a function of quark chemical potential. We shall see in our numerical investigations that
a gap between them opens up as expected when interactions between the hadrons are switched on.
\section{Simulation of the effective theory by complex Langevin \label{sec:lang}}
The effective theory specified in the last sections has a sign problem. With fewer
degrees of freedom
and the theory being only three-dimensional, the sign problem is milder than in the
original theory
such that Monte Carlo methods are feasible at finite temperatures and chemical
potentials $\mu/T\mathop{\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}} 3$ \cite{Fromm:2011qi}.
If, however, one is interested in cold dense matter in the zero
temperature limit, the sign problem becomes strong and Monte Carlo methods are
restricted to small volumes.
Fortunately, the effective theory is amenable to simulations using complex Langevin
algorithms (for an introductory review, see \cite{dh}) and the onset transition to
nuclear matter could be demonstrated explicitly for
very heavy quarks \cite{Fromm:2012eb}. In this section we discuss the validity of
complex Langevin for the effective
theory. We will only sketch the general method here, as there is an abundant
literature on this subject
\cite{dh,clsu3,bilic88,etiology,su3lang}.
The basic idea is to introduce a fictitious Langevin time $\theta$, in which a field
theoretical
system with a generic field $\phi$ evolves according to the Langevin equation
\begin{equation}
\label{langevin-eq}
\frac{\partial \phi(x,\theta)}{\partial \theta}=-\frac{\delta S}{\delta
\phi(x,\theta)}+\eta(x,\theta)\;,
\end{equation}
where $\eta(x,\theta)$ denotes Gaussian noise.
In the case of a complex action, the field variables have to be complexified too,
$\phi\rightarrow \phi_r + i\phi_i$.
In our case, the degrees of freedom of the effective theory are
the temporal Wilson lines
\begin{equation}
\int [d U_0] f(W,W^\dag) = \int [d W]
f(W,W^\dag)\;.
\end{equation}
We may further simplify this by taking the trace of the Wilson lines and
parametrising the resulting Polyakov loops in terms of two
angles,
bringing them into a diagonal form \cite{gross83}
\begin{equation}
L(\theta,\phi) = e^{i \theta}+e^{i \phi}+e^{-i (\theta+\phi)}\;.
\end{equation}
This introduces a potential term denoted by $e^V$ with
\begin{equation}
V=\frac12 \mathrm{ln}(27-18|L|^2+8 \mathrm{Re}(L^3)-|L|^4)\;.
\end{equation}
Hence the integration measure we use in our simulation is the reduced Haar measure
\begin{equation}
\int [d W] = \int [dL] e^V = \int_{-\pi}^\pi [d\theta] \int_{-\pi}^\pi [d\phi] \ e^{2V}\;.
\end{equation}
This means that instead of an integration over SU(3) matrices we have two
complex degrees of freedom on every spatial lattice point.
Furthermore, since the matrices are diagonal, their inversion
is trivial.
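As an aside, the reduced measure is easily sampled by rejection; a Python sketch (ours; the bound $e^{2V}\le 27$, saturated at $L=0$, and the Haar moments $\langle L\rangle=0$, $\langle|L|^2\rangle=1$ are quoted here as standard SU(3) facts, used only for this check):
\begin{verbatim}
# Rejection sampling of the reduced Haar measure e^{2V} for SU(3).
import numpy as np

rng = np.random.default_rng(5)

def sample_L(n):
    out = []
    while len(out) < n:
        th, ph = rng.uniform(-np.pi, np.pi, 2)
        L = np.exp(1j*th) + np.exp(1j*ph) + np.exp(-1j*(th + ph))
        w = 27 - 18*abs(L)**2 + 8*(L**3).real - abs(L)**4   # = e^{2V}
        if rng.uniform(0, 27) < w:     # 27 bounds the weight (at L = 0)
            out.append(L)
    return np.array(out)

L = sample_L(20000)
print(abs(L.mean()), (abs(L)**2).mean())   # -> ~0 and ~1
\end{verbatim}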
With these ingredients, eq.~(\ref{langevin-eq}) was solved numerically using stepsizes
of around $\epsilon = 10^{-3}$ and applying the adaptive stepsize technique proposed
in \cite{adaptive-stepsize} to avoid numerical instabilities.
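For illustration of the update (a schematic sketch only, not the code used for the effective theory), consider a complex Langevin step with a crude adaptive stepsize for the solvable toy action $S=\sigma x^2/2$ with complex $\sigma$, where $\langle x^2\rangle=1/\sigma$ is known exactly:
\begin{verbatim}
# Complex Langevin with adaptive stepsize for S = sigma x^2 / 2 (toy model).
import numpy as np

rng = np.random.default_rng(6)
sigma = 1.0 + 1.0j
x, eps0, xs = 0.0 + 0.0j, 1e-3, []
for _ in range(400000):
    drift = -sigma * x                            # -dS/dx, complexified
    eps = eps0 / max(abs(drift), 1.0)             # crude adaptive stepsize
    x = x + eps * drift + np.sqrt(2 * eps) * rng.normal()
    xs.append(x * x)
print(np.mean(xs[10000:]), 1.0 / sigma)           # both ~ 0.5 - 0.5j
\end{verbatim}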
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{criterion-k2.eps}
\includegraphics[width=0.5\textwidth]{criterion-k4.eps}
}
\caption[]{Test of the convergence criterion for complex Langevin in the effective
theory to order
$\kappa^2$ (left) and $\kappa^4$ (right) for $\kappa^2 N_{\tau}/N_c = 0.01 $ and $\beta = 5.7$.
$\hat{L}$ refers to the Langevin operator in eq.~(\ref{eq:lop}).}
\label{fig:convcrit}
\end{figure}
\subsection{Criteria for correctness}
It is well known that the complex Langevin algorithm is not a general
solution to the
complex action problem since it converges to the wrong limit in some cases, including
some parameter ranges for QCD \cite{dh,amb86}. The failure can be attributed to
insufficient localisation of
the probability distribution in the complex field space, and a set of criteria was
developed
to check whether this localisation is sufficient in a given simulation \cite{etiology}.
A necessary condition is that
the expectation value of all observables $O[\phi]$ vanishes after a Langevin operator
$\hat{L}$ has been
applied to them,
\begin{equation}
\langle \hat{L}O[\phi]\rangle=0, \quad
\hat{L}=\sum_{a,x}\left(\frac{\partial}{\partial \phi_a(x)}
-\frac{\partial S}{\partial \phi_a(x)}\right)\frac{\partial}{\partial \phi_a(x)}\;.
\label{eq:lop}
\end{equation}
While, strictly speaking, this test is necessary on {\it all} observables of the
theory, in practice only
a select few can be tested. Note that in the framework of our effective theory, all observables
are expressed as functions of Polyakov loops, and one may hope that the proper behaviour of the loop
propagates to more complicated functions of it. In figure \ref{fig:convcrit} we show the expectation
value of the Polyakov loop as a function of the step size of the Langevin algorithm
for the effective theory to order $\kappa^2$ (left) and $\kappa^4$ (right).
In both cases the criterion is satisfied in the limit of vanishing stepsize.
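Continuing the toy sketch above (reusing \texttt{sigma} and the samples \texttt{xs}), the criterion can be evaluated explicitly: for $O=x^2$ one has $\hat{L}x^2=2-2\sigma x^2$, so $\langle\hat{L}x^2\rangle$ vanishes precisely when $\langle x^2\rangle=1/\sigma$:
\begin{verbatim}
# Criterion <L O> = 0 for the toy model, with O = x^2:
# L x^2 = (d/dx - dS/dx) d/dx x^2 = 2 - 2 sigma x^2.
import numpy as np
LO = 2.0 - 2.0 * sigma * np.asarray(xs[10000:])
print(np.mean(LO))   # consistent with zero within statistical errors
\end{verbatim}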
\subsection{The logarithm of the static determinant}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{0995.eps}
\includegraphics[width=0.5\textwidth]{1001.eps}
}
\caption[]{Distribution of the static determinant, eq.~(\ref{eq_qsim}), in the course of simulations
with $N_f=1, \kappa=0.0173, N_\tau=100, \beta=0$. No crossings of the negative real axis are observed.}
\label{fig:scatter}
\end{figure}
Another problem related to the distribution in the complexified field space has recently been pointed
out for all partition functions containing a complex determinant \cite{kim13}. Its contribution
to the effective action is $\sim \log \det$, and the logarithm develops a cut along the negative
real axis, i.e.~it is multi-valued. This may cause a problem whenever the calculation of the
drift term for the Langevin time requires a derivative to be taken across the cut. In \cite{kim13} it
was found for a random matrix model that these crossings lead to incorrect predictions for
observables if they happen frequently in a Monte Carlo history. Here we can see another benefit
of the effective theory compared to a Langevin simulation of full QCD. In the effective theory, only
the static determinant features this problem, while the corrections to the effective action
in the hopping expansion are exponentials of polynomials. We have
monitored the static determinant during the Langevin evolution, an example is shown in
figure \ref{fig:scatter} at baryon density slightly below (left) and above (right) the onset transition to
nuclear matter. Note that the static determinant is dominated by the Polyakov loop.
One observes the expected distortion from the centre symmetric distribution of the
vacuum state to the distribution preferring the real centre sector, and this distortion is amplified
exponentially with chemical potential. For simulation purposes, the crucial observation is that
there are nearly no crossings of the negative real axis, in accord with the satisfied
convergence criterion above. We have monitored such scatter plots over a wide range of
parameter values. Occasionally crossings of the negative axis do occur, but the observed
frequency was $<10^{-4}$ in all cases.
\subsection{Comparison with Monte Carlo}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{vergleich-cl-mc-3.eps}
\includegraphics[width=0.5\textwidth]{vergleich-cl-mc-2.eps}
}
\caption[]{Comparison between Langevin and Monte Carlo for quark number density at different values of $\kappa$ with $N_{\tau} = 100$ and $\beta = 0$ (left)
and the Polyakov loop at different $\mu$ with $\beta=5.7, \kappa=0.01$ and $N_{\tau}=200$ (right), both using the $\kappa^4$-action for $N_f=1$.}
\label{fig:cfMC}
\end{figure}
As a final and complementary check of the validity of the complex Langevin
simulations, one may also compare with reweighted Monte Carlo results where this is possible,
i.e.~on small volumes. In \cite{Fromm:2012eb} we have shown a successful comparison
for very small hopping parameters $\kappa\sim 10^{-4}$.
Figure \ref{fig:cfMC} shows that this test is also passed
for significantly larger values $\kappa\sim 0.01$.
We conclude that complex Langevin simulations of the effective theory constructed here
are fully controlled for the entire coupling range investigated, $0<\beta<6$ and $0<\kappa <0.12$.
This is an algorithmic advantage over Langevin simulations in full QCD, where gauge cooling
techniques \cite{cool} are required to control the field distribution
and even then simulations at small lattice couplings are ruled out \cite{denes}.
\section{Numerical Results \label{sec:phys}}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{no-gauge.eps}
\includegraphics[width=0.5\textwidth]{gauge.eps}
}
\caption[]{Comparison between $\kappa^2,\kappa^4$ actions, with and without
resummation for $N_f=1,c=0.8$ and $\beta=0$ (left) and resummed, including gauge corrections
for $\beta=5.7$ (right).}
\label{fig:convergence}
\end{figure}
\subsection{Convergence region of the hopping series}
An important task is to find the region of validity
of the effective theory. By this we mean the region, determined by a self-consistent test,
where the truncated effective theory is a
good approximation to the full theory.
As a criterion we choose the difference between expectation values of observables
calculated from the $\kappa^2$ and from the $\kappa^4$ action,
$\langle O\rangle_{\kappa^2}, \langle O \rangle_{\kappa^4}$. These can
be evaluated as a function of the expansion
parameter $\frac{\kappa^2 N_{\tau}}{N_c}$, and the convergence region is where the difference
is smaller than the desired accuracy.
Since we are interested in the onset of baryon number,
we choose the density in lattice units $a^3 n$ as an observable and compute it at a fixed
value of the coupling $h_1 = 0.8$.
As can be seen in figure \ref{fig:convergence}, the static limit is only a valid
approximation in
the $\kappa \to 0$ limit. Note
that the resummed
action offers a slightly better convergence. Therefore, we will use this version for our
simulations.
The expansion parameter already shows that the region of
convergence is limited in the direction of low temperatures and light quarks,
i.e.~one can reach lower quark masses at larger temperatures.
\subsection{Setting a scale and units}
Setting a scale and performing continuum limits along lines of constant physics
is a computationally very demanding task. Rigorously speaking, this is truly possible only at
or near the physical point. On the other hand, the
effective theory considered here is only valid for very
heavy quarks, due to the truncated hopping series. While it exhibits most qualitative features of
physical QCD, its spectrum is still far from the experimentally
observed one. For this reason we do not attempt to accurately fix our hadron masses. (In the
mass ranges considered this would anyway demand heavy quark effective theories \cite{hqet}).
Instead we only provide
a very rough guide where we are in parameter space.
Our procedure is as follows:
heavy quarks have little influence on the running of the coupling. Thus we use the non-perturbative beta-function of pure gauge theory
for the lattice spacing in units of the Sommer parameter, $a(\beta)/r_0$ \cite{sommer}.
Taking $r_0=0.5~{\rm fm}$ sets a physical scale for our lattices, while $N_\tau$ tunes
temperature via $T=(aN_\tau)^{-1}$. In a {\it very rough} approximation we then use the
strong coupling expressions eq.~(\ref{eq:hadron}) for the hadron masses.
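As a numerical illustration of this procedure (with $a(\beta=6.0)\approx 0.093~{\rm fm}$ for $r_0=0.5~{\rm fm}$ quoted as an approximate value of the pure gauge beta-function):
\begin{verbatim}
# Translate (a, N_tau) into a temperature via T = 1/(a N_tau).
hbarc = 197.327            # MeV fm
a = 0.093                  # fm, approximate lattice spacing at beta = 6.0
for Ntau in (50, 100, 200):
    print(Ntau, hbarc / (a * Ntau), "MeV")
# T ~ 10 MeV thus requires N_tau ~ 200 at this lattice spacing.
\end{verbatim}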
\subsection{The nuclear liquid gas transition in heavy dense QCD}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{extrapolation-example.eps}
\includegraphics[width=0.5\textwidth]{continuum-extrapolated-density.eps}
}
\caption[]{
Example for the continuum extrapolation for $N_f=2$ (left).
Shown are extrapolations with one d.o.f.
Continuum extrapolated results for the transition to cold nuclear matter
for $T=10$ MeV and one or two flavours (right). }
\label{fig:silver}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{pressure1.eps}
\includegraphics[width=0.5\textwidth]{pressure2.eps}
}
\caption[]{
Pressure and equation of state for $N_f=2$ at $T=10$ MeV.}
\label{fig:eos}
\end{figure}
In our previous work \cite{Fromm:2012eb} we performed a continuum extrapolation for
the transition
to cold nuclear matter based on the $\kappa^2$ action.
Here we repeat this calculation including the
$\kappa^4$ corrections. This allows us to simulate smaller lattice spacings,
$a=0.08$ fm, without leaving the
region of convergence, cf.~figure \ref{fig:convergence}, since reducing $a$ while keeping $m_B/T$ and $T$ fixed means
going to higher
$\kappa$ and $N_{\tau}$.
extrapolation suffers from considerable uncertainties, resulting in large errors in
the high density phase.
This can be seen in figure \ref{fig:silver} (left), where we show the two best fits for our data at
$\mu_B/m_B=1$ at several lattice spacings.
This is the chemical potential where different extrapolation fits differ the most.
The systematic truncation error for our $\kappa^4$ data is estimated as the difference to the data obtained from the $\kappa^2$ action and included in the error bars in the figure. This data was then fitted to get a value for $a \rightarrow 0$. For each value of the chemical potential we tried several fits (linear and quadratic) with one to three degrees of freedom. For the best fits we always achieved
$\chi^2_{\mathrm{red}}<2$ as long as $\mu_B/m_B < 1.0014$.
For the continuum result we quote the average of the two best fits; the error was estimated as the difference between those two fits.
We note that the results at $\kappa^4$ are somewhat higher than
our $\kappa^2$-results in \cite{Fromm:2012eb}. This is because inclusion of $\kappa^4$ is the first
order allowing for a realistic estimate of
the truncation error, and thus permits inclusion of data with
smaller lattice spacing.
This results in the continuum extrapolated baryon number density in figure
\ref{fig:silver} (right), where we display the results for $N_f=1,2$ for a temperature $T=10$ MeV.
In the low density region the ``silver blaze'' property, i.e.~the independence of the thermodynamic
functions from the chemical potential, can be seen.
The growing uncertainties in the high density region are caused by the unphysical
saturation on the lattice which limits the density to $2 N_f N_c$ quarks per
lattice site, while in the continuum no such saturation exists.
As expected, the onset of nuclear matter happens at a critical value $\mu_B^c<m_B$,
due to the nuclear binding energy. The location of the onset suggests a very small binding energy
$\sim 10^{-3} m_B$ for the heavy quarks considered here,
in accord with our perturbative analysis, section \ref{sec:pt}. This explains why the
onset transition is a smooth crossover rather than the first-order transition expected for light quarks.
The endpoint of the nuclear liquid gas transition sits at a temperature of the order of the binding
energy and is not visible for very heavy quarks. In accord with expectation, the onset with two flavours
is steeper than with one flavour.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{energy-density-extrapolation.eps}
\includegraphics[width=0.5\textwidth]{binding-energy-extrapolation.eps}
}
\caption[]{Left: Energy density, eq.~(\ref{eq:e-density}). Right: Binding energy per nucleon, eq.~(\ref{eq:bind}). Both plots show $N_f=2, T=10$ MeV. }
\label{fig:ebind}
\end{figure}
It is now straightforward to compute the other thermodynamic functions and from them the equation
of state. Figure \ref{fig:eos} shows the pressure as a function of baryon chemical potential as well
as a function of baryon density, whereas the binding energy per nucleon is shown in
figure \ref{fig:ebind}. Note that in all plots the error bars include the systematic uncertainty of both,
the truncation of the effective theory as well as the continuum extrapolation. The plot of the
binding energy is particularly intriguing. For small density it is zero, another manifestation of the silver
blaze property, until it turns negative, thus causing the condensation of nuclear matter.
At larger density, lattice saturation is reached before the expected
upturn of the curve. Nevertheless, the shape of the curve suggests that the minimum has been
reached near the right border. Its numerical value of the order of $10^{-3}$ is
consistent with that observed from the location of the onset transition in figure \ref{fig:silver} (right).
\subsection{Nuclear liquid gas transition for light quarks}
As in our previous work \cite{Fromm:2012eb}, the accessible quark masses in the convergence region of the effective theory are
too high to realise the expected first order transition for the onset of nuclear matter.
Finite size scaling
analyses reveal the transition to be a smooth crossover, in accord with the interplay between
accessible temperatures and the values of the binding energies.
Of course it is highly interesting to see whether the effective theory includes the expected
physics features when the quark mass is lowered.
We now consider $\kappa = 0.12$, corresponding to a small quark mass, and very
low temperatures parametrised by $N_{\tau} \sim O(10^3)$. We stress that this choice of parameters
is far outside the convergence region of our $\kappa^4$-action, cf.~figure \ref{fig:convergence}.
In other words, there is no reason to expect the results to accurately represent QCD and
an attempt at a continuum extrapolation makes no sense. Nevertheless, this is an interesting
check of the qualitative features of the effective theory.
\begin{figure}[t]
\centerline{
\includegraphics[width=0.35\textwidth]{k4-hist1.eps}
\includegraphics[width=0.35\textwidth]{k4-hist2.eps}
\includegraphics[width=0.35\textwidth]{k4-hist3.eps}
}
\caption[]{Distributions of the quark density in the transition region, with temperature increasing from left to right, for
$\kappa = 0.12$ and $\beta = 5.7$. }
\label{fig:polyakov-hist}
\end{figure}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{susz-Ntau=500.eps}
\includegraphics[width=0.5\textwidth]{susz-Ntau=250.eps}
}
\caption[]{Quark number susceptibility for $\kappa = 0.12$ and $\beta = 5.7$ and
$N_{\tau} = 500$ (left) and $N_\tau=250$ (right). The divergence with volume signals a true phase
transition, whereas saturation at a finite value implies a smooth crossover.}
\label{fig:polyakov-susc}
\end{figure}
Figure \ref{fig:polyakov-hist} shows distributions of the Polyakov loop
in the onset transition region for three choices of $N_\tau$, corresponding to increasing temperatures
from left to right. We clearly observe the coexistence of two phases at the lowest temperatures, which
indicates a first order transition between them.
As the temperature is raised ($N_\tau$ is lowered), the two-state signal weakens and merges into a
single Gaussian distribution, signalling a weakening and eventual disappearance of the first-order
transition. This picture is corroborated by a finite size analysis of the quark number susceptibility in
figure \ref{fig:polyakov-susc}. First-order and crossover transitions are clearly distinguished by diverging
and finite susceptibility as a function of volume. We thus conclude that, while our $\kappa^4$-action
features expected for the nuclear liquid gas transition: a first-order transition from the vacuum to
nuclear matter which weakens with temperature until it vanishes in a critical endpoint. We therefore
expect higher orders in the effective action to only correct the quantitative details of this transition.
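The criterion underlying this analysis is the standard one and is recalled here for orientation:
in terms of the quark number susceptibility
\[
\chi_n=\frac{\partial \langle n\rangle}{\partial\mu},
\]
the height of the peak of $\chi_n$ across the transition grows linearly with the spatial volume
at a first-order transition, reflecting the coexistence of two phases, whereas it saturates at a
volume-independent value at a crossover, cf.~figure \ref{fig:polyakov-susc}.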
\subsection{Isospin vs.~baryon chemical potential}
\begin{figure}[t]
\centerline{
\includegraphics[width=0.5\textwidth]{large-mass.eps}
\includegraphics[width=0.5\textwidth]{onset-light-quarks.eps}
}
\caption[]{Onset of finite isospin density vs. baryon density for $N_f=2, N_\tau=100, \beta=5.7$ and
heavy quarks, $\kappa=0.03$ (left) and light quarks, $\kappa=0.12$ (right).
}
\label{fig:iso-bar}
\end{figure}
Let us finally consider the situation in the two-flavour theory
with finite isospin chemical potential, $\mu_I=\mu_u=-\mu_d$. In section \ref{sec:iso} we have
discussed the situation in the static strong coupling limit, where the onset transition for
pion condensation at $\mu_I=m_\pi/2$ happens at the same quark chemical potential as
the one for baryon condensation at $\mu=m_B/3$. With interactions included, this gets modified
in two ways. Firstly, we have $m_\pi/2 < m_B/3$ in this case, and secondly the onset gets shifted to
smaller chemical potentials by the non-vanishing binding energy. The first effect also leads to
the expected gap opening between the onset of pion condensation and that of baryon condensation \cite{cohen}, when plotted
against quark chemical potential, as shown in figure \ref{fig:iso-bar}.
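Schematically, and leaving aside the binding-energy shifts just discussed, the two onsets in
terms of the quark chemical potential are expected at
\[
\mu_I^c\approx\frac{m_\pi}{2}, \qquad \mu^c\approx\frac{m_B}{3},
\]
so that the gap visible in figure \ref{fig:iso-bar} (right) directly reflects the splitting
$m_B/3-m_\pi/2$.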
\section{Conclusions}
In this work we further elaborated the construction of an effective three-dimensional lattice theory
for QCD thermodynamics.
It is formulated entirely in terms of Polyakov loops and calculated from the 4d Wilson action as a strong coupling and hopping series
which is now complete to order $\kappa^n u^m$ with $n+m=4$. In the static strong coupling limit, the
effective theory can be solved exactly, providing the complete spin-flavour structure of the hadron
spectrum as well as an onset transition from zero density to lattice saturation.
The interacting
effective theory has a sign problem that can be handled by complex Langevin simulations with fully
satisfied convergence criteria. Moreover, the sign problem is mild enough that on small volumes
Monte Carlo simulations are feasible, even at real chemical potential. The couplings of the effective theory
are sufficiently small to also permit a perturbative evaluation, which agrees with numerical results
in wide regions of the parameter space. Altogether this allows for a controlled and very efficient
evaluation of thermodynamic functions and critical couplings.
Working in the heavy quark region near the static limit, where
continuum extrapolations of thermodynamic functions are feasible, we have explicitly demonstrated
the onset transition to cold nuclear matter
and calculated the nuclear equation of state for the first time directly from QCD. In particular, we
find a negative binding energy per nucleon as the expected reason for baryon condensation. In accord
with expectations from models of nuclear interactions, the binding energy is governed by exponentials
of the meson mass and suppressed for heavy quarks. Decreasing the quark mass beyond the convergence
region of our expansion, we indeed observe the nuclear onset transition to emerge as a first order
liquid gas transition with an endpoint at some small temperature. In this parameter range also the expected
gap opens up between the onset of pion condensation in the case of finite isospin chemical potential
and the nuclear onset at finite baryon density.
In summary, the effective lattice theory described in this work contains all the qualitative physics
expected for cold nuclear matter.
It remains to be seen whether high enough orders of the hopping
expansion can be generated in the future in order to reach physical quark mass values.
However, since the hopping
convergence is much faster at high temperatures, the current effective theory might already be
useful to describe the finite temperature phase structure of QCD with light quarks.
Work in this direction is in progress.
\section*{Acknowledgements}
We thank Georg Bergner for providing the Monte Carlo data for figure \ref{fig:cfMC} and are indebted to
Georg Bergner, Jonas Glesaaen and Wolfgang Unger for innumerable discussions, checks, proof
reading and advice.
J.L. is supported by the Swiss National Science Foundation under
grant 200020-137920. M.N. and O.P. are partially supported by the German BMBF,
grant 06FY7100, and the Helmholtz International
Center for FAIR within the LOEWE program launched by the State of Hesse.
\section{Introduction}\label{intro}
Talagrand's transport inequality and the logarithmic Sobolev inequality
are known to share important features: they both hold for the Gaussian
measure in any dimension, they enjoy the tensorization property and
they imply Gaussian concentration results. We refer to
\cite{villani,ledoux,ane,gozlan-leonard} for surveys about these notions.
Otto and Villani \cite{otto-villani} proved that the logarithmic
Sobolev inequality implies, in full generality, Talagrand's transport
inequality (see also \cite{bgl}) and under a curvature condition, that
the converse also holds (see also \cite{gozlan}). However, since the
work by Cattiaux and Guillin \cite{cattiaux-guillin}, it is known that the
two inequalities are not equivalent, in general.
In this paper, we prove that Talagrand's transport inequality is
actually equivalent to some restricted form of the logarithmic Sobolev
inequality. Our strategy easily generalizes to other transport
inequalities. As a byproduct, we obtain an elementary and direct proof
of the fact that transport inequalities can be perturbed by bounded functions.
In order to present our main results, we need some definitions and notation.
\subsection{Definitions and notation}\label{sec11}
In all that follows, $c\dvtx\R^k\to\R^+$ is a differentiable function
such that $c(0)=\nabla c(0)=0$. Let $\mu$ and $\nu$ be two
probability measures on
$\R^k$; the \textit{optimal transport cost} between $\nu$ and $\mu$
(with respect to the cost function $c$) is defined by
\[
\mathcal{T}_c(\nu,\mu):=\inf_\pi\biggl\{ \iint c(x-y) \,d\pi(x,y)
\biggr\},
\]
where the infimum is taken over all the probability measures $\pi$ on
$\R^k \times\R^k$ with marginals $\nu$ and $\mu$.
Optimal transport costs are used in a wide class of problems,
in statistics, probability and PDE theory, see \cite{villani}. Here,
we shall focus on the following transport inequality.
\begin{defi}[{[Transportation-cost inequality (\ref{eqiTcC})]}] \label{def:tci}
A probability measure $\mu$ on $\R^k$ satisfies (\ref{eqiTcC}), with
$C>0$, if
{\renewcommand{\theequation}{$\T_c(C)$}
\begin{equation}\label{eqiTcC}\hypertarget{eqiTcClink}
\mathcal{T}_c(\nu,\mu)\leq C H(\nu| \mu)\qquad \forall\nu\in
\mathcal{P}(\R^k),
\end{equation}}
\noindent where
\[
H(\nu|\mu)= \cases{
\displaystyle \int\log\frac{d\nu}{d\mu} \,d\nu, &\quad if $\nu\ll\mu$,\cr
+\infty, &\quad otherwise,}
\]
is the relative entropy of $\nu$ with respect to $\mu$ and $ \mathcal
{P}(\R^k)$ is the set of all probability measures on $\Rk$.
\end{defi}
The inequality (\ref{eqiTcC}) implies concentration results as shown by
Marton \cite{marton}, see also \cite{bobkov-gotze,ledoux}
and \cite{gozlan-leonard} for a full introduction to
this notion.
The quadratic cost $c(x)=|x|^2/2$ (where \mbox{$|\cdot|$} stands for the
Euclidean norm) plays a special role.
In this case, we write \hyperlink{eqiTcClink}{($\T_2(C)$)} and say that Talagrand's transport,
or the quadratic transport, inequality is satisfied. Talagrand proved
in \cite{talagrand}, among other results, that the standard Gaussian
measure satisfies \hyperlink{eqiTcClink}{($\T_2(1)$)} in all dimensions. In turn, inequality
\hyperlink{eqiTcClink}{($\T_2(C)$)} implies dimension free Gaussian concentration results. Recently,
the first author showed that the converse is also true, namely that a
dimension free Gaussian concentration result implies \hyperlink{eqiTcClink}{($\T_2(C)$)} \cite{gozlan}.
Now, we introduce the notion of restricted logarithmic Sobolev
inequalities. To this end, we first need to
define $K$-semi-convex functions.
\begin{defi}[($K$-semi-convex function)]
A function $f\dvtx\R^k\to\R$ is $K$-semi-convex ($K \in\R$) for the
cost function $c$ if for all
$\lambda\in[0,1]$, and all $x,y\in\R^k$
\setcounter{equation}{2}
\begin{eqnarray}\label{K semi-convex cost c 1}
f\bigl(\lambda x+(1-\lambda)y\bigr)&\leq&\lambda f(x)+(1-\lambda)f(y)+\lambda
Kc\bigl((1-\lambda)(y-x)\bigr)\nonumber\\[-8pt]\\[-8pt]
&&{}+(1-\lambda) Kc\bigl(\lambda(y-x)\bigr).\nonumber
\end{eqnarray}
\end{defi}
As shown in Proposition \ref{prop:sc} below, for differentiable
functions, (\ref{K semi-convex cost c 1})
is equivalent to the condition
\[
f(y)\geq f(x)+\nabla f(x)\cdot(y-x)-Kc(y-x)\qquad \forall x,y\in\R^k.
\]
The reader\vspace*{1pt} might see the semi-convexity as an answer to the question:
how far is the function $f$ from being convex?
The quadratic case $c(x)=\frac{1}{2}|x|^2$ is particularly
enlightening since a function $f$ is $K$-semi-convex if
and only if $x \mapsto f(x)+\frac{K}{2}|x|^2$ is convex. Note that the
semi-convexity can be related to the notion of convexity-defect, see,
for example, \cite{barthe-kolesnikov} and references therein where it
is largely discussed and used.
Note also that our definition differs from others, such as \cite
{villani}, Definition~10.10, or
\cite{evans}, Lemma 3 in Chapter 3, page 130.
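To fix ideas, consider the quadratic cost on $\R$: a function $f$ of class $\mathcal{C}^2$ is
$K$-semi-convex if and only if
\[
f''(x)\geq-K\qquad \forall x\in\R,
\]
so that, for instance, $f(x)=\cos x$ is $1$-semi-convex although it is not convex.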
Dealing only with semi-convex functions leads to the following definition.
\begin{defi}[{[Restricted (modified) logarithmic Sobolev inequality]}]
A probability measure $\mu$ on $\R^k$ verifies \textit{the restricted
logarithmic Sobolev inequality} with constant $C>0$, in short
(\ref{eqrLSIC}), if for all $0\leq K<\frac{1}{C}$ and all
$K$-semi-convex $f\dvtx\R^k\to\R$,
{\renewcommand{\theequation}{$\mathbf{rLSI}(C)$}
\begin{equation}\label{eqrLSIC}
\ent_{\mu}(e^f)\leq\frac{2C}{(1-KC)^2}
\int|\nabla f|^2 e^f \,d\mu,
\end{equation}}
\noindent
where $\ent_{\mu}(g):=\int g\log g \,d\mu-\int g \,d\mu\log\int g
\,d\mu$.
More generally, a probability measure $\mu$ on $\R^k$ verifies the
\textit{restricted modified logarithmic Sobolev inequality}
with constant $C>0$ for the cost $c$, in short (\ref{eqrMLSIcC}),
if for all $K\geq0$, $\eta>0$ with $ \eta+K<1/C$ and all $K$-semi-convex
$f\dvtx\R^k\to\R$ for the cost $c$,
{\renewcommand{\theequation}{$\mathbf{rMLSI}(c,C)$}
\begin{equation}\label{eqrMLSIcC}\hypertarget{eqrMLSIcClink}\quad
\ent_{\mu}(e^f)\leq\frac{\eta}{1-C(\eta+ K)} \int
c^*\biggl(\frac{\nabla f}{\eta}\biggr) e^f \,d\mu,
\end{equation}}
\noindent
where $c^*(u):=\sup_{h\in\R^k} \{ u\cdot h-c(h) \}$ and
$u\cdot h$ is the usual scalar product in $\Rk$.
\end{defi}
Note that (\ref{eqrMLSIcC}) reduces to (\ref{eqrLSIC}) for
$c(x)=c^*(x)=\frac{1}{2} |x|^2$, optimizing over $\eta$.
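Let us sketch this reduction. For $c^*(x)=\frac{1}{2}|x|^2$, the right-hand side of
(\ref{eqrMLSIcC}) equals $\frac{1}{2\eta(1-C(\eta+K))}\int|\nabla f|^2 e^f \,d\mu$;
maximizing $\eta(1-KC-C\eta)$ leads to the admissible choice $\eta=(1-KC)/(2C)$ and hence to
\[
\ent_{\mu}(e^f)\leq\frac{2C}{(1-KC)^2} \int|\nabla f|^2 e^f \,d\mu,
\]
which is exactly (\ref{eqrLSIC}).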
Without the restriction on the set of $K$-semi-convex functions, the
first inequality
corresponds to the usual logarithmic Sobolev inequality introduced by
Gross \cite{gross} (see also \cite{stam}).
For the second one (without the restriction), we recognize the modified
logarithmic Sobolev inequalities
introduced first by Bobkov and Ledoux~\cite{bobkov-ledoux}, with
$c^*(t)=2|t|^2/(1-\gamma)$ for $|t| \leq\gamma$ and $c^*(t)=+\infty
$ otherwise, $t\in\R$, in order to recover the celebrated result by
Talagrand \cite{talagrand91} on the concentration phenomenon for
products of exponential measures.
Gentil, Guillin and Miclo \cite{ggm} established modified logarithmic
Sobolev inequalities for products of the probability measures
$d\nu_p(t)=e^{-|t|^p}/Z_p$, $t\in\R$ and $p \in(1,2)$, with
$c^*(t)$ that compares to $\max(t^2,|t|^q)$ where $q=p/(p-1) \in
(2,\infty)$
is the dual exponent of $p$. In a subsequent paper \cite{ggm2}, they
generalized their results to a large class of measures with tails
between exponential and Gaussian (see also \cite{barthe-roberto} and
\cite{gozlan07}). In \cite{ggm}, the authors also prove that the
modified logarithmic Sobolev inequality [without the restriction, and
with $c^*(t)$ that compares to $\max(t^2,|t|^q)$] implies the
corresponding transport inequality (\ref{eqiTcC}).
Our results below show that the functional inequalities \hyperlink{eqrMLSIcClink}{($\mathbf
{rMLSI}(c, \cdot)$)} and \hyperlink{eqiTcClink}{($\T_c( \cdot)$)} are equivalent (up to
universal factors in the constants). To give a more complete
description of this equivalence, let us consider yet another type of
logarithmic Sobolev inequalities that we call inf-convolution
logarithmic Sobolev inequality.
\begin{defi}[(Inf-convolution logarithmic Sobolev inequality)]
A probability measure $\mu$ on $\R^k$ verifies \textit{the
inf-convolution logarithmic Sobolev inequality} with constant $C>0$, in
short (\ref{eqICLSIcC}), if for all $\lambda\in(0,1/C)$ and all
$f\dvtx\R^k\to\R$,
{\renewcommand{\theequation}{$\mathbf{ICLSI}(c,C)$}
\begin{equation}\label{eqICLSIcC}\hypertarget{eqICLSIcClink}
\ent_{\mu}(e^f)\leq\frac{1}{1-\lambda C} \int(f-Q^\lambda
f) e^f \,d\mu,
\end{equation}}
\noindent where $Q^\lambda f\dvtx\R^k \rightarrow\R$ denotes the
infimum-convolution of $f$:
\[
Q^\lambda f(x) =\inf_{y\in\R^k} \{f(y)+\lambda c(x-y)\}.
\]
\end{defi}
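As an elementary illustration, take the quadratic cost $c(x)=\frac{1}{2}|x|^2$ and a linear
function $f(x)=a\cdot x$, $a\in\R^k$: a direct computation gives
\[
Q^\lambda f(x) =\inf_{y\in\R^k}\biggl\{a\cdot y+\frac{\lambda}{2}|x-y|^2\biggr\}
=a\cdot x-\frac{|a|^2}{2\lambda},
\]
so that $f-Q^\lambda f=|\nabla f|^2/(2\lambda)$ is constant.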
\subsection{Main results}\label{sec12}
Our first main result is the following.
\begin{theorem}\label{main-result2}
Let $\alpha\dvtx\R\to\R^+$ be a convex symmetric function of class
$C^1$ such that $\alpha(0)=\alpha'(0)=0$ and
$\alpha'$ is concave on $\R^+$.
Define $c(x)=\sum_{i=1}^k \alpha(x_i)$ and let
$\mu$ be a probability measure on $\R^k$. The following propositions
are equivalent:
\begin{enumerate}[(3)]
\item[(1)] There exists $C_1>0$ such that $\mu$ verifies the inequality
\hyperlink{eqiTcClink}{($\T_c(C_1)$)}.
\item[(2)] There exists $C_2>0$ such that $\mu$ verifies the inequality
\hyperlink{eqICLSIcClink}{($\mathbf{ICLSI}(c,C_2)$)}.
\item[(3)] There exists $C_3>0$ such that $\mu$ verifies the inequality
\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c,C_3)$)}.
\end{enumerate}
The constants $C_1$, $C_2$ and $C_3$ are related in the following way:
\begin{eqnarray*}
(1)\Rightarrow(2)\Rightarrow(3) \qquad\mbox{with } C_1&=&C_2=C_3, \\
(3)\Rightarrow(1) \qquad\mbox{with } C_1&=&8C_3.
\end{eqnarray*}
\end{theorem}
The typical example of function $\alpha$ satisfying the setting of
Theorem \ref{main-result2}
is a smooth version of $\alpha(x) = \min(x^2,x^p)$, with $p \in[1,2]$.
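For $p\in(1,2)$, the dual cost of this family behaves, up to numerical constants, like
\[
\alpha^*(h)\asymp\max(h^2,|h|^q),\qquad q=\frac{p}{p-1}\in(2,\infty),
\]
in agreement with the modified logarithmic Sobolev inequalities of \cite{ggm} recalled above.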
The first part $(1) \Rightarrow(2)\Rightarrow(3)$ actually holds in a
more general setting (see Theorem \ref{th:main1}); it is proved in
Section \ref{sec:12}. Moreover, the inequality (\ref{eqICLSIcC})
has a meaning even if $\R^k$ is replaced by an abstract metric space
$X$.
The proof of the second part $(3)\Rightarrow(1)$ is given in Section
\ref{sec:212}.
It uses the Hamilton--Jacobi approach of \cite{bgl} based on explicit
computations on the sup-convolution semi-group (Hopf--Lax formula).
An alternative proof of $(3)\Rightarrow(1)$, with a worse constant,
is given in the subsequent Section \ref{sec:alternative} in the
particular case of
the quadratic cost $c(x)= |x|^2/2$. We believe that such an approach
may lead to further developments in the future, and so it is worth
mentioning here.
In order to keep the arguments as clean as possible and to go straight
to the proofs, we decided to collect most of the results
on semi-convex functions, and most of the technical lemmas, in an
independent section (Section \ref{sec5}).
Finally, we present some extensions and comments in Section \ref{sec6}.
We first
give an extension of our main Theorem \ref{th:main1} to Riemannian
manifolds verifying a certain curvature condition (see Theorem \ref
{extensionthm}). Then, in Section \ref{autres log-sob}, we show that
other types of\vadjust{\goodbreak} logarithmic Sobolev inequalities can be derived from
transport inequalities (see Theorem \ref{th:main1bis}). The last
Section \ref{poincare} is a discussion on the links between Poincar\'e
inequality and (restricted) modified logarithmic Sobolev inequality.
Let us end this Introduction with an important application of Theorem
\ref{main-result2}.
It is well known that many functional inequalities of Sobolev type are
stable under bounded perturbations. The first perturbation property of
this type was established by Holley and Stroock in \cite{HS87} for the
logarithmic Sobolev inequality.
\begin{theorem}[(Holley--Stroock)]\label{HS}
Let $\mu$ be a probability measure verifying the logarithmic Sobolev
inequality with a constant $C>0$ [$\mathbf{LSI}(C)$ for short]:
\[
\mathrm{Ent}_\mu(f^2)\leq C \int|\nabla f|^2 \,d\mu\qquad\forall f.
\]
Let $\varphi$ be a bounded function; then the probability measure
$d\tilde{\mu}=\frac{1}{Z}e^{\varphi} \,d\mu$ verifies $\mathbf
{LSI}$ with the constant $\tilde{C}=e^{\mathrm{Osc}(\varphi)}C$,
where the oscillation of $\varphi$ is defined by
\[
\mathrm{Osc}(\varphi)=\sup\varphi- \inf\varphi.
\]
\end{theorem}
A longstanding open question was to establish such a property for
transport inequalities. We have even learned from Villani that this
question was one of the initial motivations behind the celebrated work
\cite{otto-villani}. The representation furnished by Theorem \ref
{main-result2} is the key that enables us to give the first bounded
perturbation property for transport inequalities. The following
corollary is our second main result.
\begin{cor}
Let $\alpha$ be a convex symmetric function of class $C^1$ such that
$\alpha(0)=\alpha'(0)=0$ and
$\alpha'$ is concave on $\R^+$. Let $c(x)=\sum_{i=1}^k \alpha(x_i)$ and
$\mu$ be a probability measure on $\R^k$.
Assume that $\mu$ verifies (\ref{eqiTcC}). Let $\varphi\dvtx\R^k\to\R$ be
bounded and define $d\tilde{\mu}(x)=\frac{1}{Z}e^{\varphi(x)} \,d\mu
(x)$, where $Z$ is the normalization constant. Then $\tilde{\mu}$
verifies \hyperlink{eqiTcClink}{($\T_c(8 C e^{\mathrm{Osc}(\varphi)})$)} where
$\mathrm{Osc} (\varphi)= \sup\varphi- \inf\varphi$.
\end{cor}
\begin{pf}
The proof below is a straightforward adaptation of the original proof
of Theorem \ref{HS}.
Using the following representation of the entropy
\[
\ent_\mu(g ) = \inf_{t >0} \biggl\{ \int\biggl(g \log
\biggl( \frac{g}{t} \biggr) - g + t \biggr) \,d\mu\biggr\}
\]
with $g=e^f$, we see that [since $g \log( \frac{g}{t} )
- g + t \geq0$]
\[
\ent_{\tilde{\mu}} (g) \leq\frac{e^{\sup\varphi
}}{Z} \ent_{\mu} (g) .
\]
From the first part of Theorem \ref{main-result2}, it follows that for
all $K\geq0$, $\eta>0$, with $\eta+K<1/C$ and all $K$-semi-convex
functions $f$ for the cost $c$,
\begin{eqnarray*}
\ent_{\tilde{\mu}} (e^f)
& \leq &
\frac{e^{\sup\varphi}}{Z} \frac{\eta}{1-C(\eta+ K)} \int c^*
\biggl(\frac{\nabla f}{\eta}\biggr) e^f \,d\mu\\
& \leq &
\frac{\eta e^{{\mathrm{Osc}}(\varphi)}}{1-C(\eta+ K)} \int c^*
\biggl(\frac{\nabla f}{\eta}\biggr) e^f \,d\tilde{\mu} .
\end{eqnarray*}
Let $u=e^{{\mathrm{Osc}}(\varphi)}$ and $c_u(x):=uc(x/u)$, $x\in\Rk
$. Let $f$ be a $K$-semi-convex function for the cost $c_u$. Since
$u\geq1$ the convexity of $\alpha$ yields $c_u(x)\leq c(x)$, $x\in
\Rk$. Hence, $f$ is a $K$-semi-convex function for the cost $c$.
Observing that $c^*_u(x)= u c^*(x), x\in\Rk$, from the above
inequality, it follows that $\tilde{\mu}$ verifies the inequality
\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_u,C)$)}. Then, the second part of Theorem \ref
{main-result2} implies that $\tilde{\mu}$ verifies
\hyperlink{eqiTcClink}{($\T_{c_u}(8C)$)}. From point (i) of the technical Lemma \ref
{lem:tec2}, one has $uc(x/u)\geq c(x)/u$ for $u\geq1$, $x\in\Rk$.
This inequality completes the proof.
\end{pf}
\begin{rem}
After the preparation of this work, we have learned from E.~Milman that
he has obtained in \cite{Mil09e} new perturbation results for various
functional inequalities on a Riemannian manifold equipped with a
probability measure $\mu$ absolutely continuous with respect to the
volume element. His results also cover transport inequalities but are
only true under an additional curvature assumption. To be more precise,
suppose that $\mu$ verifies say \hyperlink{eqiTcClink}{($\T_2(C)$)} and consider another
probability measure of the form $d\tilde{\mu}(x)=e^{-V(x)} \,dx$ such
that
\[
\mathrm{Ric} + \operatorname{Hess} V\geq-\kappa,
\]
for some $\kappa\geq0$. Then if $C>\frac{\kappa}{2}$ and if $\mu$
and $\tilde{\mu}$ are close in some sense to each other, then $\tilde
{\mu}$ verifies \hyperlink{eqiTcClink}{($\T_{2}(\tilde{C})$)} for some $\tilde{C}$ depending
only on $C$, $\kappa$ and on the ``distance'' between $\mu$ and
$\tilde{\mu}$.
Actually, the curvature assumption above makes it possible to go beyond
the classical Holley--Stroock property and to work with measures $\tilde
{\mu}$ which are more serious perturbations of $\mu$. Proofs of these
results are based on the remarkable equivalence between concentration
and isoperimetric inequalities under curvature bounded from below,
discovered by Milman in \cite{Mil09d}.
\end{rem}
\section{From transport inequalities to restricted modified
logarithmic Sobolev inequalities} \label{sec:12}
In this section, we prove the first part $(1) \Rightarrow
(2)\Rightarrow(3)$ of Theorem~\ref{main-result2}. As mentioned in the
\hyperref[intro]{Introduction},
this implication holds in a more general setting as we explain now.
Let $X$ denote a Polish space equipped with the Borel $\sigma
$-algebra. Then the optimal transport cost between two probability
measures $\mu$ and $\nu$ on $X$, with cost $\rmc \dvtx X \times X
\to
\mathbb{R}^+$ is
\[
\mathcal{T}_{\rmc}(\nu,\mu):=\inf_\pi\iint\rmc(x,y) \,d\pi(x,y),
\]
where the infimum is taken over all probability measures $\pi$ on $X
\times X$ with marginals $\nu$ and $\mu$. Assume $\rmc$ is
symmetric so that $\mathcal{T}_{\rmc}(\nu,\mu)=\mathcal
{T}_{\rmc}(\mu,\nu)$.
The transport inequality \hyperlink{eqiTcClink}{($\T_{\rmc}(C)$)} is defined accordingly as in
Definition \ref{def:tci}.
For $f\dvtx X \to\mathbb{R}$ and $\lambda>0$, the inf-convolution
$Q^\lambda f \dvtx X \to\mathbb{R}$ is given by
\[
Q^\lambda f(x) = \inf_{y\in X} \{ f(y) + \lambda{\rmc} (
x,y) \}.
\]
The first part of Theorem \ref{main-result2} will be a consequence of
the following general result.
\begin{theorem} \label{th:main1}
Let ${\rmc}\dvtx X \times X \rightarrow\R^+$ be a symmetric
continuous function.
Let $\mu$ be a probability measure on $X$ satisfying \hyperlink{eqiTcClink}{($\T_{\rmc}(C)$)}
for some $C>0$.
Then for all functions $f \dvtx X \to\mathbb{R}$ and all $\lambda\in
(0,1/C)$, it holds
\[
\ent_\mu(e^f) \leq\frac{1}{1 - \lambda C} \int
(f - Q^\lambda f) e^f \,d\mu.
\]
Assume moreover that ${\rmc}(x,y)=c(x-y)$, $x,y\in\R^k$, where
$c\dvtx\R
^k\rightarrow\R^+ $ is a differentiable function such that
$c(0)=\nabla c(0)=0$. Then $\mu$ verifies the inequality \hyperlink{eqrMLSIcClink}{($\mathbf
{rMLSI}(c,C)$)}.
\end{theorem}
\begin{pf}
Fix $f\dvtx X \to\mathbb{R}$, $\lambda\in(0,1/C)$,
and define $d\nu_{f}=\frac{e^f}{\int e^f \,d\mu} \,d\mu$. One has
\begin{eqnarray*}
H(\nu_{f}|\mu)
& = &
\int\log\biggl( \frac{e^f}{\int e^f \,d\mu}\biggr)\frac{e^f}{\int
e^f \,d\mu} \,d\mu
=
\int f \,d\nu_{f}-\log\int e^f \,d\mu\\
&
\leq&\int f \,d\nu_{f}-\int f \,d\mu,
\end{eqnarray*}
where the last inequality comes from Jensen's inequality.
Consequently, if $\pi$ is a probability measure on $X\times X$ with
marginals $\nu_f$ and $\mu$
\[
H(\nu_{f}| \mu)\leq\iint\bigl(f(x)-f(y)\bigr) \,d\pi(x,y).
\]
It follows from the definition of the inf-convolution function that
$f(x)-f(y) \leq f(x) -Q^\lambda f(x) + \lambda{\rmc}(x,y)$, for all
$x,y \in X$. Hence,
\[
H(\nu_{f}| \mu)
\leq\iint\bigl(f(x) - Q^\lambda f(x) \bigr) \,d\pi(x,y) + \lambda
\iint{\rmc}(x,y) \,d\pi(x,y),
\]
and optimizing over all $\pi$ with marginals $\nu_f$ and $\mu$
\begin{eqnarray*}
H(\nu_{f}| \mu)&\leq& \int(f - Q^\lambda f) \,d\nu_f +
\lambda\mathcal{T}_{\rmc}(\nu_f,\mu) \\
&
\leq&\frac{1}{\int e^f \,d\mu} \int(f - Q^\lambda f
) e^f \,d\mu+ \lambda C H(\nu_{f}| \mu) .
\end{eqnarray*}
The first part of Theorem \ref{th:main1} follows by noticing that
$(\int e^f \,d\mu) H(\nu_{f}| \mu) = \ent_\mu
(e^f)$.
Then the proof of Theorem \ref{th:main1} is completed by applying
Lemma \ref{lem:easy} below.
\end{pf}
\begin{lem} \label{lem:easy}
Let $c\dvtx\mathbb{R}^k \to\mathbb{R}^+$ be a differentiable function
such that $c(0)=\nabla c(0)=0$ and define $c^*(x)=\sup_y \{ x \cdot y
- c(y) \}\in\R\cup\{+\infty\}$, $x\in\R^k$.
Then, for any $K$-semi-convex differentiable function $f \dvtx\mathbb
{R}^k \to\mathbb{R}$ for the cost $c$, it holds
\[
f(x) - Q^{K+\eta}f(x) \leq\eta c^* \biggl( - \frac{\nabla f(x)
}{\eta} \biggr)\qquad
\forall x \in\mathbb{R}^k, \forall\eta>0 .
\]
\end{lem}
\begin{pf}
Fix a $K$-semi-convex differentiable function $f \dvtx\mathbb{R}^k \to
\R
$. Also fix $x \in\mathbb{R}^k$ and $\eta>0$.
By Proposition \ref{prop:sc} and\vadjust{\goodbreak} the Young inequality $X\cdot Y \leq
\eta c^*(\frac{X}{\eta}) + \eta c(Y)$, we have
\[
f(x) - f(y) - Kc(y-x)
\leq
- \nabla f(x) \cdot(y-x)
\leq
\eta c^* \biggl( - \frac{\nabla f (x) }{\eta} \biggr) + \eta c( y-x ).
\]
Hence, for any $y \in\mathbb{R}^k$,
\[
f(x) - f(y) -(K+\eta)c(y-x) \leq\eta c^* \biggl( - \frac{\nabla f
(x) }{\eta} \biggr) .
\]
This yields the expected result.
\end{pf}
\section{From restricted modified logarithmic Sobolev inequalities to
transport inequalities---I: Hamilton--Jacobi approach} \label{sec:212}
In this section, we prove the second part $(3) \Rightarrow(1)$ of
Theorem \ref{main-result2}.
The proof is based on the approach of Bobkov,
Gentil and Ledoux \cite{bgl}, using the Hamilton--Jacobi equation. We
will use the following notation: given a convex function $\alpha\dvtx\R
\rightarrow\R^+$ with $\alpha(u)\neq0$ for $u\neq0$, we define
\begin{equation} \label{eq:omega}
\omega_\alpha(x) = \sup_{u >0} \frac{\alpha(ux)}{\alpha(u)}\qquad
\forall x\in\R.
\end{equation}
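For instance, in the quadratic case $\alpha(u)=u^2$ the ratio $\alpha(ux)/\alpha(u)$ does not
depend on $u$, so that
\[
\omega_\alpha(x)=x^2\qquad \forall x\in\R.
\]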
\begin{pf*}{Proof of $(3) \Rightarrow(1)$ of Theorem
\ref{main-result2}}
Let $f \dvtx\mathbb{R}^k \to\mathbb{R}$ be a bounded continuous function.
For $x \in\mathbb{R}^k$ and $t \in(0,1)$, define
\[
P_tf(x) = \sup_{y \in\mathbb{R}^k} \biggl\{ f(y) - t c \biggl( \frac
{x-y}{t} \biggr) \biggr\} .
\]
It is well known that $u_{t}=P_{t}f$ verifies the following
Hamilton--Jacobi equation (see, e.g., \cite{evans}): for almost every
$x\in\R^k$ and almost every $t \in(0,+\infty)$,
\[
\cases{
\partial_t u_t (x) = c^* ( -\nabla u_t(x) ), \cr
u_0 = f.}
\]
To avoid lengthy technical arguments, we assume in the sequel that
$P_{t}f$ is continuously differentiable in space and time and that the
equation above holds for all $t$ and $x$. We refer to
\cite{LV07}, proof of Theorem 1.8, or \cite{villani}, proof of Theorem
22.17, for a
complete treatment of the problems arising from the nonsmoothness of $P_{t}f$.
Defining $Z(t)=\int e^{\ell(t )P_{1-t} f} \,d\mu$, where $\ell$ is a
smooth nonnegative function on $\R^+$ with $\ell(0)=0$ that will be
chosen later, one gets
\begin{eqnarray*}
Z'(t)
& = &
\int\biggl( \ell'(t)P_{1-t} f + \ell(t) \,\frac{\partial}{\partial
t}P_{1-t} f \biggr) e^{\ell(t ) P_{1-t} f} \,d\mu\\
& = &
\int\ell'(t) P_{1-t} f e^{\ell(t) P_{1-t} f} \,d\mu- \ell(t) \int
c^* ( {\nabla P_{1-t} f} )e^{\ell(t) P_{1-t} f} \,d\mu.
\end{eqnarray*}
On the other hand,
\[
\ent_\mu\bigl( e^{\ell(t) P_{1-t} f} \bigr) = \ell(t) \int
P_{1-t} f e^{\ell(t) P_{1-t} f} \,d\mu- Z(t) \log Z(t) .
\]
Therefore provided $\ell'(t)\neq0$,
\begin{eqnarray} \label{eq:step1}
\ent_\mu\bigl( e^{\ell(t) P_{1-t} f} \bigr)
& = &
\frac{\ell(t)}{\ell'(t)} Z'(t) -Z(t) \log Z(t) \nonumber\\[-8pt]\\[-8pt]
&&{}
+ \frac{\ell(t)^2}{\ell'(t)} \int c^* ( {\nabla P_{1-t} f}
)e^{\ell(t) P_{1-t} f} \,d\mu.\nonumber
\end{eqnarray}
By Lemma \ref{lem:semiconv} [with $A=\ell(t)(1-t)$ and $B=1-t$], the
function $g=\ell(t) P_{1-t}f$
is $K(t)$ semi-convex for the cost function $c(x)=\sum_{i=1}^k\alpha
(x_i)$, $x\in\R^k$, where $K(t)=4 \ell(t)(1-t) \omega_\alpha
( \frac{1}{2(1-t)} )$.
Hence, we can apply the restricted logarithmic Sobolev inequality to
get that for any $\eta>0$, any $t \in(0,1)$ such that
$K(t) + \eta<1/C_3$,\setcounter{footnote}{1}\footnote{Note that this condition is not empty
since $K(0)=0$.}
\begin{eqnarray*}
\ent_\mu\bigl( e^{\ell(t) P_{1-t} f} \bigr)
&
\leq&\frac{\eta}{1-(K(t) + \eta)C_3} \int c^* \biggl( \frac{\ell
(t) \nabla P_{1-t} f}{\eta} \biggr) e^{\ell(t) P_{1-t} f} \,d\mu\\
&
\leq&
\frac{\eta\omega_{\alpha^*} ( {\ell(t)}/{\eta}
)}{1-(K(t) + \eta)C_3} \int c^* ( {\nabla P_{1-t} f} )
e^{\ell(t) P_{1-t} f} \,d\mu,
\end{eqnarray*}
since $c^*(x)=\sum_{i=1}^k\alpha^*(x_i)$, $x\in\R^k$.
Combining this bound with (\ref{eq:step1}) leads to
\begin{eqnarray*}
&& \frac{\ell(t)}{\ell'(t)}Z'(t) -Z(t) \log Z(t) \\
&&\qquad\leq
\biggl( \frac{\eta\omega_{\alpha^*} ( {\ell(t)}/{\eta}
)}{1-(K(t) + \eta)C_3} - \frac{\ell(t)^2}{\ell'(t)} \biggr)
\int c^* ( {\nabla P_{1-t} f} )e^{\ell(t) P_{1-t} f}
\,d\mu.
\end{eqnarray*}
Our aim is to choose the various parameters so as to make the
right-hand side of the latter inequality nonpositive.
We will make sure to choose $\ell$ so that $\ell(t)/\eta< 1$; then
by Lemma \ref{lem:tec2} below $K(t)\leq\ell(t)/(1-t)$ and $\omega
_{\alpha^*} ( \frac{\ell(t)}{\eta} )\leq\frac{\ell
^2(t)}{\eta^2}$.
Setting $v=1-C_3\eta$, one has $0< v< 1$,
\begin{equation}\label{eq:*}
C_3\bigl(K(t)+\eta\bigr)\leq(1-v)\biggl(\frac{\ell(t)}{\eta(1-t)} +1\biggr)
\end{equation}
and
\begin{eqnarray}\label{inetech}
&&\biggl( \frac{\eta\omega_{\alpha^*} ( {\ell(t)}/{\eta}
)}{1-(K(t) + \eta)C_3} - \frac{\ell(t)^2}{\ell'(t)}
\biggr)\nonumber\\[-8pt]\\[-8pt]
&&\qquad\leq\ell^2(t) \biggl(\frac{1}{\eta v-(1-v){\ell
(t)}/({1-t})}-\frac{1}{\ell'(t)}\biggr).\nonumber
\end{eqnarray}
We choose $\ell(t)=\eta((1-t)^{1-v}-(1-t)), t\in(0,1)$,
so that $\ell(0)=0$ and the right-hand side of (\ref{inetech}) is
equal to zero. Furthermore
$\ell'(t)=\eta(1-\frac{1-v}{(1-t)^v})\geq0, \forall
t\in[0,1-(1-v)^{1/v}]$.
As assumed earlier, $\ell(t)$ is nonnegative and $\ell(t)/\eta< 1$
on $(0,1)$. Let us observe that
\[
\biggl[ \frac{\log Z(t)}{\ell(t)} \biggr]'
=\frac{\ell'(t)}{Z(t)\ell^2(t)}\biggl[\frac{\ell(t)}{\ell
'(t)}Z'(t) -Z(t) \log Z(t) \biggr] .
\]
Let $T=T(v):=1-(1-v)^{1/v}$. Since $\ell'(t)>0$ on $(0,T(v))$, the
above inequalities imply that on that interval $[\frac{\log
Z(t)}{\ell(t)}]'\leq0$ provided
$C_3(K(t)+\eta)<1$. By (\ref{eq:*}), this is indeed satisfied for
$t\in[0,T(v)]$.
This gives that the function $t \mapsto\frac{\log Z(t)}{\ell(t)}$ is
nonincreasing on $(0,T]$.
Hence, we have
\[
\int e^{\ell(T) P_{T} f} \,d\mu= Z(T) \leq\exp\biggl( \ell(T)
\lim_{t \to0} \frac{\log Z(t)}{\ell(t)} \biggr) = e^{\ell(T) \int
P_1 f \,d\mu} .
\]
In other words, since $P_{T}f \geq f$, for all bounded continuous
functions $g= \ell(T)f$,
\[
\int e^g \,d\mu\leq e^{\int\tilde P g \,d\mu}
\]
with
\[
\tilde P g (x) = \sup_{y \in\mathbb{R}^k} \{ g(y) - \ell
(T) c(x-y) \} .
\]
According to the Bobkov and G\"otze sup-convolution characterization of
transport inequalities (which for the reader's convenience we quote
below as Theorem \ref{bg}), this implies that $\mu$ verifies
\hyperlink{eqiTcClink}{($\T_c(1/\ell(T))$)}. One has $\ell(T)=\eta v (1-v)^{(1/v)-1}$ and
$C_3\ell(T)=v (1-v)^{1/v}$. Hence, $\mu$ verifies
\hyperlink{eqiTcClink}{($\T_c(K)$)}
with
\[
K = \frac{C_3}{\sup_{v\in(0,1)}v(1-v)^{1/v}}\leq7.7 C_3.
\]
The proof of $(3)\Rightarrow(1)$ is complete.
\end{pf*}
\begin{theorem}[\cite{bobkov-gotze}]\label{bg}
Let $\mu$ be a probability measure on $\mathbb{R}^k$, $\lambda>0$
and $c$ defined as in Theorem \ref{main-result2}.
Then, the following two statements are equivalent:
\begin{enumerate}[(ii)]
\item[(i)] $\mu$ satisfies \hyperlink{eqiTcClink}{($\T_c(1/\lambda)$)};
\item[(ii)] for any bounded function $f\dvtx\mathbb{R}^k \to\mathbb{R}$ it holds
\[
\int e^f \,d\mu\leq
\exp\biggl\{ \int\sup_{y \in\mathbb{R}^k} \{ f(y) - \lambda
c(x-y) \} \,d\mu(x) \biggr\} .
\]
\end{enumerate}
\end{theorem}
Note that Theorem \ref{bg} holds in a much more general setting, see
\cite{villani}.
\section{From the restricted logarithmic Sobolev inequality to $\T_{2}$---II:
An alternative proof} \label{sec:alternative}
In this section, we give an alternative proof of the second part $(3)
\Rightarrow(1)$ of Theorem \ref{main-result2}.
The final\vadjust{\goodbreak} result will lead to a worse constant, so we will present our
approach only in the particular case of
the quadratic cost function $c(x)=\frac{1}{2}|x|^2$. More precisely,
we will prove that
(\ref{eqrLSIC}) $\Rightarrow$ \hyperlink{eqiTcClink}{($\T_2(9C)$)}
[leading, for the quadratic cost,
to the implication $(3) \Rightarrow(1)$ of Theorem \ref{main-result2}
with $C_1=9C_3$].
We believe that this alternative approach may lead to other
results in the future, and so it is worth mentioning here.
The strategy is based on the following recent characterization of
Gaussian dimension free concentration by the first author.
\begin{theorem}[\cite{gozlan}]\label{characterization}
A probability measure $\mu$ on $\R^k$ verifies the inequality
\hyperlink{eqiTcClink}{($\T_2(C/2)$)} if and only if there are some $r_o\geq0$ and $b>0$ such that
for all positive integers $n$ and all subsets $A$ of $(\R^k
)^n$ with $\mu^n(A)\geq1/2$, the following inequality holds
\[
\mu^n(A+rB_2)\geq1-be^{-(r-r_o)^2/C}\qquad \forall r\geq r_o,
\]
where $B_2$ is the Euclidean unit ball of $(\R^k)^n$.
\end{theorem}
So, in order to get that (\ref{eqrLSIC}) $\Rightarrow$ \hyperlink{eqiTcClink}{($\T_2(9C)$)} it
is enough to
prove that the dimension free Gaussian concentration inequality holds
with $-(r-r_o)^2/(18C)$ in the exponential.
First, let us observe that the restricted logarithmic Sobolev
inequality tensorizes.
\begin{prop}\label{tensorization}
If a probability measure $\mu$ on $\R^k$ verifies (\ref{eqrLSIC})
for some $C>0$, then for all positive integers $n$ the probability $\mu
^n$ verifies (\ref{eqrLSIC}) with the same constant.
\end{prop}
\begin{pf}
If $f\dvtx(\R^k)^n\to\R$ is $K$-semi-convex, then for all
$i\in\{1,\ldots,n\}$ and all $x_1, \ldots, x_{i-1}, x_{i+1},
\ldots, x_n \in\R^k$ the function $f_i\dvtx\R^k\to\R$ defined by
$f_i(x)=f(x_1,\ldots,x_{i-1},x,x_{i+1},\ldots,x_n)$ is $K$-semi-convex.
According to the classical additive property of the entropy functional
(see, e.g., \cite{ane}, Chapter 1)
\[
\ent_{\mu^n}(e^f)\leq\int\sum_{i=1}^n \ent_{\mu}(e^{f_i}) \,d\mu^n.
\]
Applying to each $f_i$ the restricted logarithmic Sobolev inequality
completes the proof.
\end{pf}
The next proposition uses the classical Herbst argument (see,
e.g.,\break
\cite{ledoux}).
\begin{prop}\label{Herbst}
If $\mu$ verifies the restricted logarithmic Sobolev inequality
(\ref{eqrLSIC}) then
for all $f\dvtx\R^k\to\R$ which is $1$-Lipschitz with respect to the
Euclidean norm and $K$-semi-convex with $K\geq0$ one has
\[
\int e^{\lambda(f(x)-\int f \,d\mu)} \,d\mu(x)\leq\exp\biggl(\frac
{2\lambda^2C}{1-\lambda KC}\biggr)\qquad \forall\lambda\in
\bigl(0, 1/(CK)\bigr).
\]
\end{prop}
\begin{pf}
Let us denote $H(\lambda)=\int e^{\lambda f} \,d\mu$, for all $\lambda
\geq0$.
The function $\lambda f$ is $\lambda K$-semi-convex, so if $0\leq
\lambda<1/(CK)$, one can apply the\vadjust{\goodbreak} inequality (\ref{eqrLSIC}) to
the function $\lambda f$.
Doing so yields the inequality
\begin{eqnarray*}
\lambda H'(\lambda)-H(\lambda)\log H(\lambda)
& = &
\ent_\mu( e^{\lambda f} )
\leq
\frac{2C\lambda^2}{(1-\lambda K C)^2}\int|\nabla
f|^2e^{\lambda f} \,d\mu\\
& \leq &
\frac{2C\lambda^2}{(1-\lambda K C)^2}H(\lambda),
\end{eqnarray*}
where the last inequality comes from the fact that $f$ is $1$-Lipschitz.
Consequently, for all $0\leq\lambda<1/(CK)$,
\[
\frac{d}{d\lambda}\biggl(\frac{\log H(\lambda)}{\lambda}
\biggr)\leq\frac{2C}{(1-\lambda K C)^2}.
\]
Observing that $\log H(\lambda)/\lambda\to\int f \,d\mu$ when
$\lambda\to0$ and integrating the differential inequality above
[explicitly, $\int_0^\lambda 2C/(1-sKC)^2 \,ds=2C\lambda/(1-\lambda KC)$]
gives the result.
\end{pf}
Now let us show how to approximate a given $1$-Lipschitz function by a
$1$-Lipschitz and $K$-semi-convex function.
\begin{prop}\label{approximation}
Let $f\dvtx\R^k\to\R$ be a $1$-Lipschitz function. Define
\[
P_tf(x)=\sup_{y\in\R^k}\biggl\{f(y)-\frac{1}{2t}|x-y|^2\biggr\}\qquad
\forall x\in\R^k, \forall t>0 .
\]
Then:
\begin{enumerate}[(iii)]
\item[(i)] For all $t>0$, $P_t f$ is $1$-Lipschitz.
\item[(ii)] For all $t>0$, $P_t f$ is $1/t$-semi-convex.
\item[(iii)] For all $t>0$ and all $x\in\R^k$, $f(x)\leq P_tf(x)\leq f(x)+
\frac{t}{2}$.
\end{enumerate}
\end{prop}
\begin{pf}
(i)
Write $P_tf(x)=\sup_{z\in\R^k}\{f(x-z)-\frac
{1}{2t}|z|^2\}$. For all $z\in\R^k$, the function $x\mapsto
f(x-z)-\frac{1}{2t}|z|^2$ is $1$-Lipschitz. So $P_tf$ is $1$-Lipschitz
as a supremum of $1$-Lipschitz functions.
(ii)
Expanding $|x-y|^2$ yields $P_t f(x)=\sup_{y\in\R^k}\{
f(y)-\frac{1}{2t}|y|^2+\frac{1}{t}x\cdot y\}-\frac{1}{2t}|x|^2$.
Since a supremum of affine functions is convex, one concludes that
$x\mapsto P_tf(x)+\frac{|x|^2}{2t}$ is convex, which means that $P_tf$
is $1/t$-semi-convex.
(iii)
The inequality $P_tf(x)\geq f(x)$ is immediate. Since $f$ is $1$-Lipschitz,
\begin{eqnarray*}
P_tf(x)-f(x)&=&\sup_{y\in\R^k}\biggl\{f(y)-f(x)-\frac
{1}{2t}|x-y|^2\biggr\}\\ \noalign{\vspace{-2pt}}
&\leq&\sup_{y\in\R^k}\biggl\{|y-x|-\frac{1}{2t}|x-y|^2\biggr\}\\ \noalign{\vspace{-2pt}}
&=&\sup_{r\geq0} \biggl\{r-\frac{r^2}{2t}\biggr\}=\frac{t}{2} .
\end{eqnarray*}
\end{pf}
We are now ready to complete the proof.\vadjust{\goodbreak}
\begin{pf*}{Proof of (\ref{eqrLSIC}) $\Rightarrow$ \hyperlink{eqiTcClink}{($\T_2(9C)$)}}
Let $n\geq1$. Consider a $1$-Lipschitz function $g$ on $(\R
^k)^n$ and define
$P_t g(x)=\sup_{y\in(\R^k)^n}\{g(y)-\frac
{1}{2t}|x-y|^2\}$, $t>0$. Thanks to Proposition \ref
{approximation}, the function $P_tg$ is $1$-Lipschitz and
$1/t$-semi-convex, so according to Propositions \ref{tensorization}
and \ref{Herbst}, for all $0\leq\lambda<t/C$, one has
\[
\int e^{\lambda(P_tg(x)-\int P_tg \,d\mu^n)} \,d\mu^n(x)\leq\exp
\biggl(\frac{2\lambda^2C}{1-{\lambda C}/{t}}\biggr).
\]
Moreover, according to point (iii) of Proposition \ref
{approximation}, $P_tg(x)-\int P_tg \,d\mu^n\geq g(x)-\int g \,d\mu
^n-\frac{t}{2}$, for all $x\in(\R^k)^n$. Plugging this
in the inequality above gives
\[
\int e^{\lambda(g(x)-\int g \,d\mu^n)} \,d\mu^n(x)\leq\exp
\biggl(\frac{\lambda t}{2}+\frac{2\lambda^2C}{1-{\lambda
C}/{t}}\biggr).
\]
For a given $\lambda\geq0$, this inequality holds as soon as
$t>C\lambda$. Define $\varphi(t)=\frac{\lambda t}{2}+\frac{2\lambda
^2C}{1-{\lambda C}/{t}}$, $t>0$. It is easy to check that $\varphi
$ attains its minimum value at $t_{\min}=3C\lambda$ (which is
greater than $C\lambda$) and that $\varphi(t_{\min})=9C\lambda^2/2$.
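[The minimization is elementary: writing $\varphi(t)=\frac{\lambda t}{2}+\frac{2\lambda
^2Ct}{t-\lambda C}$, one gets
\[
\varphi'(t)=\frac{\lambda}{2}-\frac{2\lambda^3C^2}{(t-\lambda C)^2},
\]
which vanishes precisely when $t-\lambda C=2\lambda C$.]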
Consequently, we arrive at the following upper
bound on the Laplace transform of $g$:
\[
\int e^{\lambda(g(x)-\int g \,d\mu^n)} \,d\mu^n(x)\leq e^{9C\lambda
^2/2}\qquad \forall\lambda\geq0.
\]
From this, we deduce that every $1$-Lipschitz function $g$ verifies the
following deviation inequality around its mean
\[
\mu^n \biggl(g\geq\int g \,d\mu^n + r\biggr)\leq e^{-r^2/(18C)}\qquad \forall
r\geq0.
\]
Let $r_o$ be any number such that $e^{-r_o^2/(18C)}<1/2$; then, denoting
by $m(g)$ any median of $g$, we get $ \int g \,d\mu^n+r_o\geq m(g)$.
Applying this inequality to $-g$, we conclude that $|m(g)-\int g \,d\mu
^n|\leq r_o$. So the following deviation inequality around the median holds
\[
\mu^n\bigl(g\geq m(g)+r\bigr)\leq e^{-(r-r_o)^2/(18C)}\qquad \forall r\geq r_o.
\]
Take $A\subset(\R^k)^n$ with $\mu^n(A)\geq1/2$, and define
$g_A(x)=d_2(x,A)$ where $d_2$ is the usual Euclidean distance. Since
$0$ is a median of $g_A$, the preceding inequality applied to $g_A$ reads
\[
\mu^n(A+rB_2)\geq1-e^{-(r-r_o)^2/(18C)}\qquad \forall r\geq r_o.
\]
According to Theorem \ref{characterization}, this Gaussian dimension
free concentration property implies \hyperlink{eqiTcClink}{($\T_2(9C)$)}.
\end{pf*}
\section{Some technical results}\label{sec5}
In this section, we collect some useful results on semi-convex functions.
In the case of differentiable functions, it is easy to rephrase the
definition of semi-convexity, in the following way.
\begin{prop}\label{prop:sc}
Let $c\dvtx\Rk\rightarrow\R^+$ be a differentiable function with
$c(0)=\nabla c(0)=0$. Then,
a differentiable function $f\dvtx\R^k\to\R$ is $K$-semi-convex for the
cost function $c$ if and only if
\begin{equation}\label{K semi-convex cost c 2}
f(y)\geq f(x)+\nabla f(x)\cdot(y-x)-Kc(y-x)\qquad \forall x,y\in\R^k.
\end{equation}
\end{prop}
\begin{pf}
Suppose that $f$ is $K$-semi-convex; according to the definition, for
all $x,y\in\R^k$ and $\lambda\in[0,1]$, the following holds
\begin{eqnarray*}
f(y)&\geq& f(x)+\frac{f(\lambda x+(1-\lambda)y)-f(x)}{1-\lambda
}\\
&&{}-K\frac{\lambda}{1-\lambda}c\bigl((1-\lambda)(y-x)\bigr)-Kc\bigl(\lambda(y-x)\bigr).
\end{eqnarray*}
Letting $\lambda\to1$ and using $c(0)=\nabla c(0)=0$ one obtains
(\ref{K semi-convex cost c 2}).
Let us prove the converse; according to (\ref{K semi-convex cost c 2}),
\begin{eqnarray*}
f(x)&\geq &f\bigl(\lambda x+(1-\lambda)y\bigr) -(1-\lambda) \nabla f\bigl(\lambda
x+(1-\lambda)y\bigr)\cdot(y-x)\\
&&{}-Kc\bigl((1-\lambda)(y-x)\bigr)
\end{eqnarray*}
and
\[
f(y)\geq f\bigl(\lambda x+(1-\lambda)y\bigr) +\lambda\nabla f\bigl(\lambda
x+(1-\lambda)y\bigr)\cdot(y-x)-Kc\bigl(\lambda(y-x)\bigr).
\]
This gives immediately (\ref{K semi-convex cost c 1}).
\end{pf}
\begin{lem}\label{lemme1}
If $\alpha\dvtx\R\to\R^+$ is a convex symmetric function of class $C^1$
such that $\alpha(0)=\alpha'(0)=0$ and $\alpha'$ is concave on $\R
^+$, then the following inequality holds
\begin{equation}\label{alpha-ineq}
\alpha(u+v)\leq\alpha(u)+ v\alpha'(u)+4\alpha(v/2)\qquad \forall
u,v\in\R.
\end{equation}
In particular, the function $-c(x) = - \sum_{i=1}^k \alpha(x_i)$,
$x=(x_1,\ldots,x_k)\in\R^k$, is $4$-semi-convex for the cost
$x \mapsto c(x/2)$.
\end{lem}
Note that (\ref{alpha-ineq}) is an equality for $\alpha(t)=t^2$, since $(u+v)^2=u^2+2uv+4(v/2)^2$.
\begin{pf*}{Proof of Lemma \ref{lemme1}}
Since $\alpha(v)=\alpha(-v)$, it is enough to prove the inequality
(\ref{alpha-ineq}) for $u\leq0$ and $v\in\R$.
Let us consider the function $G(w):=\alpha(u+w)-\alpha(u)-w\alpha'(u)$.
For $w\geq0$, using the concavity of $\alpha'$ on $\R^+$, either
$u+w\geq0$ and one has
\[
G'(w)=\alpha'(u+w)-\alpha'(u)= \alpha'(u+w)+\alpha'(-u) \leq2
\alpha'(w/2),
\]
or $u+w\leq0$ and one has
\[
G'(w)=\alpha'(-u)-\alpha'(-u-w)\leq\alpha'(w) \leq2 \alpha'(w/2),
\]
since $w\geq0$ and
\begin{eqnarray*}
\frac{\alpha'(w/2)-\alpha'(0)}{w/2}
& \geq &
\frac{\alpha'(w)-\alpha'(0)}{w}\geq\frac{\alpha'(w)-\alpha
'(-u-w)}{2w+u}\\
& \geq &
\frac{\alpha'(-u)-\alpha'(-u-w)}{w}.
\end{eqnarray*}
Similarly, if $w\leq0$, from the convexity of $\alpha'$ on $\R^-$,
$G'(w)\geq\alpha'(w)\geq2\alpha'(w/2)$.
The proof is complete integrating the above inequalities between $0$
and $v$ either for $v\geq0$ or for $v\leq0$.
The second part of the lemma is immediate.
\end{pf*}
The next lemma gives some conditions
on $\alpha$ under which the sup-convolution semi-group $P_t$
transforms functions into semi-convex ones.
Let us recall that $\omega_\alpha$ is defined by
\[
\omega_\alpha(x) = \sup_{u >0} \frac{\alpha(ux)}{\alpha(u)}
\qquad \forall x\in\R.
\]
\begin{lem} \label{lem:semiconv}
Let $\alpha\dvtx\R\to\R^+$ be a convex symmetric function of class
$C^1$ such that $\alpha(0)=\alpha'(0)=0$ and $\alpha'$ is concave on
$\R^+$.
Let $f\dvtx\R^k\to\R$, $u>0$ and define $g(x)=P_u f(x)=\sup_{y\in\R
^k}\{f(y)-uc((y-x)/u)\}$ with $c(x)=\sum_{i=1}^k \alpha(x_i)$,
$x\in\mathbb{R}^k$. Then
$g$ is $4u \omega_\alpha( \frac{1}{2u} )$-semi-convex
for the cost function $c$.
\end{lem}
\begin{pf}
By Lemma \ref{lemme1}, the function $-c$ is $4$-semi-convex with the
cost function $x \mapsto c(x/2)$.
Consequently, for all $y\in\R^k$, the function $x\mapsto
f(y)-uc((y-x)/u)$ is $4$-semi-convex with the cost function
$x \mapsto uc(x/(2u))$. From the definition (\ref{K semi-convex cost c
1}), we observe that a supremum of $K$-semi-convex functions remains
$K$-semi-convex. Consequently, by definition of $\omega_\alpha$,
we finally get
\begin{eqnarray*}
g(y)
& \geq &
g(x) + \nabla g(x) \cdot(y-x) - 4uc \biggl( \frac{y-x}{2u} \biggr) \\
& \geq &
g(x) + \nabla g(x) \cdot(y-x) - 4u \omega_\alpha\biggl( \frac
{1}{2u} \biggr) c (y-x) .
\end{eqnarray*}
\upqed\end{pf}
\begin{lem}\label{lem:tec2}
Let $\alpha$ be a convex symmetric function of class $C^1$ such that
$\alpha(0)=\alpha'(0)=0$ and
$\alpha'$ is concave on $\R^+$. Denote by $\alpha^*$ the conjugate
of $\alpha$.
Then:
\begin{enumerate}[(iii)]
\item[(i)] For any $u \in(0,1)$, $x\in\R$, $\alpha(x/u)\leq\alpha
(x)/u^2$.\vspace*{1pt}
\item[(ii)] For any $u \in(0,1)$, $ \omega_\alpha(1/u ) \leq
1/{u^2}$.
\item[(iii)] For any $u \in(0,1)$, $\omega_{\alpha^*}(u) \leq u^2$.
\end{enumerate}
\end{lem}
\begin{pf}
Point (i). Let $x\geq0$; by concavity of $\alpha'$ on $\R^+$,
$\alpha'(x)\geq u\alpha'(x/u)+(1-u)\alpha'(0)=u\alpha'(x/u)$. The
result follows for $x\geq0$ by integrating between 0 and $x$ and then
for $x\leq0$ by symmetry.
Point (ii) is a direct consequence of point~(i).
Point (iii). Observing that $(\alpha^*)'=(\alpha')^{-1}$, it
follows that $(\alpha^*)'$ is convex on $\R^+$ and $(\alpha
^*)'(0)=\alpha^*(0)=0$. Then the proof is similar to the proof of
point (i).
\end{pf}
\section{Final remarks}\label{sec6}
In this final section, we state some remarks and extensions on the
topic of this paper.
\subsection{Extension to Riemannian manifolds} \label{Riemannian manifolds}
The Otto--Villani theorem holds true on general Riemannian manifolds
\cite{otto-villani}. Furthermore, efforts have been made recently to extend
the Otto--Villani theorem to spaces with poorer structure such as length
spaces \cite{LV07,Balogh09} or general metric spaces \cite{gozlan}.
This section is an attempt to extend our main result to spaces other
than Euclidean spaces. We will focus our attention on the inequality
\hyperlink{eqiTcClink}{($\T_{2}$)} on a Riemannian manifold.
In all that follows, $X$ will be a complete and connected Riemannian
manifold equipped with its geodesic distance $d$:
\begin{eqnarray}\label{geodesic distance}
d(x,y)=\inf\biggl\{\int_{0}^1 |\dot{\gamma}_{s}| \,ds; \gamma\in
\mathcal{C}^1([0,1],X), \gamma_{0}=x,
\gamma_{1}=y\biggr\}\nonumber\\[-8pt]\\[-8pt]
\eqntext{\forall x,y\in X.}
\end{eqnarray}
A minimizing path $\gamma$ in (\ref{geodesic distance}) is called a
minimal geodesic from $x$ to $y$; in general it is not unique. It is
always possible to assume that minimal geodesics are parametrized in
such a way that
\[
d(\gamma_{s},\gamma_t)=|s-t|d(x,y)\qquad \forall s,t\in[0,1],
\]
and this convention will be in force in the sequel.
A function $f\dvtx X\to\R$ is said to be $K$-semi-convex, $K\geq0$, if for
all $x,y\in X$ and all minimal geodesics $\gamma$ between $x$ and $y$,
the following inequality holds
\[
f(\gamma_s)\leq(1-s)f(x)+sf(y) + s(1-s)\frac{K}{2}d^2(x,y)\qquad
\forall s\in[0,1].
\]
When $f$ is of class $\mathcal{C}^1$ this is equivalent to the
following condition:
\begin{equation}\label{semiconvex Riem}
f(y)\geq f(x)+\langle\nabla f(x), \dot{\gamma}_0\rangle- \frac
{K}{2}d^2(x,y)\qquad \forall x,y\in X,
\end{equation}
for all minimal geodesics $\gamma$ from $x$ to $y$ (see, e.g.,
\cite{villani}, Proposition 16.2).
If $f$ is semi-convex, then it is locally Lipschitz \cite{villani}.
According to Rademacher's theorem (see, e.g.,
\cite{villani}, Theorem 10.8), $f$ is thus almost everywhere
differentiable. So the
inequality (\ref{semiconvex Riem}) holds for almost all $x\in X$ and
for all $y\in X$.
A function $f$ is said to be $K$-semi-concave if $-f$ is $K$-semi-convex.
\begin{lem}
If $f$ is $K$-semi-convex, then for almost all $x\in X$, the inequality
\[
f(y)\geq f(x)-|\nabla f|(x)d(x,y)-\frac{K}{2}d^2(x,y),
\]
holds for all $y\in X$.
\end{lem}
\begin{pf}
Since the geodesic is constant speed, $|\dot{\gamma}_0|=d(x,y)$.
Applying the Cauchy--Schwarz inequality in (\ref{semiconvex Riem}) yields
the desired inequality.
\end{pf}
With this inequality at hand, the proof of Lemma \ref{lem:easy}
generalizes at once, and we get the following first half of our main result.
\begin{prop}
Suppose that an absolutely continuous probability measure $\mu$ on $X$
verifies the inequality \hyperlink{eqiTcClink}{($\T_2(C)$)}, then it verifies the following
restricted logarithmic Sobolev inequality:
for all $0\leq K<\frac{1}{C}$ and all $K$-semi-convex $f\dvtx X\to\R$,
\[
\ent_{\mu}(e^f)\leq\frac{2C}{(1-KC)^2}
\int|\nabla f|^2 e^f \,d\mu.
\]
\end{prop}
The generalization of the second half of our main result is more delicate.
We have seen two proofs of the fact that the restricted logarithmic
Sobolev inequality implies \hyperlink{eqiTcClink}{($\T_{2}$)}: one based on the Hamilton--Jacobi
equation and the other based on dimension free concentration.
The common point of these two approaches is that we have used in both
cases the property that the sup-convolution operator $f\mapsto P_t f$
transforms functions into semi-convex functions (see Proposition \ref
{approximation} and Lemma \ref{lem:semiconv}). Let us see how this
property can be extended to Riemannian manifolds.
\begin{lem}\label{P_t semiconvex}
Suppose that there is some constant $S\geq1$, such that the inequality
\begin{eqnarray}\label{Ohta}
d^2(\gamma_s,y)&\geq&(1-s)d^2(x,y)+sd^2(z,y)\nonumber\\[-8pt]\\[-8pt]
&&{}-s(1-s)S^2d^2(x,z)\qquad
\forall s\in[0,1],\nonumber
\end{eqnarray}
holds for all $x,y,z \in X$, where $\gamma$ is a minimal geodesic
joining $x$ to $z$. This amounts to saying that for all $y\in X$, the
function $x\mapsto d^2(x,y)$ is $2S^2$-semi-concave.
Then for all $f\dvtx X\to\R$ and all $u>0$ the function
\begin{equation}\label{P_t Riem}
x\mapsto P_u f(x)=\sup_{y\in X}\biggl\{ f(y)-\frac{1}{2u}
d^2(x,y)\biggr\}
\end{equation}
is $S^2/ u$-semi-convex.
\end{lem}
\begin{pf}
Under the assumption made on $d^2$, for all $y\in X$, the function
$x\mapsto f(y)-\frac{1}{2u}d^2(x,y)$ is $S^2/u$-semi-convex. Since a
supremum of $S^2/u$ semi-convex functions is $S^2/u$-semi-convex, this
ends the proof.
\end{pf}
Let us make some remarks on condition (\ref{Ohta}). This condition was
first introduced by Ohta in \cite{Ohta09} and Savare in
\cite{Savare07} in their studies of gradient flows in the Wasserstein
space over nonsmooth metric spaces. The condition (\ref{Ohta}) is
related to the Alexandrov curvature of geodesic spaces which
generalizes the notion of sectional curvature in Riemannian geometry.
The first point is a classical consequence of Toponogov's theorem
\cite{Cheeger}.
The second point in the following proposition is due to Ohta
\cite{Ohta09}, Lemma 3.3.
\begin{prop}
Let $X$ be a complete and connected Riemannian manifold.
\begin{enumerate}[(2)]
\item[(1)] The condition (\ref{Ohta}) holds with $S=1$ if and only if the
sectional curvature of $X$ is greater than or equal to $0$ everywhere.
\item[(2)] Suppose that the sectional curvature is greater than or equal to
$\kappa$, where $\kappa\leq0$, then for all $x,y,z \in X$ and every
geodesic $\gamma$ joining $x$ to $z$, one has
\begin{eqnarray}
d^2(\gamma_s,y)&\geq&(1-s)d^2(x,y)+sd^2(z,y)\nonumber\\[-8pt]\\[-8pt]
&&{}-\Bigl(1+\kappa^2\sup_{t\in[0,1]}d^2(\gamma_t,y)
\Bigr)(1-s)sd^2(x,z).\nonumber
\end{eqnarray}
In particular, if $(X,d)$ is bounded, then (\ref{Ohta}) holds with
\[
S=\bigl(1+\kappa^2\operatorname{diam}(X)^2\bigr)^{1/2}.
\]
\end{enumerate}
\end{prop}
In particular, the case of the Euclidean space, studied in the
preceding sections, corresponds to the case where the sectional
curvature vanishes everywhere.
Now, let us have a look at the Hamilton--Jacobi equation. The following
theorem comes from \cite{villani}, Proposition 22.16 and Theorem 22.46.
\begin{theorem}\label{hamilton-generalise}
Let $f$ be a bounded and continuous function on $X$, the function
$(t,x)\mapsto P_t f(x)$ defined by (\ref{P_t Riem}) verifies the
following: for all $t>0$ and $x\in X$,
\[
\lim_{h\to0^+} \frac{P_{t+h}f(x)-P_{t}f(x)}{h}=\frac{|\nabla
^-(-P_{t}f)|^2(x)}{2},
\]
where the metric sub-gradient $|\nabla^- g|$ of a function $g$ is
defined by
\[
|\nabla^- g|(x)=\limsup_{y\to x} \frac{[g(y)-g(x)]_-}{d(y,x)}\qquad
\forall x\in X.
\]
\end{theorem}
Under the condition (\ref{Ohta}), $x\mapsto P_tf(x)$ is semi-convex,
and so differentiable almost everywhere, so for all $t$ and almost all
$x\in X$,
\[
\lim_{h\to0^+} \frac{P_{t+h}f(x)-P_{t}f(x)}{h}=\frac{|\nabla
P_{t}f|^2(x)}{2}.
\]
\begin{theorem}\label{extensionthm}
Suppose that the Riemannian manifold $X$ verifies condition (\ref
{Ohta}) for some $S\geq1$; if an absolutely continuous probability
measure $\mu$ on $X$ verifies the following restricted logarithmic
Sobolev inequality:
for all $0\leq K<\frac{1}{C}$ and all $K$-semi-convex $f\dvtx X\to\R$,
\[
\ent_{\mu}(e^f)\leq\frac{2C}{(1-KC)^2}
\int|\nabla f|^2 e^f \,d\mu,
\]
then it verifies \hyperlink{eqiTcClink}{($\T_2(8CS^2)$)}.
\end{theorem}
\begin{pf}
Setting $C_S=C S^2$, by assumption, for all $KS^2$ semi-convex
functions $f\dvtx X\rightarrow\R$ with $0\leq K<\frac{1}{C_S}$,
\begin{eqnarray*}
\ent_{\mu}(e^f)&\leq&\frac{2C}{(1-KS^2C
)^2} \int|\nabla f|^2 e^f \,d\mu\\
&\leq&\frac{2C_S}{(1-KC_S)^2} \int|\nabla f|^2 e^f
\,d\mu,
\end{eqnarray*}
where the last inequality holds since $S\geq1$. As mentioned in the
\hyperref[intro]{Introduction}, this is again equivalent to \hyperlink{eqrMLSIcClink}{($\mathbf
{rMLSI}(c,C_S)$)} where
$c$ is the quadratic cost function: for all $K\geq0$, $\eta>0$, with
$\eta+K<1/C_S$,
and all $KS^2$ semi-convex functions $f$
\begin{equation}\label{rmlsi}
\ent_{\mu}(e^f)\leq\frac{\eta}{1-C_S(\eta+ K)} \int
c^*\biggl(\frac{|\nabla f|}{\eta}\biggr) e^f \,d\mu,
\end{equation}
with $c^*(h)= h^2/2$, $h\in\R$.
The end of the proof exactly follows the proof of Theorem \ref
{main-result2} $(3)\Rightarrow(1)$ by replacing $C$ by $C_S$.
There is an additional technical problem due to the right derivatives;
as in the proof of Theorem \ref{main-result2}, we refer to
\cite{LV07,villani} where this difficulty has been circumvented. Therefore,
by Theorem \ref{hamilton-generalise}, we assume that $P_tf$ satisfies
the Hamilton--Jacobi equation $\partial_t P_tf(x) = c^*(|\nabla
P_tf(x)|)$ for all $t>0$ and all $x\in X$. Moreover, by Lemma \ref{P_t
semiconvex} $P_uf$ is $S^2/u$ semi-convex (for the cost
$c(x,y)=d^2(x,y)/2$). Then the continuation of the proof is identical
to the one of Theorem \ref{main-result2} by applying the inequality
(\ref{rmlsi}) to the $K(t)S^2 $ semi-convex function $\ell(t)P_{1-t}f$.
\end{pf}
To conclude this section, let us say that the proof presented in
Section \ref{sec:alternative} can also be adapted to the Riemannian
framework. Essentially,
all we have to do is to adapt the first point of Proposition \ref
{approximation}: the fact that $P_tf$ is $1$-Lipschitz when $f$ is
$1$-Lipschitz. A proof of this can be found in the proof of
\cite{Balogh09}, Theorem 2.5(iv).
\subsection{From transport inequalities to other logarithmic
Sobolev type inequalities} \label{autres log-sob}
Following the ideas of Theorem \ref{th:main1}, we may simply recover
other types of logarithmic Sobolev inequalities. These new forms of
inequalities should be of interest for further developments.
Let $X$ denote a Polish space equipped with the Borel $\sigma
$-algebra. Given Borel functions ${\rmc}\dvtx X\times X\rightarrow
\R$ and $f\dvtx X\rightarrow\R$, define for $\lambda>0$, $x\in X$,
\[
P^\lambda f(x)= \sup_{y\in X} \{ f(y)-\lambda{\rmc}(x,y)
\}.
\]
By definition, one says that a function $ f\dvtx X\rightarrow\R$ is
$K$-semi-concave for the cost ${\rmc}$ if $-f $ is $K$-semi-convex for
the cost ${\rmc}$.
\begin{theorem} \label{th:main1bis}
Let ${\rmc}\dvtx X \times X \rightarrow\R^+$ be a symmetric Borel function.
Let $\mu$ be a probability measure on $X$ satisfying \hyperlink{eqiTcClink}{($\T_{\rmc}(C)$)}
for some
$C>0$. Then for
all $\lambda\in(0,1/C)$ and all functions $f\dvtx X\rightarrow\R$,
\begin{equation}\label{LS1}
\ent_\mu(e^f) \leq\frac{1}{1 - \lambda C} \int
(P^\lambda f - f) \,d\mu\int e^f \,d\mu.
\end{equation}
Assume moreover that ${\rmc}(x,y)=c(x-y)$, $x,y\in\R^k$, where
$c\dvtx\R
^k\rightarrow\R^+ $ is a differentiable symmetric function with
$c(0)=\nabla c(0)=0$. Then
for all $K\geq0 ,\eta> 0$ with $\eta+K<1/C$ and all $K$-semi-concave
differentiable function $f\dvtx\R^k \rightarrow\R$,
\begin{equation}\label{LS2}
\ent_\mu(e^f) \leq\frac{\eta}{1 - C(\eta+K)} \int
c^*\biggl(\frac{\nabla f}{\eta}\biggr) \,d\mu\int e^f \,d\mu.
\end{equation}
\end{theorem}
\begin{pf}
Following the proof of Theorem \ref{th:main1}, one has for every
probability measure $\pi$ with marginals $\nu_f$ and $\mu$,
\[
H(\nu_{f}| \mu)\leq\iint\bigl(f(x)-f(y)\bigr) \,d\pi(x,y).
\]
From the definition of the sup-convolution function $P^\lambda f$, one has
\[
H(\nu_{f}| \mu)\leq\iint\bigl(P^\lambda f(y) - f(y) \bigr)
\,d\pi(x,y) + \lambda\iint{\rmc}(x,y) \,d\pi(x,y).
\]
Optimizing over all probability measures $\pi$ and since $\mu$
satisfies (\ref{eqiTcC}), this yields
\[
H(\nu_{f}| \mu)\leq\int\bigl(P^\lambda f(y) - f(y) \bigr)
\,d\mu+ \lambda C H(\nu_{f}| \mu) .
\]
This is exactly the inequality (\ref{LS1}).
Now, if ${\rmc}(x,y)=c(x-y)$, $x,y\in\R^k$, and $f\dvtx\R
^k\rightarrow
\R$ is a $K$-semi-concave differentiable function, then by Lemma \ref
{lem:easy} one has: for all $\eta> 0$,
\[
P^{K+\eta}f- f\leq\eta c^* \biggl(\frac{\nabla f }{\eta} \biggr).
\]
The restricted modified logarithmic Sobolev inequality (\ref{LS2})
then follows.
\end{pf}
\subsection{\texorpdfstring{On Poincar\'e inequalities}{On Poincare inequalities}} \label{poincare}
Let $c\dvtx\R^k\rightarrow\R$ be a differentiable function such that
$c(0)=\nabla c(0)=0$, with Hessian at point $0$ such that $D^2c(0)>0$
(as symmetric matrices). As for the logarithmic Sobolev inequalities,
it is known that a linearized version of the
transport inequality (\ref{eqiTcC}) is the Poincar\'e inequality (see
\cite{maurey,otto-villani,bgl}).
Naturally, (\ref{eqrMLSIcC}) or (\ref{eqICLSIcC}) also
provides a Poincar\'e inequality, by using basic ideas from
\cite{maurey} (see also \cite{bgl}). Namely, starting from (\ref{eqICLSIcC}), we apply it to $\varepsilon f$, where $f\dvtx\R
^k\rightarrow\R$ is a smooth function with compact support. The
infimum $\inf_{y \in\mathbb{R}^k} \{ \varepsilon f(y) +
\lambda c ({x-y} ) \} $ is attained at some
$y_\varepsilon$ such that $\varepsilon\nabla f (y_\varepsilon)=
\lambda\nabla c(x-y_\varepsilon)$. Since for $h\in\R^k$, $\nabla
c^* (\nabla c)(h)=h$, one has
\[
x-y_\varepsilon=\nabla c^*\biggl(\frac{\varepsilon\nabla
f(y_\varepsilon)}{\lambda}\biggr)= \frac{\varepsilon}{\lambda}
D^2c^*(0)\cdot\nabla f(x)+ o(\varepsilon).
\]
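For the reader's convenience, the computation below rests on the second-order expansions
\[
\varepsilon f(y_\varepsilon)=\varepsilon f(x)-\varepsilon\nabla f(x)\cdot(x-y_\varepsilon)+o(\varepsilon^2)
\quad\mbox{and}\quad
\lambda c(x-y_\varepsilon)=\frac{\lambda}{2}(x-y_\varepsilon)^{T}\cdot D^2c(0)\cdot(x-y_\varepsilon)+o(\varepsilon^2),
\]
which hold since $x-y_\varepsilon=O(\varepsilon)$ and $c(0)=\nabla c(0)=0$.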
Therefore, using $D^2c^*(\nabla c(h))\cdot D^2c(h)=I$ at $h=0$ and combining
these expansions, we get the following Taylor expansion:
\begin{eqnarray*}
Q^\lambda( \varepsilon f)(x)&=&\varepsilon f(y_\varepsilon)+ \lambda c
( x-y_\varepsilon)\\
&=&\varepsilon f(x)- \frac{\varepsilon^2}{2\lambda} \nabla f(x)^{T}
\cdot D^2c^*(0)\cdot\nabla f(x) + o( \varepsilon^2 ).
\end{eqnarray*}
It is a classical fact that
\[
\ent_\mu( e^{ \ep f} ) = \frac{\varepsilon^2}{2} \Var
_\mu(f) + o( \varepsilon^2 ).
\]
Finally, as $\varepsilon\rightarrow0$, (\ref{eqICLSIcC})
implies: for every $\lambda\in(0,1/C)$,
\[
\Var_\mu(f)\leq\frac{1}{\lambda(1-\lambda C)} \int\nabla f^T\cdot
D^2c^*(0)\cdot\nabla f \,d\mu.
\]
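The optimization is explicit: $\lambda\mapsto\lambda(1-\lambda C)$ attains its maximum on $(0,1/C)$ at $\lambda=1/(2C)$, so that
\[
\inf_{\lambda\in(0,1/C)}\frac{1}{\lambda(1-\lambda C)}=\frac{1}{\frac{1}{2C}(1-\frac{1}{2})}=4C.
\]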
Optimizing over all $\lambda$ yields the following Poincar\'e
inequality for the metric induced by
$D^2c^*(0)$:
\[
\Var_\mu(f)\leq4C\int\nabla f^T\cdot D^2c^*(0)\cdot\nabla f \,d\mu.
\]
Denoting by $\|\cdot\|$ the usual operator norm, one also has a
Poincar\'e inequality with respect to the usual Euclidean metric:
\[
\Var_\mu(f)\leq4C \|D^2c^*(0)\| \int|\nabla f|^2 \,d\mu.
\]
From the infimum-convolution characterization of transport inequality
(\ref{eqiTcC}) (see Theorem \ref{bg}), a similar proof gives
the same Poincar\'e inequality with the constant $C$ instead of $4C$
(see \cite{maurey}).
Conversely, Bobkov and Ledoux \cite{bobkov-ledoux}, Theorem 3.1,
proved that the Poincar\'e inequality implies a modified logarithmic
Sobolev inequality.
Let $\alpha_{2,1}\dvtx\R\rightarrow\R^+$ and $c_{2,1}\dvtx\R
^k\rightarrow\R^+$ be the cost functions defined by
\[
\alpha_{2,1}(h)=\min\bigl( \tfrac{1}{2} h^2, |h| - \tfrac12
\bigr)\qquad \forall h\in\R,
\]
and $c_{2,1}(x)=\sum_{i=1}^k \alpha_{2,1}(x_i)$, $x\in\R^k$. One has
$\alpha_{2,1}^*(h)=h^2/2$ if $|h|\leq1$ and $\alpha
_{2,1}^*(h)=+\infty$ otherwise. Bobkov--Ledoux's result is the following.
\begin{theorem}[\cite{bobkov-ledoux}]\label{bl} Let $\mu$ be a
probability measure on $\R^k$ satisfying the Poincar\'e inequality:
{\renewcommand{\theequation}{$\mathbf{P}(C)$}
\begin{equation}\label{eqPC}
\Var_\mu(f)\leq C \int|\nabla f|^2 \,d\mu,
\end{equation}}
\noindent
for every smooth function $f$ on $\R^k$. Then the following modified
logarithmic Sobo\-lev inequality holds [in short (\ref{eqBLIC})]:
for all $\kappa<2/\sqrt{C}$ and every smooth function $f$,
{\renewcommand{\theequation}{$\mathbf{BLI}(C)$}
\begin{equation}\label{eqBLIC}
\ent_\mu(e^f) \leq C\kappa^2 K(\kappa,C)\int\alpha
_{2,1}^* \biggl(\frac{\nabla f}{\kappa} \biggr) e^f \,d\mu,
\end{equation}}
\noindent
where $K(\kappa,C)= (\frac{2+\kappa\sqrt C}{2-\kappa\sqrt
C})^2 e^{\kappa\sqrt{5C}}$.
\end{theorem}
Applying (\ref{eqBLIC}) to $\varepsilon f$ and letting $\varepsilon
\rightarrow0$ yields $\mathbf{P}(C K(\kappa,C))$, and in fact
(\ref{eqPC}) itself, since $K(\kappa,C)\rightarrow1$ as $\kappa
\rightarrow0$. Theorem \ref{bl} therefore shows that $\mathbf
{P}(C)$ and (\ref{eqBLIC}) are equivalent.
Thanks to the Hamilton--Jacobi approach, Bobkov, Gentil and Ledoux
\cite{bgl} proved that (\ref{eqBLIC}) implies \hyperlink{eqiTcClink}{($\T_{\tilde c^{\kappa
}_{2,1}}(C)$)} for all $\kappa<2/\sqrt C$ where
\setcounter{equation}{8}
\begin{equation}\label{c-kappa21}
{\tilde c^{\kappa}_{2,1}}(x)=\kappa^2C^2 K(\kappa,C) \alpha
_{2,1}\biggl(\frac{|x|}{\kappa C K(\kappa,C)}\biggr)\qquad \forall
x\in\R^k.
\end{equation}
By linearization and optimization over $\kappa$,
\hyperlink{eqiTcClink}{($\T_{\tilde c^{\kappa}_{2,1}}(C)$)} implies (\ref{eqPC}), and
therefore (\ref{eqBLIC}) is also equivalent to \hyperlink{eqiTcClink}{($\T_{\tilde
c^{\kappa}_{2,1}}(C )$)} for all $\kappa<2/\sqrt C$.
Let\vspace*{1pt} $c^{\kappa}_{2,1}$ denote the cost function defined
in the same way as $\tilde c^{\kappa}_{2,1}$, replacing
$\alpha_{2,1}(|\cdot|)$ by $c_{2,1}$ in (\ref{c-kappa21}). One has
$\tilde c^{\kappa}_{2,1}\leq c^{\kappa}_{2,1}$ [this is\vspace*{1pt} a
consequence of the subadditivity of the concave function
$h\mapsto\alpha_{2,1}(\sqrt h)$]. Therefore,
\hyperlink{eqiTcClink}{($\T_{ c^{\kappa}_{2,1}}(C)$)} implies
\hyperlink{eqiTcClink}{($\T_{\tilde c^{\kappa }_{2,1}}(C)$)}. Consider
now the case of dimension 1, $k=1$, so that $c^{\kappa}_{2,1}=\tilde
c^{\kappa}_{2,1}$.\vspace*{2pt} Theorem \ref{main-result2} indicates
that \hyperlink{eqiTcClink}{($\T_{ c_{2,1}^\kappa}$)} is equivalent, up
to constants, to
\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_{2,1}^\kappa)$)}. Actually
\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_{2,1}^\kappa)$)} can be
interpreted as $\mathbf{BLI}$ restricted to a class of semi-convex
functions for the cost~$c_{2,1}^\kappa$. However, from the discussion
above, \hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_{2,1}^\kappa)$)}
and $\mathbf{BLI}$ are equivalent up to constants.
interesting to directly recover $\mathbf{BLI}$ from
\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_{2,1}^\kappa)$)} or from
\hyperlink{eqiTcClink}{($\T_{ c_{2,1}^\kappa }$)}. The known results
can be summarized by the following diagram for $k=1$:\vspace*{3pt}
\noindent\fbox{
\begin{tabular}{@{}cc@{}}
$
\begin{array}{cccc}
\mathbf{BLI} & \stackrel{B.L.}{\Longleftarrow \!\!\Longrightarrow} & \mathbf{P} \\
\begin{sideways} $\!\! \stackrel{B.G.L}{\Longleftarrow}$ \end{sideways}
& \begin{turn}{30} $\stackrel{M.-O.V.}{ \Longrightarrow}$ \end{turn} &
\begin{sideways} $\;\Longrightarrow$ \end{sideways} \\
\T_{\tilde c^{\kappa}_{2,1}} = \T_{ c^{\kappa}_{2,1}} & \stackrel{\fontsize{8.36pt}{10.36pt}
\selectfont{\mbox{\textit{Theorem}~\ref{main-result2}}}}{\Longleftarrow \!\Longrightarrow}
& \mbox{\hyperlink{eqrMLSIcClink}{($\mathbf{rMLSI}(c_{2,1}^\kappa)$)}}
\end{array}
$
\end{tabular}}
\quad
{\small
\begin{tabular}{@{}rl@{}}
where:\hspace*{7pt}\\
\textit{B.L.}: & Bobkov, Ledoux \cite{bobkov-ledoux};\\
\textit{B.G.L.}: & Bobkov, Gentil, Ledoux \cite{bgl};\\
\textit{M.}: & Maurey \cite{maurey};\\
\textit{O.V.}: & Otto, Villani \cite{otto-villani}.
\end{tabular}}
\section*{Acknowledgments}
We warmly thank an anonymous referee for providing constructive
comments and help in improving the contents of this paper.
\section{Introduction}
\label{sec1}
Under the assumption of molecular chaos, the velocity distribution function of a \textit{dense} fluid of inelastic hard spheres obeys the Enskog kinetic equation, suitably modified to account for the collisional inelasticity through a constant coefficient of normal restitution $\alpha$ \cite{BDS97}. By an extension of the Chapman--Enskog method, the Navier--Stokes transport coefficients for a single fluid have been derived in the first Sonine approximation as nonlinear functions of the packing fraction $\phi$ and the coefficient of restitution \cite{GD99}. The shear viscosity for a dense binary mixture has also been evaluated within the same Sonine approximation \cite{GM03}.
Given that the first Sonine approximation is known to be rather accurate in the case of elastic hard spheres \cite{FK70}, a natural question is whether this approximation is still reliable for finite inelasticity. This point is of interest since in the Sonine approximation the distribution function is expanded around the local \textit{equilibrium} distribution. However, a granular system is never in equilibrium \cite{G03}. In the undriven case, the uniform reference state corresponds to the so-called homogeneous cooling state (HCS), in which the time dependence only occurs through the temperature. The velocity distribution function of the HCS deviates from a Gaussian, as measured by a non-zero value of the fourth cumulant and by an overpopulated high-energy tail \cite{vNE98}. This fact could cast doubt on the accuracy of the Sonine approximation to compute the transport coefficients.
The shear viscosity $\eta$ is perhaps the most widely studied transport coefficient in a granular fluid. For a low-density gas ($\phi\to 0$), Brey \textit{et al.} \cite{BRC96} measured $\eta$ from the direct simulation Monte Carlo (DSMC) method by analyzing the time decay of a weak transverse shear wave, and observed qualitatively good agreement with the Sonine prediction. However, while this method is physically quite elegant, it is perhaps not efficient enough to measure $\eta$ accurately. More recently, the low-density Navier--Stokes transport coefficients of the HCS have been measured from DSMC by using the associated Green--Kubo formulas \cite{DB02}.
In the case of a system \textit{heated} by the action of an external driving or thermostat, the associated shear viscosity (which is different from that of the HCS) has been computed from the Chapman--Enskog method in the first Sonine approximation and from DSMC, for a dilute single gas \cite{GM02} as well as for a dense binary mixture \cite{GM03,MG02}.
On the other hand, to the best of our knowledge, the shear viscosity of the HCS for a \textit{dense} gas described by the Enskog equation has not been measured from DSMC before.
Addressing this issue is the main goal of this paper.
As a second point, we want to address the question of whether a granular fluid reaches a hydrodynamic regime that can be described by the Navier--Stokes equations, even far from the quasi-elastic limit. In fact, a granular fluid in an inhomogeneous \textit{steady} state is inherently non-Newtonian \cite{G03,SGD04}, so that a Newtonian regime requires \textit{unsteady} states sufficiently separated from the initial preparation.
In order to investigate the problem, we have carried out numerical simulations of the Enskog equation by means of an extension of the DSMC method \cite{MS96} for $0.6\leq\alpha\leq 1$ and $0\leq \phi\leq 0.5$ and have considered the so-called simple shear flow state. To allow the granular fluid to approach a Newtonian regime, an external ``friction'' force with a negative friction coefficient is applied to compensate for the inelastic cooling, so that viscous heating produces a monotonic decrease of the Knudsen number (reduced shear rate) with time. In addition, to mimic the conditions appearing in the Chapman--Enskog method to Navier--Stokes order, a stochastic process is introduced, according to which every particle has a certain probability per unit time to have its velocity replaced by a new velocity sampled from the HCS distribution.
Our simulation results confirm that, regardless of its initial preparation, the fluid reaches after a few collisions per particle a hydrodynamic state whose temporal evolution is governed by that of the granular temperature. Moreover, when the shear viscosity is measured as the ratio between the shear stress and the shear rate for long times, it agrees reasonably well with the theoretical expressions derived in the first Sonine approximation \cite{GD99}, the deviations being comparable to or smaller than the ones in the elastic case.
\section{Modified simple shear flow}
The simple shear flow is macroscopically characterized by a constant density $n$, a uniform granular temperature $T$, and a linear velocity field $\mathbf{u}(\mathbf{r})=\mathsf{a}\cdot \mathbf{r}$, where the rate of strain tensor $\mathsf{a}$ is $a_{ij}=a\delta_{ix}\delta_{jy}$, $a$ being the constant shear rate \cite{GS03}.
At the level of the one-body distribution function, the simple shear flow becomes uniform in the Lagrangian frame of reference, i.e., $f(\mathbf{r},\mathbf{v},t)=f(\mathbf{V}(\mathbf{r}),t)$, where $\mathbf{V}(\mathbf{r})=\mathbf{v}-\mathbf{u}(\mathbf{r})$.
Consequently, the Enskog equation in this problem reads
\beq
\partial_t f-a V_y\frac{\partial}{\partial{V_x}} f
+{F}[f]= J[f,f].
\label{1}
\eeq
Here $J[f,f]$ is the Enskog operator, which in the simple shear flow is given by
\beq
J[f,f]= \sigma^2 \chi(n)\int d{\bf V}_1\int d\widehat{
{\bm{\sigma}}}\,
\Theta(\widehat{\bm{\sigma}}\cdot {\bf g})
(\widehat{\bm{\sigma}}\cdot {\bf g})
[\alpha^{-2}f({\bf V}')f({\bf V}_1'+\mathsf{a}\cdot\bm{\sigma})-f({\bf V})f({\bf V}_1-\mathsf{a}\cdot\bm{\sigma})],
\label{2}
\eeq
where $\sigma$ is the diameter of a sphere, $\chi(n)$ is the contact value of the pair distribution function, $\widehat{\bm{\sigma}}$ is a unit vector directed along the centers of the two colliding spheres at contact, $\mathbf{g}=\mathbf{V}-\mathbf{V}_1$ is the relative velocity, $\alpha\leq 1$ is the coefficient of normal restitution,
$\bm{\sigma}\equiv\sigma \widehat{\bm{\sigma}}$, and the primes denote pre-collisional velocities, namely
${\bf V}'={\bf V}-\frac{1}{2}({1+\alpha^{-1}})
(\widehat{\bm{\sigma}}\cdot {\bf g})
\widehat{\bm{\sigma}}$ and ${\bf V}_1'={\bf V}_1+\frac{1}{2}({1+\alpha^{-1}})
(\widehat{\bm{\sigma}}\cdot {\bf g})
\widehat{\bm{\sigma}}$.
Finally, $F[f]$ in Eq.\ (\ref{1}) is an operator representing some type of external action on $f$, absent in the true simple shear flow problem, that will be chosen later on. This extra term is assumed to satisfy the conditions
\beq
\int d\mathbf{V} \{1,\mathbf{V}, V^2\}F[f]=\{0,\mathbf{0},-{3}n\gamma T/m\},
\label{4}
\eeq
so that it does not affect the mass and momentum conservation laws, but in general contributes to the energy balance equation through the ``heating'' rate $\gamma$.
The granular temperature then evolves according to
\beq
\partial_t T=-\frac{2}{3n}a P_{xy}-\zeta T+\gamma T,
\label{5}
\eeq
where $P_{xy}=P_{xy}^k+P_{xy}^c$ is the \textit{total} shear stress and $\zeta$ is the cooling rate,
\beq
P_{ij}^k=\int d\mathbf{V} m V_i V_j f(\mathbf{V}),\quad P_{ij}^c=\frac{1+\alpha}{4}m\sigma^3\chi\int {d}\widehat{\bm{\sigma}}
\widehat{\sigma}_i\widehat{\sigma}_j\int {d}{\bf V}\int {d}{\bf V}_1
\Theta(\widehat{\bm{\sigma}}\cdot{\bf g})(\widehat{\bm{\sigma}}\cdot
{\bf g})^2 f({\bf V}+\mathsf{a}\cdot\bm{\sigma})f({\bf V}_1),
\label{3.6}
\end{equation}
\begin{equation}
\zeta=\frac{m\sigma^2}{12nT}(1-\alpha^2)\chi\int {d}\widehat{\bm{\sigma}}
\int {d}{\bf V}\int {d}{\bf V}_1
\Theta(\widehat{\bm{\sigma}}\cdot{\bf g})(\widehat{\bm{\sigma}}\cdot
{\bf g})^3 f({\bf V}+\mathsf{a}\cdot\bm{\sigma})f({\bf V}_1).
\label{3.7}
\end{equation}
The first term on the right-hand side of Eq.\ \eqref{5} represents viscous heating effects, the second term corresponds to the cooling due to the inelasticity of collisions, while the third term is the contribution due to the external action $F[f]$.
In the true simple shear flow, i.e., with $F[f]=0$, the temperature evolution is governed by the competition between the viscous heating and the collisional cooling effects until a steady state is reached when both effects cancel each other.
However, this steady state is inherently non-Newtonian \cite{SGD04}, so that the Navier--Stokes shear viscosity coefficient cannot be measured, even in the quasi-elastic limit.
The domain of validity of the Newtonian description is restricted to small values of the Knudsen number $\text{Kn}=\lambda/\ell$, where $\lambda=1/(\sqrt{2}\pi n\sigma^2\chi(n))$ is the mean free path and $\ell$ is the characteristic hydrodynamic length. In the present problem, the only hydrodynamic length is $\ell=\sqrt{2T/m}/a$. In the steady shear flow, $\text{Kn}$ is a function of $\alpha$ that cannot be controlled independently. In order to reach asymptotically small values of $\text{Kn}$ for any value of $\alpha$, we need to avoid a steady state and have a monotonically increasing temperature. Consequently, we must modify the original shear flow problem by introducing an external driving mechanism, represented by $F[f]$, which exactly compensates for the collisional cooling term in Eq.\ \eqref{5}. Specifically, the heating rate introduced in Eq.\ \eqref{4} is chosen as $\gamma=\zeta$, so $\partial_t T=-(2/3n)aP_{xy}$.
So far, apart from the requirement $\gamma=\zeta$, the explicit form of $F[f]$ remains open. Here we will determine
$F[f]$ by requiring that the kinetic equation \eqref{1}, to first order in $\text{Kn}$, be the same as the one found by applying the Chapman--Enskog method for states close to the (local) HCS \cite{GD99}. To that end we formally expand $f$ as
\beq
f(\mathbf{V})=f^{(0)}(\mathbf{V})+f^{(1)}(\mathbf{V})+\cdots,
\label{6}
\eeq
where $f^{(k)}(\mathbf{V})$ is of order $k$ in $\text{Kn}$. Moreover, the time dependence of $f$ occurs entirely through the temperature. Note that, by definition, $a\sim \text{Kn}$. On physical grounds, $P_{xy}$ is at least of first order in $\text{Kn}$. Therefore, given that $\gamma=\zeta$, $\partial_t T\sim \text{Kn}^2$ and so $\partial_t f$ can be neglected to first order. Inserting Eq.\ \eqref{6} into Eq.\ \eqref{1}, we get, through first order in $\text{Kn}$,
\beq
F[f^{(0)}]=J^{(0)}[f^{(0)},f^{(0)}],
\label{7}
\eeq
\beq
F[f^{(1)}]-a V_y\frac{\partial}{\partial V_x}f^{(0)} = -L[f^{(1)}]+a \Lambda_y\left[{\partial f^{(0)}}/{\partial V_x}\right],
\label{8}
\eeq
where
\beq
J^{(0)}[X,Y]=
\sigma^2 \chi(n)\int d{\bf V}_1\int d\widehat{
{\bm{\sigma}}}\,
\Theta(\widehat{\bm{\sigma}}\cdot {\bf g})
(\widehat{\bm{\sigma}}\cdot {\bf g})
[\alpha^{-2}X({\bf V}')Y({\bf V}_1')-X({\bf V})Y({\bf V}_1)],
\label{9}
\eeq
\beq
L[X]=-J^{(0)}[f^{(0)},X]-J^{(0)}[X,f^{(0)}],
\label{10}
\eeq
\beq
\Lambda_i[X]=
\sigma^3 \chi(n)\int d{\bf V}_1\int d\widehat{
{\bm{\sigma}}}\,
\Theta(\widehat{\bm{\sigma}}\cdot {\bf g})
(\widehat{\bm{\sigma}}\cdot {\bf g})\widehat{\sigma}_i
[\alpha^{-2}f^{(0)}({\bf V}')X({\bf V}_1')+f^{(0)}({\bf V})X({\bf V}_1)].
\label{11}
\eeq
We require that $f^{(0)}$ be the HCS. This implies that \cite{vNE98}
\beq
F[f^{(0)}]=\frac{1}{2}\zeta^{(0)}\frac{\partial}{\partial\mathbf{V}}\cdot \left[\mathbf{V}f^{(0)}(\mathbf{V})\right],
\label{12}
\eeq
where $\zeta^{(0)}$ is the cooling rate in the HCS, which is obtained from Eq.\ \eqref{3.7} by setting $a=0$ and $f\rightarrow f^{(0)}$.
Next, we consider the linear integral equation \eqref{8} for $f^{(1)}$. This equation coincides with the one derived in Ref.\ \cite{GD99} from the general Chapman--Enskog method specialized to $\nabla n=\nabla T=\nabla\cdot \mathbf{u}=0$, provided that
\beq
F[f^{(1)}]=-\zeta^{(0)}T{\partial_T}f^{(1)}= \frac{1}{2}\zeta^{(0)}\frac{\partial}{\partial\mathbf{V}}\cdot \left[\mathbf{V}f^{(1)}(\mathbf{V})\right]+\frac{1}{2}\zeta^{(0)}f^{(1)}(\mathbf{V}),
\label{13}
\eeq
where in the last step we have taken into account that $\text{Kn}\propto T^{-1/2}$ and, by dimensional analysis, $f^{(1)}(\mathbf{V})=n \text{v}_0^{-3}\Phi(\mathbf{V}/\text{v}_0)\text{Kn}$, where $\text{v}_0=\sqrt{2T/m}$ is the thermal velocity, $\Phi$ being a dimensionless function of the reduced velocity.
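In fact, with this scaling form the last equality in Eq.\ \eqref{13} can be checked directly:
\[
-T\partial_T f^{(1)}=2f^{(1)}+\frac{1}{2}\mathbf{V}\cdot\frac{\partial f^{(1)}}{\partial\mathbf{V}}
=\frac{1}{2}\frac{\partial}{\partial\mathbf{V}}\cdot\bigl[\mathbf{V}f^{(1)}(\mathbf{V})\bigr]+\frac{1}{2}f^{(1)}(\mathbf{V}),
\]
where we have used $\frac{\partial}{\partial\mathbf{V}}\cdot(\mathbf{V}f^{(1)})=3f^{(1)}+\mathbf{V}\cdot\partial f^{(1)}/\partial\mathbf{V}$.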
The simplest choice for $F[f]$ compatible with Eq.\ \eqref{4} (with $\gamma=\zeta$) and with Eqs.\ \eqref{12} and \eqref{13} is
\beq
F[f]=\frac{1}{2}\zeta\frac{\partial}{\partial\mathbf{V}}\cdot \left(\mathbf{V}f\right)+ \frac{1}{2}\zeta \left(f-f^{(0)}\right).
\label{14}
\eeq
In summary, our \textit{modified} simple shear flow problem is described by Eq.\ (\ref{1}) along with the choice \eqref{14}.
The first term on the right-hand side of Eq.\ \eqref{14} represents the effect of a deterministic non-conservative force of the form $\frac{1}{2}\zeta m\mathbf{V}$, which does work to compensate for the collisional energy loss. The shear viscosity of a granular fluid mixture when only this term is present in $F[f]$ has been determined from the Chapman--Enskog method and measured in DSMC simulations \cite{MG02,GM03}. However, this term does not suffice to obtain the Navier--Stokes shear viscosity of the HCS, and it must be supplemented by the second term on the right-hand side of Eq.\ \eqref{14}. The latter term represents the action of a \textit{stochastic} process, whereby each particle has a probability per unit time equal to $\frac{1}{2}\zeta$ of having its velocity replaced by a new one sampled from the (local) HCS distribution $f^{(0)}$. When this stochastic term is moved to the right-hand side of the Enskog equation \eqref{1}, it can be interpreted as a BGK-like relaxation term.
\section{DSMC results}
\begin{figure}[tbp]
\includegraphics[width=1.\columnwidth]{Fig1.eps}
\caption{Left panel: Parametric plot of the reduced normal stresses $P_{ii}^k/nT$ versus $\text{Kn}^2$ for $\alpha=0.8$ and $\phi=0.5$. Right panel: Parametric plot of the fourth cumulant $c$, relative to that of the HCS, versus $\text{Kn}^2$ for $\alpha=0.9$ and $\phi=0.2$, 0.3, and 0.4.
\label{fig1}}
\end{figure}
We have solved the Enskog equation \eqref{1} in the presence of $F[f]$, as given by Eq.\ \eqref{14}, by means of the extension of Bird's DSMC method to the Enskog equation \cite{MS96}. Since the system is uniform in the Lagrangian frame, there is no need to split it into cells. In the implementation of $F[f]$, the cooling rate is self-consistently measured in the simulations. The stochastic term is simulated by randomly choosing, during a small time step $\delta t$, a fraction $\frac{1}{2}\zeta \delta t$ of particles; the original velocity $\mathbf{V}_{\text{old}}$ of each of those particles is replaced by a new velocity $\mathbf{V}_{\text{new}}=\sqrt{T}\mathbf{C}$, where $\mathbf{C}$ is the velocity of a particle in a reservoir kept at the HCS normalized to unit temperature.
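As an illustration, a minimal sketch of how the forcing term \eqref{14} could act on the stored particle velocities during one time step is given below (in Python; the array layout, the Euler time-stepping and the \texttt{sample\_hcs} routine are our own illustrative choices, not a transcription of the actual code used here):
\begin{verbatim}
import numpy as np

def apply_forcing(V, zeta, dt, T, sample_hcs, rng=np.random):
    """One Euler step of the forcing term F[f] of Eq. (14) (sketch).

    V          -- (N, 3) array of peculiar velocities
    zeta       -- cooling rate, measured self-consistently in the run
    dt         -- time step, chosen so that zeta*dt << 1
    T          -- instantaneous granular temperature
    sample_hcs -- draws n velocities from the HCS at unit temperature
    """
    # Deterministic non-conservative force (m/2)*zeta*V: it amplifies
    # the velocities so as to compensate for the collisional cooling.
    V = V * (1.0 + 0.5 * zeta * dt)
    # Stochastic BGK-like term: each particle is redrawn from the
    # (local) HCS with probability (zeta/2)*dt during the step.
    redraw = rng.random(V.shape[0]) < 0.5 * zeta * dt
    V[redraw] = np.sqrt(T) * sample_hcs(redraw.sum())
    return V
\end{verbatim}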
The shear rate and the initial temperature are such that $\text{Kn}=0.1$ at $t=0$ for all densities and coefficients of restitution. In addition, the initial velocity distribution is a Maxwellian. In the course of the simulations, the kinetic and collisional transfer contributions \eqref{3.6} to the pressure tensor are evaluated. {}From the shear stress $P_{xy}(t)$, the shear viscosity is measured as a function of time as $\eta(t)=-P_{xy}(t)/a$. Since the Knudsen number $\text{Kn}(t)\propto 1/\sqrt{T(t)}$ monotonically decreases with increasing time, the Navier--Stokes shear viscosity is identified as $\eta(t)$ for long times.
As said in Section \ref{sec1}, our main objective is to compare the kinetic ($\eta^k$) and total ($\eta$) shear viscosity measured in the simulations with the expressions derived from the Chapman--Enskog method by using the first Sonine approximation $f^{(1)}\to -a(m\eta^k/nT^2)f_M V_xV_y$, $f_M$ being the Maxwellian distribution. The expressions are \cite{GD99}
\beq
\eta^k(\alpha,\phi)=\eta_0\frac{1-\frac{2}{5}\phi\chi(\phi)(1+\alpha)(1-3\alpha)}{\frac{1}{384}\chi(\phi)(1+\alpha)\left[16(13-\alpha)-3(4-3\alpha)c_0(\alpha)\right]},
\label{15}
\eeq
\beq
\eta(\alpha,\phi)=\eta^k(\alpha,\phi)\left[1+\frac{4}{5}\phi\chi(\phi)(1+\alpha)\right]+\eta_0
\frac{384}{25\pi}\phi^2\chi(\phi)(1+\alpha)\left[1-\frac{1}{32}c_0(\alpha)\right],
\label{16}
\eeq
where $\eta_0=(5/16\sigma^2)\sqrt{mT/\pi}$ is the shear viscosity of a dilute gas in the elastic limit, $\phi=\frac{\pi}{6}n\sigma^3$ is the packing fraction, and $c_0$ is the fourth cumulant of the HCS. A good estimate of $c_0$ is \cite{vNE98}
\beq
c_0(\alpha)=\frac{32(1-\alpha)(1-2\alpha^2)}{81-17\alpha+30\alpha^2(1-\alpha)}.
\label{17}
\eeq
In addition, we take the Carnahan--Starling approximation $\chi(\phi)=(1-\phi/2)/(1-\phi)^3$.
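As an aside, the Sonine predictions \eqref{15}--\eqref{17} are straightforward to evaluate numerically. The following self-contained snippet (an illustrative transcription, with both viscosities expressed in units of $\eta_0$) reproduces, for instance, $\eta^k\to\eta_0$ in the dilute elastic limit $\alpha\to1$, $\phi\to0$:
\begin{verbatim}
import numpy as np

def chi(phi):
    # Carnahan-Starling contact value of the pair correlation function.
    return (1.0 - 0.5 * phi) / (1.0 - phi) ** 3

def c0(alpha):
    # Fourth cumulant of the HCS, Eq. (17).
    return (32.0 * (1.0 - alpha) * (1.0 - 2.0 * alpha**2)
            / (81.0 - 17.0 * alpha + 30.0 * alpha**2 * (1.0 - alpha)))

def eta_k(alpha, phi):
    # Kinetic part of the shear viscosity, Eq. (15), in units of eta_0.
    num = 1.0 - 0.4 * phi * chi(phi) * (1.0 + alpha) * (1.0 - 3.0 * alpha)
    den = (chi(phi) * (1.0 + alpha)
           * (16.0 * (13.0 - alpha)
              - 3.0 * (4.0 - 3.0 * alpha) * c0(alpha)) / 384.0)
    return num / den

def eta(alpha, phi):
    # Total shear viscosity, Eq. (16), in units of eta_0.
    return (eta_k(alpha, phi) * (1.0 + 0.8 * phi * chi(phi) * (1.0 + alpha))
            + (384.0 / (25.0 * np.pi)) * phi**2 * chi(phi)
            * (1.0 + alpha) * (1.0 - c0(alpha) / 32.0))

assert abs(eta_k(1.0, 0.0) - 1.0) < 1e-12   # dilute elastic limit
\end{verbatim}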
Figure \ref{fig1} shows the kinetic part of the normal stresses $P_{ii}^k(t)$, relative to $nT(t)$, and the fourth cumulant $c(t)$, relative to its HCS value $c_0$, as functions of the time-dependent Knudsen number $\text{Kn}(t)=\lambda/\ell(T)=a\sigma/\bigl(12\phi\chi(\phi)\sqrt{T(t)/m}\bigr)$, for some representative cases. Note that, as time grows, the Knudsen number $\text{Kn}$ monotonically decreases from its initial value $\text{Kn}=0.1$, behaving as $\text{Kn}(t)\sim t^{-1}$ for asymptotically long times. We observe that in the long-time limit, i.e., for $\text{Kn}\to 0$, the system tends to an isotropic state with a fourth cumulant $c$ equal to that of the HCS. This supports the expectation that the asymptotic state of our modified simple shear flow is the HCS, despite the fact that the temperature is increasing rather than decreasing, in agreement with Eq.\ \eqref{7}. It is worth remarking that, according to Fig.\ \ref{fig1}, after a short transient period the fluid reaches a \textit{hydrodynamic} regime where the normal stresses and the cumulant are linear functions of $\text{Kn}^2$ (Burnett-order effects).
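The asymptotic law $\text{Kn}(t)\sim t^{-1}$ can be made explicit: in the Newtonian regime one has $P_{xy}\simeq-\eta a$ with $\eta\propto\sqrt{T}$ at fixed density, so that the energy balance $\partial_t T=-(2/3n)aP_{xy}$ gives
\[
\partial_t T\propto a^2\sqrt{T}
\quad\Longrightarrow\quad
\sqrt{T(t)}\sim t ,\qquad \text{Kn}(t)\propto T(t)^{-1/2}\sim t^{-1}.
\]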
As an illustration of how the Navier--Stokes shear viscosity is evaluated from DSMC, Fig.\ \ref{fig1bis} shows the time-dependent kinetic shear viscosity $\eta^k(t)=-P_{xy}^k(t)/a$, relative to its (time-dependent) theoretical Sonine value in the elastic limit,
as a function of $\text{Kn}^2(t)$ for the case $\alpha=0.8$, $\phi=0.2$. Figure \ref{fig1bis} clearly shows that the ratio $\eta^k(\alpha,\phi)/\eta^k(1,\phi)$ reaches a plateau for long times (small Knudsen numbers) that can be identified as the Navier--Stokes value. The same procedure has been followed to measure the kinetic and collisional parts of the Navier--Stokes shear viscosity for different values of dissipation and density.
\begin{figure}[tbp]
\includegraphics[width=.50\columnwidth]{Fig1bis.eps}
\caption{Parametric plot of the reduced kinetic shear viscosity $\eta^k(\alpha,\phi)/\eta^k(1,\phi)$ versus $\text{Kn}^2$ for $\alpha=0.8$ and $\phi=0.2$. The dotted line represents the estimated Navier--Stokes value.
\label{fig1bis}}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{Graph3.eps}
\caption{Plots of the kinetic part of the shear viscosity (left panel) and of the total shear viscosity (right panel), relative to their respective theoretical values in the elastic limit in the first Sonine approximation, as functions of the packing fraction for $\alpha=0.9$, 0.8, and 0.6. The symbols are simulation results, while the lines are the theoretical predictions in the first Sonine approximation.
\label{fig2}}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=1.\columnwidth]{Fig3.eps}
\caption{Plots of the kinetic part of the shear viscosity (left panel) and of the total shear viscosity (right panel), relative to their respective theoretical values in the elastic limit in the first Sonine approximation, as functions of the coefficient of restitution for $\phi=0$, 0.2, and 0.4. The symbols are simulation results, while the lines are the theoretical predictions in the first Sonine approximation.
\label{fig3}}
\end{figure}
The density dependence of $\eta^k$ and $\eta$ for three values of the coefficient of restitution is displayed in Fig.\ \ref{fig2}, while Fig.\ \ref{fig3} shows the influence of dissipation for three values of the packing fraction.
We observe an overall good agreement between the DSMC results and the predictions from the first Sonine approximation, even for strong dissipation and high densities. Both theory and simulation show that, at a given value of $\alpha$, $\eta^k(\alpha,\phi)>\eta^k(1,\phi)$ if the packing fraction is smaller than a certain value $\phi_0^k(\alpha)$, while $\eta^k(\alpha,\phi)<\eta^k(1,\phi)$ if $\phi>\phi_0^k(\alpha)$. A similar behavior occurs for the total shear viscosity with a different value $\phi_0(\alpha)$. The influence of $\alpha$ on both $\phi_0^k$ and $\phi_0$ is rather weak; according to Eqs.\ \eqref{15}--\eqref{17}, $(\phi_0^k,\phi_0)=(0.12,0.09)$, $(0.13,0.09)$, and $(0.15,0.10)$ for $\alpha=0.9$, 0.8, and 0.6, respectively.
As a consequence, while in a dilute granular gas ($\phi\lesssim 0.1$) the shear viscosity increases with inelasticity, the opposite happens for sufficiently dense fluids ($\phi\gtrsim 0.1$).
\section{Concluding remarks}
In this paper we have proposed a method to measure the Navier--Stokes shear viscosity of a moderately dense granular fluid described by the Enskog equation. The idea is to consider the simple shear flow (which is uniform in the Lagrangian frame), modified by the presence of a deterministic non-conservative force (which compensates for the collisional cooling) along with a stochastic BGK-like term. Under these conditions the Knudsen number of the problem decreases with increasing time, so that the system reaches a hydrodynamic Navier--Stokes regime for long times. This procedure allows one to evaluate the Navier--Stokes shear viscosity in an efficient way by means of the DSMC method. The simulation results have been compared with predictions from the Chapman--Enskog expansion in the first Sonine approximation \cite{GD99}.
The results show that the Sonine predictions compare quite well with the simulation data for the wide range of dissipation ($\alpha\geq 0.6$) and density ($\phi\leq 0.5$) explored. This agreement is significant, given that, in contrast to the elastic case, the reference state is not a (local) Maxwellian but the (local) HCS, which exhibits non-Maxwellian features, such as a non-zero fourth cumulant and an overpopulated high-velocity tail \cite{vNE98}. It is interesting to remark that the accuracy of the Sonine approximation found here is comparable to the one observed for elastic fluids, so that one could expect to improve the agreement by considering more terms in the Sonine polynomial expansion.
Finally, we want to stress that the system, after a short transient period, achieves a hydrodynamic stage prior to the Navier--Stokes regime. This stage could be used to study nonlinear transport properties (e.g., shear thinning and viscometric effects), although this issue is beyond the scope of this paper.
\begin{theacknowledgments}
Partial support from the Ministerio de
Educaci\'on y Ciencia
(Spain) through Grants Nos.\ ESP2003-02859 (J.M.M.) and FIS2004-01399 (A.S. and V.G.) is gratefully acknowledged.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\makeatletter
\renewcommand\section{\@startsection{section}{1}{0pt}{-3.25ex plus -1ex minus
-.2ex}{1.5ex plus .2ex minus .3ex}{\normalfont\large\bf}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\makeatother
\renewenvironment{abstract}
{\vspace*{-1.8ex}\begin{quotation}\small
}{\end{quotation}}
\newcommand{\parag}[1]{\paragraph{#1.}\!\!\!}
\newcommand{\defn}[1]{{\textit{\textbf{#1}}}}
\newcommand{\myitem}[1]{\item[\textnormal{(#1)}]}
\newcommand{\abcitem}[1]{\item[\bf#1]}
\theoremheaderfont{\scshape}
\theorembodyfont{\normalfont\slshape}
\theoremstyle{plain}
\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\newtheorem{theorem}{Theorem}
\theorembodyfont{\normalfont}
\newtheorem{example}{Example}
\newenvironment{proof}{\begin{trivlist}\item{}\normalfont\textit{Proof.}}{\hfill$\square$\end{trivlist}}
\newenvironment{proofof}[1]{\begin{trivlist}\item{}\normalfont\textit{Proof of #1.}}{\hfill$\square$\end{trivlist}}
\newcommand{\hfill\rule{1.2ex}{1.2ex}}{\hfill\rule{1.2ex}{1.2ex}}
\newcommand{\margincomment}[1]{\marginpar{\raggedright\tiny#1}}
\newcommand{\emph{et al.}}{\emph{et al.}}
\newcommand{^{\text{th}}}{^{\text{th}}}
\newcommand{^{\text{nd}}}{^{\text{nd}}}
\newcommand{\emph{I.e.}}{\emph{I.e.}}
\newcommand{\emph{i.e.}}{\emph{i.e.}}
\newcommand{\emph{e.g.}}{\emph{e.g.}}
\newcommand{\emph{cf.}}{\emph{cf.}}
\newcommand{\emph{etc.}}{\emph{etc.}}
\newdimen\arrayruleHwidth
\setlength{\arrayruleHwidth}{1.3pt}
\makeatletter
\def\Hline{\noalign{\ifnum0=`}\fi\hrule \@height \arrayruleHwidth
\futurelet \@tempa\@xhline}
\makeatother
\def\introrule{{\cal I}}\def\elimrule{{\cal E}}%
\def\andintro{\using{\land}\introrule\justifies}%
\def\impelim{\using{\Rightarrow}\elimrule\justifies}%
\def\allintro{\using{\forall}\introrule\justifies}%
\def\allelim{\using{\forall}\elimrule\justifies}%
\def\falseelim{\using{\bot}\elimrule\justifies}%
\def\existsintro{\using{\exists}\introrule\justifies}%
\def\andelim#1{\using{\land}#1\elimrule\justifies}%
\def\orintro#1{\using{\lor}#1\introrule\justifies}%
\def\impintro#1{\using{\Rightarrow}\introrule_{#1}\justifies}%
\def\orelim#1{\using{\lor}\elimrule_{#1}\justifies}%
\def\existselim#1{\using{\exists}\elimrule_{#1}\justifies}
\newdimen\proofrulebreadth \proofrulebreadth=.05em
\newdimen\proofdotseparation \proofdotseparation=1.25ex
\newdimen\proofrulebaseline \proofrulebaseline=2ex
\newcount\proofdotnumber \proofdotnumber=3
\let\then\relax
\def\hskip0pt plus.0001fil{\hskip0pt plus.0001fil}
\mathchardef\squigto="3A3B
\newif\ifinsideprooftree\insideprooftreefalse
\newif\ifonleftofproofrule\onleftofproofrulefalse
\newif\ifproofdots\proofdotsfalse
\newif\ifdoubleproof\doubleprooffalse
\let\wereinproofbit\relax
\newdimen\shortenproofleft
\newdimen\shortenproofright
\newdimen\proofbelowshift
\newbox\proofabove
\newbox\proofbelow
\newbox\proofrulename
\def\let\next\relax\afterassignment\next\proofbelowshift=\dimen0 \dimen0 {\let\next\relax\afterassignment\next\proofbelowshift=\dimen0 \dimen0 }
\def\shiftproofbelowneg{\def\next{\multiply\dimen0 by-1 }%
\afterassignment\next\proofbelowshift=\dimen0 \dimen0 }
\def\next\proofbelowshift=\dimen0 {\next\proofbelowshift=\dimen0 }
\def\proofrulebreadth{\proofrulebreadth}
\def\prooftree
\ifnum \lastpenalty=1
\then \unpenalty
\else \onleftofproofrulefalse
\fi
\ifonleftofproofrule
\else \ifinsideprooftree
\then \hskip.5em plus1fil
\fi
\fi
\bgroup
\setbox\proofbelow=\hbox{}\setbox\proofrulename=\hbox{}%
\let\justifies\proofover\let\leadsto\proofoverdots\let\Justifies\proofoverdbl
\let\using\proofusing\let\[\prooftree
\ifinsideprooftree\let\]\endprooftree\fi
\proofdotsfalse\doubleprooffalse
\let\thickness\proofrulebreadth
\let\shiftright\let\next\relax\afterassignment\next\proofbelowshift=\dimen0 \dimen0 \let\shift\let\next\relax\afterassignment\next\proofbelowshift=\dimen0 \dimen0
\let\shiftleft\shiftproofbelowneg
\let\ifwasinsideprooftree\ifinsideprooftree
\insideprooftreetrue
\setbox\proofabove=\hbox\bgroup$\displaystyle
\let\wereinproofbit\prooftree
\shortenproofleft=0pt \shortenproofright=0pt \proofbelowshift=0pt
\onleftofproofruletrue\penalty1
}
\def\eproofbit
\ifx \wereinproofbit\prooftree
\then \ifcase \lastpenalty
\then \shortenproofright=0pt
\or \unpenalty\hfil
\or \unpenalty\unskip
\else \shortenproofright=0pt
\fi
\fi
\global\dimen0=\shortenproofleft
\global\dimen1=\shortenproofright
\global\dimen2=\proofrulebreadth
\global\dimen3=\proofbelowshift
\global\dimen4=\proofdotseparation
\global\count255=\proofdotnumber
$\egroup
\shortenproofleft=\dimen0
\shortenproofright=\dimen1
\proofrulebreadth=\dimen2
\proofbelowshift=\dimen3
\proofdotseparation=\dimen4
\proofdotnumber=\count255
}
\def\proofover
\eproofbit
\setbox\proofbelow=\hbox\bgroup
\let\wereinproofbit\proofover
$\displaystyle
}%
\def\proofoverdbl
\eproofbit
\doubleprooftrue
\setbox\proofbelow=\hbox\bgroup
\let\wereinproofbit\proofoverdbl
$\displaystyle
}%
\def\proofoverdots
\eproofbit
\proofdotstrue
\setbox\proofbelow=\hbox\bgroup
\let\wereinproofbit\proofoverdots
$\displaystyle
}%
\def\proofusing
\eproofbit
\setbox\proofrulename=\hbox\bgroup
\let\wereinproofbit\proofusing
\kern0.3em$
}
\def\endprooftree
\eproofbit
\dimen5 =0pt
\dimen0=\wd\proofabove \advance\dimen0-\shortenproofleft
\advance\dimen0-\shortenproofright
\dimen1=.5\dimen0 \advance\dimen1-.5\wd\proofbelow
\dimen4=\dimen1
\advance\dimen1\proofbelowshift \advance\dimen4-\proofbelowshift
\ifdim \dimen1<0pt
\then \advance\shortenproofleft\dimen1
\advance\dimen0-\dimen1
\dimen1=0pt
\ifdim \shortenproofleft<0pt
\then \setbox\proofabove=\hbox{%
\kern-\shortenproofleft\unhbox\proofabove}%
\shortenproofleft=0pt
\fi
\fi
\ifdim \dimen4<0pt
\then \advance\shortenproofright\dimen4
\advance\dimen0-\dimen4
\dimen4=0pt
\fi
\ifdim \shortenproofright<\wd\proofrulename
\then \shortenproofright=\wd\proofrulename
\fi
\dimen2=\shortenproofleft \advance\dimen2 by\dimen1
\dimen3=\shortenproofright\advance\dimen3 by\dimen4
\ifproofdots
\then
\dimen6=\shortenproofleft \advance\dimen6 .5\dimen0
\setbox1=\vbox to\proofdotseparation{\vss\hbox{$\cdot$}\vss}%
\setbox0=\hbox{%
\advance\dimen6-.5\wd1
\kern\dimen6
$\vcenter to\proofdotnumber\proofdotseparation
{\leaders\box1\vfill}$%
\unhbox\proofrulename}%
\else \dimen6=\fontdimen22\the\textfont2
\dimen7=\dimen6
\advance\dimen6by.5\proofrulebreadth
\advance\dimen7by-.5\proofrulebreadth
\setbox0=\hbox{%
\kern\shortenproofleft
\ifdoubleproof
\then \hbox to\dimen0{%
$\mathsurround0pt\mathord=\mkern-6mu%
\cleaders\hbox{$\mkern-2mu=\mkern-2mu$}\hfill
\mkern-6mu\mathord=$}%
\else \vrule height\dimen6 depth-\dimen7 width\dimen0
\fi
\unhbox\proofrulename}%
\ht0=\dimen6 \dp0=-\dimen7
\fi
\let\doll\relax
\ifwasinsideprooftree
\then \let\VBOX\vbox
\else \ifmmode\else$\let\doll=$\fi
\let\VBOX\vcenter
\fi
\VBOX {\baselineskip\proofrulebaseline \lineskip.2ex
\expandafter\lineskiplimit\ifproofdots0ex\else-0.6ex\fi
\hbox spread\dimen5 {\hskip0pt plus.0001fil\unhbox\proofabove\hskip0pt plus.0001fil}%
\hbox{\box0}%
\hbox {\kern\dimen2 \box\proofbelow}}\doll%
\global\dimen2=\dimen2
\global\dimen3=\dimen3
\egroup
\ifonleftofproofrule
\then \shortenproofleft=\dimen2
\fi
\shortenproofright=\dimen3
\onleftofproofrulefalse
\ifinsideprooftree
\then \hskip.5em plus 1fil \penalty2
\fi
}
\newcommand{\place}[2]{\put(#1){\makebox(0,0){#2}}}
\newcommand{\pic}[2]{\begin{picture}(0,0)\place{#1}{#2}\end{picture}}
\newcommand{\vertsof}[1]{|#1|}
\newcommand{3ex}{3ex}
\newcommand{\begin{psmatrix}[colsep=\edgelen]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}}{\begin{psmatrix}[colsep=3ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}}
\newcommand{\edgesof}[1]{\begin{psmatrix}[colsep=\edgelen]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}_{#1}}
\newcommand{\statesof}[1]{\vertsof{#1}}
\newcommand{\Sigma}{\Sigma}
\newcommand{\labelsof}[1]{\Sigma_{#1}}
\newcommand{\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}}{\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}}
\newcommand{\transbetweena}[4]{\ncline[arrows=->]{#1}{#2}\aput(#4){\mbox{\small$#3$}}}
\newcommand{\transbetweenb}[4]{\ncline[arrows=->]{#1}{#2}\bput(#4){\mbox{\small$#3$}}}
\newcommand{\transducebetween}[6]{\ncline[arrows=->]{#1}{#2}\bput(#5){\mbox{\small$#3$}}\aput(#6){\mbox{\small$#4$}}}
\newcommand{\transto}[1]{\begin{psmatrix}[labelsep=1.5pt,colsep=4.5ex]\rnode{l}{\rule{0pt}{1.1em}\,}&\rnode{r}{\,\rule{0pt}{1.1em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\aput(.4){\mbox{\small$#1$}}\end{psmatrix}}
\newcommand{\transof}[1]{\begin{psmatrix}[colsep=3ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\bput(1){\mbox{\small$#1$}}\end{psmatrix}}
\newcommand{\transtoof}[2]{\begin{psmatrix}[colsep=3ex]\rnode{l}{\rule{0pt}{1em}}&\rnode{r}{\rule{0pt}{1em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\aput(.5){\mbox{\small$#1$}}\bput(1){\mbox{\small$#2$}}\end{psmatrix}}
\newcommand{\star}{\star}
\newcommand{\initof}[1]{\star_{#1}}
\newcommand{\defaultlts}{Q}
\newcommand{\sqsubseteq}{\sqsubseteq}
\newcommand{\close}[1]{\hspace{2pt}{#1}^*\hspace{2pt}}
\newcommand{\treeorder}[1]{<_{#1}}
\newcommand{\sequence}[1]{\underline{#1}}
\newcommand{\subst}[2]{[\raisebox{.2ex}{$#1$}/\raisebox{-.2ex}{$#2$}]}
\newcommand{\mathsf{O}}{\mathsf{O}}
\newcommand{\mathsf{P}}{\mathsf{P}}
\newcommand{\opp}{\mathsf{O}}
\newcommand{\pla}{\mathsf{P}}
\newcommand{\lambdagam}[1]{\mathcal{G}(#1)}
\newcommand{\oldmathcal{H}}{\oldmathcal{H}}
\newcommand{\oldmathcal{U}}{\oldmathcal{U}}
\newcommand{\jptrht}[5]{\nccurve[ncurv=#5,angleA=#3,angleB=#4]{#1}{#2}}
\newcommand{\jptr}[4]{\nccurve[angleA=#3,angleB=#4]{#1}{#2}}
\newcommand{\cptrht}[6]{\jptrht{#1}{#2}{#3}{#4}{#5}\psset{labelsep=0pt,nodesep=0pt}\mput*{\mbox{\footnotesize$#6$}}}
\newcommand{\seqs}[1]{{#1}^*}
\newcommand{\le}{\le}
\newcommand{\rightharpoonup}{\rightharpoonup}
\newcommand{\cong}{\cong}
\newcommand{3.5}{3.5}
\newcommand{\jrelof}[1]{\begin{picture}(14,0)\put(2,3.5){\rnode{a}{}}\put(12,3.5){\rnode{b}{}}\end{picture}\jptr{b}{a}{145}{35}{\psset{labelsep=1pt}\aput(.5){\mbox{\scriptsize{$#1$}}}}}
\newcommand{\heredof}[1]{\begin{picture}(16,0)(0,-4)%
\put(2,0){\rnode{a}{}}%
\put(6,0){\rnode{b}{}}%
\put(10,0){\rnode{c}{}}%
\put(14,0){\rnode{d}{}}\end{picture}%
\jptrht{b}{a}{-120}{-60}{1.2}%
\jptrht{c}{b}{120}{60}{1}{\psset{labelsep=2pt}\aput(.5){\mbox{\scriptsize{$#1$}}}}%
\jptrht{d}{c}{-120}{-60}{1.2}%
}
\renewcommand{\j}{J}
\newcommand{J^*}{J^*}
\newcommand{\jstar}{J^*}
\newcommand{\jstar}{J^*}
\newcommand{\jrelof{}}{\jrelof{}}
\newcommand{\heredof{}}{\heredof{}}
\newcommand{\jsymb}{\jrelof{}}
\newcommand{\jsymb}{\jrelof{}}
\newcommand{\begin{picture}(0,0)\put(4,0){$\not$}\end{picture}\mkern-3mu\jsymb\mkern-3mu}{\begin{picture}(0,0)\put(4,0){$\not$}\end{picture}\mkern-3mu\jrelof{}\mkern-3mu}
\newcommand{\mathsf{dom}\,}{\mathsf{dom}\,}
\newcommand{\domain}{\mathsf{dom}\,}
\newcommand{\game}[1]{\mathsf{G}{#1}}
\newcommand{\gam}[1]{\game{#1}}
\newcommand{\vdash}{\vdash}
\newcommand{\enabof}[1]{\vdash_{#1}}
\newcommand{\tokens}[1]{|#1|}
\newcommand{\traces}[1]{#1_{\mathsf{T}}}
\newcommand{\raisebox{2pt}{\rule{1.5pt}{0pt}\rule{3ex}{1pt}\rule{1.5pt}{0pt}}}{\raisebox{2pt}{\rule{1.5pt}{0pt}\rule{3ex}{1pt}\rule{1.5pt}{0pt}}}
\newcommand{\arc}[1]{\overset{#1}{\raisebox{2pt}{\rule{1.5pt}{0pt}\rule{3ex}{1pt}\rule{1.5pt}{0pt}}}}
\newcommand{\prec}{\prec}
\newcommand{\preceq}{\preceq}
\newcommand{\reindexed}[1]{#1^{\mbox{\hspace{-1pt}\Large.\hspace{-.3pt}}}}
\newcommand{\restr}[2]{#1\mkern-6mu\restriction_{\mkern-1mu#2}}
\newcommand{\underseq}[1]{#1}
\newcommand{\mkern-2mu\parallel\mkern-2mu}{\mkern-2mu\parallel\mkern-2mu}
\newcommand{\dial}[1]{\mathcal{D}(#1)}
\newcommand{\mkern-4mu+\mkern-4mu}{\mkern-4mu+\mkern-4mu}
\newcommand{\mkern3mu;}{\mkern3mu;}
\newcommand{\thread}[1]{\mathcal{C}(#1)}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\nthlineof}[2]{{#1}^{(#2)}}
\newcommand{\lineof}[1]{\underline{#1}}
\newcommand{\lines}[1]{{#1}_\mathsf{L}}
\newcommand{\Bool}[1]{\forall #1.\,#1\to #1\to #1}
\newenvironment{oldhypergame}{\begin{center}\small\begin{math}\begin{array}{c}}{\end{array}\end{math}\end{center}}
\newcommand{\movedown}[2]{\begin{picture}(0,26)
\put(0,26){\vector(0,-1){29.3}}
\put(-2,13){\makebox(0,0)[r]{\footnotesize$#1$}}
\put(3,13){\makebox(0,0)[l]{\footnotesize$#2$}}
\end{picture}}
\newcommand{\oldassert}[1]{\fbox{$#1$}}
\newcommand{\oldplayerassert}[2]{\oldassert{#1}\begin{picture}(0,0)\put(2,7){\makebox(0,0)[l]{\footnotesize$#2$}}\end{picture}}
\newcommand{\oldoassert}[1]{\oldplayerassert{#1}{\mathsf{O}}}
\newcommand{\oldpassert}[1]{\oldplayerassert{#1}{\mathsf{P}}}
\newenvironment{hypergame}{\begin{center}\vspace{.7ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt,rowsep=7ex}\begin{psmatrix}}{\end{psmatrix}\end{math}\vspace{1ex}\end{center}}
\newcommand{\hptrht}[5]{\nccurve[ncurv=#5,angleA=#3,angleB=#4]{#1}{#2}}
\newcommand{\hptr}[4]{\nccurve[angleA=#3,angleB=#4]{#1}{#2}}
\newrgbcolor{ocolor}{.85 .35 0}
\newrgbcolor{pcolor}{.35 0 .5}
\newcommand{\oassert}[2]{\rnode{#1}{\psframebox[framearc=1,linecolor=ocolor]{\color{ocolor}\mkern5mu#2\mkern5mu}}}
\newcommand{\passert}[2]{\rnode{#1}{\psframebox[linecolor=pcolor]{\color{pcolor}#2}}}
\newcommand{\ostart}[1]{{\psset{arrows=<-,nodesepA=0pt,nodesepB=4pt,labelsep=2pt,linecolor=ocolor}\hptr{2}{1}{90}{-90}\bput(0.5){\mbox{\small{\color{ocolor}$#1$}}}}}
\newcommand{\move}[4]{{\psset{arrows=<-,nodesepA=0pt,nodesepB=0pt,labelsep=2pt}\hptr{#1}{#2}{90}{-90}\aput(0.5){\mbox{\small{$#3$}}}\bput(0.5){\mbox{\small{$#4$}}}}}
\newcommand{\pmove}[4]{{\psset{linecolor=pcolor}\color{pcolor}\move{#1}{#2}{#3}{#4}}}
\newcommand{\omove}[4]{{\psset{linecolor=ocolor}\color{ocolor}\move{#1}{#2}{#3}{#4}}}
\newcommand{\mathcal{K}}{\mathcal{K}}
\newcommand{\colour}[1]{\mathcal{K}(#1)}
\newcommand{\backtrack}[1]{\widehat{#1}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{B}}{\mathbb{B}}
\newcommand{\mathsf{t}}{\mathsf{t}}
\newcommand{\mathsf{f}}{\mathsf{f}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\backup}[1]{\overleftarrow{#1}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\obacktrack}[1]{\backtrack{#1}^{\mathsf{O}}}
\newcommand{\pbacktrack}[1]{\backtrack{#1}^{\mathsf{P}}}
\newcommand{\bbgame}[1]{\backtrack{#1}^{\mathsf{B}}}
\newcommand{\mathsf{Proc}}{\mathsf{Proc}}
\newcommand{\transduce}[2]{\begin{psmatrix}[labelsep=2pt]\rnode{l}{\rule{0pt}{1em}}&\rnode{r}{\rule{0pt}{1em}}\ncline[arrows=->,nodesep=4pt]{l}{r}\aput(0.3){\mbox{\small$#1$}}\bput(.75){\mbox{\small$#2$}}\end{psmatrix}}
\newcommand{\transactions}{M}
\newcommand{\begin{center}\vspace{3ex}\begin{math}{\begin{center}\vspace{3ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{a}
&
\rnode{2}{b}
&
\rnode{3}{c}
&
\rnode{4}{b}
&
\rnode{5}{e}
&
\rnode{6}{b}
&
\rnode{7}{b}
&
\rnode{8}{c}
&
\rnode{9}{d}
&
\rnode{10}{c}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{150}{35}
\jptr{5}{2}{-140}{-40}
\jptr{6}{3}{150}{30}
\jptr{7}{6}{-140}{-40}
\jptr{8}{3}{150}{30}
\jptr{10}{7}{150}{30}
\end{math}\vspace{3ex}\end{center}}
\newcommand{\prenex}[1]{\widehat{#1}}
\newcommand{\raisebox{-1pt}{\LARGE$\mkern2mu\cdot\mkern2mu$}}{\raisebox{-1pt}{\LARGE$\mkern2mu\cdot\mkern2mu$}}
\newcommand{\raisebox{.8pt}{$\mkern4mu\bullet\mkern4mu$}}{\raisebox{.8pt}{$\mkern4mu\bullet\mkern4mu$}}
\newcommand{\Rightarrow}{\Rightarrow}
\newcommand{\textsf{Chess}}{\textsf{Chess}}
\newcommand{\textsf{\textbf X}}{\textsf{\textbf X}}
\newcommand{\textsf{\textbf O}}{\textsf{\textbf O}}
\newcommand{\textsf{\oxO's \& \oxX's}}{\textsf{\textsf{\textbf O}'s \& \textsf{\textbf X}'s}}
\newcommand{\mkern-3mu\multimap\mkern-3mu}{\mkern-3mu\multimap\mkern-3mu}
\newcommand{\simul}[1]{#1\mkern-2mu\to\mkern-2mu#1}
\newcommand{\oxgrid}[9]
\thicklines\setlength{\unitlength}{2.8pt}%
\begin{picture}(15,15)(-7.5,-7.5)%
\put(-2.5,7.5){\line(0,-1){15}}\put(2.5,7.5){\line(0,-1){15}}\put(-7.5,2.5){\line(1,0){15}}\put(-7.5,-2.5){\line(1,0){15}}%
\put(-5,5){\makebox(0,0){\textsf{\textbf #1}}}%
\put(0,5){\makebox(0,0){\textsf{\textbf #2}}}%
\put(5,5){\makebox(0,0){\textsf{\textbf #3}}}%
\put(-5,0){\makebox(0,0){\textsf{\textbf #4}}}%
\put(0,0){\makebox(0,0){\textsf{\textbf #5}}}%
\put(5,0){\makebox(0,0){\textsf{\textbf #6}}}%
\put(-5,-5){\makebox(0,0){\textsf{\textbf #7}}}%
\put(0,-5){\makebox(0,0){\textsf{\textbf #8}}}%
\put(5,-5){\makebox(0,0){\textsf{\textbf #9}}}%
\end{picture}}
\newcommand{\oxgrid{}{}{}{}{}{}{}{}{}}{\oxgrid{}{}{}{}{}{}{}{}{}}
\newcommand{\oxgrid{}{}{}{}{X}{}{}{}{}}{\oxgrid{}{}{}{}{X}{}{}{}{}}
\newcommand{\chessposn}[1]{\chessposnscale{0.26}{#1}}
\newcommand{\chessinvposn}[1]{\chessinvposnscale{0.26}{#1}}
\newcommand{\chessposnscale}[2]{\psscalebox{#1}{%
\newgame\notationoff\hidemoves{#2}
\showboard
}}
\newcommand{\chessinvposnscale}[2]{%
\psscalebox{#1}{\newgame\notationoff\hidemoves{#2}
\showinverseboard
}}
\newcommand{\chesssimulscale}[4]{\begin{psmatrix}[colsep=#4]\\[1ex]\rput(0,.1)
{\rnode{a}{\chessposnscale{#3}{#1}}}&\small$\to$&\rput(0,.1){\rnode{b}{\chessinvposnscale{#3}{#2}}}\\[0ex]\end{psmatrix}}
\newcommand{\invchesssimulscale}[4]{\begin{psmatrix}[colsep=#4]\\[1ex]\rput(0,.1)
{\rnode{c}{\chessinvposnscale{#3}{#1}}}&\small$\to$&\rput(0,.1){\rnode{d}{\chessposnscale{#3}{#2}}}\\[0ex]\end{psmatrix}}
\newcommand{\chesssimul}[2]{\chesssimulscale{#1}{#2}{0.2}{3.8ex}}
\newcommand{\oxsimulscale}[4]{\begin{psmatrix}[colsep=#4]\\[1ex]\rput(0,.1)
{\rnode{x}{\psscalebox{#3}{#1}}}&\small$\to$&\rput(0,.1){\rnode{y}{\psscalebox{#3}{#2}}}\\[0ex]\end{psmatrix}}
\newcommand{\oxsimul}[2]{\oxsimulscale{#1}{#2}{.8}{3.8ex}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}}{\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}}
\newcommand{\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}}{\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}}
\newcommand{\mathsf{A}}{\mathsf{A}}
\newcommand{\Theta_F}{\Theta_F}
\newcommand{\Delta_{\wmove{N}}}{\Delta_{\wmove{N}}}
\newcommand{M_{\wmove{N}}}{M_{\wmove{N}}}
\newcommand{\pmb{\lambda}}{\pmb{\lambda}}
\newcommand{\pmb{\lambda}^{\star}}{\pmb{\lambda}^{\star}}
\newcommand{\star}{\star}
\newcommand{\forall G\,.\,G\to G}{\forall G\,.\,G\to G}
\title{\vspace*{-5.5ex}\Large
\textbf{Hypergames and full completeness for system \textit{F}\\ (\sc rough draft)}}
\author{\\[-3ex]
\large Dominic J.\ D.\ Hughes
\thanks{Visiting scholar, Computer Science Department, Stanford University, CA 94305.}
\\[1ex]
\normalsize Stanford University\\
\normalsize August 25, 2006}\date{
}
\begin{document}\thispagestyle{empty}
\maketitle
\begin{abstract}
This paper reviews the fully complete \emph{hypergames} model of
system $F$, presented a decade ago in the author's thesis.
Instantiating type variables is modelled by allowing ``games as
moves''. The uniformity of a quantified type variable $\forall X$ is
modelled by
\emph{copycat expansion}: $X$ represents an unknown game, a kind of black box,
so all the player can do is copy moves between a positive occurrence
and a negative occurrence of $X$.
This presentation is based on slides for a talk entitled ``Hypergame
semantics: ten years later'' given at \emph{Games for Logic and
Programming Languages}, Seattle, August 2006.
\end{abstract}
\section{Introduction}
Zwicker's \textsl{\textsf{Hypergame}} \cite{Zwi87} is an alternating
two-player game: one player chooses any
alternating game $G$ which terminates\footnote{Every legal sequence of
moves is finite.} (\emph{e.g.}\ ``\textsf{\textbf{O}'s \& \textbf{X}'s}'' or
\textsf{Chess}\footnote{To ensure termination, assume a draw is forced upon a
threefold repetition of a position (a variant of a standard rule).}),
then play proceeds in $G$.\footnote{The question \mbox{``\textsl{Does
\textsf{Hypergame} terminate?}''}, the \emph{Hypergame paradox}, amounts to a hereditary form of
Russell's paradox, known as Mirimanoff's paradox
\cite{Mir17}: ``\textsl{Is the set of well-founded sets well-founded?}''. (Each `paradox' is illusory, being merely
due to the lack of formal definition of ``game'' or ``set''.)}
\begin{center}
\pspicture*(-14,-8.3)(15,.3)
\psset{nodesep=5pt,xunit=.25cm,yunit=.57cm}
\rput(0,0){\rnode{root}{\textsl{\textsf{Hypergame}}}}
\rput(14,-3){\rnode{chess}{\chessposn{}}}
\rput(-14,-3){\rnode{empty}{\oxgrid{}{}{}{}{}{}{}{}{}}}
\ncline[arrows=->,nodesepA=3pt,linecolor=ocolor]{root}{chess}
\aput(0.5){\ocolor\textsf{Chess}}
\ncline[arrows=->,nodesepA=3pt,linecolor=ocolor]{root}{empty}
\bput(0.5){\ocolor\textsf{\textbf{O}'s \& \textbf{X}'s}}
\rput(9,-8){\rnode{nf3}{\chessposn{1.Nf3}}}
\ncline[arrows=->,linecolor=pcolor]{chess}{nf3}
\bput(0.5){\pcolor\wmove{Nf3}}
\rput(19,-8){\rnode{e4}{\chessposn{1.e4}}}
\ncline[arrows=->,linecolor=pcolor]{chess}{e4}
\aput(0.5){\pcolor e4}
\rput(14,-13){\rnode{sicilian}{\chessposn{1.e4 c5}}}
\ncline[arrows=->,linecolor=ocolor]{e4}{sicilian}
\bput(0.5){\ocolor c5}
\rput(24,-13){\rnode{alekhine}{\chessposn{1.e4 Nf6}}}
\ncline[arrows=->,linecolor=ocolor]{e4}{alekhine}
\aput(0.5){\ocolor\wmove{Nf6}}
\rput(-9,-8){\rnode{centre}{\oxgrid{}{}{}{}{X}{}{}{}{}}}
\ncline[arrows=->,linecolor=pcolor]{empty}{centre}
\aput(0.5){\pcolor centre}
\rput(-19,-8){\rnode{left}{\oxgrid{}{}{}{X}{}{}{}{}{}}}
\ncline[arrows=->,linecolor=pcolor]{empty}{left}
\bput(0.5){\pcolor left}
\rput(-17,-13){\rnode{x}{\oxgrid{O}{}{}{}{X}{}{}{}{}}}
\ncline[arrows=->,linecolor=ocolor]{centre}{x}
\bput(0.5){\ocolor top-left}
\rput(-9,-13){\rnode{y}{\oxgrid{}{O}{}{}{X}{}{}{}{}}}
\ncline[arrows=->,linecolor=ocolor]{centre}{y}
\aput(0.5){\ocolor top}
\rput(-1,-13){\rnode{z}{\oxgrid{}{}{}{}{X}{O}{}{}{}}}
\ncline[arrows=->,linecolor=ocolor]{centre}{z}
\aput(0.5){\ocolor right}
\endpspicture
\end{center}
At the \textit{Imperial College Games Workshop} in 1996, the author
illustrated how hypergames --- games in which games can be played as
moves --- can model languages with universal quantification.
Originally implemented in \cite{Hug97} for Girard's system $F$
\cite{Gir71,GLT89}, the idea is quite general, and has been successfully
applied to affine linear logic \cite{MO01,Mur01} and Curry-style type
isomorphisms \cite{deL06}.
\subsection{Universally quantified games}\label{univ-quant-intro}
Recall the little girl Anna-Louise who wins one point out of two in a
``simultaneous display'' against chess world champions Spassky and
Fischer \cite[Theorem~51]{Con76}.
She faces Spassky as Black and Fischer as White, and copies moves back
and forth, indirectly playing one champion against the other. When
Spassky opens with the Queen's pawn \wmove{d4}, she opens \wmove{d4}
against Fischer; when Fischer responds with the King's knight
\wmove{Nf6}, she responds \wmove{Nf6} against Spassky, and so on.
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Fischer && Spassky \\ \\
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.4}{%
\newgame
\notationoff
\hidemoves{1.d4 Nf6}
\showboard
}}}
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.4}{%
\newgame
\notationoff
\hidemoves{1.d4 Nf6}
\showinverseboard
}}}
\end{psmatrix}
\nccurve[ncurv=.82,arrows=->,nodesep=.5ex,angleA=-90,angleB=-90,offsetA=-2ex,offsetB=-2ex]{SpasskyBoard}{FischerBoard}
\bput(0.5){\wmove{d4}}
\nccurve[ncurv=.83,arrows=->,nodesep=.5ex,angleA=-90,angleB=-90,offsetA=-1ex,offsetB=-1ex]{FischerBoard}{SpasskyBoard}
\bput(0.5){\wmove{Nf6}}
\vspace{17ex}
Anna-Louise
\vspace{3ex}\end{center}
We shall write $\simul{G}$ for such a simultaneous display with a game
$G$ (so Anna-Louise played the game $\simul{\textsf{Chess}}$ above, as second
player, against the Fischer-Spassky
team).\footnote{\label{backtracking}Conway writes $-G+G$, or $G-G$
\cite[Chapter\,7]{Con76}. Later on, we shall add a form of
backtracking to our games so that Anna-Louise may restart the
game with Fischer as many times as she likes, corresponding to the
intuitionism of the arrow $\to$ of system $F$, in which a function may
read its argument any number of times \cite{Lor60,Fel85,Coq91,HO}.
To maintain the focus on universal quantification, here in the
introduction we shall ignore the availability of backtracking.}
Observing that her copycat strategy is not specific to chess,
Anna-Louise declares that she will tackle the Fischer-Spassky team in
a more grandiose spectacle: she will give them an additional first
move, to decide the game for simultaneous display. For example, the
Fischer-Spassky team might choose \textsf{Chess}, thereby opting for
the simultaneous display $\simul{\textsf{Chess}}$,
and play continues as above.
Or they might choose \textsf{\oxO's \& \oxX's}{}, opting for the simultaneous
display $\simul{\textsf{\oxO's \& \oxX's}}$, and open with \textsf{\textbf X}{} in the centre of Spassky's
grid; Anna-Louise copies that $\textsf{\textbf X}$ across as her opening move on
Fischer's grid; Fischer responds with \textsf{\textbf O}{} in (his) top-left; Anna-Louise copies
this $\textsf{\textbf O}$ back to Spassky; and so on:
\begin{center}\vspace{2ex}\label{chess-chess}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[4ex]
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{1}{\oxgrid{}{}{}{}{X}{}{}{}{O}}}}
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{1}{\oxgrid{O}{}{}{}{X}{}{}{}{}}}}
\\[14ex]
&\makebox[0ex]{Anna-Louise}
\\[3ex]
\end{psmatrix}
\nccurve[arrows=->,nodesep=1ex,angleA=-90,angleB=-90,offsetA=-2ex,offsetB=-2ex]{SpasskyBoard}{FischerBoard}
\bput(0.5){\small\textsf{\textbf X} centre}
\nccurve[ncurv=.77,arrows=->,nodesep=1ex,angleA=-90,angleB=-90,offsetA=-1ex,offsetB=-1ex]{FischerBoard}{SpasskyBoard}
\bput(0.5){\small\textsf{\textbf O} top-left}
\end{center}
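The copycat strategy is entirely game-independent: both boards always carry one and the same move sequence. As a toy illustration (in Python; the two player callbacks are hypothetical stand-ins for real opponents, not the formal strategies of \cite{Hug97}), a simultaneous display can be sketched as follows:
\begin{verbatim}
def copycat_display(spassky_white, fischer_black, max_plies=1000):
    # Anna-Louise's copycat strategy in a simultaneous display G -> G.
    # spassky_white: callable(history) -> move or None, playing White;
    # fischer_black: callable(history) -> move or None, playing Black.
    # Both boards see the same move sequence, so Anna-Louise never
    # has to invent a move of her own.
    history = []
    for _ in range(max_plies):
        w = spassky_white(history)    # Spassky moves on his board ...
        if w is None:                 # ... unless his game has ended
            break
        history.append(w)             # she replays w against Fischer
        b = fischer_black(history)    # Fischer replies on his board
        if b is None:
            break
        history.append(b)             # ... and she replays b to Spassky
    return history
\end{verbatim}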
The key novelty of \cite{Hug97} was
to define this as a formal game, a \emph{hypergame} or
\emph{universally quantified game},
which we shall write as
$$\forall G\,.\,G\to G$$
The tree of $\forall G\,.\,G\to G$ is illustrated below. Similar
in spirit to Zwicker's hypergame\footnote{The author was unaware of
Zwicker's work while preparing \cite{Hug97}, hence the lack of
reference to Zwicker in that paper, and in the author's thesis
\cite{Hug00}.}, it differs in that the first player not
only chooses $G$ but also plays an opening move $m$ in $G$. We call
such a compound move (importing a game, and playing a move in a game)
a \emph{hypermove}.
\begin{center}\label{Htree}
\pspicture*(-15,-8.3)(15,.3)
\psset{nodesep=5pt,xunit=.25cm,yunit=.57cm
\rput(0,0){\rnode{root}{$\forall G\,.\,G\to G$}}
\rput(-14,-3){\rnode{chess}{\chesssimul{}{1.d4}}}
\rput(14,-3){\rnode{empty}{\oxsimul{\oxgrid{}{}{}{}{}{}{}{}{}}{\oxgrid{}{}{}{}{}{X}{}{}{}}}}
\ncline[arrows=->,nodesepA=3pt,linecolor=ocolor]{root}{chess}
\bput(0.8){\ocolor$\textsf{Chess},\wmove{d4}$}
\ncline[arrows=->,nodesepA=3pt,linecolor=ocolor]{root}{empty}
\aput(0.8){\ocolor$\textsf{\textbf{O}'s \& \textbf{X}'s},\text{left}$}
\rput(-21,-8){\rnode{d4}{\chesssimul{1.d4}{1.d4}}}
\ncline[nodesepA=-.5ex,nodesepB=-2.5ex,arrows=->,linecolor=pcolor]{chess}{d4}
\bput(0.6){\pcolor$\wmove{d4}$}
\rput(-7,-8){\rnode{nf6}{\chesssimul{}{1.d4 Nf6}}}
\ncline[nodesepA=-.8ex,nodesepB=-2.5ex,arrows=->,linecolor=pcolor]{chess}{nf6}
\aput(0.6){\pcolor$\wmove{Nf6}$}
\rput(-24,-13){\rnode{f5}{\chesssimul{1.d4 f5}{1.d4}}}
\ncline[nodesepA=-1ex,nodesepB=-3.7ex,arrows=->,linecolor=ocolor]{d4}{f5}
\bput(0.5){\ocolor$\wmove{f5}$}
\rput(-10,-13){\rnode{d5}{\chesssimul{1.d4 d5}{1.d4}}}
\ncline[nodesepA=1ex,nodesepB=-1ex,arrows=->,linecolor=ocolor]{d4}{d5}
\aput(0.6){\ocolor$\wmove{d5}$}
\rput(10,-8){\rnode{copy}{\oxsimul{\oxgrid{}{}{}{X}{}{}{}{}{}}{\oxgrid{}{}{}{}{}{X}{}{}{}}}}
\ncline[nodesepA=-.5ex,nodesepB=-3.5ex,arrows=->,linecolor=pcolor]{empty}{copy}
\bput(0.52){\pcolor left}
\rput(24,-8){\rnode{right}{\oxsimul{\oxgrid{}{}{}{}{}{}{}{}{}}{\oxgrid{}{O}{}{}{X}{}{}{}{}}}}
\ncline[nodesepA=1ex,nodesepB=-2ex,arrows=->,linecolor=pcolor]{empty}{right}
\aput(0.6){\pcolor top}
\rput(6,-13){\rnode{p}{\oxsimul{\oxgrid{}{}{}{}{X}{}{}{}{O}}{\oxgrid{}{}{}{}{X}{}{}{}{}}}}
\ncline[nodesepA=-1ex,nodesepB=-3.5ex,arrows=->,linecolor=ocolor]{copy}{p}
\bput(0.6){\ocolor top-left}
\rput(20,-13){\rnode{q}{\oxsimul{\oxgrid{}{}{}{O}{X}{}{}{}{}}{\oxgrid{}{}{}{}{X}{}{}{}{}}}}
\ncline[nodesepA=1ex,nodesepB=-1.5ex,arrows=->,linecolor=ocolor]{copy}{q}
\aput(0.7){\ocolor right}
\endpspicture
\end{center}
\subsection{Self-reference (without paradox)}
In the tree above, we have shown two cases for instantiating $G$ in
the hypergame $H\,=\,\forall G\,.\,G\to G$, either to \textsf{Chess}{} or to
\textsf{\oxO's \& \oxX's}. But it is also possible to instantiate $G$ to a hypergame, or indeed,
to $H$ itself. We consider this case below.
The initial state is:
\begin{center}\vspace{1ex}
\begin{psmatrix}[colsep=-2ex,rowsep=0ex]
Fischer && Spassky \\[2ex]
&$\forall G\,.\,G\to G$
\\[2ex]
&\makebox[0ex]{Anna-Louise}
\\[.2ex]
\;
\end{psmatrix}
\end{center}
Fischer and Spassky begin by importing a game for $G$, in this case,
$H\,=\,\forall G\,.\,G\to G$ itself, yielding a simultaneous display of
$H$:
\begin{center}\vspace{1ex}
\begin{psmatrix}[colsep=-2ex,rowsep=0ex]
Fischer && Spassky \\[2ex]
&$H$\;\;{\Large$\to$}\;\;$H$
\\[2ex]
&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\end{center}
In other words, we have:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[3ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\;\;\;\;\;$\to$\;\;\;\;\;&
\rput(0,.1){\rnode{SpasskyBoard}{$\forall G_2\,.\,G_2\to G_2$}}
\\[3ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
The local bound variable $G$ is renamed in each component to clarify
the evolution of the game below.\footnote{The scope of $\forall G_1$
in the diagram does not extend past the central arrow
{\normalsize$\to$}. In other words, formally the game played by
Anna-Louise is $(\forall G_1.G_1\to G_1)\,\to\,(\forall
G_2.G_2\to G_2).$}
As in the simultaneous display $\textsf{Chess}\to\textsf{Chess}$, where Spassky opened
with a move on his chessboard, here in $H\to H$ Spassky must complete
the opening hypermove by playing a move on his copy of $H$. Since
$H\,=\,\forall G_2\,.\,G_2\to G_2$ is a hypergame, opening $H$ requires
importing \emph{another} game, instantiating $G_2$. Suppose he
chooses \textsf{Chess}{} for $G_2$:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
Now Spassky has his own local simultaneous display $\textsf{Chess}\to\textsf{Chess}$.
To complete his opening (hyper)move on the overall game, he must open
this chess display. Suppose he plays
\wmove{Nf3} (necessarily on the right board, where it is his turn since he has White):
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
Now it is Anna-Louise's turn.
She has three options: (1) respond to Spassky as Black on
the rightmost chess board, (2) respond to Spassky as White on the
other chess board, or (3) play an opening move against Fischer
in $\forall G_1\,.\,G_1\to G_1$.
We consider the last case, since it is the most interesting. Suppose
Anna-Louise chooses to import \textsf{\oxO's \& \oxX's}{} for $G_1$:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\oxsimulscale{\oxgrid{}{}{}{}{}{}{}{}{}}{\oxgrid{}{}{}{}{}{}{}{}{}}{1.2}{6.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[3ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;
\end{center}
Now Fischer has his own local simultaneous display $\textsf{\oxO's \& \oxX's}\to\textsf{\oxO's \& \oxX's}$.
For Anna-Louise to complete her hypermove, she must play a move on
$\textsf{\oxO's \& \oxX's}\to\textsf{\oxO's \& \oxX's}$ (necessarily in the right of the two grids, the one in
which it is her turn).
Suppose she plays her $\textsf{\textbf X}$ top-right:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\oxsimulscale{\oxgrid{}{}{}{}{}{}{}{}{}}{\oxgrid{}{}{X}{}{}{}{}{}{}}{1.2}{6.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[3ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;
\end{center}
Fischer responds either with an $\textsf{\textbf O}$ in the same grid, or with an
$\textsf{\textbf X}$ in the empty grid, and play continues in the two local
simultaneous displays, $\textsf{\oxO's \& \oxX's}\to\textsf{\oxO's \& \oxX's}$ against Fischer and
$\textsf{Chess}\to\textsf{Chess}$ against Spassky.
But to remain consistent with her copycat strategy, Anna-Louise must
mimic Spassky. Instead of importing \textsf{\oxO's \& \oxX's}{} for $G_1$ against Fischer,
she must import \textsf{Chess}{} and open with the White move
\wmove{Nf3}, exactly as Spassky did:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\invchesssimulscale{}{1.Nf3}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[15ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[2ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;%
\nccurve[ncurv=.35,arrows=<-,nodesep=.5ex,angleA=-60,angleB=-120]{d}{b}\aput(0.5){$\textsf{Chess}$\:,\:\wmove{Nf3}}
\end{center}
Fischer might now open his other board with \wmove{e4}, which
Anna-Louise would copy back to the corresponding board against
Spassky:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\invchesssimulscale{1.e4}{1.Nf3}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{1.e4}{1.Nf3}{.45}{8.5ex}
\\[14ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[1ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;%
\nccurve[ncurv=.35,arrows=->,nodesep=.5ex,angleA=-60,angleB=-120]{c}{a}\aput(0.5){\wmove{e4}}%
\end{center}
Or perhaps Fischer responds with Black in the rightmost of his pair
of boards, with \wmove{d5}, which Anna-Louise copies to Spassky:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\invchesssimulscale{}{1.Nf3 d5}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3 d5}{.45}{8.5ex}
\\[14ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[1ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;%
\nccurve[ncurv=.35,arrows=->,nodesep=.5ex,angleA=-60,angleB=-120]{d}{b}\aput(0.5){\wmove{d5}}%
\end{center}
Either way, she continues to copy moves between the four boards
according to the following geometry of copycat links:
\begin{center}\vspace{1ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{3.5ex} && Spassky \\[6ex]
\invchesssimulscale{}{1.Nf3}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[14ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;
\\[1ex]
\;
\end{psmatrix}\;\;\;\;\;\;\;\;%
\nccurve[ncurv=.35,arrows=<->,nodesep=.5ex,angleA=-60,angleB=-120]{c}{a}%
\nccurve[ncurv=.35,arrows=<->,nodesep=.5ex,angleA=-60,angleB=-120]{d}{b}%
\end{center}
This copycat strategy corresponds to the polymorphic identity system
$F$ term $$\Lambda G.\lambda g^G.g$$ of type $\forall G\,.\,G\to G\,$.
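As an aside for readers who know Haskell, the term has a direct
analogue there, with Haskell's \texttt{forall} playing the role of
$\forall$. The following sketch is ours and is not part of the formal
development:
\begin{verbatim}
{-# LANGUAGE RankNTypes #-}

-- The polymorphic identity: a rendering of the System F term
-- /\G. \g:G. g  of type  forall G. G -> G.
polyId :: forall g. g -> g
polyId g = g

main :: IO ()
main = print (polyId (42 :: Int), polyId "d4")   -- (42,"d4")
\end{verbatim}
The single definition works at every type, just as the copycat
strategy works for every game the opposing team imports.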
\subsection{Uniformity}
Consider again the original Fischer-Spassky simultaneous display, with
chess. Add Kasparov to the team, playing Black.
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\,
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showinverseboard
}}}%
\end{psmatrix}
\vspace{9ex}
Anna-Louise
\vspace{3ex}\end{center}
Anna-Louise has two distinct ways to guarantee picking up a point.
Either she copies moves between Spassky and Fischer, as before, while
ignoring Kasparov (never playing a move against him),
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\,
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.d4 Nf6}
\showboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.d4 Nf6}
\showinverseboard
}}}
\\[16ex]
&&\makebox[0ex][c]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.82,arrows=->,nodesep=.5ex,angleA=-90,angleB=-90,offsetA=-2ex,offsetB=-2ex]{SpasskyBoard}{FischerBoard}
\bput(0.5){\wmove{d4}}
\nccurve[ncurv=.83,arrows=->,nodesep=.5ex,angleA=-90,angleB=-90,offsetA=-1ex,offsetB=-1ex]{FischerBoard}{SpasskyBoard}
\bput(0.5){\wmove{Nf6}}
\end{center}
or she copies moves between Spassky and Kasparov, ignoring Fischer:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\,
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[16ex]
&&\makebox[0ex][c]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.4,arrows=->,nodesep=1.6ex,angleA=-110,angleB=-70,offsetA=-1ex,offsetB=-1ex]{SpasskyBoard}{KasparovBoard}
\bput(0.5){\wmove{e4}}
\nccurve[ncurv=.5,arrows=->,nodesep=.5ex,angleA=-70,angleB=-110,offsetA=-2ex,offsetB=-2ex]{KasparovBoard}{SpasskyBoard}
\bput(0.5){\wmove{c5}}
\end{center}
We shall write this triple simultaneous display as
$\textsf{Chess}\,\to\,\textsf{Chess}\,\to\,\textsf{Chess}$, and more generally, for any game
$G$, as $G\to G\to G$.\footnote{Again with the backtracking caveat: see footnote~\ref{backtracking}.}
Now consider the universally quantified form of this game, the hypergame
$$\forall G\,.\,G\to G\to G.$$
As with $\forall G\,.\,G\to G$ discussed above, the
Kasparov-Fischer-Spassky team, KFS, now has the right to choose the
game of the triple simultaneous display, as part of their opening
(hyper)move.
We shall say that Anna-Louise's strategy is \defn{uniform} in this
setting if
\begin{itemize}
\item irrespective of the game chosen by KFS, she always ignores the same player, Kasparov or Fischer.
\end{itemize}
Otherwise her strategy is \defn{ad hoc}. For example, her strategy
would be ad hoc if, when KFS chooses \textsf{Chess}{}, she ignores Kasparov
and copies chess moves between Fischer and Spassky, but when KFS
chooses \textsf{\oxO's \& \oxX's}{}, she ignores Fischer and copies \textsf{\textbf X}{} and \textsf{\textbf O}{} moves
between Kasparov and Spassky.
In this case the geometry of her move copying depends on the game
imported by KFS: she is not treating $G$ as a ``black box''.
There are only two uniform strategies for Anna-Louise: either she
always copies between \emph{Kasparov} and Spassky, ignoring Fischer, or she always copies
between \emph{Fischer} and Spassky, ignoring Kasparov.
These correspond to the system $F$ terms
\begin{displaymath}
\begin{array}{c}
\Lambda G\,.\:\lambda k^G\,.\,\lambda f^G\,.\,k\\[1ex]
\Lambda G\,.\:\lambda k^G\,.\,\lambda f^G\,.\,f
\end{array}
\end{displaymath}
respectively, of type
$$\forall G\,.\,G\to G\to G\,,$$
where the variable $k$ corresponds to Kasparov and $f$ corresponds to Fischer.
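A Haskell sketch of these two terms (ours, for illustration only):
\begin{verbatim}
{-# LANGUAGE RankNTypes #-}

-- The two uniform inhabitants of  forall G. G -> G -> G.
pickK :: forall g. g -> g -> g    -- /\G. \k:G. \f:G. k
pickK k _ = k

pickF :: forall g. g -> g -> g    -- /\G. \k:G. \f:G. f
pickF _ f = f

main :: IO ()
main = print (pickK "e4" "d4", pickF "e4" "d4")   -- ("e4","d4")
\end{verbatim}
By parametricity these are the only total inhabitants of the Haskell
type, matching the fact that Anna-Louise has exactly two uniform
strategies.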
More generally, with multiple bound $\forall$ variables and more
complicated game imports, we shall take
uniformity to mean that the links Anna-Louise sets up between
components (such as the Kasparov$\,\leftrightarrow\,$Spassky or
Fischer$\,\leftrightarrow\,$Spassky links above) must be independent
of the games imported by the opposing team: these imported games are impenetrable ``black boxes''.
\paragraph*{Fixed links.}
Uniformity as independence from the particular games imported by the
opposing team will include independence not only from the
\emph{identity} of those games, but also from their \emph{state}.
This will ensure that the geometry of Anna-Louise's copycat play
remains constant over time:
once she has committed to linking one component to another, she must
stick with that link for the rest of the hypergame.
To illustrate this aspect of uniformity, consider the quadruple chess
simultaneous display with Kasparov and Fischer playing Black, and
Karpov and Spassky playing White:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\;\;\;
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showinverseboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showinverseboard
}}}
\\[7ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\end{center}
We shall write $\textsf{Chess}\times\textsf{Chess}\,\to\,\textsf{Chess}\times\textsf{Chess}$ for this
simultaneous display.\footnote{With the backtracking caveat: see footnote~\ref{backtracking}.}
Suppose Spassky begins with \wmove{e4}. Anna-Louise, playing copycat,
has a choice between copying this move to Fischer or to Kasparov.
Suppose she copies it to Fischer, who responds with \wmove{c5}, which
she duly copies back to Spassky:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\;\:
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\showinverseboard
}}}\,
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[14ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.4,arrows=->,nodesep=1.6ex,angleA=-110,angleB=-70,offsetA=-1ex,offsetB=-1ex]{SpasskyBoard}{FischerBoard}
\bput(0.5){\wmove{e4}}
\nccurve[ncurv=.5,arrows=->,nodesep=.5ex,angleA=-70,angleB=-110,offsetA=-2ex,offsetB=-2ex]{FischerBoard}{SpasskyBoard}
\bput(0.5){\wmove{c5}}
\end{center}
Suppose Karpov opens his game with the very same move as Spassky,
\wmove{e4}, which Anna-Louise copies across to Kasparov (the only
destination where this move makes sense):
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4}
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\;\;
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4}
\showinverseboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[13ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.4,arrows=->,nodesep=1.6ex,angleA=-110,angleB=-70,offsetA=-1ex,offsetB=-1ex]{KarpovBoard}{KasparovBoard}
\bput(0.5){\wmove{e4}}
\end{center}
Kasparov responds with the same move as Fischer, \wmove{c5}, which Anna-Louise copies back to Karpov:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\;\;
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[13ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.4,arrows=->,nodesep=1.6ex,angleA=-110,angleB=-70,offsetA=-1ex,offsetB=-1ex]{KarpovBoard}{KasparovBoard}
\bput(0.5){\wmove{e4}}
\nccurve[ncurv=.5,arrows=->,nodesep=.5ex,angleA=-70,angleB=-110,offsetA=-2ex,offsetB=-2ex]{KasparovBoard}{KarpovBoard}
\bput(0.5){\wmove{c5}}
\end{center}
So far, Anna-Louise has linked Spassky with Fischer, and Karpov with Kasparov:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\;\;
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[13ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{KasparovBoard}{KarpovBoard}
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{FischerBoard}{SpasskyBoard}
\end{center}
By (contrived) coincidence, both pairs of linked boards happen to have
reached exactly the same state. Therefore from this point onwards,
Anna-Louise could change the linkage, linking Kasparov with Spassky,
and Karpov with Fischer:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=5ex,rowsep=2ex]
Kasparov && Fischer && Karpov && Spassky \\ \\
\rput(0,.1){\rnode{KasparovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\:
&&
\rput(0,.1){\rnode{FischerBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showboard
}}}\;\;\;
&&
\rput(0,.1){\rnode{KarpovBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}\;
&&
\rput(0,.1){\rnode{SpasskyBoard}{\psscalebox{.45}{%
\newgame
\notationoff
\hidemoves{1.e4 c5}
\showinverseboard
}}}
\\[13ex]
&&&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\nccurve[ncurv=.35,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{KasparovBoard}{SpasskyBoard}
\nccurve[ncurv=.65,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{FischerBoard}{KarpovBoard}
\end{center}
For example, should Karpov respond with \wmove{Nf3}, she would copy
that move across to Fischer, then continue copying between Fischer and
Karpov, and between Kasparov and Spassky.
She could do this ``relinking'' for any game $G$, not just \textsf{Chess}, on
$G\times G\,\to\,G\times G$: no matter what the game $G$ is, she could
link the first and third $G$, and link the second and fourth $G$, but
if a point is reached in which all four copies of $G$ have the same
state, she switches the linkage, as in the chess example above.
If she consistently does this for all $G$, she has a strategy on the
hypergame $\forall G\,.\,G\times G\to G\times G$ which, in some fashion,
does not depend on $G$. Such ``relinking'' strategies do not
correspond to system $F$ terms, and are eliminated from the model by
our uniformity condition: independence from $G$ means
independence not only from the \emph{identity} of $G$, but also from the
\emph{state} of $G$.
\subsection{Negative quantifiers}\label{sec-neg}
Linear polymorphism was modelled in \cite{Abr97} via PER-like
intersections of the first-order games of \cite{AJ94,AJM00}. Full
completeness failed for types with negative quantifiers. In this
subsection we illustrate how the hypergame model successfully treats
negative quantifiers.
The polarity of a quantifier in a type is positive or negative
according to the number of times it is to the left of an arrow (in the
syntactic parse tree of the type): positive if even, negative if odd.
For example, $\forall X$ is positive in $\forall X.T$ and $\forall
Y.\forall X.T$, negative in $(\forall X.U)\to T$ and $(V\to \forall
X.U)\to T$, and positive in $\big((\forall X.U)\to V\big)\to T$.
Consider the simultaneous display $H\to H$ where $H$ is the hypergame
$\forall G\,.\,G\to G$:
\begin{center}\vspace{3ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[4ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\;\;\;\;\;\;$\to$\;\;\;\;\;\;&
\rput(0,.1){\rnode{SpasskyBoard}{$\forall G_2\,.\,G_2\to G_2$}}
\\[3ex]
&\makebox[0ex]{Anna-Louise}
\\[.5ex]
\;
\end{psmatrix}
\end{center}
Fischer's quantifier $\forall G_1$ is negative.\footnote{The scope of
$\forall G_1$ in the diagram does not extend past the central arrow
{\normalsize$\to$}.}
To kick off, Spassky must open the game $\forall G_2\,.\,G_2\to G_2$ in
front of him. This is a hypergame, universally quantified, so he must
begin by instantiating $G_2$. He chooses $G_2=\textsf{Chess}$, and opens
$\wmove{Nf3}$ on the board where he has White:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{6ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}
\\[3ex]
\;
\end{psmatrix}
\end{center}
We shall consider three of the copycat strategies available to Anna-Louise from this point:
\begin{center}
\begin{tabular}{|c|l|l|}\hline
\bf \!\!Strategy\!\! & \bf Anna-Louise\ldots & \bf \rule{0ex}{2.5ex}Corresponding term of type $H\to H$\raisebox{-1ex}{\strut} \\\hline
$\iota$ &
\parbox{2.7in}{\rule{0ex}{2.2ex}\raggedright\ldots\ copies what Spassky did across to Fischer: import $\textsf{Chess}$ and play $\wmove{Nf3}$\raisebox{-1ex}{\strut}} &
$\lambda h^H.h$ \\\hline
$\sigma$ &
\parbox{2.7in}{\rule{0ex}{2.2ex}\raggedright\ldots\ plays copycat in Spassky's local chess display, ``playing Spassky against himself''\raisebox{-1ex}{\strut}} &
$\lambda h^H.\Lambda G.\lambda g^{G}.g$ \\\hline
$\tau$ &
\parbox{2.7in}{\rule{0ex}{2.2ex}\raggedright\ldots\ imports $G_1\,=\,\textsf{Chess}\to\textsf{Chess}$ against Fischer,
then copies moves between the six resulting boards, along three
``copycat links''\raisebox{-1ex}{\strut}} &
$\lambda h^H.\Lambda G.\lambda g^{G}.h_{G\to G}(\lambda x^{G}.x)g$ \\\hline
\end{tabular}
\end{center}
The notation $h_U$ in the third term denotes the application of $h$ to
the type $U$.
\paragraph*{The first copycat strategy $\iota\,$.}
Anna-Louise opens the hypergame $\forall G_1\,.\,G_1\to G_1$ in front
of Fischer by mimicking Spassky: she imports $\textsf{Chess}$ for $G_1$ and
opens with $\wmove{Nf3}$ as White:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\;\;\;\;\;\;\;\;&&Spassky\\[6ex]
\invchesssimulscale{}{1.Nf3}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;\;\;
\\[2ex]
\;
\end{psmatrix}
\end{center}
She then copies moves between the four boards according to the
following geometry of copycat links:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\;\;\;\;\;\;\;\;&&Spassky\\[6ex]
\invchesssimulscale{}{1.Nf3}{.45}{8.5ex}\hspace*{4ex}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[14ex]
&\makebox[0ex]{Anna-Louise}\;\;\;\;\;\;
\\[2ex]
\;
\end{psmatrix}%
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-60,angleB=-120]{c}{a}
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-60,angleB=-120]{d}{b}
\end{center}
This copycat strategy $\iota$ corresponds to the identity system $F$
term $$\lambda h^H.h$$ of type $H\to H$. (Recall $H\,=\,\forall G\,.\,G\to
G$.) The same strategy models the $\eta$-expanded variant $\lambda
h^H.\Lambda G.\lambda g^G.h_G\,g$.
\paragraph*{The second copycat strategy $\sigma\,$.}
The second copycat strategy $\sigma$ ``plays Spassky against
himself''. Recall the state after Spassky's opening move:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{}{1.Nf3}{.45}{8.5ex}
\\[7ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
Spassky has just imported \textsf{Chess}{} and opened with the White move
\wmove{Nf3}. In this scenario Anna-Louise copies that move locally,
to the other board in front of Spassky:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{1.Nf3}{1.Nf3}{.45}{8.5ex}
\\[7ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
Spassky may respond with \wmove{g6} as Black, which
Anna-Louise copies back to the other board:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{1.Nf3 g6}{1.Nf3 g6}{.45}{8.5ex}
\\[7ex]
&\makebox[0ex]{Anna-Louise}
\\[2ex]
\;
\end{psmatrix}
\end{center}
She continues to copy moves along the following copycat link, leaving
Fischer to forever twiddle his thumbs:
\begin{center}\vspace{2ex}
\hspace*{4ex}\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer && Spassky \\[6ex]
\rput(0,.1){\rnode{FischerBoard}{$\forall G_1\,.\,G_1\to G_1$}}
&\Large\hspace*{7ex}$\to$\hspace*{7ex}&
\hspace*{4ex}\chesssimulscale{1.Nf3 g6}{1.Nf3 g6}{.45}{8.5ex}
\\[9ex]
&\makebox[0ex]{Anna-Louise}
\\[3ex]
\;
\end{psmatrix}%
\nccurve[ncurv=.7,arrows=<->,nodesep=.5ex,angleA=-90,angleB=-90]{a}{b}%
\end{center}
This copycat strategy corresponds to the system $F$ term
$$\lambda h^H.\Lambda G.\lambda g^G.g$$
of type $H\to H$. (Recall $H\,=\,\forall G\,.\,G\to G\,$.)
Fischer's eternal thumb twiddling corresponds to $h$ not showing up in the body of the term.
\paragraph*{The third copycat strategy $\tau\,$.}
The third copycat strategy $\tau$, like the first, the identity
$\iota$, responds to Fischer. However, instead of importing $\textsf{Chess}$
for $G_1$ against Fischer, as in $\iota$, Anna-Louise imports a
simultaneous chess display $\textsf{Chess}\to\textsf{Chess}$ for $G_1$:\footnote{As
usual, the large arrow {\Large$\to$} between Fischer and Spassky binds
most weakly (so we can omit brackets around the left four boards).}
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{6ex}&&Spassky\\[4ex]
\hspace*{-6ex}$\left(\hspace*{5.5ex}\mbox{\chesssimulscale{}{}{.26}{5ex}}\hspace*{5.5ex}\right)$%
\hspace*{1ex}{\small$\to$}\hspace*{1ex}%
$\left(\hspace*{5.5ex}\mbox{\invchesssimulscale{}{1.Nf3}{.26}{5ex}}\hspace*{5.5ex}\right)$%
&\Large\;\;$\to$\;\;&
\hspace*{6ex}\chesssimulscale{}{1.Nf3}{.3}{5.5ex}
\\[7ex]
\hspace{24ex}Anna-Louise
\\[2ex]
\;
\end{psmatrix}%
\end{center}
As shown above, Anna-Louise copies Spassky's \wmove{Nf3} onto the
fourth board against Fischer.
She continues with the following geometry of copycat links:
\begin{center}\vspace{2ex}
\begin{psmatrix}[colsep=4ex,rowsep=0ex]
Fischer\hspace*{6ex}&&Spassky\\[4ex]
\hspace*{-6ex}$\left(\hspace*{5.5ex}\mbox{\chesssimulscale{}{}{.26}{5ex}}\hspace*{5.5ex}\right)$%
\nccurve[ncurv=.7,arrows=<->,nodesep=.5ex,angleA=-90,angleB=-90]{a}{b}%
\hspace*{1ex}{\small$\to$}\hspace*{1ex}%
$\left(\hspace*{5.5ex}\mbox{\invchesssimulscale{}{1.Nf3}{.26}{5ex}}\hspace*{5.5ex}\right)$%
&\Large\;\;$\to$\;\;&
\hspace*{6ex}\chesssimulscale{}{1.Nf3}{.3}{5.5ex}
\\[7ex]
\hspace{24ex}Anna-Louise
\\[2ex]
\;
\end{psmatrix}%
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{c}{a}
\nccurve[ncurv=.4,arrows=<->,nodesep=.5ex,angleA=-70,angleB=-110]{d}{b}
\end{center}
On the right four boards she continues just as on the four boards of
the identity $\iota$. If Fischer responds as Black on the fourth
board, she copies this to the last board against Spassky, and if
Fischer opens as White on the third board, she copies this to open the
other board against Spassky.
On the left two boards she ``plays Fischer against himself''. If
Fischer opens with White on the second board, she copies this to him
on the first board; if Fischer responds as Black on the first board,
she copies that back to the second board.
This corresponds to the first argument $\lambda x^G.x$ of $h_{G\to G}$
in the term
$$\lambda h^H.\Lambda G.\lambda g^G.h_{G\to G}\,(\lambda x^G.x)\,g$$
associated with this strategy.
Note that all three of the above copycat strategies are uniform: had
the imported game been \textsf{\oxO's \& \oxX's}{} instead of \textsf{Chess}, Anna-Louise would have
copied the moves around in exactly the same geometry.
In the third strategy she would have imported $\textsf{\oxO's \& \oxX's}\to\textsf{\oxO's \& \oxX's}$ for $G_1$
against Fischer.
This strategy always imports $K\to K$ against Fischer, whatever the
game $K$ imported by Spassky. The geometry of Anna-Louise's three
copycat links among the six boards is independent of $K$.
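The three terms can likewise be sketched in Haskell (our rendering,
for illustration; \texttt{H} abbreviates \texttt{forall g. g -> g}):
\begin{verbatim}
{-# LANGUAGE RankNTypes #-}

type H = forall g. g -> g

iota :: H -> H             -- \h:H. h
iota h = h

sigma :: H -> H            -- \h:H. /\G. \g:G. g   (h is discarded)
sigma _ = \g -> g

tau :: H -> H              -- \h:H. /\G. \g:G. h_{G->G} (\x:G. x) g
tau h = \g -> h (\x -> x) g
\end{verbatim}
Note that \verb|sigma| discards its argument, matching Fischer's
eternal thumb twiddling, while \verb|tau| instantiates \verb|h| at an
arrow type, matching the import of a simultaneous display for $G_1$.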
\subsection{Other conceptual ingredients of the model}
\emph{This subsection may be somewhat abstruse for readers not already familiar with game semantics;
consider skipping to Section~\ref{sec-trans} below, without loss of
continuity.}
So far in this introduction we have sketched the following ingredients
of our model:
\begin{itemize}
\item Hypergames: games as moves, to model universal quantification/instantiation.
\item Self-reference: hypergames can be imported into hypergames, and a
hypergame may even be imported into itself.
\item Uniformity: the shape of Anna-Louise's play, in terms of how
she copies moves around, cannot depend on the choices of games imported
by the opposing team: she must treat those games as ``black boxes''. Once
two (sub)games are linked by copycat, she cannot change that link.
\end{itemize}
The following additional ingredients come from prior (first-order,
unquantified) work:
\begin{itemize}
\item Backtracking. We permit moves to be taken back during play, corresponding to the fact that a system $F$
function can call its argument an arbitrary number of times.
Backtracking was used by Lorenzen \cite{Lor60,Fel85} for modelling
proofs of intuitionistic logic, by Coquand \cite{Coq91,Coq95}, and by
Hyland and Ong \cite{HO}.
\item
Innocence. Following Coquand \cite{Coq91,Coq95}, Hyland-Ong \cite{HO}
and Nickau \cite{Nic96}, strategies depend only on a restricted
``view'' of the history of play.
\item Interaction. We use Coquand-style interaction between backtracking strategies
to model normalisation of system $F$ terms, specifically, the
refinement by Hyland and Ong of this interaction in a lambda calculus (cartesian closed)
setting.
\item Liveness. A strategy must always be able to make a move (coined \emph{liveness} by Conway \cite{Con76}).
\item Copycat condition. We impose (a restriction of) Lorenzen's condition \cite{Lor60} for dialogues
listed by Felscher as (D10) \cite{Fel85}, which requires that an
atomic formula (or in the present system $F$ context, a type variable)
be ``asserted'' by Anna-Louise only if, within her view, the opposing
team has just asserted it.\footnote{I was unaware of Lorenzen's (D10) at the time I wrote \cite{Hug97,Hug00}.}
\end{itemize}
These additional ingredients relate to quantifiers:
\begin{itemize}
\item Copycat expansion.
Technically, uniformity will be implemented by \emph{copycat
expansion} \cite{Hug06h}, similar to Felscher's \emph{skeleton
expansion} \cite{Fel85,Fel01} (and equivalent to the condition in
\cite{Hug97,Hug00}): whenever a strategy includes a play (accepts a
move sequence) $p$, with a variable $X$ imported by the opponent into
a quantified variable, then for all types $T$, all variants of $p$
obtained by substituting $T$ for $X$ and playing copycat between
appropriate instances of $T$ are also in the strategy.\footnote{I was
unaware of Felscher's skeleton expansion at the time I wrote
\cite{Hug97,Hug00}.}
\item Compactness. A strategy is determined by a finite ``skeleton'', which expresses only the
copycat links between components.
\end{itemize}
The main theorem is that the map from system $F$ terms to strategies
(satisfying the above properties) is surjective. A surjectivity
theorem of this kind for simply typed $\lambda$-calculus is given in
\cite{Plo80}, but since \cite{AJ92} such a result in a logical
setting has often come to be referred to as \emph{full completeness},
when it includes a semantic notion of composition.
\subsubsection{Modular construction of games}
We shall define system $F$ games modularly. First we define a
transition system whose states are system $F$ types, and whose
transition labels are hypermoves. The hypermoves involve
instantiating quantifiers in the states (just as the examples above
involved instantiating quantifiers during play).
Every transition system determines a forest (disjoint union of
trees): its set of non-empty traces.
Every forest can be interpreted as an \emph{arena}, in the sense of
Hyland and Ong \cite{HO}.
Following Hyland and Ong, every arena defines a game, with
backtracking. The (hyper)game we associate with a system $F$ type
will be such a backtracking arena-game. Since we use arena games,
interaction of strategies (composition) is precisely the Hyland-Ong
interaction.
The underlying first-order composition allows us to relate the
composition to an underlying untyped lambda calculus machine, as in
\cite{DHR96}, upon erasing the system $F$ type information. In other
words, the composition, when viewed as acting on $\eta$-long
$\beta$-normal forms (representing innocent view functions),
corresponds to (a) erasing the system $F$ type information, (b)
computing with the abstract machine
\cite{DHR96} on the underlying untyped lambda term, then (c) replacing
type information.\footnote{I have a vague recollection that just
such an abstract machine was analysed for system $F$ in the masters'
thesis of Eike Ritter. I need to investigate this.}
If we erase the type information but stay in the model (\emph{i.e.}, we don't
look at the lambda terms), then we are just composing strategies in a
naive games model of untyped lambda calculus. The underlying
transition system of the untyped lambda game has a single state and
every integer $i\ge 1$ as transition labels. These integer labels are
precisely the result of deleting the instantiating types from the
transition labels of the system $F$ transition graph. Or to put it
another way: the system $F$ transition labels are those of the untyped lambda
transition graph together with type instantiations.
The untyped lambda calculus games are similar to those in \cite{KNO02}.
\subsection{Related work}
Affine linear polymorphism was modelled in \cite{Abr97}\footnote{Samson
Abramsky's course at this summer school, during the summer before my
D.Phil., is in part what inspired my choice of thesis topic.} with
PER-like ``intersections'' of first-order games of the kind in \cite{AJ94,AJM00}.
Abramsky and Lenisa have explored systematic ways of modelling
quantifiers so that, in the limited case in which all quantifiers are
outermost (so in particular positive), models are fully complete
\cite{AL00}. (See subsection~\ref{sec-neg} for a simple example of a type
at which full completeness fails.)
The hypergame/uniformity technique presented here has been applied to
affine linear logic \cite{MO01,Mur01}, and has been used to study
Curry-style type isomorphisms \cite{deL06}.
\section{Transition system games and backtracking}\label{sec-trans}
A game such as \textsf{Chess}{} or \textsf{\oxO's \& \oxX's}{} has a \emph{state} (the configuration
of the board or grid) and, for every state, a set of
\emph{transitions} or moves (\emph{e.g.}\ \wmove{Nf3},
\wmove{Bb4}, $\textsf{\textbf X}$ top-right, $\textsf{\textbf O}$ centre), each with an ensuing
state.\footnote{For a game of chance such as backgammon, one would
specify a probability distribution over ensuing states, rather than a
single ensuing state. We consider only deterministic games here.}
Such a game can be specified as a deterministic labelled \emph{transition system}:
an edge-labelled directed graph whose vertices are the states of the
game, with a distinguished initial state.
A fragment of the transition system for chess is illustrated
below.\footnote{The states include data for en passant and castling
rights, and the turn (Black or White to move), not shown in the
diagram.}
\begin{center}
\pspicture*(-14,-10)(15,.3)
\psset{nodesep=5pt,xunit=.25cm,yunit=.57cm
\rput(0,-1){\rnode{chess}{\chessposn{}}}
\rput(-6,-6){\rnode{d4}{\chessposn{1.d4}}}
\ncline[arrows=->]{chess}{d4}
\bput(0.5){\wmove{d4}}
\rput(6,-6){\rnode{c4}{\chessposn{1.c4}}}
\ncline[arrows=->]{chess}{c4}
\aput(0.5){\wmove{c4}}
\rput(-17,-9.5){\rnode{d4f5}{\chessposn{1.d4 f5}}}
\ncline[arrows=->]{d4}{d4f5}
\bput(0.5){\wmove{f5}}
\rput(-6,-11){\rnode{d4Nf6}{\chessposn{1.d4 Nf6}}}
\ncline[arrows=->]{d4}{d4Nf6}
\aput(0.5){\wmove{Nf6}}
\rput(17,-9.5){\rnode{c4e5}{\chessposn{1.c4 e5}}}
\ncline[arrows=->]{c4}{c4e5}
\aput(0.5){\wmove{e5}}
\rput(6,-11){\rnode{c4Nf6}{\chessposn{1.c4 Nf6}}}
\ncline[arrows=->]{c4}{c4Nf6}
\bput(0.5){\wmove{Nf6}}
\rput(-14.5,-15.7){\rnode{d4Nf6Nf3}{\chessposn{1.d4 Nf6 2.Nf3}}}
\ncline[arrows=->]{d4Nf6}{d4Nf6Nf3}
\bput(0.5){\wmove{Nf3}}
\rput(14.5,-15.7){\rnode{c4Nf6g3}{\chessposn{1.c4 Nf6 2.g3}}}
\ncline[arrows=->]{c4Nf6}{c4Nf6g3}
\aput(0.5){\wmove{g3}}
\rput(0,-16){\rnode{join}{\chessposn{1.d4 Nf6 2.c4}}}
\ncline[arrows=->]{d4Nf6}{join}
\bput(0.4){\wmove{c4}}
\ncline[arrows=->]{c4Nf6}{join}
\aput(0.4){\wmove{d4}}
\endpspicture
\end{center}
Note that the graph is not a tree.
Without a distinguished initial state, we shall refer to such a graph as a \emph{transition graph}.
Formally, a \defn{transition graph} $(Q,L,\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix})$ comprises a set $Q$
of \defn{states}, a set $L$ of \defn{labels}, and a partial \defn{transition function}
$\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}:Q\times L\rightharpoonup Q$.\footnote{We
write $f:X\rightharpoonup Y$ if $f$ is a partial function from $X$ to $Y$, \emph{i.e.},
a function $X'\to Y$ for some $X'\subseteq X$.}
We write $q\transto{l}q'$ for $\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix}(q,l)=q'$.
A \defn{transition system} $(Q,L,\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix},\star)$ is a transition graph
$(Q,L,\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix})$ together with an \defn{initial state} $\star\in Q$.
A \defn{trace} of $(Q,L,\begin{psmatrix}[colsep=2.4ex]\rnode{l}{\rule{0pt}{1.2em}}&\rnode{r}{\rule{0pt}{1.2em}}\ncline[arrows=->,nodesep=2pt]{l}{r}\end{psmatrix},\star)$
is a finite sequence $l_1\ldots l_k$ of labels $l_i\in L$
$(k\ge 0)$ such that
$$\star\transto{l_1}q_1\transto{l_2}q_2\transto{l_3}\cdots\transto{l_{k-1}}
q_{k-1}\transto{l_k}q_k$$ for states $q_i\in Q$ ($1\le i\le
k$).\footnote{Note that the states $q_i\in Q$ are uniquely determined
by the $l_i$, since our transition systems are implicitly
deterministic.}
For example, \hspace{1ex}\wmove{d4}\;\wmove{Nf6}\;\wmove{c4}\hspace{1ex} is a trace of
chess, visible in the diagram above.
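For concreteness, transition systems and traces are directly
executable; the following Haskell sketch is ours, with a toy fragment
of the chess system whose state names are illustrative:
\begin{verbatim}
import qualified Data.Map as M
import Data.Maybe (isJust)

-- A deterministic labelled transition system: a partial transition
-- function (a finite map) together with an initial state.
data TS q l = TS { trans :: M.Map (q, l) q, start :: q }

-- Follow a sequence of labels from the initial state; it is a trace
-- exactly when every transition along it is defined.
run :: (Ord q, Ord l) => TS q l -> [l] -> Maybe q
run ts = foldl step (Just (start ts))
  where step (Just q) l = M.lookup (q, l) (trans ts)
        step Nothing  _ = Nothing

isTrace :: (Ord q, Ord l) => TS q l -> [l] -> Bool
isTrace ts = isJust . run ts

-- A toy fragment of the chess transition system depicted above.
chess :: TS String String
chess = TS (M.fromList
  [ (("",       "d4" ), "d4")
  , (("d4",     "Nf6"), "d4 Nf6")
  , (("d4 Nf6", "c4" ), "d4 Nf6 c4") ]) ""

main :: IO ()
main = print (isTrace chess ["d4", "Nf6", "c4"])   -- True
\end{verbatim}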
\subsection{Games}
Let $M$ be a set of \defn{moves}. A \defn{trace over $M$} or
\defn{$M$-trace} is a list (finite sequence) $m_1\ldots m_k$ of moves $m_i\in
M$ ($k\ge 0$).
A set $G$ of $M$-traces is a \defn{tree} if whenever $m_1\ldots m_k$
is in $G$ with $k\ge 1$ then its \defn{predecessor} $m_1\ldots
m_{k-1}$ is also in $G$, and the empty trace $\varepsilon$ is in $G$ (the
root of the tree).
A \defn{game over $M$} or \defn{$M$-game} is a tree of $M$-traces.
Following \cite{HO}, we write $\mathsf{O}$ for the first player
(associated with odd moves,
\emph{i.e.}, moves in a trace with odd index), and $\mathsf{P}$ for the second player
(associated with even moves).
Every transition system $\Delta$ with label set $M$ defines a game
$G(\Delta)$ over $M$, namely the set of traces of $\Delta$.
For example, if $\Delta_{\wmove{N}}$ is the chess transition system
depicted above, and $M_{\wmove{N}}$ is the set of all chess moves
$\{\wmove{Ne5},\wmove{Qa2},\wmove{Kh1},\ldots\}$, then
$G(\Delta_{\wmove{N}})$ (the set of all traces of the chess transition
system) is a game over $M_{\wmove{N}}$.
This game comprises all legal sequences of chess moves.
\subsection{Strategies}
A \defn{strategy} (implicitly for the second player $\mathsf{P}$) for a game
$G$ is a tree $\sigma\subseteq G$ whose every odd-length trace has a
unique one-move extension in $\sigma$: if $m_1\ldots m_k\in \sigma$
and $k$ is odd,
there exists a unique move $m$ such that $m_1\ldots m_k m\in
\sigma$.
A strategy $\sigma$ for $G$ is \defn{live} (or \defn{total}) if it responds to every
stimulus: if $m_1\ldots m_k\in \sigma$ with $k$ even and $m_1\ldots
m_km\in G$, then $m_1\ldots m_km\in
\sigma$.\footnote{Thus $m_1\ldots m_kmn\in \sigma$ for a unique $n$,
the ``response of $\sigma$ to $m$ after $m_1\ldots m_k$''. One is
also tempted to call such a strategy \emph{total}, by analogy with
partial versus total functions; we shall stick with Conway's original
terminology \cite{Con76}.}
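For finite games these conditions are mechanically checkable; the
following Haskell sketch (ours) takes a game as an explicit
prefix-closed list of traces:
\begin{verbatim}
import Data.List (isPrefixOf, nub)

type Trace m = [m]
type Game  m = [Trace m]   -- assumed prefix-closed, containing []

-- One-move extensions of a trace t within a set of traces u.
exts :: Eq m => Game m -> Trace m -> [Trace m]
exts u t = nub [ t' | t' <- u, length t' == length t + 1
                    , t `isPrefixOf` t' ]

-- A strategy: a subtree of g in which every odd-length trace has a
-- unique one-move extension.
isStrategy :: Eq m => Game m -> Game m -> Bool
isStrategy g s =
  all (`elem` g) s &&
  all (\t -> length (exts s t) == 1)
      [ t | t <- s, odd (length t) ]

-- Liveness: after any even-length trace of s, every move available
-- in g is answered, i.e. stays inside s.
isLive :: Eq m => Game m -> Game m -> Bool
isLive g s = and [ t' `elem` s | t <- s, even (length t)
                               , t' <- exts g t ]

main :: IO ()
main = print (isStrategy g s, isLive g s)   -- (True,True)
  where g = [[], [1], [1,1], [1,2], [1,2,1]] :: Game Int
        s = [[], [1], [1,1]]
\end{verbatim}
The toy game \verb|g| here reappears below as a lambda calculus game.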
\subsection{Backtracking}
When playing chess against a computer, there is usually an option
to take a move back. If we allow both players (user and computer) to
take back moves, and also to return to previously abandoned lines, we
obtain a derived game in which a move is either an opening chess move
(starting or restarting the game) or is a pair: a pointer to an
earlier move by the opponent, and a chess move in response to that
move. For example, here is a trace of backtracking chess, with time
running left-to-right (so backtracking pointers are right-to-left):
\begin{center}\vspace{4ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{\wmove{e4}}
&
\rnode{2}{\wmove{e5}}
&
\rnode{3}{\wmove{Nf3}}
&
\rnode{4}{\wmove{c5}}
&
\rnode{5}{\wmove{f4}}
&
\rnode{6}{\wmove{Nc6}}
&
\rnode{7}{\wmove{Bb5}}
&
\rnode{8}{\wmove{Nf6}}
&
\rnode{9}{\wmove{e3}}
&
\rnode{10}{\wmove{a6}}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{150}{35}
\jptr{5}{2}{-140}{-40}
\jptr{6}{3}{150}{30}
\jptr{7}{6}{-140}{-40}
\jptr{8}{3}{150}{30}
\jptr{10}{7}{150}{30}
\end{math}\vspace{4ex}\end{center}
The penultimate move \wmove{e3}, with no backtracking pointer, is a
restarting move.
Since this is a trace of a game with an underlying transition system,
we can include the states in the depiction, as below, which
corresponds to the first six moves above.
\begin{center}\vspace{6ex}\newcommand{0.23}\label{chessdiagstates}{0.23}\label{chessdiagstates}
\psset{nodesep=2pt,colsep=5ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{0}{\chessposnscale{0.23}\label{chessdiagstates}{}}
&
\rnode{1}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4}}
&
\rnode{2}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4 e5}}
&
\rnode{3}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4 e5 2.Nf3}}
&
\rnode{4}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4 c5}}
&
\rnode{5}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4 e5 2.f4}}
&
\rnode{6}{\chessposnscale{0.23}\label{chessdiagstates}{1.e4 e5 2.Nf3 Nc6}}
\end{psmatrix}
\nccurve[arrows=<-,angleA=-170,angleB=-10]{1}{0}\aput(0.5){\wmove{e4}}
\nccurve[arrows=<-,angleA=170,angleB=10]{2}{1}\bput(0.5){\wmove{e5}}
\nccurve[arrows=<-,angleA=-170,angleB=-10]{3}{2}\aput(0.5){\wmove{Nf3}}
\nccurve[ncurv=.5,nodesep=5pt,arrows=<-,angleA=145,angleB=35,offsetA=-1.2ex,offsetB=-1.2ex]{4}{1}\bput(0.5){\wmove{c5}}
\nccurve[ncurv=.5,nodesep=5pt,arrows=<-,angleA=-145,angleB=-35,offsetA=1.2ex,offsetB=1.2ex]{5}{2}\aput(0.5){\wmove{f4}}
\nccurve[ncurv=.5,nodesep=5pt,arrows=<-,angleA=145,angleB=35,offsetA=-1.2ex,offsetB=-1.2ex]{6}{3}\bput(0.5){\wmove{Nc6}}
\vspace{6ex}\end{center}
In this depiction we draw the pointers akin to transitions in the
underlying transition system, with their labels. This clarifies the
sense in which we refer back to a previous state during backtracking,
and make our move from there.
We shall write $\backtrack{G}$ for the backtracking variant of a game $G$, formalised below.
Let $M$ be a set of moves. A
\defn{dialogue} over $M$ is an $M$-trace in which each
element may carry a pointer, spanning an odd distance, back to an earlier element (\emph{cf.}\ \cite{Lor60,Fel85,Coq91,Coq95,HO}).
For example, a dialogue over the set $M_{\wmove{N}}$ of chess
moves is depicted above.
Formally, a dialogue over $M$ is an ($\N\!\times\!M$)-trace\footnote{$\N=\{0,1,2,\ldots\}$.}
$$\langle \alpha_1,m_1\rangle\ldots\langle \alpha_k,m_k\rangle$$ such
that $i-\alpha_i\in\{1,3,5,\ldots\}$ for $1\le i\le
k$.
Each $\alpha_i$ represents a pointer from $m_i$ back to
$m_{\alpha_i}$, with $\alpha_i=0$ coding ``$m_i$ has no pointer''.
The formalisation of the chess dialogue depicted above is the
following ($\N\!\times\!M_{\wmove{N}}$)-trace:
\begin{center}\begin{math}
\langle 0,\wmove{e4}\rangle\;\langle 1,\wmove{e5}\rangle\;
\langle 2,\wmove{Nf3}\rangle\; \langle 1,\wmove{c5}\rangle\; \langle 2,\wmove{f4}\rangle\; \langle 3,\wmove{Nc6}\rangle \; \langle 6,\wmove{Bb5}\rangle
\; \langle 3,\wmove{Nf6}\rangle \; \langle 0,\wmove{e3}\rangle \; \langle 7,\wmove{a6}\rangle
\end{math}\end{center}
A move of the form $\langle 0,m\rangle$, without a pointer, is a
\defn{starting} move.
A \defn{thread} of a dialogue over $M$ is any sequence of elements
traversed from a starting move by following pointers towards the
right.
For example,
$\wmove{e4}\:\wmove{e5}\:\wmove{Nf3}\:\wmove{Nc6}\:\wmove{Bb5}$ is a
thread of the chess dialogue above:
\begin{center}\vspace{2ex}\begin{math}
\psset{colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{\wmove{e4}}
&
\rnode{2}{\wmove{e5}}
&
\rnode{3}{\wmove{Nf3}}
&
{\rule{2ex}{0ex}\rule{0ex}{1ex}}
&
{\rule{2ex}{0ex}\rule{0ex}{1ex}}
&
\rnode{6}{\wmove{Nc6}}
&
\rnode{7}{\wmove{Bb5}}
&
{\rule{2ex}{0ex}\rule{0ex}{1ex}}
&
{\rule{2ex}{0ex}\rule{0ex}{1ex}}
&
{\rule{2ex}{0ex}\rule{0ex}{1ex}}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{6}{3}{150}{30}
\jptr{7}{6}{-140}{-40}
\end{math}\vspace{3ex}\end{center}
The singleton sequence $\wmove{e3}$ is also a thread, as is \wmove{e4}\,\wmove{e5}\,\wmove{f4}\,.
Formally, an $M$-trace $m_{d_1}\ldots m_{d_n}$ (where $n\ge 0$) is a
\defn{thread} of
the dialogue $\langle \alpha_1,m_1\rangle\cdots\langle
\alpha_k,m_k\rangle$ over $M$ if $\alpha_{d_1}=0$ and
$\alpha_{d_j}=d_{j-1}$ for $1<j\le n$.
Let $G$ be an $M$-game. A dialogue over $M$ \defn{respects $G$} if
its threads are in $G$.
For example, if $\textsf{Chess}$ abbreviates our earlier formalisation
$G(\Delta_{\wmove{N}})$ of the game of chess as a transition system game,
then the dialogue over $M_{\wmove{N}}$ depicted above respects
$\textsf{Chess}$ (since every thread is a legal sequence of chess moves from the
initial chess position).
The \defn{backtracking game} $\backtrack G$ is
the set of all dialogues over $M$ which respect $G$.
For example, the dialogue over $M_{\wmove{N}}$ depicted above is a trace
of $\backtrack{\textsf{Chess}}$, \emph{i.e.}, of ``backtracking chess''.
The \defn{$\mathsf{P}$-backtracking game} $\pbacktrack{G}$ is obtained from
$\backtrack G$ by permitting only the second player $\mathsf{P}$ to
backtrack: every $\mathsf{O}$-move (move in odd position) but the first
points to the previous move. Formally, $\pbacktrack{G}$ comprises
every $\langle \alpha_1,m_1\rangle\ldots\langle \alpha_k,m_k\rangle$
in $\backtrack G$ such that $\alpha_i=i-1$ for all odd
$i\in\{1,\ldots,k\}$.
A dialogue of $\pbacktrack{\textsf{Chess}}$ is shown below.
\begin{center}\vspace{6ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{\wmove{e4}}
&
\rnode{2}{\wmove{e5}}
&
\rnode{3}{\wmove{f4}}
&
\rnode{4}{\wmove{c5}}
&
\rnode{5}{\wmove{Nf3}}
&
\rnode{6}{\wmove{d6}}
&
\rnode{7}{\wmove{d4}}
&
\rnode{8}{\wmove{e6}}
&
\rnode{9}{\wmove{d4}}
&
\rnode{10}{\wmove{cd}}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{150}{36}
\jptr{5}{4}{-140}{-40}
\jptr{6}{5}{150}{30}
\jptr{7}{6}{-140}{-40}
\jptr{8}{1}{150}{36}
\jptr{9}{8}{-145}{-40}
\jptr{10}{7}{150}{30}
\end{math}\vspace{2ex}\end{center}
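The pointer formalism is equally easy to animate. The Haskell sketch
below (ours) checks dialogue validity, extracts the thread ending at a
given position, and checks the $\mathsf{P}$-backtracking restriction:
\begin{verbatim}
-- A dialogue over M: a trace over N x M, with alpha = 0 coding
-- "no pointer".
type Dialogue m = [(Int, m)]

-- Validity: i - alpha_i is odd at every position i; in particular
-- starting moves (alpha_i = 0) occur only at odd positions.
isDialogue :: Dialogue m -> Bool
isDialogue d = and [ odd (i - a) | (i, (a, _)) <- zip [1 ..] d ]

-- The pointer chain ending at position i; it always begins at a
-- starting move, so it is a thread of the dialogue.
chainTo :: Dialogue m -> Int -> [m]
chainTo d i
  | i <= 0    = []
  | otherwise = chainTo d a ++ [m]
  where (a, m) = d !! (i - 1)

-- P-backtracking: every odd-position move points to its immediate
-- predecessor (with alpha_1 = 0 for the first move).
isPDialogue :: Dialogue m -> Bool
isPDialogue d =
  isDialogue d &&
  and [ a == i - 1 | (i, (a, _)) <- zip [1 ..] d, odd i ]

-- The backtracking-chess dialogue depicted earlier.
ex :: Dialogue String
ex = [ (0,"e4"), (1,"e5"), (2,"Nf3"), (1,"c5"), (2,"f4")
     , (3,"Nc6"), (6,"Bb5"), (3,"Nf6"), (0,"e3"), (7,"a6") ]

main :: IO ()
main = do
  print (isDialogue ex)    -- True
  print (chainTo ex 7)     -- ["e4","e5","Nf3","Nc6","Bb5"]
  print (isPDialogue ex)   -- False: O backtracks (positions 5, 9)
\end{verbatim}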
For every type $T$ of system $F$, we shall define a transition system
$\Delta_T$ and define the hypergame associated with $T$ simply as the
backtracking game over this transition system, \emph{i.e.},
$\backtrack{G(\Delta_T)}$.
For didactic purposes, we begin in the next section with the
restricted case of lambda calculus.
\section{Lambda calculus games}
Let $\pmb{\lambda}$ denote the types of $\lambda$ calculus generated from
a single base type $X$ by implication $\to$.
Every $\pmb{\lambda}$ type $T$ determines a transition system $\Delta_T$:
\begin{itemize}
\item
States are $\pmb{\lambda}$ types, with an additional initial state $\star\,$.
\item A label is any $i\in\{1,2,3,\ldots\}$, called a \defn{branch choice}.
\item Transitions. A $1$-labelled transition
$$\star\hspace{1ex}\transto{1}\hspace{1ex}T$$
from the initial state to $T$, and transitions
$$T_1\to T_2\to\ldots\to T_n\to X\hspace{3ex}\transto{i}\hspace{3ex}T_i{_{_{\rule{0ex}{2ex}}}}\hspace*{8ex}$$
for $1\le i\le n$.
\end{itemize}
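This transition system is directly computable. The Haskell sketch
below (ours) enumerates all traces of $\Delta_T$; the enumeration
terminates because every transition moves to a strict subterm:
\begin{verbatim}
-- Lambda types over a single base type X.
data LType = X | LType :-> LType deriving (Eq, Show)
infixr 5 :->

-- The argument list of T1 -> ... -> Tn -> X is [T1, ..., Tn].
args :: LType -> [LType]
args X         = []
args (t :-> u) = t : args u

-- All traces of Delta_T: label 1 leads from the initial state * to
-- T, and label i leads from a state to its i-th argument type.
traces :: LType -> [[Int]]
traces t = [] : map (1 :) (from t)
  where from u = [] : [ i : tr | (i, ti) <- zip [1 ..] (args u)
                               , tr <- from ti ]

main :: IO ()
main = print (traces (X :-> ((X :-> X) :-> X)))
-- [[],[1],[1,1],[1,2],[1,2,1]], matching the example below
\end{verbatim}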
For example, if
$U\;=\;X\to(X\to X)\to X$
then the reachable portion of the transition system $\Delta_U$ is
\begin{center}\vspace{1ex}\begin{math}
\psset{colsep=-2ex,rowsep=6ex,nodesep=.9ex,labelsep=1pt}\begin{psmatrix}
{} & \rnode{q0}{\star} & {} \\
{} & \rnode{q1}{\;\;X\to(X\to X)\to X} & {} \\
\rnode{q2}{X} & {} & \rnode{q3}{X\to X}
\transbetweena{q0}{q1}{1}{.45}
\transbetweenb{q1}{q2}{1}{.55}
\transbetweena{q1}{q3}{2}{.55}
\transbetweena{q3}{q2}{1}{.5}
\end{psmatrix}\end{math}\vspace{2ex}\end{center}
so the associated (non-backtracking) game (set of traces) $G(\Delta_U)$ is
$\{ \varepsilon, 1, 11, 12, 121 \}$,
where $\varepsilon$ denotes the empty sequence.
\begin{theorem}\label{theoremX}
Let $T$ be a lambda calculus type generated from a single base type
$X$ by implication $\to$. The $\eta$-expanded $\beta$-normal terms of type $T$
are in bijection with finite live
strategies on the $\pla$-backtracking game $\pbacktrack
{G(\Delta_T)}$.
\end{theorem}
\begin{proof} A routine induction: a restriction of the definability proof in \cite{Hug97}, in turn a variant of the definability proof in \cite{HO}.
\end{proof}
The $\eta$-expanded $\beta$-normal forms $t_n$ of $U\,=\,X\to (X\to
X)\to X$ (whose transition system was depicted above)
are\footnote{$f^n$ denotes $n$ applications of $f$: $\:f^0(x)=x$ and
$f^{n}(x)=f(f^{n-1}(x))$ for $n\ge 1$.}
\begin{displaymath}
\lambda x^X.\,\lambda f^{X\to X}.\,f^n(x)
\end{displaymath}
for $n\ge 0$
and the unique maximal trace of the corresponding live finite
strategy $\tau_n$
on $\pbacktrack{G(\Delta_U)}$ is
\begin{center}\vspace{5ex}\begin{math}
\psset{colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{1}
&
\rnode{2}{2}
&
\rnode{3}{1}
&
\rnode{4}{2}
&
\rnode{5}{1}
&
\cdots
&
\rnode{8}{2}
&
\rnode{9}{1}
&
\rnode{10}{1}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{160}{35}
\jptr{5}{4}{-140}{-40}
\jptr{8}{1}{160}{35}
\jptr{9}{8}{-140}{-40}
\jptr{10}{1}{155}{35}
\end{math}\vspace{1ex}\end{center}
with $n$ occurrences of $2$.
Below we depict this dialogue in the case $n=2$ (corresponding to the
term $\lambda x^X\,.\,\lambda f^{X\to X}\,.\,f(f\,x)$) with its states (as we
did for the chess dialogue on page~\pageref{chessdiagstates}). It is
easier to display with time running down the page, rather than from
left to right.
\begin{center}\vspace{1ex}
\psset{nodesep=6pt,colsep=5ex,rowsep=6ex}\begin{psmatrix}
\rnode{0}{$\star$}
\\
\rnode{1}{$X\to(X\to X)\to X$}
\\
\rnode{2}{$X\to X$}
\\
\rnode{3}{$X$}
\\
\rnode{4}{$X\to X$}
\\
\rnode{5}{$X$}
\\
\rnode{6}{$X$}
\\
\end{psmatrix}
\nccurve[arrows=<-,angleA=90,angleB=-90]{1}{0}\aput(0.5){1}
\nccurve[arrows=<-,angleA=90,angleB=-90]{2}{1}\bput(0.5){\!2}
\nccurve[arrows=<-,angleA=90,angleB=-90]{3}{2}\aput(0.5){1}
\nccurve[ncurv=.8,nodesepB=6pt,nodesepA=8pt,arrows=<-,angleA=55,angleB=-55,offsetA=-1.2ex,offsetB=-.2ex]{4}{1}\bput(0.5){2}
\nccurve[arrows=<-,angleA=90,angleB=-90]{5}{4}\aput(0.5){1}
\nccurve[ncurv=.8,nodesepB=8pt,,nodesepA=3pt,arrows=<-,angleA=40,angleB=-45,offsetA=-.2ex,offsetB=-1.2ex]{6}{1}\bput(0.5){1}
\vspace{4ex}\end{center}
This notation highlights the similarity with Lorenzen's
dialogues \cite{Lor60,Fel85}. What we show as states, he referred to as
\emph{assertions}.
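In passing, the inhabitants $t_n$ themselves have a one-line Haskell
rendering (ours): essentially Church numerals, with the argument order
of $U$:
\begin{verbatim}
{-# LANGUAGE RankNTypes #-}

-- t_n = \x:X. \f:X->X. f^n(x), rendered polymorphically.
tN :: Int -> (forall x. x -> (x -> x) -> x)
tN n = \x f -> iterate f x !! n

main :: IO ()
main = print (tN 2 (0 :: Int) succ)   -- 2
\end{verbatim}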
\subsection{The copycat condition}
In this subsection we introduce the \defn{copycat condition} \cite{Hug97}
on strategies, which is crucial for uniformity (more precisely, for us
to be able to implement uniformity via \emph{copycat expansion}
later). This condition is a slight restriction of a condition of
Lorenzen for dialogue games (listed as condition (D10) in
\cite{Fel85}). We shall introduce the condition in the context of
lambda calculus games; the generalisation to system $F$ games in the
sequel is immediate.
Extend the set $\pmb{\lambda}$ of lambda calculus types from the previous
subsection to those generated by implication $\to$ from the ambient set
$\mathsf{Var}$ of system $F$ type variables. The transition system
$\Delta_T$ associated with a $\pmb{\lambda}$ type $T$ is defined exactly as
in the previous section, but now in the transitions
$$T_1\to T_2\to\ldots\to T_n\to
X\hspace{3ex}\transto{i}\hspace{3ex}T_i{_{_{\rule{0ex}{2ex}}}}\hspace*{8ex}$$
$X$ may be \emph{any} type variable in $\mathsf{Var}$.
The \defn{colour} of a transition
$$T_1\to\ldots\to T_n\to
X\hspace{3ex}\transto{i}\hspace{3ex}U_1\to\ldots\to U_m\to Y$$
(where necessarily $T_i\,=\,U_1\to\ldots\to U_m\to Y$) is the rightmost variable
$Y$ in the target.
The colour of a move in a trace of $G(\Delta_T)$ or a dialogue in
$\backtrack{G(\Delta_T)}$ is the colour of the associated transition.
A dialogue in the $\mathsf{P}$-backtracking game $\pbacktrack{G(\Delta_T)}$
satisfies the \defn{copycat condition} if the colour of every
$\mathsf{P}$-move (even-index move) is equal to the colour of the preceding
$\mathsf{O}$-move.\footnote{Lorenzen's condition (D10) required the colour
to be equal to \emph{any} prior $\mathsf{O}$-move in a $\mathsf{P}$-backtracking
trace.} A strategy satisfies the copycat condition if each of its
traces satisfies the copycat condition.
As a simple illustration of the copycat condition, consider the type
$$U\;\;\;=\;\;\; X\to Y\to X$$ whose transition system $\Delta_U$ is
below (only reachable states shown).
\begin{center}\vspace{1ex}\begin{math}
\psset{colsep=-2ex,rowsep=6ex,nodesep=.9ex,labelsep=1pt}\begin{psmatrix}
{} & \rnode{q0}{\star} & {} \\
{} & \rnode{q1}{\;\;X\to Y\to X} & {} \\
\rnode{q2}{X} & {} & \rnode{q3}{Y}
\transbetweena{q0}{q1}{1}{.45}
\transbetweenb{q1}{q2}{1}{.55}
\transbetweena{q1}{q3}{2}{.55}
\end{psmatrix}\end{math}\vspace{2ex}\end{center}
The colour of the top and lower-left transitions is $X$, and the
colour of the lower-right transition is $Y$.
The associated (non-backtracking) game (set of traces) $G(\Delta_U)$ is
$\{ \varepsilon, 1, 11, 12 \}$.
There are two live strategies in the $\mathsf{P}$-backtracking game
$\pbacktrack{G(\Delta_U)}$, whose maximal traces are as follows, with
the colour of each move shown beneath it in brackets:
\begin{center}\vspace{2ex}\begin{math}
\psset{colsep=4ex,nodesep=1.5pt,rowsep=0ex}\begin{psmatrix}
\rnode{1}{1}
&
\rnode{2}{1}
\\
(X) & (X)
\end{psmatrix}
\jptr{2}{1}{150}{35}
\hspace*{20ex}\begin{psmatrix}
\rnode{1}{1}
&
\rnode{2}{2}
\\
(X) & (Y)
\end{psmatrix}
\jptr{2}{1}{150}{35}
\end{math}\vspace{1ex}\end{center}
The first strategy satisfies the copycat condition, while the second
does not. The strategies correspond (respectively) to the terms
$$\lambda x^X.\lambda y^Y.x\hspace{23ex}\lambda x^X.\lambda y^Y.y$$
of which only the former is typed correctly as $X\to Y\to X$. The
second attempts to return $y$ of type $Y$, while the rightmost
variable of $X\to Y\to X$ is $X$. This corresponds to the failure of
the copycat condition for the second strategy.
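
This check is easy to mechanise. The following Python sketch (our own ad
hoc encoding, in which a dialogue is abstracted to the list of colours of
its moves; it is an illustration only, not part of the formal development)
verifies the two strategies above.
\begin{verbatim}
# Sketch (our own encoding): a dialogue is abstracted to the list of
# colours of its moves; move 1 is an O-move, move 2 a P-move, and so on.
def copycat(colours):
    # every P-move (even move number, odd 0-based index) must repeat
    # the colour of the immediately preceding O-move
    return all(colours[i] == colours[i - 1]
               for i in range(1, len(colours), 2))

# the two live strategies on X -> Y -> X discussed above:
assert copycat(['X', 'X'])        # lambda x. lambda y. x : condition holds
assert not copycat(['X', 'Y'])    # lambda x. lambda y. y : condition fails
\end{verbatim}
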
The following is a corollary of the theorem above.
\begin{theorem}\label{theoremXY}
Let $T$ be a lambda calculus type generated from the set
$\mathsf{Var}$ of system $F$ type variables by implication $\to$. The
$\eta$-expanded $\beta$-normal terms of type $T$ are in bijection with
finite live strategies on the $\mathsf{P}$-backtracking game $\pbacktrack
{G(\Delta_T)}$ which satisfy the copycat condition.
\end{theorem}
\subsection{Remarks on Hyland-Ong arenas}
\emph{This section is for readers familiar with Hyland-Ong
games \cite{HO}. It can be skipped without loss of continuity.}
The set of non-empty traces of a transition system forms a forest
under the prefix order, and is therefore an \defn{arena} in the sense
of Hyland and Ong \cite{HO}.
Write $\mathsf{A}(\Delta)$ for the arena of a transition system $\Delta$,
and for a lambda calculus type $T$ abbreviate $\mathsf{A}(\Delta_T)$ to
$\mathsf{A}(T)$.
The following arena isomorphism is immediate:
\begin{displaymath}
\begin{array}{r@{\;\;\;\;\;\cong\;\;\;\;\;}l}
\mathsf{A}(T\to U) & \mathsf{A}(T)\Rightarrow \mathsf{A}(U) \\[1ex]
\end{array}
\end{displaymath}
where $\Rightarrow$ is the Hyland-Ong function space operation on
arenas and $\cong$ is isomorphism of forests.
Elements of these arenas are sequences (traces), and therefore Hyland-Ong dialogues in them suffer
some redundancy, as in (for example)
\begin{center}\vspace{4ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{a}
&
\rnode{2}{ab}
&
\rnode{3}{abc}
&
\rnode{4}{ab'}
&
\rnode{5}{abc'}
&
\rnode{6}{abcd}
&
\rnode{7}{abcde}
&
\rnode{8}{abcd'}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{150}{35}
\jptr{5}{2}{-140}{-40}
\jptr{6}{3}{150}{30}
\jptr{7}{6}{-140}{-40}
\jptr{8}{3}{150}{30}
\end{math}\vspace{4ex}\end{center}
in the arena generated by a transition system with transition labels
$a,b,b',c,c',d,d'$, whose traces include $abcde$, $abcd'$, $abc'$,
\emph{etc.}\
Clearly, one can abbreviate this dialogue to
\begin{center}\vspace{4ex}\begin{math}
\psset{nodesep=2pt,colsep=4ex,nodesep=1.5pt}\begin{psmatrix}
\rnode{1}{a}
&
\rnode{2}{b}
&
\rnode{3}{c}
&
\rnode{4}{b'}
&
\rnode{5}{c'}
&
\rnode{6}{d}
&
\rnode{7}{e}
&
\rnode{8}{d'}
\end{psmatrix}
\jptr{2}{1}{150}{35}
\jptr{3}{2}{-140}{-40}
\jptr{4}{1}{150}{35}
\jptr{5}{2}{-140}{-40}
\jptr{6}{3}{150}{30}
\jptr{7}{6}{-140}{-40}
\jptr{8}{3}{150}{30}
\end{math}\vspace{4ex}\end{center}
eliminating the redundancy. This is how we have opted to formalise
the backtracking games over transition systems in the previous
subsections. Note, however, that this notational difference is trivial,
and in spirit they are essentially Hyland-Ong arena/dialogue games. The
notation is simply tailored towards arenas whose forests are described
as sets of traces, rather than partial-order forests as used
originally by Hyland and Ong
\cite{HO}.
Since our games are Hyland-Ong games, and we have the isomorphism
relating syntactic $\to$ with arena $\Rightarrow$ above, we obtain
composition (hence a category) as standard Hyland-Ong
composition of innocent strategies.
In the next section we extend the lambda calculus transition systems
to system $F$ transition systems. The following arena (forest)
isomorphisms will then hold:
\begin{displaymath}
\begin{array}{r@{\;\;\;\;\;\cong\;\;\;\;\;}l}
\mathsf{A}(T\to U) & \mathsf{A}(T)\Rightarrow \mathsf{A}(U) \\[1ex]
\mathsf{A}(\forall X.T) & \prod_{\mathsf{Types }\,U}\mathsf{A}(T[U/X])
\end{array}
\end{displaymath}
The arena-product $\prod$ (disjoint union of forests) is taken over
all system $F$ types. Composition in our system $F$ model is simply
Hyland-Ong composition.
\section{System $F$ games (hypergames)}
We extend the lambda calculus transition systems defined above to all
of system $F$. States will be types, as before, and a transition will
remain a branch choice $i\ge 1$, but now together with some types
to instantiate quantifiers. We begin by precisely defining
quantifier instantiation.
Recall that a \defn{prenex type} is a type in which all quantifiers have
been pulled to the front by exhaustively applying the
rewrite $$T\to
\forall X.U\;\;\;\leadsto\;\;\; \forall X\,.\,T\to U$$
throughout the type.\footnote{Without loss of generality, in the rewrite assume $X$ is not free in
$T$.}
Thus a type is prenex if and only if it has the form
$$\forall X_1\,.\forall X_2\,.\,\cdots\,\forall X_m\,.\:T_1\to T_2\to \ldots\to T_n\to X$$
for prenex types $T_i$ and type variables $X$ and $X_j$.
Write $T[V/X]$ for the result of substituting the type $V$ for
the free variable $X$ throughout the prenex type $T$, and (if
necessary) converting to prenex form. For example
$$(X\to X)[\forall Y.Y/X]\;\;\;\;=\;\;\;\;\forall Y.\:(\forall Y'.Y')\to Y$$
via: $$(X\to X)[\forall Y.Y/X]\;\;\;\overset{\text{substitute}}{\leadsto}\;\;\; (\forall Y.Y)\to (\forall Y.Y)
\;\;\;\overset{\text{prenex}}{\leadsto}\;\;\;\forall Y.(\forall Y'.Y')\to Y\,.$$
Define $$\forall X\,.T\:\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\: V\;\;\;\;=\;\;\;\; T[V/X]$$ called
the result of \defn{importing} $V$ into $\forall X.T$. For example,
$$\forall X\,.\,X\to X\;\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\; \forall Y.Y\;\;\;\;=\;\;\;\;\forall Y.\:(\forall Y'.Y')\to Y\,.$$
Write $T\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_1V_2\ldots V_n$ for the iterated importation
$(\ldots((T\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_1)\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_2)\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\ldots)\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_n$, when
defined. For example,
$$\forall X\,.\,X\to X\;\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\;(\forall Y.Y)\;(Z\to Z)\;\;\;=\;\;\;(\forall Y.Y)\to Z\to Z\,.$$
A prenex type is \defn{resolved} if it has no outermost quantifier,
\emph{i.e.},
it has the form $$T_1\to T_2\to \ldots\to T_n\to X\,,$$
a form which we shall often abbreviate to
$$T_1T_2\ldots T_n\;\;\to\;\;X$$
Each $T_i$ is called a \defn{branch}.
If $T\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} U_1\ldots U_n$ is resolved,
we say that $U_1\ldots U_n$ \defn{resolves} $T$
to $T\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} U_1\ldots U_n$.
For example, we saw above that $(\forall Y.Y)(Z\to Z)$ resolves
$\forall X\,.\,X\to X$ to $(\forall Y.Y)\to Z\to Z$.
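
The following Python sketch (our own tuple encoding, for illustration
only) implements prenexification, substitution, importation and
resolution as just defined. Bound variables are freshened by priming, so
its outputs agree with the examples above only up to renaming of bound
variables.
\begin{verbatim}
# A minimal sketch (our own encoding): a variable X is ('var','X'), an
# arrow T -> U is ('arr',T,U), and forall X.T is ('all','X',T).
used = set()                       # names already taken (grows across calls)

def vars_of(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'arr':
        return vars_of(t[1]) | vars_of(t[2])
    return {t[1]} | vars_of(t[2])

def fresh(x):
    while x in used:
        x += "'"
    used.add(x)
    return x

def rename(t, old, new):           # rename free occurrences of old to new
    if t[0] == 'var':
        return ('var', new) if t[1] == old else t
    if t[0] == 'arr':
        return ('arr', rename(t[1], old, new), rename(t[2], old, new))
    return t if t[1] == old else ('all', t[1], rename(t[2], old, new))

def subst(t, x, v):                # substitute v for free x throughout t
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'arr':
        return ('arr', subst(t[1], x, v), subst(t[2], x, v))
    return ('all', t[1], subst(t[2], x, v))

def prenex(t):                     # exhaustively T -> forall X.U ~> forall X. T -> U
    if t[0] == 'var':
        return t
    if t[0] == 'all':
        x = fresh(t[1])
        return ('all', x, prenex(rename(t[2], t[1], x)))
    body, qs = prenex(t[2]), []    # pull the target's quantifiers outwards
    while body[0] == 'all':
        qs.append(body[1]); body = body[2]
    out = ('arr', prenex(t[1]), body)
    for x in reversed(qs):
        out = ('all', x, out)
    return out

def imp(t, v):                     # importation: (forall X.T) . V = T[V/X]
    assert t[0] == 'all'
    used.update(vars_of(t) | vars_of(v))
    return prenex(subst(t[2], t[1], v))

def resolve(t, *imports):          # iterated importation T . V1 ... Vn
    for v in imports:
        t = imp(t, v)
    return t

idty = ('all', 'X', ('arr', ('var', 'X'), ('var', 'X')))  # forall X. X -> X
bot  = ('all', 'Y', ('var', 'Y'))                         # forall Y. Y
zz   = ('arr', ('var', 'Z'), ('var', 'Z'))                # Z -> Z
# prints (forall Y'''.Y''') -> (Z -> Z), alpha-equivalent to the example:
print(resolve(idty, bot, zz))
\end{verbatim}
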
Define the transition system $\Delta_T$ of a prenex type $T$ as follows:
\begin{itemize}
\item
States are resolved prenex types, with an additional initial state
$\star$\,.\footnote{Prenex types were drawn graphically in
\cite{Hug97,Hug00}, in a manner akin to B\"ohm trees, and called
\emph{polymorphic arenas}.}
\item
A label is a pair $\langle i,V_1\ldots V_k\rangle$ where $i\ge 1$ is a
\defn{branch choice}, $k\ge 0$ and each $V_i$ is a type, called an \defn{import}.
\item Transitions. A $1$-labelled transition
$$\star\hspace{1ex}\transto{1}\hspace{1ex}T$$ from the initial state
to $T$, and transitions
$$\rule{0ex}{3ex}T_1T_2\ldots T_n\to X\;\;\;\;\;\;\;\transto{\langle
i,V_1\ldots V_k\rangle}\;\;\;\;\;\;\;U_1U_2\ldots U_m\to Y$$ whenever $1\le i\le n$
and
$$T_i\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_1\ldots V_k\;\;\;=\;\;\;U_1U_2\ldots U_m\to Y$$
(Thus a transition chooses a branch $T_i$ and resolves it to
form the next state.)
\end{itemize}
More generally, the transition system of a type is the transition system of its prenex form.
An example transition is shown below.
$$(X'\to X'\to X')\,\to\,(\forall X.X\to X)\,\to\,X''
\hspace*{10ex}
\transto{\langle 2,(\forall Y.Y)(Z_1\!\to\! Z_2\!\to\! Z)\rangle}
\hspace*{10ex}
(\forall Y.Y)\,\to\,Z_1\,\to\, Z_2\,\to Z$$
The branch choice $2$ selects the branch $\forall X.X\to X$ and the
imports $\forall Y.Y$ and $Z_1\!\to\! Z_2\!\to Z$ resolve this branch
to form the next state.\footnote{To obtain a category with products,
we extend system $F$ with products, and allow
import/resolution/substitution with products.}
\subsection{Implicit prenexification}\label{lazy}
Prenexification is a lynchpin of the hypergames approach \cite{Hug97}:
it is critical to the dynamics of hypergames that in a type
$T\to \forall X.U$
the quantifier $\forall X$ is available for instantiation.
Whether we make the prenexifications $T\to\forall
X.U\;\leadsto\;\forall X.T\to U$ explicit during play or not is
optional.
We can just as well leave prenexification implicit, by formally
designating $\forall X$ as available for instantiation in
$T\to\forall X.U\,$.
A quantifier $\forall X$ in a type $T$ is \defn{available} if $T$ has
any of the following forms:\footnote{We assume without loss of
generality throughout this section that all bound variables are
distinct from one another and from the free variables.}
\begin{itemize}
\item $T\,=\,\forall X.U$
\item $T\,=\,U\to T'$ and $\forall X$ is available in $T'$
\item $T\,=\,\forall Y. T'$ and $\forall X$ is available in $T'$.
\end{itemize}
For example, $\forall X$ and $\forall Y$ are available in
$\forall Y.(\forall Z.Y\to Z)\to \forall X.X$,
but $\forall Z$ is
not.\footnote{Note that $\forall X$ is available in $T$ iff it is one
of the outermost quantifiers in the prenex form $\widetilde T$ of $T$
(\emph{i.e.}, $\widetilde T\,=\,\forall X_1\ldots \forall X_k.U$ and $X$ is
among the $X_i$). In this sense, prenexification is implicit, or
``lazy'': from a behavioural point of view, we're still working with
prenex types.}
Type resolution and importation are tweaked in the obvious way, as follows.
A type is \defn{resolved} if it has no available quantifier, \emph{i.e.}, if
it has the form
$$T_1\to T_2\to \ldots\to T_n\to X\;\;\;\;\;=\;\;\;\;\;T_1\ldots T_n\to X$$ for
$n\ge 0$ and types $T_i$, called \defn{branches}. (All we have done is drop the requirement that the $T_i$ be prenex.)
Let $\forall X$ be the leftmost available quantifier in a type $T$,
and let $T^X$ be the result of deleting $\forall X$ from $T$ (\emph{e.g.}\ if
$T\,=\,U\to \forall X.V$ then $T^X\,=\,U\to V$).
Define $$T\;\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\;V\;\;\;=\;\;\; T^X[V/X]\,,$$ the result of
\defn{importing} a type $V$ into $T$, and define \defn{iterated
importation} $T\,\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\,V_1\ldots V_n$ as before.
The (lazy style) transition system $\Delta_T$ of a system $F$ type
remains essentially unchanged:
\begin{itemize}
\item
States are system $F$ types, with an additional initial state
$\star$\,.
\item
A label is a pair $\langle i,V_1\ldots V_k\rangle$ where $i\ge 1$ is a
\defn{branch choice}, $k\ge 0$ and each $V_i$ is a type, called an \defn{import}.
\item Transitions. A $1$-labelled transition
$$\star\hspace{1ex}\transto{1}\hspace{1ex}T$$ from the initial state
to $T$, and transitions
$$\rule{0ex}{3ex}T_1T_2\ldots T_n\to X\;\;\;\;\;\;\;\transto{\langle
i,V_1\ldots V_k\rangle}\;\;\;\;\;\;\;U_1U_2\ldots U_m\to Y$$ whenever $1\le i\le n$
and
$$T_i\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,} V_1\ldots V_k\;\;\;=\;\;\;U_1U_2\ldots U_m\to Y$$
\end{itemize}
\section{Black box characterisation of system $F$ terms}
A \defn{black box importation} is an importation of the form
$$\forall X.T\;\,\mbox{\raisebox{-.3ex}{\LARGE$\cdot$}\,}\; X\;\;\;\;\;=\;\;\;\;\; T\,,$$
simply deleting the quantifier. Thus the bound variable $X$ becomes
free. We refer to $X$ as a \defn{black box}. (We continue to assume,
without loss of generality, that within a type all bound variables are
distinct from one another and from all free variables.)
Let $T$ be a closed\footnote{No free variables.} system $F$ type and $d$ a dialogue in the
$\mathsf{P}$-backtracking game $\pbacktrack{G(\Delta_T)}$.
The first player $\mathsf{O}$ \defn{imports black boxes} in $d$ if every
importation associated with $\mathsf{O}$ in $d$ is a black box importation,
and the second player $\mathsf{P}$ \defn{respects black boxes} in $d$ if
every import associated with $\mathsf{P}$ takes its free variables among the
black boxes imported hitherto by $\mathsf{O}$. A dialogue in which $\mathsf{O}$
imports black boxes and $\mathsf{P}$ respects them is a \defn{black box
dialogue}.
The \defn{black box game} $\bbgame{G(\Delta_T)}$ is the
restriction of the $\mathsf{P}$-backtracking game $\pbacktrack{G(\Delta_T)}$
to black box dialogues.
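
The black box discipline is pure bookkeeping, as the following Python
sketch illustrates (our own abstraction: each move is reduced to the
free-variable sets of its imports, and freshness of an $\mathsf{O}$-import
is approximated by checking against the boxes seen so far).
\begin{verbatim}
# Sketch (our own abstraction): a dialogue's moves are recorded as pairs
# (player, list of free-variable sets of that move's imports).
def black_box_dialogue(moves):
    boxes = set()                            # black boxes imported by O
    for player, import_fvs in moves:
        for fv in import_fvs:
            if player == 'O':
                if len(fv) != 1 or fv & boxes:   # must be one fresh variable
                    return False
                boxes |= fv
            elif not fv <= boxes:            # P draws free vars from O's boxes
                return False
    return True

assert black_box_dialogue([('O', [{'X'}]), ('P', [{'X'}, set()])])
assert not black_box_dialogue([('O', [{'X'}]), ('P', [{'Z'}])])
\end{verbatim}
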
The copycat condition extends from the lambda calculus case to system
$F$ in the obvious way: the colour of a transition is once again the
rightmost variable of the target.
\begin{theorem}\label{theoremBB}
The $\eta$-expanded $\beta$-normal terms of a closed system $F$ type $T$ are
in bijection with finite live strategies on the black box game
$\bbgame{G(\Delta_T)}$ which satisfy the copycat condition.
\end{theorem}
\begin{proof}
The definability proof in \cite{Hug97}.
\end{proof}
\section{Uniformity by copycat expansion}
The black box game is highly unsymmetric:
\begin{itemize}
\myitem{1} $\mathsf{P}$ can backtrack, while $\mathsf{O}$ cannot.
\myitem{2} $\mathsf{P}$ is subject to the copycat condition, while $\mathsf{O}$ is not.
\myitem{3} $\mathsf{O}$ can only import black boxes (free variables); $\mathsf{P}$ can import
arbitrary types, so long as their free variables are prior black
boxes.
\end{itemize}
To compose strategies we must symmetrise the game, so that $\mathsf{O}$ and
$\mathsf{P}$ can interact.
A symmetrisation of backtracking (1) was obtained by Coquand
\cite{Coq91,Coq95}. A shared history of two asymmetric strategies is
built, in which both players backtrack. Each time either asymmetric
strategy plays a move, it can only see a projection of the shared
history in which the opposing player does not backtrack.
This interaction was made lambda-calculus specific by Hyland-Ong
\cite{HO}, who called the projections \emph{views} and called the symmetrised
strategies \emph{innocent}.
Symmetrising the copycat condition (2) will be automatic, coming as a
simple side effect of the views: we simply demand that, in their
respective views, both strategies adhere to the copycat condition.
We shall symmetrise with respect to black boxes (3) via the notion of
\emph{copycat expansion} \cite{Hug06h} recalled below.\footnote{Copycat
expansion was implicit in \cite{Hug00}, occurring during interaction.
In \cite{Hug06h} it was made explicit, being applied to the strategies
prior to interaction, rather than just during interaction.}
Symmetrising (1) yields interaction for lambda calculus over a single
base type symbol. Symmetrising (1) \& (2) yields interaction for
lambda calculus over a set of base type symbols. Symmetrising
(1)--(3) yields interaction for system $F$.
\begin{center}
\begin{tabular}{|l|c|}
\hline
\bf Symmetrising & \bf yields interaction for \\ \hline
(1) & $\lambda$, single base type \\ \hline
(1) \& (2) & $\lambda$, set of base types \\ \hline
(1) \& (2) \& (3) & system $F$ \\ \hline
\end{tabular}
\end{center}
We shall refer to the copycat condition together with copycat
expansion as \defn{uniformity}.
A visual summary of the symmetrisation is below, where $T$ is a system $F$ type.
\begin{center}\vspace*{1ex}%
\Rnode{b}{$\bbgame{G(\Delta_T)}$}
\hspace*{12ex}
\Rnode{p}{$\pbacktrack{G(\Delta_T)}$}
\hspace*{12ex}
\Rnode{s}{$\backtrack{G(\Delta_T)}$}%
\ncline[nodesep=.5ex,arrows=->]{b}{p}\naput{uniformity}%
\ncline[nodesep=.5ex,arrows=->]{p}{s}\naput{innocence}%
\end{center}
An arrow here indicates how a strategy on the left lifts to a strategy on the right.
\subsection{Symmetrising black boxes via copycat expansion}
Let $T$ be a closed system $F$ type and $d$ a dialogue in the
$\mathsf{P}$-backtracking game $\pbacktrack{G(\Delta_T)}$ which satisfies
the copycat condition.
Let $X$ be a black box in $d$, let $U$ be a type, and define $d[U/X]$
as the result of substituting $U$ for free occurrences of $X$ in the
imports of $d$.
\begin{center}
\Large\framebox{To be continued\ldots}
\end{center}
\small
\bibliographystyle{myalphaams}
\section{Introduction}
The theory of stochastic differential equations driven by a fractional Brownian motion in infinite dimension has been widely examined by many researchers
due to various mathematical models in different fields, such as physics, population dynamics, ecology, biological systems, biotechnology, optimal control, theory of elasticity,
electrical networks, and several other areas of engineering and science. There is now a rich literature on the topic of stochastic equations driven by a fractional Brownian motion;
for example, \cite{duncan2} proved the existence and uniqueness of a mild solution for a class of stochastic differential equations in a Hilbert space with a cylindrical fractional Brownian motion with Hurst parameter $H\in (\frac{1}{2},1)$, and \cite{MaslNual} established the existence and uniqueness of a mild solution for nonlinear stochastic evolution equations in a Hilbert space. Moreover, \cite{Caraballo2011} discussed mild solutions for stochastic delay evolution equations driven by a fractional Brownian motion. \\The study of neutral stochastic functional differential equations (NFSDEs) driven by a fractional Brownian motion (fBm) has become an active area of investigation.
\cite{Boufoussi} analysed the existence and uniqueness of a mild solution for a neutral stochastic differential equation with finite delay driven by fractional Brownian motion in a Hilbert space, and established some sufficient conditions ensuring the exponential decay to zero in mean square for the solution. In \cite{Caraballo} the authors studied the existence
and uniqueness of mild solutions to neutral stochastic delay functional integro-differential equations perturbed by fractional Brownian motion. Since then, many other works have followed dealing with the same subject, but all these works consider NFSDEs with deterministic diffusion coefficients. Recently, \cite{Boufoussi2}, inspired by the work of \cite{ferrante}, proved the existence of a mild solution to delay differential equations driven by a fractional Brownian motion in a Hilbert space and with nonlinear stochastic diffusion terms. Following this line, our main objective in this paper is to generalize the result in \cite{Caraballo} to a class of neutral stochastic functional integro-differential equations with nondeterministic diffusion terms. More precisely, we consider the following stochastic equation:
\begin{eqnarray}\label{eq1}
&& d[x(t)+g(x(t-r))]=A[x(t)+g(x(t-r))]dt\nonumber\\
&&\qquad\qquad +\left[\int_0^tB(t -s)\left[x(s)+g(x(s-r))\right]ds+ f(x(t-r))\right]dt\nonumber\\
&&\qquad \qquad+\sigma(x(t-r))dB^H(t),\qquad\qquad\qquad\qquad\qquad\qquad\;0\leq t \leq T,\nonumber\\
x(t)&=&\varphi (t), \; -r \leq t \leq 0\,,
\end{eqnarray}
where $ A $ is a closed linear operator on a Hilbert space $V$ with domain $D(A)\subset V $.
For all $t\geq 0,\, B(t)$ is a closed linear operator with domain $ D(B)\supset D(A)$ independent
of $t$. $B^H$ is a fractional Brownian motion
with Hurst parameter $H>1/2$ defined in a complete probability
space $(\Omega,\mathcal{F},\mathbb{P})$,\, $f,\sigma,\, g:V \rightarrow V$ are
appropriate functions, while $\varphi:[-r,0]\rightarrow V$ is a continuous function. The importance of considering this equation lies in the difficulties caused by the presence of
the stochastic integral term, which is defined here in the Skorohod sense and hence has to be handled carefully. We prove the existence and uniqueness of a mild solution of equation (\ref{eq1}). The proof is based on an iterative procedure to estimate the Malliavin derivatives of the solution. The technical nature of this method makes it difficult to answer certain classical questions, such as studying the stability of the solution.
\\
The theory of integro-differential equations with resolvent operators is an important branch of differential equations, which has an extensive physical background and provides useful mathematical models for many fields of applications. This is why it has received much attention in recent years. The resolvent operator is similar to the semigroup operator for abstract differential equations in Banach spaces. However, the resolvent operator does not satisfy semigroup properties.
This paper is organized as follows. In Section 2 we introduce some notation, concepts, and basic results about fractional Brownian motion and stochastic integration in Hilbert spaces. The existence and uniqueness of mild solutions are discussed in Section 3. In Section 4, we investigate the absolute continuity of the law of
$F(x(t))$, where $x(t)$ is the mild solution obtained in Section 3 and $F$ is a function satisfying appropriate conditions.
Finally, in Section 5 we exhibit an example to illustrate our previous abstract results.
\section{Preliminaries}
In this section, stochastic integrals with respect to a fractional Brownian motion in a separable Hilbert space are introduced, and some basic properties of this integral are noted.\\
Let $(\Omega,\mathcal{F}, \mathbb{P})$ be a complete probability space.
Consider a time interval $[0,T]$ with arbitrary fixed horizon $T$ and let
$\{B^H(t),\; t \in [0, T ]\}$ be a one-dimensional fractional Brownian motion
with Hurst parameter $H\in(1/2,1)$. This means by definition that $B^H$ is a centred Gaussian process with covariance function:
$$ R_H(s, t) =\frac{1}{2}(t^{2H} + s^{2H}-|t-s|^{2H}).$$
Moreover $B^H$ has the following Wiener integral representation:
$$B^H(t) =\int_0^tK_H(t,s)\, dB(s)$$
where $B = \{B(t) :\; t\in [0,T]\}$ is a Wiener process, and $K_H(t; s)$ is
the kernel given by
$$K_H(t, s )=c_Hs^{\frac{1}{2}-H}\int_s^t (u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}du$$
for $t>s$, where $c_H=\sqrt{\frac{H(2H-1)}{\beta (2-2H,H-\frac{1}{2})}}$ and $\beta(\cdot,\cdot)$ denotes the Beta function. We put $K_H(t, s ) =0$ if $t\leq s$.\\
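
For numerical experiments it is often convenient to sample $B^H$ directly
from its covariance rather than through the kernel $K_H$; the following
Python sketch (the standard Cholesky method, included here only as an
illustration) does this on a uniform grid.
\begin{verbatim}
# Simulation sketch: Cholesky factorisation of the covariance R_H.
import numpy as np

def fbm_path(n=500, T=1.0, H=0.7, seed=0):
    t = np.linspace(T / n, T, n)             # grid t_1,...,t_n  (B^H(0) = 0)
    s, u = np.meshgrid(t, t, indexing='ij')
    R = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(R)                # R is positive definite
    z = np.random.default_rng(seed).standard_normal(n)
    return t, L @ z                          # Gaussian vector with covariance R
\end{verbatim}
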
We will denote by $\mathcal{H}$ the reproducing kernel Hilbert space of the fBm. In fact,
$\mathcal{H}$ is the closure of the set of indicator functions $\{1_{[0,t]},\ t\in[0,T]\}$ with respect to the scalar product
$$\langle 1_{[0,t]},1_{[0,s]}\rangle _{\mathcal{H}}=R_H(t , s).$$
The mapping $1_{[0,t]}\rightarrow B^H(t)$ can be extended to an isometry between $\mathcal{H}$
and the first Wiener chaos and we will denote by $B^H(\varphi)$ the image of $\varphi$ by the previous isometry.
We recall that for $\psi,\varphi \in \mathcal{H}$ their scalar product in
$\mathcal{H}$ is given by
$$\langle \psi,\varphi\rangle _{\mathcal{H}}=H(2H-1)\int_0^T\int_0^T\psi(s)
\varphi(t)|t-s|^{2H-2}dsdt\,.$$
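
As a quick numerical sanity check of this formula (our own, using the
midpoint rule; the weak singularity of the weight at $s=t$ slows
convergence), one can verify that $\langle 1_{[0,t]},1_{[0,s]}\rangle
_{\mathcal{H}}$ reproduces $R_H(t,s)$:
\begin{verbatim}
# Check <1_[0,t], 1_[0,s]>_H = R_H(t,s) with phi_H(u) = H(2H-1)|u|^{2H-2}.
import numpy as np

H, t, s, N = 0.7, 0.6, 0.9, 2000
u = (np.arange(N) + 0.5) * t / N             # midpoints on [0,t]
v = (np.arange(N) + 0.5) * s / N             # midpoints on [0,s]
phi = H * (2*H - 1) * np.abs(u[:, None] - v[None, :])**(2*H - 2)
lhs = phi.sum() * (t / N) * (s / N)
rhs = 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H))
print(lhs, rhs)                              # approximately equal
\end{verbatim}
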
Let us consider the operator $K_H^*$ from $\mathcal{H}$ to $L^2([0,T])$ defined by
$$(K_H^*\varphi)(s)=\int_s^T\varphi(r)\frac{\partial K_H}{\partial r}(r,s)dr\,.$$
We refer to \cite{nualart} for the proof of the fact that $K_H^*$ is an isometry between $\mathcal{H}$ and $L^2([0,T])$. Moreover for any $\varphi \in \mathcal{H}$, we have
$$ B^H(\varphi)=\int_0^T(K_H^*\varphi)(t)\, dB(t)\,.$$
Let $\mathcal{S}$ denote the class of smooth random variables such that a random variable $F \in \mathcal{S}$ has the form
\begin{equation}\label{l1}
F = f(B^H(\phi_1), \ldots , B^H(\phi_n))
\end{equation}
where $f$ belongs to $\mathcal{C}_b^{\infty}(\mathbb{R}^n),\; \phi_1, \ldots , \phi_n$ are in $\mathcal{H}$ and $n \geq 1$.
The derivative of a smooth random variable $F$ of the form $(\ref{l1})$ is the $\mathcal{H}$-valued random variable given by
$$DF =\sum_{j=1}^n \frac{\partial f}{\partial x_j}(B^H(\phi_1), \ldots , B^H(\phi_n))\phi_j.$$
We can define the iteration of the operator $D$ in such a way that for a smooth random variable $F$, the iterated derivative $D^kF$ is a random variable with values in $\mathcal{H}^{\otimes k}$.
Let $V$ be a real separable Hilbert space. We denote by $\mathcal{H}_V$ the completion of the pre-Hilbert space of $V$-valued bounded Borel measurable functions $F:[0,T]\rightarrow V$ with the norm induced from the inner product
$$\langle F, G\rangle_{\mathcal{H}_V}:=\int_0^T\int_0^T\langle F(s),G(t)\rangle_V \phi_H(t-s)ds\,dt$$
where $\phi_H(s)=H(2H-1)|s|^{2H-2}$. It is easily verified that $\mathcal{\widetilde{H}}_V \subseteq \mathcal{H}_V$, where $\mathcal{\widetilde{H}}_V$ is the Banach space of Borel measurable functions with the norm $\|\cdot\|_{\mathcal{\widetilde{H}}_V}$ given by
$$\|\varphi\|^2_{\mathcal{\widetilde{H}}_V}:= \int_0^T\int_0^T \|
\varphi(u)\|\, \| \varphi(v) \|\,\phi_H(u-v)\,du\, dv\,.$$
It can be seen that $\mathbb{L}^{1/H}([0,T],V)\subseteq \mathcal{\widetilde{H}}_V$ and, in particular, $ \mathbb{L}^{2}([0,T],V)
\subseteq\mathcal{\widetilde{H}}_V$. For more details we can refer to \cite{duncan1}.
Consider the family $\mathcal{S}_V$ of $V$-valued smooth random variables of
the form
$$F=\sum_{j=1}^n F_j v_j,\;\;\; v_j\in V,\;\; F_j\in \mathcal{S}\,. $$
Define $D^kF=\sum_{j=1}^n D^kF_j\otimes v_j,$ then $D^k$ is a closable operator from $\mathcal{S}_V \subset \mathbb{L}^p(\Omega,V)$ into $\mathbb{L}^p(\Omega,\mathcal{H}_V^{\otimes k})$ for any $p\geq 1$.\\
For any integer $k\geq 1$ and any real number $p\geq 1,$ we can define the semi-norm on $\mathcal{S}_V $
$$\|F\|_{k,p}^p=E\|F\|_V^p+E(\sum_{i=1}^k \|D^iF\|^p_{\mathcal{H}_V^{\otimes i}})\,.$$
We define the space $\mathbb{D}^{k,p}(V)$ as the completion of $\mathcal{S}_V$ with respect to the norm $\|F\|_{k,p}$, and we will write $\mathbb{D}^{k,p}(\mathbb{R})=\mathbb{D}^{k,p}.$
\begin{definition}
The divergence operator $\delta$ is the adjoint of the derivative operator, defined by means of the duality relationship
$$E\langle F, \delta (u) \rangle_V=E\langle DF, u\rangle_{\mathcal{H}_V}, \forall F\in \mathcal{S}_V$$
where $u$ is a random variable in $\mathbb{L}^2(\Omega,\mathcal{H}_V)$.
\end{definition}
\begin{definition}\label{def integr}
We say that $u$ belongs to the domain of the operator $\delta$, denoted by $Dom(\delta)$, if $F\mapsto \langle u,DF\rangle_{\mathcal{H}_V}$ is continuous on $\mathcal{S}_V$ with respect to the $\mathbb{L}^2(\Omega)$ norm.\\
For $u \in Dom(\delta) $, we denote $\delta (u)$ by $\int_0^T u(s) \, dB^H(s)$.
\end{definition}
Let us now consider the space $\widetilde{\mathcal{H}}_V\otimes
\widetilde{\mathcal{H}}_V$ of measurable functions $\varphi:[0,T]^2\rightarrow V$
such that
$$
\|\varphi\|^2_{\widetilde{\mathcal{H}}_V^{\otimes 2}}=\int_{[0,T]^4}
\|\varphi(s,v)\|\|\varphi(s',v')\|\phi_H(s-s')\phi_H(v-v')ds\,
dv\,ds'\,dv'<\infty\,.
$$
We denote by $\mathbb{D}^{1,2}(\widetilde{\mathcal{H}}_V)$ the space of processes $ u $ satisfying:
$$E\|u\|^2_{\widetilde{\mathcal{H}}_V}+E\|Du\|^2_{\widetilde
{\mathcal{H}}_V^{\otimes 2}} < \infty$$
It has been shown in \cite{duncan1} that $\mathbb{D}^{1,2}(\widetilde{\mathcal{H}}_V)$ is included in $Dom (\delta)$ and for a process $u$ in $\mathbb{D}^{1,2}(\widetilde{\mathcal{H}}_V)$
we have:
$$ E\|\delta(u)\|^2=E\|u\|^2_{\mathcal{H}_V}+E\int_{[0,T]^4}\langle D_pu(q), D_v u(s) \rangle
\phi_H(p-s)\phi_H(v-q)\,dp\,dq\,dv\,ds.$$
For fixed $m\geq 1$, we will say that a $V$-valued stochastic process $Z=\{Z(t),
t\in [0,T]\}$ satisfies condition $\mathcal{A}_{m}$ if $Z(t)\in \mathbb{D}^{m,2}(V)$ for any $t \in [0,T]$, and for any
$k\leq m $, we have
$$\displaystyle \sup_t \mathbb{E}\|Z(t)\|^2_V \leq c_{1} \;\;
\mbox{ and}\;\;\displaystyle \sup_t\sup_{u,|u|=k}\mathbb{E}\|D_u^kZ(t)\|^2_V\leq
c_{2,k}$$
for some positive constants $c_{1},c_{2,k}$, and where $\displaystyle\sup_{u,|u|=k} $ is the supremum taken on all the vectors $u$ of length $k$.
Notice that if $Z$ satisfies condition $\mathcal{A}_{m}$, then $Z$ belongs to $\mathbb{D}^{1,2}(\mathcal{\widetilde{H}}_V) $.
We now state some notation and basic facts about the theory of resolvent operators needed in the sequel. For additional details on resolvent operators, we refer to \cite{Grimmer} and \cite{pruss}.\\
Let $A:D(A)\subset V \rightarrow V$ be a closed linear operator and for all $t\geq 0,\, B(t)$ a closed linear operator with domain $ D(B(t))\supset D(A)$.
Let us denote by $X$ the Banach space $D(A)$, the domain of operator $A$, equipped with the graph norm
$$\|y\|_X :=\|Ay\|_V+\|y\|_V \;\;\mbox{for}\;\; y\in X.$$
We will denote by $C([0,+\infty),X)$ the space of all continuous functions from $[0,+\infty)$ into $X$, and by $\mathcal{B}(X,V)$ the set of all bounded linear operators from $X$ into $V$. Consider the following Cauchy problem
\begin{eqnarray}\label{cauchy}
v'(t) &=& Av(t)+\int_0^tB(t -s)v(s)ds \,,\,\;\; \mbox{for}\;\; t\geq 0,\nonumber\\
v(0) &=& v_0 \in V.
\end{eqnarray}
We recall the following definition (\cite{Grimmer})
\begin{definition} A resolvent operator of
Equation $(\ref{cauchy})$ is a bounded linear operator-valued function $R(t)\in \mathcal{L}(V)$ for $t\geq 0$, satisfying the following properties:
\begin{itemize}
\item [(i)] $ R(0) = I$ and $\|R(t)\|\leq Ne^{\beta t}$ for some constants $N$ and $\beta$.
\item [(ii)] For each $x\in V$, $R(t)x$ is strongly continuous for $t\geq 0$.
\item [(iii)] $R(t) \in \mathcal{L}(X) $ for $t\ge 0$. For $x \in X$, $R(.)x\in \mathcal{C}^1([0,+\infty);V)\cap \mathcal{C}([0,+\infty);X)$ and
$$R'(t)x = AR(t)x +\int_0^tB(t -s)R(s)xds= R(t)Ax+\int_0^tR(t -s)B(s)xds, \;\;\mbox{for}\;\; t\geq 0.$$
\end{itemize}
\end{definition}
The resolvent operator satisfies a number of properties reminiscent of a semigroup; it plays an important role in studying the existence of solutions and in establishing a variation of constants formula for nonlinear systems. To ensure the existence of the resolvent operator, we need the following hypotheses:
\begin{itemize}
\item [$(\mathcal{H}.1)$] $A$ is the infinitesimal generator of a $C_0$-semigroup $(T(t))_{t\geq0}$ on $V$.
\item [$(\mathcal{H}.2)$] For all $t\geq 0$, $B(t)$ is a closed linear operator from $X$
into $V$. Furthermore for any $y\in X$ the map $t\to B(t)y$ is bounded, differentiable and the derivative $t\to B'(t)y$ is bounded and uniformly continuous on $\mathbb{R}^+$.
\end{itemize}
The following theorem gives the existence conditions of a resolvent operator for the equation (\ref{cauchy}).
\begin{theorem}(\cite{Grimmer}) \label{res}
Assume that hypotheses $(\mathcal{H}.1)$ and $(\mathcal{H}.2)$ hold, then the Cauchy problem $(\ref{cauchy})$ admits a unique resolvent operator $(R(t))_{t\geq 0}$.
\end{theorem}
In what follows, we recall some existence results for the following integro-differential equation
\begin{eqnarray}\label{Cauchy-integro}
v'(t)&=& Av(t)+ \int_0^t B(t-s)v(s)ds+q(t),\quad \mbox{for}\ t\ge 0\\
v(0)&=&v_0\in V\nonumber
\end{eqnarray}
where $q:[0,+\infty)\to V$ is a continuous function.\\
\begin{definition}
A continuous function $v:[0,+\infty)\to V$ is said to be a strict solution of equation $(\ref{Cauchy-integro})$ if
\begin{itemize}
\item[$(i)$] $v\in C^1([0,+\infty),V)\cap C([0,+\infty),X)$,
\item[$(ii)$] $v$ satisfies equation $(\ref{Cauchy-integro})$ for $t\ge 0$.
\end{itemize}
\end{definition}
\begin{definition}
A function $v:[0,+\infty)\to V$ is called a mild solution of $(\ref{Cauchy-integro})$ if it satisfies the following variation of constants formula: for any $v(0)\in V$,
\begin{equation}\label{variation-cons}
v(t)=R(t)v(0)+\int_0^t R(t-s)q(s)ds\,,\,\,\,\, t\geq 0\,,
\end{equation}
where $R(t)$ is the resolvent operator of the Equation (\ref{cauchy}).
\end{definition}
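
To make the variation of constants formula (\ref{variation-cons}) concrete,
the following Python sketch (a scalar caricature, our own: $R(t)=e^{at}$
stands in for the resolvent operator) evaluates it by the rectangle rule.
\begin{verbatim}
# Scalar caricature of v(t) = R(t) v(0) + int_0^t R(t-s) q(s) ds.
import numpy as np

def mild_solution(a, q, v0, T=1.0, n=400):
    t, h = np.linspace(0.0, T, n + 1), T / n
    v = [np.exp(a * ti) * v0
         + h * sum(np.exp(a * (ti - s)) * q(s) for s in t[t < ti])
         for ti in t]
    return t, np.array(v)

t, v = mild_solution(a=-2.0, q=np.sin, v0=1.0)
\end{verbatim}
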
\begin{remark}
It has been shown in \cite{Grimmer} that under $(\mathcal{H}.1)$ and $ (\mathcal{H}.2)$ a strict solution of Equation $(\ref{Cauchy-integro})$ is a mild solution. Conversely, if in addition the function $q$ is sufficiently regular, a mild solution of Equation $(\ref{Cauchy-integro})$ with $v_{0}\in D(A)$ is a strict solution.\\
Clearly in our situation, due to the presence of the stochastic integral in the Equation (\ref{eq1}), we will not be concerned by strict solutions.
\end{remark}
\section{Existence and uniqueness of a mild solution}
In this section we study the existence and uniqueness of a mild solution for Equation (\ref{eq1}). In the sequel, we assume that the following conditions hold.
\begin{itemize}
\item [$(\mathcal{H}.3)$]
$f,\, g,\, \sigma:V \rightarrow V $ are bounded functions with bounded Fr\'echet
derivatives up to some order $m\geq1.$
\end{itemize}
Moreover, we assume that $\varphi \in \mathcal{C}([-r,0],\mathbb{L}^2(\Omega,V))$.
Similarly to the deterministic situation, we give the following definition of mild solutions for Equation (\ref{eq1}).
\begin{definition}
A $V$-valued process $\{x(t),\;t\in[-r,T]\}$ is called a mild solution of equation (\ref{eq1}) if
\begin{itemize}
\item[$i)$] $x(.)\in \mathcal{C}([-r,T],\mathbb{L}^2(\Omega,V))$,
\item[$ii)$] $x(t)=\varphi(t), \, -r \leq t \leq 0$.
\item[$iii)$]For arbitrary $t \in [0,T]$, we have
\begin{eqnarray*}
x(t)&=&R(t)[\varphi(0)+g(\varphi(-r))]-g(x(t-r))\\
&+&\int_0^t R(t-s)f(x(s-r))ds+\int_0^t R(t-s)\sigma(x(s-r))dB^H(s)\;\;
\mathbb{P}-a.s.\phantom{\int_0^2+2}\,,\\
\end{eqnarray*}
\end{itemize}
where $R(.)$ is the resolvent operator of the Cauchy problem (\ref{cauchy}).
\end{definition}
Our main result is the following:
\begin{theorem}\label{thm1}
Suppose that $(\mathcal{H}.1)$, $(\mathcal{H}.2)$ and $(\mathcal{H}.3)$ hold. Then equation $(\ref{eq1})$ admits a unique mild solution on $[-r,T]$ for every $T\le mr$.
\end{theorem}
For the proof we need the following lemma which can be proved by the same arguments as those used in \cite{Boufoussi2}.
\begin{lemma}\label{lem1}
Let $y=\{y(t),t\in [0,T]\}$ be a stochastic process.
\begin{itemize}
\item [$(i)$] If $y$ satisfies condition $\mathcal{A}_m$ and if $b:V\to V$ is a bounded function with bounded derivatives up to order $m$.
Then, the stochastic process\\ $\{Z(t)=b(y(t)), t\in [0,T]\}$ satisfies condition $\mathcal{A}_m$.
\item [$(ii)$] If $y$ satisfies condition $\mathcal{A}_m$, then, the stochastic process\\ $\{Z(t)=\int_0^t R(t-s)y(s)\,ds,\, t\in [0,T]\}$ satisfies condition $\mathcal{A}_m$.
\item [$(iii)$] If $y$ satisfies condition $\mathcal{A}_{m+1}$, then the stochastic Skorohod integral\\ $\{Z(t)=\int_0^t R(t-s)y(s)\, dB^H(s), t\in [0,T]\}$
is well defined and the stochastic process $Z=\{Z(t),\,t\in [0,T]\}$ satisfies condition $\mathcal{A}_m$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm1}]
To prove that the equation $(\ref{eq1})$ admits a unique solution on $[0,T]$, with $T\leq mr $, we construct the solution step by step. Let us consider the induction hypothesis $(H_n)$ for $1\leq n\le m$:\\
$ (H_n) $: The equation
\begin{eqnarray}\label{eq related to Hn}
x(t)&=& R(t)[\varphi(0)+g(\varphi(-r))]-g(x(t-r))+\int_0^tR(t-s)f(x(s-r))ds\nonumber\\
&+&\int_0^tR(t-s)\sigma(x(s-r))\, dB^H(s)
\,,\,\,\,\,\quad t\in [0,nr]\ \\
x(t)&=&\varphi(t),\,\quad t\in [-r,0]\nonumber
\end{eqnarray}
has a unique solution $x_{n}(t)$ which satisfies condition
$\mathcal{A}_{m-n}$.\\
Let us check $(H_1)$. For $t\in [0,r]$, equation $(\ref{eq related to Hn})$ can be written in the following form:
\begin{eqnarray*}
x_{1}(t)&=&\varphi(t),\,\quad t\in [-r,0]\\
x_{1}(t)&=& R(t)[\varphi(0)+g(\varphi(-r))]-g(\varphi(t-r))\\
&+&\int_0^tR(t-s)f(\varphi(s-r))ds+\int_0^tR(t-s)\sigma(\varphi(s-r))\, dB^H(s)
\,,\,\,\,\,\quad t\in [0,r]\ \\
\end{eqnarray*}
Since $\varphi$ is a deterministic continuous function, it follows that $x_1 (t)\in \mathbb{D}^{k,2}(V)$ for all $k\ge 1$. Therefore,
\begin{eqnarray*}
D_u x_1(t)=R(t-u)\sigma(\varphi(u-r))1_{u<t<r}
\end{eqnarray*}
and then $D^k x_1 (t)=0$ when $k\ge 2$. Thanks to the boundedness of the coefficients $f$, $g$ and $\sigma$ we can easily check that
\begin{eqnarray*}
\sup_t\mathbb{E}\|x_1 (t)\|^2\le c_1,\quad \sup_t\sup_{u,|u|=k}\mathbb{E}
\|D^k_u x_1 (t)\|^2\le c_{2,k}
\end{eqnarray*}
Then $x_1 $ satisfies condition $\mathcal{A}_k$ for any $k\ge 1$, hence $x_1 $ satisfies $\mathcal{A}_{m-1}$.\\
Assume now that $(H_n)$ is true for $n<m$, and check $(H_{n+1})$.
Consider the stochastic process $\{x_{n+1}(t),t\in [-r,(n+1)r]\}$ defined as:
\begin{eqnarray}\label{def of x_{n+1}}
x_{n+1}(t)&=& R(t)[\varphi(0)+g(\varphi(-r))]-g(x_n(t-r))\nonumber\\
&+&\int_0^tR(t-s)f(x_n(s-r))ds+\int_0^tR(t-s)\sigma(x_n(s-r))dB^H_s,\\
x_{n+1}(t)&=&\varphi(t),\,\quad t\in [-r,0]\,,\nonumber
\end{eqnarray}
where $x_n$ is the solution obtained in $(H_n)$. The process $x_{n+1}$ is well defined, thanks to the fact that $x_n$ satisfies $\mathcal{A}_{m-n}$, assumption $(\mathcal{H}.3)$,
the boundedness of $R$ and Lemma \ref{lem1}. Moreover, $x_{n+1}$ satisfies $\mathcal{A}_{m-n-1}$.\\
Therefore, for $t\le nr$, the uniqueness of the solution on $[0, nr]$ entails:
$$x_{n+1}(t)=x_n(t).$$
Then, equation $(\ref{def of x_{n+1}})$ becomes:
\begin{eqnarray*}
x_{n+1}(t)&=& R(t)[\varphi(0)+g(\varphi(-r))]-g(x_{n+1}(t-r))\nonumber\\
&+&\int_0^tR(t-s)f(x_{n+1}(s-r))ds+\int_0^tR(t-s)\sigma(x_{n+1}(s-r))dB^H_s.
\end{eqnarray*}
Finally, $x_{n+1}$ is the unique solution of equation $(\ref{eq related to Hn})$ on $[0,(n+1)r]$. The procedure is iterated up to $n=m$, and the process $x(t)=x_m(t)$ is the unique solution of Equation $(\ref{eq1})$ on $[-r,T]$ for all $T\le mr$. We can easily check the continuity of the solution, that is, $x(.)\in\mathcal{C}([-r,T],\mathbb{L}^2(\Omega,V))$, which ends the proof of the theorem.
\end{proof}
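
The interval-by-interval construction used in this proof is the classical
method of steps for delay equations. The following Python sketch (a
deterministic caricature, our own: scalar state, Euler stepping, and no
stochastic term) shows the mechanism: on $[nr,(n+1)r]$ the delayed
argument is already known from the previous interval.
\begin{verbatim}
# Model equation: x'(t) = a x(t) + f(x(t-r)), with x = phi on [-r,0].
import numpy as np

def method_of_steps(a, f, phi, r=1.0, m=3, k=200):
    h = r / k
    x = [phi(-r + i * h) for i in range(k + 1)]      # history on [-r,0]
    for n in range(m):                               # intervals [nr,(n+1)r]
        for i in range(k):
            delayed = x[n * k + i]                   # x(t-r), already computed
            x.append(x[-1] + h * (a * x[-1] + f(delayed)))   # Euler step
    return np.array(x)

path = method_of_steps(a=-1.0, f=np.sin, phi=lambda t: 1.0)
\end{verbatim}
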
\section{Regularity of the law of $F(x(t))$}
In this section, we find a sufficient condition under which the law of $F(x(t))$ is absolutely continuous with respect to the Lebesgue measure, where $x(t)$ is the solution of Equation $(\ref{eq1})$ and $F$ is a real Lipschitzian function. More precisely, we have the following result:
\begin{theorem}
Assume that the hypotheses of Theorem \ref{thm1} hold and let $\{x(t),t \in [0,T]\}$ be the solution of equation (\ref{eq1}) on $[0,T]$, with $T\leq mr$, and let
$F:V\to \mathbb{R}$ be a Lipschitzian function. Then, for any $0< t \leq T$, the law of $F(x(t))$ is absolutely continuous with respect to the Lebesgue measure if:
$$\int_{t-r}^t \left(F'(x(t))R(t-u)\sigma (x(u-r))\right)^2 du > 0\quad a.s.\,.$$
\end{theorem}
\begin{proof}
Fix $t\in (0,T]$ such that $\int_{t-r}^t \left(F'(x(t))R(t-u)\sigma (x(u-r))\right)^2 du > 0$ a.s. By Proposition 7.1.4 in \cite{boul}, it suffices to show that $F(x(t))\in \mathbb{D}^{1,2}$ and $\|DF(x(t))\|_{L^2([0,T])}>0$ a.s.\\
Since $x(t)\in \mathbb{D}^{1,2}(V)$ and $F: V\longrightarrow {\mathbb R} $ is a Lipschitz function, $F(x(t))\in \mathbb{D}^{1,2}$ (see Proposition 1.2.4, page 29, in \cite{nualart}) and $$D_uF(x(t))=F'(x(t))D_ux(t)$$
On the other hand, we have
\begin{eqnarray*}
D_ux(t)&=&-g'(x(t-r))D_ux(t-r)1_{u<t-r}+\int_{u+r}^t R(t-s)f'(x(s-r))D_ux(s-r)ds\\
&+& R(t-u)\sigma(x(u-r))
+\int_{u+r}^t R(t-s)\sigma'(x(s-r))D_ux(s-r)dB^H(s)
\end{eqnarray*}
Hence, for $u\in (t-r,t)$, we have
\begin{equation}\label{d}
D_ux(t)= R(t-u)\sigma(x(u-r))
\end{equation}
Note that $$\|DF(x(t))\|_{L^2([0,T])}>0 \; \mbox{a.s}\;
\Leftrightarrow \int_0^T \left(F'(x(t))R(t-u)\sigma(x(u-r))\right)^2 du>0 \; \mbox{a.s}\;$$
and a sufficient condition for this is that
$$\int_{t-r}^t \left(F'(x(t))R(t-u)\sigma(x(u-r))\right)^2 du>0 \quad \mbox{a.s}\;$$
which implies that the law of $F(x(t))$ has a density with respect to the Lebesgue measure. This completes the proof.
\end{proof}
\begin{example}
In the case $F(x)=\| x\| $, we get that the law of $\| x(t)\|_{V} $ is absolutely continuous with respect to the Lebesgue measure if
$$\int_{t-r}^t \langle x(t)\,,\, R(t-u)\sigma(x(u-r))\rangle_{V}^2\, du>0 \quad \mbox{a.s}\,.$$
\end{example}
\section{Application}
We consider the following stochastic partial neutral functional integro-differential equation with finite delay $r$:
\begin{equation}\label{Eq2}
\left\{
\begin{array}{rl}
\frac{\partial}{\partial t}\left[ x(t,y)+\hat{g}(x(t-r,y))\right] &=\frac{\partial^2}{\partial y^2}\left[ x(t,y)+\hat{g}(x(t-r,y))\right] \\
&+\int_0^tb(t-s)\frac{\partial^2}{\partial y^2}\left[ x(s,y)+\hat{g}(x(s-r,y))\right] ds\\
&+\hat{f}(x(t-r,y))+\hat{\sigma}(x(t-r,y))\frac{dB^H}{dt}(t)\\
x(t,0)+\hat{g}(x(t-r,0))=0,\ t\ge 0\\
x(t,\pi)+\hat{g}(x(t-r,\pi))=0,\ t\ge 0\\
x(t,y)=\varphi(t,y),\ -r\le t\le 0,\, 0\le y\le \pi
\end{array}
\right.
\end{equation}
where $\hat{g}, \hat{f}, \hat{\sigma}: \mathbb{R}\to \mathbb{R}$, and $b:\mathbb{R}\to \mathbb{R}$ are continuous functions such that $\hat{g}(0)=\hat{f}(0)=0$. Let $X=V=L^2([0,\pi])$ and define the operator $A:D(A)\to V$ by $Az=z''$ with domain
$$D(A)=\{z\in V:\ z,\,z'\ \mbox{absolutely continuous},\ z''\in V,\ z(0)=z(\pi)=0\}.$$
It is well known that $A$ generates a strongly continuous semigroup $\{T(t),\, t\ge 0\}$ on $V$ which is given by:
$$T(t)\varphi=\sum_{n=1}^{\infty} e^{-n^2 t}<\varphi,e_n>\, e_n$$
where $e_n(y)=\sqrt{\frac{2}{\pi}}\sin(ny)$, $n\ge 1$, is the orthonormal set of eigenvectors of $-A$.\\
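
For a numerical illustration, the semigroup can be applied by truncating
the eigenfunction expansion; the following Python sketch (our own
discretisation, with inner products computed by a Riemann sum) does this.
\begin{verbatim}
# T(t) phi = sum_n exp(-n^2 t) <phi, e_n> e_n on L^2([0,pi]).
import numpy as np

def semigroup(t, phi_vals, y, nmax=100):
    out = np.zeros_like(y)
    for n in range(1, nmax + 1):
        e_n = np.sqrt(2.0 / np.pi) * np.sin(n * y)
        c_n = np.sum(phi_vals * e_n) * (y[1] - y[0])   # <phi, e_n>
        out += np.exp(-n**2 * t) * c_n * e_n           # mode decays as e^{-n^2 t}
    return out

y = np.linspace(0.0, np.pi, 400, endpoint=False)
u = semigroup(0.1, np.sin(y) + 0.3 * np.sin(3 * y), y)
\end{verbatim}
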
Define the operators $g,\, f$, and $\sigma:V\to V$ by:
\begin{eqnarray*}
g(\psi)(y)&=& \hat{g}(\psi(y)),\ for\ \psi \in V\ and\ y\in [0,\pi]\\
f(\psi)(y)&=& \hat{f}(\psi(y)),\ for\ \psi \in V\ and\ y\in [0,\pi]\\
\sigma(\psi)(y)&=& \hat{\sigma}(\psi(y)),\ for\ \psi \in V\ and\ y\in [0,\pi]\\
\end{eqnarray*}
If we put
$$x(t)(y)=x(t,y),\quad \mbox{for}\ t\ge -r\ \mbox{and}\ y\in [0,\pi]$$
and let $B(t):D(A)\subset V\to V$ be defined by
$$B(t)y=b(t)Ay,\quad \mbox{for}\ t \ge 0\ \mbox{and}\ y\in D(A),$$
then the equation $(\ref{Eq2})$ becomes
\begin{equation}
\left\{
\begin{array}{rl}
d\left[ x(t)+g(x(t-r))\right]& = A\left[ x(t)+g(x(t-r))\right] dt\\
&+\left[ \int_0^tB(t-s)\left[ x(s)+g(x(s-r))\right] ds+f(x(t-r))\right] dt\\
&+\sigma(x(t-r))dB^H(t)\\
x(t)&=\varphi(t),\ t\in [-r,0]
\end{array}
\right.
\end{equation}
Moreover, if $b$ is a bounded $C^1$ function such that $b'$ is bounded and uniformly continuous, then $(\mathcal{H}.1)$ and $(\mathcal{H}.2)$ are satisfied and hence, by Theorem \ref{res}, equation $(\ref{Eq2})$ has a resolvent operator $(R(t),\ t\ge 0)$ on $V$.\\
Further, if we impose suitable regularity conditions on $\hat{g}$, $\hat{f}$ and $\hat{\sigma}$ so that $g$, $f$ and $\sigma$ satisfy the assumptions of Theorem \ref{thm1}, then we conclude the existence and uniqueness of the mild solution of equation $(\ref{Eq2})$.
\section{Introduction}
There are two different mechanisms that contribute to nuclear energy
dissipation, i.e. the irreversible transfer of energy from collective into
intrinsic single-particle motion: two-body collisions and ``one-body
friction''. The latter is caused by the moving walls of the
self-consistent nuclear mean field. The role played by these two
dissipation mechanisms in fission and heavy-ion reactions is not yet
completely understood. In a pioneering article that appeared in 1976
Davies, Sierk and Nix \cite{DSN76} calculated the effect of
viscosity on the dynamics of fission. Assuming that friction is caused by
two-body collisions they extracted a viscosity coefficient $\mu = 0.015$
Tera Poise from a comparison of
theoretical and experimental values for the kinetic
energies of fission fragments. The corresponding time delay for the
nuclear motion from the saddle to the scission point was found to be of
order $\Delta t =1 \times 10^{-21}$ s. However, in one-body dissipation
models the time delay is an order of magnitude larger.
Several experimental techniques are sensitive to the energy dissipation
in nuclear fission. At high excitation energy, the multiplicity of
pre-scission neutrons \cite{Ga87} or photons \cite{Ho95} depends on the
dissipation strength. At low excitation energy, the process of
prompt muon-induced fission \cite{MO80} provides a suitable ``clock''.
This process will be discussed here.
After muons have been captured into high-lying single particle states
they form an excited muonic
atom. Inner shell transitions may proceed without photon emission by
inverse internal conversion, i.e. the muonic excitation energy is
transferred to the nucleus. In actinides, the $2p \rightarrow 1s$ and the
$3d \rightarrow 1s$ muonic transitions result in excitation of the nuclear
giant dipole and giant quadrupole resonance, respectively, which act as
doorway states for fission. The nuclear excitation energy is typically
between 6.5 and 10 MeV. Most importantly, the muon is still available
following these atomic transitions
(in the ground state of the muonic atom) and can be utilized to probe
the fission dynamics. Eventually though, the muon will
disappear as a result of the weak interaction (nuclear capture by
one of the fission fragments). However, the nuclear capture occurs
on a time scale of order $10^{-7}$ s which is many orders
of magnitude larger than the time scale of fission.
The prompt muon-induced fission process is most easily understood via a
``correlation diagram'', i.e. one plots the single-particle energies of the
transient muonic molecule as a function of the internuclear distance
\cite{OU93}. If there is a large amount of friction during the motion
from the outer fission barrier to the scission point the muon will
remain in the lowest molecular energy
level $1s\sigma$ and emerge in the $1s$ bound
state of the {\it heavy} fission fragment. If, on the other hand, friction is
small and hence the nuclear collective motion is relatively
fast there is a nonvanishing probability
that the muon may be transferred to higher-lying molecular orbitals, e.g.
the $2p\sigma$ level, from where it will end up attached to the {\it light}
fission fragment. Therefore, theoretical studies of the muon-attachment
probability to the light fission fragment, $P_L$, in combination with
experimental data can be utilized to analyze the dynamics of fission,
and nuclear energy dissipation in particular.
\section{Theoretical Developments}
Because the nuclear excitation energy in muon-induced fission exceeds
the fission barrier height it is justified to treat the
fission dynamics classically (no barrier tunneling). For simplicity,
we describe the fission path by one collective coordinate $R$;
the classical collective nuclear energy has the form
\begin{equation}
E_{\rm nuc} = \frac{1}{2} B(R) \dot R^2 + V_{\rm fis}(R) + E_\mu (R).
\label{ecoll}
\end{equation}
We utilize a coordinate dependent mass parameter \cite{OU93} and an empirical
double-humped fission potential $V_{\rm fis}(R)$ \cite{Ba74} which
is smoothly joined with the Coulomb potential of the fission fragments at
large $R$. The last term in Eq. (\ref{ecoll}) denotes the instantaneous muonic
binding energy which depends on the fission coordinate; this term will
be defined later.
To account for the nuclear energy dissipation between the outer fission barrier
and the scission point, we introduce a friction force which depends
linearly on the velocity. In this case, the dissipation function $D$ is a simple
quadratic form in the velocity
\begin{equation}
\dot E_{\rm nuc}(t) = -2D = -f \dot R^2 (t) \label{frict}.
\end{equation}
The adjustable friction parameter $f$ determines the dissipated energy; it
is the only unknown quantity in the theory.
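
To make the energy bookkeeping of Eqs. (\ref{ecoll}) and (\ref{frict})
concrete, the following schematic Python sketch (our own toy: a constant
mass parameter and a pure Coulomb tail in place of $V_{\rm fis}+E_\mu$)
integrates the damped collective motion and accumulates the dissipated
energy.
\begin{verbatim}
# Toy model: B R'' = -dV/dR - f R', with V(R) = 60/R (Coulomb repulsion).
def trajectory(f, B=100.0, R0=10.0, v0=0.0, dt=1e-3, steps=20000):
    dV = lambda R: -60.0 / R**2           # dV/dR for V(R) = 60/R
    R, v, E_diss = R0, v0, 0.0
    for _ in range(steps):
        v += dt * (-dV(R) - f * v) / B
        R += dt * v
        E_diss += f * v * v * dt          # dissipated energy, cf. Eq. (2)
    return R, 0.5 * B * v**2, E_diss      # position, kinetic, dissipated

for f in (0.0, 10.0):
    print(f, trajectory(f))               # larger f: lower E_kin, E_diss > 0
\end{verbatim}
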
For the dynamical description of the muonic wavefunction during prompt
fission, the electromagnetic coupling between muon and nucleus $(-e \gamma
_\mu A^\mu)$ is dominant; the weak interaction is negligible. Because
of the nonrelativistic motion of the fission fragments the
electromagnetic interaction is dominated by the Coulomb interaction
\begin{equation}
A^0({\bf r},t) = \int d^3r' \frac{\rho_{\rm nuc}( {\bf r'},t)}
{| {\bf r} - {\bf r'} |}. \label{vcoul}
\end{equation}
The muonic binding energy in the ground state of an actinide muonic atom
amounts to 12 percent of the muonic rest mass; hence nonrelativistic
calculations, while qualitatively correct, are limited in accuracy. Several
theory groups have demonstrated the feasibility of such calculations
\cite{MO80,MW81,Ka97} which are based on the time-dependent Schr\"odinger equation
\begin{equation}
[ -\frac{\hbar^2}{2m} \nabla^2 -e A^0 ({\bf r},t) ] \
\psi ({\bf r},t) = i \hbar \frac{\partial}{\partial t} \ \psi ({\bf r},t) .
\end{equation}
Recently, we have developed a numerical algorithm to solve the relativistic
problem on a three-dimensional Cartesian mesh \cite{OU92,OU93}.
The time-dependent Dirac equation for
the muonic spinor wave function in the Coulomb field of the fissioning
nucleus has the form
\begin{equation}
H_{\rm D}(t) \ \psi ({\bf r},t) = i \hbar \frac \partial
{\partial t} \ \psi ({\bf r},t) \label{tdirac},
\end{equation}
where the Dirac Hamiltonian is given by
\begin{equation}
H_{\rm D}(t) = -i \hbar c {\bf \alpha} \cdot \nabla + \beta mc^2
-e A^0 ({\bf r},t). \label{hdirac}
\end{equation}
Our main task is the solution of the Dirac
equation for the muon in the presence of a time-dependent external Coulomb
field $A^0({\bf r},t)$ which is generated by the fission
fragments in motion.
Note the coupling between the fission dynamics, Eq. (\ref{ecoll}), and
the muon dynamics, Eq. (\ref{tdirac}), via the
instantaneous muonic binding energy
\begin{equation}
E_\mu (R(t)) = \langle \psi ({\bf r},t) \mid H_{\rm D}(t) \mid
\psi ({\bf r},t) \rangle
\end{equation}
which depends on the fission coordinate; the presence of this
term increases the effective fission barrier height.
\section{Lattice Representation: Basis-Spline Expansion}
For the numerical solution of the time-dependent Dirac equation (\ref{tdirac})
it is convenient to introduce dimensionless space and time coordinates
$$
{\bf x} = {\bf r} / {\mathchar'26\mkern-9mu\lambda}_c \ \ \ \ {\mathchar'26\mkern-9mu\lambda}_c = \hbar /(m_\mu c)=1.87~{\rm fm}
$$
\begin{equation}
\tau = t / \tau _c \ \ \ \ \tau _c= {\mathchar'26\mkern-9mu\lambda}_c / c = 6.23 \times 10^{-24}~{\rm s}
\label{comptim}
\end{equation}
where ${\mathchar'26\mkern-9mu\lambda}_c$ denotes the reduced Compton wavelength of the muon and
$\tau _c$ the reduced Compton time. For the lattice representation of the
Dirac Hamiltonian and spinor wave functions we introduce
a 3-dimensional rectangular box with a uniform lattice spacing $\Delta x$.
The lattice points are labeled $( x_\alpha, y_\beta, z_\gamma)$.
Our numerical algorithm is the Basis-Spline collocation method \cite{WO95}.
Basis-Spline functions $B_i^M(x)$ are piecewise-continuous polynomials
of order $(M-1)$. These may be thought of as generalizations of the
well-known ``finite elements'' which are B-Splines with $M=2$.
To illustrate the method let us consider
a wave function which depends on one space coordinate $x$;
we represent the wave function on a finite spatial interval
as a linear superposition of B-Spline functions
\begin{equation}
\psi(x_\alpha) = \sum_{i=1}^{N} B^M_i(x_\alpha) c^i .
\label{psialpha}
\end{equation}
In the Basis-Spline collocation method, local operators such
as the EM potential $A^0$ in Eq. (\ref{hdirac})
become diagonal matrices of their values at the grid points
(collocation points), i.e.\ $V(x) \rightarrow V_\alpha=V(x_\alpha)$.
The matrix representation of derivative operators is
more involved \cite{WO95}. For example, the first-derivative
operator of the Dirac equation has the following
matrix representation on the lattice
\begin{equation}
D_\alpha^\beta
\equiv \sum_{i=1}^{N} B'_{\alpha i} B^{i \beta}\;,
\label{1der}
\end{equation}
where $B'_{\alpha i} = [dB_i^M(x) / dx] |_{x=x_\alpha}$.
Furthermore, we use the shorthand notation
$B_{\beta i}=B^M_i(x_\beta)$
for the B-spline function evaluated at the collocation point $x_\beta$,
and the inverse of this matrix is denoted by $B^{i \beta} =
[B^{-1}]_{\beta i}$.
Because of the presence of this inverse, the operator
$D_\alpha^\beta$ will have a nonsparse matrix representation.
In the present calculations we employ B-Splines of
order $M=5$. Eq. (\ref{psialpha}) can readily be generalized to three
space dimensions; in this case the four Dirac
spinor components $\psi ^{(p)}, p=( 1,\cdot \cdot \cdot,4)$
are expanded in terms of a product of Basis-Spline functions
\begin{equation}
\psi ^{(p)}( x_\alpha ,y_\beta ,z_\gamma ,t) =
\sum\limits_{i,j,k}B^M_i(x_\alpha )B^M_j(y_\beta )B^M_k(z_\gamma )
c_{(p)}^{ijk}(t) ,
\end{equation}
i.e. the lattice representation of the spinor wave function
is a vector with $N = 4 \times N_x \times N_y \times N_z$
complex components. Hence,
it is impossible to store $H_{\rm D}$ in memory because this would
require the storage of $N^2$ complex double-precision numbers.
We must therefore resort to iterative methods for the solution of the matrix
equation which do not require the storage of $H_{\rm D}$.
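
As an illustration of the collocation construction itself, the following
Python sketch (our own small one-dimensional setup, using SciPy's
\texttt{BSpline}) builds the matrices $B_{\alpha i}$ and $B'_{\alpha i}$
and the nonsparse derivative matrix of Eq. (\ref{1der}) for splines of
order $M=5$; since the spline space contains all cubics, the matrix
differentiates them essentially exactly.
\begin{verbatim}
# Collocation first-derivative matrix D = B' B^{-1} on [0,1].
import numpy as np
from scipy.interpolate import BSpline

M, N = 5, 20                       # spline order M (degree M-1), N basis fns
k = M - 1
t = np.concatenate([np.zeros(k), np.linspace(0.0, 1.0, N - k + 1), np.ones(k)])
x = np.array([t[i + 1:i + k + 1].mean() for i in range(N)])  # Greville points

def basis(i, nu=0):                # i-th basis function (nu-th derivative) at x
    c = np.zeros(N); c[i] = 1.0
    sp = BSpline(t, c, k)
    return (sp.derivative(nu) if nu else sp)(x)

B  = np.column_stack([basis(i)    for i in range(N)])        # B_{alpha i}
Bp = np.column_stack([basis(i, 1) for i in range(N)])        # B'_{alpha i}
D  = Bp @ np.linalg.inv(B)                                   # nonsparse

print(np.abs(D @ x**3 - 3 * x**2).max())   # ~ 0: exact on cubics
\end{verbatim}
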
We solve the time-dependent Dirac equation in two steps: first, we solve
the static Coulomb problem at time $t=0$, i.e. the muon bound to an
actinide nucleus. This problem is solved by the damped relaxation
method \cite{OU93}. The second part of our numerical procedure is
the solution of the time-dependent Dirac equation (\ref{tdirac})
by a Taylor-expansion of the propagator for an infinitesimal time step
$\Delta t$. Details may be found in ref. \cite{OU93}.
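
The structure of the Taylor propagation step is simple enough to display.
The following Python sketch (our own toy: a one-dimensional
finite-difference Hamiltonian in place of $H_{\rm D}$, with $\hbar=1$)
applies $\psi(t+\Delta t)\approx\sum_{n=0}^{n_{\max}}\frac{1}{n!}
\left(-iH\Delta t\right)^n\psi(t)$.
\begin{verbatim}
# Taylor-expanded propagator for one infinitesimal time step.
import numpy as np

def taylor_step(H, psi, dt, nmax=8):
    term = psi.astype(complex)
    out = term.copy()
    for n in range(1, nmax + 1):
        term = (-1j * dt / n) * (H @ term)   # accumulates (-i H dt)^n / n! psi
        out += term
    return out

# free particle on a periodic grid: H = -(1/2) d^2/dx^2, central differences
m, dx = 64, 0.2
I = np.eye(m)
H = (I - 0.5 * (np.roll(I, 1, 0) + np.roll(I, -1, 0))) / dx**2
psi = np.exp(-((np.arange(m) - m / 2) * dx)**2)
print(np.linalg.norm(taylor_step(H, psi, dt=0.01)))  # ~ norm of psi
\end{verbatim}
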
\section{Discussion of Numerical Results}
In the following we present results for prompt fission of $^{237}_{\ 93}$Np
induced by the $3d \rightarrow 1s$ muonic transition $(9.5 {\rm MeV})$.
All results reported here are for a 3-D Cartesian lattice of size
$L_x = L_y = 67$ fm and $L_z = 146$ fm with $N_x \times N_y \times N_z =
25 \times 25 \times 53$ lattice points with a uniform lattice spacing
$\Delta x = 1.5 {\mathchar'26\mkern-9mu\lambda}_c = 2.8~{\rm fm}$. Depending on the value of the
friction coefficient, we utilize between $1,200 - 1,900$ time steps with
a step size $\Delta t = 1.5 \tau_c = 9.3 \times 10^{-24}$ s.
Typical production runs take about 11 hours of CPU time on a CRAY
supercomputer or about 54 hours on an IBM RS/6000 workstation.
\begin{figure}[t]
\vspace*{-1.0cm}
\epsfxsize=12cm \epsfbox{fig-1.eps}
\vspace*{-0.5cm}
\caption[]{
Prompt muon-induced fission of $^{237}_{\ 93}$Np for a fission fragment mass
asymmetry $\xi = A_H / A_L = 1.10$ at $E^* = 9.5$ MeV. Shown is the muon
position probability density at four different times during fission:
$t = 0$, $6.5 \times 10^{-21}$ s, $8.4 \times 10^{-21}$ s, $1.1 \times
10^{-20}$ s. Zero friction ($f=0$) has been assumed.
}
\label{fig1}
\end{figure}
Fig.\ \ref{fig1} shows the time-development of the muon position probability
density during fission at a fragment mass asymmetry $\xi = A_H / A_L = 1.10$.
As expected, the muon sticks predominantly to the heavy fragment, but for this
small mass asymmetry the muon attachment probability to the light fission
fragment, $P_L$, is rather large (20 percent).
One might ask whether the muon will always remain bound during fission;
what is the probability for ionization? To investigate this question we
have plotted the muon position probability density on a logarithmic scale.
\begin{figure}[htb]
\vspace*{0.2cm}
\epsfxsize=12cm \epsfbox{fig-2.eps}
\vspace*{0.2cm}
\caption[]{
Contour plot of the logarithm of the muon probability density at $ t =
1.1 \times 10^{-20}$ s shows no evidence of muon ionization.
}
\label{fig2}
\end{figure}
In coordinate space, any appreciable muon ionization would show up as a
``probability cloud'' that is separating from the fission fragments and
moving towards the boundaries of the lattice. Fig.\ \ref{fig2} shows no
evidence for such an event in our numerical calculations. Hence, we conclude
that the probability for muon ionization $P_{\rm ion}$ is
substantially smaller than the muon attachment probability to the light
fission fragment which is always clearly visible in our logarithmic plots,
even at large mass asymmetry. From this we estimate that $P_{\rm ion}
< 10^{-4}$.
\begin{figure}[htb]
\vspace*{0.2cm}
\epsfxsize=12cm \epsfbox{fig-3.eps}
\vspace*{0.2cm}
\caption[]{
Muon attachment probability to the light fission fragment as
function of nuclear energy dissipation for $^{237}_{\ 93}$Np. Results are
shown for fragment mass asymmetries $\xi=1.05$ (upper curve), $1.10,\ 1.15$,
and $1.20$ (lower curve).
}
\label{fig3}
\end{figure}
Fig.\ \ref{fig3} shows that $P_L$ depends strongly on the fission fragment
mass asymmetry. This is easily understood: for equal fragments we obviously
obtain $P_L=0.5$, and for large mass asymmetry it is energetically favorable
for the muon to be bound to the heavy fragment, hence $P_L$ will be small.
In Fig.\ \ref{fig3} we also examine the dependence of $P_L$ on the dissipated
nuclear energy, $E_{\rm diss}$, during fission. In our model, friction
takes place between the outer fission barrier and the scission point.
When the dissipated energy is computed from equation (\ref{frict})
we find an almost linear dependence of the muon attachment probability on
$E_{\rm diss}$; unfortunately, this dependence is rather weak.
We would like to point out that the theoretical values for $P_L$ obtained
in this work are smaller than those reported in our earlier calculations
\cite{OU92,OU93}. There are two reasons for this: (a) the size of
the lattice and (b) the lattice representation of the first derivative
operator in the Dirac equation. Because of constraints in the amount
of computer time available to us we utilized a smaller cubic lattice
in our prior calculations \cite{OU93} with
$N_x \times N_y \times N_z = 29^3$ lattice points.
More recently, we were able to increase the size of the lattice
substantially, in particular in the fission ($z$) direction (see above).
In Fig. 2 of ref. \cite{OU98} we have demonstrated the convergence of
our results for the muon attachment probability in terms of the lattice
size and lattice spacing. Another reason for the difference between the
current and prior results is the lattice representation of the first
derivative operator, Eq. (\ref{1der}), in the Dirac equation.
In refs. \cite{OU92,OU93} we utilized a combination of forward and backward
derivatives for the upper and lower spinor wave function components; after
extensive testing on Coulomb potential model problems with known analytical
solutions we have found that the symmetric derivative operator
provides a more faithful lattice representation. The results reported
here and in ref. \cite{OU98} have been obtained utilizing the symmetric
derivative prescription.
\section{Comparison of Theory with Experiment}
There are only a few experimental data available for comparison.
Schr\"oder {\it et al.} \cite{SW79} measured for the first time
mean lifetimes of muons bound to fission fragments of several
actinide nuclei. The muon decays from the K-shell of the muonic atom
through various weak interaction processes at a characteristic rate
$\lambda = \lambda_0 + \lambda_c$, where $\lambda_0 =
(2.2\times 10^{-6}\,{\rm s})^{-1}$ is the free leptonic decay rate for
the decay process $\mu^- \rightarrow e^- + \bar{\nu}_e + \nu_{\mu}$ and
$\lambda_c$ denotes the nuclear capture rate; $\lambda_c$ depends upon the
charge and mass of the fission fragment. From the observed lifetime $\tau_\mu
= 1.30\times 10^{-7}\,{\rm s}$ Schr\"oder {\it et al.} estimated an upper limit
for the muon attachment probability $P_L \le 0.1$. It must be
emphasized that this number represents an integral over the whole fission
mass distribution and, hence, cannot be directly compared to the
numbers given in Fig. \ref{fig3}.
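As a rough consistency check (our own arithmetic, not taken from ref.
\cite{SW79}): the observed lifetime corresponds to a total decay rate
$\lambda = 1/\tau_\mu \approx 7.7\times 10^{6}\,{\rm s}^{-1}$, while
$\lambda_0 \approx 4.5\times 10^{5}\,{\rm s}^{-1}$, so that the nuclear
capture rate $\lambda_c = \lambda - \lambda_0 \approx 7.2\times
10^{6}\,{\rm s}^{-1}$ dominates the decay, as expected for a muon bound
to a high-$Z$ fragment.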
The most complete experiments have been carried out by Risse {\it et al.}
\cite{RB91} at the Paul Scherrer Institute (PSI) in Switzerland. The basic
experimental approach is to place a fission
chamber inside an electron spectrometer. The
incident muons are detected by a scintillation counter. An event
is defined by a $(\mu^-, f_1 f_2 e^-)$ coincidence where the fission
fragments are observed in prompt and the muon decay electrons in delayed
coincidence with respect to the incident muon. The magnetic field of the
electron spectrometer allows for a reconstruction of the
electron trajectories. Thus, it is possible to determine whether the muon
decay electrons originate from the heavy or the light fission fragment.
For several mass bins of the light fission fragment,
muon attachment probabilities $P_L$ have been measured; the experimental
data are given in Table \ref{exptheo}. It should be emphasized that the
mass bins are relatively broad. Because the theoretical values for $P_L$
depend strongly on the mass asymmetry, it is not justified to assume that
$P_L$ remains constant within each experimental mass bin.
Instead, to allow for a comparison between theory and experiment,
we have to multiply the theoretical $P_L$ values in Fig.\ \ref{fig3}
with a weighting factor that accounts for the measured relative mass
distribution \cite{RB91} of the prompt fission events within this mass bin.
We subsequently integrate the results over the sizes of the experimental
mass bins.
Due to the relatively low excitation energy in muon-induced fission,
the fission mass distribution exhibits a maximum at $\xi = A_H / A_L = 1.4$
and falls off rather steeply for values larger or smaller than the maximum.
This means that the large values of $P_L \approx 0.5$ at or near
fission fragment symmetry $\xi=1.0$ will be strongly suppressed.
The resulting theoretical values for $P_L$ are given in the last column of
Table \ref{exptheo}. It is apparent that our theory agrees rather well
with experiment. Because of the size of the error bars in the experiment
and because of the weak dependence of the theoretical values of $P_L$ on
the dissipated energy, it is not possible to extract very precise information
about the amount of energy dissipation.
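A minimal sketch of this bin averaging (our illustration; the relative
yield curve must be taken from the measured mass distribution of ref.
\cite{RB91}, here it enters only as a placeholder array) is:
\begin{verbatim}
import numpy as np

def binned_PL(PL_theory, yields):
    # Yield-weighted average of the theoretical P_L over one
    # experimental mass bin; both arrays are sampled on the same
    # grid of mass asymmetries xi covering the bin.
    w = np.asarray(yields, dtype=float)
    w = w / w.sum()                      # normalize within the bin
    return float(np.dot(w, PL_theory))
\end{verbatim}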
\begin{table}[!t]
\caption{Muon-attachment probabilities to the light fission fragment,
$P_L$, for $^{237}{\rm Np}(\mu^-,f)$. Experimental data are taken from ref.
\cite{RB91}.
\label{exptheo}}
\vspace{0.2cm}
\begin{center}
\footnotesize
\begin{tabular}{|c|c|c|c|}
\hline
{mass bin $A_L$} &\raisebox{0pt}[13pt][7pt]{mass asymmetry} &
\raisebox{0pt}[13pt][7pt]{$P_L$(exp)} &{$P_L$(theo)}\\
\hline
& & & \\
$118.5 \rightarrow 111.5$ & $1.000 \rightarrow 1.126$
& $(25.5 \pm 8.5) \times 10^{-2}$
& $26.0 \times 10^{-2},\ E_{\rm diss}=0\ {\rm MeV}$ \\
& & & $22.3 \times 10^{-2},\ E_{\rm diss}=22\ {\rm MeV}$ \\
& & & \\
\hline
& & & \\
$111.5 \rightarrow 104.5$ & $1.126 \rightarrow 1.268$
& $(9.7 \pm 2.6) \times 10^{-2}$
& $6.62 \times 10^{-2},\ E_{\rm diss}=0\ {\rm MeV}$ \\
& & & $3.51 \times 10^{-2},\ E_{\rm diss}=22\ {\rm MeV}$ \\
& & & \\
\hline
\end{tabular}
\end{center}
\end{table}
From a comparison of our theoretical result for the mass bin
$A_L = 118.5 \rightarrow 111.5$ with the measured data we extract
a dissipated energy of order $10$ MeV for $^{237}$Np while the second
mass bin $A_L = 111.5 \rightarrow 104.5$ is more compatible with zero
dissipation energy. We place a higher confidence on the theoretical
results for the first mass bin because the probabilities $P_L$ are
substantially larger and hence numerically more reliable. We would like
to point out that our theoretical value $E_{\rm diss}=10$ MeV is
compatible with results from other low-energy fission
measurements that are based on the odd-even effect in the charge yields
of fission fragments \cite{Wa91}. In addition to $^{237}$Np
we have also studied muon-induced fission of $^{238}$U; the results
for muon attachment are very similar \cite{OU98}.
\section{Conclusions}
We have studied the dynamics of a muon bound to a fissioning
actinide nucleus by solving the time-dependent Dirac equation for the
muonic spinor wavefunction; the fission dynamics is described classically.
The theory predicts a strong mass asymmetry dependence of the muon
attachment probability $P_L$ to the light fission fragment; this feature
is in agreement with experimental data. Our calculations show no evidence
for muon ionization during fission. The theory also predicts
a (relatively weak) dependence of $P_L$ on the dissipated energy. By
comparing our theoretical results to the experimental data of
ref. \cite{RB91} we extract a dissipated energy of about $10$ MeV for
$^{237}$Np (see Table 1). Using the dissipation function defined in
Eq. (\ref{frict}), this value corresponds to a fission time delay from
saddle to scission of order $2 \times 10^{-21}$ s.
\section*{Acknowledgements}
This research project was sponsored by the U.S. Department of Energy under
contract No. DE-FG02-96ER40975 with Vanderbilt University. For several years,
I have benefited from fruitful discussions with my collaborators, in particular
with J.A. Maruhn, the late C. Bottcher, M.R. Strayer, P.G. Reinhard, A.S. Umar
and J.C. Wells. Some of the numerical calculations were carried out on CRAY
supercomputers at NERSC, Berkeley. I also acknowledge travel support to
Germany from the NATO Collaborative Research Grants Program.
\section{Introduction}
In modern science, it is often the case that each study screens many
features. Identifying which of the many features screened have
replicated findings, and the extent of replicability for these
features, is of great interest. For example, the association of single nucleotide polymorphisms (SNPs) with a
phenotype is typically considered a scientific finding only if it has been discovered in independent
studies that examine the same associations with the phenotype, but on
different cohorts, with different environmental exposures \citep{Heller14b}.
Two studies that examine the same problem may only partially agree on which features have signal. For example, in the two microarray studies discussed in Section \ref{subsec-big-example}, among the 22283 probes examined in each study we estimated that 29\% have signal in both studies, but 32\% have signal in exactly one of the studies. Possible explanations for having signal in only one of the studies include bias (e.g., in the cohorts selected or in the laboratory process), and the fact that the null hypotheses tested may be too specific (e.g., to the specific cohorts that were subject to specific exposures in each study). In a typical meta-analysis, all the features with signal in at least one of the studies are of interest (estimated to be 61\% of the probes in our example). However, the subset of the potential meta-analysis findings which have signal in both studies may be of particular interest, for both verifiability and generalizability of the results. Replicability analysis targets this subset, and aims to identify the features with signal in both studies (estimated to be 29\% of the probes in our example).
Formal statistical methods for assessing
replicability, when each study examines many features, were developed only recently.
An empirical Bayes approach for two studies was suggested by \cite{Li11}, and for at least two studies by \cite{Heller14}. The accuracy of the empirical Bayes analysis relies on the ability to estimate well the unknown parameters, and thus it may not be suitable for applications with a small number of features and non-local dependency in the measurements across features. A frequentist approach was taken in \cite{benjamini09}, which suggested applying the Benjamini-Hochberg (BH) procedure \citep{yoav1} to the maximum of the two studies' $p$-values.
However, \cite{Heller14} and \cite{Bogomolov13} noted that the power of this procedure may be low when there is nothing to discover in most features. \cite{Bogomolov13} suggested instead applying twice their procedures for establishing replicability from a primary to a follow-up study, where each time one of the studies takes on the role of a primary study and the other the role of the follow-up study.
In this work we suggest novel procedures for establishing
replicability across two studies, which are especially useful in
modern applications when the fraction of features with signal
is small (e.g., the approaches of \cite{Bogomolov13} and
\cite{benjamini09} will be less powerful whenever the fraction of
features with signal is smaller than half). The advantage of our
procedures over previous ones is due to two main factors. First,
these procedures are based on our novel approach, which
selects the promising features from each study solely based on that
study, and then tests for replicability only the features that were
selected in both studies. This approach focuses attention on the promising features, and has the added advantage of reducing the number of features that need to be accounted for in the subsequent replicability analysis. Note that since the selection is only a first step, it may be much more liberal than that made by a multiple testing procedure, and can include all features that seem interesting to the investigator (see Remark \ref{rem-selectiontype} for a discussion of selection by multiple testing).
Second, we incorporate in our procedures
estimates of the fraction of nulls in one study among the features
selected in the other study. We show that exploiting these estimates
can lead to far more replicability claims while still controlling
the relevant error measures. For single studies, multiple testing
procedures that incorporate estimates of the fraction of nulls, i.e.
the fraction of features in which there is nothing to discover, are
called adaptive procedures \citep{BHadaptive} or plug-in procedures
\citep{finner09}. One of the simplest, and still very popular,
estimators is the plug-in estimator, reviewed in Section
\ref{subsec-reviewplug-in}.
The smaller is the fraction of nulls, the higher is the power gain
due to the use of the plug-in estimator. In this work, there is a
unique opportunity for using adaptivity: even if the fraction of
nulls in each individual study is close to one, the fraction of
nulls in study one (two) among the selected features based on study
two (one) may be small since the selected features are likely to
contain mostly features with false null hypotheses in both studies.
In the data examples we consider, the fraction of nulls in one
study among the selected in the other study was lower than
50\%, and we show in simulations that the power gain from adaptivity can be large.
Our procedures also report the strength of the evidence towards replicability by a number for each outcome, the $r$-value for replicability, introduced in \cite{Heller14b} and reviewed in Section
\ref{sec-notation}. The remainder of the paper is organized as follows. In Section
\ref{sec-notation} we describe the formal mathematical framework. We
introduce our new non-adaptive FWER- and FDR-replicability analysis
procedures in Section \ref{sec-non-adapt}, and their adaptive
variants in Section \ref{sec-adapt}. For simplicity, we shall present the notation, procedures, and theoretical results for one-sided hypotheses tests in Sections \ref{sec-notation}-\ref{sec-adapt}.
In Section \ref{sec-twosided}
we present the necessary modifications for two-sided hypotheses,
which turn out to be minimal. In Section \ref{sec-estthresholds} we suggest selection rules with
optimal properties.
In Sections \ref{sec-sim} and
\ref{sec-example} we present a simulation study and real data
examples, respectively. Conclusions are given in Section
\ref{sec-Discussion}. Lengthy proofs of theoretical results are in
the Appendix.
\subsection{Review of the plug-in estimator for estimating the fraction of nulls}\label{subsec-reviewplug-in}
Let $\pi_0$ be the fraction of null hypotheses. \cite{schweder82} proposed estimating this fraction by $ \frac{\#\{\mbox{$p$-values}>\lambda\}}{m(1-\lambda)},$ where $m$ is the number of
features and $\lambda \in (0,1)$.
The slightly inflated plug-in estimator $$\hat \pi_0 = \frac{\# \{\mbox{$p$-values}>\lambda\}+1}{m(1-\lambda)}$$ has been incorporated into multiple testing procedures in recent years.
For independent $p$-values, \cite{storey2} proved that applying the BH procedure with $m\hat \pi_0$ instead of $m$ controls the FDR, and \cite{finner09} proved that applying Bonferroni with $m\hat \pi_0$ instead of $m$ controls the FWER.
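For concreteness, a short sketch of this estimator (in Python, with
$\lambda$ passed as a tuning parameter; the function name is ours) is:
\begin{verbatim}
import numpy as np

def pi0_plugin(pvalues, lam):
    # Slightly inflated plug-in estimator:
    # (#{p-values > lambda} + 1) / (m * (1 - lambda))
    p = np.asarray(pvalues)
    return (np.sum(p > lam) + 1) / (p.size * (1 - lam))
\end{verbatim}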
Adaptive procedures in single studies have larger power gain over
non-adaptive procedures when the fraction of nulls, $\pi_0$, is
small. This is so because these procedures essentially apply the
original procedure at level $1/\hat \pi_0$ times the nominal level
to achieve FDR or FWER control at the nominal level.
\cite{finner09} showed in simulations that the power gain of using
$m\hat \pi_0$ instead of $m$ can be small when the fraction of nulls
is 60\%, but large when the fraction of nulls is 20\%.
The plug-in estimator is typically less conservative (smaller) the
larger $\lambda$ is. This follows from Lemma 1 in
\cite{Dickhaus12}, that showed that for a single study the
estimator is biased upwards, and that the bias is a decreasing
function of $\lambda$ if the cumulative distribution function (CDF)
of the non-null $p$-values is concave (if the $p$-values are based
on a test statistic whose density is eventually strictly decreasing,
then concavity will hold, at least for small $\lambda$).
\cite{yoav2} noted that the FDR of the BH procedure which
incorporates the plug-in estimator with $\lambda=0.5$ is sensitive
to deviations from the assumption of independence, and it may be
inflated above the nominal level under dependency.
\cite{Blanchard09} further noted that although under
equi-correlation among the test statistics using the plug-in
estimators does not control the FDR with $\lambda=0.5$, it does
control the FDR with $\lambda= q/(q+1+1/m) \approx q$.
\cite{Blanchard09} compared in simulations with dependent test
statistics the adaptive BH procedure using various estimators of the
fraction of nulls for single studies, including the plug-in
estimator with $\lambda \in \{0.05,0.5\}$. Their conclusion was that
the plug-in estimator with $\lambda = 0.05$ was superior to all
other estimators considered, since it had the highest power overall
without inflating the FDR above the 0.05 nominal level.
\section{Notation, goal, and review for replicability analysis }\label{sec-notation}
Consider a family of $m$ features examined in two independent studies. The effect of feature $j\in \{ 1,\ldots,m\}$ in study $i\in \{1,2\}$ is $\theta_{ij}$. Let $H_{ij}$ be the hypothesis indicator, so $H_{ij} = 0$ if $\theta_{ij} = \theta_{ij}^0$, and $H_{ij} = 1$ if $\theta_{ij}> \theta_{ij}^0$.
Let $\vec H_j = (H_{1j}, H_{2j})$. The set of possible states of
$\vec H_j$ is $\mathcal{H} = \{(0,0), (1,0),
(0,1), (1,1) \}.$ The
goal of inference is to discover as many features as possible with
$\vec H_j \notin \mathcal H^0$, where $\mathcal
H^0\subset\mathcal{H}.$ For replicability analysis, $\mathcal{H}^0 =
\mathcal{H}^0_{NR} = \{(0,0), (0,1), (1,0)\}$. For a typical meta-analysis, $\mathcal{H}^0 =\{(0,0)\}$, and the number of features with state $(0,0)$ can be much smaller than the number of features with states in $\mathcal{H}^0_{NR}$, see the example in Section \ref{subsec-big-example}.
We aim to discover as many features with $\vec H_j = (1,1)$
as possible, i.e., true
replicability claims, while controlling for false replicability
claims, i.e. replicability claims for features with $\vec H_j
\in\mathcal{H}^0_{NR} .$ Let $\mathcal R$ be the set of indices of
features with replicability claims. The FWER and FDR for
replicability analysis are defined as follows:
$$FWER = \textmd{Pr}\left(|\mathcal R \cap \{j: \vec H_j \in \mathcal{H}^0_{NR} \} |>0\right), \quad FDR = E\left(\frac{|\mathcal R \cap \{j: \vec H_j \in \mathcal{H}^0_{NR} \} |}{\max(|\mathcal R|,1 )} \right), $$
where $E(\cdot)$ is the expectation.
Our novel procedures first select promising features from each study solely based on the data of that study.
Let $\mathcal S_i$ be the index set of features selected in
study $i,$ for $i\in \{1,2\},$ and let $S_i= |\mathcal S_i|$ be
their number. The procedures proceed towards making replicability
claims only on the index set of features which are selected in both
studies, i.e.
$\mathcal{S}_1\cap \mathcal{S}_2.$ For example, selected sets may include all (or a subset of) features with two-sided $p$-values below $\alpha$. See Remark \ref{rem-selectiontype} for a discussion about the selection process.
Let $P_i=(P_{i1}, \ldots,P_{im})$ be the $m$-dimensional random
vector of $p$-values of study $i\in\{1,2\},$ and $p_i=(p_{i1},
\ldots,p_{im})$ be its realization.
We shall assume the following condition is satisfied for $(P_1,
P_2)$:
\begin{definition}The studies satisfy the \emph{null independence-across-studies condition} if
for all $j$ with $\vec{H}_j\in \mathcal{H}^0_{NR}$, if $H_{1j}=0$
then $P_{1j}$ is independent of $P_2$, and if $H_{2j}=0$ then
$P_{2j}$ is independent of $P_1$.
\end{definition}
This condition is clearly satisfied if the two studies are
independent, but it also allows the pairs $(P_{1j},
P_{2j})$ to be dependent for $\vec H_j\notin \mathcal{H}^0_{NR}$.
Note moreover that this condition does not pose any restriction on
the joint distribution of $p$-values within each study.
We shall assess the evidence towards replicability by a quantity we call the $r$-value, introduced in \cite{Heller14b}, which is the adjusted $p$-value for replicability analysis.
In a single study, the adjusted $p$-value of a feature is the
smallest level (of FWER or FDR) at which it is discovered
\citep{wright92}. Similarly, for feature $j$,
the $r$-value is the smallest level (of FWER or FDR) at which
feature $j$ is declared replicable.
The simplest example of $p$-value adjustment for a single
study $i\in \{1,2\}$ is Bonferroni, with adjusted $p$-values
$p_{ij}^{adj-Bonf}=m p_{ij}, j=1,\ldots, m$. The BH adjusted
$p$-values build upon the Bonferroni adjusted $p$-values
\citep{reiner2}.
The BH adjusted $p$-value for
feature $j$ is defined to be
$$ \min_{\{k:\, p_{ik}^{adj-Bonf}\geq p_{ij}^{adj-Bonf},\, k=1,\ldots,m \}} \frac{p_{ik}^{adj-Bonf}}{rank(p_{ik}^{adj-Bonf})},$$
where $rank(p_{ik}^{adj-Bonf})$ is the rank of the Bonferroni
adjusted $p$-value for feature $k$, with maximum rank for ties.
For two studies, we can for example define the Bonferroni-on-max
$r$-values to be $r_j^{Bonf-max}=m\max (p_{1j}, p_{2j}), j=1,\ldots,
m$. The BH-on-max $r$-values build upon the Bonferroni-on-max
$r$-values exactly as in single studies. The BH-on-max $r$-value for
feature $j$ is defined to be
$$ \min_{\{k: \,r_k^{Bonf-max}\geq r_j^{Bonf-max},\, k=1,\ldots,m \}} \frac{r_k^{Bonf-max}}{rank(r_k^{Bonf-max})},$$
where $rank(r_k^{Bonf-max})$ is the rank of the Bonferroni-on-max
adjusted $p$-value for feature $k$, with maximum rank for ties.
Claiming as replicable the findings of all features with BH-on-max
$r$-values at most $\alpha$ is equivalent to considering as
replicability claims the discoveries from applying the BH procedure
at level $\alpha$ on the maximum of the two studies $p$-values,
suggested in \cite{benjamini09}. In this work we introduce
$r$-values that are typically much smaller than the above-mentioned
$r$-values for features selected in both studies, with the same
theoretical guarantees upon rejection at level $\alpha$, and thus
preferred for replicability analysis of two studies.
\section{Replicability among the selected in each of two studies}\label{sec-non-adapt}
Let $c\in (0,1)$, with default value $c=0.5$, be the fraction of the
significance level ``dedicated" to study one. The Bonferroni
$r$-values are
$$r^{Bonf}_j =
\max\left(\frac{S_2p_{1j}}{c}, \frac{S_1p_{2j}}{1-c}\right), \quad j
\in \mathcal S_1\cap \mathcal S_2.
$$
The FDR $r$-values build upon the Bonferroni $r$-values and are
necessarily smaller: \begin{align} r^{FDR}_j = \min_{\{i:\,
r^{Bonf}_i\geq r^{Bonf}_j,\, i \in \mathcal S_1\cap \mathcal S_2
\}} \frac{r^{Bonf}_i}{rank(r^{Bonf}_i)},\quad j \in \mathcal S_1\cap
\mathcal S_2,\label{r_FDR}\end{align} where $rank(r^{Bonf}_i)$ is
the rank of the Bonferroni $r$-value for feature $i \in \mathcal
S_1\cap \mathcal S_2$, with maximum rank for ties.
Declaring as replicated all features with Bonferroni $r$-values at most $\alpha$ controls the FWER at level $\alpha$, and declaring as replicated all features with FDR $r$-values at most $\alpha$ controls the FDR at level $\alpha$ under independence, see Section \ref{subsec-theoreticalproperties}.
The relation between the Bonferroni and FDR $r$-values is similar to
that of the adjusted Bonferroni and adjusted BH $p$-values described
in Section \ref{sec-notation}. For the features selected in both
studies, if less than half of the features are selected by each
study, it is easy to show that the FDR (Bonferroni) $r$-values given above, using
$c=0.5$, will be smaller than (1) the
BH-on-max (Bonferroni-on-max) $r$-values described in Section \ref{sec-notation}, and (2) the $r$-values that correspond to the FDR-controlling symmetric procedure in \cite{Bogomolov13}, which
will typically be smaller than the BH-on-max $r$-values but larger than the FDR $r$-values in (\ref{r_FDR}), due to taking into account the multiplicity of all features considered.
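A sketch of the computation of both types of $r$-values (our
illustration; \texttt{p1}, \texttt{p2} map feature indices to $p$-values
and \texttt{S1}, \texttt{S2} are the selected index sets) is:
\begin{verbatim}
import numpy as np

def replicability_rvalues(p1, p2, S1, S2, c=0.5):
    # Bonferroni r-values on the features selected in both studies,
    # followed by the BH-type step-up pass that yields the FDR r-values.
    both = sorted(set(S1) & set(S2))
    s1, s2 = len(S1), len(S2)
    r_bonf = {j: max(s2 * p1[j] / c, s1 * p2[j] / (1 - c)) for j in both}
    order = sorted(both, key=lambda j: r_bonf[j])
    r_fdr, running_min = {}, np.inf
    for rank in range(len(order), 0, -1):   # largest rank first
        j = order[rank - 1]
        running_min = min(running_min, r_bonf[j] / rank)
        r_fdr[j] = running_min
    return r_bonf, r_fdr
\end{verbatim}
Features with $r$-values at most $\alpha$ are then declared replicated.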
\subsection{Theoretical properties}\label{subsec-theoreticalproperties}
Let $\alpha\in (0,1)$ be the level of control desired, e.g. $\alpha = 0.05$. Let $\alpha_1=c\alpha$ be the fraction of $\alpha$ for study one, e.g. $\alpha_1 = \alpha/2$.
The procedure that makes replicability claims for features with
Bonferroni $r$-values at most $\alpha$ is a special case of the
following more general procedure.
\begin{procedure}\label{proc-FWER}
FWER-replicability analysis on the selected features $\mathcal
S_1\cap \mathcal S_2$:
\begin{enumerate}
\item Apply a FWER controlling
procedure at level $\alpha_1$ on the set $\{p_{1j}, j\in
\mathcal{S}_2\},$ and let $\mathcal R_{1}$ be the set of indices of discovered features. Similarly, apply a FWER controlling procedure at level
$\alpha-\alpha_1$ on the set $\{p_{2j}, j\in \mathcal{S}_1\},$ and let
$\mathcal R_{2}$ be the set of indices of discovered features.
\item The set of indices of features with replicability claims is $\mathcal R_{1}\cap \mathcal R_{2}$.
\end{enumerate}
\end{procedure}
When using Bonferroni in Procedure \ref{proc-FWER},
feature $j\in \mathcal S_1\cap \mathcal S_2$ is among the
discoveries if and only if $(p_{1j}, p_{2j})\leq (\alpha_1/S_2,
(\alpha-\alpha_1)/S_1)$. Therefore, claiming replicability for all features
with Bonferroni $r$-values at most $\alpha$ is equivalent to
Procedure \ref{proc-FWER} using Bonferroni.
\begin{theorem}\label{thm-fwer}
If the null independence-across-studies condition is satisfied,
then Procedure \ref{proc-FWER} controls the FWER for replicability
analysis at level $\alpha$.
\end{theorem}
\begin{proof}
Let $V_1 =|\mathcal R_1 \cap \{j:H_{1j}=0\} | $ and $V_2 = |\mathcal R_2 \cap \{j:H_{2j}=0\}|$ be the number of true
null hypotheses rejected in study one and in study two,
respectively, by Procedure \ref{proc-FWER}. Then the FWER for replicability analysis is
\begin{eqnarray}
E(\textbf{I}[V_1+V_2>0])\leq E(E(\textbf{I}[V_1>0]|P_2))
+E(E(\textbf{I}[V_2>0]|P_1)).
\nonumber
\end{eqnarray}
Clearly, $E(\textbf{I}[V_1>0]|P_2)\leq \alpha_1$ since $P_{1j}$ is
independent of $P_2$ for all $j$ with $H_{1j}=0$, and a FWER controlling
procedure is applied on $\{p_{1j}, j\in
\mathcal{S}_2\}.$ Similarly, $E(\textbf{I}[V_2>0]|P_1)\leq \alpha-\alpha_1$. It thus follows that the FWER for replicability analysis is at most $\alpha$.
\end{proof}
The procedure that rejects the features with FDR $r$-values at most
$\alpha$ is equivalent to the following procedure, see Lemma
\ref{lem_fdr} for a proof.
\begin{procedure}\label{procfdrsym}
FDR-replicability analysis on the selected features $\mathcal
S_1\cap \mathcal S_2$:
\begin{enumerate}
\item Let $$R\triangleq\max\left\{r:
\sum_{j\in\mathcal S_1\cap \mathcal{S}_2}\textbf{I}\left[(p_{1j},
p_{2j})\leq\left(\frac{r\alpha_1}{S_2},
\frac{r(\alpha-\alpha_1)}{S_1}\right)\right] = r\right\}.$$
\item The set of indices with replicability claims is
\begin{align*}\mathcal R= \{j: (p_{1j},
p_{2j})\leq\left(\frac{R\alpha_1}{S_2},
\frac{R(\alpha-\alpha_1)}{S_1}\right), j \in \mathcal S_1\cap
\mathcal S_2\}.\end{align*}
\end{enumerate}
\end{procedure}
This procedure controls the FDR for replicability analysis at level
$\alpha$ as long as the selection rules by which the sets
$\mathcal{S}_1$ and $\mathcal{S}_2$ are selected are stable (this is a very lenient requirement, see \cite{Bogomolov13} for examples).
\begin{definition}\citep{Bogomolov13} A stable selection rule satisfies the following condition: for any
selected feature, changing its $p$-value so that the feature is
still selected while all other $p$-values are held fixed, will not
change the set of selected features.\end{definition}
\begin{theorem}\label{indep}
If the null independence-across-studies condition is satisfied, and
the selection rules by which the sets
$\mathcal{S}_1$ and $\mathcal{S}_2$ are selected are stable,
then Procedure \ref{procfdrsym} controls the FDR for replicability
analysis at level $\alpha$ if one of the following items is
satisfied:
\begin{enumerate}
\item[(1)] The $p$-values from true null hypotheses
within each study are each independent of all other $p$-values.
\item[(2)] Arbitrary dependence among the $p$-values within each study, when $S_i$ in Procedure \ref{procfdrsym} is replaced by $S_i\sum_{k=1}^{S_i}1/k$, for $i=1,2$.
\end{enumerate}
\end{theorem}
See Appendix \ref{app-thm-fdr-indep} for a proof.
\begin{remark}
The FDR $r$-values for the procedure that is valid for arbitrary
dependence, denoted by $\tilde{r}_j^{FDR}, j\in \mathcal{S}_1\cap \mathcal{S}_2$, are computed using formula (\ref{r_FDR}) where the
Bonferroni $r$-values $r_j^{Bonf}$ are replaced by
\begin{align}\tilde{r}_j =
\max\left(\frac{(\sum_{i=1}^{S_2}1/i)S_2p_{1j}}{c},
\frac{(\sum_{i=1}^{S_1}1/i)S_1p_{2j}}{1-c}\right), \quad j \in
\mathcal S_1\cap \mathcal S_2.\label{rtilda}
\end{align}
\end{remark}
\begin{remark}\label{rem-selectiontype}
An intuitive approach towards replicability may be to apply a multiple testing procedure on each study separately, with discovery sets $\mathcal D_1$ and $\mathcal D_2$ in study one and two, respectively, and then claim replicability on the set $\mathcal D_1\cap \mathcal D_2$. However, even if the multiple testing procedure has guaranteed FDR control at level $\alpha$, it is easy to construct examples where the expected fraction of false replicability claims in $\mathcal D_1\cap \mathcal D_2$ will be far larger than $\alpha$. An extreme example is the following: half of the features have $\vec H_j = (1,0)$, the remaining half have $\vec H_j = (0,1)$, and the signal is very strong. Then in study one all features with $\vec H_j = (1,0)$ and few features with $\vec H_j = (0,1)$ will be discovered, and in study two all features with $\vec H_j = (0,1)$ and few features with $\vec H_j = (1,0)$ will be discovered, resulting in a non-empty set $\mathcal D_1\cap \mathcal D_2$ which contains only false replicability claims. Interestingly, if the multiple testing procedure is Bonferroni at level $\alpha$, then the FWER on replicability claims of the set $\mathcal D_1\cap \mathcal D_2$ is at most $\alpha$. However, this procedure (which can be viewed as Bonferroni on the maximum of the two study $p$-values) can be far more conservative than our suggested Bonferroni-type procedure. If we select in each study separately all features with $p$-values below $\alpha/2$, resulting in selection sets $\mathcal S_1$ and $\mathcal S_2$ in study one and two, respectively, then using our Bonferroni-type procedure we claim replicability for features with $(p_{1j}, p_{2j})\leq (\alpha/(2S_2),\alpha/(2S_1))$. Our discovery thresholds, $ (\alpha/(2S_2),\alpha/(2S_1))$, are both larger than $\alpha/m$ as long as the number of features selected by each study is less than half, and thus can lead to more replicability claims with FWER control at level $\alpha$.
\end{remark}
\section{Incorporating the plug-in estimates}\label{sec-adapt}
When the non-null hypotheses are mostly non-null in both studies,
i.e., there are more features with $\vec H_j = (1,1)$ than with
$\vec H_j = (1,0)$ or $\vec H_j = (0,1)$, then the non-adaptive
procedures for replicability analysis may be over conservative. The
conservativeness follows from the fact that the fraction of null
hypotheses in one study among the selected in the other study is small.
The set $\mathcal{S}_1$ is more likely to contain hypotheses with
$\vec H_j \in \{(1,0), (1,1)\}$ than hypotheses with $\vec H_j \in
\{(0,0), (0,1)\},$ and therefore the fraction of true null
hypotheses in study two among the selected in study one, i.e.,
$\sum_{j \in \mathcal{S}_1} (1-H_{2j})/S_1$, may be much smaller
than one (especially if there are more features with $\vec H_j =
(1,1)$ than with $\vec H_j = (1,0)$). Similarly, the fraction of
true null hypotheses in study one among the selected based on study
two, i.e., $\sum_{j \in \mathcal{S}_2} (1-H_{1j})/S_2$, may be much
smaller than one.
The non-adaptive procedures for replicability analysis in Section \ref{sec-non-adapt} control the error-rates at levels that are conservative by the expectation of these fractions.
Procedures \ref{proc-FWER} using Bonferroni and \ref{procfdrsym}
control the FWER and FDR, respectively, at level which is at most
\begin{equation}
\alpha_1E\left(\frac{\sum_{j \in \mathcal{S}_2}
(1-H_{1j})}{S_2}\right)+(\alpha-\alpha_1)E\left(\frac{\sum_{j\in
\mathcal{S}_1} (1-H_{2j})}{S_1}\right), \nonumber
\end{equation}
which can be much
smaller than $\alpha$ if the above expectations are far smaller
than one. This upper bound follows for FWER since an upper bound
for the FWER of a Bonferroni procedure is the desired level times
the fraction of null hypotheses in the family tested, and for the FDR from the proof of item 1 of Theorem \ref{indep}.
We therefore suggest adaptive variants, that first estimate the expected fractions of true null hypotheses among the selected. We use the slightly inflated plug-in estimators (reviewed in Section \ref{subsec-reviewplug-in}):
\begin{align}\label{eq-adaptiveestimates}
\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}} = \frac{1+\sum_{j\in
\mathcal{S}_{2,\lambda}}\textbf{I}(P_{1j}>\lambda)}{S_{2,\lambda}(1-\lambda)};\,\,\,
\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}} = \frac{1+\sum_{j\in
\mathcal{S}_{1,\lambda}}\textbf{I}(P_{2j}>\lambda)}{S_{1,\lambda}(1-\lambda)},
\end{align}
where $0<\lambda<1$ is a fixed parameter, $\mathcal{S}_{i,\lambda} = \mathcal{S}_i\cap \{j:P_{ij}\leq \lambda\}$,
and $S_{i,\lambda} = |\mathcal{S}_{i,\lambda}|$, for $i=1,2$.
Although $\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}$ and $\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}$ depend on the tuning parameter $\lambda,$ we suppress the dependence of the estimates on
$\lambda$ for ease of notation.
The adaptive Bonferroni $r$-values for fixed $c = \alpha_1/\alpha$
are: $$r^{adaptBonf}_j = \max\left(\frac{\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}
S_{2,\lambda}p_{1j}}{c}, \frac{\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}
S_{1,\lambda}p_{2j}}{1-c}\right), \quad j \in \mathcal
S_{1,\lambda}\cap \mathcal S_{2,\lambda}.$$ As in
Section \ref{sec-non-adapt}, the adaptive FDR $r$-values build upon the
adaptive Bonferroni $r$-values:
$$ r^{adaptFDR}_j = \min_{\{i:\, r^{adaptBonf}_i\geq r^{adaptBonf}_j,\, i \in \mathcal{S}_{1,\lambda}\cap\mathcal{S}_{2,\lambda} \}} \frac{r^{adaptBonf}_i }{rank(r^{adaptBonf}_i)}, \quad j \in
\mathcal S_{1,\lambda}\cap \mathcal S_{2,\lambda}$$ where
$rank(r^{adaptBonf}_i)$ is the rank of the adaptive Bonferroni
$r$-value for feature $i \in \mathcal{S}_{1,\lambda}\cap
\mathcal{S}_{2,\lambda}$, with maximum rank for ties. Declaring as replicated all features with adaptive Bonferroni/FDR $r$-values at most $\alpha$
controls the FWER/FDR for replicability analysis at level $\alpha$ under independence, see
Section \ref{sec-adapt-theoreticalproperties}.
The non-adaptive procedures in Section \ref{sec-non-adapt} only
require as input $\{p_{1j}: j \in \mathcal S_1 \}$ and $\{p_{2j}: j \in \mathcal S_2 \}$. However, if $\{p_{1j}: j \in \mathcal S_1\cup \mathcal S_2 \}$ and $\{p_{2j}: j \in \mathcal S_1\cup \mathcal S_2 \}$ are available, then the adaptive procedures with $\lambda = \alpha$ are attractive alternatives with better power, as demonstrated in our simulations detailed in Section \ref{sec-sim}.
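A sketch of the computation of the plug-in estimates in
(\ref{eq-adaptiveestimates}) (our illustration, with the same
conventions as the sketch in Section \ref{sec-non-adapt}) is:
\begin{verbatim}
def adaptive_pi0(p1, p2, S1, S2, lam=0.05):
    # hat{pi}_0^I: fraction of study-one nulls among the features
    # selected in study two (and vice versa for hat{pi}_0^II).
    S1lam = [j for j in S1 if p1[j] <= lam]
    S2lam = [j for j in S2 if p2[j] <= lam]
    pi0_I  = (1 + sum(p1[j] > lam for j in S2lam)) / (len(S2lam) * (1 - lam))
    pi0_II = (1 + sum(p2[j] > lam for j in S1lam)) / (len(S1lam) * (1 - lam))
    return pi0_I, pi0_II, S1lam, S2lam
\end{verbatim}
The adaptive $r$-values are then obtained from the non-adaptive formulas
with $S_{i,\lambda}$ and these estimates in place of $S_i$.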
\subsection{Theoretical properties}\label{sec-adapt-theoreticalproperties}
The following Procedure \ref{proc-bonferroniadapt} is equivalent to declaring as replicated all
features with Bonferroni adaptive $r$-values at most $\alpha$.
\begin{procedure}\label{proc-bonferroniadapt}
Adaptive-Bonferroni-replicability analysis on $\{(p_{1j},p_{2j}): j \in \mathcal S_1\cup S_2 \}$ with input parameter $\lambda$:
\begin{enumerate}
\item Compute $\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}, \hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}$ and $S_{1,\lambda}, S_{2,\lambda}$.
\item Let $\mathcal R_{1} = \{j\in \mathcal{S}_{1,\lambda}: p_{1j}\leq \alpha_1/(S_{2,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}})\}$ and $\mathcal R_{2} = \{j\in \mathcal{S}_{2,\lambda}: p_{2j}\leq (\alpha-\alpha_1)/(S_{1,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}})\}$
be the sets of indices of features discovered in studies one and
two, respectively.
\item The set of indices of features with replicability claims is $\mathcal R_{1}\cap \mathcal R_{2}$.
\end{enumerate}
\end{procedure}
\begin{theorem}\label{thm-bonferroniadapt}
If the null independence-across-studies condition is satisfied, and
the $p$-values from true null hypotheses within each study are
jointly
independent,
then Procedure \ref{proc-bonferroniadapt} controls
the FWER for replicability analysis at level $\alpha$.
\end{theorem}
\begin{proof}
It is enough to prove that $E(\textbf{I}[V_1>0]|P_2)\leq \alpha_1$
and $E(\textbf{I}[V_2>0]|P_1)\leq \alpha-\alpha_1,$ as we showed in
the proof of Theorem \ref{thm-fwer}. These inequalities essentially
follow from the fact that the Bonferroni plug-in procedure controls the FWER
\citep{finner09}. We will only show that
$E(\textbf{I}[V_1>0]|P_2)\leq \alpha_1$, since the proof that
$E(\textbf{I}[V_2>0]|P_1)\leq \alpha-\alpha_1$ is similar. We shall
use the fact that
\begin{equation}
\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}\geq
\frac{1+\sum_{j\in\mathcal{S}_{2,\lambda}}(1-H_{1j})\textbf{I}(P_{1j}>\lambda)}{S_{2,\lambda}(1-\lambda)}.
\label{eq-lowerboundfractionnulls}
\end{equation}
\begin{eqnarray}
&&E(\textbf{I}[V_1>0]|P_2) = \textmd{Pr}\left(\sum_{i \in \mathcal{S}_{2,\lambda}}(1-H_{1i})\textbf{I}[i\in S_{1,\lambda}, P_{1i}\leq \alpha_1/(S_{2,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}})]>0| P_2\right) \nonumber \\
&& \leq\sum_{i \in \mathcal{S}_{2,\lambda}}(1-H_{1i}) \textmd{Pr}(P_{1i}\leq \min(\lambda, \alpha_1/S_{2,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}})| P_2) \label{eq-bonfadapt1}\\
&&\leq\sum_{i \in \mathcal{S}_{2,\lambda}}(1-H_{1i})
\textmd{Pr}\left(P_{1i}\leq\min\left(\lambda,\frac{\alpha_1}{\left(\frac{1+\sum_{j\in\mathcal{S}_{2,\lambda}}(1-H_{1j})\textbf{I}(P_{1j}>\lambda)}{1-\lambda}\right)}\right)|P_2\right)\label{eq-bonfadapt2}
\\&& = \sum_{i \in \mathcal{S}_{2,\lambda}}(1-H_{1i}) \textmd{Pr}\left(P_{1i}\leq \min\left(\lambda, \frac{\alpha_1}{\left(\frac{1+\sum_{j\in \mathcal{S}_{2,\lambda}, j\neq i}(1-H_{1j})\textbf{I}(P_{1j}>\lambda)}{1-\lambda}\right)}\right)| P_2\right) \notag\\
&& \leq \sum_{i \in \mathcal{S}_{2,\lambda}}(1-H_{1i}) \alpha_1 E\left(1/\left(\frac{1+\sum_{j\in \mathcal{S}_{2,\lambda}, j\neq i}(1-H_{1j})\textbf{I}(P_{1j}>\lambda)}{1-\lambda}\right)| P_2 \right) \label{eq-bonfadapt4}\\
&& \leq \sum_{i \in \mathcal{S}_{2,\lambda}}
(1-H_{1i})\alpha_1/\sum_{j \in \mathcal{S}_{2,\lambda}} (1-H_{1j}) =
\alpha_1. \label{eq-bonfadapt5}
\end{eqnarray}
Inequality (\ref{eq-bonfadapt1}) follows from the Bonferroni
inequality, and
inequality (\ref{eq-bonfadapt2}) follows from (\ref{eq-lowerboundfractionnulls}).
Inequality (\ref{eq-bonfadapt4}) follows from the facts that for $i$ with $H_{1i}=0$, (a) $P_{1i}$ is independent of all null $p$-values from study one and from all $p$-values from study two, and (b) $\textmd{Pr}(P_{1i}\leq x)\leq x$ for all $x\in[0,1].$
Inequality (\ref{eq-bonfadapt5}) follows by
applying Lemma 1 in \cite{yoav2}, which states that if $Y\sim
B(k-1,p)$ then $E(1/(Y+1))<1/(kp)$, to $Y = \sum_{j\in
\mathcal{S}_{2,\lambda}, j\neq
i}(1-H_{1j})\textbf{I}(P_{1j}>\lambda)$, which is distributed
$B(\sum_{j \in \mathcal{S}_{2,\lambda}, j \neq i}(1-H_{1j}),
1-\lambda)$ if the null $p$-values within each study are uniformly
distributed. It is easy to show, using similar arguments, that
inequality (\ref{eq-bonfadapt5}) remains true when the null
$p$-values are stochastically larger than uniform.
\end{proof}
Declaring as replicated all features with adaptive FDR $r$-values at most
$\alpha$ is equivalent to Procedure \ref{procfdrsym} where $S_1$ and
$S_2$ are replaced by $S_{1,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}$ and
$S_{2,\lambda}\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}$ respectively, and $\mathcal{S}_1\cap
\mathcal{S}_2$ is replaced by $\mathcal{S}_{1,\lambda}\cap
\mathcal{S}_{2,\lambda},$ see Lemma \ref{lem_fdr} for a proof.
\begin{theorem}\label{thm-proc-fdr-adaptive}
If the null independence-across-studies condition holds, the
$p$-values corresponding to true null hypotheses are each
independent of all the other $p$-values, and the selection rules by
which the sets $\mathcal{S}_1$ and $\mathcal{S}_2$ are selected are
stable, then declaring as replicated all features with adaptive FDR $r$-values at most $
\alpha$ controls the
FDR for replicability analysis at level $\alpha$.
\end{theorem}
See Appendix \ref{app-thm-fdr-indep} for a proof.
\section{Directional replicability analysis for two-sided alternatives}\label{sec-twosided}
So far we have considered one sided alternatives. If a two-sided alternative is considered for each feature in each study, and the aim is to discover the features that have replicated effect in the same direction in both studies, the following simple modifications are necessary.
For feature $j\in \{1,\ldots,m\}$, the left- and right-sided
$p$-values for study $i\in \{1,2\}$ are denoted by $p^L_{ij}$ and
$p^R_{ij}$, respectively. For continuous test statistics, $p^R_{ij}
= 1-p^L_{ij}$.
For directional replicability analysis, the selection step has to be modified to include also the selection of the direction of testing. The set of
features selected is the
subset of features that are selected from both studies, for which the
direction of the alternative with the smallest one-sided $p$-value
is the same for both studies, i.e.,
$$\mathcal{S}\triangleq\mathcal{S}_1\cap\mathcal{S}_2\cap\left(\{j: \max(p_{1j}^R,
p_{2j}^R)<0.5\}\cup\{j: \max(p_{1j}^L, p_{2j}^L)<0.5\} \right).$$ In
addition, define for $j\in \mathcal{S}_1\cup\mathcal{S}_2,$
\begin{equation*}
p'_{1j} = \left\{
\begin{array}{rl}
p_{1j}^L & \text{if } p_{2j}^L< p_{2j}^R,\\
p_{1j}^R & \text{if } p_{2j}^L> p_{2j}^R,
\end{array} \right.
\end{equation*}
\begin{equation*}
p'_{2j} = \left\{
\begin{array}{rl}
p_{2j}^L & \text{if } p_{1j}^L< p_{1j}^R,\\
p_{2j}^R & \text{if } p_{1j}^L> p_{1j}^R.
\end{array} \right.
\end{equation*}
The Bonferroni and FDR $r$-values are computed for features in
$\mathcal{S}$ using the formulae given in Sections
\ref{sec-non-adapt} and \ref{sec-adapt} (where
$\mathcal{S}_{1,\lambda}$ and $ \mathcal{S}_{2,\lambda}$ are the selected sets in Section \ref{sec-adapt}), with the following
modifications: the set $\mathcal{S}_1\cap \mathcal{S}_2$ is replaced
by $\mathcal{S}$, and $p_{1j}$ and $p_{2j}$ are replaced by
$p'_{1j}$ and $p'_{2j}$ for $j\in
\mathcal{S}_1\cup\mathcal{S}_2.$
As in Sections \ref{sec-non-adapt} and \ref{sec-adapt}, features with $r$-values at most $\alpha$ are declared
as replicated at level $\alpha$, in the direction selected.
The corresponding procedures remain valid, with the
theoretical guarantees of directional FWER and FDR control for
replicability analysis on the modified selected set above, despite
the fact that the direction of the alternative for establishing
replicability was not known in advance. This is remarkable, since it
means that there is no additional penalty, beyond the penalty for
selection used already in the above procedures, for the fact that
the direction for establishing replicability is also decided upon
selection. The proofs are similar to the proofs provided for one-sided hypotheses and are therefore omitted.
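A sketch of the modified selection and $p$-value assignment (our
illustration; \texttt{p1L}, \texttt{p1R}, etc., map feature indices to
the one-sided $p$-values) is:
\begin{verbatim}
def directional_inputs(p1L, p1R, p2L, p2R, S1, S2):
    union = set(S1) | set(S2)
    # keep only features whose smaller one-sided p-value points in
    # the same direction in both studies
    S = [j for j in set(S1) & set(S2)
         if max(p1R[j], p2R[j]) < 0.5 or max(p1L[j], p2L[j]) < 0.5]
    # each study's p-value is taken in the direction favored by the
    # other study
    p1p = {j: p1L[j] if p2L[j] < p2R[j] else p1R[j] for j in union}
    p2p = {j: p2L[j] if p1L[j] < p1R[j] else p2R[j] for j in union}
    return S, p1p, p2p
\end{verbatim}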
\section{Estimating the selection thresholds}\label{sec-estthresholds}
When the full data for both studies is available, we first need to
select the promising features from each study based on the data in
this study. If the selection is based on $p$-values, then our first
step will include selecting the features with $p$-values below
thresholds $t_1$ and $t_2$ for studies one and two, respectively. The
thresholds for selection, $(t_1,t_2)\in (0,1]^2$, affect power: if
$(t_1,t_2)$ are too low, features with $\vec H_j\notin
\mathcal{H}^0_{NR}$ may not be considered for replicability even if
they have a chance of being discovered upon selection, thus
resulting in power loss; if $(t_1,t_2)$ are too high, too many
features with $\vec H_j\in \mathcal{H}^0_{NR}$ will be considered
for replicability making it difficult to discover the true
replicated findings, thus resulting in power loss.
We suggest automated methods for choosing $(t_1,t_2)$, based on
$(p_1,p_2)$ and the level of FWER or FDR control desired, which are
based on the following principle: choose the values $(t_1,t_2)$ so
that the set of discovered features coincides with the set of
selected features. We show in simulations in Section \ref{sec-sim}
that data-dependent thresholds
may lead to more powerful procedures than procedures with a-priori fixed
thresholds.
Let $\mathcal{S}_i(t_i)=\{j:p_{ij}\leq t_i\}$ be the index set of
selected features from study $i,$ for $i\in\{1,2\}.$ We suggest the
selection thresholds $(t^*_1,t^*_2)$ that solve the two equations
\begin{equation}\label{eq-nonlin}
t_1 = \frac{\alpha_1}{|\mathcal{S}_2(t_2)|}; \quad t_2 =
\frac{\alpha-\alpha_1}{|\mathcal{S}_1(t_1)|},
\end{equation}
for Procedure \ref{proc-FWER} using Bonferroni, and the selection
thresholds $(t^*_1,t^*_2)$ that solve the two equations
\begin{equation}\label{sel-fwer-adapt}
t_1 = \frac{\alpha_1}{|\mathcal{S}_{2,\lambda}(t_2)|\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}(t_2)};
\quad t_2 = \frac{\alpha-\alpha_1}{|\mathcal{S}_{1,
\lambda}(t_1)|\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}(t_1)},
\end{equation}
for the adaptive Procedure \ref{proc-bonferroniadapt} for FWER
control, where $\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}(t_2)$ and $\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}(t_1)$ are the estimators
defined in (\ref{eq-adaptiveestimates}) with
$\mathcal{S}_1=\mathcal{S}_{1,\lambda}(t_1)=\{j:P_{1j}\leq\min(\lambda,
t_1)\}$ and
$\mathcal{S}_2=\mathcal{S}_{2,\lambda}(t_2)=\{j:P_{2j}\leq\min(\lambda,
t_2)\}$. We show in Appendix \ref{app-estthresholdsTheoretical}
that these choices are not
dominated by any other choices (i.e., there do not exist other
choices $(t_1,t_2)$ that result in larger rejection thresholds for
the $p$-values in both studies).
Similarly, we suggest the selection
thresholds $(t^*_1,t^*_2)$ that solve the two equations
\begin{equation}
t_1=\frac{|\mathcal{S}_1(t_1)\cap
\mathcal{S}_2(t_2)|\alpha_1}{|\mathcal{S}_2(t_2)|}; \quad
t_2=\frac{|\mathcal{S}_1(t_1)\cap
\mathcal{S}_2(t_2)|(\alpha-\alpha_1)}{|\mathcal{S}_1(t_1)|},\label{sel-fdr}
\end{equation}
for Procedure \ref{procfdrsym} for FDR control, and the selection
thresholds $(t^*_1,t^*_2)$ that solve the two equations
\begin{align}
&t_1=\frac{|\mathcal{S}_{1,\lambda}(t_1)\cap \mathcal{S}_{2,\lambda}(t_2)|\alpha_1}{|\mathcal{S}_{2,\lambda}(t_2)|\hat{\pi}_0^{\MakeUppercase{\romannumeral 1}}(t_2)},\notag\\
&t_2=\frac{|\mathcal{S}_{1,\lambda}(t_1)\cap
\mathcal{S}_{2,\lambda}(t_2)|(\alpha-\alpha_1)}{|\mathcal{S}_{1,\lambda}(t_1)|\hat{\pi}_0^{\MakeUppercase{\romannumeral 2}}(t_1)},\label{sel-fdr-adapt}
\end{align}
for the adaptive FDR-controlling procedure in Section
\ref{sec-adapt}.
If the solution does not exist, no replicability claims are made.
There may be more than one solution to the equations
(\ref{eq-nonlin}) - (\ref{sel-fdr-adapt}).
In our simulations and real data examples, we set $(t_1^*, t_2^*)$ to the first solution output by the algorithm used for solving the system of non-linear
equations. We show in simulations that using data-dependent thresholds $(t_1^*,t_2^*)$ results in power close to the power obtained with the optimal (yet unknown) fixed thresholds
$t_1=t_2$, and that the nominal level of FWER/FDR is maintained under independence as well as under dependence as long as we use $\lambda=\alpha$ for the adaptive procedures.
We prove in Appendix \ref{app-estthresholdsTheoretical} that the nominal level of the non-adaptive procedures is controlled even though the selection thresholds $(t^*_1,t^*_2)$ are data-dependent, if the $p$-values are exchangeable under the null.
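A simple fixed-point iteration for the non-adaptive FWER equations
(\ref{eq-nonlin}) might look as follows (a sketch; convergence is not
guaranteed in general, and eqs.
(\ref{sel-fwer-adapt})--(\ref{sel-fdr-adapt}) are handled analogously by
updating their right-hand sides):
\begin{verbatim}
import numpy as np

def solve_thresholds(p1, p2, alpha=0.05, c=0.5, n_iter=1000, tol=1e-12):
    # t1 = alpha1 / |S2(t2)|, t2 = (alpha - alpha1) / |S1(t1)|,
    # with S_i(t) = {j : p_ij <= t}.
    p1, p2 = np.asarray(p1), np.asarray(p2)
    a1, a2 = c * alpha, (1 - c) * alpha
    t1, t2 = a1, a2                      # arbitrary starting point
    for _ in range(n_iter):
        s2 = np.sum(p2 <= t2)            # |S_2(t2)|
        if s2 == 0:
            return None
        new_t1 = a1 / s2
        s1 = np.sum(p1 <= new_t1)        # |S_1(t1)|
        if s1 == 0:
            return None
        new_t2 = a2 / s1
        if abs(new_t1 - t1) < tol and abs(new_t2 - t2) < tol:
            return new_t1, new_t2
        t1, t2 = new_t1, new_t2
    return None                          # no solution found
\end{verbatim}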
\section{Simulations}\label{sec-sim}
We define the configuration vector $\vec f = (f_{00},f_{10},f_{01},f_{11})$, where $f_{lk}=\sum_{j=1}^mI[\vec{H}_j=(l,k)]/m$ is the proportion of features with state $(l,k)$, for $(l,k)\in\{0,1\}^2$. Given $\vec f$, measurements for $mf_{lk}$ features were generated from
$N(\mu_l,1)$ for study one, and $N(\mu_k,1)$ for study two, where
$\mu_0=0$ and $\mu_1=\mu>0$. One-sided $p$-values were computed for
each feature in each study. We varied $\vec f$ and $\mu \in
\{2,2.5,\ldots,6\}$ across simulations. We also examined the effect
of dependence within each study on the suggested procedures, by
allowing for equi-correlated test statistics within each study, with
correlation $\rho = \{0,0.25,0.5,0.75, 0.95\}$. Specifically, the
noise for feature $j$ in study $i\in \{1,2\}$ was
$e_{ij}=\sqrt{\rho}Z_{i0}+\sqrt{1-\rho}Z_{ij}$, where $\{Z_{ij}:
i=1,2, j=1,\ldots,m\}$ are independent identically distributed
$N(0,1)$ and $Z_{i0}$ is $N(0,1)$ random variable that is
independent of $\{Z_{ij}: i=1,2, j=1,\ldots,m\}$. The $p$-value for
feature $j$ in study $i$ was $1-\Phi(\mu_{ij}+e_{ij})$, where
$\Phi(\cdot)$ is the CDF of the standard normal distribution, and
$\mu_{ij}$ is the expectation for the signal of feature $j$ in study
$i$.
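A sketch of one replication of this design (our illustration;
\texttt{f} is the configuration vector $(f_{00},f_{10},f_{01},f_{11})$)
is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def simulate_pvalues(m, f, mu, rho, rng):
    counts = (np.asarray(f) * m).astype(int)
    h1 = np.repeat([0, 1, 0, 1], counts)   # states (0,0),(1,0),(0,1),(1,1)
    h2 = np.repeat([0, 0, 1, 1], counts)
    pvals = []
    for h in (h1, h2):
        z0 = rng.standard_normal()          # shared noise within the study
        z = rng.standard_normal(h.size)
        e = np.sqrt(rho) * z0 + np.sqrt(1 - rho) * z
        pvals.append(1 - norm.cdf(mu * h + e))
    return pvals[0], pvals[1]
\end{verbatim}
For example, \texttt{simulate\_pvalues(1000, (0.8, 0.05, 0.05, 0.1), 3,
0.25, np.random.default\_rng(1))} generates one dataset with $m=1000$
features.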
Our goal in this simulation was three-fold: First, to show the advantage of the adaptive procedures over the non-adaptive procedures for replicability analysis; Second, to examine the behaviour of the adaptive procedures when the test statistics are dependent within studies; Third, to compare the novel procedures with alternatives suggested in the literature.
The power, FWER, and FDR for the replicability analysis procedures considered were estimated based on 5000 simulated datasets.
\subsection{Results for FWER controlling procedures}\label{sec-fwersim}
We considered the following novel procedures:
Bonferroni-replicability with fixed or with data-dependent selection
thresholds $(t_1,t_2)$, adaptive-Bonferroni-replicability with
$\lambda \in \{0.05,0.5\}$ and with fixed or with data-dependent
$(t_1,t_2)$. These procedures were compared to an oracle procedure
with data-dependent thresholds (oracle-Bonferroni-replicability),
that knows $\sum_{j \in \mathcal S_2(t_2)}(1-H_{1j})$ and $\sum_{j
\in \mathcal S_1(t_1)}(1-H_{2j})$ and therefore rejects a feature
$j\in \mathcal{S}_1(t_1)\cap\mathcal{S}_2(t_2)$ if and only if
$p_{1j}\leq \alpha_1/\sum_{j \in \mathcal S_2(t_2)}(1-H_{1j})$ and
$p_{2j}\leq (\alpha-\alpha_1)/\sum_{j \in \mathcal
S_1(t_1)}(1-H_{2j})$. In addition, two procedures based on the
maximum of the two studies $p$-values were considered: the procedure
that declares as replicated all features with
$\max(p_{1i},p_{2i})\leq \alpha/m$ (Max), and the equivalent oracle
that knows $|\{j: \vec H_j\in \mathcal H_{NR}^0 \}|$ and therefore
declares as replicated all features with $\max(p_{1i},p_{2i})\leq
\alpha/|\{j: \vec H_j\in \mathcal H_{NR}^0 \}|$ (oracle Max). Note
that the oracle Max procedure controls the FWER for replicability analysis at the nominal level
$\alpha$ since the FWER is at most $\sum_{\{i: \vec H_i \in
\mathcal H_{NR}^0 \}}\textmd{Pr}(\max(p_{1i},p_{2i})\leq
\alpha/|\{j: \vec H_j\in \mathcal H_{NR}^0 \}|)\leq \alpha$.
Figure \ref{fig:fwerFixedThresholds} shows the power for various
fixed selection thresholds $t_1=t_2=t$. There is a
clear gain from adaptivity since the power curves for the adaptive
procedures are above those for the non-adaptive procedures, for the
same fixed threshold $t.$ The gain
from adaptivity is larger as the difference between
$f_{11}$ and $f_{10} =f_{01}$ is larger: while in the last two rows
(where $f_{10} =f_{01}<f_{11}$) the power advantage can be greater than 10\%, in the first row (where $f_{10} =f_{01}=0.1, f_{11}=0.05$)
there is almost no power advantage. The choice of $t$ matters, and
the power of the procedures with data-dependent
thresholds $(t_1^*, t_2^*)$ is close to the power of the procedures
with the best possible fixed threshold $t.$
Figure \ref{fig-fwerproc} shows the power and FWER versus $\mu$
under independence (columns 1 and 2) and under equi-correlation of
the test statistics with $\rho=0.25$ (columns 3 and 4).
The novel procedures are clearly superior to the Max and Oracle Max procedures, the adaptive procedures are superior to the non-adaptive variants, and the power of the adaptive procedures with data-dependent thresholds is close to that of the oracle Bonferroni procedure.
The adaptive procedures with $\lambda=0.05$
and $\lambda=0.5$ have similar power, but the FWER with
$\lambda=0.05$ is controlled in all dependence settings while the FWER
with $\lambda=0.5$ is above 0.1 in all but the last dependence
setting. Our results concur with those of \cite{Blanchard09} for single
studies, namely that the preferred parameter is $\lambda=0.05$. The
adaptive procedure with $\lambda = 0.05$ and data-dependent
selection thresholds is clearly superior to the two adaptive
procedures with fixed selection thresholds of $t_1=t_2=0.025$ or
$t_1=t_2=0.049$. We thus recommend the
adaptive-Bonferroni-replicability procedure with $\lambda =0.05$ and
data-dependent selection thresholds.
\begin{figure}[htbp]
\centering
\includegraphics[width =17.5cm,height = 22.5cm]{Figure1.pdf}
\caption{ Column 1: Independence setting; Column 2: Equi-correlation
with $\rho=0.25$. The power versus fixed threshold $t$ is shown for
$\mu = 3$ for the adaptive-Bonferroni-replicability procedure
(dashed black with $\lambda =0.05$ and dotted green with $\lambda =
0.5$) and non-adaptive Bonferroni-replicability procedure (solid
red), along with the power of these procedures with data-dependent
thresholds. In all settings $m=1000$, $\alpha=0.05, \alpha_1=0.025$.
}\label{fig:fwerFixedThresholds}
\end{figure}
\begin{center}
\begin{figure}[htbp]
\centering
\includegraphics[width =17.5cm,height = 22.5cm]{Figure2.pdf}
\caption{\scriptsize{Columns 1 and 2: Independence setting; Columns 3 and 4: Equi-correlation with $\rho=0.25$.
Power and FWER versus $\mu$ for the adaptive-Bonferroni-replicability procedure with data-dependent $(t_1,t_2)$
with $\lambda=0.5$ (solid black ) and with $\lambda=0.05$ (dashed black); Bonferroni-replicability procedure with data-dependent $(t_1,t_2)$
(dotted black); the oracle that knows which hypotheses are null in one study among the selected from the other study (dashed blue);
oracle Max (dotted blue) and Max (dotted red); adaptive-Bonferroni-replicability with fixed $\lambda=0.05$ and fixed $t_1=t_2=0.049$
(dash-dot green) and fixed $t_1=t_2=0.025$ (dash green). In all settings $m=1000$, $\alpha=0.05, \alpha_1=0.025$.
}}
\label{fig-fwerproc}
\end{figure}
\end{center}
\subsection{Results for FDR controlling procedures}\label{sec-fdrsim}
We considered the following novel procedures for replicability
analysis with $\alpha=0.05, \alpha_1 = 0.025$:
Non-adaptive-FDR-replicability with fixed or data-dependent
$(t_1,t_2)$; adaptive-FDR-replicability with $\lambda \in \{0.05,
0.5\}$ and fixed or data-dependent $(t_1,t_2)$.
\cite{Heller14} introduced the oracle Bayes procedure (oracleBayes),
and showed that it has the largest rejection region while
controlling the Bayes FDR. When $m$ is large and the data is
generated from the mixture model, the Bayes FDR coincides with the
frequentist FDR, so oracle Bayes is optimal for FDR control.
We considered this oracle procedure for comparison with our novel procedures.
The difference in power between the oracle Bayes and the novel frequentist procedures
shows how much our procedures, which make no mixture-model assumptions,
lose relative to the (best, yet unknown in practice) oracle procedure,
which assumes the mixture model and needs its parameters as input.
In addition, the following three procedures were considered: the empirical Bayes procedure (eBayes),
as implemented in the R package \emph{repfdr} \citep{Heller14c}, which estimates the Bayes FDR and rejects the features with estimated Bayes FDR below $\alpha$ (see \cite{Heller14} for details); the oracle BH on $\{\max(p_{1i}, p_{2i}): i=1,\ldots, m\}$ (oracleMax); and the adaptive BH on $\{\max(p_{1i}, p_{2i}): i=1,\ldots, m\}$ (adaptiveMax).
Specifics about oracleMax and adaptiveMax follow. When the BH procedure is applied to $\{\max(p_{1i}, p_{2i}): i=1,\ldots, m\}$ at level $x$,
it is easy to show that the FDR level for independent features is at most $f_{00}x^2+(1-f_{00}-f_{11})x$.
Therefore, the oracleMax procedure uses level $x$, which is the solution to $f_{00}x^2+(1-f_{00}-f_{11})x=0.05$,
and the adaptiveMax procedure uses level $x$, which is the solution to $\hat f_{00}x^2+(1-\hat f_{00}-\hat f_{11})x=0.05$, where $\hat f_{00}$ and $\hat f_{11}$ are the estimated mixture fractions computed using the R package \emph{repfdr}.
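Both levels are roots of the same quadratic and can be computed directly. The following is a minimal sketch (the function name and the example fractions are ours, chosen only for illustration):
\begin{verbatim}
import numpy as np

def max_bh_level(f00, f11, alpha=0.05):
    # Positive root of f00*x^2 + (1 - f00 - f11)*x = alpha.
    a, b, c = f00, 1.0 - f00 - f11, -alpha
    if a == 0:
        return -c / b
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Hypothetical mixture fractions:
print(max_bh_level(f00=0.8, f11=0.05))  # approx. 0.173
\end{verbatim}
For adaptiveMax, one simply substitutes the estimates $\hat f_{00}$ and $\hat f_{11}$ for $f_{00}$ and $f_{11}$.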
Figure \ref{fig:fdrFixedThresholds} shows the power of novel
procedures for various fixed selection thresholds $t_1=t_2=t$, as
well as for the variants with data-dependent thresholds. There is a
clear gain from adaptivity since the power curves for the adaptive
procedures are above those for the non-adaptive procedures, for the
same fixed threshold $t$. The choice of $t$ matters: $t=0.025$ is
better than $t=0.05$, and fairly close to the best $t$. We see that the power of the
non-adaptive procedures with data-dependent selection thresholds is
superior to the power of non-adaptive procedures with fixed thresholds. The same
is true for the adaptive procedures in all the settings except for
the last two rows of the equi-correlation setting, where the power
of the adaptive procedures with data-dependent thresholds is
slightly lower than the highest power for fixed thresholds
$t_1=t_2=t$. In these settings the number of selected hypotheses is
on average lower than in other settings, and the fractions of true
null hypotheses in one study among the selected in the other study
are expected to be small. As a result, the solutions to the two non-linear equations solved using the estimates of the fractions of nulls are far from optimal.
Therefore, when there is dependence within each study, and the number of selected hypotheses is small (say less than 100 per study), we suggest using the novel adaptive procedures with $t_1=t_2=\alpha/2$ instead of using data-dependent $(t_1,t_2)$.
Figure \ref{fig-fdrproc} shows the power and FDR versus $\mu$ under
independence (columns 1 and 2) and under equi-correlation of the
test statistics with $\rho=0.25$ (columns 3 and 4). The novel procedures are clearly superior to the
competitors: the empirical Bayes procedure does not control the FDR
when $m=1000$, and the actual level reaches above 0.1 under
dependence; the oracleMax and adaptiveMax procedures have the lowest power
in almost all settings. The novel adaptive procedures approach the power
of the oracle Bayes as $f_{10}=f_{01}$ increase. The adaptive
procedures with $\lambda=0.05$ and $\lambda=0.5$ have similar
power, but the FDR with $\lambda=0.05$ is controlled in all
dependence settings and the FDR with $\lambda=0.5$ is above the
nominal level in three of the dependence settings. Our results concur
with those of \cite{Blanchard09} for single studies: the
preferred parameter is $\lambda=0.05$. We thus recommend the
adaptive FDR-replicability procedure with $\lambda =\alpha$, for FDR control at level $\alpha$. We also recommend using
data-dependent $(t_1,t_2)$, unless the test statistics are dependent within each study and the number of selected hypotheses from each study is expected to be small.
\begin{figure}[htbp]
\centering
\includegraphics[width =17.5cm,height = 22.5cm]{Figure3.pdf}
\caption{Column 1: Independence setting; Column 2: Equi-correlation
with $\rho=0.25$. The power versus fixed threshold $t$ is shown for
$\mu = 3$ for the adaptive and non-adaptive FDR-replicability
procedures, along with the power of these procedures with
data-dependent thresholds. In all settings $m=1000$, $\alpha=0.05,
\alpha_1=0.025$. }\label{fig:fdrFixedThresholds}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width =17.5cm,height = 22.5cm]{Figure4.pdf}
\caption{\scriptsize{Columns 1 and 2: Independence setting;
Columns 3 and 4: Equi-correlation with $\rho=0.25$.
Power and FDR versus $\mu$ for the adaptive-FDR-replicability procedure with data-dependent $(t_1,t_2)$ with $\lambda=0.5$ (solid black) and with $\lambda=0.05$ (dashed black); Non-adaptive-FDR-replicability procedure with data-dependent $(t_1,t_2)$ (dotted black);
the oracle Bayes (dashed blue) and empirical Bayes (dashed red); the oracle and adaptive BH on maximum $p$-value (dotted blue and dotted red);
adaptive-FDR-replicability procedure with $\lambda=0.05$ and fixed $t_1=t_2=0.049$ (dash-dot green) and fixed $t_1=t_2=0.025$ (dash green).
In all settings $m=1000$, $\alpha=0.05, \alpha_1=0.025$.
}}
\label{fig-fdrproc}
\end{figure}
\section{Examples}\label{sec-example}
\subsection{Laboratory mice studies comparing behaviour across strains}
It is well documented that in different laboratories, the comparison
of behaviors of the same two strains may lead to opposite
conclusions that are both statistically significant
(\cite{Crabbe99}, \cite{kafkafi05}, and Chapter 4 in
\cite{Crusio13}). An explanation may be that the different laboratory
environments (e.g., personnel, equipment, measurement techniques)
affect the study strains differently (i.e., an interaction of
strain with laboratory). \cite{richter11} examined 29 behavioral
measures from five commonly used behavioral tests (the barrier test,
the vertical pole test, the elevated zero maze, the open field test,
and the novel object test) on female mice from different strains in
different laboratories with standardized conditions. Table 1
shows the one-sided $p$-value in the direction
favored by the data based on the comparison of two strains in two
laboratories, for each of the 29 outcomes.
\begin{center}
\begin{table}
\caption{\scriptsize{For 16 female mice from each of two inbred
strains, " C57BL6NCrl" and "DBA/2NCrl", in each of two
laboratories, the Wilcoxon rank sum test one-sided $p$-value was
computed for the test of no association between strain and
behavioral endpoint. We show the $p$-values for the lab of H. Wurbel
at the University of Giessen in column 3, and for the lab of P. Gass
at the Central Institute of Mental Health, Mannheim in column 4.
The direction of the alternative favored by the data is shown in
column 2, and it is marked as "X" if the laboratories differ in the
direction of smallest one-sided $p$-value. The rows are the outcomes from 5
behavioural tests: the barrier test (row 1); the vertical pole test
(row 2); the elevated zero maze (rows 3-11); the open field test
(rows 12-19); the novel object test (rows
20-29).}}\label{tab-mice-pv} \centering
\begin{tabular}{cc}
\scriptsize
\centering
\begin{tabular}{cccc}
\hline
& & \multicolumn{2}{c}{$\min(P_{ij}^L, P_{ij}^R)$}\\
& Alternative & $i=1$ & $i=2$ \\
\hline
1 & X & 0.3161 & 0.0218 \\
2 & $C57BL<DBA$ & 0.0012 & 0.0000 \\
3 & X & 0.0194 & 0.1120 \\
4 & $C57BL<DBA$ & 0.0095 & 0.2948 \\
5 & $C57BL<DBA$ & 0.1326 & 0.0028 \\
6 & $C57BL>DBA$ & 0.1488 & 0.0003 \\
7 & $C57BL>DBA$ & 0.2248 & 0.0000 \\
8 & X & 0.4519 & 0.0005 \\
9 & $C57BL<DBA$ & 0.0061 & 0.0000 \\
10 & $C57BL<DBA$ & 0.0071 & 0.0888 \\
11 & X & 0.4297 & 0.1602 \\
12 & $C57BL<DBA$ & 0.0918 & 0.0506 \\
13 & X & 0.0918 & 0.0001 \\
14 & $C57BL<DBA$ & 0.0000 & 0.0048 \\
15 & X & 0.0005 & 0.0550 \\
\hline
\end{tabular}
&
\scriptsize
\begin{tabular}{cccc}
\hline
& & \multicolumn{2}{c}{$\min(P_{ij}^L, P_{ij}^R)$}\\
& Alternative & $i=1$ & $i=2$ \\
\hline
16 & $C57BL<DBA$ & 0.0059 & 0.0002 \\
17 & $C57BL>DBA$ & 0.0176 & 0.0003 \\
18 & X & 0.0000 & 0.0538 \\
19 & $C57BL<DBA$ & 0.0000 & 0.1727 \\
20 & $C57BL<DBA$ & 0.0157 & 0.0001 \\
21 & $C57BL<DBA$ & 0.0000 & 0.0234 \\
22 & $C57BL<DBA$ & 0.3620 & 0.0176 \\
23 & $C57BL<DBA$ & 0.0000 & 0.0001 \\
24 & $C57BL<DBA$ & 0.0000 & 0.0076 \\
25 & $C57BL<DBA$ & 0.0000 & 0.0000 \\
26 & $C57BL<DBA$ & 0.0000 & 0.0003 \\
27 & $C57BL<DBA$ & 0.0000 & 0.0001 \\
28 & $C57BL<DBA$ & 0.0000 & 0.0550 \\
29 & X & 0.0033 & 0.3760 \\
& & & \\
\hline
\end{tabular}
\end{tabular}
\end{table}
\end{center}
The example is too small for considering the empirical Bayes
approach. The approach suggested in \cite{benjamini09} of using for
each feature the maximum of the two studies' $p$-values, i.e., $2\min \{\max(p_{1j}^L, p_{2j}^L), \max(p_{1j}^R,
p_{2j}^R) \}$, detected overall fewer outcomes than our novel
procedures, both for FWER and for FDR control.
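As a concrete illustration, the combined two-sided $p$-value of \cite{benjamini09} for a single feature can be computed as follows (a minimal sketch; the function name is ours, and the truncation at 1 is a standard convention that we assume here):
\begin{verbatim}
def max_pvalue_combined(p1L, p2L, p1R, p2R):
    # 2 * min(max(p1L, p2L), max(p1R, p2R)), truncated at 1.
    return min(1.0, 2.0 * min(max(p1L, p2L), max(p1R, p2R)))
\end{verbatim}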
Table 2 shows the FWER/FDR non-adaptive and adaptive $r$-values, for the selected features,
according to the rule which selects all features with two-sided
$p$-values that are at most 0.05. We did not consider data-dependent
thresholds since the number of features examined was only 29, which
could result in highly variable data-dependent thresholds and a
power loss compared to procedures with fixed thresholds, as was
observed in simulations. At the $\alpha=0.05$ level, for FWER
control, four discoveries were made by using Bonferroni on the
maximum $p$-values, and five discoveries were made with the
non-adaptive and adaptive Bonferroni-replicability procedures. At
the $\alpha=0.05$ level, for FDR control, nine discoveries were made
by using BH on the maximum $p$-values, and nine and twelve
discoveries were made with the non-adaptive FDR and adaptive
FDR-replicability procedures, respectively. Note that the adaptive
$r$-values can be less than half the non-adaptive $r$-values, since
$\hat{\pi}_0^{I} = 0.44$ and $\hat{\pi}_0^{II} = 0.47$.
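The estimates $\hat{\pi}_0^{I}$ and $\hat{\pi}_0^{II}$ are plug-in estimates of the Storey type with tuning parameter $\lambda$. For orientation only, here is a minimal sketch of the standard Storey-type form (the precise estimators used by our adaptive procedures are the ones defined in the earlier sections and may differ in detail):
\begin{verbatim}
import numpy as np

def storey_pi0(pvals, lam=0.05):
    # Storey-type plug-in: (1 + #{p > lam}) / (m * (1 - lam)),
    # truncated at 1.
    p = np.asarray(pvals)
    return min(1.0, (1 + np.sum(p > lam)) / (p.size * (1 - lam)))
\end{verbatim}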
\begin{center}
\begin{table}
\caption{\scriptsize{The replicability analysis results for the data
in Table 1, after selection of features with
two-sided $p$-values at most 0.05 (i.e. $t_1=t_2=0.025$). Only the
twelve features in $\mathcal S_1\cap \mathcal S_2$ are shown, where
$S_1=20$, $S_2=19$. For each selected feature, we show the
$r$-values based on Bonferroni (column 5), FDR (column 6), adaptive
Bonferroni (column 7), and the adaptive FDR (column 8). The adaptive
procedures used $\lambda =0.05$.}} \label{tab:miceresults}
\centering
\scriptsize
\begin{tabular}{rrrrrrrrr}
\hline
index & & \multicolumn{2}{c}{$\min(P_{ij}^L, P_{ij}^R)$}&\multicolumn{2}{c}{Non-adaptive}&\multicolumn{2}{c}{Adaptive}\\
selected & Alternative & $i=1$ & $i=2$ & Bonf & FDR & Bonf & FDR \\
\hline
2 & $C57BL<DBA$ & 0.0012 & 0.0000 & 0.0452 & 0.0090 & 0.0200 & 0.0040 \\
9 & $C57BL<DBA$ & 0.0061 & 0.0000 & 0.2323 & 0.0290 & 0.1029 & 0.0129 \\
14 & $C57BL<DBA$ & 0.0000 & 0.0048 & 0.1910 & 0.0290 & 0.0905 & 0.0129 \\
16 & $C57BL<DBA$ & 0.0059 & 0.0002 & 0.2237 & 0.0290 & 0.0992 & 0.0129 \\
17 & $C57BL>DBA$ & 0.0176 & 0.0003 & 0.6679 & 0.0607 & 0.2960 & 0.0269 \\
20 & $C57BL<DBA$ & 0.0157 & 0.0001 & 0.5974 & 0.0597 & 0.2648 & 0.0265 \\
21 & $C57BL<DBA$ & 0.0000 & 0.0234 & 0.9363 & 0.0780 & 0.4435 & 0.0370 \\
23 & $C57BL<DBA$& 0.0000 & 0.0001 & 0.0022 & 0.0011 & 0.0010 & 0.0005 \\
24 & $C57BL<DBA$ & 0.0000 & 0.0076 & 0.3037 & 0.0337 & 0.1439 & 0.0160 \\
25 & $C57BL<DBA$ & 0.0000 & 0.0000 & 0.0005 & 0.0005 & 0.0003 & 0.0003 \\
26 & $C57BL<DBA$ & 0.0000 & 0.0003 & 0.0126 & 0.0032 & 0.0060 & 0.0015 \\
27 & $C57BL<DBA$ & 0.0000 & 0.0001 & 0.0038 & 0.0013 & 0.0018 & 0.0006 \\
\hline
\end{tabular}
\end{table}
\end{center}
\subsection{Microarray studies comparing groups with different cancer
severity}\label{subsec-big-example} \cite{Freije04} and
\cite{Phillips06} compared independently the expression levels in
patients with grade $III$ and grade $IV$ brain cancer. Both studies
used the Affymetrix HG U133 oligonucleotide arrays, with 22283
probes in each study.
The study of \cite{Freije04} (GEO accession GSE4412) included 26 subjects with tumors diagnosed as grade III glioma and 59 subjects with tumor diagnosis of grade IV glioma, all undergoing surgical treatment at the University of California, Los Angeles.
The study of \cite{Phillips06} (GEO accession GSE4271) included 24 grade III subjects, and 76 grade IV subjects, from the M.D. Anderson Cancer Center (MDA).
The Wilcoxon rank sum test $p$-values were computed for each probe in each study in order to quantify the evidence against no association of probe measurement with tumor subgroup.
We used the R package \emph{repfdr} \citep{Heller14c} to get the following estimated fractions, among the 22283 probes:
0.39 with $\vec h = (0,0)$; 0.16 with $\vec h = (1,1)$; 0.13 with $\vec h = (-1,-1)$; 0.10 with $\vec h = (0,1)$; 0.08 with $\vec h = (-1,0)$; 0.07 with $\vec h = (0,-1)$; 0.07 with $\vec h = (1,0)$; 0.00 with $\vec h = (-1,1)$ or $\vec h = (1,-1)$.
For FWER-replicability, the recommended Procedure
\ref{proc-bonferroniadapt} with $\lambda=0.05$ and data-dependent
thresholds $t_1 = 6.5\times10^{-5}$, $t_2 = 5.1\times10^{-5}$ discovered 340
probes. For comparison, the non-adaptive and adaptive
Bonferroni-replicability procedure with fixed thresholds $t_1
=t_2=0.025$ discovered only 90 and 124 probes, respectively. The
Bonferroni on maximum $p$-values discovered only 47 probes.
For FDR-replicability, the recommended adaptive procedure in Section
\ref{sec-adapt} with $\lambda=0.05$ and data-dependent thresholds
$t_1 = 0.021, t_2 = 0.024$ discovered 3383 probes. For comparison,
the non-adaptive and adaptive FDR-replicability procedure with fixed
selection thresholds $t_1 =t_2=0.025$ discovered 2288 and 3299
probes, respectively. The adaptive $r$-values can be roughly half the non-adaptive $r$-values, since
$\hat{\pi}_0^{I} = 0.51$ and $\hat{\pi}_0^{II} = 0.49$.
Among the two competing approaches, the BH on
maximum $p$-values discovered only 1238 probes, and the empirical
Bayes procedure discovered 4320 probes. Among the 3383 probes
discovered by our approach, 3377 were also discovered by the
empirical Bayes procedure.
\section{Discussion}\label{sec-Discussion}
In this paper we proposed novel procedures for establishing
replicability in two studies. First, we introduced procedures that
take the selected set of features in each of two studies, and infer
about the replicability of features selected in both studies while
controlling for false replicability claims. We proved that the FWER
controlling procedure is valid (i.e., controls the error rate at the
desired nominal level) for any dependence within each study, and
that the FDR controlling procedure is valid under independence of
the test statistics within each study; we also suggested a more
conservative procedure that is valid for arbitrary dependence.
Next, we suggested incorporating the plug-in estimates of the
fraction of nulls in one study among the selected features by the
other study, which can be estimated as long as the $p$-values for
the union of features selected is available. We proved that the
resulting adaptive FWER and FDR controlling procedures are valid
under independence of the test statistics within each study. Our
empirical investigations showed that the adaptive procedures remain
valid even when the independence assumption is violated, as long as
we use $\lambda = \alpha$ as a parameter for the plug-in estimates,
as suggested by \cite{Blanchard09} for the adaptive BH procedure.
Finally, when two full studies are available that examine the same
features, we suggested selecting features for replicability analysis
that have $p$-values below certain thresholds. We showed that
selecting the features with one-sided $p$-values below $\alpha/2$ has good
power, but that the power can further be improved if we use
data-dependent thresholds, which take the values for which the set
of selected features coincides exactly with the set of features
discovered as having replicated findings.
Our practical guidelines for establishing replicability are to use
the adaptive procedure for the desired error rate control, with
$\lambda = \alpha$. Moreover, based on the simulation results we
suggest using the data-dependent selection thresholds when two
full studies are available if the number of selected features in each study is
expected to be large enough (say above 100), and using the fixed thresholds $t_1=t_2=\alpha/2$ otherwise.
We would like to note that the $r$-value computation is more
involved when the thresholds are data-dependent, since these
thresholds depend on the nominal level $\alpha$. An interesting
open question is how to account for multiple solutions of the two
non-linear equations that are solved in order to find the
data-dependent thresholds.
The suggested procedures can be generalized to the case that more
than two studies are available. It is possible to either aggregate
multiple results of pairwise replicability analyses, or to first
aggregate the data and then apply a single replicability analysis on
two meta-analysis $p$-values. The aim of the replicability analysis
may also be redefined to be that of discovering features that have
replicated findings in at least $u$ studies, where $u$ can range
from two to the total number of studies. Other extensions include
weighting the features differently, as suggested by
\cite{Genovese06}, based on prior knowledge on the features, and
replicability analysis on multiple families of hypotheses while
controlling more general error rates, as suggested by
\cite{Benjamini13}.
\section{Introduction}
The asymmetric exclusion process (ASEP)~\cite{SZ95,D98,S00,BE07,KRB10}
represents an idealized description of transport in crowded one-dimensional
systems, such as traffic~\cite{PS99,AS00,SCN10}, ionic
conductors~\cite{R77}, and RNA transcription~\cite{MGB68}. In the
ASEP, each site is either vacant or occupied by a single particle that can
hop at a fixed rate to a vacant right neighbor~\cite{SZ95,D98,S00,BE07}.
Although simply defined, this model has rich transport properties: for
example, density heterogeneities can evolve into rarefaction or shock
waves~\cite{BE07}, while an open system, with input at one end and output at
the other, exhibits a variety of phases as a function of the input/output
rates~\cite{K91,DDM92,SD93}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.35\textwidth]{model}
\caption{Cooperative exclusion. A ``pushed'' particle (red) --- one whose
left neighbor is occupied --- can hop to a vacant right neighbor with rate
1, while an isolated particle (blue) hops to a vacancy with rate
$\lambda$.}
\label{model}
\end{center}
\end{figure}
A fundamental property of the ASEP is the relation $J(\rho)=\rho(1-\rho)$
between the current $J$ and density $\rho$. Because each site is occupied by
at most one particle, the average particle velocity $v=J/\rho$ is a
decreasing function of the density. In this work, we investigate a {\em
cooperative exclusion} (CE) model in which the velocity can \emph{increase}
with density. This cooperativity leads to unexpected features in the
evolution of initial density heterogeneities. Such cooperativity occurs, for
example, when ants emit pheromones that help guide fellow ants along a
trail~\cite{BAA02}. Another example is multiple buses that follow a fixed
route. The leading bus picks up more passengers so that the next bus moves
faster, which causes clustering of buses during peak travel
times~\cite{LEC98}. At the microscopic level, molecular motors can work
together to pull a load that is too large for a single motor~\cite{C06}.
Cooperativity has even been proposed as a basis for organic
superconductors~\cite{L64}.
The notion of cooperative interactions that counterbalance the fundamental
excluded-volume interaction is implicit in Ref.~\cite{AS00}, as well as
in~\cite{FGRS02, BGRS02}. These latter publications investigated an
exclusion model with a somewhat less stringent excluded-volume constraint
than in ASEP. This weaker exclusion gives rise to an effective cooperativity
and thereby to complex density profiles similar to what we find. As we shall
argue, the existence of these complex profiles does not depend on detailed
microscopic rules, but is rather a consequence of the underlying cooperative
interactions between particles. When sufficiently strong, these interactions
lead to an inflection point in the current-density curve; this feature is
the minimum requirement for the complex density-profile dynamics.
\section{Cooperative Exclusion Model}
In the CE model, a particle can hop to its vacant right neighbor at a rate
$r$ that depends on the occupancy of the previous site (Fig.~\ref{model}):
\begin{equation*}
r= \cases{ 1 & previous\ site\ occupied,\cr
\lambda& previous\ site\ vacant,}
\end{equation*}
with $0\leq \lambda \leq 1$. When $\lambda=1$, the standard ASEP is
recovered, while $\lambda=0$ corresponds to \emph{facilitated} asymmetric
exclusion~\cite{BM09}, in which the left neighbor of a particle must be
occupied for the particle to hop to a vacancy on the right. We pictorially
view this restriction as a particle requiring a ``push'' from its left
neighbor to hop. This facilitation causes an unexpected discontinuity in a
rarefaction wave in the ASEP~\cite{GKR10}. More strikingly, we will show
that cooperativity leads to shock and rarefaction waves that can be
continuous, discontinuous, or a mixture of the two.
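Although our analysis below is hydrodynamic, the model is straightforward to simulate directly. The following is a minimal kinetic Monte Carlo sketch of CE on a ring (all function names and default parameter values are ours, chosen for illustration):
\begin{verbatim}
import numpy as np

def simulate_ce(L=1000, rho=0.3, lam=0.125, t_max=200.0, seed=0):
    # Gillespie dynamics: a particle with a vacant right neighbor
    # hops at rate 1 if its left neighbor is occupied ("pushed"),
    # and at rate lam if isolated.  Returns the measured current.
    rng = np.random.default_rng(seed)
    n = np.zeros(L, dtype=int)
    n[rng.choice(L, int(rho * L), replace=False)] = 1
    t, hops = 0.0, 0
    while t < t_max:
        movers = np.where((n == 1) & (np.roll(n, -1) == 0))[0]
        if movers.size == 0:
            break
        rates = np.where(n[(movers - 1) % L] == 1, 1.0, lam)
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        i = rng.choice(movers, p=rates / total)
        n[i], n[(i + 1) % L] = 0, 1
        hops += 1
    return hops / (L * t)
\end{verbatim}
Running this at several densities gives a Monte Carlo estimate of the current-density relation shown in Fig.~\ref{JvsRHO}.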
These unusual features arise in CE when $0<\lambda< \frac{1}{2}$, where an
inflection point in $J(\rho)$ occurs at $\rho=\rho_I$ (Fig.~\ref{JvsRHO}).
For $\rho<\rho_I$, cooperativity dominates, and $J$ grows superlinearly in
$\rho$. At higher densities, excluded volume interactions dominate, so that
$J$ grows sublinearly and ultimately decreases to zero. Correspondingly, the
group velocity changes from an increasing to a decreasing function of density
$\rho$ as $\rho$ passes through $\rho_I$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{JvsRHO}
\caption{Steady-state current as a function of density in cooperative exclusion
(CE). Data are based on $10^2$ realizations with $L=10^3$ up to $t=10^4$.
The solid curves are given by Eq.~(\ref{current}). Arrows indicate the
locations of the inflection points.}
\label{JvsRHO}
\end{center}
\end{figure}
A configuration of $N$ particles on a ring of length $L$ is specified by the
occupation numbers $\{n_1,\dots,n_L\}$, subject to conservation $\sum_i
n_i=N$; here $n_i$ equals $1$ if $i$ is occupied and equals 0 otherwise. A
crucial feature of CE is that the probability for any steady-state
configuration is a \emph{decreasing} function of the number $k$ of adjacent
vacancies: $k\equiv \sum_{i=1}^{L} (1-n_i)(1-n_{i+1})$, with $n_{L+1}=n_1$.
To understand how the configurational probabilities depend on $k$, we observe
that the hopping of a pushed particle (whose left neighbor is occupied)
either preserves or decreases the number of adjacent vacancies $k$
(left side of Fig.~\ref{k}). Conversely, the hopping of an isolated particle
either preserves or increases $k$ (right side of Fig.~\ref{k}). Since pushed
particle hopping events occur at a higher rate, configurations with fewer
adjacent vacancies are statistically more probable.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.3\textwidth]{k}\qquad\qquad \includegraphics[width=0.3\textwidth]{k1}
\caption{[left] Hopping of a pushed (red) particle where the number of
vacancy pairs is (a) preserved or (b) decreased. [right] Hopping of an
isolated (blue) particle where the number of vacancy pairs is (c) preserved
or (d) increased.}
\label{k}
\end{center}
\end{figure}
We now exploit the work of Antal and Sch\"utz~\cite{AS00} who investigated a
dual model in which next-nearest neighbor cooperative interactions pull a
particle ahead, in distinction to the pushing of particles from behind in CE.
By the mapping particles $\leftrightarrow$ holes, the CE and the
Antal-Sch\"utz models give the same probability distribution $P_k$ for a
configuration with $k$ adjacent vacancies~\cite{AS00}:
\begin{equation}
\label{prob}
P_k=\frac{\lambda^k}{Z(\lambda)}~,
\end{equation}
where $Z(\lambda)$ is a normalization constant. Since $\lambda<1$,
configurations with fewer adjacent vacancies are more probable.
Following~\cite{AS00}, the steady-state current is
\begin{equation}
\label{current}
J=(1-\rho)\left[1+\frac{\sqrt{1-4(1-\lambda)\rho(1-\rho)}-1}{2(1-\lambda)\rho} \right]
\end{equation}
in the $L\to\infty$ limit. The salient feature is that $J$ has an inflection
point at a density $\rho_I$ for $\lambda<\frac{1}{2}$ (Fig.~\ref{JvsRHO}).
We henceforth restrict our analysis to this domain and determine the unusual
consequences of this inflection point on the dynamics of initial density
steps.
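The location of $\rho_I$ is easily found numerically from Eq.~(\ref{current}). A minimal sketch (function names are ours; we assume $0<\lambda<1$ so that the formula is non-degenerate):
\begin{verbatim}
import numpy as np

def current(rho, lam):
    # Steady-state current J(rho) in the L -> infinity limit.
    s = np.sqrt(1.0 - 4.0 * (1.0 - lam) * rho * (1.0 - rho))
    return (1.0 - rho) * (1.0 + (s - 1.0) / (2.0 * (1.0 - lam) * rho))

def inflection_point(lam, h=1e-5):
    # rho_I: sign change of the numerical second derivative of J.
    rho = np.linspace(2 * h, 1 - 2 * h, 20001)
    d2 = (current(rho + h, lam) - 2 * current(rho, lam)
          + current(rho - h, lam)) / h**2
    idx = np.where(np.diff(np.sign(d2)) != 0)[0]
    return rho[idx + 1]

print(inflection_point(0.125))  # a single rho_I for lam < 1/2
\end{verbatim}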
\section{Density Profile Dynamics}
\label{dynamics}
In a hydrodynamic description, the particle density satisfies the continuity equation
$\rho_t + J_x= 0$. By the chain rule, we rewrite the second term as
$J_\rho\,\rho_x$, from which the group velocity $u=J_\rho$. Here the
subscripts $t,x,\rho$ denote partial differentiation. The crucial feature is
the inflection point in $J(\rho)$, so that the group velocity can be either
increasing or decreasing in $\rho$. We now employ the steady-state current
(\ref{current}) to determine the evolution of an initial density
heterogeneity on length and time scales large compared to microscopic scales
for the step initial condition
\begin{equation}
\label{initial_density}
\rho(x,t=0)=
\cases {
\rho_- & $x\leq0$\,, \cr
\rho_+ & $x>0$\,.}
\end{equation}
As sketched in Fig.~\ref{phase_diagram}, the difference in the group
velocity to the right and left of the step determines whether a continuous,
discontinuous, or a composite density profile emerges.
It is worth noting that similar results for density profiles are obtained for
an asymmetric exclusion process with another form of cooperative
interactions~\cite{FGRS02,BGRS02}. In these works, the same qualitative
phase diagram as in Fig.~\ref{phase_diagram} is obtained, despite the rather
different natures of the microscopic interactions in their model. This
similarity in long-time behavior arises because our main results apply for
\emph{any} asymmetric exclusion process with sufficiently strong cooperative
interactions, as indicated by an inflection point in $J(\rho)$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{phase}
\caption{Phase diagram of the CE model for an initial density step
$(\rho_-,\rho_+$), with $\rho_I$ the inflection point in $J(\rho)$. A
typical density profile $\rho(z)$ is sketched for each of the six regions:
(R/IS) rarefaction/inverted shock, (R) continuous rarefaction, (S) shock,
(C/S) compression/shock, (C) continuous compression, (IS) inverted shock. }
\label{phase_diagram}
\end{center}
\end{figure}
\emph{Shock/Inverted Shock:} A propagating shock wave arises whenever the
group velocity on the left exceeds that on the right, $u(\rho_-)>u(\rho_+)$.
Qualitatively, the faster moving particles catch up to slower particles on
the right and pile up in a shock wave, just as freely-moving cars suddenly
slow down upon approaching a traffic jam. In the conventional ASEP, all
upsteps evolve into a \emph{shock} (S) wave. For the CE, in contrast, only
upsteps where both initial densities are above the inflection
point, $\rho_I<\rho_-<\rho_+$, evolve into shocks (Fig.~\ref{steps}). Here,
exclusion is sufficiently strong that the group velocity is a decreasing
function of density. Strikingly, a propagating shock wave also emerges from
a downstep in CE when the initial densities are both below the
inflection point, $\rho_I>\rho_->\rho_+$. In this regime,
$J_{\rho\rho}=u_\rho>0$; that is, cooperativity is sufficiently strong that
particles in the high-density region on the left have a greater group
velocity and therefore pile up at the interface. We term this singularity an
\emph{inverted shock} (IS) (Fig.~\ref{steps}).
For both shocks and inverted shocks, the density is given by the traveling
wave profile $\rho=\rho(x-ct)$. We obtain the shock speed $c$ by equating
the net flux into a large region that includes the shock,
$J(\rho_+)-J(\rho_-)$, with the change in the number of particles,
$c(\rho_+-\rho_-)$, in this region~\cite{W74} to obtain the standard
expression $c = [J(\rho_+)-J(\rho_-)]/[\rho_+-\rho_-]$; this holds both
for conventional and inverted shocks.
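With the function \texttt{current} from the sketch above, the shock speed is one line (the function name is ours):
\begin{verbatim}
def shock_speed(rho_m, rho_p, lam):
    # c = [J(rho_+) - J(rho_-)] / (rho_+ - rho_-).
    return ((current(rho_p, lam) - current(rho_m, lam))
            / (rho_p - rho_m))
\end{verbatim}
For example, the inverted shock of Fig.~\ref{steps} corresponds to \texttt{shock\_speed(0.325, 0.125, 0.125)}.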
\begin{figure}[ht]
\begin{center}
\mbox{\subfigure{\includegraphics[width=0.4\textwidth]{step_up}} \quad
\subfigure{\includegraphics[width=0.4\textwidth]{step_down}}}
\caption{(left) Evolution of an upstep for $\lambda=\frac{1}{8}$: (C)
continuous compression wave for $\rho_-=\frac{1}{8}$, $\rho_+=\frac{3}{8}$;
(C/S) composite compression/shock for $\rho_-=\frac{1}{8}$,
$\rho_+=\frac{6}{10}$; (S) shock for $\rho_-=\frac{1}{8}$,
$\rho_+=\frac{9}{10}$. (right) Evolution of a downstep for
$\lambda=\frac{1}{8}$: (R) continuous rarefaction for $\rho_-=1$,
$\rho_+=\frac{6}{10}$; (R/IS) composite rarefaction/inverted shock for
$\rho_-=1$, $\rho_+=\frac{3}{8}$; (IS) inverted shock for $\rho_-=0.325$,
$\rho_+=\frac{1}{8}$. The dashed line is the locus $J_\rho=z$ and the
solid black curves are analytic predictions. Simulations are based on
$10^3$ realizations up to $t=4\times10^3$.}
\label{steps}
\end{center}
\end{figure}
\smallskip \emph{Continuous Rarefaction/Compression:} A density step
gradually smooths out when the group velocity to the left is less
than that on the right, $u(\rho_-)<u(\rho_+)$. Here the faster particles on
the right leave open space for the slower particles, similar to a cluster of
stopped cars that slowly spreads out after a stoplight turns green. In ASEP,
a downstep always evolves to a \emph{continuous rarefaction} (R) wave. This
continuous rarefaction also occurs in CE when both initial densities are
above the inflection point, $\rho_->\rho_+>\rho_I$. At these high densities,
exclusion dominates, as in the ASEP, which causes the group velocity to
decrease with density.
In striking contrast to the ASEP, an upstep can continuously smooth out in CE
when the initial densities are below the inflection point,
$\rho_-<\rho_+<\rho_I$. In this regime, cooperativity is sufficiently strong
that particles in the high density region on the right move faster than those
on the left. Thus instead of a shock wave, a \emph{continuous compression}
(C) wave develops (Fig.~\ref{steps}). We determine the density profile by
assuming that it is a function of the scaled variable $z=x/t$. Substituting
$\rho(x,t)=\rho(z)$ into the continuity equation gives
$-z\rho_z+J_\rho\,\rho_z=0$. Thus the profile consists either of
constant-density segments ($\rho_z=0$) or else $z=J_\rho$. Matching these
solutions gives~\cite{KRB10,GKR10}
\begin{equation}
\label{rarefaction}
\rho(z)= \cases{ \rho_- & $z<z_-$ \,,\cr
I(z) & $z_-\leq z \leq z_+$\,, \cr
\rho_+ & $z>z_+$ \,,}
\end{equation}
where $I(z)$ is the inverse function of $z=J_\rho$. For a continuous
profile, the cutoffs $z_-$ and $z_+$ are determined by matching the interior
solution $I(z)$ with the asymptotic solutions: $I(z_\pm)=\rho_\pm$ or
equivalently, $z_\pm=J_\rho(\rho_\pm)$.
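Numerically, the interior branch $I(z)$ can be obtained by tabulating $z=J_\rho(\rho)$ and interpolating the inverse. A minimal sketch for the fully continuous case, reusing \texttt{current} from the sketch above (the function name is ours):
\begin{verbatim}
import numpy as np

def fan_profile(zgrid, rho_m, rho_p, lam, h=1e-6):
    # rho(z) of the continuous fan: tabulate z = J_rho(rho) between
    # rho_- and rho_+ (where J_rho is monotone) and interpolate the
    # inverse I(z).  np.interp clamps at the ends, which reproduces
    # the plateaus rho_- for z < z_- and rho_+ for z > z_+.
    rho = np.linspace(min(rho_m, rho_p), max(rho_m, rho_p), 4001)
    u = (current(rho + h, lam) - current(rho - h, lam)) / (2.0 * h)
    order = np.argsort(u)
    return np.interp(zgrid, u[order], rho[order])
\end{verbatim}
This applies only when $J_\rho$ is monotone between $\rho_-$ and $\rho_+$, i.e., to the purely continuous profiles (R) and (C); the composite cases below require the additional shock condition (\ref{interface}).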
\smallskip \emph{Composite Rarefaction/Compression and Shock:} In CE, a
continuous rarefaction or compression wave can coexist with a shock wave.
This phenomenon occurs when the group velocity on the left is initially less
than that on the right but also with the constraint that the initial
densities lie on either side of the inflection point. Consequently one side
of the step is in the exclusion-dominated regime and the other is in the
cooperativity-dominated regime, or vice-versa. In particular, a
\emph{composite rarefaction/inverted shock} (R/IS) wave emerges from a
downstep when $\rho_->\rho_I>\rho_+$, so that $u(\rho_-)<u(\rho_+)$. As in
the case of the continuous rarefaction wave, the downstep begins to smooth
out from the rear. Consequently, cooperative interactions become more
important as the density at the leading edge of this rarefaction decreases.
Eventually this leading density reaches the point where the particle speed
matches that at the bottom of the downstep and the rarefaction front
terminates in an inverted shock.
Correspondingly, an upstep can evolve to a compression wave with a leading
shock when the densities satisfy $\rho_-<\rho_I<\rho_+$ and
$u(\rho_-)<u(\rho_+)$. In this case, the leading particles initially race
ahead, leaving behind a profile where the density increases with $x$.
However, this increase cannot be continuous because eventually a point is
reached where the speed at the front of this continuous wave matches that of
the top of the upstep. After this point, a pile-up occurs and a shock wave
forms. We call this profile a \emph{composite compression/shock} (C/S) wave
(Fig.~\ref{steps}).
The functional forms of the composite rarefaction/inverted shock and
composite compression/shock profiles are still given by
Eq.~(\ref{rarefaction}), but the criteria to determine the cutoffs $z_\pm$
are now slightly more involved than for continuous profiles. The location of
the left cutoff, $z_-$, is again determined by continuity, namely,
$I(z_-)=\rho_-$ or, alternatively, $z_-=J_\rho(\rho_-)$. To determine the
right cutoff $z_+$, note that in a small spatial region that includes the
leading-edge discontinuity, the density profile is just that of a shock or
inverted shock wave. Thus the equation for the shock speed is
\begin{equation}
\label{interface}
z_+=\frac{J(q_+)-J(\rho_+)}{q_+-\rho_+}~,
\end{equation}
where $q_+\equiv I(z_+)$ is the density just to the left of the
discontinuity. (Note also that $z_+ =J_\rho(q_+)$ by definition.) To
justify (\ref{interface}), we use the conservation equation that the particle
number in $[z_-,z_+]$ equals the initial number plus the net flux into this
region:
\begin{equation}
\label{particle_cons}
\int_{z_-}^{z_+}\!\! I(z)dz = -\rho_-z_-+\rho_+z_+-J(\rho_+)+J(\rho_-)\,.
\end{equation}
We recast this expression into (\ref{interface}), by making the variable
change $z=J_\rho(\rho)$ and using $I(J_\rho(\rho))=\rho$ to write the
integral as $\int_{\rho_-}^{q_+}\rho\,J_{\rho\rho}\,d\rho$, which can be
evaluated by integration by parts. The resulting expression readily simplifies to
(\ref{interface}).
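Explicitly, with $z=J_\rho(\rho)$,
\begin{equation*}
\int_{\rho_-}^{q_+}\rho\,J_{\rho\rho}\,d\rho
=\Big[\rho\,J_\rho\Big]_{\rho_-}^{q_+}-\int_{\rho_-}^{q_+}J_\rho\,d\rho
=q_+z_+-\rho_-z_--J(q_+)+J(\rho_-),
\end{equation*}
and equating this with the right-hand side of (\ref{particle_cons}) cancels the terms $-\rho_-z_-$ and $J(\rho_-)$, leaving $z_+(q_+-\rho_+)=J(q_+)-J(\rho_+)$, which is (\ref{interface}).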
In summary, a diversity of wave singularities arise in asymmetric exclusion
with sufficiently strong cooperativity. The minimum requirement for these
phenomena is an inflection point in the current-density relation $J(\rho)$.
This inflection point leads to a group velocity that is an \emph{increasing}
function of density for $\rho<\rho_I$, a dependence opposite to that in the
conventional ASEP. The resulting non-monotonic density dependence of the
velocity causes an initial density upstep or downstep to evolve to:
shock/inverted shocks, continuous rarefaction/compression waves, or a
composite profile with both continuous and discontinuous elements.
\ack We thank Martin Schmaltz for asking an oral exam question that helped
spark this work and Paul Krapivsky for helpful discussions. We also thank
the referee for informing us about Refs.~\cite{FGRS02,BGRS02}. Finally, we
gratefully acknowledge financial support from NSF grant DMR-0906504.
\section*{References}
\section{Introduction}
We consider the Laguerre unitary ensemble (LUE for short) of $n\times n$ Hermitian matrices whose eigenvalues have the following joint probability density function \cite{Mehta}
$$
p(x_1,x_2,\ldots,x_n)=\frac{1}{Z_n}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}w(x_k;\gamma,n),
$$
where $w(x;\gamma,n)$ is the scaled Laguerre weight
\[w(x;\gamma,n)=x^{\gamma}\mathrm{e}^{-4nx},\qquad\;x\in[0,\infty),\quad\gamma>-1,\]
and $Z_n$ is the partition function which reads
\[Z_n:=\int_{[0,\infty)^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}w(x_k;\gamma,n)dx_{k}.\]
The probability that all the eigenvalues in this LUE lie in the interval $[0,\alpha]$, or the largest eigenvalue is not greater than $\alpha$, is given by
\begin{align}\label{probDn}
\mathbb{P}(n,\gamma,\alpha)=\frac{D_{n}(\alpha)}{D_{n}(\infty)},
\end{align}
where $D_n(\alpha)$ is defined by
\begin{align*}
D_{n}(\alpha):=&\frac{1}{n!}\int_{[0,\alpha]^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}w(x_k;\gamma,n)dx_{k}.
\end{align*}
It is apparent that $D_n(\infty)=Z_n/n!$.
In this paper, we are interested in the asymptotic behavior of $\mathbb{P}(n,\gamma,\alpha)$ at the soft edge. Deift, Its and Krasovsky \cite{Deift} studied the special case $\mathbb{P}(n,0,\alpha)$, namely the largest eigenvalue distribution on $[0,\alpha]$ of LUE with the weight $\mathrm{e}^{-4nx}$. By using the Riemann-Hilbert approach, they obtained the constant conjectured by Tracy and Widom \cite{TW1}, which appears in the asymptotic formula for $\mathbb{P}(n,0,\alpha)$ at the soft edge. We would like to generalize their results to general $\gamma$.
By changing variables $4nx_{\ell}=y_{\ell}, \ell=1,2,\ldots,n$, in $D_n(\alpha)$, we get
\begin{eqnarray}
D_{n}(\alpha)&=&(4n)^{-n(n+\gamma)}\cdot\frac{1}{n!}\int_{[0,4n\alpha]^{n}}\prod_{1\leq i<j\leq n}(y_i-y_j)^2\prod_{k=1}^{n}y_{k}^{\gamma}\mathrm{e}^{-y_{k}}dy_{k}\nonumber\\
&=:&(4n)^{-n(n+\gamma)}\widehat{D}_n(4n\alpha),\nonumber
\end{eqnarray}
where $\widehat{D}_n(\cdot)$ is defined by
\begin{equation}\label{dnt}
\widehat{D}_{n}(t):=\frac{1}{n!}\int_{[0,t]^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}x_k^{\gamma}\mathrm{e}^{-x_k}dx_{k}.
\end{equation}
It follows from \eqref{probDn} that
\begin{align}\label{probDn1}
\mathbb{P}(n,\gamma,\alpha)=\frac{\widehat{D}_{n}(4n\alpha)}{\widehat{D}_{n}(\infty)}.
\end{align}
Denoting by $\widehat{\mathbb{P}}(n,\gamma,t)$ the probability that the largest eigenvalue of $n\times n$ Hermitian matrices is $\leq t$ in the LUE with the standard Laguerre weight $x^{\gamma}\mathrm{e}^{-x}$, we have (see \cite{Lyu2017})
\begin{equation}\label{pr}
\widehat{\mathbb{P}}(n,\gamma,t)=\frac{\widehat{D}_{n}(t)}{\widehat{D}_{n}(\infty)}.
\end{equation}
Note that $\widehat{D}_{n}(\infty)$ has the following closed-form expression \cite[p.321 (17.6.5)]{Mehta},
\begin{align}
\widehat{D}_{n}(\infty)=&\frac{1}{n!}\prod_{j=1}^{n}\Gamma(j+1)\Gamma(j+\gamma)\nonumber\\
=&\frac{G(n+1)G(n+\gamma+1)}{G(\gamma+1)},\label{dni}
\end{align}
where $G(\cdot)$ is the Barnes $G$-function which satisfies the relation
$$
G(z+1)=\Gamma(z)G(z),\qquad\qquad G(1):=1.
$$
See \cite{Barnes,Voros,Choi} for more properties of this function.
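The closed form (\ref{dni}) is easy to check numerically; a minimal sketch using \texttt{mpmath} (function names are ours):
\begin{verbatim}
import mpmath as mp

def Dn_inf_product(n, gamma):
    # (1/n!) * prod_{j=1}^{n} Gamma(j+1) Gamma(j+gamma).
    val = mp.mpf(1) / mp.factorial(n)
    for j in range(1, n + 1):
        val *= mp.gamma(j + 1) * mp.gamma(j + gamma)
    return val

def Dn_inf_barnes(n, gamma):
    # Closed form G(n+1) G(n+gamma+1) / G(gamma+1).
    return (mp.barnesg(n + 1) * mp.barnesg(n + gamma + 1)
            / mp.barnesg(gamma + 1))

print(Dn_inf_product(6, mp.mpf('0.5')))
print(Dn_inf_barnes(6, mp.mpf('0.5')))   # agrees
\end{verbatim}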
A combination of \eqref{probDn1} and \eqref{pr} gives us a connection between the largest eigenvalue distribution of LUE with the weight $x^{\gamma}\mathrm{e}^{-4nx}$ and the weight $x^{\gamma}\mathrm{e}^{-x}$:
$$
\mathbb{P}(n,\gamma,\alpha)=\widehat{\mathbb{P}}(n,\gamma,4n\alpha).
$$
Therefore, to study $\mathbb{P}(n,\gamma,\alpha)$, we first turn our attention to $\widehat{\mathbb{P}}(n,\gamma,t)$.
It is well known that the gap probability $\widehat{\mathbb{P}}(n,\gamma,t)$ that the interval $(t,\infty)$ contains no eigenvalues, can be expressed as a Fredholm determinant \cite[p.109 (5.42)]{Deift1999}, namely,
$$
\widehat{\mathbb{P}}(n,\gamma,t)=\det\left(I-K_{n}\mathlarger {\chi}_{(t,\infty)}\right),
$$
where $\mathlarger{\mathlarger {\chi}}_{(t,\infty)}(\cdot)$ is the characteristic function of the interval $(t,\infty)$ and the integral operator $K_{n}\mathlarger{\mathlarger {\chi}}_{(t,\infty)}$ has kernel $K_n(x,y)\mathlarger{\mathlarger {\chi}}_{(t,\infty)}(y)$, with $K_n(x,y)$ given by the Christoffel-Darboux formula \cite{Szego},
\begin{align*}
K_{n}(x,y)=&\sum_{j=0}^{n-1}\varphi_j(x)\varphi_j(y)\\
=&\sqrt{n(n+\gamma)}\:\frac{\varphi_{n-1}(x)\varphi_{n}(y)-\varphi_{n-1}(y)\varphi_{n}(x)}{x-y}.
\end{align*}
Here $\{\varphi_j(x)\}_{j=0}^{\infty}$ are obtained by orthonormalizing the sequence $\{x^jx^{\gamma/2}\mathrm{e}^{-x/2}\}_{j=0}^{\infty}$ over $[0,\infty)$, and \[\varphi_j(x)=\sqrt{\frac{\Gamma(j+1)}{\Gamma(j+\gamma+1)}}x^{\frac{\gamma}{2}}\mathrm{e}^{-\frac{x}{2}}L_{j}^{(\gamma)}(x),\] with $L_{j}^{(\gamma)}(x)$ denoting the Laguerre polynomial of degree $j$.
The kernel $K_{n}(x,y)$ tends to the Airy kernel at the soft edge \cite{Forrester}, i.e.,
$$
\lim_{n\rightarrow\infty}2^{\frac{4}{3}}n^{\frac{1}{3}}K_{n}\left(4n+2\gamma+2+2^{\frac{4}{3}}n^{\frac{1}{3}}u,4n+2\gamma+2+2^{\frac{4}{3}}n^{\frac{1}{3}}v\right)
=K_{\mathrm{Airy}}(u,v),
$$
where $K_{\mathrm{Airy}}(u,v)$ is the Airy kernel defined by
$$
K_{\mathrm{Airy}}(u,v):=\frac{\mathrm{Ai}(u)\mathrm{Ai}'(v)-\mathrm{Ai}(v)\mathrm{Ai}'(u)}{u-v}.
$$
Here $\mathrm{Ai}(\cdot)$ is the Airy function of the first kind \cite{Lebedev}. See also \cite{TW1,Min2020} on the study of the Airy kernel. Tracy and Widom \cite{TW1} showed that $\widehat{\mathbb{P}}(n,\gamma,t)$ can be expressed in terms of a Painlev\'{e} II transcendent at the soft edge.
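For numerical work, the Airy kernel can be evaluated directly; a minimal sketch using \texttt{scipy} (the function name is ours; the diagonal value follows from the confluent limit together with $\mathrm{Ai}''(u)=u\,\mathrm{Ai}(u)$):
\begin{verbatim}
import numpy as np
from scipy.special import airy

def airy_kernel(u, v):
    # (Ai(u) Ai'(v) - Ai(v) Ai'(u)) / (u - v); on the diagonal
    # the limit is Ai'(u)^2 - u Ai(u)^2.
    Au, Apu, _, _ = airy(u)
    Av, Apv, _, _ = airy(v)
    if np.isclose(u, v):
        return Apu**2 - u * Au**2
    return (Au * Apv - Av * Apu) / (u - v)
\end{verbatim}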
At the hard edge, $K_{n}(x,y)$ tends to the Bessel kernel \cite{Forrester}, that is,
$$
\lim_{n\rightarrow\infty}\frac{1}{4n}K_{n}\left(\frac{u}{4n},\frac{v}{4n}\right)
=K_{\mathrm{Bessel}}(u,v),
$$
where
\[K_{\mathrm{Bessel}}(u,v):=\frac{J_{\gamma}(\sqrt{u})\sqrt{v}J_{\gamma}'(\sqrt{v})-\sqrt{u}J_{\gamma}'(\sqrt{u})J_{\gamma}(\sqrt{v})}{2(u-v)}.\]
Tracy and Widom \cite{TW2} proved that the log-derivative of $\widehat{\mathbb{P}}(n,\gamma,t)$ satisfies a particular Painlev\'{e} III equation when $t$ approaches the hard edge.
The level density of the LUE with the weight $x^{\gamma}\mathrm{e}^{-x}$ is given by \cite[p.356 (19.1.11)]{Mehta} \[\rho(x)=\frac{1}{2\pi}\sqrt{\frac{4n-x}{x}},\qquad 0<x<4n,\]
which is an example of the Mar\v{c}enko-Pastur law \cite{MP}. Hence, in \cite{Deift} and also in this paper, the scaled Laguerre weight with $n$ appearing in the exponent is considered, in order to make the equilibrium density of the eigenvalues supported on $(0,1)$ instead of $(0,4n)$.
For finite $n$, Tracy and Widom \cite {TW3} established a particular Painlev\'{e} V equation satisfied by the log-derivative of $\widehat{\mathbb{P}}(n,\gamma,t)$. Adler and van Moerbeke \cite{Adler} derived the same results via differential operators.
By using the ladder operator approach of orthogonal polynomials, Basor and Chen \cite{Basor2009} investigated the Hankel determinant generated by the Laguerre weight with a jump, which includes $\widehat{D}_{n}(t)$ as a special case, and a Painlev\'{e} V equation shows up as is expected.
Based on their results, Lyu and Chen \cite{Lyu2017} considered the asymptotic behavior of $x^{\gamma/2}\mathrm{e}^{-x/2}P_j(x)$ at the soft edge, with $P_j(x),\;j=0,1,\ldots$ denoting the monic polynomials orthogonal with respect to $x^{\gamma}\mathrm{e}^{-x}$ on $[0,t]$.
We mention here that the ladder operator method is effective and straightforward in the finite dimensional analysis of problems in unitary ensembles, see for example, the gap probability \cite{Basor2012,Lyu2019,LyuChenFan,Min2018} and the partition function for weights with discontinuities or singularities \cite{Chen2010,ChenZhang,Min2019a,MLC}.
In the present paper, to derive the asymptotic formula for $\mathbb{P}(n,\gamma,\alpha)$ at the soft edge, we proceed in two steps. On the one hand, we derive a large $n$ asymptotic expansion for $\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)$ by using differential equations for finite $n$ from \cite{Basor2009}, and integrate this expansion from $\alpha_0$ to $\alpha$, with arbitrary $\alpha_0<\alpha$, to obtain an asymptotic formula for $\ln\mathbb{P}(n,\gamma,\alpha)-\ln\mathbb{P}(n,\gamma,\alpha_0)$. On the other hand, we use the definition of $D_n(\alpha)$ as a multiple integral to get an approximate expression for $\ln\mathbb{P}(n,\gamma,\alpha_0)$ when $\alpha_0$ is close to 0. Adding these two asymptotic expansions and sending $\alpha_0$ to 0, we arrive at the large $n$ asymptotics of $\ln\mathbb{P}(n,\gamma,\alpha)$ with $\alpha$ ranging from 0 up to the soft edge, up to a term that is independent of $\alpha$ and tends to $0$ as $n\rightarrow\infty$. Finally, setting $\alpha=1-\frac{s}{(2n)^{2/3}}$ and sending $n$ to $\infty$, we obtain the asymptotic formula for $\ln\mathbb{P}(n,\gamma,\alpha)$ at the soft edge for large $s$,
$$
\lim_{n\rightarrow\infty}\ln\mathbb{P}\left(n,\gamma,1-\frac{s}{(2n)^{2/3}}\right)=-\frac{s^3}{12}-\frac{1}{8}\ln s+\frac{1}{24}\ln 2+\zeta'(-1)+O(s^{-3}),
$$
where the celebrated Tracy-Widom constant \cite{TW1} shows up.
The above method is motivated by Deift, Its and Krasovsky \cite{Deift} where they studied the special case $\mathbb{P}(n,0,\alpha)$, namely the largest eigenvalue distribution on $[0,\alpha]$ of LUE with the weight $\mathrm{e}^{-4nx}$. They used the Riemann-Hilbert approach to get the asymptotic expansion for $\frac{d}{d\alpha}\ln\mathbb{P}(n,0,\alpha)$, while in this paper, as is mentioned above, we relate the weight $x^{\gamma}\mathrm{e}^{-4nx}$ to $x^{\gamma}\mathrm{e}^{-x}$ and make use of the established results for the latter weight from \cite{Basor2009}. The Riemann-Hilbert method \cite{Deift1999} is a very powerful tool to investigate the asymptotic behavior of many unitary ensembles. See, for instance, the gap probability problem \cite{DaiXuZhang2018,XuZhao2019}, correlation kernel \cite{ChenChenFan,XuDaiZhao}, partition functions \cite{DaiXuZhang2019, BMM, ACM}, Hankel determinants and orthogonal polynomials \cite{Bogatskiy,Charlier,ChenChenFan2019,Xu2011}.
This paper is organized as follows. In Sec. 2, we present some important results from \cite{Basor2009} which are related to the largest eigenvalue distribution of LUE with the weight $x^{\gamma}\mathrm{e}^{-x}$. Sec. 3 is devoted to the derivation of the asymptotic formula for $\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)$ when $n$ is large. Our main results are developed in Sec. 4.
\section{Preliminaries}
In this section, we show some important results of Basor and Chen \cite{Basor2009}, which are crucial for the analysis of the asymptotic behavior of the largest eigenvalue distribution in the LUE with the scaled Laguerre weight.
It is well known that the multiple integral $\widehat{D}_n(t)$ defined by \eqref{dnt} can be written as the determinant of a Hankel matrix and also as the product of the square of the $L^2$-norms of the corresponding monic orthogonal polynomials \cite[p.16-19]{Ismail}, namely
\begin{align*}
\widehat{D}_n(t):=&\frac{1}{n!}\int_{[0,t]^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}x_k^{\gamma}\mathrm{e}^{-x_k}dx_{k}\\
=&\det\left(\int_{0}^{t}x^{i+j}x^{\gamma}\mathrm{e}^{-x}dx\right)_{i,j=0}^{n-1}\\
=&\prod_{j=0}^{n-1}h_j(t),
\end{align*}
where
\begin{equation}\label{or}
h_j(t)\delta_{jk}:=\int_{0}^{t}P_j(x,t)P_k(x,t)x^{\gamma}\mathrm{e}^{-x}dx,
\end{equation}
and $\delta_{jk}$ is the Kronecker delta function. Here $P_n(x,t),\; n=0,1,2,\ldots,$ are monic polynomials of degree $n$ defined by
\begin{equation}\label{expan}
P_{n}(x,t)=x^{n}+\mathrm{p}(n,t)x^{n-1}+\cdots+P_{n}(0,t).
\end{equation}
Note that, in the following discussions, $n$ stands for any nonnegative integer instead of the dimension of the Hermitian matrices.
The orthogonality (\ref{or}) implies the following three-term recurrence relation \cite{Chihara,Szego}:
\begin{equation}\label{rr}
xP_{n}(x,t)=P_{n+1}(x,t)+\alpha_{n}(t)P_{n}(x,t)+\beta_{n}(t)P_{n-1}(x,t),
\end{equation}
subject to the initial conditions
$$
P_{0}(x,t):=1,\qquad\qquad\beta_{0}(t)P_{-1}(x,t):=0.
$$
As an easy consequence of (\ref{or})--(\ref{rr}), we have
$$
\alpha_{n}(t)=\mathrm{p}(n,t)-\mathrm{p}(n+1,t),
$$
$$
\beta_{n}(t)=\frac{h_{n}(t)}{h_{n-1}(t)}.
$$
In addition, $\alpha_{n}(t)$ and $\beta_{n}(t)$ admit the following integral representations,
$$
\alpha_{n}(t)=\frac{1}{h_{n}(t)}\int_{0}^{t}x P_{n}^{2}(x,t)x^{\gamma}\mathrm{e}^{-x}dx,
$$
$$
\beta_{n}(t)=\frac{1}{h_{n-1}(t)}\int_{0}^{t}x P_{n}(x,t)P_{n-1}(x,t)x^{\gamma}\mathrm{e}^{-x}dx.
$$
From the recurrence relation (\ref{rr}), one can derive the famous Christoffel-Darboux formula \cite{Szego},
$$
\sum_{k=0}^{n-1}\frac{P_{k}(x,t)P_{k}(y,t)}{h_{k}(t)}=\frac{P_{n}(x,t)P_{n-1}(y,t)-P_{n}(y,t)P_{n-1}(x,t)}{h_{n-1}(t)(x-y)}.
$$
For convenience, we will not display the $t$ dependence of relevant quantities unless it is required in the following discussions.
Basor and Chen \cite{Basor2009} studied the Hankel determinant generated by the discontinuous Laguerre weight $x^{\gamma}\mathrm{e}^{-x}(A+B\theta(x-t))$, where $\theta(\cdot)$ is the Heaviside step function. We observe that the special case where $A=1$ and $B=-1$ corresponds to $\widehat{D}_n(t)$.
It is proved in \cite{Basor2009} that the monic orthogonal polynomials defined by \eqref{or} satisfy the lowering operator equation
$$
\left(\frac{d}{dz}+B_{n}(z)\right)P_{n}(z)=\beta_{n}A_{n}(z)P_{n-1}(z),
$$
and the raising operator equation
$$
\left(\frac{d}{dz}-B_{n}(z)-\mathrm{v}'(z)\right)P_{n-1}(z)=-A_{n-1}(z)P_{n}(z),
$$
where $\mathrm{v}(z):=-\ln(z^{\gamma}\mathrm{e}^{-z})=z-\gamma\ln z$, and
$$
A_{n}(z):=\frac{R_n(t)}{z-t}+\frac{1}{h_{n}(t)}\int_{0}^{t}\frac{\mathrm{v}'(z)-\mathrm{v}'(y)}{z-y}P_{n}^{2}(y)y^{\gamma}\mathrm{e}^{-y}dy,
$$
$$
B_{n}(z):=\frac{r_n(t)}{z-t}+\frac{1}{h_{n-1}(t)}\int_{0}^{t}\frac{\mathrm{v}'(z)-\mathrm{v}'(y)}{z-y}P_{n}(y)P_{n-1}(y)y^{\gamma}\mathrm{e}^{-y}dy.
$$
Here the auxiliary quantities $R_n(t)$ and $r_n(t)$ are defined by
$$
R_n(t):=-\frac{t^{\gamma}\mathrm{e}^{-t}}{h_n(t)}P_n^2(t,t),
$$
$$
r_n(t):=-\frac{t^{\gamma}\mathrm{e}^{-t}}{h_{n-1}(t)}P_n(t,t)P_{n-1}(t,t),
$$
and $P_n(t,t):=P_n(z,t)|_{z=t}$.
From the definitions of $A_n(z)$ and $B_n(z)$, Basor and Chen \cite{Basor2009} derived two identities valid for $z\in\mathbb{C}\cup\{\infty\}$:
\begin{equation}
B_{n+1}(z)+B_{n}(z)=(z-\alpha_{n})A_{n}(z)-\mathrm{v}'(z), \tag{$S_{1}$}
\end{equation}
\begin{equation}
1+(z-\alpha_{n})(B_{n+1}(z)-B_{n}(z))=\beta_{n+1}A_{n+1}(z)-\beta_{n}A_{n-1}(z). \tag{$S_{2}$}
\end{equation}
A combination of $(S_1)$ and $(S_2)$ produces
\begin{equation}
B_{n}^{2}(z)+\mathrm{v}'(z)B_{n}(z)+\sum_{j=0}^{n-1}A_{j}(z)=\beta_{n}A_{n}(z)A_{n-1}(z). \tag{$S_{2}'$}
\end{equation}
Computing $A_n(z)$ and $B_n(z)$ by using their definitions and substituting the resulting expressions into the compatibility conditions $(S_1)$, $(S_2)$ and $(S_2')$, Basor and Chen \cite{Basor2009} obtained the following results.
\begin{proposition} $A_n(z)$ and $B_n(z)$ are given by
$$
A_{n}(z)=\frac{R_{n}(t)}{z-t}+\frac{1-R_{n}(t)}{z},
$$
$$
B_{n}(z)=\frac{r_{n}(t)}{z-t}-\frac{n+r_{n}(t)}{z}.
$$
\end{proposition}
\begin{proposition}\label{p1}
The quantity
$$S_{n}(t):=1-\frac{1}{R_n(t)},$$
satisfies the second-order differential equation
\begin{equation}\label{pv}
S_{n}''=\frac{(3S_{n}-1)(S_{n}')^2}{2S_{n}(S_{n}-1)}-\frac{S_{n}'}{t}-\frac{\gamma^{2}}{2}\frac{(S_{n}-1)^2}{t^2S_{n}}+\frac{(2n+1+\gamma)S_{n}}{t}-\frac{S_{n}(S_{n}+1)}{2(S_{n}-1)},
\end{equation}
which is a particular Painlev\'{e} V equation, $P_{V}\left(0,-\frac{\gamma^2}{2},2n+1+\gamma,-\frac{1}{2}\right)$, following the convention of \cite{Gromak}.
\end{proposition}
Let
$$
\sigma_n(t):=t\frac{d}{dt}\ln \widehat{D}_n(t).
$$
Then, in view of \eqref{pr}, we have
\begin{align}\label{sn}
\sigma_n(t)=t\frac{d}{dt}\ln\widehat{\mathbb{P}}(n,\gamma,t).
\end{align}
Recall that $\widehat{\mathbb{P}}(n,\gamma,t)$ represents the largest eigenvalue distribution function on $[0,t]$ of LUE with the weight $x^{\gamma}\mathrm{e}^{-x}$. The following results come from \cite{Basor2009} and \cite{Lyu2017}.
\begin{proposition}\label{p2}
The quantity $\sigma_n(t)$ satisfies the Jimbo-Miwa-Okamoto $\sigma$-form of Painlev\'{e} V \cite{Jimbo},
$$
(t\sigma_n'')^2=4(\sigma_n')^2\left(\sigma_n-n(n+\gamma)-t\sigma_n'\right)+\left((2n+\gamma-t)\sigma_n'+\sigma_n\right)^2.
$$
In addition, $\sigma_n(t)$ is expressed in terms of $S_n(t)$ by
\begin{eqnarray}\label{ss}
\sigma_n(t)
&=&-\frac{\gamma^2}{4S_n}+\frac{t(4n+2\gamma-t)}{4(S_n-1)}-\frac{t^2}{4(S_n-1)^2}+\frac{t^2(S_n')^2}{4S_n(S_n-1)^2}.
\end{eqnarray}
\end{proposition}
In the next section, we will make use of the above results to study $\mathbb{P}(n,\gamma,\alpha)$, that is, the probability that the largest eigenvalue of the LUE with the weight $x^{\gamma}\mathrm{e}^{-4nx}$ is $\leq\alpha$.
\section{Logarithmic Derivative of the Largest Eigenvalue Distribution Function}
We consider the LUE defined by the scaled Laguerre weight,
$$
w(x;\gamma,n)=x^{\gamma}\mathrm{e}^{-4nx},\qquad x\in[0,\infty),\quad\gamma>-1.
$$
As is shown in the introduction, the probability that the largest eigenvalue in this LUE is $\leq\alpha$ is equal to the probability that the largest eigenvalue is $\leq 4n\alpha$ in the LUE with the weight $x^{\gamma}\mathrm{e}^{-x}$, i.e.,
\begin{equation}\label{re}
\mathbb{P}(n,\gamma,\alpha)=\widehat{\mathbb{P}}(n,\gamma,4n\alpha).
\end{equation}
According to the results in the previous section with $t=4n\alpha$, we come to the following result.
\begin{lemma} As $n\rightarrow\infty$, $\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)$ has the following asymptotic expansion,
\begin{eqnarray}\label{phi}
\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)&=&\frac{(1-\alpha)^2}{\alpha}n^2+\frac{\gamma(1-\alpha)}{\alpha}n+\frac{\alpha+2\gamma^2(1-\alpha)}{4(1-\alpha^2)}
-\frac{\gamma\left(\alpha+\gamma^2(1-\alpha)^2\right)}{4n(1-\alpha^2)^2}\nonumber\\[5pt]
&&+O\left(\frac{1}{(1-\alpha)^4n^2}\right).
\end{eqnarray}
\end{lemma}
\begin{proof}
Let
$$
t=4n\alpha,
$$
and denote
\begin{equation}\label{t1}
F_n(\alpha):=S_n(t)=S_n(4n\alpha).
\end{equation}
Then equation (\ref{pv}) becomes
\begin{equation}\label{fn}
F_n''=\frac{(3F_n-1)(F_n')^2}{2F_n(F_n-1)}-\frac{F_n'}{\alpha}+\frac{4n(2n+1+\gamma)F_n}{\alpha}-\frac{\gamma^2(F_n-1)^2}{2\alpha^2F_n}-\frac{8n^2F_n(F_n+1)}{F_n-1}.
\end{equation}
In order to obtain the asymptotic behavior of $F_n(\alpha)$, we first disregard the derivative terms in this equation, which gives
$$
\frac{4n(2n+1+\gamma)\tilde{F}_n}{\alpha}-\frac{\gamma^2(\tilde{F}_n-1)^2}{2\alpha^2\tilde{F}_n}-\frac{8n^2\tilde{F}_n(\tilde{F}_n+1)}{\tilde{F}_n-1}=0,
$$
which is actually a cubic equation for $\tilde{F}_n(\alpha)$, i.e.,
$$
\left(16n^2\alpha(1-\alpha)+8n\alpha(1+\gamma)-\gamma^2\right)\tilde{F}_n^3-\left(16n^2\alpha(1+\alpha)+8n\alpha(1+\gamma)-3\gamma^2\right)\tilde{F}_n^2-3\gamma^2\tilde{F}_n+\gamma^2=0.
$$
It has only one real solution which has the following large $n$ expansion,
$$
\tilde{F}_n(\alpha)=\frac{1+\alpha}{1-\alpha}-\frac{\alpha(1+\gamma)}{n(1-\alpha)^2}+\frac{\alpha(1+\alpha)^2(1+2\gamma)+\alpha(1+3\alpha)\gamma^2}{2n^2(1-\alpha)^3(1+\alpha)^2}+O\left(\frac{1}{n^3(1-\alpha)^4}\right).
$$
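The first two correction coefficients in this expansion can be verified symbolically, for instance with \texttt{sympy}; the sketch below (variable names are ours) substitutes the ansatz $\tilde{F}_n=(1+\alpha)/(1-\alpha)+c_1/n+c_2/n^2$ into the cubic and solves order by order in $1/n$:
\begin{verbatim}
import sympy as sp

n, a, g, c1, c2 = sp.symbols('n alpha gamma c1 c2')
F = (1 + a)/(1 - a) + c1/n + c2/n**2
cubic = ((16*n**2*a*(1 - a) + 8*n*a*(1 + g) - g**2)*F**3
         - (16*n**2*a*(1 + a) + 8*n*a*(1 + g) - 3*g**2)*F**2
         - 3*g**2*F + g**2)
p = sp.Poly(sp.expand(cubic * n**6), n)
sol1 = sp.solve(p.coeff_monomial(n**7), c1)[0]
sol2 = sp.solve(p.coeff_monomial(n**6).subs(c1, sol1), c2)[0]
print(sp.simplify(sol1 + a*(1 + g)/(1 - a)**2))   # 0
print(sp.simplify(sol2 - (a*(1 + a)**2*(1 + 2*g)
      + a*(1 + 3*a)*g**2)/(2*(1 - a)**3*(1 + a)**2)))  # 0
\end{verbatim}
The leading term $(1+\alpha)/(1-\alpha)$ makes the coefficient of $n^2$ vanish identically, and the coefficients of $n^1$ and $n^0$ then determine $c_1$ and $c_2$ in turn.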
Hence we suppose that $F_n(\alpha)$ has the following series expansion,
$$
F_n(\alpha)=\sum_{i=0}^{\infty}a_i(\alpha)n^{-i},\qquad n\rightarrow\infty.
$$
Substituting the above expression into equation (\ref{fn}), and comparing the coefficients of identical powers of $n$ on both sides, we obtain $a_i(\alpha), i=0, 1, 2, \ldots$ one by one. This leads to the expansion
\begin{eqnarray}\label{fne}
F_n(\alpha)&=&\frac{1+\alpha}{1-\alpha}-\frac{\alpha(1+\gamma)}{n(1-\alpha)^2}+\frac{\alpha+\alpha^2-\alpha^4+2\alpha(1-\alpha)(1+\alpha)^2\gamma+\alpha(1-\alpha)(1+3\alpha)\gamma^2}{2n^2(1-\alpha)^4(1+\alpha)^2}\nonumber\\[10pt]
&&+O\left(\frac{1}{n^3(1-\alpha)^5}\right).
\end{eqnarray}
Furthermore, with the relation $t=4n\alpha$, it follows from (\ref{sn}) and (\ref{re}) that
\begin{align}
\sigma_{n}(t)=&\sigma_{n}(4n\alpha)=\alpha\frac{d}{d\alpha}\ln\widehat{\mathbb{P}}(n,\gamma,4n\alpha)\nonumber\\
=&\alpha\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha).\nonumber
\end{align}
Hence, by using (\ref{ss}) and in view of (\ref{t1}), we are able to express $\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)$ in terms of $F_n(\alpha)$,
\begin{equation}\label{pf}
\frac{d}{d\alpha}\ln\mathbb{P}(n,\gamma,\alpha)=-\frac{\gamma^2}{4\alpha F_n}+\frac{2n(2n(1-\alpha)+\gamma)}{F_n-1}-\frac{4n^2\alpha}{(F_n-1)^2}+\frac{\alpha(F_n')^2}{4F_n(F_n-1)^2}.
\end{equation}
Substituting (\ref{fne}) into (\ref{pf}), we arrive at \eqref{phi}.
\end{proof}
\begin{remark}
When $\gamma=0$, our formula (\ref{phi}) coincides with the expression (152) of Deift et al. \cite{Deift}.
\end{remark}
In order to obtain the asymptotic formula of the largest eigenvalue distribution function $\mathbb{P}(n,\gamma,\alpha)$ as $n\rightarrow\infty$, we can integrate identity (\ref{phi}) from $\alpha_0$ to any $\alpha$, where $\alpha_0$ is close to 0 and $0<\alpha_0<\alpha\leq\frac{1}{4n}(4n-2^{4/3}n^{1/3}s_0)=1-\frac{s_0}{(2n)^{2/3}}$ with finite $s_0>0$. See \cite{Perret, Lyu2017} for the study on the soft edge scaling limit of LUE. So we need to know the asymptotics of $\mathbb{P}(n,\gamma,\alpha)$ when $\alpha$ tends to 0. We will analyze it in the next section following the method in \cite{Deift}.
\section{Asymptotic Behavior of the Largest Eigenvalue Distribution Function}
Returning to our problem, we recall that the probability that all the eigenvalues lie in $[0,\alpha]$ is given by
$$
\mathbb{P}(n,\gamma,\alpha)=\frac{D_{n}(\alpha)}{D_{n}(\infty)},
$$
where
$$
D_{n}(\alpha)=\frac{1}{n!}\int_{[0,\alpha]^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}x_{k}^{\gamma}\mathrm{e}^{-4nx_{k}}dx_{k},
$$
and
$$
D_{n}(\infty)=\frac{1}{n!}\int_{[0,\infty)^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}x_{k}^{\gamma}\mathrm{e}^{-4nx_{k}}dx_{k}.
$$
By changing variables $x_{\ell}=\alpha t_{\ell}, {\ell}=1,2,\ldots,n$, we find
$$
D_{n}(\alpha)=\alpha^{n(n+\gamma)}\frac{1}{n!}\int_{[0,1]^{n}}\prod_{1\leq i<j\leq n}(t_i-t_j)^2\prod_{k=1}^{n}t_{k}^{\gamma}\mathrm{e}^{-4n\alpha t_{k}}dt_{k}.
$$
For fixed $n$ and as $\alpha\rightarrow 0$, we have
$$
\mathrm{e}^{-4n\alpha t_{k}}=1-4n\alpha t_{k}+O(\alpha^2),
$$
so that
\begin{eqnarray}
D_{n}(\alpha)&=&\alpha^{n(n+\gamma)}\frac{1}{n!}\int_{[0,1]^{n}}\prod_{1\leq i<j\leq n}(t_i-t_j)^2\prod_{k=1}^{n}t_{k}^{\gamma}(1-4n\alpha t_{k}+O(\alpha^2))dt_{k}\nonumber\\
&=&\alpha^{n(n+\gamma)}A_n(\gamma)(1+o_n(\alpha)),\nonumber
\end{eqnarray}
where $o_n(\alpha)\rightarrow 0$ as $\alpha\rightarrow 0$ for fixed $n$, and
$$
A_n(\gamma):=\frac{1}{n!}\int_{[0,1]^{n}}\prod_{1\leq i<j\leq n}(t_i-t_j)^2\prod_{k=1}^{n}t_{k}^{\gamma}dt_{k}.
$$
Hence, we find as $\alpha\rightarrow 0$,
\begin{eqnarray}\label{pn}
\ln\mathbb{P}(n,\gamma,\alpha)&=&\ln D_{n}(\alpha)-\ln D_{n}(\infty)\nonumber\\
&=&n(n+\gamma)\ln\alpha+\ln A_n(\gamma)-\ln D_{n}(\infty)+o_n(\alpha).
\end{eqnarray}
According to identity (17.1.3) in \cite{Mehta}, we have
\begin{eqnarray}
A_n(\gamma)
&=&\frac{1}{n!}\prod_{j=0}^{n-1}\frac{\Gamma(j+1)\Gamma(j+2)\Gamma(j+\gamma+1)}{\Gamma(j+n+\gamma+1)}\nonumber\\
&=&\frac{G^{2}(n+1)G^2(n+\gamma+1)}{G(\gamma+1)G(2n+\gamma+1)},\nonumber
\end{eqnarray}
where $G(\cdot)$ is the Barnes $G$-function. Now we look at $D_{n}(\infty)$, i.e.
$$
D_{n}(\infty)=\frac{1}{n!}\int_{[0,\infty)^{n}}\prod_{1\leq i<j\leq n}(x_i-x_j)^2\prod_{k=1}^{n}x_{k}^{\gamma}\mathrm{e}^{-4nx_{k}}dx_{k}.
$$
By changing variables $4nx_{\ell}=y_{\ell}, \ell=1,2,\ldots,n$, we get
\begin{eqnarray}
D_{n}(\infty)&=&(4n)^{-n(n+\gamma)}\cdot\frac{1}{n!}\int_{[0,\infty)^{n}}\prod_{1\leq i<j\leq n}(y_i-y_j)^2\prod_{k=1}^{n}y_{k}^{\gamma}\mathrm{e}^{-y_{k}}dy_{k}\nonumber\\[5pt]
&=&(4n)^{-n(n+\gamma)}\widehat{D}_{n}(\infty)\nonumber\\[5pt]
&=&(4n)^{-n(n+\gamma)}\frac{G(n+1)G(n+\gamma+1)}{G(\gamma+1)},\nonumber
\end{eqnarray}
where we have used (\ref{dni}). Substituting the above expressions for $A_n(\gamma)$ and $D_{n}(\infty)$ into (\ref{pn}), we arrive at, as $\alpha\rightarrow 0$,
$$
\ln\mathbb{P}(n,\gamma,\alpha)=n(n+\gamma)\ln(4n\alpha)+\ln G(n+1)+\ln G(n+\gamma+1)-\ln G(2n+\gamma+1)+o_n(\alpha).
$$
By using the asymptotic formula of Barnes $G$-function (see, for example, formula (A.6) in \cite{Voros}), i.e.,
\begin{equation}\label{bg}
\ln G(z+1)=z^2\left(\frac{\ln z}{2}-\frac{3}{4}\right)+\frac{z}{2}\ln (2\pi)-\frac{\ln z}{12}+\zeta'(-1)+O(z^{-1}),\qquad z\rightarrow\infty,
\end{equation}
where $\zeta(\cdot)$ is the Riemann zeta function, we obtain
\begin{eqnarray}\label{pn1}
\ln\mathbb{P}(n,\gamma,\alpha)&=&\left(\frac{3}{2}n^2+n\gamma-\frac{1}{12}\right)\ln n+\left(\frac{(n+\gamma)^2}{2}-\frac{1}{12}\right)\ln(n+\gamma)\nonumber\\[10pt]
&&-\left(\frac{(2n+\gamma)^2}{2}-\frac{1}{12}\right)\ln(2n+\gamma)+n(n+\gamma)\left(\frac{3}{2}+\ln (4\alpha)\right)\nonumber\\[10pt]
&&+\zeta'(-1)+\tilde{\delta}_{n}(\gamma)+o_n(\alpha),
\end{eqnarray}
where $\tilde{\delta}_{n}(\gamma)$ depends only on $n$ and $\gamma$, and $\tilde{\delta}_{n}(\gamma)\rightarrow 0$ as $n\rightarrow\infty$ for any given $\gamma$.
\begin{remark}
When $\gamma=0$, formula (\ref{pn1}) is consistent with formula (27) of Deift et al. \cite{Deift}.
\end{remark}
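Formula (\ref{bg}) is also easy to test numerically, since \texttt{mpmath} implements both the Barnes $G$-function and $\zeta'(-1)\,$; the following minimal sketch prints the error of the truncated expansion, which should decay at least as fast as $1/z\,$:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30
zp = mp.zeta(-1, derivative=1)          # zeta'(-1)

def lnG_asym(z):
    # right-hand side of (bg) without the O(1/z) remainder
    return (z**2*(mp.log(z)/2 - mp.mpf(3)/4)
            + z*mp.log(2*mp.pi)/2 - mp.log(z)/12 + zp)

for z in [10, 100, 1000]:
    print(z, mp.nstr(mp.log(mp.barnesg(z + 1)) - lnG_asym(z), 5))
\end{verbatim}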
To continue, we integrate identity (\ref{phi}) from $\alpha_0$ to any $\alpha$ with $0<\alpha_0<\alpha\leq1-\frac{s_0}{(2n)^{2/3}},\; s_0>0$, and find
\begin{eqnarray}\label{pn2}
&&\ln\mathbb{P}(n,\gamma,\alpha)-\ln\mathbb{P}(n,\gamma,\alpha_0)\nonumber\\[5pt]
&=&n^2\left(\ln\frac{\alpha}{\alpha_0}+\frac{\alpha^2-\alpha_0^2}{2}-2(\alpha-\alpha_0)\right)
+n\gamma\left(\ln\frac{\alpha}{\alpha_0}-(\alpha-\alpha_0)\right)\nonumber\\[10pt]
&&+\frac{1}{8}\left((4\gamma^2-1)\ln\frac{1+\alpha}{1+\alpha_0}-\ln\frac{1-\alpha}{1-\alpha_0}\right)+\frac{\gamma\left(2(1-\alpha)\gamma^2-1\right)}{8n(1-\alpha^2)}\nonumber\\[10pt]
&&-\frac{\gamma\left(2(1-\alpha_0)\gamma^2-1\right)}{8n(1-\alpha_0^2)}+O\left(\frac{1}{n^2(1-\alpha)^3}\right)-O\left(\frac{1}{n^2(1-\alpha_0)^3}\right).
\end{eqnarray}
Substituting formula (\ref{pn1}) for $\ln\mathbb{P}(n,\gamma,\alpha_0)$ into (\ref{pn2}) and taking the limit $\alpha_0\rightarrow 0$, we establish the following theorem.
\begin{theorem}
For any $0<\alpha\leq1-\frac{s_0}{(2n)^{2/3}},\; s_0>0$, we have as $n\rightarrow\infty$,
\begin{eqnarray}\label{pn3}
\ln\mathbb{P}(n,\gamma,\alpha)&=&n^2\left(\frac{3}{2}-2\alpha+\frac{\alpha^2}{2}+\ln(4\alpha)\right)+n\gamma\left(\frac{3}{2}-\alpha+\ln(4\alpha)\right)
+\left(\frac{3}{2}n^2+n\gamma-\frac{1}{12}\right)\ln n\nonumber\\
&&+\left(\frac{(n+\gamma)^2}{2}-\frac{1}{12}\right)\ln(n+\gamma)-\left(\frac{(2n+\gamma)^2}{2}-\frac{1}{12}\right)\ln(2n+\gamma)\nonumber\\
&&+\frac{1}{8}\left((4\gamma^2-1)\ln(1+\alpha)-\ln(1-\alpha)\right)+\zeta'(-1)+\frac{\gamma\left(2(1-\alpha)\gamma^2-1\right)}{8n(1-\alpha^2)}\nonumber\\
&&+O\left(\frac{1}{n^2(1-\alpha)^3}\right)+\delta_{n}(\gamma),
\end{eqnarray}
where $\delta_{n}(\gamma)$ depends only on $n$ and $\gamma$, and $\delta_{n}(\gamma)\rightarrow 0$ as $n\rightarrow\infty$ for any given $\gamma$.
\end{theorem}
\begin{remark}
From the asymptotic formula (\ref{bg}) of the Barnes $G$-function, we can show that $\delta_{n}(\gamma)=O(\frac{1}{n})\: (n\rightarrow\infty)$ for any given $\gamma$. In addition, if $\gamma=0$, then (\ref{pn3}) becomes
\begin{eqnarray}
\ln\mathbb{P}(n,0,\alpha)&=&n^2\left(\frac{3}{2}+\ln\alpha-2\alpha+\frac{\alpha^2}{2}\right)-\frac{1}{12}\ln n-\frac{1}{8}\ln(1-\alpha^2)\nonumber\\
&&+\frac{1}{12}\ln 2+\zeta'(-1)+O\left(\frac{1}{n^2(1-\alpha)^3}\right)+\delta_{n},\nonumber
\end{eqnarray}
where $\delta_{n}$ depends only on $n$, and $\delta_{n}\rightarrow 0$ as $n\rightarrow\infty$.
This agrees with (162) of Deift et al. \cite{Deift}.
\end{remark}
Finally, we derive the asymptotic formula for the Fredholm determinant
$$
\det(I-K_{\mathrm{Airy}}),
$$
where $K_{\mathrm{Airy}}$ is the integral operator with the Airy kernel
$$
K_{\mathrm{Airy}}(x,y)=\frac{\mathrm{Ai}(x)\mathrm{Ai}'(y)-\mathrm{Ai}(y)\mathrm{Ai}'(x)}{x-y}
$$
acting on $L^{2}(-s,\infty)$.
For any $s>s_0$ and sufficiently large $n$, we set
$$
\alpha=1-\frac{s}{(2n)^{2/3}}.
$$
Substituting this into (\ref{pn3}) and taking the limit $n\rightarrow\infty$, we find that the r.h.s. of (\ref{pn3}) becomes
$$
-\frac{s^3}{12}-\frac{1}{8}\ln s+\frac{1}{24}\ln 2+\zeta'(-1)+O(s^{-3}),
$$
while the l.h.s. of (\ref{pn3}) approaches $\ln\det(I-K_{\mathrm{Airy}})$. Therefore, we establish the following asymptotic formula for the Airy determinant as $s\rightarrow+\infty$:
\begin{equation}\label{airy}
\ln\det(I-K_{\mathrm{Airy}})=-\frac{s^3}{12}-\frac{1}{8}\ln s+\frac{1}{24}\ln 2+\zeta'(-1)+O(s^{-3}).
\end{equation}
\begin{remark}
The constant term in (\ref{airy}), i.e. $\frac{1}{24}\ln 2+\zeta'(-1)$,
was conjectured by Tracy and Widom \cite{TW1}, and proved by Deift et al. \cite{Deift} where (\ref{airy}) was also derived but with the order term $O(s^{-3/2})$. Our order term $O(s^{-3})$ coincides with formula (1.19) in \cite{TW1}. In addition, Baik et al. \cite{Baik} gave an alternative proof of (\ref{airy}) by using an integral expression of the Tracy-Widom distribution.
\end{remark}
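Formula (\ref{airy}) can also be tested numerically. The sketch below is our own construction, in the spirit of the Nystr\"om-type quadrature methods popularized by Bornemann for Fredholm determinants, and is not taken from any of the works cited above; the interval $(-s,\infty)$ is truncated at a point beyond which the Airy kernel is negligible.
\begin{verbatim}
import numpy as np
from scipy.special import airy
from numpy.polynomial.legendre import leggauss

def log_det_airy(s, upper=12.0, m=200):
    # Nystrom approximation of ln det(I - K_Airy) on L^2 of [-s, upper]
    y, w = leggauss(m)
    x = 0.5*(upper + s)*y + 0.5*(upper - s)    # map [-1,1] -> [-s, upper]
    w = 0.5*(upper + s)*w
    Ai, Aip, _, _ = airy(x)
    X, Y = np.meshgrid(x, x, indexing='ij')
    num = np.outer(Ai, Aip) - np.outer(Aip, Ai)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = num/(X - Y)
    K[np.diag_indices_from(K)] = Aip**2 - x*Ai**2   # diagonal limit
    sw = np.sqrt(w)
    A = np.eye(m) - sw[:, None]*K*sw[None, :]
    return np.linalg.slogdet(A)[1]

s = 6.0
asym = -s**3/12 - np.log(s)/8 + np.log(2)/24 - 0.1654211437  # zeta'(-1)
print(log_det_airy(s), asym)
\end{verbatim}
For $s\ =\ 6$ the two printed numbers should agree to roughly the size of the $O(s^{-3})$ remainder.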
\section*{Acknowledgments}
The work of Shulin Lyu was supported by National Natural Science Foundation of China under grant number 11971492. Chao Min was supported by the Scientific Research Funds of Huaqiao University under grant number 600005-Z17Y0054.
Yang Chen was supported by the Macau Science and Technology Development Fund under grant number FDCT 023/2017/A1 and by the University of Macau under grant number MYRG 2018-00125-FST.
\section{Introduction}
In a recent paper, \textsc{Clamond} and \textsc{Dutykh} \cite{Clamond2018} have introduced a regularization of the classical \textsc{Saint-Venant} (shallow-water) equations, which is non-dispersive, non-dissipative, and formally conserves mass, momentum, and energy. In conservation form, these regularized \textsc{Saint-Venant} equations (rSV) are written
\begin{gather}
h_{\,t}\ +\ (h\,u)_{\,x}\ =\ 0 \,,\label{e:rsvh} \\
(h\,u)_{\,t}\ +\ (h\,u^{\,2}\ +\ \frac12\;g\,h^{\,2}\ +\ \varepsilon\,{\mathcal R}\,h^{\,2})_{\,x}\ =\ 0\,, \label{e:rsvu} \\
{\mathcal R}\ \stackrel{\textup{def}}{:=}\ h\,(u_{\,x}^{\,2}\ -\ u_{\,x\,t}\ -\ u\,u_{\,x\,x})\ -\ g\,\left(h\,h_{\,x\,x}\ +\ \frac12\;h_{\,x}^{\,2}\right)\,.\label{e:rsvR}
\end{gather}
Smooth solutions of these equations also satisfy a conservation law for energy, in the form
\begin{equation}\label{e:rsvE}
{\mathcal{E}}^{\,\eps}_{\,t}\ +\ {\mathcal{Q}}^{\,\eps}_{\,x}\ =\ 0\,,
\end{equation}
where
\begin{align}\label{d:eep}
& {\mathcal{E}}^{\,\eps}\ \stackrel{\textup{def}}{:=}\ \half\;h\,u^{\,2}\ +\ \half\;g\,h^{\,2}\ +\ \varepsilon\,\left(\half\;h^{\,3}\,u_{\,x}^{\,2}\ +\ \half\;g\,h^{\,2}\,h_{\,x}^{\,2}\right)\,, \\
& {\mathcal{Q}}^{\,\eps}\ \stackrel{\textup{def}}{:=}\ \half\;h\,u^{\,3}\ +\ g\,h^{\,2}\,u\ +\ \varepsilon\,\left(\paren{\half\;h^{\,2}\,u_{\,x}^{\,2}\ +\ \half\;g\,h\,h_{\,x}^{\,2}\ +\ h\,{\mathcal R}}\,h\,u\ +\ g\,h^{\,3}\,h_{\,x}\,u_{\,x}\right)\,.
\end{align}
The rSV equations \eqref{e:rsvh} -- \eqref{e:rsvu} above were derived in \cite{Clamond2018} as the \textsc{Euler--Lagrange} equations corresponding to a least action principle for a \textsc{Lagrangian} of the form (see \cite[Eq.~(3.2)]{Clamond2018})
\begin{equation*}
{\mathcal{L}}\ \stackrel{\textup{def}}{:=}\ \half\;h\,u^{\,2}\ -\ \half\;g\,h^{\,2}\ +\ \varepsilon\,\left(\half\;h^{\,3}\,u_{\,x}^{\,2}\ -\ \half\;g\,h^{\,2}\,h_{\,x}^{\,2}\right)\ +\ \paren{h_{\,t}\ +\ (h\,u)_{\,x}}\;\phi\,.
\end{equation*}
Here $\phi$ is a \textsc{Lagrange} multiplier field that enforces mass conservation. The terms proportional to $\varepsilon$ in \eqref{e:rsvu} have a form similar to terms that appear in improved \textsc{Green}--\textsc{Naghdi} or \textsc{Serre} equations that approximate shallow-water dynamics for waves of small slopes, see \cite{Clamond2015c}. (The rSV equations also admit a non-canonical \textsc{Hamiltonian} structure like one known for the \textsc{Green}--\textsc{Naghdi} equations --- see Section~\ref{sec:6} below.)
The particular coefficients appearing here, however, do not yield improved accuracy for modeling exact water-wave dispersion at long wavelengths. Instead, they are designed to \emph{eliminate} linear dispersion, resulting in a regularization that faithfully reproduces the original shallow-water dispersion relation. The balance of terms in ${\mathcal R}$ ensures that the rSV equations are \emph{non-dispersive} --- linearized about a constant state $(h_{\,0},\,u_{\,0})\,$, solutions proportional to $\mathrm{e}^{\,\mathrm{i}\,k\,x\ -\ \mathrm{i}\,\omega\,t}$ necessarily have
\begin{equation*}
(\omega\ -\ u_{\,0}\,k)^{\,2}\ =\ g\,h_{\,0}\,k^{\,2}\,,
\end{equation*}
implying that phase velocity is independent of frequency.
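This computation is easily mechanized. The following \texttt{sympy} sketch (our own check, not taken from \cite{Clamond2018}) linearizes \eqref{e:rsvh} -- \eqref{e:rsvR} about $(h_{\,0},\,u_{\,0})$ by differentiating with respect to a formal amplitude parameter; the regularization contributes an overall factor $1\ +\ \varepsilon\,h_{\,0}^{\,2}\,k^{\,2}$ to the determinant of the linearized system, which therefore drops out of the dispersion relation:
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, w, eps, g, h0, u0 = sp.symbols('k omega epsilon g h0 u0', positive=True)
A, B, d = sp.symbols('A B delta', real=True)

E = sp.exp(sp.I*(k*x - w*t))
h = h0 + d*A*E                       # perturbed depth
u = u0 + d*B*E                       # perturbed velocity

R = h*(sp.diff(u, x)**2 - sp.diff(u, x, t) - u*sp.diff(u, x, 2)) \
    - g*(h*sp.diff(h, x, 2) + sp.diff(h, x)**2/2)
mass = sp.diff(h, t) + sp.diff(h*u, x)
mom = sp.diff(h*u, t) + sp.diff(h*u**2 + g*h**2/2 + eps*R*h**2, x)

def linearize(expr):                 # first-order term in delta
    return sp.simplify(sp.diff(expr, d).subs(d, 0)/E)

M, _ = sp.linear_eq_to_matrix([linearize(mass), linearize(mom)], [A, B])
print(sp.factor(M.det()))
# expect a multiple of ((w - u0*k)**2 - g*h0*k**2)*(1 + eps*h0**2*k**2)
\end{verbatim}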
The presence of squared derivatives in the energy ${\mathcal{E}}^{\,\eps}$ indicates that the rSV equations will not admit classical shock wave solutions with discontinuities in $h$ and $u\,$. Numerical experiments reported in \cite{Clamond2018} suggest, in fact, that with smooth initial data that produce hydraulic jumps (shock wave solutions) for the shallow-water equations, one obtains front-like solutions of the rSV equations that remain smooth and non-oscillatory, yet propagate at the correct speed determined by classical jump conditions corresponding to limiting states on the left and right. These solutions were computed numerically by a pseudospectral scheme that is highly accurate for smooth solutions and fairly uncomplicated. This is a hint that a similar approach could perhaps be taken to approximate shallow water dynamics by non-dispersive regularization in multidimensional geometries with more complicated topography and other physics.
At this point, a paradox arises. The energy of smooth solutions of the rSV equations satisfies the conservation law~\eqref{e:rsvE}, whereas in the case of shallow water equations, energy is dissipated at a shock-wave discontinuity, satisfying a distributional identity of the form
\begin{equation}\label{e:eeo}
{\mathcal{E}}^{\,0}_{\,t}\ +\ {\mathcal{Q}}^{\,0}_{\,x}\ =\ \mu\,,
\end{equation}
where $\mu$ is a non-positive measure supported along the shock curve. How can it be that front-like solutions of the rSV equations approximate classical shallow-water shocks well while conserving an energy similar to the one dissipated for shallow-water shocks?
Our purpose here is to describe a novel wave-propagation mechanism that may explain this paradox. We shall show that the regularized \textsc{Saint-Venant} equations \eqref{e:rsvh} -- \eqref{e:rsvu} admit regularized shock-wave solutions with profiles that are \emph{continuous but only piecewise smooth}, with derivatives having a weak singularity at a single point. Such a wave exists corresponding to every classical shallow-water shock. These waves are traveling-wave weak solutions of the rSV equations that conserve mass and momentum. They \emph{dissipate energy at the singular point}, however, at the precise rate that the corresponding classical shock does.
We also find that the rSV equations admit weak solutions in the form of \emph{cusped solitary waves}. These waves loosely resemble the famous `peakon' solutions of the \textsc{Camassa--Holm} equation in the non-dispersive case \cite{Camassa1993}. One difference is that the wave slope of our cusped solitary waves becomes infinite approaching the crest, while that of a peakon remains finite. The rSV equations also loosely resemble various $2-$component generalizations of the \textsc{Camassa--Holm} equation which have appeared in the literature---for a sample see \cite{Chen2006b, Ivanov2006, Kuzmin2007, Holm2009, Ionescu-Kruse2013}. One of the most well-studied of these is the integrable $2-$component \textsc{Camassa--Holm} system appearing in \cite{Chen2006b, Ivanov2006, Kuzmin2007},
\begin{gather}
h_{\,t}\ +\ (h\,u)_{\,x}\ =\ 0\,, \\
u_{\,t}\ +\ 3\,u\,u_{\,x}\ -\ u_{\,t\,x\,x}\ -\ 2\,u_{\,x}\,u_{\,x\,x}\ -\ u\,u_{\,x\,x\,x}\ +\ g\,h\,h_{\,x}\ =\ 0\,,
\end{gather}
which has been derived in the context of shallow-water theory by \textsc{Constantin} and \textsc{Ivanov} \cite{Constantin2008a} (also see \cite{Ionescu-Kruse2013a}). This system admits peakon-type solutions, and as noted in \cite{Dutykh2016}, it admits some degenerate front-type traveling wave solutions, which however necessarily have $h\ \to\ 0$ as either $x\ \to\ +\,\infty$ or $-\,\infty\,$.
The existence of weakly singular weak solutions of the rSV equations raises many interesting analytical and numerical issues that we cannot address here. For example, do smooth solutions develop weak singularities in finite time? Do finite-energy weak solutions exist globally in time? How can we approximate solutions well numerically despite weak singularities? Are weakly singular shock profiles and cusped solitary waves stable? Can similar regularization mechanisms be used to approximate shock waves in other interesting physical systems? (\emph{E.g.}, the classical \textsc{Saint-Venant} equations are formally identical to isentropic \textsc{Euler} compressible fluid equations with a particular pressure-density relation.) It would be strange if this novel and interesting phenomenon were unique to the shallow water equations.
\section{Shock waves for the classical shallow-water system}
Let us summarize some well-known basic properties of shock-wave solutions of the classical shallow-water (\textsc{Airy} or \textsc{Saint-Venant}) system for water depth $h\,(x,\,t)\ >\ 0$ and average horizontal velocity $u\,(x,\,t)\,$:
\begin{align}
& h_{\,t}\ +\ (h\,u)_{\,x}\ =\ 0\,,\label{e:swh}\\
& (h\,u)_{\,t}\ +\ \paren{h\,u^{\,2}\ +\ \half\;g\,h^{\,2}}_{\,x}\ =\ 0\,.\label{e:swu}
\end{align}
This system has two \textsc{Riemann} invariants $u\ \pm\ 2\,\sqrt{\,g\,h}\,$, and two characteristic speeds
\begin{equation*}
\lambda_{\,1}\ =\ u\ -\ \sqrt{\,g\,h}\,, \qquad
\lambda_{\,2}\ =\ u\ +\ \sqrt{\,g\,h}\,.
\end{equation*}
\subsection{Jump conditions}
A piecewise smooth solution that jumps along a curve $x\ =\ X\,(t)$ is a weak solution if and only if the \textsc{Rankine--Hugoniot} conditions hold at each point of the curve:
\begin{align}
&-\,s\,\bracket{h}\ +\ \bracket{h\,u}\ =\ 0\,,\label{rh1}\\
&-\,s\,\bracket{h\,u}\ +\ \bracket{h\,u^{\,2}\ +\ \half\;g\,h^{\,2}}\ =\ 0\,.\label{rh2}
\end{align}
Here $s\ =\ \dot X\,(t)$ is the jump speed and $[\,h\,]\ \stackrel{\textup{def}}{:=}\ h_{\,+}\ -\ h_{\,-}$ is the difference of right and left limits at the shock location, with similar definitions for the other brackets, \emph{e.g.}, $[\,h\,u\,]\ \stackrel{\textup{def}}{:=}\ h_{\,+}\,u_{\,+}\ -\ h_{\,-}\,u_{\,-}\,$.
After eliminating $s$ from the \textsc{Rankine--Hugoniot} conditions one finds
\begin{equation*}
\left[\,\half\;g\,h^{\,2}\,\right]\,[\,h\,]\ =\ \frac{g\,(h_{\,+}\ +\ h_{\,-})}{2}\;[\,h\,]^{\,2}\ =\ [\,h\,u\,]^{\,2}\ -\ [\,h\,u^{\,2}\,]\,[\,h\,]\ =\ h_{\,+}\,h_{\,-}\,[\,u\,]^{\,2}\,,
\end{equation*}
so that the states $(h_{\,\pm},\,u_{\,\pm})$ lie on the \textsc{Hugoniot} curves given by
\begin{equation*}
u_{\,+}\ -\ u_{\,-}\ =\ \pm\,\gamma\,(h_{\,+}\ -\ h_{\,-})\,, \qquad \gamma\ \stackrel{\textup{def}}{:=}\ \sqrt{\,\frac{g\,(h_{\,+}\ +\ h_{\,-})}{2\,h_{\,+}\,h_{\,-}}}\,.
\end{equation*}
Correspondingly the jump speed is determined by
\begin{equation}\label{e:s}
s\ =\ \frac{[\,h\,u\,]}{[\,h\,]}\ =\ u_{\,+}\ \pm\ \gamma\,h_{\,-}\ =\ u_{\,-}\ \pm\ \gamma\,h_{\,+}\,.
\end{equation}
In these relations, the $-$ sign corresponds to $1-$waves and the $+$ sign corresponds to $2-$waves. Physically meaningful shock waves satisfy the \textsc{Lax} shock conditions:
\begin{equation}\label{c:lax}
\begin{array}{ll}
u_{\,-}\ -\ \sqrt{\,g\,h_{\,-}}\ >\ s\ >\ u_{\,+}\ -\ \sqrt{\,g\,h_{\,+}} & \quad\mbox{for $1-$shocks,} \\
u_{\,-}\ +\ \sqrt{\,g\,h_{\,-}}\ >\ s\ >\ u_{\,+}\ +\ \sqrt{\,g\,h_{\,+}} & \quad\mbox{for $2-$shocks.}
\end{array}
\end{equation}
From \eqref{e:s} one finds that the \textsc{Lax} conditions hold if and only if
\begin{equation}\label{c:lax2}
\begin{array}{ll}
h_{\,-}\ <\ h_{\,+} & \mbox{for $1-$shocks,}\\
h_{\,-}\ >\ h_{\,+} & \mbox{for $2-$shocks.}
\end{array}
\end{equation}
The two wave families are related via the natural spatial reflection symmetry of the shallow water equations:
\begin{equation*}
(x,\,t)\ \to\ (-\,x,\,t)\,, \qquad (h,\,u)\ \to\ (h,\,-\,u)\,.
\end{equation*}
Under this symmetry, $1-$shocks are mapped to $2-$shocks and vice versa.
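The elimination of $s$ carried out above is elementary algebra; for the record, a short \texttt{sympy} computation confirming the key identity $[\,h\,u\,]^{\,2}\ -\ [\,h\,u^{\,2}\,]\,[\,h\,]\ =\ h_{\,+}\,h_{\,-}\,[\,u\,]^{\,2}$ used there is:
\begin{verbatim}
import sympy as sp

hp, hm = sp.symbols('h_plus h_minus', positive=True)
up, um = sp.symbols('u_plus u_minus', real=True)

lhs = (hp*up - hm*um)**2 - (hp*up**2 - hm*um**2)*(hp - hm)
print(sp.expand(lhs - hp*hm*(up - um)**2))   # expect 0
\end{verbatim}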
\subsection{Energy dissipation}
The energy dissipation identity for a piecewise-smooth solution with shock curve $\Gamma\ =\ \{\,(x,\,t)\;:\; x\ =\ X\,(t)\,\}$ takes the form \eqref{e:eeo}, where the measure $\mu$ is absolutely continuous with respect to $1-$dimensional \textsc{Hausdorff} measure (arc length measure) restricted to the shock curve $\Gamma\,$. Denoting this \textsc{Hausdorff} measure by $\sigma\,$, in terms of the parametrization $x\ =\ X\,(t)$ we can write informally that $\mathrm{d}\sigma\ =\ \sqrt{\,1\ +\ s^{\,2}}\,\mathrm{d} t$ and
\begin{equation}\label{e:diss1}
{\mathcal{D}}\ \stackrel{\textup{def}}{:=}\ \frac{\mathrm{d}\mu}{\mathrm{d} t}\ =\ -\,s\,[\,{\mathcal{E}}^{\,0}\,]\ +\ [\,{\mathcal{Q}}^{\,0}\,]\ =\ \left[\half\;h\,(u\ -\ s)^{\,3}\ +\ g\,h^{\,2}\,(u\ -\ s)\right]\,.
\end{equation}
One verifies this identity by expanding $(u\ -\ s)^{\,3}$ and using that
\begin{equation*}
s^{\,3}\,[\,h\,]\ =\ s^{\,2}\,[\,h\,u\,]\ =\ s\,\left[\,h\,u^{\,2}\ +\ \half\;g\,h^{\,2}\,\right]
\end{equation*}
from the \textsc{Rankine--Hugoniot} conditions. The precise meaning of \eqref{e:diss1} and \eqref{e:eeo} is that for any smooth test function $\varphi$ with support in a small neighborhood of the shock curve $\Gamma$ and contained in the half-plane where $x\ \in\ \mathds{R}$ and $t\ >\ 0\,$, we have
\begin{equation*}
\int_{\,0}^{\,\infty}\int_{-\,\infty}^{\,\infty} (-\,{\mathcal{E}}^{\,0}\,\partial_{\,t}\,\varphi\ -\ {\mathcal{Q}}^{\,0}\,\partial_{\,x}\,\varphi)\,\mathrm{d}\,x\,\mathrm{d}\,t\ =\ \int_{\,\Gamma}\,\varphi\,\mathrm{d}\mu\ =\ \int_{\,0}^{\,\infty}\,\varphi\,(X\,(t),\,t)\,{\mathcal{D}}\,(t)\,\mathrm{d} t\,.
\end{equation*}
The identity \eqref{e:diss1} reflects the \textsc{Galilean} invariance of the shallow-water equations: it is most easily evaluated after changing to a frame moving with the constant speed $s$ of the shock, frozen at some instant of time. To compute further it is convenient to introduce $v\ =\ u\ -\ s$ and write
\begin{equation}\label{d:vpm}
v_{\,-}\ =\ u_{\,-}\ -\ s\,, \qquad v_{\,+}\ =\ u_{\,+}\ -\ s\,,
\end{equation}
and note that by the \textsc{Rankine--Hugoniot} conditions,
\begin{align}\label{d:M0}
M\ &\stackrel{\textup{def}}{:=}\ h_{\,+}\,v_{\,+}\ =\ h_{\,-}\,v_{\,-}\,, \\
\label{d:N0}
N\ &\stackrel{\textup{def}}{:=}\ h_{\,+}\,v_{\,+}^{\,2}\ +\ \half\;g\,h_{\,+}^{\,2}\ =\ h_{\,-}\,v_{\,-}^{\,2}\ +\ \half\;g\,h_{\,-}^{\,2}\,.
\end{align}
With the same choice of sign as in \eqref{e:s} we find
\begin{align}\label{d:M}
M\ &=\ \mp\,\gamma\,h_{\,+}\,h_{\,-}\,, \\
\label{d:N}
N\ &=\ \frac{M^{\,2}}{h_{\,\pm}}\ +\ \half\;g\,h_{\,\pm}^{\,2}\ =\ \half\;g\,(h_{\,+}^{\,2}\ +\ h_{\,+}\,h_{\,-}\ +\ h_{\,-}^{\,2})\,.
\end{align}
Then using \eqref{d:M} and \eqref{c:lax2}, we compute
\begin{equation}\label{e:dval}
{\mathcal{D}}\ =\ \frac{M^{\,3}}2\;\left[\,\frac{1}{h^{\,2}}\,\right]\ +\ g\,M\,[\,h\,]\ =\ \pm\,\frac14\;g\,\gamma\,[\,h\,]^{\,3}\ <\ 0\,,
\end{equation}
for both $1-$shocks and $2-$shocks. Note that the dissipation is of the order of the amplitude cubed for small shocks.
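The algebra leading to \eqref{e:dval} is mechanical and can be delegated to a computer algebra system; a minimal \texttt{sympy} check covering both sign branches is:
\begin{verbatim}
import sympy as sp

g, hp, hm = sp.symbols('g h_plus h_minus', positive=True)
gam = sp.sqrt(g*(hp + hm)/(2*hp*hm))

for sgn in (+1, -1):                  # upper/lower signs in (d:M), (e:dval)
    M = -sgn*gam*hp*hm
    D = M**3/2*(1/hp**2 - 1/hm**2) + g*M*(hp - hm)
    claim = sgn*g*gam*(hp - hm)**3/4
    print(sp.simplify(D - claim))     # expect 0, 0
\end{verbatim}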
\section{Weakly singular shock profiles for the regularized system}
Now consider any simple piecewise-constant shock-wave solution of the shallow water equations, in the form
\begin{equation}\label{d:simpleshock}
(h,\,u)\ =\
\begin{cases}
(h_{\,-},\,u_{\,-}) & x\ <\ s\,t\,, \\
(h_{\,+},\,u_{\,+}) & x\ >\ s\,t\,,
\end{cases}
\end{equation}
where $s\,$, $h_{\,\pm}\,$, and $u_{\,\pm}$ are constants with $h_{\,\pm}\ >\ 0\,$. Our goal in this section is to show that the regularized \textsc{Saint-Venant} equations \eqref{e:rsvh} -- \eqref{e:rsvu} admit a corresponding traveling-wave solution having shock profile that is continuous and piecewise smooth, and dissipates energy at the precise rate that the corresponding classical shock does.
We mention that through the time-reversal symmetry
\begin{equation*}
(x,\,t)\ \to\ (x,\,-t)\,, \qquad (h,\,u)\ \to\ (h,\,-u)\,,
\end{equation*}
the traveling waves that we obtain remain as valid weak solutions of the rSV system, which generate energy instead of dissipating it. These solutions correspond to non-physical shocks for the shallow-water equations that violate the \textsc{Lax} conditions in \eqref{c:lax}.
\subsection{Construction of shock profiles}
Because both the rSV and shallow water equations are invariant under spatial reflection, we may assume the shock is a $2-$shock without loss of generality. Moreover, the rSV and shallow water equations are invariant under the \textsc{Galilean} transformation taking
\begin{equation*}
u\ \to\ u\ +\ s\,, \quad \partial_{\,t}\ \to\ -\,s\,\partial_{\,x}\ +\ \partial_{\,t}\,.
\end{equation*}
Thus it is natural to work in the frame of reference moving with the shock at speed $s$ and seek a steady wave profile that is smooth except at the origin $x\ =\ 0\,$. Adopting the notation in \eqref{d:vpm} and writing $v\ =\ u\ -\ s$ for convenience, we therefore seek time-independent functions $h\,:\ \mathds{R}\ \to\ (0,\,\infty)$ and $v\,:\ \mathds{R}\ \to\ \mathds{R}$ such that $h$ and $v$ are continuous, smooth except at $x\ =\ 0\,$, take the limiting values
\begin{equation}\label{e:hvlim}
(h,\,v)\ \to\ \begin{cases}
(h_{\,-},\,v_{\,-}) & x\ \to\ -\infty\,, \\
(h_{\,+},\,v_{\,+}) & x\ \to\ +\infty\,,
\end{cases}
\end{equation}
and provide a weak solution of the steady rSV equations
\begin{gather}\label{e:steadyhv}
(h\,v)_{\,x}\ =\ 0\,, \qquad (h\,v^{\,2}\ +\ \half\;g\,h^{\,2}\ +\ \varepsilon\,{\mathcal{R}}\,h^{\,2})_{\,x}\ =\ 0\,, \\
\label{e:steadyR}
{\mathcal{R}}\ =\ h\,v_{\,x}^{\,2}\ -\ h\,v\,v_{\,x\,x}\ -\ g\,(h\,h_{\,x\,x}\ +\ \half\;h_{\,x}^{\,2})\,.
\end{gather}
As is natural, we will find solutions whose derivatives approach zero as $x\ \to\ \pm\,\infty\,$. Thus upon integration we find that
\begin{align}
& h\,v\ =\ M \,, \label{RH1a} \\
& h\,v^{\,2}\ +\ \half\;g\,h^{\,2}\ +\ \varepsilon\,\mathcal{R}\,h^{\,2}\ =\ N\,, \label{RH2a}
\end{align}
where $M$ and $N$ are the \textsc{Rankine--Hugoniot} constants defined in \eqref{d:M0} and \eqref{d:N0} and are given by \eqref{d:M} and \eqref{d:N}, respectively.
Let us first work on the right half-line where $x\ >\ 0\,$. In terms of the dimensionless variables given by
\begin{equation*}
H\ =\ \frac{h}{h_{\,+}}\,, \quad V\ =\ \frac{v}{v_{\,+}}\,, \quad z\ =\ \frac{x}{h_{\,+}}\,,
\end{equation*}
and the squared \textsc{Froude} number on the right,
\begin{equation*}
{\mathcal{F}}_{\,+}\ =\ \frac{v_{\,+}^{\,2}}{g\,h_{\,+}}\,,
\end{equation*}
the equations take the form
\begin{align*}
& H\,V\ =\ 1\,, \\
& {\mathcal{F}}\,H\,V^{\,2}\ +\ \half\;H^{\,2}\ +\ \varepsilon\,{\mathcal{F}}\,H^{\,3}\,(V_{\,z}^{\,2}\ -\ V\,V_{\,z\,z})\ -\ \varepsilon\,(H^{\,3}\,H_{\,z\,z}\ +\ \half\;H^{\,2}\,H_{\,z}^{\,2})\ =\ {\mathcal{F}}\ +\ \half\,.
\end{align*}
(For simplicity we temporarily drop the subscript on ${\mathcal{F}}_{\,+}$ here.) Eliminating $V$ we obtain a single equation for the dimensionless wave height $H\,$,
\begin{equation*}
\frac{{\mathcal{F}}}{H}\ +\ \half\;H^{\,2}\ +\ \frac{\varepsilon\,{\mathcal{F}}}{H}\;(H\,H_{\,z\,z}\ -\ H_{\,z}^{\,2})\ -\ \varepsilon\,(H^{\,3}\,H_{\,z\,z}\ +\ \half\;H^{\,2}\,H_{\,z}^{\,2})\ =\ {\mathcal{F}}\ +\ \half\,.
\end{equation*}
Dividing this equation by $H^{\,2}$ we can rewrite it as
\begin{equation*}
\frac{{\mathcal{F}}}{H^{\,3}}\ +\ \half\ +\ \varepsilon\,{\mathcal{F}}\,(H\,^{\,-1}\,H_{\,z})_{\,z}\,H\,^{\,-1}\ -\ \varepsilon\,(H^{\,\half}\,H_{\,z})_{\,z}\,H^{\,\half}\ =\ \frac{{\mathcal{F}}\ +\ \half}{H^{\,2}}\,.
\end{equation*}
Further multiplying by $H_{\,z}$ one can integrate this equation to obtain
\begin{equation}\label{e:odeH}
\varepsilon\,H_{\,z}^{\,2}\ =\ G\,({\mathcal{F}},\,H)\ \stackrel{\textup{def}}{:=}\ \frac{(H\ -\ {\mathcal{F}})\,(H\ -\ 1)^{\,2}}{H^{\,3}\ -\ {\mathcal{F}}}\,.
\end{equation}
Here the integration constant is determined by requiring $H\ \to\ 1$ as $z\ \to\ \infty\,$.
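The integration can be verified symbolically: differentiating $\varepsilon\,H_{\,z}^{\,2}\ =\ G\,({\mathcal{F}},\,H)$ in $z$ gives $\varepsilon\,H_{\,z\,z}\ =\ G'\,(H)/2\,$, and substituting both relations into the second-order equation above reduces it to an identity in $H\,$. A short \texttt{sympy} sketch:
\begin{verbatim}
import sympy as sp

H, F = sp.symbols('H F', positive=True)
G = (H - F)*(H - 1)**2/(H**3 - F)
Gp = sp.diff(G, H)

# substitute eps*H_z**2 = G and eps*H_zz = Gp/2 into the pre-integrated ODE
lhs = F/H + H**2/2 + (F/H)*(H*Gp/2 - G) - (H**3*Gp/2 + H**2*G/2)
print(sp.simplify(lhs - F - sp.Rational(1, 2)))   # expect 0
\end{verbatim}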
In terms of the original dimensional variables, equation \eqref{e:odeH} takes the form
\begin{equation}\label{e:hplus}
\varepsilon\,h_{\,x}^{\,2}\ =\ G\,\left({\mathcal{F}}_{\,+},\,\frac{h}{h_{\,+}}\right)\ =\ \frac{(h\ -\ h_{\,+}\,{\mathcal{F}}_{\,+})\,(h\ -\ h_{\,+})^{\,2}}{h^{\,3}\ -\ h_{\,+}^{\,3}\,{\mathcal{F}}_{\,+}}\,.
\end{equation}
On the left half-line where $x\ <\ 0\,$, a similar integration procedure yields
\begin{equation}\label{e:hminus}
\varepsilon\,h_{\,x}^{\,2}\ =\ G\,\left({\mathcal{F}}_{\,-},\,\frac{h}{h_{\,-}}\right)\ =\ \frac{(h\ -\ h_{\,-}\,{\mathcal{F}}_{\,-})\,(h\ -\ h_{\,-})^{\,2}}{h^{\,3}\ -\ h_{\,-}^{\,3}\,{\mathcal{F}}_{\,-}}\,,
\end{equation}
with ${\mathcal{F}}_{\,-}\ =\ v_{\,-}^{\,2}/g\,h_{\,-}\,$. We note that these equations correspond to equation (29) of \cite{Clamond2018} with the appropriate choice of integration constants.
Recalling that we are dealing with a $2-$shock for which $h_{\,+}\ <\ h_{\,-}\,$, we note
\begin{equation}\label{e:Fpm}
h_{\,+}^{\,3}\,{\mathcal{F}}_{\,+}\ =\ h_{\,-}^{\,3}\,{\mathcal{F}}_{\,-}\ =\ \frac{M^{\,2}}{g}\ =\ \half\;(h_{\,+}\ +\ h_{\,-})\,h_{\,+}\,h_{\,-} \quad \in\ (h_{\,+}^{\,3},\,h_{\,-}^{\,3})\,.
\end{equation}
Therefore
\begin{equation*}
{\mathcal{F}}_{\,-}\ <\ 1\ <\ {\mathcal{F}}_{\,+}\,,
\end{equation*}
and furthermore, the denominators in \eqref{e:hplus} and \eqref{e:hminus}
vanish at the \emph{same critical height} $h_{\,c}$ satisfying $h_{\,+}\ <\ h_{\,c}\ <\ h_{\,-}\,$, where
\begin{equation}\label{d:hc}
h_{\,c}^{\,3}\ =\ \half\;(h_{\,+}\ +\ h_{\,-})\,h_{\,+}\,h_{\,-}\ =\ \frac{M^{\,2}}g\,.
\end{equation}
On the right half line where $x\ >\ 0\,$, note the denominator in \eqref{e:hplus} changes sign from negative to positive as $h$ increases from $h_{\,+}$ past the critical height $h_{\,c}\,$, while the numerator is negative for $h_{\,+}\ <\ h\ <\ h_{\,+}\,{\mathcal{F}}_{\,+}\,$. Because $h_{\,+}\,{\mathcal{F}}_{\,+}\ =\ h_{\,c}^{\,3}\,/\,h_{\,+}^{\,2}\ >\ h_{\,c}\,$, this means that the right-hand side of \eqref{e:hplus} changes sign as $h$ increases past $h_{\,c}\,$: for $h$ near $h_{\,c}$ we have
\begin{equation*}
G\,\left({\mathcal{F}}_{\,+},\,\frac{h}{h_{\,+}}\right)\ >\ 0 \quad \mbox{for $h\ <\ h_{\,c}\,$}, \qquad G\,\left({\mathcal{F}}_{\,+},\,\frac{h}{h_{\,+}}\right)\ <\ 0 \quad \mbox{for $h\ >\ h_{\,c}$}\,.
\end{equation*}
Thus a solution of \eqref{e:hplus} taking values between $h_{\,+}$ and $h_{\,-}$ can exist only as long as $h\ <\ h_{\,c}\,$. Because we require $h\ \to\ h_{\,+}$ as $x\ \to\ +\,\infty\,$, such a solution must be monotone decreasing and satisfy
\begin{equation}\label{e:hp2}
\sqrt{\varepsilon}\,h_{\,x}\ =\ -\,\sqrt{\,G\,({\mathcal{F}}_{\,+},\,h/h_{\,+})}\,.
\end{equation}
Actually, we have $h\,(x)\ =\ \eta_{\,+}\,(x\,/\,\sqrt\varepsilon)$ for a unique continuous function $\eta_{\,+}\colon[0,\,\infty)\ \to\ (0,\,\infty)$ which is a smooth decreasing solution of \eqref{e:hp2} with $\varepsilon\ =\ 1$ for $x\ >\ 0$ and satisfies
\begin{equation*}
\eta_{\,+}\,(0)\ =\ h_{\,c}\,, \qquad \eta_{\,+}\,(x)\ \to\ h_{\,+} \quad\mbox{as $x\ \to\ +\,\infty\,$.}
\end{equation*}
To see that this is true, one can separate variables in \eqref{e:hp2} and determine the solution implicitly according to the relation
\begin{equation}\label{e:hpint}
\int_{\,h}^{\,h_{\,c}}\,\frac{\mathrm{d} k}{\sqrt{\,G\,({\mathcal{F}}_{\,+},\,k/h_{\,+})}}\ =\ \frac{x}{\sqrt\varepsilon}\,, \quad x\ \ge\ 0\,, \quad h\ \in\ (h_{\,+},\,h_{\,c}\,]\,,
\end{equation}
since the integral converges on any interval $[\,h,\,h_{\,c}\,]\ \subset\ (h_{\,+},\,h_{\,c}\,]\,$.
On the left half line where $x\ <\ 0\,$, the reasoning is similar. The numerator in \eqref{e:hminus} is positive for $h_{\,-}\ >\ h\ >\ h_{\,-}\,{\mathcal{F}}_{\,-}$ while the denominator changes sign from positive to negative as $h$ decreases past the critical height $h_{\,c}\,$. The solution we seek takes values between $h_{\,-}$ and $h_{\,c}\,$, satisfying
\begin{equation}\label{e:hm2}
\sqrt\varepsilon\,h_{\,x}\ =\ -\,\sqrt{\,G\,({\mathcal{F}}_{\,-},\,h/h_{\,-})}\,.
\end{equation}
Again, we have $h\,(x)\ =\ \eta_{\,-}\,(x\,/\,\sqrt\varepsilon)$ for a unique continuous function $\eta_{\,-}\,\colon(-\infty,\,0\,]\ \to\ (0,\,\infty)$ which is a smooth decreasing solution of \eqref{e:hm2} with $\varepsilon\ =\ 1$ for $x\ <\ 0$ and satisfies
\begin{equation*}
\eta_{\,-}\,(0)\ =\ h_{\,c}\,, \qquad \eta_{\,-}\ \to\ h_{\,-} \quad\mbox{ as $x\ \to\ -\,\infty\,$.}
\end{equation*}
The solution is determined implicitly in this case according to the relation
\begin{equation}\label{e:hmint}
\int_{\,h}^{\,h_{\,c}}\frac{\mathrm{d} k}{\sqrt{\,G\,({\mathcal{F}}_{\,-},\,k/h_{\,-})}}\ =\ \frac{x}{\sqrt\varepsilon}\,, \quad x\ <\ 0\,, \quad h\ \in\ (h_{\,c},\, h_{\,-})\,.
\end{equation}
\bigskip
\paragraph*{Summary.}
Given the $2-$shock solution \eqref{d:simpleshock} of the shallow water equations, our corresponding weakly singular traveling wave solution of the rSV equations satisfies \eqref{e:hvlim} and takes the form
\begin{equation}\label{e:hprofile}
h\,(x,\,t)\ =\ \begin{cases}
\displaystyle \eta_{\,+}\,\left(\frac{x\ -\ s\,t}{\sqrt\varepsilon}\right) & x\ \ge\ s\,t\,, \\
\displaystyle \eta_{\,-}\,\left(\frac{x\ -\ s\,t}{\sqrt\varepsilon}\right) & x\ <\ s\,t\,,
\end{cases} \qquad u\,(x,\,t)\ =\ s\ +\ \frac Mh\,,
\end{equation}
where $\eta_{\,\pm}$ are determined by $h_{\,+}$ and $h_{\,-}$ implicitly from \eqref{e:hpint} and \eqref{e:hmint} respectively with $\varepsilon\ =\ 1\,$, using \eqref{e:Fpm} to determine ${\mathcal{F}}_{\,\pm}\,$, and $h_{\,c}$ is given by \eqref{d:hc}.
\subsection{Behavior near the singular point and infinity}
The nature of the singularity at $x\ =\ s\,t$ for the solution above may be described as follows. For the function $G$ in \eqref{e:hplus}, because $h_{\,+}\,{\mathcal{F}}_{\,+}\ =\ h_{\,c}^{\,3}/h_{\,+}^{\,2}$ we have
\begin{equation}\label{e:Gpinv}
\frac{1}{G\,({\mathcal{F}}_{\,+},\,h/h_{\,+})}\ =\ \frac{(h^{\,3}\ -\ h_{\,c}^{\,3})\,h_{\,+}^{\,2}}{(h_{\,+}^{\,2}\,h\ -\ h_{\,c}^{\,3})\,(h\ -\ h_{\,+})^{\,2}}\ \sim\ K_{\,+}^{\,2}\,(h_{\,c}\ -\ h)
\end{equation}
as $h\ \to\ h_{\,c}\,$, where
\begin{equation*}
K_{\,+}^{\,2}\ =\ \frac{3\,h_{\,c}\,h_{\,+}^{\,2}}{(h_{\,c}^{\,2}\ -\ h_{\,+}^{\,2})\,(h_{\,c}\ -\ h_{\,+})^{\,2}}\,.
\end{equation*}
From this asymptotic description we infer from \eqref{e:hpint} that for small $x\ >\ 0\,$,
\begin{equation}\label{e:hp0plus}
h_{\,c}\ -\ h\ \sim\ c_{\,+}\,x^{\,2/3}\,, \qquad h_{\,x}\ \sim\ -\,\frac{2}{3}\;c_{\,+}\,x^{\,-1/3}\,, \qquad h_{\,x\,x}\ \sim\ \frac{2}{9}\;c_{\,+}\,x^{\,-4/3}\,,
\end{equation}
where $c_{\,+}\ =\ (2\,K_{\,+}\sqrt\varepsilon/3)^{\,-2/3}\,$.
A similar description holds on the other side of the singularity: From \eqref{e:hminus} we have
\begin{equation}\label{e:Gninv}
\frac{1}{G\,({\mathcal{F}}_{\,-},\,h/h_{\,-})}\ =\ \frac{(h^{\,3}\ -\ h_{\,c}^{\,3})\,h_{\,-}^{\,2}}{(h_{\,-}^{\,2}\,h\ -\ h_{\,c}^{\,3})(h\ -\ h_{\,-})^{\,2}}\ \sim\ K_{\,-}^{\,2}\,(h\ -\ h_{\,c})
\end{equation}
as $h\ \to\ h_{\,c}\,$, where
\begin{equation*}
K_{\,-}^{\,2}\ =\ \frac{3\,h_{\,c}\,h_{\,-}^{\,2}}{(h_{\,-}^{\,2}\ -\ h_{\,c}^{\,2})\,(h_{\,c}\ -\ h_{\,-})^{\,2}}\,.
\end{equation*}
So for small $x\ <\ 0\,$,
\begin{equation}\label{e:hm0minus}
h\ -\ h_{\,c}\ \sim\ c_{\,-}\,\abs{x}^{\,2/3}\,, \qquad h_{\,x}\ \sim\ -\,\frac{2}{3}\;c_{\,-}\abs{x}^{\,-1/3}\,, \qquad h_{\,x\,x}\ \sim\ -\,\frac{2}{9}\;c_{\,-}\abs{x}^{\,-4/3}\,,
\end{equation}
where $c_{\,-}\ =\ (2\,K_{\,-}\sqrt\varepsilon/3)^{\,-2/3}\,$.
The behavior of $v$ follows by differentiation from \eqref{RH1a}. Thus we see that $h_{\,x}$ and $v_{\,x}$ are square integrable in any neighborhood of $x\ =\ 0$ (and belong to $L^{\,p}$ for $p\ <\ 3$), while $h_{\,x\,x}$ and $v_{\,x\,x}$ are not integrable functions. The singularities due to the second derivatives in \eqref{RH2a} cancel, however (see below), producing the constant value $N\,$. This yields a valid distributional solution of the steady rSV equations \eqref{e:steadyhv} written in conservation form.
As $x\ \to\ \pm\infty\,$, it is straightforward to check that the limits in \eqref{e:hvlim} are achieved at an exponential rate.
\subsection{Distributional derivatives}
Because of the blow-up of $h_{\,x}$ at the origin, the distributional derivative of $h_{\,x}$ is no longer a classical function. Rather, it is a generalized function or a distribution which can be computed as follows.
We write $h_{\,x\,x}$ to denote the distributional derivative of $h_{\,x}$ and write $\overline{h_{\,x\,x}}$ for the classical derivative of $h_{\,x}$ that is not defined at $0\,$. Let $\varphi\ \in\ C_{\,c}^{\,\infty}\,({\mathds{R}})$ be a test function with support $\supp\varphi\ \subset\ (-L,\,L)\,$. Let $\tau$ be the operator, acting on functions from ${\mathds{R}}$ to ${\mathds{R}}$, that subtracts the value at the origin:
\begin{equation*}
\tau\,\varphi\,(x)\ =\ \varphi\,(x)\ -\ \varphi\,(0)\,.
\end{equation*}
Then the distributional pairing of $\varphi$ with the distribution $h_{\,x\,x}$ is
\begin{eqnarray}
\angl{h_{\,x\,x},\,\varphi}\ &=&\ -\int_{\mathds{R}}\,h_{\,x}\,\varphi_{\,x}\,\mathrm{d} x\ =\ -\,\int_{\mathds{R}}\,h_{\,x}\,(\tau\,\varphi)_{\,x}\,\mathrm{d} x \nonumber\\
&=&\ -\lim_{\varepsilon\ \to\ 0^{\,+}}\,\paren{\int_{\,-L}^{\,-\varepsilon}\,h_{\,x}\,(\tau\,\varphi)_{\,x}\,\mathrm{d} x\ +\ \int_{\,\varepsilon}^{\,L}\,h_{\,x}\,(\tau\,\varphi)_{\,x}\,\mathrm{d} x}\nonumber\\
&=&\ -\,h_{\,x}\,(-L)\,\varphi\,(0)\ +\ h_{\,x}\,(L)\,\varphi\,(0)\ +\ \int_{\,-L}^{\,L}\,\overline{h_{\,x\,x}}\,(\tau\,\varphi)\,\mathrm{d} x\,,
\end{eqnarray}
where in the last step we use the fact that $(\tau\,\varphi)\,(x)\ \sim\ x\,\varphi_{\,x}\,(0)$ when $x$ is small and the fact that $\overline{h_{\,x\,x}}\,\tau\,\varphi$ is integrable near $0\,$. Furthermore, the above equality is true
for all $L$ large enough, so sending $L$ to infinity we have that
\begin{equation*}
\angl{h_{\,x\,x},\,\varphi}\ =\ \int_{\mathds{R}}\,\overline{h_{\,x\,x}}\,(\tau\,\varphi)\,\mathrm{d} x\,.
\end{equation*}
Due to this result, the distribution $h_{\,x\,x}\,(h\ -\ h_{\,c})$ satisfies
\begin{eqnarray*}
\angl{h_{\,x\,x}\,(h\ -\ h_{\,c}),\,\varphi}\ &=&\ \angl{h_{\,x\,x},\,(h\ -\ h_{\,c})\,\varphi} \nonumber \\
&=&\ \int_{\mathds{R}}\,\overline{h_{\,x\,x}}\,\tau\,((h\ -\ h_{\,c})\,\varphi)\,\mathrm{d} x\ =\ \int_{\mathds{R}}\,\overline{h_{\,x\,x}}\,(h\ -\ h_{\,c})\,\varphi\,\mathrm{d} x \\
&=&\ \angl{\overline{h_{\,x\,x}}\,(h\ -\ h_{\,c}),\,\varphi}
\end{eqnarray*}
where the first line is justified by the fact that $h_{\,x\,x}$ is a continuous linear functional on $W^{\,1,\,p}\,({\mathds{R}})$ for any $p\ \in\ (1,\,\infty)\,$. This implies that in the sense of distributions,
\begin{equation}\label{e:hxxbar}
h_{\,x\,x}\,(h\ -\ h_{\,c})\ =\ \overline{h_{\,x\,x}}\,(h\ -\ h_{\,c})\,,
\end{equation}
where the right-hand side is a locally integrable function.
From this we can find a locally integrable representation of the quantity $h^{\,2}\,{\mathcal{R}}$ from \eqref{e:steadyR}. Differentiating \eqref{RH1a} twice and multiplying by $h^{\,2}\,v\,$, we find $h^{\,3}\,v_{\,x}^{\,2}\ =\ M^{\,2}\,h_{\,x}^{\,2}/h$ and
\begin{equation*}
-\,h^{\,3}\,v\,v_{\,x\,x}\ =\ M^{\,2}\,\left(h_{\,x\,x}\ -\ \frac{2\,h_{\,x}^{\,2}}{h}\right)\,.
\end{equation*}
Because $M^{\,2}\ =\ g\,h_{\,c}^{\,3}\,$, using \eqref{e:hxxbar} it follows
\begin{equation*}
h^{\,2}\,{\mathcal{R}}\ =\ g\,(h_{\,c}^{\,3}\ -\ h^{\,3})\,\overline{h_{\,x\,x}}\ -\ \frac{g}{2\,h}\;(2\,h_{\,c}^{\,3}\ +\ h^{\,3})\,h_{\,x}^{\,2}\,.
\end{equation*}
So we conclude that the singularities appearing in $h_{\,x\,x}$ and $v_{\,x\,x}$ do cancel each other in a way that makes the stationary momentum flux locally integrable with distributional derivative $0\,$.
Another way to see this cancellation is that the singular terms in $h^{\,2}\,{\mathcal{R}}$ sum up to give
\begin{eqnarray*}
h^{\,3}\,v\,v_{\,x\,x}\ +\ g\,h^{\,3}\,h_{\,x\,x}\ &=&\ \paren{h^{\,3}\,v\,v_{\,x}\ +\ g\,h^{\,3}\,h_{\,x}}_{\,x}\ -\ (h^{\,3}\,v)_{\,x}\,v_{\,x}\ -\ g\,(h^{\,3})_{\,x}\,h_{\,x} \\
&=&\ \paren{h^{\,3}\,v\,\paren{-\frac{M}{h^{\,2}}\,h_{\,x}}\ +\ g\,h^{\,3}\,h_{\,x}}_{\,x}\ -\ (h^{\,3}\,v)_{\,x}\,v_{\,x}\ -\ g\,(h^{\,3})_{\,x}\,h_{\,x} \\
&=&\ g\,\paren{(h^{\,3}\ -\ h_{\,c}^{\,3})\,h_{\,x}}_{\,x}\ -\ (h^{\,3}\,v)_{\,x}\,v_{\,x}\ -\ g\,(h^{\,3})_{\,x}\,h_{\,x}\,,
\end{eqnarray*}
in which every term is a locally integrable function.
\subsection{Energy dissipation of weakly singular waves}
Here our aim is to show that the regularized shock-wave solutions of the rSV equations that correspond to the simple shallow-water shock \eqref{d:simpleshock} satisfy the distributional identity
\begin{equation}\label{e:disseps}
{\mathcal{E}}^{\,\eps}_{\,t}\ +\ {\mathcal{Q}}^{\,\eps}_{\,x}\ =\ \mu\,,
\end{equation}
where the dissipation measure $\mu$ is a constant multiple of $1-$dimensional \textsc{Hausdorff} measure restricted to the simple shock curve $\{(x,\,t)\,:\ x\ =\ s\,t\}\,$, satisfying
\begin{equation*}
{\mathcal{D}}\ =\ \frac{\mathrm{d} \mu}{\mathrm{d} t}\ =\ \pm\,\frac14\;g\,\gamma\,(h_{\,+}\ -\ h_{\,-})^{\,3}\ <\ 0\,,
\end{equation*}
\emph{exactly the same as the simple shallow-water shock in \eqref{d:simpleshock}}.
Indeed, the steady solution constructed above is a smooth solution of the rSV equations \eqref{e:rsvh} -- \eqref{e:rsvu} on both the right and left half-lines, hence satisfies the conservation law \eqref{e:rsvE} except at
$x\ =\ 0\,$. In this time-independent situation this means
\begin{equation*}
{\mathcal{Q}}^{\,\eps}_{\,x}\ =\ 0\,, \qquad x\ \in\ \mathds{R}\ \setminus\ \{\,0\,\}\,.
\end{equation*}
Now, integration of this equation separately on the right and left half lines yields
\begin{equation*}
{\mathcal{Q}}^{\,\eps}\ =\ \begin{cases}
{\mathcal{Q}}_{\,-}\,, & x\ <\ 0\,, \\
{\mathcal{Q}}_{\,+}\,, & x\ >\ 0\,,
\end{cases}
\end{equation*}
where the constants ${\mathcal{Q}}_{\,\pm}$ can be evaluated by taking $x\ \to\ \pm\,\infty$ in the expression for ${\mathcal{Q}}^{\,\eps}$ in \eqref{e:rsvE} and invoking the limits in \eqref{e:hvlim}. The result is that the constants ${\mathcal{Q}}_{\,\pm}$ take the same values as appear in \eqref{e:diss1} for the simple shallow-water shock. Namely,
\begin{equation*}
{\mathcal{Q}}_{\,\pm}\ =\ \frac12\;h_{\,\pm}\,v_{\,\pm}^{\,3}\ +\ g\,h_{\,\pm}^{\,2}\,v_{\,\pm}\,.
\end{equation*}
Therefore, by the same calculation that leads to \eqref{e:dval}, the weak derivative of ${\mathcal{Q}}^{\,\eps}$ on all of $\mathds{R}$ is a multiple of the \textsc{Dirac} delta measure $\delta_{\,0}$ at $x\ =\ 0\,$, satisfying
\begin{equation*}
{\mathcal{Q}}^{\,\eps}_{\,x}\ =\ ({\mathcal{Q}}_{\,+}\ -\ {\mathcal{Q}}_{\,-})\,\delta_{\,0}\ =\ {\mathcal{D}}\,\delta_{\,0}\,,
\end{equation*}
where ${\mathcal{D}}$ is the same as in \eqref{e:dval}. By undoing the \textsc{Galilean} transformation to the frame moving with the simple shock speed, we obtain \eqref{e:disseps} with dissipation measure $\mu$ exactly as claimed above.
\section{Cusped solitary waves for the regularized system}
The construction of weakly singular shock profiles in the previous section also enables us to describe cusped solitary waves for the rSV equations. These are weak traveling-wave solutions whose limits as $x\ \to\ -\,\infty$ are the same as those as $x\ \to\ +\,\infty\,$.
The point is that weak solutions of the steady rSV equations \eqref{e:steadyhv}--\eqref{e:steadyR} can be constructed \emph{by reflection} from either piece $\eta_{\,\pm}$ of the $2-$shock profile in the previous section. For each of these pieces, the quantities on the left-hand sides in \eqref{RH1a} and \eqref{RH2a} are locally integrable (in total, though not term-wise) and indeed constant on $\mathds{R}\ \setminus\ \{\,0\,\}\,$. Thus the construction above yields two valid distributional solutions of the steady rSV equations with height profiles
\begin{equation}\label{e:solwpm}
h\,(x,\,t)\ =\ \eta_{\,\pm}\,\left(\frac{\abs{x\ -\ s\,t}}{\sqrt\varepsilon}\right)\,,
\end{equation}
respectively satisfying $h\,(x,\,t)\ \to\ h_{\,\pm}$ as $\abs{x}\ \to\ \infty\,$. The energy of these solitary wave solutions satisfies the conservation law \eqref{e:rsvE} without alteration.
\subsection{Solitary waves of elevation}
We note that for the solution using $\eta_{\,+}\,$, the value of $h_{\,-}$ has no direct interpretation in terms of the wave shape. However, from \eqref{e:Gpinv} we see that the solitary-wave height profile with the $+$ sign can be determined from any independently chosen values of $h_{\,\infty}\ \stackrel{\textup{def}}{:=}\ h_{\,+}$ and $h_{\,c}$ with
\begin{equation*}
0\ <\ h_{\,\infty}\ <\ h_{\,c}\,.
\end{equation*}
Here $h_{\,c}$ is the maximum height of the wave and $h_{\,\infty}$ is the limiting value at $\infty\,$. The wave everywhere is a wave of elevation, with $h_{\,\infty}\ <\ h\,(x,\,t)\ \le\ h_{\,c}\,$, determined implicitly as in \eqref{e:hpint} and \eqref{e:Gpinv} by
\begin{equation}\label{e:solpint}
\int_{\,h}^{\,h_{\,c}}\,\left({\frac{h_{\,c}^{\,3}\ -\ k^{\,3}}{h_{\,c}^{\,3}\ -\ h_{\,\infty}^{\,2}\,k}}\right)^{\,\half}\;\frac{h_{\,\infty}}{k\ -\ h_{\,\infty}}\;\mathrm{d} k\ =\ \frac{\abs{x\ -\ s\,t}}{\sqrt\varepsilon}\,, \qquad x\ \in\ \mathds{R}\,, \quad h\ \in\ (h_{\,\infty},\,h_{\,c}\,]\,.
\end{equation}
It is natural for solitary waves to consider $u_{\,+}\ =\ 0$ to be the limiting velocity as $\abs{x}\ \to\ \infty$ in the original frame. Then by \eqref{e:s}, $v_{\,+}\ =\ -\,s\ =\ -\gamma\,h_{\,-}\,$, whence we find using \eqref{d:hc} that
$\gamma\ =\ \sqrt{g\,h_{\,c}}\,h_{\,c}/(h_{\,+}\,h_{\,-})$ and
\begin{equation}\label{e:sols}
s\ =\ \sqrt{g\,h_{\,c}}\,\frac{h_{\,c}}{h_{\,\infty}}\,.
\end{equation}
This determines the velocity profile according to
\begin{equation}\label{e:solu}
u\,(x,\,t)\ =\ s\ +\ \frac{M}{h}\ =\ s\,\left(1\ -\ \frac{h_{\,\infty}}{h}\right)\,.
\end{equation}
This velocity is everywhere positive, as a consequence of the fact that we started with a $2-$shock profile. We note that these solitary waves travel to the right, with speed $s$ that exceeds the characteristic speed $\sqrt{g\,h_{\,\infty}}$ at the constant state $(h_{\,\infty},\,0)$ in this case. The spatial reflection symmetry yields solitary waves that travel to the left instead. This symmetry also recovers the solitary waves that can be constructed from $1-$shock profiles.
\subsection{Solitary waves of depression}
We obtain solitary waves of depression by using $\eta_{\,-}$ in \eqref{e:solwpm} instead of $\eta_{\,+}\,$, choosing $h_{\,\infty}\ \stackrel{\textup{def}}{:=}\ h_{\,-}$ (the wave height at $\infty$) and $h_{\,c}$ (the minimum wave height) arbitrary subject to the requirement that
\begin{equation*}
0\ <\ h_{\,c}\ <\ h_{\,\infty}\,.
\end{equation*}
Similarly to \eqref{e:solpint}, the wave height $h\,(x,\,t)\ \in\ [h_{\,c},\,h_{\,\infty})$ is determined implicitly by
\begin{equation}\label{e:solmint}
\int_{\,h_{\,c}}^{\,h}\,\left({\frac{k^{\,3}\ -\ h_{\,c}^{\,3}}{h_{\,\infty}^{\,2}\,k\ -\ h_{\,c}^{\,3}}}\right)^{\,\half}\;\frac{h_{\,\infty}}{h_{\,\infty}\ -\ k}\,\mathrm{d} k\ =\ \frac{\abs{x\ -\ s\,t}}{\sqrt\varepsilon}\,, \qquad x\ \in\ \mathds{R}\,, \quad h\ \in\ [\,h_{\,c},\,h_{\,\infty})\,.
\end{equation}
Considering $u_{\,-}\ =\ 0$ to be the limiting velocity as $\abs{x}\ \to\ \infty\,$, we find $v_{\,-}\ =\ -\,s\ =\ -\,\gamma\,h_{\,+}$ from \eqref{e:s}, whence
the solitary wave speed is again given by equation \eqref{e:sols}, and again the corresponding velocity profile is given by \eqref{e:solu}. This time, the velocity is everywhere negative (when starting with the $2-$shock profile), while the solitary wave travels to the right ($s\ >\ 0$) but with speed $s$ \emph{less than} the characteristic speed $\sqrt{g\,h_{\,\infty}}$ of the state at infinity. Again, spatial reflection yields waves of depression traveling to the left.
\section{Parametric formulae for shock profiles and cusped waves}
Here we describe how weakly singular shock profiles and cusped waves can be determined in a parametric form,
\begin{equation*}
h\ =\ h\,(\xi)\,, \qquad x\ =\ x\,(\xi)\,, \qquad \xi\ \in\ \mathds{R}\,,
\end{equation*}
by a quadrature procedure that avoids dealing with the singularities present in the ODEs \eqref{e:hplus}, \eqref{e:hminus} and in the integrands of the implicit relations \eqref{e:hpint}, \eqref{e:hmint}, \eqref{e:solpint}. Inspired by the fact that classical solitary wave profiles of the form $f\,(\xi)\ =\ \beta\,\operatorname{sech}^{\,2}\,(\frac12\;\xi)$ (and their translates) satisfy an equation with a cubic polynomial as right-hand side,
\begin{equation}\label{e:sech2}
f_{\,\xi}^{\,2}\ =\ f^{\,2}\,\left(1\ -\ \frac{f}{\beta}\right)\,,
\end{equation}
we modify the dimensionless ODE~\eqref{e:odeH} by replacing $H^{\,3}$ in the denominator by its asymptotic value $1\,$. Thus we seek the solution of \eqref{e:odeH} in parametric form $H\ =\ H\,(\xi)\,$, $z\ =\ z\,(\xi)$ by solving
\begin{gather}\label{e:Hpar}
H_{\,\xi}^{\,2}\ =\ \frac{(H\ -\ {\mathcal{F}})\,(H\ -\ 1)^{\,2}}{1\ -\ {\mathcal{F}}}\ =\ (H\ -\ 1)^{\,2}\,\left(1\ -\ \frac{H\ -\ 1}{{\mathcal{F}}\ -\ 1}\right)\,, \\
\label{e:zpar}
z_{\,\xi}^{\,2}\ =\ \varepsilon\,\frac{H^{\,3}\ -\ {\mathcal{F}}}{1\ -\ {\mathcal{F}}}\,.
\end{gather}
We require $H^{\,3}\ =\ {\mathcal{F}}$ when $z\ =\ 0\,$. It is convenient to require $z\,(0)\ =\ 0\,$. Comparing the form of \eqref{e:Hpar} with \eqref{e:sech2} we find the appropriate solution of \eqref{e:Hpar} on either half-line $\xi\ \ge\ 0$ or $\xi\ \le\ 0$ can be written in the form
\begin{equation}\label{e:Hsol}
H\,(\xi)\ =\ 1\ +\ ({\mathcal{F}}\ -\ 1)\,\operatorname{sech}^{\,2}\,\left(\frac12\;\abs{\xi}\ +\ \alpha\right)\,,
\end{equation}
where $H\,(0)\ =\ {\mathcal{F}}^{\,1/3}$ provided
\begin{equation*}
\cosh^{\,2}\,\alpha\ =\ \frac{{\mathcal{F}}\ -\ 1}{{\mathcal{F}}^{\,1/3}\ -\ 1}\,.
\end{equation*}
A unique $\alpha\ >\ 0$ solving this equation exists in either case ${\mathcal{F}}\ >\ 1$ or $0\ <\ {\mathcal{F}}\ <\ 1\,$, namely
\begin{equation}\label{d:alpha}
\alpha\ =\ \ln\,(\sqrt{\gamma}\ +\ \sqrt{\gamma\ -\ 1})\,, \qquad \gamma\ =\ \frac{{\mathcal{F}}\ -\ 1}{{\mathcal{F}}^{\,1/3}\ -\ 1}\,,
\end{equation}
because $\gamma\ >\ 1\,$. Now $z\,(\xi)$ is recovered by quadrature from \eqref{e:zpar} as
\begin{equation}\label{e:zsol}
z\,(\xi)\ =\ \sqrt\varepsilon\;\int_{\,0}^{\,\xi}\,\left(\frac{H\,(\zeta)^{\,3}\ -\ {\mathcal{F}}}{1\ -\ {\mathcal{F}}}\right)^{\,1/2}\,\mathrm{d}\zeta\,.
\end{equation}
To express this result in dimensional terms for $h\ =\ h_{\,\pm}\,H$ in each case as appropriate, we recall ${\mathcal{F}}_{\,\pm}\ =\ h_{\,c}^{\,3}/h_{\,\pm}^{\,3}$ where $h_{\,c}$ may be determined from $h_{\,+}\,$, $h_{\,-}$ by \eqref{d:hc}. We obtain
\begin{gather}\label{e:hint}
h\,(\xi)\ =\ h_{\,\pm}\ +\ \frac{(h_{\,c}\ -\ h_{\,\pm})\,\cosh^{\,2}\,\alpha_{\,\pm}}{\cosh^{\,2}\,(\frac12\;\abs{\xi}\ +\ \alpha_{\,\pm})}\,, \\
\label{e:xint}
x\,(\xi)\ =\ \sqrt\varepsilon\,h_{\,\pm}\,\int_{\,0}^{\,\xi}\left(\frac{h\,(\zeta)^{\,3}\ -\ h_{\,c}^{\,3}}{h_{\,\pm}^{\,3}\ -\ h_{\,c}^{\,3}}\right)^{\,1/2}\,\mathrm{d}\zeta\,,
\end{gather}
where $\alpha_{\,\pm}$ is determined from \eqref{d:alpha} using ${\mathcal{F}}\ =\ {\mathcal{F}}_{\,\pm}\,$.
Cusped solitary wave profiles are expressed parametrically by the same formulae
after replacing $h_{\,\pm}$ with $h_{\,\infty}\,$.
An explicit expression for $x\,(\xi)$ remains to be obtained. Even if this expression could be obtained in closed form, it likely would involve special functions that may not be easily computed. In any case, it is straightforward to compute $x\,(\xi)$ directly from the integral by an efficient quadrature method. We note, however, that \textsc{Taylor} expansion of $\operatorname{sech}^{\,2}\,(\frac12\;\abs{\xi}\ +\ \alpha_{\,\pm})$ implies that for small $\abs{\xi}\,$,
\begin{equation*}
\frac{h\,(\xi)\ -\ h_{\,c}}{h_{\,\pm}\ -\ h_{\,c}}\ =\ \abs{\xi}\,\tanh\,\alpha_{\,\pm}\ +\ \O\,(\abs{\xi}^{\,2})\,.
\end{equation*}
Consequently the integrand of \eqref{e:xint} has a weak singularity at $0\,$, with
\begin{equation*}
\left(\frac{h\,(\zeta)^{\,3}\ -\ h_{\,c}^{\,3}}{h_{\,\pm}^{\,3}\ -\ h_{\,c}^{\,3}}\right)^{\,1/2}\ =\ K\,\abs{\zeta}^{\,1/2}\ +\ \O\,(\abs{\zeta})\,, \qquad K\ =\ \left(\frac{3\,h_{\,c}^{\,2}\,\tanh\,\alpha_{\,\pm}}{h_{\,\pm}^{\,2}\ +\ h_{\,\pm}\,h_{\,c}\ +\ h_{\,c}^{\,2}}\right)^{\,1/2}\,.
\end{equation*}
This singularity can be eliminated by a change of variable $\zeta\ =\ \pm\,y^{\,2}$ --- then simple quadratures will yield accurate numerical approximations.
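As a concrete illustration, the following \texttt{numpy}/\texttt{scipy} sketch (our own, with the limiting heights of Fig.~\ref{fig:1} below chosen only for definiteness) evaluates \eqref{e:hint} -- \eqref{e:xint} by direct quadrature; the weak singularity of the integrand is mild enough here that \texttt{quad} handles it without the substitution $\zeta\ =\ \pm\,y^{\,2}\,$. The left half of a shock profile is obtained from the $h_{\,-}$ branch via $x\ \to\ -\,x\,$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def branch(h_side, h_c, eps=1.0, xi_max=12.0, m=200):
    # one half of the profile (e:hint)-(e:xint); h_side is h_+ or h_-
    F = h_c**3/h_side**3
    gam = (F - 1.0)/(F**(1.0/3.0) - 1.0)       # gamma in (d:alpha), > 1
    alpha = np.log(np.sqrt(gam) + np.sqrt(gam - 1.0))
    h = lambda z: h_side + (h_c - h_side)*np.cosh(alpha)**2 \
                  / np.cosh(0.5*abs(z) + alpha)**2
    f = lambda z: np.sqrt((h(z)**3 - h_c**3)/(h_side**3 - h_c**3))
    xi = np.linspace(0.0, xi_max, m)
    x = np.array([np.sqrt(eps)*h_side*quad(f, 0.0, z)[0] for z in xi])
    return x, np.array([h(z) for z in xi])

h_minus, h_plus = 1.2374, 1.0                  # values used in Fig. 1
h_c = (0.5*(h_plus + h_minus)*h_plus*h_minus)**(1.0/3.0)
xr, hr = branch(h_plus, h_c)                   # right half, x >= 0
xl, hl = branch(h_minus, h_c)                  # left half is (-xl, hl)
\end{verbatim}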
\section{Numerical simulations}
\label{sec:6}
In this section we examine how the theory of weakly singular shock profiles
developed in this paper fits the smoothed shocks observed
in the computations carried out in \cite{Clamond2018}.
\subsection{A dynamically generated wave front}
In Fig.~\ref{fig:1} we compare a shock profile computed by the theory developed in this paper with a solution to the rSV system computed as in \cite{Clamond2018} for ``dam-break'' initial data, similar to a \textsc{Riemann} problem for the shallow-water system. For a recent treatment of the classical Riemann problem for the shallow-water equations, including a discussion of analytical properties as well as numerical techniques, see \cite{Holden2015}.
The solid line in Fig.~\ref{fig:1} is from the numerically computed solution to the rSV system at time $t\ =\ 15$ with $\varepsilon\ =\ 0.5$ and smoothed step function (``dam break'') initial data
\begin{equation*}
h_{\,0}\,(x)\ =\ h_{\,-}\ +\ \frac12\;(h_{\,+}\ -\ h_{\,-})\,(1\ +\ \tanh(\delta\,x))
\end{equation*}
for $h_{\,-}\ =\ 1.5\,$, $h_{\,+}\ =\ 1\,$, $g\ =\ 1\,$, $\delta\ =\ 1\,$, as indicated in \cite{Clamond2018}. The numerical computation was performed with a \textsc{Fourier} pseudospectral method as described in \cite{Dutykh2011a}, using an Erfc-Log filter for anti-aliasing \cite{Boyd1995} and with $N\ =\ 8192$ modes on a periodic domain of length $4\,L$ with $L\ =\ 25\,$.
The crosses mark the shock profile solution computed parametrically using formulae \eqref{e:hint} -- \eqref{e:xint} of the previous section with $h_{\,-}\ =\ 1.2374$ and $h_{\,+}\ =\ 1\,$, with $x$ shifted by $17.67\,$. The bottom part of the figure is a zoom-in on the indicated region of the upper part. We remark that the computed rSV solution in Fig.~\ref{fig:1} corresponds directly to Fig.~3(c) of \cite{Clamond2018} --- due to a late change of notation the values of $\varepsilon$ reported for the computations in \cite{Clamond2018} correspond to $2\,\varepsilon$ in the present notation.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/rsv-fig1a.eps}
\put(-340,90){\large $h$}
\put(-50,0){\large $x$}
\includegraphics[width=0.95\linewidth]{figs/rsv-fig1b.eps}
\put(-340,100){\large $h$}
\put(-50,0){\large $x$}
\caption{\small\em Comparison of shock profile with dam-break computation of \cite{Clamond2018}. The solid line is the rSV solution with $\varepsilon\ =\ 0.5$ computed by a pseudospectral method. Crosses mark the shock profile computed as in \eqref{e:hint} -- \eqref{e:xint}, shifted by $17.67\,$.}
\label{fig:1}
\end{center}
\end{figure}
\subsection{Energy dissipation}
In Fig.~\ref{fig:2} we plot the total energy from \eqref{e:rsvE},
\begin{equation}\label{d:totalE}
E^{\,\varepsilon}\,(t)\ =\ \int_{\,-L}^{\,L} {\mathcal{E}}^{\,\eps}\,\mathrm{d} x\,,
\end{equation}
as a function of time, for a solution computed as in Fig.~\ref{fig:1} but with anti-aliasing performed using the filter employed by \textsc{Hou} and \textsc{Li} in \cite{Hou2007}, namely
\begin{equation*}
\rho\,(2\,k\,/\,N)\ =\ \exp(\,-36\,\abs{2\,k\,/\,N}^{\,36})\,, \quad k\ =\ -N/2,\,\ldots,\,N/2\ -\ 1\,,
\end{equation*}
applied on each time step. From this data, we estimate the average energy decay rate $\mathrm{d} E^{\,\varepsilon}/\mathrm{d} t\ \approx\ -\,0.00326$ over the range $t\ \in\ [\,14,\,15\,]\,$. Corresponding to $h_{\,-}\ =\ 1.2374\,$, $h_{\,+}\ =\ 1\,$, the dissipation formula \eqref{e:dval} predicts $\mathrm{d} E^{\,\varepsilon}/\mathrm{d} t\ =\ -\,0.00318\,$, giving a relative error of less than $2.6$ percent.
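For comparison, the predicted rate is a one-line evaluation of \eqref{e:dval} at the quoted limiting heights:
\begin{verbatim}
import math

g, hp, hm = 1.0, 1.0, 1.2374
gamma = math.sqrt(g*(hp + hm)/(2*hp*hm))
print(0.25*g*gamma*(hp - hm)**3)    # 2-shock branch: approx -0.00318
\end{verbatim}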
\begin{figure}
\begin{center}
\includegraphics[width=0.95\linewidth]{figs/rsv-fig2.eps}
\put(-348,60){\large $E^{\,\varepsilon}$}
\put(-50,0){\large $t$}
\caption{\small\em Total energy $E^{\,\varepsilon}$ vs. $t$ in the smoothed dam break problem as in Fig.~\ref{fig:1} with $\varepsilon\ =\ 0.5\,$.}
\label{fig:2}
\end{center}
\end{figure}
\subsection{Cusped waves}
The profile of a cusped solitary wave of elevation is plotted in Fig.~\ref{fig:cusp} for $h_{\,\infty}\ =\ h_{\,+}\ =\ 1$ and maximum height $h_{\,c}\ =\ 1.3\,$. We were not able to compute a clean isolated traveling cusped wave by taking the numerically computed wave profile for $(h,\,u)$ as initial data on a regular grid. Indeed, there is no particular reason our pseudospectral code should work well for such a singular solution, and anyway it may not be numerically stable. However, when taking the $h-$profile in Fig.~\ref{fig:cusp} as initial data with \emph{zero} initial velocity, the numerical solution develops two peaked waves traveling in opposite directions, as indicated in Fig.~\ref{fig:2peak}. While hardly conclusive, this evidence suggests that cusped solutions may be relevant in the dynamics of the rSV system.
The two peaks here are slightly skewed compared to the profile of a cusped solitary wave. Our limited exploration uncovered no convincing evidence that cusped waves collide ``cleanly'' enough to justify calling them `cuspons' or to suggest that the rSV system is formally integrable; it may be difficult to tell, though, as perturbed cusped waves do not leave behind a dispersive ``tail'' in this non-dispersive system.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{figs/rsv-fig3.eps}
\put(-325,93){\large $h$}
\put(-50,0){\large $x$}
\caption{\small\em Cusped solitary wave profile for $h_{\,\infty}\ =\ 1\,$, $h_{\,c}\ =\ 1.3\,$.}
\label{fig:cusp}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{figs/rsv-fig4.eps}
\put(-325,93){\large $h$}
\put(-50,0){\large $x$}
\caption{\small\em Numerical solution at $t\ =\ 6$ with initial height from Fig.~\ref{fig:cusp}, initial velocity zero.}
\label{fig:2peak}
\end{center}
\end{figure}
\section{Discussion and outlook}
Our analysis of traveling wave profiles for the rSV system proves that, as the authors of \cite{Clamond2018} stated, the regularized system admits `smoothed shocks' that propagate at exactly the same speed as corresponding classical discontinuous shocks for the shallow water equations. The new waves are indeed piecewise smooth and continuous, but have weak singularities which correctly generate the same energy dissipation as the classical shocks.
This ability of the rSV system to correctly model shock wave propagation non-dispersively without oscillations while conserving energy for smooth solutions is an interesting feature which deserves further investigation. As demonstrated in \cite{Clamond2018}, it means that a rather straightforward pseudospectral method (albeit one which involves careful dealiasing, and iteration to eliminate the time derivative term in ${\mathcal R}$) computes shock speeds accurately over a wide range of values of $\varepsilon\,$, with $2\,\varepsilon$ ranging from $0.001$ to $5$ in the examples treated in \cite{Clamond2018}.
The comparisons made in the previous section above make it plausible that the pseudospectral method used to produce Figs.~\ref{fig:1} and \ref{fig:2} is computing an accurate approximation to a solution of the rSV system which ceases to conserve energy (hence loses smoothness) around $t\ =\ 7$ or $8\,$, and develops afterward a traveling wave whose shape closely matches a weakly singular shock profile. We speculate that an important source of energy dissipation in this pseudospectral computation may be the damping of high frequency components induced for dealiasing purposes.
How this actually happens and what it may mean with regard to the design and accuracy of numerical approximations remains to be investigated in detail. Often, energy conservation, or preservation of some variational (\textsc{Lagrangian}) or symplectic (\textsc{Hamiltonian}) structure, is a desirable feature of a numerical scheme designed for long-time computations in an energy-conserving system. (See \cite{Marsden1998, Lew2004, Hairer2002, Clamond2007} for discussion of variational and symplectic integrators.) But for the rSV system considered here, exact conservation of energy appears to be {\it inappropriate} for approximating solutions containing weakly singular shock profiles, which dissipate energy as we have shown.
At present, the issue of preservation of symplectic structure may be moot anyway, since we are not aware of a canonical Hamiltonian structure for the rSV system. It seems worth mentioning, however, that the rSV system admits the following non-canonical \textsc{Hamiltonian} structure. Namely, with
\begin{equation*}
{\mathcal H}\ =\ \int\half\;h\,u^{\,2}\ +\ \half\;g\,(h\ -\ h_{\,\infty})^{\,2}\ +\ \varepsilon\,\left(\half\;h^{\,3}\,u_{\,x}^{\,2}\ +\ \half\;g\,h^{\,2}\,h_{\,x}^{\,2}\right)\,\mathrm{d} x\,,
\end{equation*}
and $m\ =\ h\,u\ -\ \varepsilon\,(h^{\,3}\,u_{\,x})_{\,x}\,$, the rSV system is formally equivalent to
\begin{equation*}
\partial_{\,t}\,\begin{pmatrix} m \\ h \end{pmatrix}\ =\
-\,\begin{pmatrix}
\partial_{\,x}\,m\ +\ m\,\partial_{\,x} & h\,\partial_{\,x} \\
\partial_{\,x} h & 0
\end{pmatrix}\cdot
\begin{pmatrix}
\delta {\mathcal H}/\delta m \\
\delta {\mathcal H}/\delta h
\end{pmatrix}\,.
\end{equation*}
This is a simple variant of the \textsc{Hamiltonian} structure well-known for the \textsc{Green}--\textsc{Naghdi} equations \cite{Holm1988, Constantin1997, Johnson2002, Li2002}, obtained by replacing the \textsc{Green}--\textsc{Naghdi} \textsc{Hamiltonian} with a \textsc{Hamiltonian} derived from \eqref{d:eep}.
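As a formal consistency check of this structure (holding $h$ fixed and using the self-adjointness of the operator $u\ \mapsto\ h\,u\ -\ \varepsilon\,(h^{\,3}\,u_{\,x})_{\,x}$ relating $u$ to $m$), one computes
\begin{equation*}
\frac{\delta {\mathcal H}}{\delta u}\ =\ h\,u\ -\ \varepsilon\,(h^{\,3}\,u_{\,x})_{\,x}\ =\ m\,, \qquad\text{hence}\qquad \frac{\delta {\mathcal H}}{\delta m}\ =\ u\,,
\end{equation*}
so that the second row of the operator matrix reproduces mass conservation, $h_{\,t}\ +\ (h\,u)_{\,x}\ =\ 0\,$.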
Finally, as we have mentioned, quite a number of analytic questions remain for further investigation, involving the development of weak singularities in smooth solutions of the rSV system, the existence of solutions with weak singularities, and whether these phenomena occur for other important models of physical systems.
\subsection*{Acknowledgments}
\addcontentsline{toc}{subsection}{Acknowledgments}
This work was initiated during the 2017 ICERM semester program on Singularities and Waves in Incompressible Fluids. The work of RLP and YP is partially supported by the National Science Foundation under NSF grant DMS 1515400, the Simons Foundation under grant 395796 and the Center for Nonlinear Analysis (through NSF grant OISE-0967140).
\bigskip\bigskip
\addcontentsline{toc}{section}{References}
\bibliographystyle{abbrv}
\section{\label{sec:Intro} Introduction}
Granular materials are a class of systems which are out of equilibrium
and not easy to understand within the framework of standard statistical
mechanics. For static assemblies, the distribution of forces~\cite{Coppersmith} and
the continuum limit~\cite{Cates} are difficult to obtain. This is because
interparticle contacts are very stiff: a slight compression of two
particles that are in contact, by an amount that is much less than the
interparticle separation, gives rise to large forces. Added
complications are caused by the fact that, for noncohesive granular
matter, two particles in contact repel each other when they are
compressed, but do not attract each other when they are moved away
from each other and the contact is broken; that the repulsive force
between particles is not a linear function of their compression
when the compression is small;~\cite{MvHreview} and that there are frictional forces
between particles,~\cite{Bi} resulting in history dependent forces. The
dynamic properties of granular matter are difficult to understand
because interparticle collisions are strongly inelastic. If a high
density of particles builds up in a region because of random
fluctuations, the collision rate and therefore the rate of energy
loss increases in the region. This can trap particles in the region,
causing the density fluctuations to grow.~\cite{Goldhirsch} Experimentally, one
observes distinctive phenomena such as force chains and stability
against mechanical collapse in very sparse static packings,
non-Maxwellian velocity distributions~\cite{vanNoije,Rouyer,Reis} and inelastic collapse in
dilute granular gases,~\cite{McNamara,Hopkins} and shear thinning and shear thickening in
the intermediate regime.~\cite{Brown}
The tendency of flowing granular matter to get `jammed' and stop
flowing at low densities is a practical problem that limits the
flow rate in the industrial use of granular materials.~\cite{LiuNagelCRCBook} Remarkably, the
transition from a flowing to a jammed state in granular matter,
structural glasses, and foams and colloids, can be studied with a
unified approach.~\cite{LiuNagel1998} When the transition occurs at zero temperature
and zero shear stress as the density is varied, the transition point
is called ``Point J'',~\cite{OHern} and is characterized by diverging length
scales~\cite{Wyart,Ellenbroek} suggestive of a second order phase transition. At the same
time, other properties of the system change discontinuously at Point
J,~\cite{OHern} as one would expect at a first order phase transition.
The density of states for vibrational modes in a granular system
is one of the properties that has a signature of the transition at
Point J. A jammed granular system has mechanical rigidity. Even
though the force between two particles is a nonlinear function of
the compression between them, the small deviations from the jammed
state (which already has non-zero compression) can be analyzed using
a linear model, resulting in normal modes. Extensive numerical
simulations~\cite{OHern} on systems at zero temperature and zero shear stress
show that the density of states $D(\omega)$ as a function of $\omega$
approaches zero linearly as $\omega\rightarrow 0$ if the particle
density is greater than the critical density.
As the particle density is reduced, the
slope of $D(\omega)$ at the origin becomes steeper, until at Point
J, $D(\omega \rightarrow 0) \neq 0.$
In the linearized analysis of vibrational modes, the system can be
treated as a network of random springs, with the number of springs
decreasing as Point J is approached. It is natural to analyze the
problem using random matrices, and see how the resultant density
of states evolves near the transition. This has been done,~\cite{beltukov} and
yields a broad peak in the density of states that reaches $\omega
= 0$ as the transition is approached. However, the model also
predicts a gap in the density of states near $\omega = 0$ above the
transition, which does not match the numerical results. There are a few
other qualitative discrepancies.
Although it is encouraging that some features of the random matrix
density of states match $D(\omega)$ from numerical simulations,
it is well known that the overall density of states predicted by
random matrix theory often differs from what is observed in systems
to which the theory applies, because of non-universal effects \cite{mehta}.
Instead, the correlations in the density of states and the distribution
of level spacings are more reliable indicators of the validity of
the random matrix approach \cite{mehta}.
In this paper we therefore
turn to the correlations in the density of states predicted by random matrix
theory. We argue that the Laguerre orthogonal ensemble rather than the Gaussian
orthogonal ensemble (GOE) is the appropriate random matrix model for granular
bead packs. The correlation
function for the Laguerre ensemble differs from that for the GOE near the
low-frequency edge of the allowed range of $\omega$ \cite{nagao}.
By comparing the correlations in the numerically computed vibrational
spectrum of granular bead packs near the jamming transition to the
predictions of the Laguerre ensemble and the GOE we are able to
demonstrate good agreement with the former and to exclude the latter.
The distribution
of consecutive level spacings is also a universal feature of the spectrum
that should be described by random matrix theory. We find that the level
spacing distribution predicted by the Laguerre ensemble is very close
to the GOE result both near the zero frequency edge and at high frequency.
The spectra calculated for granular bead packs in dynamical simulations
are found to agree with this distribution. This finding further validates
the random matrix approach but it does not help discriminate between the
Laguerre ensemble and the GOE. The agreement of the level spacing distribution
with the GOE result has been observed earlier,~\cite{Silbert,Nelson} but without
reference to the Laguerre ensemble.
We also construct a random lattice model, which is a physically
motivated variant of the random matrix ensemble. Although it is not
possible to calculate the properties of this model analytically,
numerical results reveal that all the qualitative features of
$D(\omega)$ are reproduced. At the same time, the correlation
functions and the level spacing distribution seen in the idealized
random matrix theory are not significantly changed.
The rest of the paper is organized as follows.
In section II we summarize the random matrix approach to the problem
and the results for the density of states. In Section III we compare
the autocorrelation function for the density of states and the level
spacing distribution for the Laguerre ensemble and the GOE to the
vibrational spectra for granular packs near point J, obtained from
dynamical numerical simulations. In Section IV we introduce the
random lattice model and show that it reproduces both universal
and non-universal features of vibrational spectrum of granular
packs. In Appendix A we present an analysis of the level spacing
distribution for the Laguerre ensemble and in Appendix B some
technical details regarding the autocorrelation.
\section{Laguerre ensembles}
We follow the approach of Ref.~\cite{beltukov} here. Within linear response,
if the particles in the granular assembly are displaced slightly
from their resting positions, their accelerations are of the form
$\ddot x = -K x,$ where $x$ is an $N$-component column vector and $K$ is an
$N\times N$ matrix. The crucial observation~\cite{lubensky,calladine} is that the connection
between accelerations and displacements is a two-step process.
Within linear response, each contact between a pair of particles
can be represented as a spring that has been precompressed by some
amount. Thus one has a network of springs, with various spring
constants. When a particle is displaced, it stretches (or compresses)
each spring that it is connected to, by an amount that is equal to
the component of its displacement along that spring. The spring
exerts a restoring force that is proportional to this stretching;
the spring constant can be different for each spring. Thus we have
\begin{equation}
f_i = k_i \tilde A_{ij} x_j
\end{equation}
where $\tilde A_{ij} = \cos\theta_{ij}$ if the $i$'th spring is
connected to the $j$'th particle, with $\theta_{ij}$ being the angle
between the displacement and the direction of the spring, and $\tilde
A_{ij} = 0$ otherwise. The restoring force on each particle is the
sum of the forces from all the springs it is connected to, so that
\begin{equation}
m_j \ddot x_j = - \tilde A^T_{ji} f_i
\end{equation}
i.e.
\begin{equation}
\ddot x_j = - \frac{1}{m_j} \tilde A^T_{ji} k_i \tilde A_{ik} x_k.
\end{equation}
Defining $A_{ij} = \sqrt{k_i}\, \tilde A_{ij}/\sqrt{m_j},$ this is equivalent to
\begin{equation}
\ddot x_j = - A^T_{ji} A_{ik} x_k
\label{AAT}
\end{equation}
Even though we have presented the argument as if the displacement
of each particle is one scalar variable, it is easy to extend this
to particles in a $d$-dimensional system, with $N$ displacement
variables for $N/d$ particles. We have implicitly assumed that the
particles are frictionless spheres, so that torque balance is
trivially satisfied.
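As a minimal illustration of this two-step structure, consider two unit-mass particles in one dimension joined by a single spring of stiffness $k$, so that $M = 1$ and $N = 2$. Then $\tilde A = (1,\, -1)$, $A = \sqrt{k}\,(1,\,-1)$, and
\begin{equation*}
A^T A = k \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},
\end{equation*}
whose eigenvalues $\omega^2 = 0$ and $\omega^2 = 2k$ recover the translational zero mode and the familiar frequency $\sqrt{2k}$ of the two-mass oscillator.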
The matrix $A$ is a rectangular matrix, since the number of springs
is greater than $N.$ As one approaches Point J, the number of
contact forces decreases, being equal to $N$ at the transition.
In the random matrix approach to this problem, we assume that all
the entries in the matrix $A$ are independent Gaussian random
variables, drawn from a distribution with zero mean and (with a
suitable rescaling) unit variance. This is the Laguerre random
matrix ensemble. It can be shown~\cite{Forrester} that the reduced probability density
for the eigenvalues $\omega_1^2, \omega_2^2\ldots \omega^2_N$ of
$A^T A$ is of the form
\begin{equation}
p(\{\omega\}) \propto \prod_{i < j} |\omega_j^2- \omega_i^2|\prod_i \omega_i^{M - N} \exp[- \sum_i \omega_i^2/2]
\label{gaussian}
\end{equation}
where $A$ is an $M\times N$ matrix, i.e. there
are $M$ inter-particle contacts in the system, and without loss of
generality we have chosen all the $\omega_i$'s to be greater than
zero.
Rewriting the probability density in terms of $\lambda_i =
\omega_i/\sqrt N,$ we have
\begin{equation}
p(\{\lambda\}) \propto \exp\left[- N \sum_i \frac{\lambda_i^2}{2} + f \ln |\lambda_i| + \sum_{i < j} \ln |\lambda_i^2 - \lambda_j^2|\right]
\end{equation}
where $M/N = 1 + f.$
If $M - N$ is $O(N),$ a saddle-point expansion yields
\begin{equation}
\frac{f}{\lambda} - \lambda + P\int_0^\infty \rho(\lambda^\prime)\left[\frac{1}{\lambda - \lambda^\prime} +
\frac{1}{\lambda + \lambda^\prime}\right]d\lambda^\prime = 0
\end{equation}
wherever $\rho(\lambda) \neq 0.$ Here $\rho(\lambda)$ is the density of
eigenvalues, normalized to $\int_0^\infty \rho(\lambda) d\lambda =
1,$ and the $P$ denotes the principal value of the integral.
Symmetrizing $\rho(\lambda)$ by defining $\rho(\lambda < 0) =
\rho(-\lambda),$ the function
\begin{equation}
F(\lambda) = \int_{-\infty}^\infty \frac{\rho(\lambda^\prime)}{\lambda - \lambda^\prime} d\lambda^\prime
\end{equation}
of the complex variable $\lambda$ is analytic everywhere except
that it has branch cuts on the real line over intervals where
$\rho(\lambda)\neq 0,$ where it is equal to
\begin{equation}
\lambda - \frac{f}{\lambda} \mp i \pi \rho(\lambda).
\end{equation}
Furthermore, $F(\lambda\rightarrow 0)$ is finite, and
$F(\lambda\rightarrow\infty) \rightarrow 2/\lambda,$ because the
symmetrized extension of $\rho(\lambda)$ integrates to 2. This has
the solution
\begin{equation}
F(\lambda) = \lambda - \frac{f}{\lambda} - \frac{\sqrt{(\lambda^2 - a^2) (\lambda^2 - b^2)}}{\lambda}
\end{equation}
with
\begin{eqnarray}
a &=& \sqrt{M/N} - 1\nonumber\\
b &=& \sqrt{M/N} + 1.
\label{gap}
\end{eqnarray}
The density of states is then
\begin{equation}
\rho(\lambda) = \frac{1}{\pi} \frac{\sqrt{(b^2 - \lambda^2)(\lambda^2 - a^2)}}{\lambda} \qquad a < \lambda < b
\end{equation}
where we have removed the extension of $\rho(\lambda)$ to $\lambda
< 0,$ so that $\int_0^\infty \rho(\lambda) d\lambda = 1.$ The density
of states $D(\omega)$ has the same form, but with $a$ and $b$
rescaled, which is not significant since Eq.(\ref{gaussian}) was
already obtained after rescaling.
When $M/N > 1,$ there is a broad peak in $D(\omega),$ with a gap
in the spectrum near $\omega = 0.$ The peak is not symmetric, falling
off much more sharply on the small $\omega$ side than on the large
$\omega$ side. In the middle, the peak slopes downwards as $\omega$ is increased.
As $M/N$ is reduced, the gap shrinks while the width
of the peak remains constant. When $M/N = 1,$ $D(\omega) =
\sqrt{\Omega^2 - \omega^2}/\pi$ which matches the Wigner semicircle
law for the Gaussian orthogonal ensemble, and $D(\omega=0)\neq 0.$
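This limiting density is easy to check numerically. The sketch below (in Python, with illustrative parameters) draws Gaussian $M\times N$ matrices, rescales their singular values by $\sqrt N$, and compares the histogram with $\rho(\lambda)$ on $(a,b)$.
\begin{verbatim}
import numpy as np

N, ratio, trials = 400, 1.2, 50               # ratio plays the role of M/N
M = int(ratio * N)
lam = np.concatenate([
    np.linalg.svd(np.random.randn(M, N), compute_uv=False) / np.sqrt(N)
    for _ in range(trials)])

a, b = np.sqrt(ratio) - 1.0, np.sqrt(ratio) + 1.0
grid = np.linspace(a + 1e-3, b - 1e-3, 200)
rho = np.sqrt((b**2 - grid**2) * (grid**2 - a**2)) / (np.pi * grid)

hist, edges = np.histogram(lam, bins=50, density=True)
# hist, evaluated at the bin centres below, should track rho on (a, b)
centres = 0.5 * (edges[1:] + edges[:-1])
\end{verbatim}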
One can compare these analytical predictions with numerical results.
Simulations of two-dimensional frictionless soft spheres are
performed, allowing the spheres to equilibrate from an initial
random configuration~\cite{Corey}. As with the analytical prediction,
there is a broad peak, that falls off more sharply at small $\omega$
than at large $\omega.$ The density of states $D(\omega=0) = 0$
except at the transition. However, the numerical data does not show
the gap in the spectrum near $\omega = 0$ predicted by random matrix
theory. The numerical data also has a pronounced boson peak at a non-zero
value of $\omega,$ and a cusp in $D(\omega)$ at the origin at the
transition. None of these is consistent with the prediction from
random matrix theory. We return to this point in Section~\ref{sec:lattice}.
\section{Correlations}
In this section we subject the predictions of the random matrix
model to more stringent and appropriate tests. We have seen in the previous
section that the mean density of states of the random matrix model
does not exactly match the density of states of the dynamical
numerical simulation of a granular pack. However that is not the
appropriate test of a random matrix model. In all of its
successful applications, what random matrix theory is able to
predict correctly is not the mean density of states, but
rather statistical features like the density of states
correlation and the level spacing distribution, that are
computed after the spectrum has been smoothed by a procedure
called unfolding (explained below) that makes the mean density
of states uniform.
Theoretically one can understand this as follows.
It can be shown~\cite{brezin} that, for an ensemble of random matrices, the mean
density of states can be changed at will by varying the assumed
distribution of the matrix elements, but that the correlations of the
unfolded spectrum retain a universal form that depends only on the
symmetries of the ensemble considered (such as whether the matrices
are real or complex or quaternion real). From this point of view
the mean density of states we obtained in the previous section is
simply an artifact of the Gaussian distribution we chose for our
random matrix elements, but the unfolded density of states correlations
and the level spacing distribution are universal and should match
the numerical data for jammed granular matter if the model is applicable.
The key feature of the random matrix spectrum is that it is rigid
(i.e. highly correlated). The rigidity of the spectrum is revealed at
small energy scales by the distribution of consecutive level spacings.
The longer range rigidity can be demonstrated by the autocorrelation
of the density of states or by specific statistical measures such as the
number statistic and the spectral rigidity~\cite{mehta}.
Insight into the strong correlations between the eigenvalues
implied by the Laguerre ensemble
distribution Eq.(\ref{gaussian}) is provided by
the following plasma analogy. We focus on the case $M = N+1$ since
we are interested in the spectrum for point J. If we rewrite
the factor $| \omega_j^2 - \omega_i^2 |$ in Eq.(\ref{gaussian})
as $\exp(\ln |\omega_j - \omega_i | + \ln |\omega_j + \omega_i |)$
and the factor $\omega_i$ as $\exp (\ln \omega_i )$,
we can interpret Eq. (\ref{gaussian}) as the partition function
of a classical plasma of $N$ particles located on the positive
$\omega$ axis at the locations $\omega_1, \ldots, \omega_N$ with
logarithmic interactions between the particles as well as
logarithmic interactions between each particle and image
particles at locations $-\omega_1, \ldots, -\omega_N$. In
the plasma analogy the particles are also confined near the
origin by a quadratic potential and are constrained to remain
on the positive $\omega$ axis by a hard wall at the origin.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{levelspacing.jpg}
\end{center}
\caption{
The red and blue histograms show the consecutive level
spacing distribution for the numerically computed vibrational spectra
of jammed granular material. An ensemble of one thousand realizations
of the jammed state was used. The red histogram bins the eleven
consecutive level spacings between the frequencies $\omega_5$ through
$\omega_{16}$ for each realization; the blue histogram eleven
consecutive spacings between frequencies $\omega_{400}$ through
$\omega_{411}$. Each spacing $\omega_{i+1} - \omega_i$ is normalized
by $\langle \omega_{i+1} - \omega_i\rangle,$ where the average is
taken over the one thousand realizations. The black curve corresponds to
the Wigner surmise for the level spacing distribution of the Gaussian
orthogonal ensemble (which is indistinguishable from the Laguerre
ensemble at this level of resolution). The close agreement between
the two histograms and the solid black curve is consistent with
the predictions of our random matrix model of the jammed state of
granular matter.}
\label{fig:spacing}
\end{figure}
We turn now to the distribution of spacings between consecutive
levels. Intuitively one might expect that the spacing between
consecutive levels at low frequency would be different from
that between two levels at high frequency. This is because,
in terms of the plasma analogy, in the first instance the two
interacting levels are near the hard wall, whereas in the latter
instance they are deep in the interior of the plasma. However,
we show in Appendix A that the consecutive level spacing
distribution is not noticeably different for high and
low frequencies, and neither of these is noticeably different
from the distribution for the GOE.
In Fig. \ref{fig:spacing} we compare the predictions of the
Laguerre model to the numerically computed jammed granular
spectrum. The red histogram bins the eleven consecutive
level spacings between the frequencies $\omega_5$ through $\omega_{16}$
for each realization; each spacing is normalized by $\langle \omega_{i+1} - \omega_{i} \rangle,$
where the average is taken over the one thousand realizations of the
jammed granular pack. The blue histogram is the same but for the
eleven level spacings between the frequencies $\omega_{400}$ through $\omega_{411}$.
The two distributions are seen to be indistinguishable and to be in
good agreement with the approximate analytic formula for the Laguerre
ensemble (solid black curve) which is itself indistinguishable from
the prediction of the GOE at the resolution of the figure.
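A sketch of the procedure used to build such histograms follows (in Python; here omega is a placeholder array of shape (realizations, modes) holding the sorted frequencies of each realization, and the index convention is zero-based).
\begin{verbatim}
import numpy as np

def spacing_histogram(omega, lo, hi):
    s = np.diff(omega[:, lo:hi + 1], axis=1)  # consecutive spacings
    s = s / s.mean(axis=0)                    # normalise by the ensemble mean
    return np.histogram(s.ravel(), bins=30, density=True)

def wigner_surmise(s):
    # GOE surmise; indistinguishable from the Laguerre result at this scale
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)
\end{verbatim}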
Fig.~\ref{fig:spacing} shows that the numerical level spacing distribution
is consistent with the prediction of the Laguerre random matrix
model and hence is a validation of that model. However, the level
spacing distribution is not able to distinguish between the Laguerre
model and the GOE, and is therefore a less sharp test of the Laguerre
model than the density of states autocorrelation to which we now
turn.
In order to calculate these correlations it is convenient to rewrite
the distribution in Eq. (\ref{gaussian}) for the case $M = N + 1$ in
the form
\begin{equation}
P(x_1, x_2, \ldots, x_N) \propto \prod_{i < j} | x_i - x_j | \prod_{k} \exp( - x_k)
\label{eq:laguerre}
\end{equation}
where $x_i = \omega_i^2$. The one-point and two-point correlation
functions are defined as
\begin{equation}
R_1 (x) = N \int_0^\infty d x_2 \ldots \int_0^\infty d x_N P(x, x_2, \ldots, x_N)
\label{eq:onepoint}
\end{equation}
and
\begin{equation}
R_2 (x, y) = N(N-1) \int_0^\infty d x_3 \ldots \int_0^\infty d x_N P(x, y, x_3,\ldots, x_N)
\label{eq:twopoint}
\end{equation}
The plasma analogy shows that calculation of the correlation
functions in Eq. (\ref{eq:onepoint}) and Eq. (\ref{eq:twopoint}) is a
formidable problem in classical statistical mechanics. Nonetheless
it has been done exactly by Nagao and Slevin~\cite{nagao} by rewriting Eq. (\ref{eq:laguerre}) in the
form of a quaternion determinant and performing the integrals by a
generalization of a theorem of Dyson~\cite{dyson} on integration over quaternion
determinants. Before we give those results we first describe the
unfolding procedure.
$R_1(x)$ is evidently the density of states, and we now define
\begin{equation}
\xi (x) = \int_0^x d x^\prime \; R_1 (x^\prime)
\label{eq:staircase}
\end{equation}
where $\xi(x)$ is the cumulative density of states.
The unfolded two point correlation function is then defined as
\begin{equation}
L_2 (\xi_1, \xi_2) = \frac{1}{R_1[ x (\xi_1) ]} \frac{1}{R_1 [ x(\xi_2) ]} R_2 [ x (\xi_1), x(\xi_2)]
\label{eq:ell2}
\end{equation}
It is easy to see that if $L_1 (\xi)$ is similarly defined then
$L_1 (\xi ) = 1$ showing that after unfolding the density of states
is uniform with a mean level spacing of unity. The exact expression
for $L_2$ is rather lengthy and is relegated to Appendix B.
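In practice the unfolding map $\xi$ is approximated from the data itself, by replacing $R_1$ with the empirical density of the merged ensemble. A minimal sketch (in Python; spectra is a placeholder array whose rows are the sorted frequencies of the individual realizations):
\begin{verbatim}
import numpy as np

def unfold(spectra):
    # spectra: shape (realizations, modes), each row sorted
    R = spectra.shape[0]
    merged = np.sort(spectra.ravel())
    # xi(x) = (number of merged levels <= x) / R, so that the
    # unfolded spectrum has mean level spacing equal to one
    return np.searchsorted(merged, spectra, side='right') / R
\end{verbatim}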
In Figure~\ref{fig:corrln} we plot $1 - L_2(\xi, 0)$ as a function
of $\xi$ for the Laguerre ensemble. The corresponding plot for the
Gaussian Orthogonal ensemble is also shown. When $\xi$ is large,
the two curves approach each other. Indeed, the analytical expression
for $1 - L_2(\xi, 0)$ for $\xi \gg 1$ in the Laguerre ensemble can be
verified to be
\begin{eqnarray}
1 - L_2 (\xi, 0) & = & \frac{\sin^2 \pi \xi}{ ( \pi \xi )^2} \nonumber \\
& + & \left[ \frac{1}{2} - \int_0^\xi d y \frac{ \sin \pi y}{\pi y} \right]
\left[ \frac{ \cos \pi \xi}{\xi} - \frac{1}{\pi \xi^2} \sin \pi \xi \right]
\nonumber \\
\label{eq:goetwopoint}
\end{eqnarray}
which coincides with the form for the same quantity in the GOE.
Intuitively, the reason for this coincidence can be understood in
terms of the plasma analogy. If the particles are deep in the
interior of the plasma the edge effects produced by
the image charges are screened and the Laguerre plasma becomes
indistinguishable from the GOE plasma.
Although the two curves coincide in the asymptotic limit
$\xi \gg 1$, Fig~\ref{fig:corrln} shows that there is a range
of $\xi$ values where the predictions of the Laguerre ensemble
differ significantly from the GOE. Hence comparison to the
correlation function for the numerical data for the jammed
granular spectrum provides a stringent test that is able
to distinguish between the Laguerre ensemble and the GOE.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{corrln.jpg}
\end{center}
\caption{Correlation function $1 - L_2(\xi, 0)$ from random matrix theory as a function of $\xi,$ out
to $\xi=3.0,$ i.e. out to a distance within which there should, on average, be three
eigenvalues. This is compared to a histogram of the vibrational frequencies from the numerical
simulations. The analytical results for the Gaussian orthogonal ensemble and the Laguerre ensemble
are both shown, and the latter is clearly superior.}
\label{fig:corrln}
\end{figure}
The numerical data are analyzed as follows.
The vibrational frequencies obtained in the numerical simulations
from all the 1000 realizations of the jammed state are merged
together, and bins are constructed with 200 eigenvalues in each,
i.e. there is an average of 0.2 eigenvalues per realization of the
jammed state in each bin. Next, we calculate
\begin{equation}
1 - \frac{1}{(0.2)^2} [\langle n_0 n_i\rangle - \langle n_0 \rangle\delta_{i0}]
\end{equation}
where $n_i$ is the number of vibrational frequencies in the $i^{{\rm th}}$ bin
in any given realization, and the average is over the realizations
of the jammed state. The histogram of the values obtained for this
discretized correlation function is compared with the analytical
predictions from the Laguerre ensemble and the GOE, and as seen in Fig.~\ref{fig:corrln}, the Laguerre
ensemble fits the data very well within the error bars (while the
GOE does not).
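A sketch of this estimator (in Python) is given below; it works with the unfolded spectrum of the previous section and fixed-width bins of $0.2$, which is effectively the same construction as binning the merged raw frequencies with $200$ eigenvalues per bin.
\begin{verbatim}
import numpy as np

def binned_correlation(xi, per_bin=0.2, nbins=15):
    # xi: unfolded spectra, shape (realizations, modes)
    edges = np.linspace(0.0, nbins * per_bin, nbins + 1)
    counts = np.stack([np.histogram(row, bins=edges)[0] for row in xi])
    n0 = counts[:, 0].astype(float)
    corr = np.array([(n0 * counts[:, i]).mean() for i in range(nbins)])
    corr[0] -= n0.mean()                  # subtract <n_0> delta_{i0}
    return 1.0 - corr / per_bin**2        # estimate of 1 - L_2 per bin
\end{verbatim}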
\section{Random Lattice Model}
\label{sec:lattice}
As discussed earlier, the extent to which the density $D(\omega)$
of vibrational frequencies for jammed granular materials agrees
with the predictions of random matrix theory is not a good test of
the applicability of random matrix theory to these materials, because
the distribution of eigenvalues is a non-universal prediction of
random matrix theory: if the random matrices are not assumed to be
Gaussian, the density of eigenvalues changes.
Nevertheless, there are qualitative discrepancies between the
numerically measured $D(\omega)$ and the density of eigenvalues
$\{\omega_i\}$ obtained from random matrix theory, that are worth
trying to address. As seen in Eq.(\ref{gap}), there is a gap in the
spectrum of eigenvalues near $\omega = 0$ above the jamming transition,
where $M > N$ in Eq.(\ref{gap}). By contrast, in the numerical simulations,
$D(\omega)$ is only zero at $\omega
= 0$ (except at the jamming transition), and increases linearly for
small $\omega.$ Second, at the jamming transition, $D(\omega)$ has
a cusp-like peak at $\omega = 0,$ while random matrix theory predicts
a flat $D(\omega)$ near $\omega = 0$ at the transition. Finally, the numerical
$D(\omega)$ has a pronounced boson peak at $\omega = \omega_0\neq 0,$
which is not reproduced by random matrix theory.
Various attempts have been made to construct random matrix models
that reproduce the properties of granular systems more closely.
Ref.~\cite{Manning} studies several different random matrix ensembles
and their effect on the structure of eigenmodes with frequencies
in the boson peak. Ref.~\cite{Middleton} uses weighted Laplacian
dynamical matrices to reproduce an intermediate regime in $D(\omega)$
(between the boson peak and the low frequency behavior) and $\sim
\omega^4$ scaling of the density of states in this regime.
Ref.~\cite{Beltukov1} uses a combination of a random and a regular
matrix for the dynamical matrix, to eliminate the gap near $\omega
= 0.$ Also, Ref.~\cite{Parisi} has studied an abstract model that
they argue is in the appropriate universality class.
In Eq.(\ref{AAT}), we have assumed that the entries in the matrix
$A$ are all independent random variables drawn from a Gaussian
distribution. In reality, since the matrix $A$ is supposed to be a
mapping from coordinates to contact forces, and only two particles
are associated with a contact, only the entries associated with two
particles (with $d$ entries per particle for $d$-dimensional particles)
should be non-zero in any column of $A$. Thus $A$ should be a sparse
matrix.
One could choose the two particles associated with each force
randomly, but this would result in the system breaking up into separate
clusters, not connected to each other, leading to an overabundance of zero
modes. Moreover, the concept of adjacency would not be respected: two randomly
chosen particles would be likely to be far apart, and should not have been
allowed to share a contact.
Instead of choosing the particles associated with a force randomly,
we approximate the system as being equivalent to a
triangular lattice (with periodic boundary conditions), but with
each particle displaced from the position where it would be in a perfect
triangular lattice. This randomizes the orientation of the contacts between
particles.
To be specific, particles are arranged in successive horizontal
layers, with each particle having contacts with the two particles
immediately below it: slightly to the left and slightly to the
right. Shifting the numbering in each row by half a lattice spacing relative to its
predecessor, the particle $(i, j)$ connects to the particles numbered $(i, j - 1)$ and $(i
+ 1, j - 1)$ with periodic boundary conditions in both directions.
(A particle in the bottom layer, $(i, 1),$ connects with $(i - L/2, L)$ and $(i + 1 - L/2, L)$ in
the topmost layer, where $L$ is the number of layers.)
In addition, each particle has a probability of connecting to its
neighbor on the right in the same row: $(i, j) \rightarrow (i + 1,
j).$ All contacts are bidirectional, i.e. each particle is connected
to two particles in the row above it, two in the row below it, and
either zero, one or two adjacent particles in the same row. The
spring constant associated with each contact is chosen randomly,
and the bond angles are also chosen randomly with the constraint
that the bond connecting a particle to its left (right) neighbor
in the row below points down and to the left (right), while the bond
connecting a particle to its neighbor in the same row on the left (right)
is more horizontal than the bond to its neighbor below and to the
left (right). This model is similar to the model introduced for free-standing
granular piles~\cite{Narayan}, a vector generalization of the scalar-force
``q-Model'' used to model such systems~\cite{Coppersmith}.
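The following sketch (in Python) indicates how such a random lattice matrix $A$ can be assembled and diagonalized. The distributions chosen for the spring constants and bond angles are placeholders that merely respect the constraints described above, and the half-row offset in the periodic wrap between the top and bottom layers is absorbed into the indexing for brevity.
\begin{verbatim}
import numpy as np

def rlm_frequencies(L=24, p_horizontal=0.5, seed=0):
    rng = np.random.default_rng(seed)
    N = 2 * L * L                       # two coordinates per site
    rows = []

    def idx(i, j):                      # first coordinate of site (i, j)
        return 2 * ((i % L) + L * (j % L))

    def contact(s1, s2, theta, k):
        # one row of A: the bond elongation is the relative displacement
        # projected on the unit bond direction (cos theta, sin theta)
        row = np.zeros(N)
        d = np.sqrt(k) * np.array([np.cos(theta), np.sin(theta)])
        row[s1:s1 + 2] = d
        row[s2:s2 + 2] = -d
        rows.append(row)

    for j in range(L):
        for i in range(L):
            here = idx(i, j)
            # contacts with the two particles below (down-left, down-right)
            contact(here, idx(i, j - 1),
                    -np.pi / 2 - rng.uniform(0.1, 0.6), rng.uniform(0.5, 1.5))
            contact(here, idx(i + 1, j - 1),
                    -np.pi / 2 + rng.uniform(0.1, 0.6), rng.uniform(0.5, 1.5))
            if rng.random() < p_horizontal:   # optional same-row contact
                contact(here, idx(i + 1, j),
                        rng.uniform(-0.1, 0.1), rng.uniform(0.5, 1.5))

    A = np.array(rows)                  # M x N, with M/N = 1 + p_horizontal/2
    return np.linalg.svd(A, compute_uv=False)   # the frequencies omega
\end{verbatim}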
This Random Lattice Model (RLM) with $24 \times 24$ sites was simulated in this manner, and
the vibrational frequencies from 100 different realizations of
randomness were merged and plotted as a histogram. In two dimensions,
the number of coordinate degrees of freedom is $N = 2 \times 24\times
24 = 1152,$ which is comparable to the 800 frequencies that were present
in each system in the dynamical simulations~\cite{Corey}. The ratio
of the number of contact forces to the number of coordinate degrees
of freedom, which corresponds to $M/N,$ increases from 1 to 1.5 as
the probability of establishing contacts within the same layer
increases from 0 to 1.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{horiz_sqrt_ev08.pdf}
\includegraphics[width=\columnwidth]{horiz_sqrt_ev09.pdf}
\includegraphics[width=\columnwidth]{horiz_sqrt_ev10.pdf}
\end{center}
\caption{Histogram of vibrational frequencies obtained from the random
lattice model described in this paper. 100 realizations of $24 \times 24$ random
triangular lattices were constructed, with the ratio $M/N$ equal to
1.1 (top), 1.05 (middle) and 1.0 (bottom) respectively.}
\label{fig:randomlattice}
\end{figure}
The results are shown in Figure~\ref{fig:randomlattice}. The
transition from $D(\omega = 0) = 0$ when $M/N \neq 1$ to $D(\omega = 0) \neq 0$
when $M/N = 1,$ which is
seen in random matrix theory and in the dynamical simulations, is also seen in
the RLM density of states. But in addition,
there is a cusp in $D(\omega=0)$ at the transition point, as is
seen in the dynamical simulations~\cite{Corey}. The boson peak
at $\omega\neq 0$ seen in the simulations is also reproduced for
the lattice model, being most pronounced at the transition point.
Although there is still a gap in the Random Lattice Model frequency
spectrum near $\omega = 0,$ it is approximately half what one would
predict from random matrix theory for the corresponding $M/N.$
Moreover, it is clear that it is a finite size effect: the global
translational invariance of the random lattice with periodic boundary
conditions ensures that there are two zero modes (seen clearly in
the first plot in Figure~\ref{fig:randomlattice}), and the locality
of the connections that are made ensures that long wavelength
oscillations have low frequencies. This is verified by increasing
$N$ and checking that the gap decreases even though $M/N$ is held
constant.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{newmodelcorr.jpg}
\end{center}
\caption{Correlation function $1 - L_2(\xi, 0)$ for the Random
Lattice Model introduced in this paper and for the Laguerre ensemble,
at the transition point (i.e. with square random matrices). Good
agreement is seen between the two. The correlation function for the
Gaussian Orthogonal ensemble is also shown for comparison.}
\label{fig:RLMcorrlns}
\end{figure}
We see that the Random Lattice Model has the same change in $D(\omega
= 0)$ at the jamming transition that is seen in the dynamical
numerical simulations and in random matrix theory. However, it also
reproduces other qualitative features of the numerical $D(\omega)$
that random matrix theory did not. As seen in Figure~\ref{fig:RLMcorrlns}
and in Figure~\ref{fig:RLMspacings}, the distribution of spacings
between consecutive frequencies is found to be the same as for the
random matrix ensemble, consistent with the Wigner surmise, and the
correlation function $1 - L_2(\xi,0)$ at $M/N = 1$ matches that
obtained for the Laguerre ensemble. Thus the random lattice model
retains the positive features of random matrix theory, while curing
its problems.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{newmodelspacing.jpg}
\end{center}
\caption{Level spacings for the Random Lattice Model and a
fit to the Wigner surmise for the Gaussian Orthogonal
Ensemble. Spacings from the fifth to the fifteenth normal
mode frequencies are normalized, as discussed in the paper,
and combined to create the histogram. The Wigner surmise
fits the distribution very well, but as discussed in the
paper, the fit applies equally to the Laguerre ensemble.}
\label{fig:RLMspacings}
\end{figure}
\section{Conclusions}
In this paper, we show that a random matrix approach can be used
successfully to calculate the correlations between vibrational
frequencies in a granular system near the jamming transition, if
the matrix ensemble is chosen correctly. By modifying the random
matrices according to physical considerations, a Random Lattice
Model is constructed, which retains the correlation functions of
random matrix theory and also successfully reproduces all the
qualitative features in the density of vibrational frequencies.
Such random lattice models may be more broadly applicable to granular
materials.
\begin{acknowledgments}
The authors gratefully acknowledge data provided
by Kyle VanderWerf and Corey O'Hern for numerical simulations
on granular systems, to which the models in this paper were
compared. Useful conversations with Satya Majumdar are also
acknowledged.
\end{acknowledgments}
\section{Introduction}
Let $H$ be a (complex and separable) Hilbert space, and denote
the operator norm by $\|\cdot\|_\infty$. A function $f:\mathbb{R}\to \mathbb{C}$
is said to be \emph{operator Lipschitz} if there exists a constant $C_f$ such that
\begin{equation*}
\|f(A)-f(B)\|_\infty \leq C_f\|A-B\|_\infty,\quad A,B\in \mathcal{B}_{\mathrm{sa}}(H)
\end{equation*}
where $\mathcal{B}_{\mathrm{sa}}(H)$ denotes the space of all bounded self-adjoint linear operators on $H$. It has been
known since the work of Farforovskaya that not all Lipschitz
functions are operator Lipschitz \cite{Farforovskaja-1972} and it was later discovered that even the absolute value function $f(t) = |t|$ is not operator Lipschitz \cite{Kato1973,Davies-jlms-1988}. The problem of characterising the class
of operator Lipschitz functions has received considerable attention,
with early contributions from Daletskii and Krein \cite{Daletskii-Krein-1951,Daletskii-Krein-1956} and substantial advances by Birman, Solomyak \cite{Birman-Solomyak-I,Birman-Solomyak-II,Birman-Solomyak-III}, Aleksandrov, and Peller \cite{Aleksandrov-Peller-holder-2010,Aleksandrov-Peller-holder-zygmund-2010,Peller-besov-1990}. Some surveys on the topic are \cite{Aleksandrov-Peller-survey,Birman-Solomyak-2003,Peller-survey-2010,SkripkaTomskova}.
At present no analytic condition on $f$ that is both necessary and sufficient for $f$ to be operator Lipschitz is known; however, it has been proved by Peller that it is sufficient for $f$ to be Lipschitz and in the homogeneous Besov class $\dot{B}^{1}_{\infty,1}(\mathbb{R})$
\cite[Theorem 2]{Peller-besov-1990}.
In other words, it suffices that $f$ be Lipschitz and
\begin{equation*}
\int_0^\infty \sup_{t \in \mathbb{R}} |f(t+h)-2f(t)+f(t-h)|\frac{dh}{h^2} < \infty.
\end{equation*}
Slightly weaker sufficient conditions are due to Arazy, Barton and Friedman \cite{Arazy-Barton-Friedman-1990}, \cite[Section 3.13]{Aleksandrov-Peller-survey}.
A more general problem which has also been of interest to many authors involves Lipschitz estimates
in operator ideals, the most important of which are the Schatten-von Neumann ideals.
For $0 < p < \infty$ the Schatten-von Neumann ideal $\mathcal{L}_p$ is defined as the class of operators $T$ on $H$ with $\|T\|_p := \mathrm{Tr}(|T|^p)^{1/p} < \infty$, where $\mathrm{Tr}$ is the operator trace. A function $f$ is said to be $\mathcal{L}_p$-operator Lipschitz if there is a constant $C_{f,p}>0$ such that
\begin{equation}\label{lipschitz_estimate}
\|f(A)-f(B)\|_p \leq C_{f,p}\|A-B\|_p,\quad A,B\in \mathcal{B}_{\mathrm{sa}}(H),\;A-B\in \mathcal{L}_p.
\end{equation}
It is well-known that all Lipschitz functions are $\mathcal{L}_2$-operator Lipschitz, and that the class of $\mathcal{L}_1$-operator Lipschitz functions coincides with the class of
operator Lipschitz functions.
It is now known that if $1 < p <\infty$ then for \eqref{lipschitz_estimate} to hold it is necessary and sufficient that $f$ be Lipschitz \cite{PS-acta}.
It is also known that Lipschitz functions satisfy a weak-type estimate in $\mathcal{L}_1$ \cite{CPSZ1,cpsz}.
By contrast, the range $0 < p < 1$ is poorly understood. The primary obstacle is that for this range of $p$, the ideal $\mathcal{L}_p$ is not a Banach space, but merely a quasi-Banach space.
The failure of the triangle inequality makes many of the methods used in the $p \geq 1$ case inapplicable. For example, Peller's proof in \cite{Peller-besov-1990}
of the sufficiency of $f \in \dot{B}^1_{\infty,1}(\mathbb{R})$ for a Lipschitz function to be operator Lipschitz is based on a representation of $f(A)-f(B)$ as an operator-valued integral. Since $\mathcal{L}_p$ is a quasi-Banach space when $p<1$, the usual theories of Bochner or weak integration break down for $\mathcal{L}_p$-valued functions and it does not appear to be possible to adapt the proof of \cite[Theorem 2]{Peller-besov-1990} to the quasi-Banach case. Nonetheless, a number of important results for $0 < p < 1$ have been found by Rotfel'd \cite{Rotfeld1968,Rotfeld-trudy-1977}, Aleksandrov and Peller \cite{Peller-p-lt-1-1987,Aleksandrov-Peller-hankel-and-toeplitz-2002,Aleksandrov-Peller-acb-2020}, and Ricard \cite{Ricard-2018}.
For $0 < p < 1$ some results are known in the corresponding theory of operator Lipschitz functions of unitary operators. Peller has proved \cite[Theorem 1]{Peller-p-lt-1-1987} that if $0 < p \leq 1$ and $\phi \in B^{\frac{1}{p}}_{\infty,p}(\mathbb{T})$ (a Besov
space on the unit circle) then for all unitary operators $U$ and $V$ on $H$ with $U-V \in \mathcal{L}_p$ we have the inequality
\begin{equation}\label{circle_lipschitz}
\|\phi(U)-\phi(V)\|_p \leq c_p\|\phi\|_{B^{\frac{1}{p}}_{\infty,p}(\mathbb{T})}\|U-V\|_p.
\end{equation}
Peller also proved \cite[Theorem 3]{Peller-p-lt-1-1987} that the condition $\phi \in B^{1/p}_{p,p}(\mathbb{T})$ is necessary for $\phi$ to satisfy a Lipschitz estimate of the form \eqref{circle_lipschitz} (but possibly with a different constant).
In 1991, Peller conjectured that a similar result holds for functions on $\mathbb{R}$, namely that if $f \in \dot{B}^{\frac{1}{p}}_{\infty,p}(\mathbb{R})$ then $f$
is $\mathcal{L}_p$-operator Lipschitz \cite[Page 14]{Peller-besov-1990}.
Via the Cayley transform, it is possible to directly infer sufficient conditions for a function $f:\mathbb{R}\to\mathbb{C}$ to satisfy \eqref{lipschitz_estimate} from \eqref{circle_lipschitz}.
However, we are not aware of any previous detailed study of $\mathcal{L}_p$-operator Lipschitz functions on $\mathbb{R}$ for $0 < p < 1.$ The following surprising example (proved in Section \ref{periodic_section}) is evidence that the
theory is in fact very different from the case of functions on $\mathbb{T}$.
\begin{theorem}\label{periodic_failure}
Let $f:\mathbb{R}\to\mathbb{C}$ be a non-constant periodic function. Then $f$ is not $\mathcal{L}_p$-operator Lipschitz for any $p \in (0,1)$.
\end{theorem}
Theorem \ref{periodic_failure} proves that even $C^\infty$ functions with all derivatives bounded are not necessarily $\mathcal{L}_p$-operator Lipschitz for any $0 < p < 1$.
In particular, the function $h(t) = e^{it}$ is not $\mathcal{L}_p$-operator Lipschitz for any $p \in (0,1)$. This provides a counterexample to Peller's conjecture stated above, as $h$ belongs to the homogeneous Besov space $\dot{B}^{1/p}_{\infty,p}(\mathbb{R})$ for every $p \in (0,1)$.
The main result of this paper is the following theorem, which provides a general
sufficient condition for a function to be $\mathcal{L}_p$-operator Lipschitz naturally extending \cite[Theorem 2]{Peller-besov-1990}.
\begin{theorem}\label{main_sufficient_condition}
Let $0 < p \leq 1$, and let $f$ be a Lipschitz function on $\mathbb{R}$ belonging to the homogeneous Besov space $\dot{B}^{\frac{1}{p}}_{\frac{p}{1-p},p}(\mathbb{R})$. There exists a constant $C_p > 0$ such that for
all bounded self-adjoint operators $A$ and $B$ with $A-B\in \mathcal{L}_p$ we have
$$
\|f(A)-f(B)\|_{p} \leq C_{p}(\|f'\|_{L_\infty(\mathbb{R})}+\|f\|_{\dot{B}^{\frac{1}{p}}_{\frac{p}{1-p},p}(\mathbb{R})})\|A-B\|_{p}.
$$
(Here, and throughout, we use the convention that $\frac{p}{1-p}=\infty$ when $p=1$.)
\end{theorem}
The assumption that $A$ and $B$ are bounded is made only for the convenience of exposition, and can very likely be removed. The constant $C_p$ does not depend on the operator norms of $A$
or $B$. Standard arguments show that Theorem \ref{main_sufficient_condition} also implies commutator estimates of the form
\begin{equation*}
\|f(A)X-Xf(A)\|_{p} \leq C_p(\|f'\|_{\infty}+\|f\|_{\dot{B}^{\frac{1}{p}}_{\frac{p}{1-p},p}})\|AX-XA\|_p
\end{equation*}
and more generally quasi-commutator estimates of the form
\begin{equation*}
\|f(A)X-Xf(B)\|_p \leq C_p(\|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{\frac{p}{1-p},p}})\|AX-XB\|_p.
\end{equation*}
See Section \ref{lp_lip_section} below for details.
In the case $p=1$, Theorem \ref{main_sufficient_condition} reduces to Peller's sufficient condition \cite[Theorem 2]{Peller-besov-1990}.
In his original proof, the function $f$ was decomposed into Littlewood-Paley components. However we have been unable to adapt these methods to $p<1$
due to the existence of non-$\mathcal{L}_p$-operator Lipschitz functions with compactly supported Fourier transform from Theorem \ref{periodic_failure}.
Our proof of Theorem \ref{main_sufficient_condition} is instead based on the decomposition of $f$ into wavelet series. Wavelet-based methods have had a considerable impact on harmonic analysis and approximation theory over the past three decades, however to our knowledge this is the first time that the wavelet decomposition has been applied in the study of operator Lipschitz functions. We note tangentially that wavelets were used by Peng in the related topic of integral multipliers \cite{Peng-wavelets-1993}, although otherwise this potentially very fruitful technique has yet to be exploited.
We do not discuss necessary conditions here, however a necessary condition for $f$ to be $\mathcal{L}_p$-operator Lipschitz for $0 < p \leq 1$ in terms of Hankel operators has been found by Peller, see \cite[Theorem 6]{Peller-besov-1990} for details.
A related theme is operator H\"older continuity, which has been studied extensively by Aleksandrov and Peller \cite{Aleksandrov-Peller-holder-2010,Aleksandrov-Peller-holder-zygmund-2010}.
In Sections \ref{holder_section} and \ref{weak_holder_section} we also prove a number of H\"older-type estimates, extending some of the results in \cite{Aleksandrov-Peller-holder-2010}.
For example, we prove that if $0 < \alpha <1$ and $0 < p \leq 1$, and $f$ is an $\alpha$-H\"older function belonging to $\dot{B}^{\alpha+\frac{1-p}{p}}_{\frac{p}{1-p},\infty}(\mathbb{R})$, then we have
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha},\infty} \leq C_{p,\alpha}\|f\|_{\dot{B}^{\alpha+\frac{1-p}{p}}_{\frac{p}{1-p},\infty}(\mathbb{R})}\|A-B\|_{p}^{\alpha},\quad A,B\in \mathcal{B}_{\mathrm{sa}}(H),\; A-B\in \mathcal{L}_p.
\end{equation*}
Here, $\|\cdot\|_{\frac{p}{\alpha},\infty}$ is a weak Schatten quasi-norm. This result extends \cite[Theorem 5.4]{Aleksandrov-Peller-holder-2010}, and coincides with that result for $p=1$. H\"older-type estimates of this nature
are related to those in \cite{HSZ2019,Ricard-2018}. In \cite{Ricard-2018} it was proved that for all $0 < \alpha < 1$ and $0 < p < \infty$ we have
\[
\||A|^{\alpha}-|B|^{\alpha}\|_{\frac{p}{\alpha}} \lesssim \|A-B\|_p^{\alpha},\quad A,B \in \mathcal{B}_{\mathrm{sa}}(H),\; A-B \in \mathcal{L}_{p}.
\]
Since the function $f(t) = |t|^{\alpha}$ belongs to $\dot{B}^{\alpha+\frac{1-p}{p}}_{\frac{p}{1-p},\infty}(\mathbb{R})$, the results here imply a weaker estimate than \cite{Ricard-2018} but for a wider class of functions. We discuss this relationship
in more detail in Section \ref{weak_holder_section}.
We wish to extend our gratitude to Dmitriy Zanin for many helpful discussions and suggestions relating to this paper and to Jinghao Huang for his careful reading and comments. We also wish to express our gratitude to the two anonymous reviewers whose thoughtful comments improved this article.
\section{Preliminaries}
\subsection{Operator ideals and Schur multipliers}
We recall some details concerning Schatten ideals and Schur multipliers. Additional details on Schatten $\mathcal{L}_p$ spaces may be found in \cite{GohbergKrein,Simon-trace-ideals-2005}, and for further discussion of Schur multipliers see \cite{Birman-Solomyak-2003, Pisier-book-2001, SkripkaTomskova, Ricard-2018,Aleksandrov-Peller-hankel-and-toeplitz-2002,Aleksandrov-Peller-acb-2020}. Let $H$ be a Hilbert space. Denote by $\mathcal{B}(H)$ the algebra of all bounded linear operators on $H$, with operator norm denoted $\|\cdot\|_{\infty}$. Given a compact operator $T \in \mathcal{B}(H)$, denote by $\mu(T) = \{\mu(j,T)\}_{j=0}^\infty$ the sequence of singular values of $T$, which may be defined as
\begin{equation*}
\mu(j,T) := \inf\{\|T-R\|_{\infty}\;:\; \mathrm{rank}(R)\leq j\}.
\end{equation*}
Equivalently, $\mu(T)$ is the sequence of eigenvalues of the absolute value $|T|$ arranged in non-increasing order with multiplicities.
For $0 < p < \infty$, denote by $\ell_p$ the space of $p$-summable sequences. The Schatten-von Neumann $\mathcal{L}_p$ space is the space of compact operators $T$ with singular value sequence in $\ell_p$. That is, $\mathcal{L}_p$
is the set of compact operators $T$ such that
\begin{equation*}
\|T\|_p := \mathrm{Tr}(|T|^{p})^{\frac{1}{p}} = \left(\sum_{j=0}^\infty \mu(j,T)^p\right)^{1/p} = \|\mu(T)\|_{\ell_p} < \infty.
\end{equation*}
For $p \geq 1$, this defines a Banach norm on $\mathcal{L}_p$. For $0 < p < 1$, this is only a quasi-norm obeying the $p$-triangle inequality
\begin{equation*}
\|T+S\|_p^p \leq \|T\|_p^p+\|S\|_p^p,\quad T,S \in \mathcal{L}_p.
\end{equation*}
See \cite[Proposition 6]{Kosaki-jfa-1984}, \cite{Rotfeld1968}.
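For concreteness, the quasi-norm and the $p$-triangle inequality are easy to probe numerically (a Python sketch, with illustrative matrix sizes):
\begin{verbatim}
import numpy as np

def schatten(T, p):
    # ||T||_p computed from the singular values mu(T)
    return (np.linalg.svd(T, compute_uv=False) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(1)
p = 0.5
T = rng.standard_normal((20, 20))
S = rng.standard_normal((20, 20))
assert schatten(T + S, p) ** p <= schatten(T, p) ** p + schatten(S, p) ** p
\end{verbatim}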
We will also briefly refer to weak Schatten quasi-norms. For $p\in (0,\infty)$, the weak $\mathcal{L}_p$-quasi-norm is defined by
\begin{equation}\label{weak_lp_def}
\|T\|_{p,\infty} := \sup_{n\geq 0}\;(n+1)^{\frac{1}{p}}\mu(n,T).
\end{equation}
\subsection{Schur multipliers of $\mathcal{L}_p$}
Let $n\geq 1$. The Schur product of two matrices $A,B \in M_n(\mathbb{C})$ is defined as the entry-wise product
\begin{equation*}
A\circ B = \{A_{j,k}B_{j,k}\}_{j,k=1}^n.
\end{equation*}
For $0 < p \leq 1$, the $\mathcal{L}_p$-bounded Schur multiplier norm of $A$ is defined as
\begin{equation*}
\|A\|_{\mathrm{m}_p} := \sup_{\|B\|_p\leq 1} \|A\circ B\|_{p}.
\end{equation*}
Note that
\begin{equation}\label{algebra}
\|A\circ B\|_{\mathrm{m}_p} \leq \|A\|_{\mathrm{m}_p}\|B\|_{\mathrm{m}_p}.
\end{equation}
The $p$-subadditivity of the $\mathcal{L}_p$-quasi-norm readily implies that
\begin{equation}\label{p-prop}
\|A+B\|_{\mathrm{m}_p}^p \leq \|A\|_{\mathrm{m}_p}^p+ \|B\|_{\mathrm{m}_p}^p.
\end{equation}
For $p \leq 1$, the $\mathrm{m}_p$-quasi-norm can be computed using rank one matrices. For $1\leq n\leq \infty$, we denote by $\ell_2^n$ either $\mathbb{C}^n$ if $n<\infty$ or $\ell_2(\mathbb{N})$ if $n=\infty$.
\begin{lemma}\label{rank_one_suffices}
Let $A \in M_{n}(\mathbb{C})$ where $1\leq n\leq \infty$ and assume that $0 < p\leq 1$. Then
\begin{equation*}
\|A\|_{\mathrm{m}_p} = \sup_{\|\xi\|\leq 1,\, \|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_{p}.
\end{equation*}
The supremum here is over vectors $\xi, \eta$ in the unit ball of $\ell_2^n$, and $\xi\otimes \eta$ denotes the rank one matrix
\begin{equation*}
(\xi\otimes \eta)_{j,k} = \xi_j\eta_k,\quad 1\leq j,k\leq n
\end{equation*}
where $\xi_j$ and $\eta_k$ denote the $j$ and $k$th entries of $\xi$ and $\eta$ respectively.
\end{lemma}
\begin{proof}
The matrix $\xi\otimes \eta$ has rank one, and its only non-zero singular value equals $\|\xi\|\|\eta\|$. Therefore,
\begin{equation*}
\|\xi\otimes \eta\|_p = \|\xi\|\|\eta\|\leq 1.
\end{equation*}
It follows that
\begin{equation*}
\sup_{\|\xi\|,\|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_p \leq \|A\|_{\mathrm{m}_p}.
\end{equation*}
Using the Schmidt decomposition, if $B \in M_{n}(\mathbb{C})$ then there exist sequences $\{\xi_j\}_{j=0}^{n-1}$, $\{\eta_j\}_{j=0}^{n-1}$
of unit vectors in $\ell_2^n$ such that
\begin{equation*}
B = \sum_{j=0}^{n-1} \mu(j,B)\xi_j\otimes \eta_j.
\end{equation*}
By the $p$-subadditivity of the $\mathcal{L}_p$-quasi-norm, we have
\begin{equation*}
\|A\circ B\|_p^p \leq \sum_{j=0}^{n-1} \mu(j,B)^p\|A\circ (\xi_j\otimes \eta_j)\|_p^p \leq \left(\sum_{j=0}^{n-1} \mu(j,B)^p\right)\sup_{\|\xi\|,\|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_p^p.
\end{equation*}
By definition, $\sum_{j=0}^{n-1} \mu(j,B)^p = \|B\|_p^p$. Therefore
\begin{equation*}
\|A\|_{\mathrm{m}_p} \leq \sup_{\|\xi\|,\|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_p.
\end{equation*}
\end{proof}
A property of the $\mathrm{m}_p$-Schur norm that we shall use frequently is the following:
\begin{lemma}
Let $A\in M_n(\mathbb{C})$, where $1\leq n\leq \infty$. If $A'$ is a submatrix of $A$, then
\begin{equation*}
\|A'\|_{\mathrm{m}_p}\leq \|A\|_{\mathrm{m}_p}.
\end{equation*}
\end{lemma}
It follows that adding rows or columns to a matrix cannot decrease the $\mathrm{m}_p$-norm.
Following the notation of Aleksandrov and Peller \cite{Aleksandrov-Peller-hankel-and-toeplitz-2002}, for $0 < p \leq 1$ a conjugate index $p^\sharp$
is defined by
\begin{equation*}
p^\sharp := \begin{cases}
\frac{p}{1-p},\quad p < 1,\\
\infty,\quad p=1.
\end{cases}
\end{equation*}
That is, $p^{\sharp}$ is the unique element of $(0,\infty]$ such that
\begin{equation*}
\frac{1}{p} = \frac{1}{p^{\sharp}}+1.
\end{equation*}
As an application of the H\"older inequality for Schatten ideals \cite[Property 2, page 92]{GohbergKrein}, \cite[Lemma 1]{Kosaki-jfa-1984}, it follows that
\begin{equation}\label{operator_holder_inequality}
\|AB\|_p \leq \|A\|_{p^{\sharp}}\|B\|_1,\quad 0 < p \leq 1.
\end{equation}
Therefore, Lemma \ref{rank_one_suffices} implies that
\begin{equation}\label{psharp_upper_bound}
\|A\|_{\mathrm{m}_p} = \sup_{\|\xi\|,\|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_p \leq \|A\|_{p^{\sharp}}.
\end{equation}
(c.f. \cite[Theorem 3.1]{Aleksandrov-Peller-hankel-and-toeplitz-2002}).
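One way to verify \eqref{psharp_upper_bound} directly is as follows. Writing $D_\xi$ and $D_\eta$ for the diagonal matrices with diagonal entries $\{\xi_j\}$ and $\{\eta_k\}$, we have $A\circ (\xi\otimes \eta) = D_\xi A D_\eta$, and the three-factor H\"older inequality for Schatten ideals (with $\frac{1}{p} = \frac{1}{2}+\frac{1}{p^{\sharp}}+\frac{1}{2}$) gives
\begin{equation*}
\|A\circ (\xi\otimes \eta)\|_p = \|D_\xi A D_\eta\|_p \leq \|D_\xi\|_2\|A\|_{p^{\sharp}}\|D_\eta\|_2 = \|\xi\|\|A\|_{p^{\sharp}}\|\eta\|.
\end{equation*}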
There also holds the H\"older inequality for sequences
\begin{equation}\label{sequential_holder}
\|xy\|_{\ell_p} \leq \|x\|_{\ell_{p^\sharp}}\|y\|_{\ell_1},\quad x \in \ell_{p^\sharp},\, y \in \ell_1.
\end{equation}
The next lemma is a very slight modification of \cite[Theorem 3.2]{Aleksandrov-Peller-hankel-and-toeplitz-2002}.
For $1\leq n\leq \infty$, denote by $\{e_j\}_{j=0}^{n-1}$ the canonical basis of $\ell_2^n$.
The matrix basis of $M_n(\mathbb{C})$ shall be denoted $\{e_j\otimes e_k\}_{j,k=0}^{n-1}$. A matrix $X \in M_n(\mathbb{C})$ is said to be diagonal
with respect to $\{e_j\otimes e_k\}_{j,k=0}^{n-1}$ when $\langle e_j,Xe_k\rangle = 0$ for $j\neq k$.
\begin{lemma}\label{block_diagonal_formula}
Let $A\in M_n(\mathbb{C})$, $1\leq n\leq \infty$ be a matrix having a generalised block diagonal structure in the following sense:
There exist pairwise orthogonal projections $\{p_j\}_{j=1}^N$ and $\{q_j\}_{j=1}^N$, diagonal with respect to the matrix basis $\{e_j\otimes e_k\}_{j,k=0}^{n-1}$
such that $\sum_{j=1}^N q_j = \sum_{j=1}^N p_j = 1$ and
\begin{equation*}
A = \sum_{j=1}^N p_jAq_j.
\end{equation*}
Then for $0 < p < 1$ we have
\begin{equation*}
\|A\|_{\mathrm{m}_p} \leq \left(\sum_{j=1}^N \|p_jAq_j\|_{\mathrm{m}_p}^{p^\sharp}\right)^{\frac{1}{p^\sharp}}
\end{equation*}
and
\begin{equation*}
\|A\|_{\mathrm{m}_1} \leq \max_{1\leq j\leq N} \|p_jAq_j\|_{\mathrm{m}_1}.
\end{equation*}
\end{lemma}
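We refer to \cite{Aleksandrov-Peller-hankel-and-toeplitz-2002} for the details; for the reader's convenience, the following is a sketch of one possible argument for $0<p<1$, based on the rank one reduction of Lemma \ref{rank_one_suffices}. For unit vectors $\xi,\eta$, the operators $(p_jAq_j)\circ(\xi\otimes \eta) = p_j(A\circ(\xi\otimes\eta))q_j$ act between pairwise orthogonal subspaces, so their $\mathcal{L}_p$-quasi-norms add:
\begin{equation*}
\|A\circ (\xi\otimes \eta)\|_p^p = \sum_{j=1}^N \|(p_jAq_j)\circ (\xi\otimes \eta)\|_p^p \leq \sum_{j=1}^N \|p_jAq_j\|_{\mathrm{m}_p}^p\,\|p_j\xi\|^p\,\|q_j\eta\|^p,
\end{equation*}
where the last step uses Lemma \ref{rank_one_suffices} and the identity $(p_jAq_j)\circ(\xi\otimes \eta) = (p_jAq_j)\circ((p_j\xi)\otimes (q_j\eta))$. H\"older's inequality with the conjugate exponents $\frac{p^\sharp}{p}$ and $\frac{1}{p}$ (note that $\frac{p}{p^\sharp}+p=1$), followed by the Cauchy--Schwarz estimate $\sum_{j=1}^N \|p_j\xi\|\|q_j\eta\|\leq \|\xi\|\|\eta\|\leq 1$, then yields the stated bound.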
We also define the $\mathrm{m}_p$-Schur norm for matrices indexed by arbitrary, possibly infinite and uncountable sets.
\begin{definition}
If $A = \{A_{t,s}\}_{(t,s)\in T\times S}$ is an infinite matrix indexed by sets $T$ and $S$, we define
\begin{equation*}
\|A\|_{\mathrm{m}_p} := \sup_{T_0\subset T,\,S_0\subset S\text{ finite}} \|\{A_{t,s}\}_{(t,s)\in T_0\times S_0}\|_{\mathrm{m}_p}.
\end{equation*}
That is, the $\mathrm{m}_p$-norm of an infinite matrix is defined as the supremum of the $\mathrm{m}_p$-norms of all finite submatrices.
If $\|A\|_{\mathrm{m}_p} < \infty$, then the matrix $A$ is said to be an $\mathcal{L}_p$-bounded Schur multiplier.
\end{definition}
The analogue of Lemma \ref{block_diagonal_formula} holds for matrices indexed by arbitrary sets, and the analogue of \eqref{p-prop} also holds. That is,
\begin{equation*}
\|A+B\|_{\mathrm{m}_p}^p \leq \|A\|_{\mathrm{m}_p}^p +\|B\|_{\mathrm{m}_p}^p
\end{equation*}
whenever $A$ and $B$ are matrices indexed by the same sets.
\subsection{Besov spaces}\label{besov_section}
Denote by $\mathcal{S}(\mathbb{R})$ the algebra of all Schwartz class functions on $\mathbb{R}$, with its canonical Fr\'echet topology, and denote by $\mathcal{S}'(\mathbb{R})$ its topological
dual, the space of tempered distributions. Let $\Phi$ be a smooth function on $\mathbb{R}$ supported in the set
\begin{equation*}
[-2,-1+\frac{1}{7})\cup (1-\frac{1}{7},2],
\end{equation*}
and identically equal to $1$ in the set $[-2+\frac{2}{7},-1)\cup (1,2-\frac{2}{7}].$
We assume that
\begin{equation*}
\sum_{n\in \mathbb{Z}} \Phi(2^{-n}\xi) = 1,\quad \xi\neq 0.
\end{equation*}
We will use a homogeneous Littlewood-Paley decomposition $\{\Delta_n\}_{n\in \mathbb{Z}}$ where
$\Delta_n$ is the operator on $\mathcal{S}'(\mathbb{R})$ of Fourier multiplication by the function $\xi\mapsto \Phi(2^{-n}\xi)$.
For $s \in \mathbb{R}$ and $p,q\in (0,\infty]$ we consider the homogeneous Besov space $\dot{B}^s_{p,q}(\mathbb{R})$. We refer to \cite{Sawano2018,Triebel-1} for comprehensive accounts of the theory of Besov spaces.
In terms of the Littlewood-Paley decomposition $\{\Delta_j\}_{j\in \mathbb{Z}}$, a distribution $f \in \mathcal{S}'(\mathbb{R})$ is said to belong to the homogeneous Besov space
$\dot{B}^s_{p,q}(\mathbb{R})$, where $s \in \mathbb{R}$ and $p,q\in (0,\infty]$ if
\begin{equation}\label{besov_seminorm_def}
\|f\|_{\dot{B}^s_{p,q}} := \|\{2^{js}\|\Delta_j f\|_p\}_{j\in \mathbb{Z}}\|_{\ell_q(\mathbb{Z})} < \infty.
\end{equation}
This definition follows \cite[Section 2.4]{Sawano2018}, \cite[Section 2.2.1]{Grafakos-2}, \cite[Section 5.1.3]{Triebel-1}.
This is only a seminorm, and $\|f\|_{\dot{B}^s_{p,q}} = 0$ for all polynomials $f$. The homogeneous Besov space is distinguished from the inhomogeneous Besov space $B^s_{p,q}(\mathbb{R})$,
which will not play a role in the present paper.
Note that if $f \in \dot{B}^s_{p,q}(\mathbb{R})$ it will not necessarily be the case that there is an equality of distributions
\begin{equation*}
f = \sum_{n\in \mathbb{Z}} \Delta_n f.
\end{equation*}
For example, if $f$ is a polynomial then the above right hand side is zero. However, there is an equality $f = \sum_{n\in \mathbb{Z}}\Delta_n f$
in the space of distributions modulo polynomials of degree at most $L$, for any $L>s-\frac{1}{p}$, see \cite[Theorem 2.31]{Sawano2018}.
In \cite{Peller-besov-1990, Aleksandrov-Peller-survey}, a slight modification of the definition of $\dot{B}^1_{\infty,1}(\mathbb{R})$
was made, and therefore in order to properly compare our results we explain how our present conventions align with those in \cite{Aleksandrov-Peller-survey}.
Say that a distribution $f$ belongs to the modified homogeneous Besov space $\dot{B}^1_{\infty,1,\mathrm{mod}}(\mathbb{R})$ if $f \in \dot{B}^1_{\infty,1}(\mathbb{R})$
and the derivative $f'$ is expressed as
\begin{equation*}
f' = \sum_{n\in \mathbb{Z}} (\Delta_nf)'
\end{equation*}
where the series converges in the sense of distributions.
We now recall the well-known relation between $\dot{B}^1_{\infty,1,\mathrm{mod}}(\mathbb{R})$ and $\dot{B}^1_{\infty,1}(\mathbb{R})$. Recall that if $f$ is Lipschitz continuous, then $f$
is almost everywhere differentiable and $f'\in L_\infty(\mathbb{R})$ with $\|f'\|_{\infty}\leq \|f\|_{\mathrm{Lip}(\mathbb{R})}$ where $\|\cdot\|_{\mathrm{Lip}(\mathbb{R})}$
is the Lipschitz seminorm \cite[Subsection 3.1.6]{Federer-1969}. The reverse implication holds under the assumption that $f$ is absolutely continuous \cite[Corollary 2.9.20]{Federer-1969}.
\begin{lemma}
Suppose that $f$ belongs to the modified homogeneous Besov space $\dot{B}^1_{\infty,1,\mathrm{mod}}(\mathbb{R})$. Then $f$ is Lipschitz continuous.
Conversely, if $f$ is a Lipschitz function belonging to $\dot{B}^1_{\infty,1}(\mathbb{R})$, then there exists a constant $c$ with $|c|\leq \|f'\|_\infty +\|f\|_{\dot{B}^1_{\infty,1}}$ such that $f(t)-ct\in \dot{B}^1_{\infty,1, \mathrm{mod}}(\mathbb{R})$.
\end{lemma}
\subsection{$\mathcal{L}_p$-operator Lipschitz functions}\label{lp_lip_section}
Let $f$ be a Lipschitz function on $\mathbb{R}$, and let $0 < p \leq \infty$. The following assertions are equivalent:
\begin{enumerate}[{\rm (i)}]
\item{}\label{lipschitz_definition} There is a constant $c_f$ such that $\|f(A)-f(B)\|_p\leq c_f\|A-B\|_{p}$ for all bounded self-adjoint $A$ and $B$ with $A-B \in \mathcal{L}_p$,
\item{}\label{commutator_lipschitz} There is a constant $c_f'$ such that $\|[f(A),X]\|_{p} \leq c_f'\|[A,X]\|_{p}$ for all bounded self-adjoint $A$ and bounded $X$ with $[A,X] \in \mathcal{L}_p$,
\item{}\label{quasicommutator_lipschitz} There is a constant $c_f''$ such that $\|f(A)X-Xf(B)\|_p \leq c_f''\|AX-XB\|_p$ for all bounded self-adjoint $A,B$ and bounded $X$ with $AX-XB \in \mathcal{L}_p$,
\item{}\label{schur_boundedness} The matrix of divided differences $\{f^{[1]}(t,s)\}_{t,s\in \mathbb{R}}$, where $f^{[1]}$ is defined as
\begin{equation*}
f^{[1]}(t,s) := \frac{f(t)-f(s)}{t-s},\quad t\neq s\in \mathbb{R},
\end{equation*}
is a Schur multiplier of $\mathcal{L}_p$ in the sense that
\begin{equation*}
\sup_{\lambda,\mu} \|\{f^{[1]}(\lambda_j,\mu_k)\}_{j,k=0}^n\|_{\mathrm{m}_p} < \infty
\end{equation*}
where the supremum ranges over all \emph{disjoint} sequences $\lambda,\mu\subset \mathbb{R}$ and all $n\geq 1$.
\end{enumerate}
Note that the constants in each case might differ. The Schur multiplier condition in \eqref{schur_boundedness} is implied by the formally stronger assertion that $\|f^{[1]}\|_{\mathrm{m}_p}<\infty$.
For $p\geq 1$, this result has been proved in different contexts and at varying levels of generality in several places \cite[Theorem 10.1]{Aleksandrov-Peller-holder-zygmund-2010}, \cite[Corollary 5.6]{Kissin-Shulman-2005}, \cite[Theorem 3.1.1]{Aleksandrov-Peller-survey}, \cite[Theorem 3.4]{DDdPS-jfa-1997}, \cite[Lemma 2.4]{RicardArchBasel2015}.
While this fact is well-established when $\mathcal{L}_p$ is a Banach space, we are not aware of any published proof of precisely the same assertions when $\|\cdot\|_{p}$ is merely a quasi-norm, although we note that closely related statements have appeared in \cite[Section 7]{HSZ2019} and \cite{RicardArchBasel2015}. Nonetheless, the results when $p<1$ may be proved in the same way, following without any changes the proofs in \cite{DDdPS-jfa-1997}. Therefore we only state the relevant implications without proof.
The condition \eqref{schur_boundedness} may seem unfamiliar, since we only require that $\|\{f^{[1]}(\lambda_j,\mu_k)\}_{j,k=0}^n\|_{\mathrm{m}_p}$ be uniformly bounded over
all disjoint sequences $\lambda$ and $\mu$, rather than all sequences. This issue is irrelevant in the Banach case $p\geq 1$, since the diagonal matrix $\{\chi_{t=s}\}_{t,s\in \mathbb{R}}$ is an $\mathcal{L}_p$-bounded Schur multiplier
for all $p\geq 1$. This is false when $p<1$, and hence some caution is needed.
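For instance, with $\xi = \eta = n^{-\frac{1}{2}}(1,\dots,1)\in \ell_2^n$ we have $\|\xi\otimes \eta\|_p = 1$, while the Schur product of the $n\times n$ identity matrix with $\xi\otimes \eta$ is $\frac{1}{n}$ times the identity, so that
\begin{equation*}
\|\{\chi_{j=k}\}_{j,k=0}^{n-1}\|_{\mathrm{m}_p} \geq n^{\frac{1}{p}-1},
\end{equation*}
which is unbounded in $n$ precisely when $p<1$.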
Of course, \eqref{quasicommutator_lipschitz} implies both \eqref{lipschitz_definition} and \eqref{commutator_lipschitz}. It is also the case that \eqref{commutator_lipschitz} implies \eqref{quasicommutator_lipschitz}; this follows by substituting for $A$ and $X$ the operators
\begin{equation*}
\widetilde{A} := \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix},\quad \widetilde{X} := \begin{pmatrix} 0 & X \\ 0 & 0\end{pmatrix}
\end{equation*}
and using the formulas
\begin{equation*}
[\widetilde{A},\widetilde{X}] = \begin{pmatrix} 0 & AX-XB\\ 0 & 0 \end{pmatrix},\quad [f(\widetilde{A}),\widetilde{X}] = \begin{pmatrix} 0 & f(A)X-Xf(B)\\ 0 & 0 \end{pmatrix},
\end{equation*}
so that \eqref{commutator_lipschitz} implies \eqref{quasicommutator_lipschitz}, since the off-diagonal corner embedding preserves the $\mathcal{L}_p$-quasi-norm: $\|[\widetilde{A},\widetilde{X}]\|_p = \|AX-XB\|_p$ and $\|[f(\widetilde{A}),\widetilde{X}]\|_p = \|f(A)X-Xf(B)\|_p$.
The following lemma states that \eqref{lipschitz_definition} implies \eqref{commutator_lipschitz}.
\begin{lemma}\label{lipschitz_implies_commutator_lipschitz}
Let $0 < p\leq 1$. If $f$ is a Borel function on $\mathbb{R}$ such that for all bounded self-adjoint operators $A$ and $B$ on $H$ with $A-B \in \mathcal{L}_p$ we have
\begin{equation*}
\|f(A)-f(B)\|_{p} \leq c_f\|A-B\|_{p}
\end{equation*}
for some constant $c_f$ not depending on $A$ or $B$, then for all bounded self-adjoint operators $A=A^*$ and all bounded operators $X$ such that $[A,X] \in \mathcal{L}_p$ we have
\begin{equation*}
\|[f(A),X]\|_{p} \leq c_f\|[A,X]\|_{p}.
\end{equation*}
\end{lemma}
The following lemma essentially states that \eqref{lipschitz_definition} implies \eqref{schur_boundedness}.
\begin{lemma}\label{lp_lip_implies_lp_schur}
Let $0 < p \leq 1$. Suppose that $f:\mathbb{R}\to \mathbb{C}$ is a Borel function which is $\mathcal{L}_p$-operator Lipschitz. Then $f^{[1]}$ is an $\mathcal{L}_p$-bounded Schur multiplier in the sense of \eqref{schur_boundedness}.
\end{lemma}
The well-known converse result, which is that \eqref{schur_boundedness} implies \eqref{lipschitz_definition}, is as follows.
\begin{theorem}
Let $0 < p \leq 1$. Let $f:\mathbb{R}\to \mathbb{C}$ be a Borel function such that $\{f^{[1]}(t,s)\}_{t,s \in \mathbb{R}}$ is an $\mathcal{L}_p$-bounded Schur multiplier in the sense of \eqref{schur_boundedness}. Then $f$ is $\mathcal{L}_p$-operator Lipschitz.
\end{theorem}
\section{Negative results}
\subsection{Periodic functions}\label{periodic_section}
We now prove Theorem \ref{periodic_failure}. The proof proceeds via the contrapositive of Lemma \ref{lp_lip_implies_lp_schur}: we select sequences
such that the matrix $\Gamma$ constructed in the proof of Lemma \ref{lp_lip_implies_lp_schur} is not an $\mathcal{L}_p$-bounded Schur multiplier. The specific form of $\Gamma$ will be a Toeplitz matrix, and necessary and sufficient conditions for a Toeplitz matrix to be an $\mathcal{L}_p$-bounded Schur multiplier are known \cite[Theorem 5.1]{Aleksandrov-Peller-hankel-and-toeplitz-2002}. However, for the sake of being self-contained we present an elementary argument.
\begin{lemma}\label{toeplitz_is_not_schur}
Let $\varepsilon \in (0,1)$, and let $T$ be the matrix
\begin{equation*}
T = \left\{\frac{1}{\varepsilon+j-k}\right\}_{j,k\geq 0}.
\end{equation*}
Then $T$ is not an $\mathcal{L}_p$-bounded Schur multiplier for any $p \in (0,1)$.
\end{lemma}
\begin{proof}
Let $n,m \geq 1$, and consider the matrix $X_{n,m}$ defined as
\begin{equation*}
X_{n,m} = \sum_{j,k=0}^{n-1} e_{mj}\otimes e_{mk}.
\end{equation*}
Then $X_{n,m}$ is $n$ times a rank one projection, so
\begin{equation*}
\|X_{n,m}\|_{p} = n.
\end{equation*}
We also have,
\begin{equation*}
T\circ X_{n,m} = \sum_{j,k=0}^{n-1} \frac{1}{\varepsilon+m(j-k)}e_{mj}\otimes e_{mk}.
\end{equation*}
Thus if $T$ is an $\mathcal{L}_p$-bounded Schur multiplier, then there is a constant $C>0$ such that for all $n,m\geq 1$ we have
\begin{align*}
\|T\circ X_{n,m}\|_p &= \left\|\sum_{j,k=0}^{n-1} \frac{1}{\varepsilon+m(j-k)}e_{mj}\otimes e_{mk}\right\|_p\\
&= \left\|\sum_{j,k=0}^{n-1} \frac{1}{\varepsilon+m(j-k)}e_{j}\otimes e_{k}\right\|_p\\
&\leq Cn.
\end{align*}
That is, for every $n,m\geq 1$ we have
\begin{equation*}
\left\|\sum_{j,k=0}^{n-1} \frac{1}{\varepsilon+m(j-k)}e_{j}\otimes e_{k}\right\|_p \leq Cn.
\end{equation*}
Taking the limit $m\to \infty$, the off diagonal terms vanish, leaving only the diagonal. This leads to
\begin{equation*}
\|\sum_{j=0}^{n-1} \frac{1}{\varepsilon}e_{j}\otimes e_{j}\|_p = \lim_{m\to\infty} \left\|\sum_{j,k=0}^{n-1} \frac{1}{\varepsilon+m(j-k)}e_{j}\otimes e_k\right\|_p \leq Cn.
\end{equation*}
The left hand side is equal to $n^{1/p}/\varepsilon$, and therefore
\begin{equation*}
n^{1/p-1} \leq C\varepsilon
\end{equation*}
for all $n\geq 1$, which is impossible since $p < 1$.
\end{proof}
\begin{remark}
The result of \cite[Theorem 5.1]{Aleksandrov-Peller-hankel-and-toeplitz-2002} states that if $0 < p < 1$, then a Toeplitz matrix $\{t_{j-k}\}_{j,k\geq 0}$ is
a Schur multiplier of $\mathcal{L}_p$ if and only if $\{t_n\}_{n\in \mathbb{Z}}$ is the sequence of Fourier coefficients of a $p$-convex combination of point masses
on $\mathbb{T}$. In particular, $\{t_n\}_{n\in \mathbb{Z}}$ must be the sequence of Fourier coefficients of a singular measure. In the case of the matrix $T$ in Lemma \ref{toeplitz_is_not_schur}, we have $t_n = \frac{1}{\varepsilon+n},\; n\in \mathbb{Z}$, which is the sequence of Fourier coefficients of an $L_2$-function.
It follows that $T$ is not an $\mathcal{L}_p$-bounded Schur multiplier and this amounts to an alternative proof of Lemma \ref{toeplitz_is_not_schur}.
\end{remark}
Recall that if $f:\mathbb{R}\to \mathbb{C}$, we denote by $f^{[1]}$ the function on $\mathbb{R}^2\setminus \{(t,t)\;:\;t\in \mathbb{R}\}$ defined by
\begin{equation*}
f^{[1]}(t,s) = \frac{f(t)-f(s)}{t-s},\quad t \neq s.
\end{equation*}
\begin{theorem}\label{compact_theorem}
Let $f:\mathbb{R}\to \mathbb{C}$ be a non-constant periodic function.
Then the infinite matrix $\{f^{[1]}(t,s)\}_{t,s \in \mathbb{R}}$ is not an $\mathcal{L}_p$-bounded Schur multiplier for any $p \in (0,1)$ in the sense of \eqref{schur_boundedness}. That is,
\begin{equation*}
\sup_{\lambda\cap\mu = \emptyset,\;n\geq 1} \|\{f^{[1]}(\lambda_j,\mu_k)\}_{j,k=0}^n\|_{\mathrm{m}_p} = \infty.
\end{equation*}
\end{theorem}
\begin{proof}
By rescaling $f$ if necessary, we may assume without loss of generality that $f$ is $1$-periodic, and since $f$ is not constant we can select some $\varepsilon \in (0,1)$ such that $f(\varepsilon)\neq f(0)$.
Consider the following two sequences:
\begin{equation*}
\lambda_j = j+\varepsilon,\quad \mu_k = k.
\end{equation*}
Due to $f$ being $1$-periodic, we compute $f^{[1]}(\lambda_j,\mu_k)$ as
\begin{equation*}
\frac{f(\lambda_j)-f(\mu_k)}{\lambda_j-\mu_k} = \frac{f(\varepsilon)-f(0)}{\varepsilon+j-k} = \frac{1}{\varepsilon+j-k}(f(\varepsilon)-f(0)).
\end{equation*}
Since $f(\varepsilon)-f(0) \neq 0$, it follows that
\begin{equation*}
\frac{1}{\varepsilon+j-k} = \frac{f(\lambda_j)-f(\mu_k)}{\lambda_j-\mu_k} \cdot \frac{1}{f(\varepsilon)-f(0)}.
\end{equation*}
It follows that if $\{f^{[1]}(t,s)\}_{t,s \in \mathbb{R}}$ were an $\mathcal{L}_p$-bounded Schur multiplier, then the matrix $\{\frac{1}{\varepsilon+j-k}\}_{j,k\geq 0}$ would also be an $\mathcal{L}_p$-bounded Schur multiplier, but this is false due to Lemma \ref{toeplitz_is_not_schur}.
\end{proof}
Theorem \ref{compact_theorem}, combined with Lemma \ref{lp_lip_implies_lp_schur}, implies Theorem \ref{periodic_failure}.
\section{Positive results}
\subsection{Wavelet analysis}
A \emph{wavelet} is a function $\phi \in L_2(\mathbb{R})$ such that the family
\begin{equation*}
\phi_{j,k}(t) = 2^{\frac{j}{2}}\phi(2^jt-k),\quad j,k\in \mathbb{Z},\quad t \in \mathbb{R}
\end{equation*}
of translations and dilations of $\phi$ forms an orthonormal basis of $L_2(\mathbb{R})$ \cite[Definition 6.6.1]{Grafakos-1}. For example, the Haar function
\begin{equation*}
h(t) = \chi_{[0,1/2]}(t)-\chi_{(1/2,1]}(t),\quad t \in \mathbb{R}
\end{equation*}
is a wavelet.
It is a theorem of Daubechies that there exist compactly supported $C^r$-wavelets for every $r > 0$ \cite{Daubechies-wavelets-1988}, \cite[Theorem 3.8.3]{Meyer-wavelets-1992}.
For this subsection, we will fix a compactly supported wavelet $\psi$ of regularity $C^r$ for some $r>1$. In later subsections we will ask for additional smoothness on $\psi$.
Every $f\in L_2(\mathbb{R})$ admits an $L_2$-convergent wavelet decomposition
\begin{equation*}
f = \sum_{j,k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle.
\end{equation*}
This is called a wavelet series. For brevity, denote
\begin{equation*}
f_j = \sum_{k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle\in L_2(\mathbb{R}),\quad j \in \mathbb{Z}.
\end{equation*}
That is, we have the $L_2$-convergent series
\begin{equation*}
f = \sum_{j\in \mathbb{Z}} f_j,\quad f_j(t) = \sum_{k\in \mathbb{Z}} 2^{\frac{j}{2}}\psi(2^jt-k)\langle f,\psi_{j,k}\rangle,\quad t \in \mathbb{R}.
\end{equation*}
Roughly speaking, our strategy will be to bound $\|f^{[1]}\|_{\mathrm{m}_p}$ using the wavelet decomposition and \eqref{p-prop} as follows
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p}^p \leq \sum_{j\in \mathbb{Z}} \|f_j^{[1]}\|_{\mathrm{m}_p}^p.
\end{equation*}
Note that for arbitrary locally integrable functions $f$ on $\mathbb{R}$, the wavelet coefficient $\langle f,\psi_{j,k}\rangle$ is meaningful due to our assumption that $\psi$ is continuous
and compactly supported. It follows that for all locally integrable $f$, we can define
\begin{equation}\label{f_j_definition}
f_j = \sum_{k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle,\quad j\in \mathbb{Z}
\end{equation}
where, since $\psi_{j,k}$ is supported in $2^{-j}(k+\mathrm{supp}\,\psi)$, all but finitely many terms of the sum vanish on any given compact set.
The following is \cite[Theorem 3.9.2]{Cohen2003}. We use the symbol $\approx$ to denote equivalence up to constants depending only on $p$ and the choice of wavelet.
\begin{lemma}\label{disjointifying}
Let $\phi$ be an arbitrary wavelet on $\mathbb{R}$, and let $\alpha = \{\alpha_k\}_{k\in \mathbb{Z}}$ be a scalar sequence. Define
\begin{equation*}
\phi_{\alpha}(t) = \sum_{k\in \mathbb{Z}}\alpha_k\phi(t-k),\quad t \in \mathbb{R}.
\end{equation*}
Then for all $p \in (0,\infty]$ such that $\phi$ is $p$-integrable we have
\begin{equation*}
\|\phi_{\alpha}\|_p \approx \|\alpha\|_{\ell_p}.
\end{equation*}
\end{lemma}
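As a concrete illustration in the disjointly supported case: for the Haar function $h$ above, the translates $\{h(\cdot-k)\}_{k\in \mathbb{Z}}$ have disjoint supports and $\|h\|_p = 1$, so that $\|h_{\alpha}\|_p^p = \sum_{k\in \mathbb{Z}} |\alpha_k|^p = \|\alpha\|_{\ell_p}^p$ exactly.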
Lemma \ref{disjointifying} relies on the fact that the family of translates $\{\phi(\cdot-j)\}_{j\in \mathbb{Z}}$ is locally linearly independent, which holds in particular when $\phi$ is a wavelet, but would in general fail if $\phi$ were merely an arbitrary compactly supported function.
A simple consequence is the following well-known equivalence for the $L_p$-quasi-norm of $f_j$. We provide a proof for convenience.
See \cite[Proposition 6.10.7]{Meyer-wavelets-1992} for a proof in the $p\geq 1$ case.
\begin{lemma}\label{wavelet_bernstein}
Let $f$ be a locally integrable function. For every $p \in (0,\infty]$ and $j \in \mathbb{Z}$ we have
\begin{equation}\label{disjoint_supports}
\|f_j\|_p \approx 2^{j\left(\frac{1}{2}-\frac{1}{p}\right)} \left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^p \right)^{1/p}.
\end{equation}
In particular, the sequence $\{\langle f,\psi_{j,k}\rangle\}_{k\in \mathbb{Z}}$ is $p$-summable if and only if $f_j \in L_p(\mathbb{R})$.
\end{lemma}
\begin{proof}
We have
\begin{equation*}
f_j(t) = \sum_{k\in \mathbb{Z}} 2^{\frac{j}{2}}\psi(2^jt-k)\langle f,\psi_{j,k}\rangle,\quad t \in \mathbb{R}.
\end{equation*}
Therefore
\begin{equation*}
2^{-\frac{j}{2}}f_j(2^{-j}t) =\sum_{k\in \mathbb{Z}} \psi(t-k)\langle f,\psi_{j,k}\rangle.
\end{equation*}
Applying Lemma \ref{disjointifying} with $\alpha = \{\langle f,\psi_{j,k}\rangle\}_{k\in \mathbb{Z}}$ implies that
\begin{equation*}
\|2^{-\frac{j}{2}}f_j(2^{-j}\cdot)\|_p \approx \left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^p\right)^{1/p}.
\end{equation*}
Using the rule $\|f(\lambda\cdot)\|_p = \lambda^{-\frac{1}{p}}\|f\|_p$, it follows that
\begin{equation*}
2^{-\frac{j}{2}+\frac{j}{p}}\|f_j\|_p \approx \left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^p\right)^{1/p}.
\end{equation*}
\end{proof}
We note for future reference that since for $p \leq q$ and all locally integrable $f$ there holds the inequality
\begin{equation*}
\left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^q\right)^{1/q} \leq \left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^p\right)^{\frac{1}{p}}
\end{equation*}
it follows from Lemma \ref{wavelet_bernstein} that for all $j\in \mathbb{Z}$,
\begin{equation}\label{sequential_bernstein}
\|f_j\|_q \lesssim 2^{j(\frac{1}{p}-\frac{1}{q})}\|f_j\|_p,\quad p \leq q.
\end{equation}
The same holds for $q=\infty$. That is, $\|f_j\|_\infty \lesssim 2^{\frac{j}{p}}\|f_j\|_p$ for all $p < \infty$.
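Explicitly, applying \eqref{disjoint_supports} at the exponents $q$ and $p$ in turn gives
\begin{equation*}
\|f_j\|_q \approx 2^{j\left(\frac{1}{2}-\frac{1}{q}\right)}\left(\sum_{k\in \mathbb{Z}}|\langle f,\psi_{j,k}\rangle|^q\right)^{\frac{1}{q}} \leq 2^{j\left(\frac{1}{2}-\frac{1}{q}\right)}\left(\sum_{k\in \mathbb{Z}}|\langle f,\psi_{j,k}\rangle|^p\right)^{\frac{1}{p}} \approx 2^{j\left(\frac{1}{p}-\frac{1}{q}\right)}\|f_j\|_p.
\end{equation*}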
Besov spaces have very simple characterisations in terms of coefficients of wavelet series. The following is \cite[Theorem 7.20]{FrazierJawerthWeiss1991}. Related results in the inhomogeneous case are \cite[Theorem 3.7.7]{Cohen2003}, \cite[Theorem 4.7]{Sawano2018}, \cite[Theorem 1.20]{Triebel-4} (see also \cite[Section 6.10]{Meyer-wavelets-1992} for $p,q\in [1,\infty]$).
\begin{theorem}\label{besov_space_wavelet_characterisation}
Let $p,q \in (0,\infty]$ and $s \in \mathbb{R}$. Let $f$ be a locally integrable function, and let $\psi$ be a compactly supported $C^r$ wavelet for $r > |s|$. Then $f$ belongs to the homogeneous Besov class $\dot{B}^s_{p,q}(\mathbb{R})$ if and only if
\begin{equation*}
\|f\|_{\dot{B}^s_{p,q}} \approx \left(\sum_{j\in \mathbb{Z}} 2^{jsq}\|f_j\|_p^q\right)^{1/q} < \infty.
\end{equation*}
The relevant constants depend only on $s,p$ and $q$ and the wavelet.
Equivalently (via Lemma \ref{wavelet_bernstein}),
\begin{equation*}
\|f\|_{\dot{B}^s_{p,q}} \approx \left(\sum_{j\in \mathbb{Z}} 2^{jq(s+\frac{1}{2}-\frac{1}{p})}\left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^p\right)^{q/p}\right)^{1/q}.
\end{equation*}
The usual modifications are made if $p$ or $q$ is infinite.
\end{theorem}
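For later use we record the case $s=\frac{1}{p}$, $q=p$, with integrability index $p^\sharp$: provided the wavelet is $C^r$ with $r>\frac{1}{p}$, a locally integrable function $f$ belongs to $\dot{B}^{\frac{1}{p}}_{p^\sharp,p}(\mathbb{R})$ if and only if
\begin{equation*}
\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}} \approx \left(\sum_{j\in \mathbb{Z}} 2^{j}\|f_j\|_{p^\sharp}^p\right)^{\frac{1}{p}} < \infty.
\end{equation*}
At $p=1$ (so that $p^\sharp = \infty$) this reduces to $\|f\|_{\dot{B}^1_{\infty,1}} \approx \sum_{j\in \mathbb{Z}} 2^j\|f_j\|_\infty$, which is the quantity appearing in Peller's criterion below.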
Note that it is \emph{not} necessarily the case that $f \in \dot{B}^s_{p,q}(\mathbb{R})$ is equal to the sum of its wavelet series. That is, for a general locally integrable function $f$ it may not hold that
\begin{equation*}
f = \sum_{j\in \mathbb{Z}} f_j
\end{equation*}
in any sense. For example, if $f$ is a polynomial of sufficiently small degree then the above right hand side is zero \cite[Chapter 3, Proposition 4]{Meyer-wavelets-1992}. This issue is parallel
to the representation of $f$ by a Littlewood-Paley decomposition discussed in Section \ref{besov_section}.
In the next lemma, we explain how Lipschitz functions belonging to $\dot{B}^{\frac{1}{p}}_{p^\sharp,p}(\mathbb{R})$ can be expressed as a limit of wavelet series, up to a polynomial correction. We shall use the fact that if $f$
is a locally integrable function such that for all $j,k\in \mathbb{Z}$ we have
\begin{equation*}
\langle f,\psi_{j,k}\rangle = 0
\end{equation*}
then $f$ is a polynomial. This follows from the realisation of distributions modulo polynomials by wavelet series, as in \cite[Section 6, Theorem 4(ii)]{Bourdaud-ondelette-1995}.
\begin{lemma}\label{besov_realisation}
Let $f$ be a Lipschitz function on $\mathbb{R}$ such that $f \in \dot{B}^{\frac{1}{p}}_{p^\sharp,p}(\mathbb{R})$, where $0 < p \leq 1$. There exists a constant $c\in \mathbb{C}$ such that
\begin{equation*}
f(t) = f(0)+ct+\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0)),\quad t \in \mathbb{R},
\end{equation*}
and the series $\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0))$ converges uniformly on compact sets. Moreover, $c$ can be chosen such that
\begin{equation*}
|c| \lesssim \|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}.
\end{equation*}
\end{lemma}
\begin{proof}
For all $j\in \mathbb{Z}$ we have the Bernstein-type inequality \cite[Chapter 2, Theorem 3]{Meyer-wavelets-1992}
\begin{equation*}
\|f_j'\|_\infty \lesssim 2^j\|f_j\|_\infty.
\end{equation*}
It follows from \eqref{sequential_bernstein} that
\begin{equation*}
\|f_j\|_{\infty} \lesssim 2^{j\left(\frac{1}{p}-1\right)}\|f_j\|_{p^\sharp}
\end{equation*}
and since $p \leq 1$,
\begin{equation*}
\sum_{j\in \mathbb{Z}} \|f_j'\|_\infty \lesssim \|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,1}}\lesssim \|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}.
\end{equation*}
Since the wavelet $\psi$ has been assumed to be $C^r$ for some $r>1$, each function $f_j'$ is continuous, and since $\sum_{j\in \mathbb{Z}}\|f_j'\|_\infty<\infty$ the series $\sum_{j\in \mathbb{Z}} f_j'$ converges uniformly to a continuous function on $\mathbb{R}$. It follows that
\begin{equation*}
f'-\sum_{j\in \mathbb{Z}} f_j'
\end{equation*}
is a well-defined element of $L_\infty(\mathbb{R})$. Since $|f_j(t)-f_j(0)|\leq |t|\,\|f_j'\|_\infty$, the series $\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0))$ converges uniformly on compact subsets of $\mathbb{R}$, so the function
\begin{equation*}
g(t) := f(t)-f(0)-\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0)),\quad t\in \mathbb{R}
\end{equation*}
is well defined and absolutely continuous, with $g' = f'-\sum_{j\in \mathbb{Z}} f_j'$ almost everywhere. By the triangle inequality, we have $\|g'\|_\infty\leq \|f'\|_\infty+\sum_{j\in \mathbb{Z}} \|f_j'\|_\infty\lesssim \|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}$. Since the series $\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0))$ converges uniformly on compact subsets and $\psi$ is compactly supported, it follows that
\begin{equation*}
\langle g,\psi_{j,k}\rangle = 0,\quad j,k\in \mathbb{Z}.
\end{equation*}
The vanishing of all wavelet coefficients implies that $g$ is a polynomial (see the discussion preceding the theorem). Since $g'$ is a bounded polynomial, we must have that $g'$ is constant and hence there exists $c \in \mathbb{C}$ such that
\begin{equation*}
f(t)=f(0)+ct+\sum_{j\in \mathbb{Z}} (f_j(t)-f_j(0)),\quad t \in \mathbb{R}.
\end{equation*}
By our construction we have $|c| = \|g'\|_\infty \lesssim \|f'\|_\infty+\|f\|_{\dot{B}^\frac{1}{p}_{p^\sharp,p}}$.
\end{proof}
\subsection{Peller's sufficient condition revisited}
Peller's criterion \cite{Peller-besov-1990} is that if $f$ is a Lipschitz function belonging to $\dot{B}^{1}_{\infty,1}(\mathbb{R})$ then $f$ is operator Lipschitz (equivalently, $\mathcal{L}_1$-operator Lipschitz). In this subsection we explain how the decomposition of $f$ into a wavelet series leads to a new proof of this result. The ideas developed in this proof will be later used in the proof of Theorem \ref{main_sufficient_condition}, which is a more general assertion.
Note that we have the following homogeneity property: if $f_{(\lambda)}$ denotes the function $f_{(\lambda)}(t) = f(\lambda t)$, then
\begin{equation}\label{m_1_homogeneity}
\|f_{(\lambda)}^{[1]}\|_{\mathrm{m}_1} = \lambda\|f^{[1]}\|_{\mathrm{m}_1}.
\end{equation}
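Indeed, $f_{(\lambda)}^{[1]}(t,s) = \frac{f(\lambda t)-f(\lambda s)}{t-s} = \lambda f^{[1]}(\lambda t,\lambda s)$, and the $\mathrm{m}_1$-norm is invariant under the relabelling $(t,s)\mapsto (\lambda t,\lambda s)$ of the index set.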
Peller's original proof of the sufficiency of $\dot{B}^1_{\infty,1}$ is based on the following estimate \cite{Peller-besov-1990}:
if $f \in L_\infty(\mathbb{R})$ has Fourier transform supported in the interval $[-\sigma,\sigma]$, then
\begin{equation}\label{peller_fav_thm}
\|f^{[1]}\|_{\mathrm{m}_1} \lesssim \sigma\|f\|_\infty.
\end{equation}
Our proof differs from the original proof of Peller, and in place of \eqref{peller_fav_thm} we prove that for all locally integrable functions $f$ on $\mathbb{R}$ we have
$$
\|f_j^{[1]}\|_{\mathrm{m}_1} \lesssim 2^{j}\|f_j\|_{\infty},\quad j\in \mathbb{Z}
$$
where $f_j$ is computed relative to a compactly supported $C^3$ wavelet.
This will follow as a consequence of the following result:
\begin{theorem}\label{elementary_estimate}
Let $\phi \in C_c^3(\mathbb{R})$ be a compactly supported $C^3$ function, let $\alpha = \{\alpha_k\}_{k\in \mathbb{Z}}$ be a bounded sequence of complex numbers and let $\lambda > 0$. Define
\begin{equation*}
\phi_{\alpha,\lambda}(t) := \sum_{k\in \mathbb{Z}} \alpha_k\phi(\lambda t-k),\quad t\in \mathbb{R}.
\end{equation*}
Then
\begin{equation*}
\|\phi_{\alpha,\lambda}^{[1]}\|_{\mathrm{m}_1} \lesssim \lambda\sup_{k \in \mathbb{Z}} |\alpha_k|.
\end{equation*}
The implied constant depends on $\phi$, but not on $\alpha$ or $\lambda$.
\end{theorem}
In preparation for the proof of Theorem \ref{elementary_estimate}, we record
some useful facts about Schur multipliers of $\mathcal{L}_1$.
\begin{proposition}\label{schur_properties}
Let $\phi:\mathbb{R}^2\to \mathbb{C}$ be a bounded function.
\begin{enumerate}[{\rm (i)}]
\item\label{single_variable} If $\phi$ depends only on one variable then $\|\phi\|_{\mathrm{m}_1}\leq\|\phi\|_\infty$.
\item\label{toeplitz_form} Suppose that $\phi$ has Toeplitz form. That is, there exists a bounded function $\eta$ such that $\phi(t,s) = \eta(t-s)$. Then
\begin{equation*}
\|\phi\|_{\mathrm{m}_1} \leq (2\pi)^{-1}\|\widehat{\eta}\|_1
\end{equation*}
where $\widehat{\eta}(\xi) = \int_{-\infty}^\infty e^{-it\xi}\eta(t)\,dt$ is the Fourier transform of $\eta$.
\item\label{direct_sum} Suppose that $\phi = \sum_{n\in \mathbb{Z}} \phi_n$, where the functions $\phi_n$ have disjoint supports in both variables. That is, if $\phi_n(t,s)\neq 0$, then for all $m\neq n$ and $r \in \mathbb{R}$ we have that $\phi_m(t,r)=0$ and $\phi_m(r,s)=0$. Then
\begin{equation*}
\|\phi\|_{\mathrm{m}_1} = \sup_{n\in \mathbb{Z}} \|\phi_n\|_{\mathrm{m}_1}.
\end{equation*}
\item\label{submultiplicative} If $\chi$ is a second bounded function on $\mathbb{R}^2$, then
\begin{equation*}
\|\chi\phi\|_{\mathrm{m}_1} \leq \|\chi\|_{\mathrm{m}_1}\|\phi\|_{\mathrm{m}_1}.
\end{equation*}
Compare \eqref{algebra}.
\item\label{C2_sufficiency} If $\phi$ is a compactly supported $C^3$ function, then $\phi^{[1]}$ is a bounded $\mathcal{L}_1$-Schur multiplier.
\end{enumerate}
\end{proposition}
The only nontrivial components of the above proposition not already covered in the preliminaries are \eqref{toeplitz_form} and \eqref{C2_sufficiency}.
To prove \eqref{toeplitz_form}, it suffices to represent $\phi(t,s)$ as
\begin{equation*}
\phi(t,s) = (2\pi)^{-1} \int_{-\infty}^\infty e^{i\xi t}e^{-i\xi s}\widehat{\eta}(\xi)\,d\xi
\end{equation*}
and note that, since each of the factors $\{e^{i\xi t}\}_{t,s\in \mathbb{R}}$ and $\{e^{-i\xi s}\}_{t,s\in \mathbb{R}}$ depends on one variable only and is unimodular, Proposition \ref{schur_properties}.\eqref{single_variable} together with \eqref{submultiplicative} gives
\begin{equation*}
\|\phi\|_{\mathrm{m}_1} \leq (2\pi)^{-1}\int_{-\infty}^\infty \|\{e^{i\xi t}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_1}\|\{e^{-i\xi s}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_1}|\widehat{\eta}(\xi)|\,d\xi = (2\pi)^{-1}\|\widehat{\eta}\|_1.
\end{equation*}
The assertion \eqref{C2_sufficiency} is that $C^3_c(\mathbb{R})$ functions $\phi$ are operator Lipschitz. This follows from the fact that the Fourier transform of
the derivative of $\phi$ is integrable. Alternatively, see Theorem \ref{Cbeta_sufficiency} below.
\begin{lemma}\label{wiener_class_lemma}
Let $\rho$ be a compactly supported smooth function on $\mathbb{R}$ equal to $1$ in a neighbourhood of zero. The Fourier transform of the $L_2$-function
\begin{equation*}
\eta(t) = \frac{1-\rho(t)}{t}, t\in \mathbb{R}
\end{equation*}
is integrable.
\end{lemma}
\begin{proof}
As a non-absolutely convergent integral, we have
\begin{equation*}
\widehat{\eta}(\xi) = \int_{-\infty}^\infty e^{-i\xi t}\frac{1-\rho(t)}{t}\,dt.
\end{equation*}
Applying integration by parts $k$ times yields
\begin{equation*}
(i\xi)^k\widehat{\eta}(\xi) = \int_{-\infty}^\infty e^{-i\xi t}\left(\frac{d}{dt}\right)^k\left(\frac{1-\rho(t)}{t}\right)\,dt.
\end{equation*}
When $k > 1$, this defines an absolutely convergent integral and it follows
that $\xi^k\widehat{\eta}(\xi)$ is uniformly bounded in $\xi$ for all $k>1$.
Hence, $\widehat{\eta}(\xi)$ has rapid decay as $\xi\to \pm\infty$.
Since $\eta$ belongs to $L_2(\mathbb{R}),$ the Fourier transform $\widehat{\eta}$ also belongs to $L_2(\mathbb{R}).$ In particular, $\widehat{\eta}$ is locally integrable.
Hence $\widehat{\eta}$ has rapid decay at infinity and is integrable near zero. Thus $\widehat{\eta}$ is integrable over $\mathbb{R}.$
\end{proof}
\begin{proof}[Proof of Theorem \ref{elementary_estimate}]
By the homogeneity property \eqref{m_1_homogeneity}, it suffices to take $\lambda =1$, and for brevity we denote $\phi_{\alpha} = \phi_{\alpha,1}$.
Let $\rho$ be a smooth compactly supported function on $\mathbb{R}$ such that $\rho$ is identically $1$ in a neighbourhood of zero, define $\eta(t) = \frac{1-\rho(t)}{t}$
as in Lemma \ref{wiener_class_lemma}.
We split the divided difference of $\phi_{\alpha}$ as
\begin{equation}\label{diagonal_splitting}
\phi_{\alpha}^{[1]}(t,s) = \phi_{\alpha}^{[1]}(t,s)\rho(t-s) + \phi_{\alpha}^{[1]}(t,s)(1-\rho(t-s)) \stackrel{\mathrm{def}}{=} A(t,s)+B(t,s).
\end{equation}
We bound each summand separately.
For the second summand in \eqref{diagonal_splitting}, we have by definition that
\begin{align*}
B(t,s) &:= \sum_{k\in \mathbb{Z}} \alpha_k \frac{\phi(t-k)-\phi(s-k)}{t-s}(1-\rho(t-s))\\
&= \left(\sum_{k\in \mathbb{Z}} \alpha_k(\phi(t-k)-\phi(s-k))\right)\eta(t-s)\\
&= \sum_{k\in \mathbb{Z}} \alpha_k\phi(t-k)\eta(t-s) - \sum_{k\in \mathbb{Z}} \alpha_k\phi(s-k)\eta(t-s).
\end{align*}
Using Proposition \ref{schur_properties}.\eqref{submultiplicative} and the triangle inequality, we have
\begin{equation*}
\|B\|_{\mathrm{m}_1} \leq \|\sum_{k\in \mathbb{Z}} \alpha_k\phi(t-k)\|_{\mathrm{m}_1}\|\eta(t-s)\|_{\mathrm{m}_1} + \|\sum_{k\in \mathbb{Z}} \alpha_k\phi(s-k)\|_{\mathrm{m}_1}\|\eta(t-s)\|_{\mathrm{m}_1}.
\end{equation*}
By Lemma \ref{wiener_class_lemma}, the Fourier transform of $\eta$ is integrable
and hence Proposition \ref{schur_properties}.\eqref{toeplitz_form} implies
that $\|\eta(t-s)\|_{\mathrm{m}_1} < \infty$. Proposition \ref{schur_properties}.\eqref{single_variable} implies that
\begin{equation*}
\|\sum_{k\in \mathbb{Z}} \alpha_k\phi(\cdot -k)\|_{\mathrm{m}_1}\lesssim \sup_{k\in \mathbb{Z}} |\alpha_k|.
\end{equation*}
Thus,
\begin{equation*}
\|B\|_{\mathrm{m}_1} \lesssim \sup_{k\in \mathbb{Z}} |\alpha_k|.
\end{equation*}
Now we bound the first summand in \eqref{diagonal_splitting}. We may assume that $\rho$ is supported in the interval $(-1,1)$. It follows that
the function $(t,s) \mapsto \phi_{\alpha}^{[1]}(t,s)\rho(t-s)$ is supported in the strip
$\{(t,s)\in \mathbb{R}^2\;:\; |t-s|< 1\}.$
Observe that we have
\begin{equation*}
\{(t,s)\in \mathbb{R}^2\;:\; |t-s|< 1\} \subset \bigcup_{n\in \mathbb{Z},|j|<2} [n,n+1)\times [n+j,n+j+1).
\end{equation*}
Therefore,
\begin{align*}
A(t,s) := \phi_{\alpha}^{[1]}(t,s)\rho(t-s) &= \sum_{|j|<2}\sum_{n\in \mathbb{Z}} \phi_{\alpha}^{[1]}(t,s)\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s)\\
&= \sum_{|j|<2}\left(\sum_{n,k\in \mathbb{Z}} \alpha_k\frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s)\right)\\
&= \sum_{|j|<2} F_j(t,s)
\end{align*}
where for each $|j|<2$ we have denoted
\begin{equation*}
F_j(t,s) := \sum_{n,k\in \mathbb{Z}} \alpha_k\frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s).
\end{equation*}
This function can be written in the form
\begin{equation*}
F_j(t,s) = \sum_{n\in \mathbb{Z}} \chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s)\sum_{k\in \mathbb{Z}} \alpha_k \frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s).
\end{equation*}
Hence, $F_j$ has the form required in Proposition \ref{schur_properties}.\eqref{direct_sum}. Thus,
\begin{equation*}
\|F_j\|_{\mathrm{m}_1} = \sup_{n\in \mathbb{Z}} \|\sum_{k\in \mathbb{Z}} \alpha_k\frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s)\|_{\mathrm{m}_1}.
\end{equation*}
Since $\phi$ is compactly supported, for each $n$ the sum over $k$ has only finitely many terms. In fact, there exists a constant $N$ (depending on $\phi$ and $j$) such that for $|n-k|> N$ we have
\begin{equation*}
\frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s) = 0.
\end{equation*}
Therefore,
\begin{equation*}
\|F_j\|_{\mathrm{m}_1} = \sup_{n\in \mathbb{Z}} \|\sum_{|k-n|\leq N}\alpha_k \frac{\phi(t-k)-\phi(s-k)}{t-s}\rho(t-s)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s)\|_{\mathrm{m}_1}.
\end{equation*}
Since $\phi\in C^3$, the divided difference $(t,s)\mapsto \frac{\phi(t-k)-\phi(s-k)}{t-s}$
is a bounded Schur multiplier with norm independent of $k$ by Proposition \ref{schur_properties}.\eqref{C2_sufficiency}. Similarly, since the Fourier transform of $\rho$ is Schwartz class the function $(t,s) \mapsto \rho(t-s)$ is a bounded Schur multiplier by Proposition \ref{schur_properties}.\eqref{toeplitz_form}. It follows that
\begin{equation*}
\|F_j\|_{\mathrm{m}_1} \lesssim \sup_{n\in \mathbb{Z}} \sum_{|k-n|<N} |\alpha_k| \lesssim_N \sup_{k\in \mathbb{Z}} |\alpha_k|.
\end{equation*}
By the triangle inequality, it follows that
\begin{equation*}
\|A\|_{\mathrm{m}_1} \lesssim \sup_{k\in \mathbb{Z}} |\alpha_k|.
\end{equation*}
Finally, from \eqref{diagonal_splitting} we have
\begin{equation*}
\|\phi^{[1]}_{\alpha}\|_{\mathrm{m}_1} \lesssim \sup_{k\in \mathbb{Z}} |\alpha_k|.
\end{equation*}
\end{proof}
Using Lemma \ref{wavelet_bernstein}, we can deduce the following substitute for \eqref{peller_fav_thm}.
Recall that $f_j = \sum_{k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle.$
\begin{lemma}\label{peller_fav_easy}
Let $f$ be a locally integrable function on $\mathbb{R}$, and let $j\in \mathbb{Z}$ be such that $f_j$ is bounded, where $f_j$ is computed with respect to a compactly supported $C^3$ wavelet $\psi$. We have
\begin{equation*}
\|f_j^{[1]}\|_{\mathrm{m}_1} \lesssim 2^{j}\|f_j\|_\infty.
\end{equation*}
\end{lemma}
\begin{proof}
This is essentially a special case of Theorem \ref{elementary_estimate}. We have
\begin{equation*}
f_j(t) = \sum_{k\in \mathbb{Z}} 2^{\frac{j}{2}} \psi(2^j t-k)\langle f,\psi_{j,k}\rangle,\quad t \in \mathbb{R}.
\end{equation*}
Theorem \ref{elementary_estimate} and \eqref{disjoint_supports} together yield
\begin{equation*}
\|f_j^{[1]}\|_{\mathrm{m}_1} \lesssim 2^{j}\sup_{k\in \mathbb{Z}} 2^{\frac{j}{2}}|\langle f,\psi_{j,k}\rangle| \approx 2^j\|f_j\|_{\infty}.
\end{equation*}
\end{proof}
Finally, we achieve Peller's sufficient condition.
\begin{corollary}\label{Peller_critereon}
Let $f \in \dot{B}^1_{\infty,1}(\mathbb{R})$ be Lipschitz. Then $\|f^{[1]}\|_{\mathrm{m}_1}< \infty$, and
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_1} \lesssim \|f'\|_{\infty} + \|f\|_{\dot{B}^1_{\infty,1}}.
\end{equation*}
\end{corollary}
\begin{proof}
We apply the representation of $f$ from Lemma \ref{besov_realisation}. We have
\begin{equation*}
f(t) = f(0)+ct+\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0),\quad t \in \mathbb{R}
\end{equation*}
where $f_j$ is computed relative to a compactly supported $C^3$-wavelet.
Since $f^{[1]}(t,s) = c+\sum_{j\in \mathbb{Z}} f_j^{[1]}(t,s)$ for $t\neq s$ (the constants $f(0)$ and $f_j(0)$ cancel in the divided differences), the triangle inequality, the bound on $|c|$ from Lemma \ref{besov_realisation}, Lemma \ref{peller_fav_easy} and Theorem \ref{besov_space_wavelet_characterisation} give
$$
\|f^{[1]}\|_{\mathrm{m}_1} \leq |c|+\sum_{j\in \mathbb{Z}} \|f_j^{[1]}\|_{\mathrm{m}_1} \lesssim \|f'\|_\infty+\|f\|_{\dot{B}^1_{\infty,1}}+\sum_{j\in \mathbb{Z}} 2^j\|f_j\|_\infty \lesssim \|f'\|_\infty+\|f\|_{\dot{B}^1_{\infty,1}}.
$$
\end{proof}
\subsection{Sufficient conditions for a function to be $\mathcal{L}_p$-operator Lipschitz}\label{sufficency_section}
We now present sufficient conditions for a function to be $\mathcal{L}_p$-operator Lipschitz for $0 < p < 1$. The $p=1$ case has been covered by Corollary \ref{Peller_critereon}, although the arguments in this subsection apply equally when $p=1$.
Recall that $\|\cdot\|_{\mathrm{m}_p}$ denotes the $\mathcal{L}_p$-bounded Schur multiplier norm. Our proofs are based on the following result.
\begin{theorem}\label{Cbeta_sufficiency}
Let $0 < p \leq 1$. If $f \in C^\beta_c(\mathbb{R})$ is compactly supported, where $\beta > \frac{2}{p}$, then $\|f^{[1]}\|_{\mathrm{m}_p} < \infty$.
\end{theorem}
The result should be compared with Theorem 9.6 of \cite{Birman-Solomyak-survey-1977}, which is very similar. The proof we give here is based on first
proving that if $\phi \in C^\beta(\mathbb{T})$, then $\{\phi^{[1]}(z,w)\}_{z,w \in \mathbb{T}}$ is an $\mathcal{L}_p$-bounded Schur multiplier. This is actually an immediate consequence
of \eqref{circle_lipschitz}, since $C^{\beta}(\mathbb{T}) \subset B^{\frac{1}{p}}_{\infty,p}(\mathbb{T})$ when $\beta > \frac{1}{p}$.
If $\phi$ is a function of class $C^{\beta}(\mathbb{T})$, supported in a compact subset of $\mathbb{T}\setminus \{1\}$, then the Cayley transform
sends $\phi$ to a compactly supported function of class $C^\beta$ on $\mathbb{R}$. We use the much stronger assumption that $\beta> \frac{2}{p}$ because
it suffices for our purposes and we can give an especially elementary proof. Ultimately, the same result with only $\beta>\frac{1}{p}$ follows from Theorem \ref{main_sufficient_condition}.
\begin{proof}[Proof of Theorem \ref{Cbeta_sufficiency}]
Initially we prove the corresponding result on the circle $\mathbb{T}$. Note that for $n\geq 1$ we have
\begin{equation*}
\frac{z^n-w^n}{z-w} = \sum_{k=0}^{n-1} z^kw^{n-k-1},\quad z,w\in \mathbb{T}
\end{equation*}
and therefore the $p$-triangle inequality for the $\mathrm{m}_p$-quasi-norm implies
\begin{equation*}
\left\|\frac{z^n-w^n}{z-w}\right\|_{\mathrm{m}_p} \lesssim n^{\frac{1}{p}}
\end{equation*}
with a constant independent of $n$; the same bound with $|n|^{\frac{1}{p}}$ holds for $n\leq -1$, by the identity $\frac{z^n-w^n}{z-w} = -z^nw^n\,\frac{z^{-n}-w^{-n}}{z-w}$, \eqref{algebra}, and the fact that the unimodular single-variable factors $z^n$ and $w^n$ have $\mathrm{m}_p$-norm at most one. If $h \in C^{\beta}(\mathbb{T})$, then the Fourier coefficients $\{\widehat{h}(n)\}_{n\in \mathbb{Z}}$ obey
\begin{equation*}
|\widehat{h}(n)|\lesssim (1+|n|)^{-\beta}.
\end{equation*}
See e.g. \cite[Theorem 3.2.9(b)]{Grafakos-1}.
For all $z\neq w\in \mathbb{T}$ we have
\begin{equation*}
h^{[1]}(z,w) := \frac{h(z)-h(w)}{z-w} = \sum_{n\in \mathbb{Z}} \widehat{h}(n)\frac{z^n-w^n}{z-w}.
\end{equation*}
Using the $p$-triangle inequality for the $\mathrm{m}_p$-norm \eqref{p-prop}, it follows that
\begin{equation*}
\|h^{[1]}\|_{\mathrm{m}_p}^p \leq \sum_{n\in \mathbb{Z}} |\widehat{h}(n)|^{p}\left\|\left\{\frac{z^n-w^n}{z-w}\right\}_{z,w\in \mathbb{T}}\right\|_{\mathrm{m}_p}^{p} \lesssim \sum_{n\in \mathbb{Z}} |n|(1+|n|)^{-\beta p}.
\end{equation*}
Since $\beta > \frac{2}{p}$, this series converges and hence $h^{[1]}$ is a Schur multiplier of $\mathcal{L}_p$.
Now let $f \in C^\beta_c(\mathbb{R})$, and consider the image under the Cayley transform,
\begin{equation*}
h(z) := f\left(i\frac{z+1}{z-1}\right).
\end{equation*}
Since $f \in C^\beta_c(\mathbb{R})$, the map $z\mapsto i\frac{z+1}{z-1}$ is smooth on $\mathbb{T}\setminus\{1\}$, and $h$ vanishes in a neighbourhood of $z=1$, it follows that $h \in C^\beta(\mathbb{T})$.
Therefore $\|h^{[1]}\|_{\mathrm{m}_p} < \infty$, and hence $h$ is $\mathcal{L}_p$-operator Lipschitz for differences of unitary operators in the sense that
\begin{equation*}
\|h(U)-h(V)\|_p\lesssim \|U-V\|_p
\end{equation*}
for all unitaries $U$ and $V$ such that $U-V \in \mathcal{L}_p$. If $A$ and $B$ are self-adjoint operators such that $A-B \in \mathcal{L}_p$, we define
\begin{equation*}
U = \frac{A+i}{A-i},\quad V = \frac{B+i}{B-i}.
\end{equation*}
Then
\begin{equation*}
U-V = \frac{2i}{A-i}-\frac{2i}{B-i} = 2i(B-i)^{-1}(B-A)(A-i)^{-1} \in \mathcal{L}_p.
\end{equation*}
We also have $f(A) = h(U)$ and $f(B) = h(V)$. Therefore,
\begin{equation*}
\|f(A)-f(B)\|_p = \|h(U)-h(V)\|_p\lesssim \|U-V\|_p \lesssim \|(B-i)^{-1}\|_\infty\|(A-i)^{-1}\|_\infty\|A-B\|_p\lesssim \|A-B\|_p.
\end{equation*}
It follows from Lemma \ref{lp_lip_implies_lp_schur} that $f^{[1]}$ is a Schur multiplier of $\mathcal{L}_p$ in the sense of \eqref{schur_boundedness}.
\end{proof}
Note that with $p < 1$ we still have the following homogeneity result, identical to \eqref{m_1_homogeneity}:
\begin{equation}\label{schur_homogeneity}
\|f^{[1]}_{(\lambda)}\|_{\mathrm{m}_p} = \lambda\|f^{[1]}\|_{\mathrm{m}_p}
\end{equation}
where $f_{(\lambda)}(t) := f(\lambda t)$.
The most important component of our proof of Theorem \ref{main_sufficient_condition} is as follows. Recall that for $p<1$ we denote
\begin{equation*}
p^\sharp = \frac{p}{1-p},
\end{equation*}
with the convention that $p^\sharp = \infty$ when $p=1$.
\begin{theorem}\label{quasi_wavelet_bernstein}
Let $0 < p \leq 1$, and let $\phi$, $\alpha$, $\lambda$ be as in Theorem \ref{elementary_estimate}, but now assume that $\phi \in C^\beta_c(\mathbb{R})$ where $\beta > \frac{2}{p}$. Then
\begin{equation*}
\|\phi_{\alpha,\lambda}^{[1]}\|_{\mathrm{m}_p} \lesssim \lambda \|\alpha\|_{\ell_{p^\sharp}}.
\end{equation*}
\end{theorem}
Theorem \ref{quasi_wavelet_bernstein} generalises Theorem \ref{elementary_estimate}, and our proof is similar. We first record some useful properties of Schur multipliers
of $\mathcal{L}_p$.
\begin{lemma}\label{push_to_floor}
Let $\phi:\mathbb{Z}^2\to \mathbb{C}$. Then
\begin{equation*}
\|\{\phi(\lfloor t\rfloor,\lfloor s\rfloor)\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p} = \|\{\phi(j,k)\}_{j,k\in\mathbb{Z}}\|_{\mathrm{m}_p}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\{s_j\}_{j=1}^K$ and $\{t_k\}_{k=1}^K$ be finite subsets of $\mathbb{R}$. Assume that
\begin{equation*}
\max_{1\leq j,k\leq K} \{|s_j|,|t_k|\} \leq N\in \mathbb{N}.
\end{equation*}
By enlarging $K$ and adding additional points to the sequences $\{s_j\}_{j=1}^K$ and $\{t_k\}_{k=1}^K$ if necessary, we assume that there exists $n > 0$ such that for all $-N\leq l< N$ we have
\begin{equation*}
n = |\{s_j\}_{j=1}^K\cap [l,l+1)| = |\{t_k\}_{k=1}^K\cap [l,l+1)|.
\end{equation*}
We now relabel the sequences as $s_{j,l}$ and $t_{k,m}$, where $1\leq j,k\leq n$ and $-N\leq l,m < N$ such that
\begin{equation*}
s_{j,l},\, t_{k,l}\in [l,l+1),\quad -N\leq l<N.
\end{equation*}
Denote by $\mathrm{id}_{M_n(\mathbb{C})}$ the $n\times n$ matrix all of whose entries are equal to $1$, which is the identity for Schur multiplication. We have
\begin{equation*}
\{\phi(\lfloor s_{j,l}\rfloor ,\lfloor t_{k,m}\rfloor)\}_{1\leq j,k\leq n,\,-N\leq l,m< N} = \{\phi(l,m)\}_{-N\leq l,m< N} \otimes \mathrm{id}_{M_n(\mathbb{C})}.
\end{equation*}
Due to the automatic complete boundedness property (Theorem \ref{acb}), it follows that
\begin{equation*}
\|\{\phi(\lfloor s_{j,l}\rfloor ,\lfloor t_{k,m}\rfloor)\}_{1\leq j,k\leq n,\,-N\leq l,m< N}\|_{\mathrm{m}_p} = \|\{\phi(l,m)\}_{-N\leq l,m< N}\|_{\mathrm{m}_p} \leq \|\{\phi(l,m)\}_{l,m\in \mathbb{Z}}\|_{\mathrm{m}_p}.
\end{equation*}
Since adding rows and columns to a matrix can only increase the $\mathrm{m}_p$-norm, it follows that for arbitrary sequences $\{s_{j}\}_{j=1}^K$ and $\{t_k\}_{k=1}^K$ we have
\begin{equation*}
\|\{\phi(\lfloor s_j\rfloor,\lfloor t_k\rfloor)\}_{1\leq j,k\leq K}\|_{\mathrm{m}_p} \leq \|\{\phi(l,m)\}_{l,m\in \mathbb{Z}}\|_{\mathrm{m}_p}.
\end{equation*}
Taking the supremum over all sequences yields
\begin{equation*}
\|\{\phi(\lfloor s\rfloor, \lfloor t\rfloor)\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p} \leq \|\{\phi(l,m)\}_{l,m\in \mathbb{Z}}\|_{\mathrm{m}_p}.
\end{equation*}
The reverse inequality is trivial, since $\phi(j,k) = \phi(\lfloor j\rfloor,\lfloor k\rfloor)$ for integer arguments.
\end{proof}
One further property we need is that if $\lambda = \{\lambda_j\}_{j\in \mathbb{Z}}$ is a scalar sequence and $A = \{A_{j,k}\}_{j,k\in \mathbb{Z}}$ is a matrix then
\begin{equation}\label{left_multiplication}
\|\{\lambda_jA_{j,k}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_p} \leq \|\{\lambda_j\}_{j\in \mathbb{Z}}\|_{\ell_{p^\sharp}}\|A\|_{\mathrm{m}_1}.
\end{equation}
Indeed, if $\Lambda$ denotes the diagonal matrix with entries $\{\lambda_j\}_{j\in \mathbb{Z}}$, then by H\"older's inequality \eqref{operator_holder_inequality} and Lemma \ref{rank_one_suffices} we have
\begin{align*}
\|\{\lambda_jA_{j,k}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_p} &= \sup_{\|\xi\|,\|\eta\|\leq 1} \|\{\lambda_jA_{j,k}\xi_j\eta_k\}_{j,k\in \mathbb{Z}}\|_p\\
&= \sup_{\|\xi\|,\|\eta\|\leq 1} \|\Lambda (A\circ (\xi\otimes \eta))\|_{p}\\
&\leq \|\Lambda\|_{p^\sharp}\sup_{\|\xi\|,\|\eta\|\leq 1} \|A\circ (\xi\otimes \eta)\|_1\\
&= \|\Lambda\|_{p^\sharp}\|A\|_{\mathrm{m}_1}.
\end{align*}
Since $\|\Lambda\|_{p^\sharp} = \|\{\lambda_j\}_{j\in \mathbb{Z}}\|_{\ell_{p^\sharp}}$, this proves \eqref{left_multiplication}.
Our method of proof of Theorem \ref{quasi_wavelet_bernstein} is conceptually similar to that of Theorem \ref{elementary_estimate}, but in place of the function $(t,s)\mapsto 1-\rho(t-s)$
it is more convenient to use the following discretised version:
\begin{equation*}
(t,s)\mapsto \chi_{|\lfloor t\rfloor -\lfloor s\rfloor|> R}
\end{equation*}
where $R>1$ is sufficiently large, depending on $p$.
\begin{lemma}\label{vanishing_toeplitz_lemma}
Let $\alpha = \{\alpha_k\}_{k\in \mathbb{Z}}$ be a scalar sequence, and let $g_{\alpha}$ be the function
\begin{equation*}
g_{\alpha}(t) = \sum_{k\in \mathbb{Z}} \alpha_k\chi_{[k,k+1)}(t).
\end{equation*}
Then for all $n\geq 1$ and $R \geq 2$ we have
\begin{equation*}
\|\{g_{\alpha}(t)\frac{\chi_{|\lfloor t\rfloor -\lfloor s\rfloor| > R}}{(\lfloor t\rfloor -\lfloor s\rfloor)^n}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p} \leq \left(\frac{4}{R}\right)^{n/2}\|\alpha\|_{\ell_{p^\sharp}}.
\end{equation*}
\end{lemma}
\begin{proof}
Note that $g_{\alpha}(t) = \alpha_{\lfloor t\rfloor}$. Using Lemma \ref{push_to_floor}, it suffices to prove that
\begin{equation*}
\|\{\alpha_j\frac{\chi_{|j-k|> R}}{(j-k)^n}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_p} \leq \left(\frac{4}{R}\right)^{\frac{n}{2}}\|\alpha\|_{\ell_{p^\sharp}}.
\end{equation*}
In fact, via \eqref{left_multiplication}, and repeatedly using \eqref{algebra}, it suffices to check that
\begin{equation*}
\|\{\frac{\chi_{|j-k|> R}}{j-k}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_1} \leq \left(\frac{4}{R}\right)^{\frac{1}{2}}.
\end{equation*}
This is a Toeplitz matrix. Hence, by Proposition \ref{schur_properties}.\eqref{toeplitz_form} we have
\begin{equation*}
\|\{\frac{\chi_{|j-k| > R}}{j-k}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_1} \leq \|f\|_{L_1[0,1]}
\end{equation*}
where $f$ is the function
\begin{equation*}
f(t) = \sum_{|j|>R} \frac{1}{j}e^{2\pi i t j}.
\end{equation*}
Bounding the $L_1$-norm of $f$ by the $L_2$-norm, it follows from Plancherel's identity that
\begin{equation*}
\|\{\frac{\chi_{|j-k|> R}}{j-k}\}_{j,k\in \mathbb{Z}}\|_{\mathrm{m}_1} \leq \left(\sum_{|j|> R} \frac{1}{j^2}\right)^{\frac{1}{2}} \leq \left(\frac{4}{R}\right)^{\frac{1}{2}},
\end{equation*}
where in the last step we used the integral comparison $\sum_{|j|>R} \frac{1}{j^2} \leq 2\int_{R-1}^\infty \frac{dt}{t^2} = \frac{2}{R-1}\leq \frac{4}{R}$, valid for $R\geq 2$.
This completes the proof.
\end{proof}
Now we set $R=2^{3+\frac2p}$, and defining $g_{\alpha}$ as in Lemma \ref{vanishing_toeplitz_lemma} we define
$$G_{\alpha}(t,s):=g_{\alpha}(t)\cdot \frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{t-s},$$
$$H_{\alpha}(t,s):=g_{\alpha}(s)\cdot \frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{t-s}.$$
\begin{lemma}\label{GH lemma} In the notation above, for every $\alpha\in \ell_{p^{\sharp}},$ we have
$$\|G_{\alpha}\|_{\mathrm{m}_p},\|H_{\alpha}\|_{\mathrm{m}_p}\leq c_p\|\alpha\|_{\ell_{p^{\sharp}}}$$
where $c_p$ depends only on $p$.
\end{lemma}
\begin{proof}
Denote by $\{t\}$ and $\{s\}$ the fractional parts of $t,s\in \mathbb{R}$ respectively, so that
\begin{equation*}
\frac{1}{t-s} = \frac{1}{\lfloor t\rfloor+\{t\}-\lfloor s\rfloor-\{s\}} = \frac{1}{\lfloor t\rfloor-\lfloor s\rfloor}\cdot \frac{1}{1-\frac{\{s\}-\{t\}}{\lfloor t\rfloor-\lfloor s\rfloor}}.
\end{equation*}
Since $R>8$, if $|\lfloor t\rfloor-\lfloor s\rfloor|>R$ then $\left|\frac{\{s\}-\{t\}}{\lfloor t\rfloor -\lfloor s\rfloor}\right|<1$, and hence for all $t,s\in \mathbb{R}$ we have a convergent series
\begin{align*}
\frac{\chi_{|\lfloor s\rfloor -\lfloor t\rfloor|>R}}{t-s} &= \frac{\chi_{|\lfloor t\rfloor -\lfloor s\rfloor|>R}}{\lfloor t\rfloor -\lfloor s\rfloor} \cdot \frac{1}{1-\frac{\{s\}-\{t\}}{\lfloor t\rfloor-\lfloor s\rfloor}}\\
&= \frac{\chi_{|\lfloor t\rfloor -\lfloor s\rfloor|>R}}{\lfloor t\rfloor -\lfloor s\rfloor}\sum_{k=0}^\infty \left(\frac{\{s\}-\{t\}}{\lfloor t\rfloor -\lfloor s\rfloor}\right)^k.
\end{align*}
It follows from the $p$-triangle inequality for the $\mathrm{m}_p$-norm \eqref{p-prop} that
\begin{equation*}
\|G_{\alpha}\|_{\mathrm{m}_p}^p \leq \sum_{k=0}^\infty \|\{g_{\alpha}(t)\frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{(\lfloor t\rfloor -\lfloor s\rfloor)^{k+1}}(\{s\}-\{t\})^k\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^p.
\end{equation*}
Using the submultiplicativity property of the $\mathrm{m}_p$-norm \eqref{algebra}, it follows that
\begin{equation*}
\|G_{\alpha}\|_{\mathrm{m}_p}^p \leq \sum_{k=0}^\infty \|\{g_{\alpha}(t)\frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{(\lfloor t\rfloor -\lfloor s\rfloor)^{k+1}}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^p\|\{\{t\}-\{s\}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^{kp}.
\end{equation*}
Since $\{t\}$ and $\{s\}$ are bounded above by $1$, we have
\begin{equation*}
\|\{\{t\}-\{s\}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^{p} \leq 2
\end{equation*}
and therefore
\begin{equation*}
\|\{\{t\}-\{s\}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^{kp} \leq 2^{k}.
\end{equation*}
It follows that
\begin{equation*}
\|G_{\alpha}\|_{\mathrm{m}_p}^p \leq \sum_{k=0}^\infty 2^k \|\{g_{\alpha}(t)\frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{(\lfloor t\rfloor -\lfloor s\rfloor)^{k+1}}\}_{t,s\in \mathbb{R}}\|_{\mathrm{m}_p}^p.
\end{equation*}
Applying Lemma \ref{vanishing_toeplitz_lemma} with $n=k+1$ (and $\left(\frac{4}{R}\right)^{\frac{(k+1)p}{2}}\leq \left(\frac{4}{R}\right)^{\frac{kp}{2}}$), and using $R = 2^{3+\frac{2}{p}}$, so that $\left(\frac{4}{R}\right)^{\frac{p}{2}} = 2^{-1-\frac{p}{2}}$, we have
\begin{align*}
\|G_{\alpha}\|_{\mathrm{m}_p}^p &\leq \|\alpha\|_{\ell_{p^\sharp}}^p\sum_{k=0}^\infty 2^k \left(\frac{4}{R}\right)^{\frac{kp}{2}}\\
&= \|\alpha\|_{\ell_{p^\sharp}}^p\sum_{k=0}^\infty 2^{-\frac{kp}{2}}\\
&=c_p\|\alpha\|_{\ell_{p^\sharp}}^p.
\end{align*}
This proves the first inequality. The second inequality follows by taking the transpose of the first.
\end{proof}
\begin{proof}[Proof of Theorem \ref{quasi_wavelet_bernstein}]
Using the homogeneity property \eqref{schur_homogeneity}, it suffices to take $\lambda=1$, and we abbreviate $\phi_{\alpha} = \phi_{\alpha,1}$.
Without loss of generality, we may assume that the functions $\{\phi(\cdot-k)\}_{k\in \mathbb{Z}}$ are disjointly supported. Indeed, otherwise we may select $N>1$ sufficiently
large such that $\{\phi(\cdot-Nk)\}_{k\in \mathbb{Z}}$ are disjointly supported, and write
\begin{equation*}
\phi_{\alpha} = \sum_{j=0}^{N-1}\phi_{\alpha^{(j)}}
\end{equation*}
where $\alpha^{(j)}$ is the sequence $\{\alpha_{j+Nk}\}_{k\in \mathbb{Z}}$. Then we may prove the assertion for each $\phi_{\alpha^{(j)}}$ separately. Moreover, since the assertion
is invariant under rescaling, without loss of generality we assume that $\phi$ is supported in $(0,1)$.
Fix $R=2^{3+\frac2p}.$ We split up $\phi_{\alpha}^{[1]}$ as
\begin{equation}\label{diagonal_splitting_2}
\phi_{\alpha}^{[1]}(t,s)=\phi_{\alpha}^{[1]}(t,s)\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|\leq R}+\phi_{\alpha}^{[1]}(t,s)\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}\stackrel{\mathrm{def}}{=}A_R(t,s)+B_R(t,s),\quad t,s\in \mathbb{R}.
\end{equation}
We bound the individual terms separately.
For the first summand, we have
\begin{equation*}
A_R=\sum_{|j|\leq R}F_j,\text{ where } F_j(t,s):=\phi_{\alpha}^{[1]}(t,s)\chi_{\lfloor s\rfloor-\lfloor t\rfloor=j}.
\end{equation*}
Clearly,
$$\chi_{\lfloor s\rfloor-\lfloor t\rfloor=j}=\sum_{n\in\mathbb{Z}}\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s).$$
Thus,
$$F_j=\sum_{n\in\mathbb{Z}}F_{n,j},\text{ where } F_{n,j}(t,s):=\chi_{[n,n+1)}(t)\phi^{[1]}_{\alpha}(t,s)\chi_{[n+j,n+j+1)}(s).$$
We have (see \eqref{p-prop})
\begin{equation}\label{ar preliminary estimate}
\|A_R\|_{\mathrm{m}_p}^p\leq\sum_{|j|\leq R}\|F_j\|_{\mathrm{m}_p}^p.
\end{equation}
Each $F_j$ has a generalised block-diagonal structure in the sense of Lemma \ref{block_diagonal_formula}. It follows from that Lemma that
\begin{equation}\label{fj 22 estimate}
\|F_j\|_{\mathrm{m}_p}\leq \Big\|\Big\{\|F_{n,j}\|_{\mathrm{m}_p}\Big\}_{n\in \mathbb{Z}}\Big\|_{\ell_{p^{\sharp}}}.
\end{equation}
For $j\neq0,$ we write
$$F_{n,j}=\alpha_n G_{n,j}+\alpha_{n+j}H_{n,j},$$
where
\begin{align*}
G_{n,j}(t,s)&:=\phi^{[1]}(t-n,s-n)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s),\\
H_{n,j}(t,s)&:=\phi^{[1]}(t-n-j,s-n-j)\chi_{[n,n+1)}(t)\chi_{[n+j,n+j+1)}(s).
\end{align*}
In particular,
\begin{equation*}
\|F_{n,j}\|_{\mathrm{m}_p}^p \leq |\alpha_n|^p\|G_{n,j}\|_{\mathrm{m}_p}^p+|\alpha_{n+j}|^p\|H_{n,j}\|_{\mathrm{m}_p}^p
\leq |\alpha_n|^p\|\phi^{[1]}\|_{\mathrm{m}_p}^p+|\alpha_{n+j}|^p\|\phi^{[1]}\|_{\mathrm{m}_p}^p,\quad j\neq0.
\end{equation*}
Substituting this into \eqref{fj 22 estimate}, we obtain
\begin{equation}\label{fj jneq0}
\|F_j\|_{\mathrm{m}_p}\lesssim \|\phi^{[1]}\|_{\mathrm{m}_p}\|\alpha\|_{\ell_{p^{\sharp}}},\quad j\neq0.
\end{equation}
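Here we used that $\|\phi^{[1]}\|_{\mathrm{m}_p}<\infty$ by Theorem \ref{Cbeta_sufficiency} (since $\beta>\frac{2}{p}$), together with the triangle inequality in $\ell_{\frac{p^{\sharp}}{p}}$ (a normed space, as $\frac{p^{\sharp}}{p}\geq 1$), which gives
\begin{equation*}
\Big\|\Big\{\big(|\alpha_n|^p+|\alpha_{n+j}|^p\big)^{\frac{1}{p}}\Big\}_{n\in \mathbb{Z}}\Big\|_{\ell_{p^{\sharp}}} \leq \big(\|\alpha\|_{\ell_{p^{\sharp}}}^p+\|\alpha\|_{\ell_{p^{\sharp}}}^p\big)^{\frac{1}{p}} = 2^{\frac{1}{p}}\|\alpha\|_{\ell_{p^{\sharp}}}.
\end{equation*}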
For $j=0,$ we have $F_{n,0}=\alpha_n G_{n,0}.$ Thus,
\begin{equation}\label{fj jeq0}
\|F_0\|_{\mathrm{m}_p}\leq\|\alpha\|_{\ell_{p^{\sharp}}}\|\phi^{[1]}\|_{\mathrm{m}_p}.
\end{equation}
Substituting \eqref{fj jneq0} and \eqref{fj jeq0} into \eqref{ar preliminary estimate}, we obtain
$$\|A_R\|_{\mathrm{m}_p}\lesssim\|\alpha\|_{\ell_{p^{\sharp}}}\|\phi^{[1]}\|_{\mathrm{m}_p}.$$
Denote by $\phi_{1}$ the function $\phi_{\alpha}$ when the sequence $\alpha$ consists of $1$'s. Since $\phi$ is supported in $(0,1)$, we have that
$$
\phi_{\alpha} = \phi_{1}\sum_{j\in \mathbb{Z}} \alpha_j\chi_{[j,j+1)}.
$$
Recalling the notation $G_{\alpha}$ and $H_{\alpha}$ from Lemma \ref{GH lemma}, the second summand in \eqref{diagonal_splitting_2} is expressed as
\begin{align*}
B_R(t,s)&=\phi_{\alpha}(t)\cdot \frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{t-s} -\phi_{\alpha}(s)\cdot \frac{\chi_{|\lfloor s\rfloor-\lfloor t\rfloor|> R}}{t-s }\\ &=
\phi_1(t)\cdot G_{\alpha}(t,s)-\phi_1(s)\cdot H_{\alpha}(t,s).
\end{align*}
It follows that
$$\|B_R\|_{\mathrm{m}_p}^p\leq\|\{\phi_1(t)\}_{t,s\in\mathbb{R}}\|_{\mathrm{m}_p}^p\|G_{\alpha}\|_{\mathrm{m}_p}^p+\|\{\phi_1(s)\}_{t,s\in\mathbb{R}}\|_{\mathrm{m}_p}^p\|H_{\alpha}\|_{\mathrm{m}_p}^p.$$
It follows from Lemma \ref{GH lemma} and from the trivial estimates
$$\|\{\phi_1(t)\}_{t,s\in\mathbb{R}}\|_{\mathrm{m}_p},\|\{\phi_1(s)\}_{t,s\in\mathbb{R}}\|_{\mathrm{m}_p}\leq\|\phi\|_{\infty}$$
that
$$\|B_R\|_{\mathrm{m}_p}\leq c_p\|\phi\|_{\infty}\|\alpha\|_{\ell_{p^{\sharp}}}.$$
Finally, substituting the bounds for $A_R$ and $B_R$ into \eqref{diagonal_splitting_2} and applying the $p$-triangle inequality \eqref{p-prop} yields the result.
\end{proof}
Using Theorem \ref{quasi_wavelet_bernstein}, we obtain the following analogue of Lemma \ref{peller_fav_easy} for $0<p<1$.
\begin{lemma}\label{quasi_fav_lemma}
Let $\psi$ be a compactly supported $C^\beta$-wavelet, where $\beta > \frac{2}{p}$, let $j\in \mathbb{Z}$, and let $f$ be a locally integrable function on $\mathbb{R}$ such that $f_j\in L_{p^\sharp}(\mathbb{R})$. Then
\begin{equation*}
\|f_j^{[1]}\|_{\mathrm{m}_p} \lesssim_p 2^{\frac{j}{p}}\|f_j\|_{p^\sharp}.
\end{equation*}
\end{lemma}
\begin{proof}
By definition \eqref{f_j_definition},
\begin{equation*}
f_j(t) = \sum_{k\in \mathbb{Z}} 2^{\frac{j}{2}}\psi(2^j t-k)\langle f, \psi_{j,k}\rangle,\quad t \in \mathbb{R}.
\end{equation*}
Theorem \ref{quasi_wavelet_bernstein} with $\phi=\psi$ yields
\begin{equation*}
\|f_j^{[1]}\|_{\mathrm{m}_p} \lesssim 2^{\frac{3j}{2}}\left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^{p^\sharp}\right)^{1/p^\sharp}.
\end{equation*}
Now applying Lemma \ref{wavelet_bernstein} gives us
\begin{equation*}
2^{\frac{3j}{2}}\left(\sum_{k\in \mathbb{Z}} |\langle f,\psi_{j,k}\rangle|^{p^\sharp}\right)^{1/p^\sharp}\lesssim 2^{\frac{3j}{2}}\cdot 2^{j\left(\frac{1}{p^{\sharp}}-\frac{1}{2}\right)}\|f_j\|_{p^\sharp} = 2^{\frac{j}{p}}\|f_j\|_{p^\sharp}.
\end{equation*}
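Here we used that $\frac{1}{p^\sharp}=\frac{1}{p}-1$, so the exponents combine as $\frac{3}{2}+\frac{1}{p^\sharp}-\frac{1}{2}=\frac{1}{p}$.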
\end{proof}
Lemma \ref{quasi_fav_lemma} gives us the following result, which generalises Corollary \ref{Peller_critereon} and is proved in the same way.
\begin{theorem}
Let $0 < p \leq 1$. Then
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p} \lesssim \|f'\|_\infty+\left(\sum_{j\in \mathbb{Z}} 2^{j}\|f_j\|_{p^\sharp}^p\right)^{1/p} = \|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}
\end{equation*}
for all Lipschitz functions $f$ belonging to $\dot{B}^{\frac{1}{p}}_{p^\sharp,p}(\mathbb{R})$.
\end{theorem}
\begin{proof}
By Lemma \ref{besov_realisation}, there exists a constant $c$ such that
\begin{equation*}
f(t) = f(0)+ct+\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0),\quad t \in \mathbb{R}.
\end{equation*}
Therefore,
\begin{equation*}
f^{[1]} = c+\sum_{j\in \mathbb{Z}} f_j^{[1]}.
\end{equation*}
Using the $p$-triangle inequality \eqref{p-prop}, it follows that
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p}^p \leq |c|^p + \sum_{j\in \mathbb{Z}} \|f_j^{[1]}\|_{\mathrm{m}_p}^p.
\end{equation*}
Bounding the $j$th summand with Lemma \ref{quasi_fav_lemma},
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p}^p \lesssim |c|^p + \sum_{j\in \mathbb{Z}} 2^{j}\|f_j\|_{p^{\sharp}}^p = |c|^p+\|f\|_{\dot{B}^\frac{1}{p}_{p^\sharp,p}}^p.
\end{equation*}
Since $|c|\lesssim \|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}$, the result follows.
\end{proof}
This completes the proof of Theorem \ref{main_sufficient_condition}.
\begin{remark}
Using Theorem \ref{main_sufficient_condition}, it is possible to extend \eqref{peller_fav_thm} to the range $0<p<1$ in a certain sense. That is, if $f\in L_{p^\sharp}(\mathbb{R})$ is a distribution with Fourier transform
supported in the set $[-\frac{12}{7}\sigma,-\sigma)\cup (\sigma,\frac{12}{7}\sigma]$ where $\sigma>0$, then
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p} \lesssim_p \sigma^{\frac{1}{p}}\|f\|_{p^\sharp}.
\end{equation*}
Note that by rescaling if necessary and applying \eqref{schur_homogeneity} it suffices to take $\sigma=1$, so that $\Delta_0 f = f$ and $\Delta_{n}f=0$ for $n\neq 0$. Assume
now that $f$ is a $p^\sharp$-integrable distribution with Fourier transform supported in $[-\frac{12}{7},-1)\cup (1,\frac{12}{7}]$.
It follows from Theorem \ref{main_sufficient_condition} and Bernstein's inequality \cite[Corollary 1.5]{Sawano2018} that
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p} \lesssim_p \|f'\|_{\infty}+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}} \lesssim_p \|f\|_{\infty}+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}.
\end{equation*}
According to \cite[Corollary 1.8]{Sawano2018}, we have $\|f\|_{\infty} \lesssim_p \|f\|_{p^\sharp}$ and hence
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p} \lesssim_p \|f\|_{p^\sharp}+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}}.
\end{equation*}
Using the definition of the Besov semi-norm \eqref{besov_seminorm_def}, that $\Delta_0 f = f\in L_{p^\sharp}(\mathbb{R})$ and that $\Delta_nf=0$ for $n\neq 0$, we have
\begin{align*}
\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}} = \left(\sum_{n\in \mathbb{Z}} 2^{n}\|\Delta_n f\|_{p^\sharp}^p\right)^{\frac{1}{p}} = \|\Delta_0 f\|_{p^\sharp} = \|f\|_{p^\sharp}.
\end{align*}
Hence, for $0< p\leq 1$ and $f$ with Fourier transform supported in $[-\frac{12}{7},-1)\cup (1,\frac{12}{7}]$ we have
\begin{equation*}
\|f^{[1]}\|_{\mathrm{m}_p} \lesssim_p \|f\|_{p^\sharp}.
\end{equation*}
\end{remark}
\subsection{Submajorisation inequalities}\label{submajor_section}
In this section we assume that $0 < p \leq 1$, and $\psi$ is a $C^\beta$ compactly supported wavelet where $\beta > \frac{2}{p}$. All wavelet components $f_j$
are computed with respect to $\psi$.
In terms of singular values, Theorem \ref{main_sufficient_condition} states that there exists a constant $C_p > 0$ such that for all self-adjoint bounded operators $A$ and $B$
with $A-B\in \mathcal{L}_p$ we have
\begin{equation*}
\sum_{k=0}^\infty \mu(k,f(A)-f(B))^p \leq C_p^p(\|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}})^p\sum_{k=0}^\infty \mu(k,A-B)^p.
\end{equation*}
Using a short argument borrowed from \cite{HSZ2019} we can strengthen this inequality. In this section we will prove that, for all bounded self-adjoint operators $A$ and $B$ with $A-B$ compact, the following stronger statement holds:
\begin{equation*}
\sum_{k=0}^n \mu(k,f(A)-f(B))^p \leq K_p^p(\|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}})^p\sum_{k=0}^n \mu(k,A-B)^p,\quad n\geq 0.
\end{equation*}
Here $K_p>0$ is a constant. In principle it may be that $K_p$ is larger than $C_p$, but $K_p$ is independent of $n$. The argument in \cite{HSZ2019} is based on real interpolation of the couple $(\mathcal{L}_p,\mathcal{L}_{\infty})$; we recall
the relevant details in a mostly self-contained manner here.
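As a purely finite-dimensional illustration (not part of the argument), the inequality can be probed numerically by comparing the partial sums of $\mu(k,f(A)-f(B))^p$ with those of $\mu(k,A-B)^p$ for random self-adjoint matrices. In the Python sketch below the test function $f=\tanh$ is only a plausible stand-in; whether a given $f$ satisfies the Lipschitz and Besov hypotheses must be verified separately.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, p = 50, 0.5
A = rng.standard_normal((d, d)); A = (A + A.T) / 2
E = rng.standard_normal((d, d))
B = A + 0.1 * (E + E.T) / 2

def func_calc(M, f=np.tanh):
    w, V = np.linalg.eigh(M)      # M = V diag(w) V^T
    return (V * f(w)) @ V.T       # f(M) via the spectral theorem

mu_f = np.linalg.svd(func_calc(A) - func_calc(B), compute_uv=False)
mu_d = np.linalg.svd(A - B, compute_uv=False)
ratios = np.cumsum(mu_f ** p) / np.cumsum(mu_d ** p)
print("sup over n of the partial-sum ratio:", ratios.max())
\end{verbatim}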
We make use of the following inequality originally due to Rotfel'd \cite{Rotfeld1968}, which holds for $0 < p \leq 1$
and compact operators $X$ and $Y$,
\begin{equation}\label{p_ky_fan}
\mu(X+Y)^p \prec\prec \mu(X)^p+\mu(Y)^p.
\end{equation}
Here, $\prec\prec$ denotes submajorisation in the sense of Hardy, Littlewood and P\'{o}lya. The meaning of \eqref{p_ky_fan} is that for all $n\geq 0$ we have
\begin{equation*}
\sum_{k=0}^n \mu(k,X+Y)^p \leq \sum_{k=0}^n \left(\mu(k,X)^p+\mu(k,Y)^p\right).
\end{equation*}
An alternative perspective on \eqref{p_ky_fan} is that it follows from the fact that $t\mapsto t^p$ is operator monotone when $0 < p\leq 1$. See \cite[Theorem 3.7]{DS-concave-2009}.
We will make use of the following lemma, which is purely operator theoretic.
\begin{lemma}\label{infimum_lemma}
Let $X$ be a compact operator. For all $n\geq 0$ there exists a projection
$P$ such that
\begin{equation*}
\|X(1-P)\|_p^p+(n+1)\|XP\|_{\infty}^p \leq 2\sum_{k=0}^n \mu(k,X)^p.
\end{equation*}
The projection $P$ can be chosen such that $1-P$ has finite rank.
\end{lemma}
\begin{proof}
Writing the polar decomposition $X = U|X|$, we have $\mu(XQ)=\mu(|X|Q)$ for every projection $Q$; hence it suffices to take $X\geq 0$.
Let $P$ denote the projection
\begin{equation*}
P := \chi_{[0,\mu(n,X))}(X).
\end{equation*}
Then,
\begin{equation*}
\mu(j,X(1-P)) = \mu(j,X),\quad j\leq n,\quad \mu(n+1,X(1-P)) = 0.
\end{equation*}
It follows that
\begin{equation}\label{lower_cut}
\|X(1-P)\|_p^p = \sum_{j=0}^n \mu(j,X(1-P))^p = \sum_{j=0}^n \mu(j,X)^p
\end{equation}
and
\begin{equation*}
\|XP\|_{\infty} \leq \mu(n,X).
\end{equation*}
Therefore,
\begin{equation}\label{upper_cut}
(n+1)\|XP\|_{\infty}^p \leq (n+1)\mu(n,X)^p \leq \sum_{j=0}^n \mu(j,X)^p.
\end{equation}
Adding \eqref{lower_cut} and \eqref{upper_cut} yields the result.
\end{proof}
From Theorem \ref{main_sufficient_condition} we can deduce the following result, whose proof is based on Theorem 6.1 in \cite{HSZ2019}. Recall that $\psi$ is a fixed compactly supported $C^\beta$ wavelet, where $\beta > \frac{2}{p}$
and for $j\in \mathbb{Z}$ we denote
\begin{equation*}
f_j = \sum_{k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle.
\end{equation*}
Since $\|\cdot\|_{\mathrm{m}_1}=\|\cdot\|_{\mathrm{m}_\infty},$ it follows from Lemma \ref{peller_fav_easy} that
\begin{equation}\label{peller_fav_l_infty}
\|f_j(A)-f_j(B)\|_{\infty} \lesssim 2^{j}\|f_j\|_{\infty}\|A-B\|_{\infty},\quad A,B\in \mathcal{B}_{\mathrm{sa}}(H).
\end{equation}
\begin{theorem}\label{submajor_thm}
Let $f$ be a locally integrable function on $\mathbb{R}$, and let $j\in \mathbb{Z}$ be such that $f_j\in L_\infty(\mathbb{R})$. There exists a constant $C_p>0$ such that for all bounded self-adjoint operators $A$ and $B$ with $A-B$ compact we have
\begin{equation*}
\mu(f_j(A)-f_j(B))^p \prec\prec C_p2^{j}\|f_j\|_{p^\sharp}^p\mu(A-B)^p.
\end{equation*}
\end{theorem}
\begin{proof}
Here, one uses the inequality
$$\sum_{k=0}^n\mu(k,X+Y)^p\leq\|X\|_p^p+(n+1)\|Y\|_{\infty}^p,$$
for compact $X$ and $Y$, which follows from \eqref{p_ky_fan}.
Let $P$ be a projection with $1-P$ finite rank, and let $A_P = B+(A-B)P$. Then
\begin{equation*}
f_j(A)-f_j(B) = f_j(A)-f_j(A_P)+f_j(A_P)-f_j(B).
\end{equation*}
Therefore, for all $n\geq 0$ we have
\begin{equation*}
\sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p \leq \|f_j(A)-f_j(A_P)\|_p^p+(n+1)\|f_j(A_P)-f_j(B)\|_\infty^p.
\end{equation*}
Note that $A-A_P = (A-B)(1-P)$ and $A_P-B = (A-B)P$. Since $1-P$ has finite
rank, $A-A_P\in \mathcal{L}_p$.
Applying Lemma \ref{quasi_fav_lemma} to $\|f_j(A)-f_j(A_P)\|_p^p$ and \eqref{peller_fav_l_infty} to $\|f_j(A_P)-f_j(B)\|_\infty^p$ yields
\begin{equation*}
\sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p \lesssim_p 2^{j}\|f_j\|_{p^\sharp}^p\|A-A_P\|_p^p+(n+1)2^{jp}\|f_j\|_\infty^p\|A_P-B\|_\infty^p
\end{equation*}
where the constant is independent of $n$.
From \eqref{sequential_bernstein}, we have that $\|f_j\|_\infty \lesssim_p 2^{j(\frac{1}{p}-1)} \|f_j\|_{p^\sharp}$. Therefore,
\begin{equation*}
\sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p \lesssim_p 2^{j}\|f_j\|_{p^\sharp}^p(\|A-A_P\|_p^p+(n+1)\|A_P-B\|_\infty^p).
\end{equation*}
Using Lemma \ref{infimum_lemma} with $X = A-B$ implies that there exists $P$
such that
\begin{equation*}
\|A-A_P\|_p^p+(n+1)\|A_P-B\|_\infty^p = \|(A-B)(1-P)\|_{p}^p+(n+1)\|(A-B)P\|_\infty^p \leq 2\sum_{k=0}^n \mu(k,A-B)^p.
\end{equation*}
Thus,
\begin{equation*}
\sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p \lesssim_p 2^{j}\|f_j\|_{p^\sharp}^p\sum_{k=0}^n \mu(k,A-B)^p,\quad n\geq 0
\end{equation*}
as required.
\end{proof}
Representing $f$ as $f(0)+ct+\sum_{j\in \mathbb{Z}}f_j-f_j(0)$ (the constant terms cancel in $f(A)-f(B)$) and applying \eqref{p_ky_fan}, we arrive at the following submajorisation result.
\begin{corollary}
Let $0 < p \leq 1$. For Lipschitz functions $f \in \dot{B}^{\frac{1}{p}}_{p^\sharp,p}(\mathbb{R})$ and all bounded self-adjoint operators $A$ and $B$ with $A-B$ compact we have
$$
|f(A)-f(B)|^p \prec\prec C_{p}(\|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}})^p|A-B|^p.
$$
Equivalently, for any fully symmetric operator space $\mathcal{E}$ (see e.g. \cite[Definition 2.5.7]{LSZ}), we have the Lipschitz estimate
$$
\||f(A)-f(B)|^p\|_{\mathcal{E}} \lesssim (\|f'\|_\infty+\|f\|_{\dot{B}^{\frac{1}{p}}_{p^\sharp,p}})^p\||A-B|^p\|_{\mathcal{E}}.
$$
\end{corollary}
\subsection{$\mathcal{L}_p$-operator H\"older functions}\label{holder_section}
To complement Theorem \ref{main_sufficient_condition}, we study the related issue of operator H\"older estimates. It is well-known that all $\alpha$-H\"older functions belong to $\dot{B}^{\alpha}_{\infty,\infty}.$
The arguments in this section are inspired by those of Aleksandrov and Peller \cite[Section 5]{Aleksandrov-Peller-holder-2010}, with adaptations for the wavelet decomposition.
We will take $0 < p \leq 1,$ and $\psi$ denotes a compactly supported $C^\beta$ wavelet where $\beta > \frac{2}{p}.$ Recall that if $f$ is a locally integrable function, we denote
\[
f_j := \sum_{k\in \mathbb{Z}} \psi_{j,k}\langle f,\psi_{j,k}\rangle.
\]
On every bounded subset of $\mathbb{R},$ only finitely many terms of this series are non-zero.
The following lemma concerns the representation of a H\"older continuous function $f$ by a wavelet series. This issue is parallel to Lemma \ref{besov_realisation},
and the proof is similar.
\begin{lemma}\label{holder_wavelet_realisation}
Let $f$ be a H\"older continuous function on $\mathbb{R}$ of order $\alpha \in (0,1).$ Then
\[
f(t) = f(0)+\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0),\quad t \in \mathbb{R}
\]
where the series converges uniformly on compact subsets of $\mathbb{R}.$
\end{lemma}
\begin{proof}
Since $f$ is H\"older continuous of order $\alpha,$ we have $f \in \dot{B}^{\alpha}_{\infty,\infty}(\mathbb{R})$ and hence Theorem \ref{besov_space_wavelet_characterisation} implies
\[
\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}} := \sup_{j\in \mathbb{Z}} 2^{j\alpha}\|f_j\|_{\infty} < \infty.
\]
Hence,
\[
|f_j(t)-f_j(0)| \leq 2\|f_j\|_{\infty} \leq 2^{1-j\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}
\]
and hence the series
\[
\sum_{j\geq 0} f_j(t)-f_j(0)
\]
converges uniformly over $t \in \mathbb{R}$.
For $K\geq 0,$ we have
\[
\sup_{-K\leq t \leq K}|f_j(t)-f_j(0)| \leq \sup_{-K\leq t\leq K}|t|\|f_j'\|_{\infty} = K\|f_j'\|_{\infty}.
\]
Applying the Bernstein-type inequality $\|f_j'\|_{\infty}\lesssim 2^{j}\|f_j\|_{\infty}$ \cite[Chapter 2, Theorem 3]{Meyer-wavelets-1992}, we arrive at
\[
\sup_{-K\leq t\leq K}|f_j(t)-f_j(0)| \lesssim K2^{j}\|f_j\|_{\infty}\leq K2^{j(1-\alpha)}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}.
\]
Thus, the series
\[
\sum_{j<0} f_j(t)-f_j(0)
\]
converges uniformly over $-K\leq t \leq K.$ Since $K$ is arbitrary, we have that the series
\[
\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0)
\]
converges uniformly over compact subsets of $\mathbb{R}.$ Next we prove that $\sum_{j\in \mathbb{Z}} f_j-f_j(0)$ is an $\alpha$-H\"older continuous function.
For all $t,s\in \mathbb{R},$ we have
\begin{align*}
\left|\sum_{j\in \mathbb{Z}} f_j(t)-f_j(s)\right| &\leq \sum_{2^{j}<|t-s|^{-1}} |f_j(t)-f_j(s)| + \sum_{2^j\geq |t-s|^{-1}}|f_j(t)-f_j(s)|\\
&\leq \sum_{2^j<|t-s|^{-1}} |t-s|\|f_j'\|_{\infty}+\sum_{2^{j}\geq |t-s|^{-1}} 2\|f_j\|_{\infty}\\
&\lesssim \sum_{2^j<|t-s|^{-1}} |t-s|2^{j(1-\alpha)}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}} + 2\sum_{2^j \geq |t-s|^{-1}} 2^{-j\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}\\
&\lesssim |t-s|^{\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}.
\end{align*}
That is, the function $\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0)$ is H\"older continuous of order $\alpha.$
Now we consider the difference
\[
f-\sum_{j\in \mathbb{Z}} f_j-f_j(0).
\]
This must be a polynomial, since all of its wavelet coefficients vanish (see the discussion preceding the proof of Lemma \ref{besov_realisation}).
Since $f$ and $\sum_{j\in \mathbb{Z}} f_j-f_j(0)$ are $\alpha$-H\"older for $\alpha<1$, it follows that $f-\sum_{j\in \mathbb{Z}} f_j-f_j(0)$ is a polynomial
of degree $0$, i.e. a constant. Hence there exists $c \in \mathbb{C}$ such that
\[
f(t) = c+\sum_{j\in \mathbb{Z}} f_j(t)-f_j(0).
\]
Substituting $t=0$ yields $c = f(0).$
\end{proof}
\begin{theorem}\label{main_holder_trick}
Let $j\in \mathbb{Z}$ and let $f$ be a locally integrable function such that $f_j$ is bounded. Then for all $\alpha \in (0,1)$, $0 < p \leq 1$ we have
\begin{equation*}
\|f_j(A)-f_j(B)\|_{\frac{p}{\alpha}} \lesssim_{p,\alpha} 2^{j(\alpha+\frac{1}{p^\sharp})}\|f_j\|_{p^\sharp}\|A-B\|_p^\alpha
\end{equation*}
for all bounded operators $A$ and $B$ such that $A-B \in \mathcal{L}_p$.
\end{theorem}
\begin{proof}
For each $j\in \mathbb{Z}$, we have
\begin{equation*}
\|f_j(A)-f_j(B)\|_{\frac{p}{\alpha}} = \||f_j(A)-f_j(B)|^{1/\alpha}\|_{p}^{\alpha} \leq \|f_j(A)-f_j(B)\|_\infty^{1-\alpha}\|f_j(A)-f_j(B)\|_{p}^{\alpha}.
\end{equation*}
Since $f_j$ is bounded, the bounds $\|f_j(A)\|_\infty,\|f_j(B)\|_\infty\leq \|f_j\|_\infty$ give
\begin{equation*}
\|f_j(A)-f_j(B)\|_{\infty}^{1-\alpha} \leq 2^{1-\alpha}\|f_j\|_{\infty}^{1-\alpha}.
\end{equation*}
Now we apply \eqref{sequential_bernstein}, which gives us
\begin{equation*}
\|f_j(A)-f_j(B)\|_{\infty}^{1-\alpha} \leq 2^{1-\alpha}\|f_j\|_{\infty}^{1-\alpha} \lesssim_{\alpha,p} 2^{\frac{j(1-\alpha)}{p^\sharp}}\|f_j\|_{p^\sharp}^{1-\alpha}.
\end{equation*}
Using Lemma \ref{quasi_fav_lemma}, we also have
\begin{equation*}
\|f_j(A)-f_j(B)\|_p^{\alpha} \lesssim_p 2^{\frac{j\alpha}{p}}\|f_j\|_{p^\sharp}^{\alpha}\|A-B\|_p^{\alpha}.
\end{equation*}
Hence,
\begin{equation*}
\|f_j(A)-f_j(B)\|_{\frac{p}{\alpha}} \lesssim_{p,\alpha} 2^{j\left(\alpha+\frac{1}{p^\sharp}\right)}\|f_j\|_{p^\sharp}\|A-B\|_p^{\alpha},
\end{equation*}
since $\frac{1-\alpha}{p^\sharp}+\frac{\alpha}{p} = \frac{1}{p^\sharp}+\alpha\left(\frac{1}{p}-\frac{1}{p^\sharp}\right) = \alpha+\frac{1}{p^\sharp}$.
\end{proof}
Theorem \ref{main_holder_trick} implies the following sufficient condition for a function to be $\alpha$-H\"older in $\mathcal{L}_p$. Interestingly, the condition differs
depending on $p\leq \alpha$ or $p> \alpha$. With $p=1$, this recovers \cite[Theorem 5.3]{Aleksandrov-Peller-holder-2010}.
\begin{theorem}
Let $0 < p \leq 1$ and $\alpha \in (0,1)$. If $f$ is an $\alpha$-H\"older function such that $f \in \dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\min\{1,\frac{p}{\alpha}\}}(\mathbb{R})$
then for all bounded self-adjoint operators $A$ and $B$ with $A-B\in \mathcal{L}_p$ we have
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha}} \lesssim_{p,\alpha} \|f\|_{\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\min\{1,\frac{p}{\alpha}\}}}\|A-B\|_p^{\alpha}.
\end{equation*}
\end{theorem}
\begin{proof}
With $\nu := \min\{1,\frac{p}{\alpha}\}$, the quasi-norm $\|\cdot \|_{\frac{p}{\alpha}}$ obeys a $\nu$-triangle inequality. Therefore, applying the representation from Lemma \ref{holder_wavelet_realisation} yields
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha}}^\nu \leq \sum_{j\in \mathbb{Z}} \|f_j(A)-f_j(B)\|_{\frac{p}{\alpha}}^\nu.
\end{equation*}
Theorems \ref{main_holder_trick} and \ref{besov_space_wavelet_characterisation} together imply
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha}}^\nu \lesssim \sum_{j\in \mathbb{Z}} 2^{j\nu\left(\alpha+\frac{1}{p^\sharp}\right)}\|f_j\|_{p^\sharp}^\nu\|A-B\|_p^{\alpha \nu} \approx \|f\|_{\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\nu}}^\nu \|A-B\|_p^{\alpha\nu}.
\end{equation*}
\end{proof}
\subsection{Weak-type H\"older estimates}\label{weak_holder_section}
Aleksandrov and Peller have proved that for all $p\in [1,\infty]$, $\alpha \in (0,1)$ we have
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha},\infty} \lesssim_{p,\alpha} \|f\|_{C^{\alpha}}\|A-B\|_p^\alpha,\quad A=A^*,B=B^*\in \mathcal{B}(H),\,A-B\in \mathcal{L}_p
\end{equation*}
where $\|f\|_{C^\alpha}$ is the $\alpha$-H\"older norm (see \cite[Theorem 5.4]{Aleksandrov-Peller-holder-2010}). This result can be viewed as a complement to
the main result of \cite{cpsz}, which states that
\begin{equation*}
\|f(A)-f(B)\|_{1,\infty} \lesssim \|f'\|_\infty \|A-B\|_1,\quad A=A^*,B=B^*\in \mathcal{B}(H),\,A-B\in \mathcal{L}_1.
\end{equation*}
In order to continue this theme, we will study H\"older-type estimates for $\|f(A)-f(B)\|_{\frac{p}{\alpha},\infty}$ where $0 < p < 1$.
The following argument is closely based on \cite[Theorem 5.1]{Aleksandrov-Peller-holder-2010}, the essential difference is that we use the wavelet decomposition
in place of the Littlewood-Paley decomposition. Note that by Theorem \ref{besov_space_wavelet_characterisation},
if $f \in \dot{B}^s_{p,q}(\mathbb{R})$ for some $s\in \mathbb{R}$, $p,q\in (0,\infty]$ then for every $j\in \mathbb{Z}$ we have $f_j \in L_{\infty}(\mathbb{R}).$
\begin{theorem}
Let $\alpha \in (0,1)$ and $p \in (0,1]$. Let $f$ be an $\alpha$-H\"older function. Let $A$ and $B$ be self-adjoint bounded operators such that $A-B$ is compact. For all $n\geq 0$ we have
\begin{equation}
\mu(n,f(A)-f(B)) \lesssim_{p,\alpha} (1+n)^{-\frac{\alpha}{p}}\|f\|_{\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}}\left(\sum_{k=0}^n \mu(k,A-B)^p\right)^{\frac{\alpha}{p}}.
\end{equation}
\end{theorem}
\begin{proof}
Let $N \in \mathbb{Z}$, to be specified shortly. By the inequality \eqref{p_ky_fan}, for all $n\geq 0$ we have
\begin{equation*}
\sum_{k=0}^n \mu(k,\sum_{j\leq N} f_j(A)-f_j(B))^p \leq \sum_{j\leq N} \sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p.
\end{equation*}
According to Theorem \ref{submajor_thm}, we have
\begin{equation*}
\sum_{k=0}^n \mu(k,f_j(A)-f_j(B))^p \lesssim_p 2^{j}\|f_j\|_{p^\sharp}^p\sum_{k=0}^n \mu(k,A-B)^p.
\end{equation*}
Therefore,
\begin{equation*}
\sum_{k=0}^n \mu(k,\sum_{j\leq N} f_j(A)-f_j(B))^p \lesssim_p \sum_{j\leq N} 2^{j}\|f_j\|_{p^\sharp}^p\left(\sum_{k=0}^n \mu(k,A-B)^p\right).
\end{equation*}
By Theorem \ref{besov_space_wavelet_characterisation}, for all $j \in \mathbb{Z}$ we have
\begin{equation*}
\|f_j\|_{p^\sharp} \lesssim 2^{-j(\alpha+\frac{1}{p^\sharp})}\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}}.
\end{equation*}
Hence, taking into account that $1-\alpha p -\frac{p}{p^\sharp} = p(1-\alpha)>0$, we have
\begin{align*}
\sum_{k=0}^n \mu(k,\sum_{j\leq N} f_j(A)-f_j(B))^p &\lesssim_p \sum_{j\leq N} 2^{j(1-\alpha p-\frac{p}{p^\sharp})}\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}}^p\left(\sum_{k=0}^n \mu(k,A-B)^p\right)\\
&\lesssim_{p,\alpha} 2^{Np(1-\alpha)}\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}}^p\left(\sum_{k=0}^n \mu(k,A-B)^p\right).
\end{align*}
That is,
\begin{equation*}
\left(\sum_{k=0}^n \mu(k,\sum_{j\leq N} f_j(A)-f_j(B))^p\right)^{\frac{1}{p}} \lesssim_{p,\alpha} 2^{N(1-\alpha)}\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}}\left(\sum_{k=0}^n \mu(k,A-B)^p\right)^{\frac{1}{p}}.
\end{equation*}
It follows that
\begin{equation}\label{low_frequency}
\mu(n,\sum_{j\leq N} f_j(A)-f_j(B))\lesssim_{p,\alpha} (1+n)^{-\frac{1}{p}}2^{N(1-\alpha)}\|f\|_{\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}}\left(\sum_{k=0}^n \mu(k,A-B)^p\right)^{\frac{1}{p}}.
\end{equation}
Putting this aside for the moment, we consider the norm $\left\|\sum_{j> N} f_j(A)-f_j(B)\right\|_\infty$. By the triangle inequality, this is controlled by
\begin{equation*}
\left\|\sum_{j> N} f_j(A)-f_j(B)\right\|_\infty \leq \sum_{j> N} \|f_j(A)-f_j(B)\|_\infty \leq \sum_{j> N} 2\|f_j\|_\infty.
\end{equation*}
By Theorem \ref{besov_space_wavelet_characterisation}, we have $\|f_j\|_\infty\lesssim_{\alpha} 2^{-j\alpha} \|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}$. Therefore
\begin{equation}\label{high_frequnecy}
\left\|\sum_{j>N} f_j(A)-f_j(B)\right\|_\infty \lesssim_{\alpha} \sum_{j>N} 2^{-j\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}} \lesssim_{\alpha} 2^{-N\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}.
\end{equation}
Now we combine \eqref{low_frequency} and \eqref{high_frequnecy} to estimate $\mu(n,f(A)-f(B)).$ Using the representation from Lemma \ref{holder_wavelet_realisation}, we have
\[
f(A)-f(B) = \sum_{j\in \mathbb{Z}} f_j(A)-f_j(B).
\]
Since $A$ and $B$ are bounded and the series in Lemma \ref{holder_wavelet_realisation} converges uniformly over compact subsets, this series converges in the operator norm.
Therefore,
\begin{align*}
\mu(n,f(A)-f(B)) &= \mu\left(n,\sum_{j\leq N} f_j(A)-f_j(B) + \sum_{j> N} f_j(A)-f_j(B)\right)\\
&\leq \mu\left(n,\sum_{j\leq N} f_j(A)-f_j(B)\right)+\left\|\sum_{j> N} f_j(A)-f_j(B)\right\|_\infty\\
&\stackrel{\eqref{high_frequnecy}}{\lesssim_{\alpha}} \mu\left(n,\sum_{j\leq N} f_j(A)-f_j(B)\right)+ 2^{-N\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}\\
&\stackrel{\eqref{low_frequency}}{\lesssim_{\alpha}} (1+n)^{-\frac{1}{p}}2^{N(1-\alpha)}\|f\|_{\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}}\left(\sum_{k=0}^n\mu(k,A-B)^p\right)^{1/p} + 2^{-N\alpha}\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}.
\end{align*}
Now we choose $N\in \mathbb{Z}$ such that
\begin{equation*}
2^{-N-1} \leq (1+n)^{-\frac{1}{p}}\left(\sum_{k=0}^n \mu(k,A-B)^p\right)^{\frac{1}{p}} \leq 2^{-N}.
\end{equation*}
Hence,
\begin{equation*}
\mu(n,f(A)-f(B)) \lesssim_{\alpha,p} (\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}}+\|f\|_{\dot{B}^{\alpha}_{\infty,\infty}}) ((1+n)^{-\frac{1}{p}+(1-\alpha)\frac{1}{p}}+(1+n)^{-\frac{\alpha}{p}})\left(\sum_{k=0}^n \mu(k,A-B)^p\right)^{\frac{\alpha}{p}}.
\end{equation*}
Since $\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty} \subseteq \dot{B}^{\alpha}_{\infty,\infty}$ (this follows from \eqref{sequential_bernstein}), the desired result follows.
\end{proof}
It follows immediately from the definition of the $\mathcal{L}_{\frac{p}{\alpha},\infty}$ quasi-norm \eqref{weak_lp_def} that we have the following:
\begin{theorem}\label{weak_holder_thm}
Let $p \in (0,1]$ and $\alpha \in (0,1)$. Assume that $f \in \dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}(\mathbb{R})$ is an $\alpha$-H\"older function. For all self-adjoint bounded operators $A$ and $B$ such that $A-B\in \mathcal{L}_p$ we have
\begin{equation*}
\|f(A)-f(B)\|_{\frac{p}{\alpha},\infty} \lesssim_{p,\alpha} \|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}} \|A-B\|_p^{\alpha}.
\end{equation*}
\end{theorem}
Theorem \ref{weak_holder_thm} is related to results obtained in \cite{HSZ2019,Ricard-2018,Sobolev-jlms-2017}. The class $S_{d,\alpha}$
introduced by A.~V.~Sobolev \cite{Sobolev-jlms-2017} is the space of functions $f \in C^d(\mathbb{R} \setminus \{0\})$ such that
$$
|f^{(k)}(x)|\lesssim_k |x|^{\alpha-k},\quad 0\leq k\leq d,\; x \in \mathbb{R}\setminus \{0\}.
$$
It was proved in \cite[Theorem 1.2]{HSZ2019} that if $f \in S_{d,\alpha}$ where $d > \frac{1}{p}+2$ then
\[
\|f(A)-f(B)\|_{\frac{p}{\alpha}} \lesssim_{p,\alpha,f} \|A-B\|_{p}^{\alpha},\quad A,B\in \mathcal{B}_{\mathrm{sa}}(H),\; A-B\in \mathcal{L}_p.
\]
Theorem \ref{weak_holder_thm} complements that result in the sense that the class of functions is wider, but the $\mathcal{L}_{\frac{p}{\alpha}}$ estimate is weakened to an $\mathcal{L}_{\frac{p}{\alpha},\infty}$
estimate.
We can prove that the class of functions is indeed larger using the following characterisation of the Besov seminorm. If $n\in \mathbb{N}$ is such that $\max\{\frac{1}{p^\sharp}-1,0\} < s < n$, then
\begin{equation}\label{holder_zygmund_characterisation}
\|f\|_{\dot{B}^s_{p^{\sharp},\infty}} \approx \sup_{h>0} h^{-s}\left(\int_{-\infty}^\infty \left|\sum_{k=0}^n \binom{n}{k}(-1)^{n-k}f(t+kh)\right|^{p^{\sharp}}\,dt\right)^{1/p^\sharp}.
\end{equation}
The choice of $n>s$ is irrelevant. This is well-known when $p^\sharp\geq 1$, see e.g. \cite[Theorem 2.39]{Sawano2018}. For $p^\sharp < 1,$ see \cite[Chapter 11, Theorem 18]{PeetreBook}.
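For instance, for $\max\{\frac{1}{p^\sharp}-1,0\} < s < 1$ one may take $n=1$ in \eqref{holder_zygmund_characterisation}, which gives the familiar first-difference form
\begin{equation*}
\|f\|_{\dot{B}^s_{p^{\sharp},\infty}} \approx \sup_{h>0} h^{-s}\left(\int_{-\infty}^\infty |f(t+h)-f(t)|^{p^{\sharp}}\,dt\right)^{1/p^\sharp}.
\end{equation*}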
\begin{theorem}
If $d > \frac{1}{p}$ and $0 < \alpha < 1$, then
\begin{equation*}
S_{d,\alpha} \subset \dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}(\mathbb{R}).
\end{equation*}
\end{theorem}
\begin{proof}
Let $f \in S_{d,\alpha}.$ Since $d > \frac{1}{p}$, in particular we have $d > \alpha+\frac{1}{p}-1 = \alpha+\frac{1}{p^\sharp}$. It follows that we can take $n=d$
in \eqref{holder_zygmund_characterisation}. Therefore,
\[
\|f\|_{\dot{B}^{\frac{1}{p^\sharp}+\alpha}_{p^\sharp,\infty}} \approx \sup_{h>0} h^{-\frac{1}{p^\sharp}-\alpha}\left(\int_{-\infty}^\infty \left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^{\sharp}}\,dt\right)^{1/p^\sharp}.
\]
We split the integral into the regions $|t|<dh+h$ and $|t|>dh+h.$ That is,
\begin{align*}
\int_{-\infty}^\infty \left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^{\sharp}}\,dt &= \int_{-dh-h}^{dh+h} \left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^{\sharp}}\,dt\\
&\quad+\int_{|t|>dh+h} \left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^{\sharp}}\,dt.
\end{align*}
By repeated application of the fundamental theorem of calculus (equivalently, the standard B-spline representation of finite differences), we have
\begin{equation*}
\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh) = h^d\int_0^d B_d(\theta)\,f^{(d)}(t+\theta h)\,d\theta,
\end{equation*}
where $B_d\geq 0$ is the cardinal B-spline of order $d$, supported in $[0,d]$ and with $\int_0^d B_d(\theta)\,d\theta = 1$.
So by the definition of the Sobolev class $S_{d,\alpha},$ when $t>dh+h$ we have
\[
\left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right| \lesssim_d h^d \int_0^d B_d(\theta)|t+\theta h|^{\alpha-d}\,d\theta \leq h^d|t|^{\alpha-d}
\]
and when $t < -dh-h$,
\[
\left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right| \lesssim_d h^d |t+dh|^{\alpha-d}.
\]
Since $\alpha<1$ and $d > \frac{1}{p},$ the function $|t|^{p^\sharp(\alpha-d)}$ is integrable over $[h,\infty),$ and therefore
\[
\int_{|t|>dh+h}\left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^\sharp}\,dt \lesssim_d \int_{h}^\infty h^{dp^\sharp}|t|^{p^{\sharp}(\alpha-d)}\,dt \approx_{d,p} h^{\alpha p^\sharp+1}.
\]
On the other hand, for $|t|<dh+h$ we use the estimate
\begin{equation*}
\left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right| \lesssim_d \max_{0\leq \theta \leq dh} |t+\theta|^{\alpha} \leq (|t|+dh)^{\alpha} \lesssim_d |t|^{\alpha}+h^{\alpha}.
\end{equation*}
Therefore,
\[
\int_{|t|\leq dh+h}\left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^\sharp}\,dt \lesssim_{d,p} \int_{|t|<dh+h} |t|^{\alpha p^\sharp}+h^{\alpha p^\sharp}\,dt \approx_{d,p} h^{\alpha p^{\sharp}+1}.
\]
It follows that for $h > 0$ we have
\begin{equation*}
h^{-\alpha-\frac{1}{p^\sharp}}\left(\int_{-\infty}^\infty \left|\sum_{k=0}^d \binom{d}{k}(-1)^{d-k}f(t+kh)\right|^{p^\sharp}\,dt\right)^{\frac{1}{p^\sharp}} \lesssim_{d,p} h^{\frac{1}{p^{\sharp}}+\alpha-\frac{1}{p^{\sharp}}-\alpha} = 1.
\end{equation*}
Taking the supremum over $h > 0$ yields $f \in \dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}(\mathbb{R})$ from \eqref{holder_zygmund_characterisation}.
The embedding is strict, because $S_{d,\alpha}$ is not closed under translation while $\dot{B}^{\alpha+\frac{1}{p^\sharp}}_{p^\sharp,\infty}(\mathbb{R})$ is.
\end{proof}
\section{Introduction}
This paper presents a comparison of geometric multigrid
methods for the solution of systems arising from high-order (we target
polynomial orders up to 16) continuous finite element discretizations
of elliptic partial differential equations. Our particular interest is
to compare the efficiency of different multigrid methods for elliptic
problems with varying coefficients on complex geometries.
High-order spatial discretizations for these problems can have significant advantages
over low-order methods since they reduce the problem size for given
accuracy, and allow for better performance on modern hardware.
The main challenges in high-order discretizations are that the matrices
are denser than for low-order methods, and that they lose structural
properties such as the M-matrix property, which often makes it possible
to prove convergence of iterative solvers.
As illustrated in Figure~\ref{fig:approaches}, there are several
possibilities for constructing a multigrid hierarchy for high-order
discretizations: (1) high-order geometric $h$-multigrid, where the
mesh is coarsened geometrically and high-order interpolation and
prolongation operators are used; (2) $p$-multigrid, in which the
problem is coarsened by reducing the polynomial order, and the
interpolation and prolongation take into account the different order
basis functions; and (3) a first-order approximation as
preconditioner, constructed from the nodes of the high-order
discretization.
For the polynomial orders $1\le p\le 16$, we
compare these multigrid approaches, combined with different
smoothers. We also compare the use of multigrid as a solver as well as a preconditioner
in a Krylov subspace method. While we use moderate size model
problems (up to about $2$~million unknowns in 3D), we also discuss our findings with regard to parallel
implementations on high performance computing platforms.
We also discuss parallelization aspects relevant for implementations on
shared or distributed memory architectures. For instance, the
implementation of Gauss-Seidel smoothers can be challenging in
parallel~\cite{AdamsBrezinaHuEtAl03, BakerFalgoutKolevEtAl11}; for
this reason, we include a Chebyshev-accelerated Jacobi smoother in our
comparisons. This Chebyshev smoother is easy to implement in parallel,
and often is as effective a smoother as Gauss-Seidel.
We use high-order discretizations based on Legendre-Gauss-Lobatto
(LGL) nodal basis functions on quadrilateral or hexahedral
meshes. Tensorized basis functions allow for a fast, matrix-free
application of element matrices. This is particularly important for
high polynomial degrees in three dimensions, as element matrices
can become large. For instance, for a three-dimensional hexahedral mesh
and finite element discretizations with polynomial degree $p$, the
dense element
matrices are of size $(p+1)^3\times (p+1)^3$. Thus, for $p=8$, this
amounts to more than half a million entries per element. For
tensorized nodal basis functions on hexahedral meshes, the application
of elemental matrices to vectors can be implemented efficiently by
exploiting the tensor structure of the basis functions, as is common
for spectral elements~\cite{DevilleFischerMund02}.
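To make the tensor-product structure concrete, the following simplified Python sketch (ours; the paper's own implementation is in Matlab) applies a separable operator $D_1\otimes D_2\otimes D_3$ to the nodal values of a single hexahedral element, one direction at a time. A full stiffness application is a short sum of such passes, one per derivative direction, with pointwise scaling by geometric factors at the quadrature points.
\begin{verbatim}
import numpy as np

# Illustrative sketch: apply (D1 x D2 x D3) to u, where u holds the
# (p+1)^3 nodal values of one element and each Di is a (p+1)x(p+1)
# one-dimensional operator (e.g., differentiation or interpolation to
# quadrature points).  Each pass costs O(p^4) operations, versus O(p^6)
# for multiplying by a fully assembled element matrix.
def apply_tensor(D1, D2, D3, u):
    u = np.einsum('ai,ijk->ajk', D1, u)
    u = np.einsum('bj,ajk->abk', D2, u)
    u = np.einsum('ck,abk->abc', D3, u)
    return u
\end{verbatim}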
\begin{figure}[t]
\begin{tikzpicture}[scale=1.0]
\draw (-5,4) grid +(4,1);
\foreach \e in {-5,...,-2}
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (\e+\x, 4) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.1727) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.5) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.8273) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 5) circle (0.03);
}
\draw[-latex',thick] (-3, 3.75) -- node[right] {{\scriptsize $h$-coarsen}} (-3, 3.25);
\draw (-5,2) rectangle +(4,1);
\draw (-3,2) -- (-3,3);
\foreach \e in {-5,-3}
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (\e+2*\x, 2) circle (0.03);
\draw[fill=utsecblue] (\e+2*\x, 2.1727) circle (0.03);
\draw[fill=utsecblue] (\e+2*\x, 2.5) circle (0.03);
\draw[fill=utsecblue] (\e+2*\x, 2.8273) circle (0.03);
\draw[fill=utsecblue] (\e+2*\x, 3) circle (0.03);
}
\draw[-latex',thick] (-3, 1.75) -- node[right] {{\scriptsize $h$-coarsen}} (-3, 1.25);
\draw (-5,0) rectangle +(4,1);
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (-5+4*\x, 0) circle (0.03);
\draw[fill=utsecblue] (-5+4*\x, 0.1727) circle (0.03);
\draw[fill=utsecblue] (-5+4*\x, 0.5) circle (0.03);
\draw[fill=utsecblue] (-5+4*\x, 0.8273) circle (0.03);
\draw[fill=utsecblue] (-5+4*\x, 1) circle (0.03);
}
\draw (0,4) grid +(4,1);
\foreach \e in {0,...,3}
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (\e+\x, 4) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.1727) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.5) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.8273) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 5) circle (0.03);
}
\draw[-latex',thick] (2, 3.75) -- node[right] {{\scriptsize $p$-coarsen}} (2, 3.25);
\draw (0,2) grid +(4,1);
\foreach \x in {0,0.5,...,4} {
\draw[fill=utsecblue] (\x, 2) circle (0.03);
\draw[fill=utsecblue] (\x, 2.5) circle (0.03);
\draw[fill=utsecblue] (\x, 3) circle (0.03);
}
\draw[-latex',thick] (2, 1.75) -- node[right] {{\scriptsize $p$-coarsen}} (2, 1.25);
\draw (0,0) grid +(4,1);
\foreach \x in {0,1,2,3,4} {
\draw[fill=utsecblue] (\x, 0) circle (0.05);
\draw[fill=utsecblue] (\x, 1) circle (0.05);
}
\draw (5,4) grid +(4,1);
\foreach \e in {5,...,8}
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (\e+\x, 4) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.1727) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.5) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 4.8273) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 5) circle (0.03);
}
\draw[-latex',thick] (7, 3.75) -- node[right] {{\scriptsize sparsify}} (7, 1.25);
\draw[step=0.5] (4.99,0) grid +(4.01,1);
\draw (5,0.1727) -- (9,0.1727);
\draw (5,0.8273) -- (9,0.8273);
\foreach \e in {5,...,8} {
\draw (\e+0.1727,0) -- (\e+0.1727,1);
\draw (\e+0.8273,0) -- (\e+0.8273,1);
\foreach \x in {0,0.1727,0.5,0.8273, 1.0} {
\draw[fill=utsecblue] (\e+\x, 0) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 0.1727) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 0.5) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 0.8273) circle (0.03);
\draw[fill=utsecblue] (\e+\x, 1) circle (0.03);
}
}
\node at (-3, -0.5) { (a) };
\node at (2, -0.5) { (b) };
\node at (7, -0.5) { (c) };
\end{tikzpicture}
\caption{\label{fig:approaches} Illustration of
different multigrid hierarchies for high-order
finite element discretizations: (a) high-order
$h$-multigrid, (b) $p$-multigrid and (c)
low-order approximation preconditioner based
on the nodes of the high-order discretization
.}
\end{figure}
{\em Related work:} Multigrid for high-order/spectral finite elements
has been studied since as early as the 1980s. In~\cite{RonquistPatera87},
the authors observe that point smoothers such as the simple Jacobi
method result in resolution-independent convergence rates for
high-order elements on simple one and two-dimensional
geometries. Initial theoretical evidence for this behavior is given
in~\cite{MadayMunoz88}, where multigrid convergence is studied for
one-dimensional spectral methods and spectral element problems.
The use of $p$-multigrid is rather common in the context of
high-order discontinuous Galerkin discretizations
\cite{FidkowskiOliverLuEtAl05, HelenbrookAtkins06}, but $p$-multigrid
has also been used for continuous finite element discretizations~\cite{HelenbrookMavriplisAtkins03}.
A popular strategy for high-order discretizations on unstructured
meshes, for which geometric mesh coarsening is
challenging, is to assemble a low-order approximation of the
high-order system and use an algebraic multigrid method to invert the
low-order (and thus much sparser) operator~\cite{Brown10, Kim07,
DevilleMund90, Olson07, CanutoGervasioQuarteroni10}.
In~\cite{HeysManteuffelMcCormickEtAl05}, this approach is compared
with the direct application of algebraic multigrid to the high-order
operator and the authors find that one of the main difficulties is the
assembly of the high-order matrices required by algebraic multigrid
methods.
{\em Contributions:}
There has been a lot of work on high-order discretization methods and on the efficient application of the resulting operators. However, efficient solvers for such
discretization schemes have received much less attention. In particular, theoretical and experimental studies of the actual performance (say, the number of v-cycles or matrix-vector products needed to solve a system) of
the different schemes under different scenarios are scattered, and a systematic analysis of such performance is not available.
In this paper, we address this gap in the existing literature. In particular we (1) consider high-order continuous Galerkin discretizations up to 16th order, (2) examine three different multigrid hierarchies ($h$, $p$, and first-order), (3) examine several different smoothers: Jacobi, polynomial, SSOR, and block Jacobi, (4) consider different settings (constant, mildly variable, and highly variable) of coefficients and (5) consider problems in 2D and 3D.
To our knowledge, this is the first study of this kind. Our results demonstrate significant variability in the performance of the different schemes for higher-order
elements, highlighting the need for further research on the smoothers.
Although the overall runtime will depend on several factors, including the implementation and the target architecture, in this work we limit ourselves to characterizing performance as the number of fine-grid matrix-vector products needed for convergence. This is the dominant cost and is also independent of the implementation and architecture, allowing for easier interpretation and systematic comparison with other approaches.
Finally, we provide an easily extendable Matlab
implementation,\footnote{\url{http://hsundar.github.io/homg/}} which allows a systematic comparison of the different methods in the same framework.
{\em Limitations:} While this work is partly driven by our interest in
scalable parallel simulations on nonconforming meshes derived from
adaptive octrees (e.g.,\cite{SundarBirosBursteddeEtAl12}),
for the comparisons
presented in this paper we restrict ourselves to moderate size
problems on conforming meshes. We do not fully address time-to-solution, as
we do not use a high-performance implementation. However, recent
results using a scalable parallel implementation indicate that many of
our observations generalize to non-conforming meshes and that the
methods are scalable to large parallel computers
\cite{GholaminejadMalhotraSundarEtAl14}. While we
experiment with problems with strongly varying coefficients, we do not
study problems with discontinuous or anisotropic coefficients, nor consider
ill-shaped elements.
{\em Organization of this paper:} In \S\ref{sec:problem} we describe
the test problem, as well as discretization approach for the different
multigrid schemes. In \S\ref{sec:approaches}, we describe in detail
the different multilevel approaches for solving the resulting
high-order systems. In \S\ref{sec:numerics}, we present a
comprehensive comparison of different approaches using test problems
in 2D and 3D. Finally, in \S\ref{sec:discuss} we draw conclusions and
discuss our findings.
\section{Problem statement and preliminaries}
\label{sec:problem}
We wish to solve the Poisson problem with
homogeneous Dirichlet boundary conditions on an open bounded domain
$\Omega\subset\mathbb R^d$ ($d=2$ or $d=3$) with boundary $\partial
\Omega$, i.e., we seek the solution $u(\bs x)$ of:
\begin{equation}\label{eq:Poisson}
\begin{aligned}
-\nabla\cdot\left(\mu(\bs x)\nabla u(\bs x)\right) &= f(\bs x) \quad &&\text{ for } \bs x\in \Omega,\\
\quad u(\bs x)& = 0 \quad &&\text{ for } \bs x\in \partial\Omega.
\end{aligned}
\end{equation}
Here, $\mu(\bs x)\ge \mu_0>0$ is a spatially varying coefficient that
is bounded away from zero, and $f(\bs x)$ is a given right hand side. We
discretize~\eqref{eq:Poisson} using finite elements with basis
functions of polynomial order $p$ and solve the resulting discrete
system using different multigrid variants. Next, in
\S\ref{subsec:galerkin} and \S\ref{sub:restriction_&_prolongation}, we
discuss the Galerkin approximation to~\eqref{eq:Poisson} and the setup
of the inter-grid transfer operators to establish a multilevel
hierarchy. In \S\ref{sub:meshing}, we discuss details of the meshes
and implementation used for our comparisons.
\subsection{Galerkin approximation} \label{subsec:galerkin}
Given a bounded, symmetric bilinear form\footnote{In our case,
$a(u,v)=\int_\Omega \mu\nabla u \cdot \nabla v$.} $a(u,v)$ that is
coercive
on $H_0^{1}(\Omega)$, and $f \in L^{2}(\Omega)$, we want to find $u
\in H_0^{1}(\Omega)$ such that $u$ satisfies
\begin{equation}
\label{eqn:weakForm}
a(u,v) = (f,v)_{L^2(\Omega)}, \ \ \ \forall v \in H_0^{1}(\Omega),
\end{equation}
where $(f,v)_{L^2(\Omega)} = \int_\Omega fv\,dx$ and
$H_0^1(\Omega)\subset L^2(\Omega)$ denotes the subspace of
functions with square integrable derivatives that vanish on the
boundary.
This problem is known to have a unique solution $u^*$ \cite{BrennerScott94}.
We now derive discrete equations whose solutions
approximate the solution of
(\ref{eqn:weakForm}). First, we define a sequence of $m$ nested conforming
{\em finite}-dimensional spaces, $V_1 \subset V_2 \subset \cdots \subset V_m \subset
H_0^{1}(\Omega)$.
Here, $V_k$ is the finite element space that corresponds to a finite element mesh
at a specified
polynomial order, and $V_{k-1}$ corresponds to the next coarser
problem,
as illustrated in Figure~\ref{fig:approaches}-(a,b) for different coarsenings. Then, the discretized problem on $V_k$ is to find
$u_k \in V_k$ such that
\begin{equation}
\label{eqn:galerkinForm}
a(u_{k},v_k) = (f,v_k)_{L^2(\Omega)}, \ \ \ \forall v_k \in V_k.
\end{equation}
This problem has a unique solution, and the sequence
$\{u_k\}$ converges to $u^*$ \cite{BrennerScott94}.
The $L^2$-projection of the linear operator corresponding to the
bilinear form $a(\cdot\,,\cdot)$ onto $V_k$ is defined as the linear
operator $A_k : V_{k} \rightarrow V_{k}$ such that
\begin{equation}
\label{eqn:fematDef}
(A_{k} v_k,w_k)_{L^2(\Omega)} = a(v_k,w_k), \ \ \ \forall v_k,w_k \in V_k.
\end{equation}
The operator $A_k$ is self-adjoint
with respect to the $L^2$-inner product and positive definite.
Let $\{\phi_1^k,\phi_2^k,\ldots,\phi_{N_k}^k\}$ be a basis for $V_k$ and
denote by $\mathbf{A_k}$ the representation of $A_k$ in that
basis. Then, \eqref{eqn:fematDef} becomes the linear matrix equation
for the coefficient vector $\mathbf{u}_k\in \mathbb{R}^{N_k}$
\begin{equation}
\mathbf{A}_k\mathbf{u}_k = \mathbf{f}_k,
\end{equation}
where, for $i,j = 1,2,\ldots,N_k$, the components of $\mathbf A_k$, $\mathbf u_k$ and $\mathbf f_k$ are given by
\begin{align*}
(\mathbf{A}_k)_{ij} =& a(\phi_i^k,\phi_j^k), \\% &&
(\mathbf{f}_{k})_j =& (f,\phi_j^k)_{L^2(\Omega)}, \\%&& \text{ for } j=1,2,\ldots,N_k, \\
(\mathbf{M}_k)_{ij} =& (\phi_i^k,\phi_j^k)_{L^2(\Omega)},
\end{align*}
where the integrals on the right hand sides are often approximated
using numerical quadrature.
Here, $\mathbf{M}_k$ is the mass matrix, which appears in the
approximation of the $L^2$-inner product in $V_k$ since
$(u_k,v_k)_{L^2(\Omega)} = \mathbf{u}_k^T\mathbf{M}_k\mathbf{v}_k$ for
all $u_k,v_k\in V_k$ with corresponding coefficient vectors
$\mathbf{u}_k,\mathbf{v}_k\in \mathbb{R}^{N_k}$.
\subsection{Restriction and prolongation}
\label{sub:restriction_&_prolongation}
Since the coarse-grid space is a subspace of the fine-grid
space, any coarse-grid function $v_{k-1}$ can be expanded in
terms of the fine-grid basis functions,
\begin{equation}
v_{k-1} = \sum_{i=1}^{N_{k-1}} \mathbf v_{i,k-1}\phi_i^{k-1} = \sum_{j=1}^{N_k} \mathbf v_{j,k}\phi_j^k,
\end{equation}
where, $\mathbf v_{i,k}$ and $\mathbf v_{i,k-1}$ are the coefficients in the basis
expansion for $v_{k-1}$ on the fine and coarse grids, respectively.
The application of the prolongation operator can be represented as a matrix-vector
product with the input vector as the coarse grid nodal values and the
output as the fine grid nodal values \cite{SampathBiros10}. The matrix
entries of this operator are thus the coarse grid shape functions evaluated at the fine-grid
vertices, $p_i$, i.e.,
\begin{equation}
\label{eq:Pstencil}
\mathbf P_{\!ij} = \phi_j^{k-1}(p_i) \quad \text{ for } 1\le i \le N_k, 1\le j\le N_{k-1}.
\end{equation}
This gives rise to two different operators depending on whether the
coarse grid is obtained via $h$-coarsening or whether it is obtained
via $p$-coarsening; see Figure~\ref{fig:approaches} for an
illustration of the two cases. The restriction operator is the adjoint
of the prolongation operator with respect to the mass-weighted inner
products. This only requires the application of the transpose of the
prolongation operator to vectors.
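As a concrete one-dimensional instance of the stencil \eqref{eq:Pstencil}, the following Python sketch (ours; the node sets are stand-ins rather than the LGL nodes used in our experiments) assembles the prolongation matrix by evaluating the coarse Lagrange shape functions at the fine nodes. The same routine covers both coarsenings of Figure~\ref{fig:approaches}: for $p$-coarsening, the two node sets are the lower- and higher-order nodes on the same element; for $h$-coarsening, the fine nodes of the child elements are first mapped into the coarse parent element.
\begin{verbatim}
import numpy as np

# P[i, j] = j-th coarse Lagrange shape function evaluated at fine node i.
def lagrange_prolongation(xc, xf):
    P = np.empty((len(xf), len(xc)))
    for j in range(len(xc)):
        for i, x in enumerate(xf):
            P[i, j] = np.prod([(x - xc[m]) / (xc[j] - xc[m])
                               for m in range(len(xc)) if m != j])
    return P

xc = np.array([-1.0, 0.0, 1.0])               # order-2 nodes (stand-in)
xf = np.array([-1.0, -0.65, 0.0, 0.65, 1.0])  # order-4 nodes (stand-in)
P = lagrange_prolongation(xc, xf)             # each row sums to 1
\end{verbatim}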
\subsection{Meshing and implementation}
\label{sub:meshing}
For the numerical comparisons in this work we consider domains that
are the image of a square or a cube under a
diffeomorphism, i.e., a smooth mapping from the reference domain
$S\coloneqq[0,1]^d$ to the physical domain $\Omega$. Hexahedral finite
element meshes and tensorized nodal basis function based on
Legende-Gauss-Lobotto (LGL) points are used. We use isoparametric
elements to approximate the geometry of $\Omega$, i.e., on each element the geometry
diffeomorphism is approximated using the same basis
functions as the
finite element approximation. The Jacobians for this transformation
are computed at every quadrature point, and Gauss quadrature is used
to numerically approximate integrals.
We assume that the coefficient $\mu$ is a given function, which, at each level, can be evaluated at the respective quadrature points.
We restrict our comparisons to uniformly refined conforming meshes. Our implementation, written in Matlab, is publicly available. It allows comparisons of different smoothing and
coarsening methods for high-order discretized problems in two and three dimensions, and can
easily be modified or extended. It does not support distributed memory
parallelism, and is
restricted to conforming meshes that can be mapped to a square (in 2D) or a
cube (in 3D). While, in practice, matrix assembly for
high-order discretizations is discouraged, we use sparse
assembled operators in this prototype implementation.
Note that for hexahedral elements in combination with a
tensorial finite element basis, the effect of matrix-free operations
for higher-order elements can be quite significant\footnote{For tetrahedral elements,
this difference might be less pronounced.} in terms of
floating point operations, memory requirements, and actual run time:
\begin{itemize}
\item \emph{Memory requirements for assembled matrices:} For an order
$p$, assembled element matrices are dense and of the size
$(p+1)^3\times(p+1)^3$. For $p=9$, for instance, $(p+1)^3=1000$ and
thus each element contributes $10^6$ entries to the assembled
stiffness matrix, and each row in the matrix contains, on average,
several 1000 nonzero entries. Thus, for high orders, memory becomes
a significant issue.
\item \emph{Floating point operations for matrix-free versus assembled
MatVec:} For hexahedral elements, the operation count for a
tensorized matrix-free matvec is $\mathcal{O}(p^4)$ as opposed to
$\mathcal{O}(p^6)$ for a fully assembled matrix \cite{orszag80,DevilleFischerMund02}.
\end{itemize}
Detailed theoretical and experimental arguments in favor of matrix-free approaches, especially for high-order discretizations can be found in \cite{DevilleFischerMund02,BursteddeGhattasGurnisEtAl08,MayBrownLePourhiet14}
\section{Multigrid approaches for high-order finite element discretizations}
\label{sec:approaches}
In this section, we summarize different multigrid approaches
for high-order/spectral finite element
discretizations, which can either be used as
a solver or can serve as a preconditioner within a Krylov method. We
summarize different approaches for the construction of
multilevel hierarchies in~\S\ref{subsec:hierarchy} and discuss
smoothers in~\S\ref{subsec:smoothers}.
\subsection{Hierarchy construction, restriction and prolongation operators}\label{subsec:hierarchy}
There are several possibilities to build a multilevel hierarchy for
high-order discretized problems; see Figure~\ref{fig:approaches}. One
option is the construction of a geometric mesh hierarchy while keeping
the polynomial order unchanged; we refer to this approach as
high-order \emph{$h$-multigrid}. An alternative is to construct coarse
problems by reducing the polynomial degree of the finite element basis
functions, possibly followed by standard geometric multigrid; this is
commonly referred to as \emph{$p$-multigrid}. For unstructured
high-order element discretizations, where geometric coarsening is
challenging, using an algebraic multigrid hierarchy of a
\emph{low-order approximation to the high-order operator} as a
preconditioner has proven efficient. Some details of
these different approaches are summarized next.
\subsubsection{$h$-multigrid}\label{subsec:h}
A straightforward extension of low-order to high-order geometric
multigrid is to use the high-order discretization of the operator for
the residual computation on each multigrid level, combined with
high-order restriction and prolongation operators
(see \S\ref{sub:restriction_&_prolongation}).
For hexahedral (or quadrilateral) meshes, the required high-order residual
computations and the application of the interpolation and restriction operators
can often be accelerated using elementwise computations and
tensorized finite element basis functions, as is common
in spectral element methods \cite{DevilleFischerMund02}.
\subsubsection{$p$-multigrid}\label{subsec:p}
In the $p$-multigrid approach, a multigrid hierarchy is obtained by reducing
the polynomial order of the element basis functions.
Starting from an order-$p$ polynomial basis (for
simplicity, we assume here that $p$ is a power of 2), the coarser
grids correspond to polynomials of order $p/2, p/4,\ldots,1$, followed
by geometric coarsening of the $p=1$ grid (i.e., standard low-order
geometric multigrid). As for high-order $h$-multigrid, devising
smoothers can be a challenge for $p$-multigrid. Moreover, one often
finds dependence of the convergence factor on the order of the
polynomial basis \cite{MadayMunoz89}.
\subsubsection{Preconditioning by lower-order operator} \label{subsec:low}
In this defect correction approach (see
\cite{TrottenbergOosterleeSchuller01, Hackbusch85}), the high-order
residual is iteratively corrected using a low-order operator,
obtained by overlaying the high-order nodes with a low-order
(typically linear) finite element mesh. While the resulting low-order
operator has the same number of unknowns as the high-order operator,
it is much sparser and can, thus, be assembled efficiently and
provided as input to an algebraic multigrid method, which computes a
grid hierarchy through algebraic point aggregation. This construction
of a low-order preconditioner based on the nodes of the high-order
discretization is used, for instance in~\cite{Brown10, Kim07,
DevilleMund90, HeysManteuffelMcCormickEtAl05}. Due to the black-box
nature of algebraic multigrid, it is particularly attractive for
high-order discretizations on unstructured meshes. Note that even if
the mesh is structured, it is not
straightforward to use low-order geometric multigrid since the nodes---inherited from the high-order discretization---are not evenly spaced
(see Figure~\ref{fig:approaches}).
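As a one-dimensional illustration of this construction (ours, not taken from the cited implementations), the following Python sketch assembles the linear-FEM stiffness matrix on an arbitrary, possibly unevenly spaced node set, such as the nodes inherited from a high-order element; the resulting sparse matrix is what would be handed to an algebraic multigrid solver.
\begin{verbatim}
import numpy as np

# Linear FEM stiffness (for -u'' with unit coefficient) on nodes
# x[0] < ... < x[n-1]; one linear element per node interval, so the
# matrix is tridiagonal regardless of the node spacing.
def linear_fem_stiffness(x):
    n = len(x)
    K = np.zeros((n, n))
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    return K
\end{verbatim}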
\subsection{Smoothers}\label{subsec:smoothers}
In our numerical comparisons, we focus on point smoothers but we also compare
with results obtained with an elementwise block-Jacobi smoother.
In this section, we summarize different
smoothers and numerically study their behavior for high-order
discretizations. Note that multigrid
smoothers must target the reduction of the error components in the
upper half of the spectrum.
\subsubsection{Point smoothers}
We compare the Jacobi and the symmetric successive over-relaxation
(SSOR) smoothers, as well as a Chebyshev-accelerated Jacobi
smoother~\cite{Brandt77}. All of these smoothers require the diagonal of the
system matrix; if this matrix is not assembled (i.e., in a matrix-free approach),
these diagonal entries must be computed in a setup step; for high-order discretizations on deformed meshes, this can be a significant computation. Note that the
parallelization of Gauss-Seidel smoothers (such as SSOR) requires coloring of
unknowns in parallel, and, compared to Jacobi smoothing, more
complex communication in a distributed memory implementation. The
Chebyshev-accelerated Jacobi method is an alternative to SSOR; it
can significantly improve over Jacobi smoothing, while being as simple to
implement~\cite{AdamsBrezinaHuEtAl03}. The acceleration of Jacobi smoothing
with Chebyshev polynomials requires knowledge of the maximum eigenvalue of
the system matrix, usually estimated during setup with an iterative solver.
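For concreteness, a minimal Python sketch (ours) of one standard variant of this smoother is given below; it is based on the classical three-term Chebyshev recurrence (see also \cite{AdamsBrezinaHuEtAl03}). The targeted interval $[\lambda_{\max}/4,\lambda_{\max}]$ matches the choice used in our experiments, and the eigenvalue estimate is assumed to be given.
\begin{verbatim}
import numpy as np

# Illustrative sketch.  A: system matrix, diag: its diagonal, lmax:
# estimate of the largest eigenvalue of the Jacobi-preconditioned
# operator.  Damps error components with eigenvalues in [lmax/4, lmax].
def chebyshev_smooth(A, b, x, diag, lmax, steps=3):
    a_lo, a_hi = lmax / 4.0, lmax
    theta, delta = (a_hi + a_lo) / 2.0, (a_hi - a_lo) / 2.0
    sigma = theta / delta
    rho_old = 1.0 / sigma
    r = b - A @ x
    d = (r / diag) / theta
    x = x + d
    for _ in range(steps - 1):
        r = b - A @ x
        rho = 1.0 / (2.0 * sigma - rho_old)
        d = rho * rho_old * d + (2.0 * rho / delta) * (r / diag)
        x = x + d
        rho_old = rho
    return x
\end{verbatim}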
\subsubsection{Comparison of point smoothers}\label{subsubsec:ptsmoothcomparison}
In Figures~\ref{fig:smoothers} and~\ref{fig:smoothers-var}, we compare
the efficiency of these point smoothers for different polynomial
orders and constant and varying coefficients. For that purpose, we
compute the eigenvectors of the system matrix, choose a
zero right hand side and an initialization that has all unit
coefficients in the basis given by these eigenvectors. For the
polynomial orders $p=1,4,16$, we compare the performance of point
smoothers with and without a 2-level v-cycle with exact coarse solve.
The coarse grid for all polynomial orders is obtained using $h$-coarsening.
We depict the
coefficients after six smoothing steps in the left column, and the results
obtained for a two-grid method\footnote{For simplicity, we chose two
grids in our tests; the results for a multigrid v-cycle are
similar.} with three pre- and three post-smoothing steps (and thus overall six
smoothing steps on the finest grid) in the right column. The SSOR smoother
uses a lexicographic ordering of the unknowns, and we employ two pre- and one
post-smoothing steps, which again amounts to overall six smoothing steps
on the finest grid. The damping factors for Jacobi and SSOR smoothing
are $\omega = 2/3$ and $\omega=1$, respectively. The Chebyshev
smoother targets the part of the
spectrum given by $[\lambda_\text{max}/4,\lambda_\text{max}]$, where
$\lambda_\text{max}$ is the maximum eigenvalue of the system matrix, which is estimated using 10 iterations of the Arnoldi algorithm.
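The essence of this experiment is easy to reproduce. The following simplified Python sketch (ours) computes the eigen-coefficients of the error after a few damped Jacobi sweeps applied to the homogeneous system, which is the quantity displayed in the left columns of the figures.
\begin{verbatim}
import numpy as np

# Start from an error with unit coefficients in the eigenbasis of the
# (symmetric positive definite) matrix A, apply damped Jacobi sweeps
# (omega = 2/3) to the system A x = 0, and return the coefficients of
# the remaining error in the eigenbasis.
def jacobi_error_coefficients(A, steps=6, omega=2.0 / 3.0):
    lam, V = np.linalg.eigh(A)
    e = V.sum(axis=1)                 # error = sum of all eigenvectors
    D = np.diag(A)
    for _ in range(steps):
        e = e - omega * (A @ e) / D   # damped-Jacobi error propagation
    return V.T @ e
\end{verbatim}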
The results for the constant coefficient Laplacian operator on the
unit square (see Figure~\ref{fig:smoothers}) show that all point smoothers
decrease the error components in the upper half of the spectrum;
however,
the decrease is smaller for high-order elements. Observe that, compared to Jacobi smoothing, Chebyshev-accelerated Jacobi smoothing damps a larger
part of the spectrum. Both the Chebyshev and
SSOR methods outperform Jacobi smoothing, in particular for higher
orders. Combining the smoothers with a two-grid cycle, all error
components are decreased for all smoothers (and thus the resulting
two-grid methods converge, see Table~\ref{tab:box} in \S\ref{subsec:results}), but the error decreases more slowly for
higher polynomial orders. For high polynomial orders, a two-grid
iteration with SSOR smoothing results in a much better error
reduction than Jacobi or Chebyshev smoothing.
In Figure~\ref{fig:smoothers-var}, we study the performance of
different smoothers for the test problem {\bf 2d-var}, defined in
\S\ref{subsec:tests}. In this problem, we solve
\eqref{eq:Poisson} with a strongly (but smoothly) varying coefficient
$\mu$ on a deformed domain $\Omega$. Compared to the constant
coefficient case, Jacobi smoothing performs worse, both
when used as a solver and when used as a smoother. Let us focus on the two-grid
correction for polynomial order $p=16$ and compare with the results
obtained when using multigrid as a solver, shown in Table~\ref{tab:2d-fan}.
Jacobi smoothing does not lead to a converging two-grid algorithm, as
several coefficients are amplified by the two-grid cycle. For
Chebyshev smoothing, the multigrid v-cycle converges slowly
although one or two coefficients appear amplified in the two-grid
iteration. This convergence can be explained by the fact that errors
can be interchanged between different eigenvectors in the v-cycle.
SSOR smoothing combined with the two-grid method retains a significant
error reduction rate and, as a consequence, converges quickly.
\begin{figure}
\centering
\IfFileExists{homg-figure1.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure1.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-const-box.dat};
\addplot[color=blue!70, opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi1]{smoother-const-box.dat};
\addplot[color=red!70!black, opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev1]{smoother-const-box.dat};
\addplot[color=green!70!black,only marks, opacity=0.5,mark=*,mark size=1pt] table[x=dof, y=ssor1]{smoother-const-box.dat};
\end{semilogyaxis}
\draw[black, fill=white] (3.8, 3.7) rectangle +(3.05,1.7);
\node at (5.35, 5.53) {\bf \small{smoother, $p=1$}};
\node[fill=blue!70, draw, circle,minimum width=0.1cm] at (4.3, 5.0) {};
\node[fill=red!70!black, draw, circle,minimum width=0.1cm] at (4.3, 4.5) {};
\node[fill=green!70!black, draw, circle,minimum width=0.1cm] at (4.3, 4.0) {};
\node[text width=1.9cm] at (5.75, 5.0) {\small Jacobi$(6)$};
\node[text width=1.9cm] at (5.75, 4.5) {\small Chebyshev$(6)$};
\node[text width=1.9cm] at (5.75, 4.0) {\small SSOR$(3)$};
\end{tikzpicture}
}
\IfFileExists{homg-figure2.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure2.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-const-box.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi1]{vcycle-const-box.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev1]{vcycle-const-box.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor1]{vcycle-const-box.dat};
\end{semilogyaxis}
\draw[black, fill=white] (3.7, 3.7) rectangle +(3.15,1.7);
\node at (4.6, 5.53) {\bf \small{two-grid correction, $p=1$}};
\node[fill=blue!70, draw, circle,minimum width=0.1cm] at (4.0, 5.0) {};
\node[fill=red!70!black, draw, circle,minimum width=0.1cm] at (4.0, 4.5) {};
\node[fill=green!70!black, draw, circle,minimum width=0.1cm] at (4.0, 4.0) {};
\node[text width=2.1cm] at (5.55, 5.0) {\small Jacobi$(3,3)$};
\node[text width=2.1cm] at (5.55, 4.5) {\small Chebyshev$(3,3)$};
\node[text width=2.1cm] at (5.55, 4.0) {\small SSOR$(2,1)$};
\end{tikzpicture}
}
\\
\IfFileExists{homg-figure3.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure3.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-const-box.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi4]{smoother-const-box.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev4]{smoother-const-box.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor4]{smoother-const-box.dat};
\end{semilogyaxis}
\node at (5.35, 5.53) {\bf \small{smoother, $p=4$}};
\end{tikzpicture}
}
\IfFileExists{homg-figure4.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure4.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-const-box.dat};
\addplot[color=blue!70,only marks,opacity=0.5, mark=*,mark size=1pt] table[x=dof, y=jacobi4]{vcycle-const-box.dat};
\addplot[color=red!70!black,only marks,opacity=0.5, mark=*,mark size=1pt] table[x=dof, y=chebyshev4]{vcycle-const-box.dat};
\addplot[color=green!70!black,only marks,opacity=0.5, mark=*,mark size=1pt] table[x=dof, y=ssor4]{vcycle-const-box.dat};
\end{semilogyaxis}
\node at (4.6, 5.53) {\bf \small{two-grid correction, $p=4$}};
\end{tikzpicture}
}
\\
\IfFileExists{homg-figure5.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure5.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-const-box.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi16]{smoother-const-box.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev16]{smoother-const-box.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor16]{smoother-const-box.dat};
\end{semilogyaxis}
\node at (5.25, 5.53) {\bf \small{smoother, $p=16$}};
\end{tikzpicture}
}
\IfFileExists{homg-figure6.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure6.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-const-box.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi16]{vcycle-const-box.dat};
\addplot[color=red!70!black,opacity=0.4,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev16]{vcycle-const-box.dat};
\addplot[color=green!70!black,opacity=0.6,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor16]{vcycle-const-box.dat};
\end{semilogyaxis}
\node at (4.6, 5.53) {\bf \small{two-grid correction, $p=16$}};
\end{tikzpicture}
}
\caption{\label{fig:smoothers} Error decay for different point
smoothers when used as solver (left column) and when used in
a single two-grid step with exact coarse grid solution
(right column) for a two-dimensional, constant coefficient
Laplace problem on a unit square (problem {\bf 2d-const}
specified in \S\ref{subsec:tests}). To keep the
number of unknowns the same across all polynomial orders,
meshes of $32\times 32$, $8\times 8$ and $2\times 2$
elements are used for polynomial orders $p=1$, $p=4$ and
$p=16$, respectively. The horizontal axis is the
index for the eigenvectors of the system matrix $\mathbf{A}_k$, and
the vertical axis is the magnitude of the error component
for each eigenvector. The eigenvectors are ordered such that
the corresponding eigenvalues are ascending; thus, due to
the properties of $\mathbf{A}_k$, the
smoothness of the eigenvectors decays from left to
right. The system right-hand side is zero and the
initialization is chosen to have all unit coefficients in
the eigenvector expansion. A total of six smoothing steps is
used for all methods, and the coarse problem in the two-grid
step is solved by a direct solver.}
\end{figure}
\begin{figure}
\centering
\IfFileExists{homg-figure7.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure7.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-var-shell.dat};
\addplot[color=blue!70, opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi1]{smoother-var-shell.dat};
\addplot[color=red!70!black, opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev1]{smoother-var-shell.dat};
\addplot[color=green!70!black,only marks, opacity=0.5,mark=*,mark size=1pt] table[x=dof, y=ssor1]{smoother-var-shell.dat};
\end{semilogyaxis}
\draw[black, fill=white] (3.8, 3.7) rectangle +(3.05,1.65);
\node at (5.35, 5.53) {\bf \small{smoother, $p=1$}};
\node[fill=blue!70, draw, circle,minimum width=0.1cm] at (4.3, 5.0) {};
\node[fill=red!70!black, draw, circle,minimum width=0.1cm] at (4.3, 4.5) {};
\node[fill=green!70!black, draw, circle,minimum width=0.1cm] at (4.3, 4.0) {};
\node[text width=1.9cm] at (5.75, 5.0) {\small Jacobi$(6)$};
\node[text width=1.9cm] at (5.75, 4.5) {\small Chebyshev$(6)$};
\node[text width=1.9cm] at (5.75, 4.0) {\small SSOR$(3)$};
\end{tikzpicture}
}
\IfFileExists{homg-figure8.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure8.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-var-shell.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi1]{vcycle-var-shell.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev1]{vcycle-var-shell.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor1]{vcycle-var-shell.dat};
\end{semilogyaxis}
\draw[black, fill=white] (3.7, 3.7) rectangle +(3.15,1.65);
\node at (4.65, 5.53) {\bf \small{two-grid correction, $p=1$}};
\node[fill=blue!70, draw, circle,minimum width=0.1cm] at (4.0, 5.0) {};
\node[fill=red!70!black, draw, circle,minimum width=0.1cm] at (4.0, 4.5) {};
\node[fill=green!70!black, draw, circle,minimum width=0.1cm] at (4.0, 4.0) {};
\node[text width=2.1cm] at (5.55, 5.0) {\small Jacobi$(3,3)$};
\node[text width=2.1cm] at (5.55, 4.5) {\small Chebyshev$(3,3)$};
\node[text width=2.1cm] at (5.55, 4.0) {\small SSOR$(2,1)$};
\end{tikzpicture}
}
\\
\IfFileExists{homg-figure9.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure9.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-var-shell.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi4]{smoother-var-shell.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev4]{smoother-var-shell.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor4]{smoother-var-shell.dat};
\end{semilogyaxis}
\node at (5.35, 5.53) {\bf \small{smoother, $p=4$}};
\end{tikzpicture}
}
\IfFileExists{homg-figure10.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure10.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-var-shell.dat};
\addplot[color=blue!70,only marks, mark=*,opacity=0.5,mark size=1pt] table[x=dof, y=jacobi4]{vcycle-var-shell.dat};
\addplot[color=red!70!black,only marks, mark=*,opacity=0.5,mark size=1pt] table[x=dof, y=chebyshev4]{vcycle-var-shell.dat};
\addplot[color=green!70!black,only marks, mark=*,opacity=0.5,mark size=1pt] table[x=dof, y=ssor4]{vcycle-var-shell.dat};
\end{semilogyaxis}
\node at (4.65, 5.53) {\bf \small{two-grid correction, $p=4$}};
\end{tikzpicture}
}
\\
\IfFileExists{homg-figure11.pdf}{\includegraphics[width=0.49\textwidth]{homg-figure11.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961]
\addplot[color=black] table[x=dof, y=u]{smoother-var-shell.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi16]{smoother-var-shell.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev16]{smoother-var-shell.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor16]{smoother-var-shell.dat};
\end{semilogyaxis}
\node at (5.25, 5.53) {\bf \small{smoother, $p=16$}};
\end{tikzpicture}
}
\IfFileExists{homg-figure12.pdf}{\includegraphics[width=0.45\textwidth]{homg-figure12.pdf}}{
\begin{tikzpicture}[scale=0.9]
\begin{semilogyaxis}[ymajorgrids,ymin=1e-5,ymax=2,xmin=0,xmax=961,yticklabels={,,}]
\addplot[color=black] table[x=dof, y=u]{vcycle-var-shell.dat};
\addplot[color=blue!70,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=jacobi16]{vcycle-var-shell.dat};
\addplot[color=red!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=chebyshev16]{vcycle-var-shell.dat};
\addplot[color=green!70!black,opacity=0.5,only marks, mark=*,mark size=1pt] table[x=dof, y=ssor16]{vcycle-var-shell.dat};
\end{semilogyaxis}
\node at (4.65, 5.53) {\bf \small{two-grid correction, $p=16$}};
\end{tikzpicture}
}
\caption{\label{fig:smoothers-var} Shown is the same comparison as
in Figure~\ref{fig:smoothers}, but for the two-dimensional
warped geometry, variable coefficient problem {\bf 2d-var}
specified in \S\ref{subsec:tests}.}
\end{figure}
\subsubsection{Block-Jacobi smoothing}\label{subsubsec:schwarz}
An alternative smoothing approach for high-order discretizations is
based on local block solves. Since for high polynomial orders many
unknowns lie in the element interiors, Schwarz-type domain
decomposition smoothers are promising. For instance, they are more
stable for anisotropic meshes than point smoothers. A main challenge
of Schwarz-type smoothers is that they require the solution of dense
local systems. These are solved either by direct methods or by
approximations that allow for a fast iterative solution on hexahedral
meshes \cite{LottesFischer05, FischerLottes05}.
In
\S\ref{sec:numerics}, we compare the performance of point
smoothers with elementwise block-Jacobi smoothing.
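As a point of reference for that comparison, an elementwise block-Jacobi sweep can be sketched as follows (Python; the data layout and the damping factor are our own assumptions, and the dense local solves are exactly the cost issue mentioned above).
\begin{verbatim}
import numpy as np

def block_jacobi(A, b, x, element_dofs, omega=2.0/3.0):
    """One elementwise block-Jacobi sweep: solve the dense local system
    of each element exactly and apply a damped additive correction.
    Damping is needed because neighboring element blocks overlap in the
    shared interface unknowns."""
    r = b - A @ x
    dx = np.zeros_like(x)
    for dofs in element_dofs:              # one index array per element
        A_loc = A[np.ix_(dofs, dofs)]      # dense local block
        dx[dofs] += np.linalg.solve(A_loc, r[dofs])
    return x + omega * dx
\end{verbatim}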
\section{Numerical results}\label{sec:numerics}
In this section, we present a comprehensive comparison of our
algorithms for the solution of high-order discretizations of
\eqref{eq:Poisson}. After introducing our test problems in
\S\ref{subsec:tests}, we present a simple model for the computational
cost of the different approaches in terms of matrix-vector
applications in \S\ref{subsec:complexity}. In \S\ref{subsec:measures},
we specify settings and metrics for our comparisons. The results of
these comparisons are presented and discussed in
\S\ref{subsec:results}.
\subsection{Test problems}\label{subsec:tests}
We compare our algorithms for the solution of~\eqref{eq:Poisson} with
constant coefficient $\mu\equiv 1$ on the unit square and the unit cube,
and, with varying coefficients $\mu(\bs x)$, on the warped two and
three-dimensional domains shown in Figure~\ref{fig:mesh}. To be
precise, we consider the following four problems (a short sketch of
the coefficient fields is given after the list):
\begin{itemize}
\item {\bf 2d-const:} The domain $\Omega$ for the problem is the unit square, and $\mu\equiv
1$.
\item {\bf 2d-var:} The warped two-dimensional domain $\Omega$ is
shown on the left in Figure~\ref{fig:mesh}, and the varying
coefficient is $\mu(x,y) = 1 + 10^6(\cos^2(2\pi x) + \cos^2(2\pi
y))$. We also study a modification of this problem with a more
oscillatory coefficient
$\mu(x,y) = 1 + 10^6(\cos^2(10\pi x) +
\cos^2(10\pi y))$, which we refer to as {\bf 2d-var$'$}.
\item {\bf 3d-const:} For this problem, $\Omega$ is the unit cube,
and we use the constant coefficient $\mu\equiv 1$.
\item {\bf 3d-var:} The warped three-dimensional domain $\Omega$
shown on the right of Figure~\ref{fig:mesh} is used; the
varying coefficient is $\mu(x,y,z) = 1 + 10^6(\cos^2(2\pi x) +
\cos^2(2\pi y) + \cos^2(2\pi z))$.
\end{itemize}
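As referenced above, the variable coefficient fields are, in code form (Python; the function names are our own):
\begin{verbatim}
import numpy as np

def mu_2d_var(x, y):          # problem 2d-var
    return 1.0 + 1.0e6 * (np.cos(2*np.pi*x)**2 + np.cos(2*np.pi*y)**2)

def mu_2d_var_prime(x, y):    # problem 2d-var'
    return 1.0 + 1.0e6 * (np.cos(10*np.pi*x)**2 + np.cos(10*np.pi*y)**2)

def mu_3d_var(x, y, z):       # problem 3d-var
    return 1.0 + 1.0e6 * (np.cos(2*np.pi*x)**2 + np.cos(2*np.pi*y)**2
                          + np.cos(2*np.pi*z)**2)
\end{verbatim}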
\begin{figure}
\includegraphics[width=0.48\textwidth]{fan2d}
\includegraphics[width=0.48\textwidth]{fan3d}
\caption{\label{fig:mesh} Two and three-dimensional warped
meshes used in our numerical experiments. The color
illustrates the logarithm of the coefficient field, which
varies over six orders of magnitude.}
\end{figure}
\subsection{Comparing the computational cost}\label{subsec:complexity}
To compare the computational cost of the different methods, we focus
on the matrix-vector multiplications on the finest multigrid level,
which dominate the overall computation. Denoting the number of
unknowns on the finest level by $N$, the computational cost---measured
in floating point operations (flops)---for a matrix-vector product is
$Ng_p$, where $g_p$ is the number of flops per unknown and the
subscript $p$ indicates the polynomial order used in the FEM basis.
Since high-order discretizations result in less sparse operators,
$g_1\le g_2\le \ldots$ holds. The actual value of $g_p$ depends
strongly on the implementation. Also note that the conversion from
$g_p$ to wall-clock time is not trivial, as wall-clock timings depend
on caching, vectorization, blocking and other effects. Thus,
although $g_p$ increases with $p$, wall-clock
times might not increase as significantly.
In general, high-order implementations allow more memory locality,
which often results in higher performance compared to low-order
methods.
This discussion, however, is beyond the scope of this paper.
The dominant computational cost per iteration of the high-order
multigrid approaches discussed in \S\ref{sec:approaches} can
thus be summarized as
\begin{equation}\label{eq:compcost}
Ng_p(1+m(s_\text{pre}+s_\text{post})).
\end{equation}
Here, we denote by $s_\text{pre}$ and $s_\text{post}$ the number of
pre- and post-smoothing steps on the finest multigrid level,
respectively. Moreover, $m$ denotes the number of residual
computations (and thus matrix-vector computations) per smoothing step.
Jacobi smoothing and Chebyshev-accelerated Jacobi require $m=1$
matrix-vector multiplication per smoothing step, while SSOR requires
$m=2$ matrix-vector operations. If, in the approach discussed in
\S\ref{subsec:low}, the sparsified linear-element residual is used in the
smoother on the finest grid, the cost \eqref{eq:compcost} reduces to
\begin{equation}\label{eq:compcost2}
N(g_p + g_1 m(s_\text{pre}+s_\text{post})).
\end{equation}
However, since the overall number of iterations increases (see
\S\ref{subsec:results}), this does not necessarily decrease the
solution time.
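The two cost estimates can be encoded directly; the short helper below (our own notation) also confirms that the smoother settings used later, Jacobi(3,3) with $m=1$ and SSOR(2,1) with $m=2$, both cost $7Ng_p$ per v-cycle.
\begin{verbatim}
def vcycle_cost(N, g_p, s_pre, s_post, m, g_1=None):
    """Fine-level flop count per v-cycle, eq. (compcost); pass g_1 to
    use the sparsified low-order residual in the smoother, eq.
    (compcost2)."""
    if g_1 is not None:
        return N * (g_p + g_1 * m * (s_pre + s_post))
    return N * g_p * (1 + m * (s_pre + s_post))

assert vcycle_cost(1, 1, 3, 3, m=1) == 7   # Jacobi(3,3): 7*N*g_p
assert vcycle_cost(1, 1, 2, 1, m=2) == 7   # SSOR(2,1):   7*N*g_p
\end{verbatim}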
If the overall number of unknowns $N$ is kept fixed and the
solution is smooth, it is well known that the accuracy increases for
high-order discretizations. Due to the decreased sparsity of the
discretized operators, this does not automatically translate to more
accuracy per computation time; see, e.g.,~\cite{Brown10}. However, note
that many computations in, for instance, a multigrid
preconditioned conjugate gradient algorithm are of complexity
$\mathcal{O}(N)$ (see Algorithm~\ref{alg:pcg}) and are thus independent of
$g_p$. Thus, the computational cost of these steps does not depend on
the order of the discretization. Even if these $\mathcal{O}(N)$ steps
do not dominate the computation, they contribute to making high-order
discretizations favorable not only in terms of accuracy per unknown,
but also in terms of accuracy per computation time.
\begin{algorithm}[ht]
\caption{Complexity of individual steps in multigrid-preconditioned CG} \label{alg:pcg}
\begin{algorithmic}[1]
\Require rhs and guess
\Ensure solution
\While {not converged}
\State $\bs{h} = A \bs{p}$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(Ng_p)$
\State $\rho_r = (\rho, \bs{r})$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\State $\alpha = \rho_r / ( \bs{p}, \bs{h} )$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\State $\bs{u} = \bs{u} + \alpha\bs{p}$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\State $\bs{r} = \bs{r} - \alpha\bs{h}$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\State Convergence Test
\State $\rho = M\bs{r}$ \Comment v-cycle $\quad\mathcal{O}(Ng_p)$
\State $\beta = (\rho, \bs{r}) / \rho_r$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\State $\bs{p} = \rho + \beta\bs{p}$ \Comment $~~\quad\quad\quad\quad\mathcal{O}(N)~~~$
\EndWhile
\end{algorithmic}
\end{algorithm}
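A runnable counterpart of Algorithm~\ref{alg:pcg} is sketched below (Python; \texttt{M\_vcycle} stands for the application of one multigrid v-cycle, and the $10^{8}$ residual reduction matches the stopping criterion specified in \S\ref{subsec:measures}).
\begin{verbatim}
import numpy as np

def pcg(A, b, M_vcycle, x, rtol=1e-8, maxit=200):
    """CG preconditioned by one multigrid v-cycle per iteration; the
    comments repeat the cost annotations of Algorithm alg:pcg."""
    r = b - A @ x                      # O(N g_p)
    z = M_vcycle(r)                    # v-cycle, O(N g_p)
    p = z.copy()
    rho_r = r @ z
    r0 = np.linalg.norm(r)
    for _ in range(maxit):
        h = A @ p                      # O(N g_p)
        alpha = rho_r / (p @ h)        # O(N)
        x = x + alpha * p              # O(N)
        r = r - alpha * h              # O(N)
        if np.linalg.norm(r) <= rtol * r0:
            break                      # convergence test
        z = M_vcycle(r)                # v-cycle, O(N g_p)
        rho_new = r @ z                # O(N)
        p = z + (rho_new / rho_r) * p  # O(N)
        rho_r = rho_new
    return x
\end{verbatim}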
\subsection{Setup of comparisons}\label{subsec:measures}
We test the different multigrid schemes in two contexts: as solvers
and as preconditioners in a conjugate gradient (CG) method. In tables
\ref{tab:box}--\ref{tab:3d-fan},
we report the number of multigrid v-cycles%
\footnote{Each CG iteration uses a single multigrid
v-cycle as preconditioner.}
required to reduce the norm of the discrete
residual by a factor of $10^8$, where a ``-'' indicates that the
method did not converge within the specified maximum number of
iterations.
In particular, these tables report the following
information:
\begin{itemize}
\item[$\bullet$] The first column gives the polynomial \emph{order}
used in the finite element discretization.
\item[$\bullet$] The columns labeled \emph{MG as solver} report
the number of v-cycles required for convergence when multigrid is used as
solver. The subcolumns are:
\begin{itemize}
\item \emph{Jacobi(3,3)} denotes that 3 pre-smoothing and 3
post-smoothing steps of a pointwise Jacobi smoother are used on
each level. We use a damping factor $\omega=2/3$ in all experiments.
\item \emph{Cheb(3,3)} indicates that Chebyshev-accelerated Jacobi
smoothing is used, again using 3 pre-smoothing and 3
post-smoothing steps. An estimate for the maximal eigenvalue of
the linear systems on each level, as required by the Chebyshev
method, is computed in a setup step using 10 Arnoldi iterations.
\item \emph{SSOR(2,1)} denotes that a symmetric successive
over-relaxation method is employed, with 2 pre-smoothing and 1
post-smoothing steps. Note that each SSOR iteration amounts to a
forward and a backward Gauss-Seidel smoothing step, and thus
requires roughly double the computational work compared to Jacobi
smoothing. The SSOR smoother is based on a lexicographic ordering
of the unknowns, and the damping factor is $\omega=1$.
\end{itemize}
For the two-dimensional problems reported in
Tables~\ref{tab:box}--\ref{tab:2d-fan2}, we use a multigrid
hierarchy with three levels corresponding to meshes with
$32\times32$, $16\times16$ and $8\times8$ elements.
The multigrid hierarchy for the three-dimensional
tests reported in Tables~\ref{tab:3d-box} and \ref{tab:3d-fan}
also has three levels with
$8\times8\times8$, $4\times4\times4$ and $2\times2\times2$
elements. Note that for each smoother we report results
for $h$-multigrid (columns marked by \emph{h}; see
\S\ref{subsec:h}) as well as for $p$-multigrid (columns marked by
\emph{p}; see \S\ref{subsec:p}). For $p$-multigrid, we restrict
ourselves to orders that are powers of 2. After coarsening in $p$
till $p=1$, we coarsen in $h$. For example, for the
two-dimensional problems and $p=16$, we use a total of 7 grids;
the first five all use meshes with $32\times32$ elements,
and $p=16,8,4,2,1$, respectively, followed by two additional
coarse grids of size $16\times16$ and $8\times8$, and
$p=1$.
\item[$\bullet$] The columns labeled \emph{MG with pCG} present the
number of conjugate gradient iterations required for the solution,
where each iteration uses one multigrid v-cycle as preconditioner.
The sub-columns correspond to different smoothers, as described
above.
\item[$\bullet$] The columns labeled \emph{low-order MG pCG} report
the number of CG iterations needed to solve the high-order system,
when preconditioned with the low-order operator based on the
high-order nodal points (see \S\ref{subsec:low}). While in practice
one would use algebraic multigrid to solve the low-order system
approximately, in our tests we use a factorization method to solve
the low-order system directly. As a consequence, the reported
iteration counts are a lower bound for the iteration counts one
would obtain if the low-order system were inverted approximately by
algebraic multigrid.
\end{itemize}
Note that the number of smoothing steps in the different methods
is chosen such that, for fixed polynomial order, the computational
work is comparable. Each multigrid v-cycle requires one residual
computation and overall six matrix-vector multiplications. Following
the simple complexity estimates \eqref{eq:compcost} and
\eqref{eq:compcost2}, this amounts to a per-iteration cost of $7Ng_p$
for $h$- and $p$-multigrid, and of $N(g_1+6g_p)$ for the low-order
multigrid preconditioner. As a consequence, the iteration numbers
reported in the next section can be used to compare
the efficiency of the different methods.
Note that in our tests, we change the polynomial degree of the
finite element functions but retain the same mesh. This results in an
increasing number of unknowns as $p$ increases. Since, as illustrated
in \S\ref{subsec:num_mesh}, we observe mesh independent convergence
for fixed $p$, this does not influence the comparison.
\subsection{Summary of numerical results}\label{subsec:results}
Next, in \S\ref{subsec:num_point}, we compare the performance of
different point smoothers for the test problems presented in
\S\ref{subsec:tests}. Then, in \S\ref{subsec:num_mesh}, we
illustrate that the number of iterations is independent of the mesh
resolution. Finally, in \S\ref{subsec:num_block}, we study the
performance of a block Jacobi smoother for discretizations with
polynomial orders $p=8$ and $p=16$.
\begin{table}
\caption{\label{tab:box} Iteration counts for the two-dimensional unit square
problem {\bf 2d-const} defined in \S\ref{subsec:tests}. The finest
mesh has $32\times 32$ elements and the multigrid hierarchy
consists of three meshes. For $p$-multigrid, the
polynomial order is first reduced to $p=1$, followed by two
geometric coarsenings of the mesh. For a detailed description of
the different experiments reported in this table we refer to
\S\ref{subsec:measures}.} \centering
\begin{tabular}{|r|c c|c c|c c||c c|c c|c c||c|}
\hline
& \multicolumn{6}{c||}{MG as solver} & \multicolumn{6}{c||}{MG
with pCG} & \!\!low-order MG\!\! \\
\cline{2-13}
\!\!\! order \!\!\!\! & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & pCG \\
\hline
& $h$ & $p$ & $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$&
~ \\
\cline{2-13}
1 & 6 & & 5 & & 5 & & 5 & & 4 & & 4 & & - \\
2 & 7 & 7 & 5 & 6 & 5 & 5 & 5 & 5 & 4 & 4 & 4 & 4 & 14 \\
3 & 8 & & 6 & & 5 & & 6 & & 5 & & 4 & & 16 \\
4 & 9 & 8 & 6 & 6 & 5 & 5 & 6 & 5 & 5 & 5 & 4 & 4 & 16 \\
5 & 12 & & 8 & & 7 & & 7 & & 6 & & 5 & & 17 \\
6 & 12 & & 9 & & 7 & & 7 & & 6 & & 5 & & 18 \\
7 & 16 & & 12 & & 8 & & 8 & & 7 & & 6 & & 18 \\
8 & 17 & 14 & 13 & 10 & 8 & 7 & 9 & 8 & 7 & 6 & 6 & 5 & 19 \\
16 & 40 & 33 & 33 & 27 & 17 & 14 & 14 & 12 & 12 & 11 & 9 & 8 & 21 \\
\hline
\end{tabular}
\end{table}
\subsubsection{Comparison of different multigrid/smoothing combinations}\label{subsec:num_point}
Tables \ref{tab:box}--\ref{tab:2d-fan2} present the number of
iterations obtained for various point smoothers and different
polynomial orders for the two-dimensional test problems. As can be
seen in Table~\ref{tab:box}, for {\bf 2d-const}
all solver variants converge in a
relatively small number of iterations for all polynomial
orders. However, the number of
iterations increases with the polynomial order $p$, in particular when
multigrid is used as a solver. Using multigrid as a preconditioner in
the conjugate gradient method results in a reduction of overall
multigrid v-cycles, in some cases even by a factor of two. Also, we
observe that SSOR smoothing generally performs better than the two
Jacobi-based smoothers. We find that the linear-order operator based
on the high-order nodes is a good preconditioner for the high-order
system. Note that if algebraic multigrid is used for the solution of
the low-order approximation, the smoother on the finest level can
either use the residual of the low-order or of the high-order
operator. Initial tests that mimic the use of the high-order residual
in the fine-grid smoother show that this has the potential to reduce the
number of iterations.
Let us now contrast these observations with the results for the
variable coefficient problems {\bf 2d-var} and {\bf 2d-var$'$}
summarized in Tables~\ref{tab:2d-fan} and \ref{tab:2d-fan2}. First,
note that all variants of the solver perform reasonably for
discretizations up to order $p=4$. When used as a solver, multigrid
either diverges or converges slowly for orders $p>4$. Convergence
is re-established when multigrid is combined with CG. Using multigrid
with SSOR smoothing as preconditioner in CG yields, for orders up to
$p=8$, a residual contraction factor of at least $0.1$ per iteration.
Comparing the results for {\bf 2d-var} shown in Table~\ref{tab:2d-fan}
with the results for {\bf 2d-var$'$} in Table~\ref{tab:2d-fan2} shows
that the convergence does not degrade much for the
coefficient with 5-times smaller wavelength.
Next, we turn to the results for {\bf 3d-const} and {\bf 3d-var},
which we report in Tables~\ref{tab:3d-box} and \ref{tab:3d-fan},
respectively. For {\bf 3d-const}, all variants of the solver
converge. For this three-dimensional problem, the benefit of using
multigrid as preconditioner rather than as solver is even more evident
than in two dimensions.
Our results for {\bf 3d-var} are
summarized in Table~\ref{tab:3d-fan}. As for {\bf 2d-var}, the
performance of multigrid when used as a solver degrades
for orders $p>4$. We can also observe that the low-order matrix based on
the high-order node points represents a good preconditioner for the
high-order system.
\begin{table}
\caption{\label{tab:2d-fan} Iteration counts for two-dimensional
warped-geometry, varying coefficient problem {\bf 2d-var} defined
in \S\ref{subsec:tests}. The finest
mesh has $32\times 32$ elements and the multigrid hierarchy
consists of three meshes. For $p$-multigrid, the
polynomial order is first reduced to $p=1$, followed by two
geometric coarsenings of the mesh.
For a
detailed description of the different experiments reported in this
table we refer to \S\ref{subsec:measures}.} \centering
\begin{tabular}{|r|c c|c c|c c||c c|c c|c c||c|}
\hline
& \multicolumn{6}{c||}{MG as solver} & \multicolumn{6}{c||}{MG
with pCG} & \!\!low-order MG\!\! \\
\cline{2-13}
\!\!\! order \!\!\!\! & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & pCG\\
\hline
& $h$ & $p$ & $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& ~ \\
\cline{2-13}
1 & 14 & & 11 & & 6 & & 8 & & 7 & & 5 & & - \\
2 & 20 & 19 & 15 & 15 & 7 & 8 & 10 & 10 & 8 & 8 & 5 & 6 & 16 \\
3 & 20 & & 16 & & 8 & & 10 & & 9 & & 6 & & 18 \\
4 & 22 & 21 & 21 & 19 & 10 & 9 & 11 & 10 & 10 & 10 & 7 & 6 & 19 \\
5 & - & & 28 & & 12 & & 14 & & 12 & & 7 & & 21 \\
6 & - & & 35 & & 13 & & 15 & & 13 & & 8 & & 23 \\
7 & - & & 45 & & 16 & & 18 & & 15 & & 9 & & 24 \\
8 & - & - & 52 & 46 & 17 & 15 & 20 & 20 & 16 & 15 & 9 & 8 & 25 \\
16 & - & - & 169 & 148 & 37 & 33 & 51 & 45 & 30 & 27 & 13 & 12 & 31 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab:2d-fan2} Iteration counts for two-dimensional
warped-geometry, varying coefficient problem {\bf 2d-var$'$}
defined in \S\ref{subsec:tests}. This problem is identical to {\bf
2d-var} (see Table~\ref{tab:2d-fan}),
but the variations in the coefficient $\mu$ have a smaller wavelength.
The finest
mesh has $32\times 32$ elements and the multigrid hierarchy
consists of three meshes. For $p$-multigrid, the
polynomial order is first reduced to $p=1$, followed by two
geometric coarsenings of the mesh.
For a detailed description of the different experiments
reported in this table we refer to \S\ref{subsec:measures}.}
\centering
\begin{tabular}{|r|c c|c c|c c||c c|c c|c c||c|}
\hline
& \multicolumn{6}{c||}{MG as solver} & \multicolumn{6}{c||}{MG
with pCG} & \!\!low-order MG\!\! \\
\cline{2-13}
\!\!\! order \!\!\!\! & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & pCG \\
\hline
& $h$ & $p$ & $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& ~ \\
\cline{2-13}
1 & 14 & & 12 & & 8 & & 8 & & 8 & & 6 & & - \\
2 & 19 & 19 & 15 & 14 & 7 & 8 & 10 & 10 & 8 & 8 & 6 & 6 & 19 \\
3 & 20 & & 17 & & 8 & & 10 & & 9 & & 6 & & 22 \\
4 & 261 & 333 & 21 & 20 & 10 & 9 & 15 & 15 & 11 & 10 & 7 & 6 & 26 \\
5 & - & & 30 & & 12 & & 19 & & 13 & & 8 & & 29 \\
6 & - & & 39 & & 13 & & 37 & & 15 & & 8 & & 35 \\
7 & - & & 52 & & 16 & & 78 & & 18 & & 9 & & 36 \\
8 & - & - & 63 & 55 & 17 & 16 & 137 & 109 & 19 & 18 & 10 & 9 & 38 \\
16 & - & - & 232 & 201 & 67 & 76 & - & - & 44 & 37 & 19 & 18 & 56 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab:3d-box} Iteration counts for three-dimensional unit cube
problem {\bf 3d-const} defined in \S\ref{subsec:tests}.
The finest
mesh has $8\times 8\times 8$ elements and the multigrid hierarchy
consists of three meshes. For $p$-multigrid, the
polynomial order is first reduced to $p=1$, followed by two
geometric coarsenings of the mesh.
For a
detailed description of the different experiments reported in this
table we refer to \S\ref{subsec:measures}.} \centering
\begin{tabular}{|r|c c|c c|c c||c c|c c|c c||c|}
\hline
& \multicolumn{6}{c||}{MG as solver} &
\multicolumn{6}{c||}{MG with pCG} &
\!\!low-order MG\!\! \\
\cline{2-13}
\!\!\! order \!\!\!\! & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & pCG \\
\hline
& $h$ & $p$ & $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& ~\\
\cline{2-13}
1 & 6 & & 4 & & 4 & & 5 & & 4 & & 3 & & - \\
2 & 8 & 8 & 4 & 5 & 4 & 5 & 6 & 6 & 4 & 4 & 4 & 4 & 25 \\
3 & 10 & & 7 & & 5 & & 6 & & 5 & & 5 & & 27 \\
4 & 11 & 10 & 8 & 7 & 6 & 5 & 7 & 7 & 6 & 5 & 5 & 4 & 28 \\
5 & 14 & & 10 & & 7 & & 8 & & 7 & & 5 & & 29 \\
6 & 16 & & 11 & & 7 & & 9 & & 7 & & 6 & & 32 \\
7 & 20 & & 15 & & 9 & & 10 & & 9 & & 6 & & 34 \\
8 & 22 & 19 & 17 & 15 & 9 & 8 & 10 & 10 & 9 & 8 & 6 & 6 & 35 \\
16 & 47 & 42 & 38 & 34 & 17 & 15 & 16 & 14 & 14 & 13 & 9 & 9 & 39 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{\label{tab:3d-fan} Iteration counts for three-dimensional,
warped-geometry, varying coefficient problem {\bf 3d-var} defined
in \S\ref{subsec:tests}. The finest
mesh has $8\times 8\times 8$ elements and the multigrid hierarchy
consists of three meshes. For $p$-multigrid, the
polynomial order is first reduced to $p=1$, followed by two
geometric coarsenings of the mesh.
For a detailed description of the
different experiments reported in this table we refer to
\S\ref{subsec:measures}.} \centering
\begin{tabular}{|r|c c|c c|c c||c c|c c|c c||c|}
\hline
& \multicolumn{6}{c||}{MG as solver} &
\multicolumn{6}{c||}{MG with pCG} &
\!\!low-order MG\!\! \\
\cline{2-13}
\!\!\! order \!\!\!\! & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Jacobi(3,3)\!\!} & \multicolumn{2}{c|}{\!\!\scriptsize Cheb(3,3)\!\!} & \multicolumn{2}{c||}{\!\!\scriptsize SSOR(2,1)\!\!} & pCG \\
\hline
& $h$ & $p$ & $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& $h$ & $p$& ~ \\
\cline{2-13}
1 & 13 & & 7 & & 5 & & 7 & & 5 & & 4 & & - \\
2 & 17 & 18 & 13 & 13 & 7 & 7 & 9 & 9 & 8 & 8 & 5 & 5 & 26 \\
3 & 20 & & 16 & & 8 & & 10 & & 9 & & 6 & & 29 \\
4 & 23 & 22 & 18 & 18 & 9 & 9 & 11 & 11 & 9 & 9 & 7 & 6 & 31 \\
5 & 26 & & 21 & & 10 & & 12 & & 10 & & 7 & & 34 \\
6 & 30 & & 27 & & 12 & & 13 & & 12 & & 8 & & 37 \\
7 & 35 & & 34 & & 14 & & 14 & & 14 & & 8 & & 37 \\
8 & - & - & 40 & 38 & 16 & 15 & 18 & 17 & 15 & 14 & 9 & 9 & 38 \\
16 & - & - & 117 & 110 & 32 & 29 & 67 & 60 & 27 & 26 & 13 & 13 & 47\\
\hline
\end{tabular}
\end{table}
\subsubsection{Mesh independence of iterations}\label{subsec:num_mesh}
To illustrate the mesh-independence of our multigrid-based solvers, we
compare the number of v-cycles required for the solution of the
two-dimensional problems {\bf 2d-const} and {\bf 2d-var} when
discretized on different meshes. In this comparison, the coarsest mesh
in the multigrid hierarchy is the same; thus, the number of levels in the
hierarchy increases as the problem is discretized on finer meshes. As
can be seen in Table~\ref{tab:meshInd}, once the mesh is sufficiently
fine, the number of iterations remains the same for all polynomial orders.
\begin{table}[h]\centering
\caption{\label{tab:meshInd} Number of v-cycles required for the
solution of the two-dimensional problems {\bf 2d-const} and {\bf
2d-var} defined in \S\ref{subsec:tests} for different fine
meshes and different polynomial orders. The coarsest grid for all
cases has $2\times 2$ elements. In this comparison, multigrid with
SSOR(2,1) smoothing is used as preconditioner in the conjugate
gradient method. A star indicates that the
corresponding test was not performed due to the large problem size.}
\begin{tabular}{|c|c|c|c|c|c|c|c||c|c|c|c|c|c|c|}
\hline
& \multicolumn{7}{c||}{\bf 2d-const} &
\multicolumn{7}{c|}{\bf 2d-var} \\
\hline
order & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 4 & 8 & 16 & 32 & 64 & 128 & 256 \\
\hline
1 & 3 & 4 & 4 & 4 & 4 & 4 & 4 & 3 & 4 & 5 & 5 & 5 & 5 & 5 \\
2 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\
4 & 5 & 5 & 4 & 4 & 4 & 4 & 4 & 6 & 6 & 7 & 7 & 7 & 7 & 7 \\
8 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 9 & 9 & 9 & 9 & 9 & 9 & 9 \\
16 & 9 & 9 & 9 & 9 & 9 & * & * & 13 & 13 & 13 & 13 & 13 & * & * \\
\hline
\end{tabular}
\end{table}
\subsubsection{Performance of block and $\ell_1$-Jacobi smoothers}\label{subsec:num_block}
For completeness, we also include a comparison with two common
variants of the Jacobi smoother---the block-Jacobi and the
$\ell_1$-Jacobi point smoother.
We limit these comparisons to orders $8$ and $16$, and to the {\bf 2d-const}, {\bf
2d-var} and the {\bf 3d-var} problems. These results are summarized in Table~\ref{tab:block-jac}.
\paragraph{$\ell_1$-Jacobi smoother} These smoothers work by adding an appropriate diagonal matrix
to guarantee convergence \cite{BakerFalgoutKolevEtAl11}. Compared with Chebyshev smoothers, they have
the additional benefit of not requiring eigenvalue estimates. In practice, while guaranteed convergence is desirable, the overall
work (i.e., the number of iterations) increases.
In particular, point-Jacobi outperforms $\ell_1$-Jacobi as a smoother
for multigrid used as a solver as well as
a preconditioner for CG.
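A minimal sketch of one common point variant of this diagonal modification (dense NumPy matrix for clarity; following the idea of \cite{BakerFalgoutKolevEtAl11}, though the exact scaling used there may differ) is:
\begin{verbatim}
import numpy as np

def l1_jacobi(A, b, x):
    """One l1-Jacobi sweep: augment the Jacobi diagonal by the l1-norm
    of the off-diagonal row entries, which guarantees convergence for
    SPD matrices without a damping parameter."""
    d = A.diagonal() + (np.abs(A).sum(axis=1) - np.abs(A.diagonal()))
    return x + (b - A @ x) / d
\end{verbatim}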
\paragraph{Block Jacobi smoother}
Schwarz-type domain decomposition smoothers are particularly promising
for high polynomial orders, such as order 8 or higher. Results
obtained with an elementwise block Jacobi preconditioner for orders 8
and 16 are summarized in Table~\ref{tab:block-jac}. For this
comparison, we invert the element matrices exactly, which can be
problematic with respect to computational time as well as storage for
realistic problems, in particular for warped meshes and high
polynomial orders. One remedy is to use approximate inverse element
matrices \cite{LottesFischer05}. As can be seen in
Table~\ref{tab:block-jac}, the
number of iterations is reduced compared to pointwise Jacobi
smoothing; however, this does not imply a faster method since
block-Jacobi smoothing is, in general, more expensive. Again, a
high-performance implementation is required to assess the
effectiveness of the different methods.
\begin{table}\centering
\caption{\label{tab:block-jac} Comparison between different Jacobi
smoothers---point, block and $\ell_1$. Shown is the number of
iterations, obtained with 3
pre- and 3 post-smoothing steps. All experiments used a damping factor of $\omega=2/3$.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\!\!order\!\! & \multicolumn{6}{c|}{\bf 2d-const} &
\multicolumn{6}{c|}{\bf 2d-var} & \multicolumn{6}{c|}{\bf 3d-var} \\
\cline{2-19}
& \multicolumn{3}{c|}{MG} & \multicolumn{3}{c|}{pCG} &\multicolumn{3}{c|}{MG} & \multicolumn{3}{c|}{pCG} & \multicolumn{3}{c|}{MG} & \multicolumn{3}{c|}{pCG} \\
\hline
& {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!} & {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!} & {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!} & {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!}& {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!}& {\!pt\!} & {\!blk\!} & {\!$\ell_1$\!}\\
\hline \small
8 & 17 & 16 & 51 & 9 & 8 & 16 & - & 31 & \!111\! & - & 12 & 57 & - & 30 & \!176\! & 18 & 13 & 37 \\
16 & 40 & 31 & \!133\! & 14 & 12 & 27 & - & 61 & - & 51 & 17 & \!186\! & - & 52 & 48 & 67 & 17 & 68 \\
\hline
\end{tabular}
\end{table}
In the next section, we summarize our findings and draw conclusions.
\section{Discussion and conclusions}
\label{sec:discuss}
Using multigrid as preconditioner in the conjugate gradient (CG)
method rather than directly as solver results in significantly faster
convergence, which more than compensates for the additional work
required by the Krylov method. This is particularly true for
high-order methods, where the residual computation is more expensive
than for low-order methods, thus making the additional vector
additions and inner products in CG negligible. For problems with
varying coefficients, we find that the number of v-cycles
decreases by up to a factor of three when multigrid is combined with
the conjugate gradient method.
None of the tested approaches yields a number of iterations that is
independent of the polynomial order; nevertheless, point smoothers can
be efficient for finite element discretizations with polynomial orders
up to $p=16$. For constant coefficients, all tested multigrid
hierarchy/smoother combinations (Jacobi, Chebyshev-accelerated Jacobi
and Gauss-Seidel SSOR smoothing) lead to converging multigrid
methods. In general, the difference in the number of iterations
between $h$- and $p$-multigrid is small.
Problems with strongly varying coefficients on deformed
geometries are much more challenging. Here, SSOR outperforms
Jacobi-based smoothers for orders $p>4$. However, in a distributed
environment, where Gauss-Seidel smoothing is usually more difficult to
implement and requires more parallel communication,
Chebyshev-accelerated Jacobi smoothing represents an interesting
alternative to SSOR. It is as simple to implement as Jacobi smoothing
but requires significantly fewer iterations to converge; compared to
point Jacobi smoothing, it additionally only requires an estimate of
the largest eigenvalue of the diagonally preconditioned system matrix.
We find that a low-order operator based on the high-order node points
is a good preconditioner, and it is particularly attractive for
high-order discretizations on unstructured meshes, as also observed in
\cite{Brown10, DevilleMund90, HeysManteuffelMcCormickEtAl05}. When
combined with algebraic multigrid for the low-order operator, the
smoother on the finest mesh can either use the low-order or the
high-order residual. Initial numerical tests indicate that the latter
choice is advantageous, but this should be studied more systematically.
\section*{Acknowledgments}
We would like to thank Tobin Isaac for helpful discussions on the
low-order preconditioner. Support for this work was
provided through the U.S.~National Science Foundation (NSF) grants
CMMI-1028889 and
ARC-0941678,
and through the Scientific Discovery through Advanced
Computing (SciDAC) projects
DE-SC0009286,
and DE-SC0002710
funded by the U.S.~Department of Energy
Office of Science, Advanced Scientific Computing Research and
Biological and Environmental Research.
\bibliographystyle{siam}
\section{Introduction}\label{Section:Introduction}
This paper is concerned with weak Galerkin (WG) finite element
methods by exploring optimal use of polynomial approximating spaces.
In general, weak Galerkin refers to finite element techniques for
partial differential equations in which differential operators
(e.g., gradient, divergence, curl, Laplacian) are approximated by
weak forms as distributions. The main idea of weak Galerkin finite
element methods is the use of weak functions and their corresponding
discrete weak derivatives in algorithm design. For the second order
elliptic equation, weak functions have the form of $v=\{v_0,v_b\}$
with $v=v_0$ inside of each element and $v=v_b$ on the boundary of
the element. Both $v_0$ and $v_b$ can be approximated by polynomials
in $P_\ell(T)$ and $P_s(e)$ respectively, where $T$ stands for an
element and $e$ the edge or face of $T$, $\ell$ and $s$ are
non-negative integers with possibly different values. Weak
derivatives are defined for weak functions in the sense of
distributions. For computing purpose, one needs to approximate the
weak derivatives by polynomials. For example, for the weak gradient
operator, one may approximate it in the polynomial space
$[P_m(T)]^d$. Various combinations of $(P_\ell(T),P_s(e),[P_m(T)]^d)$
lead to different classes of weak Galerkin methods tailored for
specific partial differential equations. The goal of this paper is
to explore the optimal combination of the polynomial spaces $P_\ell(T)$
and $P_s(e)$ that minimizes the number of unknowns without
compromising the rate of convergence for the corresponding WG
method.
For simplicity, we demonstrate the idea of optimality for
polynomials by using the second order elliptic problem that seeks an
unknown function $u$ satisfying
\begin{eqnarray}
-\nabla\cdot (a\nabla u)&=&f,\quad \mbox{in}\;\Omega,\label{pde}\\
u&=&g,\quad\mbox{on}\;\partial\Omega,\label{bc}
\end{eqnarray}
where $\Omega$ is a polytopal domain in $\mathbb{R}^d$ (polygonal or
polyhedral domain for $d=2,3$), $\nabla u$ denotes the gradient of
the function $u$, and $a$ is a symmetric
$d\times d$ matrix-valued function in $\Omega$. We shall assume that
there exists a positive number $\lambda>0$ such that
\begin{equation}\label{ellipticity}
\xi^ta\xi\ge \lambda \xi^t\xi,\qquad\forall
\xi\in\mathbb{R}^d.
\end{equation}
Here $\xi$ is understood as a column vector and $\xi^t$ is the transpose
of $\xi$.
A weak Galerkin method has been introduced and analyzed in \cite{wy}
for second order elliptic equations based on a {\em discrete weak
gradient} arising from local {\em RT} \cite{rt} or {\em BDM}
\cite{bdm} elements. More specifically, in the case of {\em BDM}
element of order $k\ge 1$, the gradient space is taken as
$[P_m(T)]^d\equiv [P_k(T)]^d$ and the weak functions are defined by
using $(P_\ell(T),P_s(e))\equiv (P_{k-1}(T), P_k(e))$. For the {\em
RT} element of $k\ge 0$, the gradient space is the usual {\em RT}
element for the vector component while the weak functions are given
by $(P_\ell(T),P_s(e))\equiv (P_{k}(T), P_k(e))$. Due to the use of
the {\em RT} and {\em BDM} elements, the WG finite element
formulation of \cite{wy} is limited to classical finite element
partitions of triangles ($d=2$) or tetrahedra ($d=3$). In addition,
the corresponding WG scheme exhibits a close connection with the
standard mixed finite element method for (\ref{pde})-(\ref{bc}).
The main goal of this paper is to investigate the
possibility of optimal combinations of polynomial spaces that
minimize the number of unknowns in the numerical scheme without
compromising the order of
convergence. The new WG scheme will use the configuration of
$(P_k(T), P_{k-1}(e), [P_{k-1}(T)]^d)$, and the corresponding WG
solution converges to the exact solution of (\ref{pde})-(\ref{bc})
with rates of $O(h^k)$ in $H^1$ and $O(h^{k+1})$ in the $L^2$ norm,
provided that the exact solution of the original problem is
sufficiently smooth. It should be pointed out that the unknown $v_0$
associated with the interior of each element can be eliminated in
terms of the unknown $v_b$ defined on the element boundary in
practical implementation. This means that, for problems in
$\mathbb{R}^2$, only edges of the finite element partition shall
contribute unknowns ($k$ unknowns from each edge) to the global
stiffness matrix problem. The new WG scheme is, therefore, a natural
extension of the classical Crouzeix-Raviart $P_1$ non-conforming
triangular element to arbitrary order and arbitrary polygonal
partitions.
It has been proved rigorously in \cite{wy-m} that $P_k$ type
polynomials can be used in weak Galerkin finite element procedures
on arbitrary polygonal/polyhedral elements. This contrasts with the use of
polynomials $P_k$ for triangular elements and tensor products $Q_k$
for quadrilateral elements in classical finite element methods. In
practice, allowing arbitrary element shapes in the finite element partition
provides great flexibility in both numerical approximation and
mesh generation, especially in regions where the domain geometry is
complex. Such flexibility is also very much appreciated in
adaptive mesh refinement methods. Another objective of this paper is
to study the reliability, flexibility and accuracy of the weak
Galerkin method through extensive numerical tests. The first and
second order weak Galerkin elements are tested on partitions with
different shapes of polygons and polyhedra. Our numerical results
show optimal order of convergence for $k=1,2$ on triangular,
quadrilateral, and honeycomb meshes in 2d and deformed cubes in 3d.
One close relative of the WG finite element method of this paper is
the hybridizable discontinuous Galerkin (HDG) method \cite{cgl}. But
these two methods are fundamentally different in concept and
formulation. The HDG method is formulated by using the standard
mixed method approach for the usual system of first order equations,
while the key to WG is the use of discrete weak differential
operators. For the second order elliptic problem
(\ref{pde})-(\ref{bc}), these two methods share the same feature of
approximating first order derivatives or fluxes through a formula
that was commonly employed in the mixed finite element method. For
high order PDEs, such as the biharmonic equation
\cite{mwy-biharmonic}, the WG method differs greatly
from the HDG method. It should be emphasized that the concept of
weak derivatives makes WG a widely applicable numerical technique
for a large variety of partial differential equations which we shall
report in forthcoming papers.
The paper is organized as follows. In Section
\ref{Section:weak-gradient}, we shall review the definition of the
weak gradient operator and its discrete analogues. In Section
\ref{Section:wg-fem}, we shall describe a new WG scheme. Section
\ref{Section:wg-massconservation} will be devoted to a discussion of
mass conservation for the WG scheme. In Section
\ref{Section:L2projections}, we shall present some technical
estimates for the usual $L^2$ projection operators. Section
\ref{Section:error-analysis} is used to derive an optimal order
error estimate for the WG approximation in both $H^1$ and $L^2$
norms. Finally in Section \ref{Section:numerical}, we shall present
some numerical results that confirm the theory developed in earlier
sections.
\section{Weak Gradient and Discrete Weak Gradient}\label{Section:weak-gradient}
Let $K$ be any polytopal domain with boundary $\partial K$. A {\em
weak function} on the region $K$ refers to a function $v=\{v_0,
v_b\}$ such that $v_0\in L^2(K)$ and $v_b\in H^{\frac12}(\partial
K)$. The first component $v_0$ can be understood as the value of $v$
in $K$, and the second component $v_b$ represents $v$ on the
boundary of $K$. Note that $v_b$ may not necessarily be related to
the trace of $v_0$ on $\partial K$ should a trace be well-defined.
Denote by $W(K)$ the space of weak functions on $K$; i.e.,
\begin{equation}\label{W(K)}
W(K)= \{v=\{v_0, v_b \}:\ v_0\in L^2(K),\; v_b\in
H^{\frac12}(\partial K)\}.
\end{equation}
Define $(v,w)_D=\int_D vw\,dx$ and ${\langle} v, w{\rangle}_\gamma=\int_\gamma vw\,ds$.
The weak gradient operator, as was introduced in \cite{wy}, is
defined as follows for completeness.
\medskip
\begin{defi}
The dual of $L^2(K)$ can be identified with itself by using the
standard $L^2$ inner product as the action of linear functionals.
With a similar interpretation, for any $v\in W(K)$, the {\em weak
gradient} of $v$ is defined as a linear functional $\nabla_w v$ in
the dual space of $H(div,K)$ whose action on each $q\in H(div,K)$ is
given by
\begin{equation}\label{weak-gradient}
(\nabla_w v, q)_K = -(v_0, \nabla\cdot q)_K + \langle v_b,
q\cdot{\bf n}\rangle_{\partial K},
\end{equation}
where ${\bf n}$ is the outward normal direction to $\partial K$,
$(v_0,\nabla\cdot q)_K=\int_K v_0 (\nabla\cdot q)dK$ is the action
of $v_0$ on $\nabla\cdot q$, and $\langle v_b,
q\cdot{\bf n}\rangle_{\partial K}$ is the action of $q\cdot{\bf n}$ on
$v_b\in H^{\frac12}(\partial K)$.
\end{defi}
\medskip
The Sobolev space $H^1(K)$ can be embedded into the space $W(K)$ by
an inclusion map $i_W: \ H^1(K)\to W(K)$ defined as follows
$$
i_W(\phi) = \{\phi|_{K}, \phi|_{\partial K}\},\qquad \phi\in H^1(K).
$$
With the help of the inclusion map $i_W$, the Sobolev space $H^1(K)$
can be viewed as a subspace of $W(K)$ by identifying each $\phi\in
H^1(K)$ with $i_W(\phi)$. Analogously, a weak function
$v=\{v_0,v_b\}\in W(K)$ is said to be in $H^1(K)$ if it can be
identified with a function $\phi\in H^1(K)$ through the above
inclusion map. It is not hard to see that the weak gradient is
identical with the strong gradient (i.e., $\nabla_w v=\nabla v$) for
smooth functions $v\in H^1(K)$.
Denote by $P_{r}(K)$ the set of polynomials on $K$ with degree no
more than $r$. We can define a discrete weak gradient operator by
approximating $\nabla_w$ in a polynomial subspace of the dual of
$H(div,K)$.
\smallskip
\begin{defi}
The discrete weak gradient operator, denoted by
$\nabla_{w,r, K}$, is defined as the unique polynomial
$(\nabla_{w,r, K}v) \in [P_r(K)]^d$ satisfying the following
equation
\begin{equation}\label{dwd}
(\nabla_{w,r, K}v, q)_K = -(v_0,\nabla\cdot q)_K+ \langle v_b,
q\cdot{\bf n}\rangle_{\partial K},\qquad \forall q\in [P_r(K)]^d.
\end{equation}
\end{defi}
By applying the usual integration by parts to the first term on the
right hand side of (\ref{dwd}), we can rewrite the equation
(\ref{dwd}) as follows
\begin{equation}\label{dwd-2}
(\nabla_{w,r, K}v, q)_K = (\nabla v_0,q)_K+ \langle v_b-v_0,
q\cdot{\bf n}\rangle_{\partial K},\qquad \forall q\in [P_r(K)]^d.
\end{equation}
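To make the local computation concrete, the following one-dimensional sketch (Python, with exact monomial integration; our own illustration) evaluates the discrete weak gradient of $v=\{v_0,v_b\}$ on $K=(0,h)$ directly from (\ref{dwd}).
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

def weak_gradient_1d(v0_coef, vb_left, vb_right, h, r):
    """Coefficients (ascending monomials) of the discrete weak gradient
    in P_r(K) on K=(0,h); the outward normal is n = -1 at x=0 and
    n = +1 at x=h."""
    nb = r + 1
    M = np.array([[h**(i+j+1) / (i+j+1) for j in range(nb)]
                  for i in range(nb)])           # Gram matrix of {x^j}
    rhs = np.zeros(nb)
    for j in range(nb):                          # test function q = x^j
        dq = P.polyder(np.eye(nb)[j])            # q'
        prod = P.polymul(v0_coef, dq)            # v_0 * q'
        rhs[j] = -sum(c * h**(i+1) / (i+1) for i, c in enumerate(prod)) \
                 + vb_right * h**j - vb_left * 0.0**j
    return np.linalg.solve(M, rhs)
\end{verbatim}
For $r=0$ this reduces to $\nabla_w v = (v_b(h)-v_b(0))/h$; tested against constant functions, only the boundary component of $v$ enters.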
\section{Weak Galerkin Finite Element Schemes}\label{Section:wg-fem}
Let ${\cal T}_h$ be a partition of the domain $\Omega$ consisting of
polygons in two dimensions or polyhedra in three dimensions satisfying
a set of conditions specified in \cite{wy-m}. Denote by ${\cal E}_h$
the set of all edges or flat faces in ${\cal T}_h$, and let ${\cal
E}_h^0={\cal E}_h\backslash\partial\Omega$ be the set of all
interior edges or flat faces. For every element $T\in {\mathcal T}_h$, we
denote by $h_T$ its diameter and mesh size $h=\max_{T\in{\mathcal T}_h} h_T$
for ${\cal T}_h$.
For a given integer $k\ge 1$, let $V_h$ be the weak Galerkin finite
element space associated with ${\mathcal T}_h$ defined as follows
\begin{equation}\label{vhspace}
V_h=\{v=\{v_0,v_b\}:\; v_0|_T\in P_k(T),\ v_b|_e\in P_{k-1}(e),\ e\in{\partial T}, T\in {\mathcal T}_h\}
\end{equation}
and
\begin{equation}\label{vh0space}
V^0_h=\{v: \ v\in V_h,\ v_b=0 \mbox{ on } \partial\Omega\}.
\end{equation}
We would like to emphasize that any function $v\in V_h$ has a single
value $v_b$ on each edge $e\in{\mathcal E}_h$.
For each element $T\in {\mathcal T}_h$, denote by $Q_0$ the $L^2$ projection
from $L^2(T)$ to $P_k(T)$ and by $Q_b$ the $L^2$ projection from
$L^2(e)$ to $P_{k-1}(e)$. Denote by $\mathbb{Q}_h$ the $L^2$ projection
onto the local discrete gradient space $[P_{k-1}(T)]^d$. Let
$V=H^1(\Omega)$. We define a projection operator $Q_h: V \to V_h$ so
that on each element $T\in{\mathcal T}_h$
\begin{equation}\label{Qh-def}
Q_h v=\{Q_0v_0, Q_bv_b\},\qquad \{v_0,v_b\}=i_W(v)\in W(T).
\end{equation}
Denote by $\nabla_{w,k-1}$ the discrete weak gradient operator on
the finite element space $V_h$ computed by using
(\ref{dwd}) on each element $T$; i.e.,
$$
(\nabla_{w,k-1}v)|_T =\nabla_{w,k-1, T} (v|_T),\qquad \forall v\in
V_h.
$$
For simplicity of notation, from now on we shall drop the subscript
$k-1$ in the notation $\nabla_{w,k-1}$ for the discrete weak
gradient.
Now we introduce two forms on $V_h$ as follows:
\begin{eqnarray*}
a(v,w) & = & \sum_{T\in {\cal T}_h}( a\nabla_w v, \nabla_w w)_T,\\
s(v,w) & = & \rho\sum_{T\in {\cal T}_h} h_T^{-1}\langle Q_bv_0-v_b,
Q_bw_0-w_b\rangle_{\partial T},
\end{eqnarray*}
where $\rho$ can be any positive number. In practical
computation, one might set $\rho=1$. Denote by $a_s(\cdot,\cdot)$ a
stabilization of $a(\cdot,\cdot)$ given by
$$
a_s(v,w)=a(v,w)+s(v,w).
$$
\begin{algorithm}
A numerical approximation for (\ref{pde}) and (\ref{bc}) can be
obtained by seeking $u_h=\{u_0,u_b\}\in V_h$ satisfying both $u_b=
Q_b g$ on $\partial \Omega$ and the following equation:
\begin{equation}\label{wg}
a_s(u_h,v)=(f,v_0), \quad\forall\ v=\{v_0,v_b\}\in V_h^0.
\end{equation}
\end{algorithm}
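\medskip
For readers who wish to implement the scheme, the following sketch (our own
illustration under the simplifying assumptions $a=I$, $\rho=1$, and the
lowest-order element $k=1$; it is not code from the paper) assembles the local
matrix of $a_s(\cdot,\cdot)$ on one triangle. The local unknowns are the three
vertex values of $v_0\in P_1(T)$ and the three edge values of $v_b\in P_0(e)$;
for $k=1$ one has $Q_bv_0=v_0(\mbox{edge midpoint})$, and the weak gradient
depends only on the $v_b$ unknowns.
\begin{verbatim}
import numpy as np

def local_wg_matrix(verts, rho=1.0):
    """Local matrix of a_s(.,.) for the (P1(T), P0(e)) element, a = I.
    dof ordering: [c0, c1, c2, b0, b1, b2] = vertex values of v_0
    followed by edge values of v_b; edge i joins vertices i, i+1 (mod 3)."""
    v = np.asarray(verts, dtype=float)
    area = 0.5 * abs((v[1, 0] - v[0, 0]) * (v[2, 1] - v[0, 1])
                     - (v[2, 0] - v[0, 0]) * (v[1, 1] - v[0, 1]))
    hT = max(np.linalg.norm(v[i] - v[j]) for i in range(3) for j in range(i))
    N = np.zeros((3, 2))              # scaled outward normals, |N[i]| = |e_i|
    for i in range(3):
        t = v[(i + 1) % 3] - v[i]
        N[i] = [t[1], -t[0]]
    A = np.zeros((6, 6))
    # a(v, w) = |T| grad_w v . grad_w w, with grad_w v = (1/|T|) sum_i b_i N[i]
    A[3:, 3:] += N @ N.T / area
    # s(v, w): per edge, Q_b v_0 - v_b = (c_i + c_{i+1})/2 - b_i
    for i in range(3):
        r = np.zeros(6)
        r[i], r[(i + 1) % 3], r[3 + i] = 0.5, 0.5, -1.0
        A += (rho / hT) * np.linalg.norm(N[i]) * np.outer(r, r)
    return A

A = local_wg_matrix([[0, 0], [1, 0], [0, 1]])
# symmetric, and the constant weak function {1, 1} lies in the kernel
assert np.allclose(A, A.T) and np.allclose(A @ np.ones(6), 0.0)
\end{verbatim}
The global matrix is then obtained by summing these local contributions over
all elements, identifying the shared $v_b$ unknowns across interior edges, and
enforcing $u_b=Q_bg$ on boundary edges.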
Note that the system (\ref{wg}) is symmetric and positive definite
for any parameter value of $\rho>0$.
\bigskip
Next, we justify the well-posedness of the scheme (\ref{wg}). For
any $v\in V_h$, let
\begin{equation}\label{3barnorm}
{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}:=\sqrt{a_s(v,v)}.
\end{equation}
It is not hard to see that ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ defines a semi-norm on
the finite element space $V_h$. We claim that this semi-norm is in fact
a norm on the finite element space $V_h^0$. It suffices
to check the positivity property for ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$. To this end,
assume that $v\in V_h^0$ and ${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}=0$. It follows that
\[
(a\nabla_w v,\nabla_w v)+\rho\sum_{T\in{\mathcal T}_h} h_T^{-1}\langle
Q_bv_0-v_b, Q_bv_0-v_b\rangle_{\partial T}=0,
\]
which implies that $\nabla_w v=0$ on each element $T$ and
$Q_bv_0=v_b$ on ${\partial T}$. It follows from $\nabla_w v=0$ and
(\ref{dwd-2}) that for any $q\in [P_{k-1}(T)]^d$
\begin{eqnarray*}
0&=&(\nabla_w v,q)_T\\
&=&(\nabla v_0,q)_T-\langle v_0-v_b,q\cdot{\bf n}\rangle_{\partial T}\\
&=&(\nabla v_0,q)_T-\langle Q_bv_0-v_b,q\cdot{\bf n}\rangle_{\partial T}\\
&=&(\nabla v_0,q)_T.
\end{eqnarray*}
Letting $q=\nabla v_0$ in the equation above yields $\nabla v_0=0$
on $T\in {\cal T}_h$. Thus, $v_0=const$ on every $T\in{\mathcal T}_h$. This,
together with the fact that $Q_bv_0=v_b$ on $\partial T$ and $v_b=0$
on $\partial\Omega$, implies that $v_0=v_b=0$.
\medskip
\begin{lemma}
The weak Galerkin finite element scheme (\ref{wg}) has a unique
solution.
\end{lemma}
\smallskip
\begin{proof}
If $u_h^{(1)}$ and $u_h^{(2)}$ are two solutions of (\ref{wg}), then
$e_h=u_h^{(1)}-u_h^{(2)}$ would satisfy the following equation
$$
a_s(e_h,v)=0,\qquad\forall v\in V_h^0.
$$
Note that $e_h\in V_h^0$. Then by letting $v=e_h$ in the above
equation we arrive at
$$
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2 = a_s(e_h, e_h)=0.
$$
It follows that $e_h\equiv 0$, or equivalently, $u_h^{(1)}\equiv
u_h^{(2)}$. This completes the proof of the lemma.
\end{proof}
\medskip
\section{Mass Conservation}\label{Section:wg-massconservation}
The second order elliptic equation (\ref{pde}) can be rewritten in a
conservative form as follows:
$$
\nabla \cdot q = f, \quad q=-a\nabla u.
$$
Let $T$ be any control volume. Integrating the first equation over
$T$ yields the following integral form of mass conservation:
\begin{equation}\label{conservation.01}
\int_{\partial T} q\cdot {\bf n} ds = \int_T f dT.
\end{equation}
We claim that the numerical approximation from the weak Galerkin
finite element method (\ref{wg}) for (\ref{pde}) retains the mass
conservation property (\ref{conservation.01}) with an appropriately
defined numerical flux $q_h$. To this end, for any given $T\in {\cal
T}_h$, we choose in (\ref{wg}) a test function $v=\{v_0, v_b=0\}$ so
that $v_0=1$ on $T$ and $v_0=0$ elsewhere. It follows from
(\ref{wg}) that
\begin{equation}\label{mass-conserve.08}
\int_T a\nabla_{w} u_h\cdot \nabla_{w}v dT +\rho
h_T^{-1}\int_{\partial T} (Q_bu_0-u_b)ds = \int_T f dT.
\end{equation}
Let $\mathbb{Q}_h$ be the local $L^2$ projection onto the gradient space
$[P_{k-1}(T)]^d$. Using the definition (\ref{dwd}) for $\nabla_{w}v$
one arrives at
\begin{eqnarray}
\int_T a\nabla_{w} u_h\cdot \nabla_{w}v dT &=& \int_T
\mathbb{Q}_h(a\nabla_{w} u_h)\cdot \nabla_{w}v dT \nonumber\\
&=& - \int_T \nabla\cdot \mathbb{Q}_h(a\nabla_{w} u_h) dT \nonumber\\
&=& - \int_{\partial T} \mathbb{Q}_h(a\nabla_{w}u_h)\cdot{\bf n} ds.
\label{conserv.88}
\end{eqnarray}
Substituting (\ref{conserv.88}) into (\ref{mass-conserve.08}) yields
\begin{equation}\label{mass-conserve.09}
\int_{\partial T} \left\{-\mathbb{Q}_h\left(a\nabla_{w}u_h\right)+\rho
h_T^{-1}(Q_bu_0-u_b){\bf n}\right\}\cdot{\bf n} ds = \int_T f dT,
\end{equation}
which indicates that the weak Galerkin method conserves mass with a
numerical flux given by
$$
q_h = - \mathbb{Q}_h\left(a\nabla_{w}u_h\right)+\rho
h_T^{-1}(Q_bu_0-u_b){\bf n}.
$$
Next, we verify that the normal component of the numerical flux,
namely $q_h\cdot{\bf n}$, is continuous across the edge of each element
$T$. To this end, let $e$ be an interior edge/face shared by two
elements $T_1$ and $T_2$. Choose a test function $v=\{v_0,v_b\}$ so
that $v_0\equiv 0$ and $v_b=0$ everywhere except on $e$. It follows
from (\ref{wg}) that
\begin{eqnarray}\label{mass-conserve.108}
\int_{T_1\cup T_2} a\nabla_{w} u_h\cdot \nabla_{w}v dT & & -\rho
h_{T_1}^{-1}\int_{\partial T_1\cap e} (Q_bu_0-u_b)|_{T_1}v_bds \\
& &- \rho h_{T_2}^{-1}\int_{\partial T_2\cap e}
(Q_bu_0-u_b)|_{T_2}v_bds\nonumber\\
& & =0.\nonumber
\end{eqnarray}
Using the definition of weak gradient (\ref{dwd}) we obtain
\begin{eqnarray*}
\int_{T_1\cup T_2} a\nabla_{w} u_h\cdot \nabla_{w}v dT&=&
\int_{T_1\cup T_2} \mathbb{Q}_h(a\nabla_{w} u_h)\cdot \nabla_{w}v dT\\
&=& \int_e\left(\mathbb{Q}_h(a\nabla_{w} u_h)|_{T_1}\cdot{\bf n}_1 +
\mathbb{Q}_h(a\nabla_{w} u_h)|_{T_2}\cdot{\bf n}_2\right)v_b ds,
\end{eqnarray*}
where ${\bf n}_i$ is the outward normal direction of $T_i$ on the edge
$e$. Note that ${\bf n}_1+{\bf n}_2=0$. Substituting the above equation into
(\ref{mass-conserve.108}) yields
\begin{eqnarray*}
\int_e\left(-\mathbb{Q}_h(a\nabla_{w} u_h)|_{T_1}+\rho
h_{T_1}^{-1}(Q_bu_0-u_b)|_{T_1}{\bf n}_1\right)\cdot{\bf n}_1 v_bds\\
=-\int_e \left(-\mathbb{Q}_h(a\nabla_{w} u_h)|_{T_2}+\rho h_{T_2}^{-1}
(Q_bu_0-u_b)|_{T_2}{\bf n}_2\right)\cdot{\bf n}_2 v_bds,
\end{eqnarray*}
which shows the continuity of the numerical flux $q_h$ in the normal
direction.
\section{Some Technical Estimates}\label{Section:L2projections}
This section shall present some technical results useful for the
forthcoming error analysis. The first one is a trace inequality
established in \cite{wy-m} for functions on general shape regular
partitions. More precisely, let $T$ be an element with $e$ as an
edge. For any function $\varphi\in H^1(T)$, the following trace
inequality holds true (see \cite{wy-m} for details):
\begin{equation}\label{trace}
\|\varphi\|_{e}^2 \leq C \left( h_T^{-1} \|\varphi\|_T^2 + h_T
\|\nabla \varphi\|_{T}^2\right).
\end{equation}
Another useful result is a commutativity property for some
projection operators.
\begin{lemma}
Let $Q_h$ and $\mathbb{Q}_h$ be the $L^2$ projection operators defined in
previous sections. Then, on each element $T\in{\mathcal T}_h$, we have the
following commutative property
\begin{equation}\label{key}
\nabla_w (Q_h \phi) = \mathbb{Q}_h (\nabla \phi),\quad\forall \phi\in
H^1(T).
\end{equation}
\end{lemma}
\begin{proof}
Using (\ref{dwd}), the integration by
parts and the definitions of $Q_h$ and $\mathbb{Q}_h$, we have that for
any $\tau\in [P_{k-1}(T)]^d$
\begin{eqnarray*}
(\nabla_w (Q_h \phi),\tau)_T &=& -(Q_0 \phi,\nabla\cdot\tau)_T
+\langle Q_b \phi,\tau\cdot{\bf n}\rangle_{{\partial T}}\\
&=&-(\phi,\nabla\cdot\tau)_T + \langle \phi,\tau\cdot{\bf n}\rangle_{\partial T}\\
&=&(\nabla \phi,\tau)_T\\
&=&(\mathbb{Q}_h(\nabla\phi),\tau)_T,
\end{eqnarray*}
which implies the desired identity (\ref{key}).
\end{proof}
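\medskip
The identity (\ref{key}) is also easy to verify numerically. The sketch below
(our own check, reusing the function \verb|weak_gradient_r0| from the sketch
following (\ref{dwd-2})) tests it for $r=0$ on a triangle with a quadratic
$\phi$: the left side is a boundary integral of edge averages of $\phi$, the
right side is the mean of $\nabla\phi$ over $T$, and both equal
$(1,\tfrac13)$ by the divergence theorem.
\begin{verbatim}
import numpy as np

tri  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
phi  = lambda p: p[0]**2 + p[0]*p[1]
gphi = lambda p: np.array([2*p[0] + p[1], p[0]])   # gradient of phi
# Q_b phi = edge average of phi; Simpson's rule is exact for quadratics
vb = []
for i in range(3):
    p, q = tri[i], tri[(i + 1) % 3]
    vb.append((phi(p) + 4*phi(0.5*(p + q)) + phi(q)) / 6.0)
lhs = weak_gradient_r0(tri, vb)        # grad_w(Q_h phi)
rhs = gphi(tri.mean(axis=0))           # Q_h(grad phi): mean of a linear field
assert np.allclose(lhs, rhs)           # both equal [1.0, 1/3]
\end{verbatim}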
\medskip
The following lemma provides some estimates for the projection
operators $Q_h$ and $\mathbb{Q}_h$. Observe that the underlying mesh
${\mathcal T}_h$ is allowed to be quite general, consisting of polygons or
polyhedra. A proof of the lemma can be found in \cite{wy-m}. It
should be pointed out that the proof of the lemma requires some
non-trivial technical tools in analysis, which have also been
established in \cite{wy-m}.
\begin{lemma}
Let ${\mathcal T}_h$ be a finite element partition of $\Omega$ that is shape
regular. Then, for any $\phi\in H^{k+1}(\Omega)$, we have
\begin{eqnarray}
&&\sum_{T\in{\mathcal T}_h} \|\phi-Q_0\phi\|_{T}^2 +\sum_{T\in{\mathcal T}_h}h_T^2
\|\nabla(\phi-Q_0\phi)\|_{T}^2\le C h^{2(k+1)}
\|\phi\|^2_{k+1},\label{Qh}\\
&&\sum_{T\in{\mathcal T}_h} \|a(\nabla\phi-\mathbb{Q}_h(\nabla\phi))\|^2_{T} \le
Ch^{2k} \|\phi\|^2_{k+1}.\label{Rh}
\end{eqnarray}
Here and in what follows of this paper, $C$ denotes a generic
constant independent of the meshsize $h$ and the functions in the
estimates.
\end{lemma}
\medskip
In the finite element space $V_h$, we introduce a discrete $H^1$
semi-norm as follows:
\begin{equation}\label{March24-2013-discreteH1norm}
\|v\|_{1,h} = \left( \sum_{T\in{\mathcal T}_h}\left(\|\nabla
v_0\|_T^2+h_T^{-1} \| Q_bv_0-v_b\|_{\partial T}^2\right) \right)^{\frac12}.
\end{equation}
The following lemma indicates that $\|\cdot\|_{1,h}$ is equivalent
to the triple-bar norm (\ref{3barnorm}).
\begin{lemma} There exist two positive constants $C_1$ and $C_2$ such
that for any $v=\{v_0,v_b\}\in V_h$, we have
\begin{equation}\label{happy}
C_1 \|v\|_{1,h}\le {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \leq C_2 \|v\|_{1,h}.
\end{equation}
\end{lemma}
\begin{proof}
For any $v=\{v_0,v_b\}\in V_h$, it follows from the definition of
weak gradient (\ref{dwd-2}) and $Q_b$ that
\begin{eqnarray}\label{March24-2013-01}
(\nabla_wv,q)_T=(\nabla v_0,q)_T+{\langle} v_b-Q_bv_0,
q\cdot{\bf n}{\rangle}_{\partial T},\quad \forall q\in [P_{k-1}(T)]^d.
\end{eqnarray}
By letting $q=\nabla_w v$ in (\ref{March24-2013-01}) we arrive at
\begin{eqnarray*}
(\nabla_wv,\nabla_w v)_T=(\nabla v_0,\nabla_w v)_T+{\langle} v_b-Q_bv_0,
\nabla_w v\cdot{\bf n}{\rangle}_{\partial T}.
\end{eqnarray*}
From the trace inequality (\ref{trace}) and the inverse inequality
we have
\begin{eqnarray*}
(\nabla_wv,\nabla_w v)_T &\le& \|\nabla v_0\|_T \|\nabla_w v\|_T+ \|
Q_bv_0-v_b\|_{\partial T} \|\nabla_w v\|_{\partial T}\\
&\le& \|\nabla v_0\|_T \|\nabla_w v\|_T+ Ch_T^{-1/2}\|
Q_bv_0-v_b\|_{\partial T} \|\nabla_w v\|_T.
\end{eqnarray*}
Thus,
$$
\|\nabla_w v\|_T \le C \left(\|\nabla v_0\|_T^2 +h_T^{-1}\|
Q_bv_0-v_b\|_{\partial T}^2\right)^{\frac12},
$$
which verifies the upper bound of ${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}$. As to the lower
bound, we choose $q=\nabla v_0$ in (\ref{March24-2013-01}) to obtain
\begin{eqnarray*}
(\nabla_wv,\nabla v_0)_T=(\nabla v_0,\nabla v_0)_T+{\langle} v_b-Q_bv_0,
\nabla v_0\cdot{\bf n}{\rangle}_{\partial T}.
\end{eqnarray*}
Thus, from the trace and inverse inequalities we have
$$
\|\nabla v_0\|_T^2 \leq \|\nabla_w v\|_T \|\nabla v_0\|_T
+Ch_T^{-1/2}\| Q_bv_0-v_b\|_{\partial T} \|\nabla v_0\|_T.
$$
This leads to
$$
\|\nabla v_0\|_T \leq C\left(\|\nabla_w v\|_T^2 +h_T^{-1}\|
Q_bv_0-v_b\|_{\partial T}^2\right)^{\frac12},
$$
which verifies the lower bound for ${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}$. Together, the
two bounds complete the proof of the lemma.
\end{proof}
\bigskip
\begin{lemma} Assume that ${\mathcal T}_h$ is shape regular. Then for any $w\in H^{k+1}(\Omega)$ and
$v=\{v_0,v_b\}\in V_h$, we have
\begin{eqnarray}
|s(Q_hw, v)|&\le&
Ch^k\|w\|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},\label{mmm1}\\
\left|\ell_w(v)\right| &\leq& C h^k\|w\|_{k+1} {|\hspace{-.02in}|\hspace{-.02in}|}
v{|\hspace{-.02in}|\hspace{-.02in}|},\label{mmm2}
\end{eqnarray}
where $\ell_w(v)=\sum_{T\in{\mathcal T}_h} \langle a(\nabla w-\mathbb{Q}_h\nabla w)\cdot{\bf n},\;
v_0-v_b\rangle_{\partial T}$.
\end{lemma}
\medskip
\begin{proof}
Using the definition of $Q_b$, (\ref{trace}), and (\ref{Qh}), we
obtain
\begin{eqnarray*}
|s(Q_hw, v)|&=&
\left|\sum_{T\in{\mathcal T}_h} h_T^{-1}\langle Q_b(Q_0w)-Q_bw,\; Q_bv_0-v_b\rangle_{\partial T}\right|\\
&=&\left|\sum_{T\in{\mathcal T}_h} h_T^{-1}\langle Q_0w-w,\; Q_bv_0-v_b\rangle_{\partial T}\right|\\
&\le& C\left(\sum_{T\in{\mathcal T}_h}(h_T^{-2}\|Q_0w-w\|_T^2+
\|\nabla (Q_0w-w)\|_T^2)\right)^{\frac12}\cdot\\ &&
\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|Q_bv_0-v_b\|^2_{{\partial T}}\right)^{\frac12}\\
&\le& Ch^k\|w\|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{eqnarray*}
As to (\ref{mmm2}), it follows from the Cauchy-Schwarz inequality,
the trace inequality (\ref{trace}) and the estimate (\ref{Rh}) that
\begin{eqnarray}\label{happy1}
|\ell_w(v)|&=&\left|\sum_{T\in{\mathcal T}_h}\langle a(\nabla w-\mathbb{Q}_h\nabla
w)\cdot{\bf n}, v_0-v_b\rangle_{\partial T}\right|\\
&\le & C \sum_{T\in{\mathcal T}_h}\|a(\nabla w-\mathbb{Q}_h\nabla w)\|_{{\partial T}}
\|v_0-v_b\|_{\partial T}\nonumber\\
&\le & C \left(\sum_{T\in{\mathcal T}_h}h_T\|a(\nabla w-\mathbb{Q}_h\nabla
w)\|_{{\partial T}}^2\right)^{\frac12}
\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|v_0-v_b\|_{\partial T}^2\right)^{\frac12}\nonumber\\
&\le &
Ch^k\|w\|_{k+1}\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|v_0-v_b\|_{\partial T}^2\right)^{\frac12}.\nonumber
\end{eqnarray}
Using the trace inequality (\ref{trace}) and the approximation
property of the $L^2$ projection operator we obtain
\begin{eqnarray*}
\|v_0-v_b\|_{\partial T} &\leq& \|v_0-Q_b v_0\|_{\partial T} + \|Q_b v_0-v_b\|_{\partial T}\\
&\le& C h_T^{1/2}\|\nabla v_0\|_T +\|Q_b v_0-v_b\|_{\partial T}.
\end{eqnarray*}
Substituting the above inequality into (\ref{happy1}) yields
\begin{eqnarray}\label{happy2}
|\ell_w(v)| \leq Ch^k\|w\|_{k+1}\left(\sum_{T\in{\mathcal T}_h}\left\{\|\nabla
v_0\|_T^2 + h_T^{-1}\|Q_bv_0-v_b\|_{\partial T}^2\right\}\right)^{\frac12},
\end{eqnarray}
which, along with the estimate (\ref{happy}), verifies the desired
estimate (\ref{mmm2}).
\end{proof}
\section{Error Analysis}\label{Section:error-analysis}
The goal of this section is to establish some error estimates for
the weak Galerkin finite element solution $u_h$ arising from (\ref{wg}).
The error will be measured in two natural norms: the triple-bar
norm as defined in (\ref{3barnorm}) and the standard $L^2$ norm. The
triple bar norm is essentially a discrete $H^1$ norm for the
underlying weak function.
For simplicity of analysis, we assume that the coefficient tensor
$a$ in (\ref{pde}) is a piecewise constant matrix with respect to
the finite element partition ${\mathcal T}_h$. The result can be extended to
variable tensors without any difficulty, provided that the tensor
$a$ is piecewise sufficiently smooth.
\subsection{Error equation}
Let $u_h=\{u_0,u_b\}\in V_h$ be the weak
Galerkin finite element solution arising from the numerical scheme
(\ref{wg}). Assume that the exact solution of
(\ref{pde})-(\ref{bc}) is given by $u$. The $L^2$
projection of $u$ in the finite element space $V_h$ is given by
$$
Q_h u=\{Q_0 u,Q_b u\}.
$$
Let
$$
e_h=\{e_0,e_b\}=\{Q_0u-u_0,Q_bu-u_b\}
$$
be the error between the WG finite element solution and the $L^2$
projection of the exact solution.
\begin{lemma}\label{Lemma:error-equation}
Let $e_h$ be the error of the weak Galerkin
finite element solution arising from (\ref{wg}). Then,
for any $v\in V_h^0$ we have
\begin{eqnarray}
a_s(e_h,v)=\ell_u(v)+ s(Q_hu,v),\label{ee}
\end{eqnarray}
where $\ell_u(v)=\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},v_0-v_b\rangle_{\partial T}$.
\end{lemma}
\begin{proof}
Testing (\ref{pde}) by using $v_0$ of $v=\{v_0,v_b\}\in V_h^0$ we
arrive at
\begin{equation}\label{m1}
\sum_{T\in{\mathcal T}_h}(a\nabla u,\nabla v_0)_T-\sum_{T\in{\mathcal T}_h} \langle
a\nabla u\cdot{\bf n},v_0-v_b\rangle_{\partial T}=(f,v_0),
\end{equation}
where we have used the fact that $\sum_{T\in{\mathcal T}_h}\langle a\nabla
u\cdot{\bf n}, v_b\rangle_{\partial T}=0$. To deal with the term
$\sum_{T\in{\mathcal T}_h}(a\nabla u,\nabla v_0)_T$ in (\ref{m1}), we need the
following equation. For any $\phi\in H^1(T)$ and $v\in V_h$, it
follows from (\ref{key}), the definition of the discrete weak
gradient (\ref{dwd}), and the integration by parts that
\begin{eqnarray}
(a\nabla_w Q_h\phi,\nabla_w v)_T&=&(a \mathbb{Q}_h(\nabla\phi),\nabla_w v)_T\nonumber\\
&=& -(v_0,\nabla\cdot (a \mathbb{Q}_h\nabla\phi))_T+\langle v_b,(a \mathbb{Q}_h\nabla\phi)\cdot{\bf n}\rangle_{\partial T}\nonumber\\
&=&(\nabla v_0,a\mathbb{Q}_h\nabla\phi)_T-\langle v_0-v_b,(a\mathbb{Q}_h\nabla\phi)\cdot{\bf n}\rangle_{\partial T}\nonumber\\
&=&(a\nabla\phi,\nabla v_0)_T-{\langle} (a\mathbb{Q}_h\nabla\phi)\cdot{\bf n},\
v_0-v_b{\rangle}_{\partial T}.\label{j1}
\end{eqnarray}
By letting $\phi=u$ in (\ref{j1}),
we have from combining (\ref{j1}) and (\ref{m1}) that
\begin{eqnarray*}
\sum_{T\in{\mathcal T}_h} (a\nabla_w Q_hu,\nabla_w v)_T&=&(f,v_0)+
\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},v_0-v_b\rangle_{\partial T}\\
&=&(f,v_0)+\ell_u(v).
\end{eqnarray*}
Adding $s(Q_hu,v)$ to both sides of the above equation gives
\begin{equation}\label{j2}
a_s(Q_hu, v)=(f, v_0)+ \ell_u(v) +s(Q_hu,v).
\end{equation}
Subtracting (\ref{wg}) from (\ref{j2}) yields the following error
equation,
\begin{eqnarray*}
a_s(e_h,v)=\ell_u(v)+ s(Q_hu,v),\quad \forall v\in V_h^0.
\end{eqnarray*}
This completes the proof of the lemma.
\end{proof}
\subsection{Error estimates}
The error equation (\ref{ee}) can be used to derive the following
error estimate for the WG finite element solution.
\begin{theorem} Let $u_h\in V_h$ be the weak Galerkin finite element solution of the problem
(\ref{pde})-(\ref{bc}) arising from (\ref{wg}). Assume the exact solution $u\in H^{k+1}(\Omega)$. Then,
there exists a constant $C$ such that
\begin{equation}\label{err1}
{|\hspace{-.02in}|\hspace{-.02in}|} u_h-Q_hu{|\hspace{-.02in}|\hspace{-.02in}|} \le Ch^{k}\|u\|_{k+1}.
\end{equation}
\end{theorem}
\begin{proof}
By letting $v=e_h$ in (\ref{ee}), we have
\begin{eqnarray}
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2&=&\ell_u(e_h)+s(Q_hu,\;\ e_h).\label{main}
\end{eqnarray}
It then follows from (\ref{mmm1}) and (\ref{mmm2}) that
\[
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2 \le Ch^k\|u\|_{k+1}{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|},
\]
which implies (\ref{err1}). This completes the proof.
\end{proof}
Next, we will measure the difference between $u$ and $u_h$ in the
discrete $H^1$ semi-norm $\|\cdot\|_{1,h}$ as defined in
(\ref{March24-2013-discreteH1norm}). Note that
(\ref{March24-2013-discreteH1norm}) can be easily extended to
functions in $H^1(\Omega)+V_h$ through the inclusion map $i_W$.
\begin{corollary}
Let $u_h\in V_h$ be the weak Galerkin finite element solution of the problem
(\ref{pde})-(\ref{bc}) arising from (\ref{wg}). Assume the exact solution $u\in H^{k+1}(\Omega)$. Then,
there exists a constant $C$ such that
\begin{equation}\label{err8}
\| u-u_h\|_{1,h} \le Ch^{k}\|u\|_{k+1}.
\end{equation}
\end{corollary}
\begin{proof}
It follows from (\ref{happy}) and (\ref{err1}) that
$$
\|Q_h u-u_h\|_{1,h}\le C{|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u_h {|\hspace{-.02in}|\hspace{-.02in}|}\le Ch^{k}\|u\|_{k+1}.
$$
Using the triangle inequality, (\ref{Qh}) and the equation above, we have
$$
\| u-u_h\|_{1,h}\le\| u-Q_hu\|_{1,h}+\| Q_hu-u_h\|_{1,h}\le
Ch^{k}\|u\|_{k+1}.
$$
This completes the proof.
\end{proof}
\bigskip
In the rest of the section, we shall derive an optimal order error
estimate for the weak Galerkin finite element scheme (\ref{wg}) in
the usual $L^2$ norm by using a duality argument as was commonly
employed in the standard Galerkin finite element methods \cite{ci,
sue}. To this end, we consider a dual problem that seeks $\Phi\in
H_0^1(\Omega)$ satisfying
\begin{eqnarray}
-\nabla\cdot (a \nabla\Phi)&=& e_0\quad
\mbox{in}\;\Omega.\label{dual}
\end{eqnarray}
Assume that the above dual problem has the usual $H^{2}$-regularity.
This means that there exists a constant $C$ such that
\begin{equation}\label{reg}
\|\Phi\|_2\le C\|e_0\|.
\end{equation}
\begin{theorem} Let $u_h\in V_h$ be the weak Galerkin finite element solution of the problem
(\ref{pde})-(\ref{bc}) arising from (\ref{wg}). Assume the exact solution $u\in H^{k+1}(\Omega)$. In
addition, assume that the dual problem (\ref{dual}) has the usual
$H^2$-regularity. Then, there exists a constant $C$ such that
\begin{equation}\label{err2}
\|u-u_0\| \le Ch^{k+1}\|u\|_{k+1}.
\end{equation}
\end{theorem}
\begin{proof}
By testing (\ref{dual}) with $e_0$ we obtain
\begin{eqnarray}\nonumber
\|e_0\|^2&=&-(\nabla\cdot (a\nabla\Phi),e_0)\\
&=&\sum_{T\in{\mathcal T}_h}(a\nabla \Phi,\ \nabla e_0)_T-\sum_{T\in{\mathcal T}_h}{\langle}
a\nabla\Phi\cdot{\bf n},\ e_0- e_b{\rangle}_{{\partial T}},\label{jw.08}
\end{eqnarray}
where we have used the fact that $e_b=0$ on $\partial\Omega$.
Setting $\phi=\Phi$ and $v=e_h$ in (\ref{j1}) yields
\begin{eqnarray}
(a\nabla_w Q_h\Phi,\;\nabla_w e_h)_T=(a\nabla\Phi,\;\nabla e_0)_T-{\langle}
(a\mathbb{Q}_h\nabla\Phi)\cdot{\bf n},\ e_0-e_b{\rangle}_{\partial T}.\label{j1-new}
\end{eqnarray}
Substituting (\ref{j1-new}) into (\ref{jw.08}) gives
\begin{eqnarray}
\|e_0\|^2&=&(a\nabla_w e_h,\ \nabla_w Q_h\Phi)+\sum_{T\in{\mathcal T}_h}{\langle}
a(\mathbb{Q}_h\nabla\Phi-\nabla\Phi)\cdot{\bf n},\ e_0-e_b{\rangle}_{{\partial T}}\nonumber\\
&=&(a\nabla_w e_h,\ \nabla_w Q_h\Phi)+\ell_\Phi(e_h).\label{m2}
\end{eqnarray}
It follows from the error equation (\ref{ee}) that
\begin{eqnarray}
(a\nabla_w e_h,\ \nabla_w Q_h\Phi)&=&\ell_u(Q_h\Phi)
+s(Q_hu,\ Q_h\Phi)-s(e_h,\ Q_h\Phi).\label{m3}
\end{eqnarray}
By combining (\ref{m2}) with (\ref{m3}) we arrive at
\begin{eqnarray}\label{m4}
\|e_0\|^2=\ell_u(Q_h\Phi)
+s(Q_hu,\ Q_h\Phi)-s(e_h,\ Q_h\Phi)
+\ell_\Phi(e_h).
\end{eqnarray}
Let us bound the terms on the right hand side of (\ref{m4}) one by
one. Using the triangle inequality, we obtain
\begin{eqnarray}
|\ell_u(Q_h\Phi)|&=&\left|\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},\; Q_0\Phi-Q_b\Phi\rangle_{\partial T} \right|\nonumber\\
&\le&\left|\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},\; Q_0\Phi-\Phi\rangle_{\partial T} \right|\nonumber\\
&+&\left|\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},\; \Phi-Q_b\Phi\rangle_{\partial T} \right|.\label{1st-term}
\end{eqnarray}
We first use the definition of $Q_b$ and
the fact that $\Phi=0$ on $\partial\Omega$ to obtain
\begin{eqnarray}
\sum_{T\in{\mathcal T}_h}{\langle} a(\nabla u-\mathbb{Q}_h\nabla u)\cdot{\bf n}, \Phi-Q_b\Phi{\rangle}_{\partial T} =\sum_{T\in{\mathcal T}_h}{\langle} a\nabla u\cdot{\bf n}, \Phi-Q_b\Phi{\rangle}_{\partial T} = 0.\label{l21}
\end{eqnarray}
From the trace inequality (\ref{trace}) and the estimate (\ref{Qh})
we have
$$
\left(\sum_{T\in{\mathcal T}_h}\|Q_0\Phi-\Phi\|^2_{\partial T}\right)^{1/2} \leq C
h^{\frac32}\|\Phi\|_2
$$
and
$$
\left(\sum_{T\in{\mathcal T}_h}\|a(\nabla u-\mathbb{Q}_h\nabla
u)\|^2_{\partial T}\right)^{1/2} \leq Ch^{k-\frac12}\|u\|_{k+1}.
$$
Thus, it follows from the Cauchy-Schwarz inequality and the above
two estimates that
\begin{eqnarray}
&&\left|\sum_{T\in{\mathcal T}_h} \langle a(\nabla u-\mathbb{Q}_h\nabla
u)\cdot{\bf n},\; Q_0\Phi-\Phi\rangle_{\partial T} \right|\nonumber\\
&&\le C\left(\sum_{T\in{\mathcal T}_h}\|a(\nabla u-\mathbb{Q}_h\nabla
u)\|^2_{\partial T}\right)^{1/2}
\left(\sum_{T\in{\mathcal T}_h}\|Q_0\Phi-\Phi\|^2_{\partial T}\right)^{1/2} \nonumber\\
&&\le C h^{k+1} \|u\|_{k+1}\|\Phi\|_2.\label{l22}
\end{eqnarray}
Combining (\ref{1st-term}) with (\ref{l21}) and (\ref{l22}) yields
\begin{eqnarray}\label{1st-term-complete}
|\ell_u(Q_h\Phi)| \leq C h^{k+1} \|u\|_{k+1}
\|\Phi\|_2.
\end{eqnarray}
Analogously, it follows from the definition of $Q_b$, the trace
inequality (\ref{trace}), and the estimate (\ref{Qh}) that
\begin{eqnarray}\nonumber
\left|s(Q_hu,\; Q_h\Phi)\right|&\le & \rho\sum_{T\in{\mathcal T}_h}h_T^{-1}
\left|(Q_b(Q_0u)-Q_bu,\ Q_b(Q_0\Phi)-Q_b\Phi)_{\partial T}\right|\nonumber\\
&\le & \rho\sum_{T\in{\mathcal T}_h}h_T^{-1}\|Q_b(Q_0u-u)\|_{\partial T}\|Q_b(Q_0\Phi-\Phi)\|_{\partial T}\nonumber\\
&\le & \rho\sum_{T\in{\mathcal T}_h}h_T^{-1}\|Q_0u-u\|_{\partial T}\|Q_0\Phi-\Phi\|_{\partial T}\nonumber\\
&\le& C\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|Q_0u-u\|^2_{\partial T}\right)^{1/2}
\left(\sum_{T\in{\mathcal T}_h}h_T^{-1}\|Q_0\Phi-\Phi\|^2_{\partial T}\right)^{1/2}\nonumber \\
&\le& Ch^{k+1}\|u\|_{k+1}\|\Phi\|_2.\label{2nd-term-complete}
\end{eqnarray}
The estimate (\ref{mmm1}), applied with $k=1$ (note that $\Phi$ is only in
$H^2(\Omega)$), together with the error estimate (\ref{err1}), implies
\begin{eqnarray}\label{3rd-term-complete}
|s(e_h,\ Q_h\Phi)|\le Ch\|\Phi\|_2{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}\le
Ch^{k+1}\|u\|_{k+1}\|\Phi\|_2.
\end{eqnarray}
Similarly, it follows from (\ref{mmm2}) and (\ref{err1}) that
\begin{eqnarray}\label{4th-term-complete}
|\ell_\Phi(e_h)| &\le& Ch^{k+1}\|u\|_{k+1}\|\Phi\|_2.
\end{eqnarray}
Now substituting (\ref{1st-term-complete})-(\ref{4th-term-complete})
into (\ref{m4}) yields
$$
\|e_0\|^2 \leq C h^{k+1}\|u\|_{k+1} \|\Phi\|_2,
$$
which, combined with the regularity assumption (\ref{reg}) and the
triangle inequality, gives the desired optimal order error estimate
(\ref{err2}).
\end{proof}
\section{Numerical Examples}\label{Section:numerical}
In this section, we examine the WG method by testing its convergence
and flexibility for solving second order elliptic problems. In the
test of convergence, the first-order ($k=1$) and second-order ($k=2$)
weak Galerkin elements are used in the construction of the finite
element space $V_h$. In the test of flexibility of the WG method,
elliptic problems are solved on finite element partitions with
various configurations, including triangular meshes, deformed
rectangular meshes, and honeycomb meshes in two dimensions, and deformed
cubic meshes in three dimensions. Our numerical results confirm the
theory developed in previous sections; namely, optimal rates of
convergence in the $H^1$ and $L^2$ norms. In addition, they demonstrate
the great flexibility of the WG method with respect to the shape of the
finite element partitions.
Let $u_h=\{u_0,u_b\}$ and $u$ be the solution to the weak Galerkin
equation and the original equation, respectively. The error is
defined by $e_h=u_h-Q_hu=\{e_0,e_b\}$, where $e_0=u_0-Q_0u$ and
$e_b=u_b-Q_bu$. Here $Q_h u=\{Q_0 u,Q_b u\}$ with $Q_h$ as the $L^2$
projection onto appropriately defined spaces. The following norms
are used to measure the error in all of the numerical experiments:
\begin{eqnarray*}
&&H^1\mbox{ semi-norm: } {|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}=
\bigg(\sum_{T\in\mathcal{T}_h}\Big(\int_T|\nabla_w e_h|^2dT
+h_T^{-1}\int_{\partial T}|Q_be_0-e_b|^2ds\Big)\bigg)^{\frac12},\\
&&\mbox{ Element-based }L^2\mbox{ norm: }\|e_0\|
=\bigg(\sum_{T\in\mathcal{T}_h}\int_T|e_0|^2dT\bigg)^{\frac12}.
\end{eqnarray*}
\subsection{On Triangular Mesh}
Consider the second order elliptic equation that seeks an unknown function $u=u(x,y)$ satisfying
$$-\nabla\cdot(a\nabla u)=f$$
in the square domain $\Omega=(0,1)\times(0,1)$ with the Dirichlet boundary condition $u|_{\partial\Omega}=g$. The data $g$ and $f$ are chosen such that the exact solution is given by $u=\sin(\pi x)\cos(\pi y)$ and
$$a=\begin{pmatrix}
x^2+y^2+1&xy\\
xy&x^2+y^2+1
\end{pmatrix}.$$
The triangular mesh ${\mathcal T}_h$ used in this example is constructed by:
1) uniformly partitioning the domain into $n\times n$
sub-rectangles; 2) dividing each rectangular element by the diagonal
line with a negative slope. The mesh size is denoted by $h=1/n$. The
lowest order ($k=1$) weak Galerkin element is used for obtaining the
weak Galerkin solution $u_h=\{u_0,u_b\}$; i.e., $u_0$ and $u_b$ are
polynomials of degree $k=1$ and degree $k-1=0$ respectively on each
element $T\in {\mathcal T}_h$.
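A minimal sketch of this mesh construction (our own code, not from the
paper) is as follows:
\begin{verbatim}
import numpy as np

def uniform_triangulation(n):
    """Uniform n x n grid on (0,1)^2; each sub-rectangle is split by the
    diagonal of negative slope into two counterclockwise triangles."""
    xs = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([[x, y] for y in xs for x in xs])
    idx = lambda i, j: j * (n + 1) + i
    tris = []
    for j in range(n):
        for i in range(n):
            a, b = idx(i, j), idx(i + 1, j)          # bottom-left, bottom-right
            c, d = idx(i, j + 1), idx(i + 1, j + 1)  # top-left, top-right
            tris += [[a, b, c], [b, d, c]]           # split along diagonal b--c
    return pts, np.array(tris)

pts, tris = uniform_triangulation(4)    # h = 1/4
print(len(pts), len(tris))              # -> 25 32
\end{verbatim}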
Table \ref{tab:ex1} shows the convergence rate for WG solutions
measured in $H^1$ and $L^2$ norms. The numerical results indicate
that the WG solution of linear element is convergent with rate
$O(h)$ in $H^1$ and $O(h^2)$ in $L^2$ norms.
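The reported orders can be reproduced directly from the error columns:
between two successive meshes the observed order is
$\log(e_{2h}/e_h)/\log 2$. A short check using the $L^2$ errors of Table
\ref{tab:ex1}:
\begin{verbatim}
import numpy as np
# L^2 errors copied from Table 1, h = 1/4, ..., 1/128
e = np.array([1.5784e+00, 3.6890e-01, 9.0622e-02,
              2.2556e-02, 5.6326e-03, 1.4078e-03])
print(np.log2(e[:-1] / e[1:]))  # -> [2.0972 2.0253 2.0064 2.0016 2.0004]
\end{verbatim}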
\begin{table}
\caption{Example 1. Convergence rate of lowest order WG ($k=1$) on triangular meshes.}
\label{tab:ex1}
\center
\begin{tabular}{||c|c|c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h {|\hspace{-.02in}|\hspace{-.02in}|}$ & order & $\| e_0\|$ & order \\
\hline
1/4 & 1.3240e+00 & & 1.5784e+00 & \\ \hline
1/8 & 6.6333e-01 & 9.9710e-01 & 3.6890e-01 & 2.0972 \\ \hline
1/16 & 3.3182e-01 & 9.9933e-01 & 9.0622e-02 & 2.0253 \\ \hline
1/32 & 1.6593e-01 & 9.9983e-01 & 2.2556e-02 & 2.0064 \\ \hline
1/64 & 8.2966e-02 & 9.9998e-01 & 5.6326e-03 & 2.0016 \\ \hline
1/128 & 4.1483e-02 & 1.0000 & 1.4078e-03 & 2.0004\\ \hline\hline
\end{tabular}
\end{table}
In the second example, we consider the Poisson problem that seeks an
unknown function $u=u(x,y)$ satisfying
$$
-\Delta u=f
$$
in the square domain $\Omega=(0,1)^2$. Like the first example, the
exact solution here is given by $u=\sin(\pi x)\cos(\pi y)$ and $g$
and $f$ are chosen accordingly to match the exact solution.
The very same triangular mesh is employed in the numerical
calculation. Associated with this triangular mesh ${\mathcal T}_h$, two weak
Galerkin elements with $k=1$ and $k=2$ are used in the computation
of the weak Galerkin finite element solution $u_h$. For simplicity,
these two elements shall be referred to as $(P_1(T), P_{0}(e))$ and
$(P_2(T), P_{1}(e))$.
Tables \ref{tab:ex1_1} and \ref{tab:ex1_2} show the numerical
results on rate of convergence for the WG solutions in $H^1$ and
$L^2$ norms associated with $k=1$ and $k=2$, respectively. Note that
$\|e_h\|_{\mathcal{E}_h}$ is a discrete $L^2$ norm for the
approximation $u_b$ on the boundary of each element. Optimal rates
of convergence are observed numerically for each case.
\begin{table}
\caption{Example 2. Convergence rate of lowest order WG ($k=1$) on triangular meshes.}
\label{tab:ex1_1}
\center
\begin{tabular}{||c||c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}$ & $\|e_h\|$ & $\|e_h\|_{\mathcal{E}_h}$ \\
\hline\hline
1/2 &2.7935e-01 &6.1268e-01 &5.7099e-02 \\ \hline
1/4 &1.4354e-01 &1.5876e-01 &1.3892e-02 \\ \hline
1/8 &7.2436e-02 &4.0043e-02 &3.5430e-03 \\ \hline
1/16 &3.6315e-02 &1.0033e-02 &8.9325e-04 \\ \hline
1/32 &1.8170e-02 &2.5095e-03 &2.2384e-04 \\ \hline
1/64 &9.0865e-03 &6.2747e-04 &5.5994e-05 \\ \hline
1/128 &4.5435e-03 &1.5687e-04 &1.4001e-05 \\ \hline\hline
$O(h^r),r=$ &9.9232e-01 &1.9913 &1.9955 \\ \hline\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Example 2. Convergence rate of second order WG ($k=2$) on triangular meshes.}
\label{tab:ex1_2}
\center
\begin{tabular}{||c||c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}$ & $\|e_h\|$ & $\|e_h\|_{\mathcal{E}_h}$ \\
\hline\hline
1/2 &1.7886e-01 &9.4815e-02 &3.3742e-02 \\ \hline
1/4 &4.8010e-02 &1.2186e-02 &4.9969e-03 \\ \hline
1/8 &1.2327e-02 &1.5271e-03 &6.6539e-04 \\ \hline
1/16 &3.1139e-03 &1.9077e-04 &8.5226e-05 \\ \hline
1/32 &7.8188e-04 &2.3829e-05 &1.0763e-05 \\ \hline
1/64 &1.9586e-04 &2.9774e-06 &1.3516e-06 \\ \hline
1/128 &4.9009e-05 &3.7210e-07 &1.6932e-07 \\ \hline\hline
$O(h^r),r=$ &1.9769 &2.9956 &2.9453 \\ \hline\hline
\end{tabular}
\end{table}
\subsection{On Quadrilateral Meshes}
In this test, we solve the same Poisson equation considered in the
second example by using quadrilateral meshes. We start with an
initial quadrilateral mesh, as shown in Figure \ref{fig:ex2} (Left).
The mesh is then successively refined by connecting the barycenter
of each coarse element with the midpoints of its edges, as shown
in Figure \ref{fig:ex2} (Right). For the quadrilateral mesh ${\mathcal T}_h$,
two weak Galerkin elements with $k=1$ and $k=2$ are used in the WG
finite element scheme (\ref{wg}).
Tables \ref{tab:ex2} and \ref{ex2_2} show the rate of convergence
for the WG solutions in $H^1$ and $L^2$ norms associated with $k=1$
and $k=2$ on quadrilateral meshes, respectively. Optimal rates of
convergence are observed numerically.
\begin{figure}[!htb]
\centering
\begin{tabular}{cc}
\resizebox{2in}{1.8in}{\includegraphics{ex2_mesh1.pdf}} \quad
\resizebox{2in}{1.8in}{\includegraphics{ex2_mesh2.pdf}}
\end{tabular}
\caption{Mesh level 1 (Left) and mesh level 2 (Right) for Example
3.}\label{fig:ex2}
\end{figure}
\begin{table}[!htb]
\caption{Example 3. Error and rate of convergence for first order WG on quadrilateral meshes.}
\label{tab:ex2}
\center
\begin{tabular}{||c|c|c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h {|\hspace{-.02in}|\hspace{-.02in}|}$ & order & $\| e_0\|$ & order \\
\hline
2.9350e-01 & 1.9612e+00 & &2.1072e+00 & \\ \hline
1.4675e-01 & 1.0349e+00 &9.2225e-01 &5.7219e-01 & 1.8808 \\ \hline
7.3376e-02 & 5.2434e-01 &9.8094e-01 &1.4458e-01 & 1.9847 \\ \hline
3.6688e-02 & 2.6323e-01 &9.9418e-01 &3.5655e-02 & 2.0197 \\ \hline
1.8344e-02 & 1.3179e-01 &9.9808e-01 &8.6047e-03 & 2.0509 \\ \hline
9.1720e-03 & 6.5925e-02 &9.9934e-01 &2.0184e-03 & 2.0919 \\ \hline\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Example 3. Error and rate of convergence for second order WG on quadrilateral meshes.}
\label{ex2_2}
\center
\begin{tabular}{||c||c|c|c|c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}$ & order & $\|e_0\|$ & order \\
\hline\hline
1/2 &1.7955e-01 & &1.4891e-01 & \\ \hline
1/4 &8.7059e-02 &1.0444 &1.8597e-02 &3.0013 \\ \hline
1/8 &2.8202e-02 &1.6262 &2.1311e-03 &3.1254 \\ \hline
1/16 &7.8114e-03 &1.8521 &2.4865e-04 &3.0995 \\ \hline
1/32 &2.0347e-03 &1.9408 &2.9964e-05 &3.0528 \\ \hline
1/64 &5.1767e-04 &1.9747 &3.6806e-06 &3.0252 \\ \hline
1/128 &1.3045e-04 &1.9885 &4.5627e-07 &3.0120 \\ \hline\hline
\end{tabular}
\end{table}
\subsection{On Honeycomb Mesh}
In the fourth test, we solve the Poisson equation on the unit square
domain with exact solution $u=\sin(\pi x)\sin(\pi y)$. The
Dirichlet boundary data $g$ and the load $f$ are chosen to match the exact
solution. The numerical experiment is performed on the honeycomb
mesh as shown in Figure \ref{fig:ex3}. The linear WG element ($k=1$)
is used in this numerical computation.
The error profile is presented in Table \ref{tab:ex3}, which
confirms the convergence rates predicted by the theory.
\begin{figure}[!htb]
\centering
\begin{tabular}{c}
\resizebox{2.2in}{1.9in}{\includegraphics{hexmesh_8.pdf}}
\end{tabular}
\caption{Honeycomb mesh for Example 4.}\label{fig:ex3}
\end{figure}
\begin{table}[!htb]
\caption{Example 4. Error and rate of convergence for linear WG element on honeycomb meshes.}\label{tab:ex3}
\center
\begin{tabular}{||c|c|c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h {|\hspace{-.02in}|\hspace{-.02in}|}$ & order & $\| e_0\|$ & order \\
\hline
1.6667e-01 & 3.3201e-01 & & 1.6006e-02 & \\ \hline
8.3333e-02 & 1.6824e-01 & 9.8067e-01 & 3.9061e-03 &2.0347 \\ \hline
4.1667e-02 & 8.4784e-02 & 9.8867e-01 & 9.6442e-04 &2.0180 \\ \hline
2.0833e-02 & 4.2570e-02 & 9.9392e-01 & 2.3960e-04 &2.0090 \\ \hline
1.0417e-02 & 2.1331e-02 & 9.9695e-01 & 5.9711e-05 &2.0047 \\ \hline
5.2083e-03 & 1.0677e-02 & 9.9839e-01 & 1.4904e-05 &2.0022
\\ \hline\hline
\end{tabular}
\end{table}
\subsection{On Deformed Cubic Meshes}
In the fifth test, the Poisson equation is solved on a three
dimensional domain $\Omega=(0,1)^3$. The exact solution is chosen as
\begin{eqnarray*}
u=\sin(2\pi x)\sin(2\pi y)\sin(2\pi z),
\end{eqnarray*}
and the Dirichlet boundary data $g$ and $f$ are chosen accordingly
to match the exact solution.
Deformed cubic meshes are used in this test, see Figure
\ref{fig:ex4} (Left) for an illustrative element. The construction
of the deformed cubic mesh starts with a coarse mesh. The next level
of mesh is derived by refining each deformed cube element into $8$
sub-cubes, as shown in Figure \ref{fig:ex4} (Right). Table
\ref{tab:ex4} reports some numerical results for different level of
meshes. It can be seen that a convergent rate of $O(h)$ in $H^1$ and
$O(h^2)$ in $L^2$ norms are achieved for the corresponding WG finite
element solutions. This confirms the theory developed in earlier
sections.
\begin{figure}[!htb]
\centering
\begin{tabular}{cc}
\resizebox{2.3in}{2in}{\includegraphics{3D_cube1.pdf}}
\resizebox{2.4in}{2in}{\includegraphics{3D_cube2.pdf}}
\end{tabular}
\caption{Mesh level 1 (Left) and mesh level 2 (Right) for Example
5.}\label{fig:ex4}
\end{figure}
\begin{table}[!htb]
\caption{Example 5. Error and convergence rate for $k=1$ on deformed cubic mesh.}
\label{tab:ex4}
\center
\begin{tabular}{||c|c|c|c|c||}
\hline\hline
$h$ & ${|\hspace{-.02in}|\hspace{-.02in}|} e_h {|\hspace{-.02in}|\hspace{-.02in}|}$ & order & $\| e_0\|$ & order \\
\hline
1/2 &5.7522 & &9.1990 & \\ \hline
1/4 &1.3332 & 2.1092 &1.5684 & 2.5522\\ \hline
1/8 &6.4071e-01 & 1.0571 &2.7495e-01 & 2.5121\\ \hline
1/16 &3.2398e-01 & 9.8377e-01 &6.8687e-02 & 2.0011\\ \hline
1/32 &1.6201e-01 & 9.9982e-01 &1.7150e-02 & 2.0018\\ \hline\hline
\end{tabular}
\end{table}
\vfill\eject
\section{Introduction}
Suppose that $X$ is a projective $d$-dimensional algebraic variety over a field $k$ and $D$ is an ${\NZQ R}$-Cartier divisor on $X$.
Then the volume of $D$ is
$$
{\rm vol}(D)=\lim_{n\rightarrow \infty}\frac{\dim_k\Gamma(X,\mathcal O_X(nD))}{n^d/d!}.
$$
If $D$ is nef, then the volume of $D$ is the self intersection number ${\rm vol}(D)=(D^d)$. For an arbitrary ${\NZQ R}$-Cartier divisor $D$,
$$
{\rm vol}(D)=\left\{\begin{array}{cl}
\langle D^d\rangle&\mbox{ if $D$ is pseudo effective}\\
0&\mbox{ otherwise.}
\end{array}\right.
$$
Here $\langle D^d\rangle$ is the positive intersection product. The positive intersection product $\langle D^d\rangle$ is the ordinary intersection product $(D^d)$ if $D$ is nef, but these products are different in general. More generally, given pseudo effective ${\NZQ R}$-Cartier divisors $D_1,\ldots,D_p$ on $X$ with $p\le d$, there is a positive intersection product $\langle D_1\cdot\ldots\cdot D_p\rangle$ which is a linear form on $N^1(\mathcal X)^{d-p}$, where $\mathcal X$ is the limit of all birational models of $X$. We have that
$$
{\rm vol}(D)=\langle D^d\rangle =\langle D\rangle \cdot\ldots \cdot\langle D\rangle=\langle D\rangle^d.
$$
We denote the space of linear forms on $N^1(\mathcal X)^{d-p}$ by $L^{d-p}(\mathcal X)$. The intersection theory and the theory of volumes required for this paper are reviewed in Section \ref{PrelSect}.
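As a basic illustration of the definition (a standard computation, not taken from the references above), let $X={\NZQ P}^d$ and let $D$ be a hyperplane. Then
$$
\dim_k\Gamma({\NZQ P}^d,\mathcal O_{{\NZQ P}^d}(nD))=\binom{n+d}{d}=\frac{n^d}{d!}+O(n^{d-1}),
$$
so ${\rm vol}(D)=1=(D^d)$, in agreement with the formula for nef divisors.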
Suppose that $D_1$ and $D_2$ are pseudo effective ${\NZQ R}$-Cartier divisors on $X$. We have the Minkowski inequality
$$
{\rm vol}(D_1+D_2)^{\frac{1}{d}}\ge {\rm vol}(D_1)^{\frac{1}{d}}+{\rm vol}(D_2)^{\frac{1}{d}}
$$
which follows from Theorem \ref{Ineq+} below. Further, we have the following characterization of equality in the Minkowski inequality.
\begin{Theorem}\label{Theorem22+} Let $X$ be a $d$-dimensional projective variety over a field $k$. For any two big ${\NZQ R}$-Cartier divisors $D_1$ and $D_2$ on $X$,
\begin{equation}\label{Neweq20+}
{\rm vol}(D_1+D_2)^{\frac{1}{d}}\ge {\rm vol}(D_1)^{\frac{1}{d}}+{\rm vol}(D_2)^{\frac {1}{d}}
\end{equation}
with equality if and only if $\langle D_1\rangle $ and $\langle D_2\rangle$ are proportional in $L^{d-1}(\mathcal X)$.
\end{Theorem}
In the case that $D_1$ and $D_2$ are nef and big, this is proven in \cite[Theorem 2.15]{BFJ} (over an algebraically closed field of characteristic zero) and in \cite[Theorem 6.13]{C} (over an arbitrary field). In this case of nef divisors, the condition that
$\langle D_1\rangle $ and $\langle D_2\rangle$ are proportional in $L^{d-1}(\mathcal X)$ is just that $D_1$ and $D_2$ are proportional in $N^1(X)$.
Theorem \ref{Theorem22+} is obtained in the case that $D_1$ and $D_2$ are big and movable and $k$ is an algebraically closed field of characteristic zero in \cite[Proposition 3.7]{LX2}. In this case the condition for equality is that $D_1$ and $D_2$ are proportional in $N^1(X)$. Theorem \ref{Theorem22+} is established in the case that $D_1$ and $D_2$ are big ${\NZQ R}$-Cartier divisors and $X$ is nonsingular, over an algebraically closed field $k$ of characteristic zero in \cite[Theorem 1.6]{LX2}. In this case, the condition for equality is that the positive parts of the $\sigma$ decompositions of $D_1$ and $D_2$ are proportional; that is, $P_{\sigma}(D_1)$ and $P_{\sigma}(D_2)$ are proportional in $N^1(X)$.
In Section \ref{SecMink}, we modify the proof sketched in \cite{LX2} of \cite[Proposition 3.7]{LX2} to be valid over an arbitrary field. Characteristic zero is required in the proof in \cite{LX2} as the existence of resolution of singularities is assumed and an argument using the theory of multiplier ideals is used, which requires characteristic zero as it relies on both resolution of singularities and Kodaira vanishing.
We will write
$$
s_i=\langle D_1^i\cdot D_2^{d-i}\rangle\mbox{ for $0\le i\le d$}.
$$
We have the following generalization of the Khovanskii-Teissier inequalities to positive intersection numbers.
\begin{Theorem} (Minkowski Inequalities)\label{Ineq+} Suppose that $X$ is a complete algebraic variety of dimension $d$ over a field $k$ and $D_1$ and $D_2$ are pseudo effective ${\NZQ R}$-Cartier divisors on $X$. Then
\begin{enumerate}
\item[1)] $s_i^2\ge s_{i+1}s_{i-1}$ for $1\le i\le d-1.$
\item[2)] $s_is_{d-i}\ge s_0s_d$ for $1\le i\le d-1$.
\item[3)] $s_i^d\ge s_0^{d-i}s_d^i$ for $0\le i\le d$.
\item[4)] ${\rm vol}(D_1+D_2) \ge {\rm vol}(D_1)^{\frac{1}{d}}+{\rm vol}(D_2)^{\frac{1}{d}}$.
\end{enumerate}
\end{Theorem}
Theorem \ref{Ineq+} follows from \cite[Theorem 2.15]{BFJ} when $k$ has characteristic zero and from \cite[Theorem 6.6]{C} in general.
When $D_1$ and $D_2$ are nef, the inequalities of Theorem \ref{Ineq+} are proven by Khovanskii and Teissier \cite{T1}, \cite{T2}, \cite[Example 1.6.4]{L}. In the case that $D_1$ and $D_2$ are nef, we have that $s_i=\langle D_1^i\cdot D_2^{d-i}\rangle=(D_1^i\cdot D_2^{d-i})$ are the ordinary intersection products.
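For the reader's convenience, we recall the standard formal implications among
1)--4) when all $s_i>0$ (this argument is classical and contains nothing
specific to positive intersection products). Inequality 1) says that the
sequence $i\mapsto \log s_i$ is concave, hence lies above the chord joining
its endpoints:
$$
\log s_i\ \ge\ \frac{d-i}{d}\log s_0+\frac{i}{d}\log s_d,\qquad 0\le i\le d,
$$
which is 3); multiplying the instances $i$ and $d-i$ of 3) gives
$(s_is_{d-i})^d\ge s_0^ds_d^d$, which is 2). Inequality 4) then follows from
3) together with the superadditivity of the positive intersection product in
each argument, since
$$
{\rm vol}(D_1+D_2)\ge\sum_{i=0}^d\binom{d}{i}s_i\ge\sum_{i=0}^d\binom{d}{i}
{\rm vol}(D_1)^{\frac{i}{d}}{\rm vol}(D_2)^{\frac{d-i}{d}}
=\left({\rm vol}(D_1)^{\frac{1}{d}}+{\rm vol}(D_2)^{\frac{1}{d}}\right)^d.
$$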
We have the following characterization of equality in these inequalities.
\begin{Theorem} (Minkowski equalities)\label{Minkeq+} Suppose that $X$ is a projective algebraic variety of dimension $d$ over a field $k$ of characteristic zero, and $D_1$ and $D_2$ are big ${\NZQ R}$-Cartier divisors on $X$. Then the following are equivalent:
\begin{enumerate}
\item[1)] $s_i^2= s_{i+1}s_{i-1}$ for $1\le i\le d-1.$
\item[2)] $s_is_{d-i}= s_0s_d$ for $1\le i\le d-1$.
\item[3)] $s_i^d= s_0^{d-i}s_d^i$ for $0\le i\le d$.
\item[4)] $s_{d-1}^d=s_0s_d^{d-1}$.
\item[5)] ${\rm vol}(D_1+D_2) = {\rm vol}(D_1)^{\frac{1}{d}}+{\rm vol}(D_2)^{\frac{1}{d}}$.
\item[6)] $\langle D_1\rangle$ is proportional to $\langle D_2\rangle$ in $L^{d-1}(\mathcal X)$.
\end{enumerate}
\end{Theorem}
Theorem \ref{Minkeq+} is valid over any field $k$ when $\dim X\le 3$, since resolution of singularities is true in these dimensions.
When $D_1$ and $D_2$ are nef and big, then Theorem \ref{Minkeq+} is proven in \cite[Theorem 2.15]{BFJ} when $k$ has characteristic zero and in \cite[Theorem 6.13]{C} for arbitrary $k$. When $D_1$ and $D_2$ are nef and big, the condition
6) of Theorem \ref{Minkeq+} is just that $D_1$ and $D_2$ are proportional in $N^1(X)$.
The proof of Theorem \ref{Minkeq+} relies on the following Diskant inequality for big divisors.
Suppose that $X$ is a projective variety and $D_1$ and $D_2$ are ${\NZQ R}$-Cartier divisors on $X$. The slope $s(D_1,D_2)$ of $D_2$ with respect to $D_1$ is the largest real number $s=s(D_1,D_2)$ such that $\langle D_1\rangle \ge s\langle D_2\rangle$.
\begin{Theorem}\label{PropNew60+}(Diskant inequality for big divisors)
Suppose that $X$ is a projective $d$-dimensional variety over a field $k$ of characteristic zero and $D_1,D_2$
are big ${\NZQ R}$-Cartier divisors on $X$. Then
\begin{equation}
\langle D_1^{d-1} \cdot D_2\rangle ^{\frac{d}{d-1}}-{\rm vol}(D_1){\rm vol}(D_2)^{\frac{1}{d-1}}
\ge [\langle D_1^{d-1}\cdot D_2\rangle^{\frac{1}{d-1}}-s(D_1,D_2){\rm vol}(D_2)^{\frac{1}{d-1}}]^d.
\end{equation}
\end{Theorem}
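As an elementary consistency check (our remark, not from the cited sources): taking $D_1=D_2=D$ big, the class $\langle D\rangle$ is nonzero, so $s(D,D)=1$; since $\langle D^{d-1}\cdot D\rangle={\rm vol}(D)$, both sides of the inequality vanish.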
The Diskant inequality is proven for nef and big divisors in \cite[Theorem G]{BFJ} in characteristic zero and in \cite[Theorem 6.9]{C} for nef and big divisors over an arbitrary field. In the case that $D_1$ and $D_2$ are nef and big, the condition that $\langle D_1\rangle - s \langle D_2\rangle$ is pseudo effective in $L^{d-1}(\mathcal X)$ is that $D_1-sD_2$ is pseudo effective in $N^1(X)$. The Diskant inequality is proven when $D_1$ and $D_2$ are big and movable divisors and $X$ is a projective variety over an algebraically closed field of characteristic zero in \cite[Proposition 3.3, Remark 3.4]{LX2}. Theorem \ref{PropNew60+} is a consequence of \cite[Theorem 3.6]{DF}.
Generalizing Teissier \cite{T1}, for big ${\NZQ R}$-Cartier divisors $\alpha$ and $\beta$ on $X$ we
define the inradius of $\alpha$ with respect to $\beta$ as
$$
r(\alpha;\beta)=s(\alpha,\beta)
$$
and the outradius of $\alpha$ with respect to $\beta$ as
$$
R(\alpha;\beta)=\frac{1}{s(\beta,\alpha)}.
$$
We deduce the following consequence of the Diskant inequality.
\begin{Theorem}\label{TheoremH+} Suppose that $X$ is a $d$-dimensional projective variety over a field $k$ of characteristic zero and $\alpha,\beta$ are big ${\NZQ R}$-Cartier divisors on $X$. Set $s_i=\langle \alpha^i\cdot\beta^{d-i}\rangle$ for $0\le i\le d$. Then
\begin{equation}
\frac{s_{d-1}^{\frac{1}{d-1}}-(s_{d-1}^{\frac{d}{d-1}}-s_0^{\frac{1}{d-1}}s_d)^{\frac{1}{d}}}{s_0^{\frac{1}{d-1}}}
\le r(\alpha;\beta)\le \frac{s_d}{s_{d-1}}\le\frac{s_1}{s_0}\le R(\alpha;\beta)\le
\frac{s_d^{\frac{1}{d-1}}}{s_1^{\frac{1}{d-1}}-(s_1^{\frac{d}{d-1}}-s_d^{\frac{1}{d-1}}s_0)^{\frac{1}{d}}}.
\end{equation}
\end{Theorem}
This gives a solution to \cite[Problem B]{T1} for big ${\NZQ R}$-Cartier divisors. The inequalities of Theorem \ref{TheoremH+} are proven by Teissier in \cite[Corollary 3.2.1]{T1} for divisors on surfaces satisfying some conditions.
In the case that $D_1$ and $D_2$ are nef and big on a projective variety over a field of characteristic zero, Theorem \ref{TheoremH+} follows from the Diskant inequality \cite[Theorem F]{BFJ}. In the case that $D_1$ and $D_2$ are nef and big on a projective variety over an arbitrary field, Theorem \ref{TheoremH+} is proven in \cite[Theorem 6.11]{C}, as a consequence of the Diskant inequality \cite[Theorem 6.9]{C} for nef divisors.
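For orientation, setting $d=2$ in Theorem \ref{TheoremH+} (pure algebra, no new content) gives the Bonnesen-type chain
$$
\frac{s_1-\sqrt{s_1^2-s_0s_2}}{s_0}\le r(\alpha;\beta)\le\frac{s_2}{s_1}\le\frac{s_1}{s_0}\le R(\alpha;\beta)\le \frac{s_2}{s_1-\sqrt{s_1^2-s_2s_0}},
$$
which in the surface case is the form of the inequalities studied by Teissier \cite{T1}.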
\section{Preliminaries}\label{PrelSect}
In this section we review some properties of cycles and intersection theory on projective varieties over an arbitrary field.
\subsection{Codimension 1 cycles}
To establish notation we give a quick review of some material from \cite{Kl}, \cite[Chapter 2]{F} and \cite[Chapter 1]{L}. Although the ongoing assumption in \cite{L} is that $k={\NZQ C}$, this assumption is not needed in the material reviewed in this subsection.
Let $X$ be a $d$-dimensional projective variety over a field $k$.
The group of Cartier divisors on $X$ is denoted by ${\rm Div}(X)$. There is a natural homomorphism from ${\rm Div}(X)$ to the $(k-1)$-cycles (Weil divisors) $Z_{k-1}(X)$ of $X$ written as $D\mapsto [D]$. Further, there is a natural homomorphism
${\rm Div}(X)\rightarrow \mbox{Pic}(X)$ given by $D\mapsto \mathcal O_X(D)$.
Denote numerical equivalence on ${\rm Div}(X)$ by $\equiv$. For $D$ a Cartier divisor, $D\equiv 0$ if and only if $(C\cdot D)_X:=\mbox{deg}(\mathcal O_X(D)\otimes\mathcal O_C)=0$ for all integral curves $C$ on $X$.
The group $N^1(X)_{{\NZQ Z}}={\rm Div}(X)/\equiv$ and $N^1(X)=N^1(X)_{{\NZQ Z}}\otimes {\NZQ R}$. An element of ${\rm Div}(X)\otimes {\NZQ Q}$ will be called a ${\NZQ Q}$-Cartier divisor and an element of ${\rm Div}(X)\otimes {\NZQ R}$ will be called an ${\NZQ R}$-Cartier divisor. In an effort to keep notation as simple as possible, the class in $N^1(X)$ of an ${\NZQ R}$-Cartier divisor $D$ will often be denoted by $D$.
We will also denote the numerical equivalence on $Z_{d-1}(X)$ defined on page 374 \cite{F} by $\equiv$. Let $N_{d-1}(X)_{{\NZQ Z}}=Z_{d-1}(X)/\equiv$ and $N_{d-1}(X)=N_{d-1}(X)_{{\NZQ Z}}\otimes_{{\NZQ Z}}{\NZQ R}$.
There is a natural homomorphism $N^1(X)\rightarrow N_{d-1}(X)$ which is induced by associating to the class of a ${\NZQ R}$-Cartier divisor $D$ the class in $N_{d-1}(X)$ of its associated Weil divisor $[D]$ \cite[Section 2.1]{F}. If $f:Y\rightarrow X$ is a morphism,
the cycle map $f_*:Z_{d-1}(Y)\rightarrow Z_{d-1}(X)$ of \cite[Section 1.4]{F} induces a homomorphism $f_*:N_{d-1}(Y)\rightarrow N_{d-1}(X)$ (\cite[Example 19.1.6]{F}).
Suppose that $f:Y\rightarrow X$ is a dominant morphism where $Y$ is projective variety. Then $f^*:{\rm Div}(X)\rightarrow
{\rm Div}(Y)$ is defined by taking local equations of $D$ on $X$ as local equations of $f^*(D)$ on $Y$. There is an induced homomorphism $f^*:N^1(X)\rightarrow N^1(Y)$ which is an injection by \cite[Lemma 1]{Kl}.
By \cite[Proposition 2.3]{F}, we have that if $D$ is an ${\NZQ R}$-Cartier divisor on $X$, then
\begin{equation}\label{eq41}
f_*{[f^* D]}=\mbox{deg}(Y/X)\, [D]
\end{equation}
where $\mbox{deg}(Y/X)$ is the index of the function field of $X$ in the function field of $Y$.
In this subsection, we will use the notation for intersection numbers of \cite[Definition 2.4.2]{F}.
The first statement of the following lemma follows immediately from \cite{M} or \cite[Corollary XIII.7.4]{Kl2} if $k$ is algebraically closed. The second statement is \cite[Example 19.1.5]{F}.
\begin{Lemma}\label{Lemma55} Let $X$ be a $d$-dimensional projective variety over a field $k$. Then:
\begin{enumerate}
\item[1)] The homomorphism $N^1(X)\rightarrow N_{d-1}(X)$ is an injection.
\item[2)] If $X$ is nonsingular, then the homomorphism $N^1(X)\rightarrow N_{d-1}(X)$ is an isomorphism.
\end{enumerate}
\end{Lemma}
\begin{proof} Suppose that $N^1(X)\rightarrow N_{d-1}(X)$ is not injective.
The homomorphism $N^1(X)\rightarrow N_{d-1}(X)$ is obtained by tensoring the natural map $N^1(X)_{{\NZQ Z}}\otimes_{{\NZQ Z}}{\NZQ Q}\rightarrow N_{d-1}(X)_{{\NZQ Z}}\otimes_{{\NZQ Z}}{\NZQ Q}$ with ${\NZQ R}$ over ${\NZQ Q}$. Thus $N^1(X)_{{\NZQ Z}}\otimes_{{\NZQ Z}}{\NZQ Q}\rightarrow N_{d-1}(X)_{{\NZQ Z}}\otimes_{{\NZQ Z}}{\NZQ Q}$ is not injective, and so there exists a Cartier divisor $D$ on $X$
such that the Weil divisor $[D]$ associated to $D$ is numerically equivalent to zero (its class is zero in $N_{d-1}(X)$) but the class of $D$ is not zero in $N^1(X)$. Thus there exists
an integral curve $C$ on $X$ such that
\begin{equation}\label{eq59}
(C\cdot D)_X\ne 0.
\end{equation}
Let $\overline k$ be an algebraic closure of $k$. There exists an integral subscheme $\overline X$ of $X\otimes_k\overline k$ such that $\overline X$ dominates $X$. Thus $\overline X$ is a projective variety over $\overline k$. Let $\psi:\overline X\rightarrow X$ be the induced dominant morphism. Let $U\subset X$ be an affine open subset such that $U\cap C\ne\emptyset$. $\psi^{-1}(U)$ is affine since it is a closed subscheme of the affine scheme $U\otimes_k\overline k$. Let $A=\Gamma(U,\mathcal O_X)$ and $B=\Gamma(\psi^{-1}(U),\mathcal O_{\overline X})$. The ring extension $A\rightarrow B$ is integral. Let $P=\Gamma(U,\mathcal I_C)$, a prime ideal of $A$ such that $\dim A/P=1$, and let $M$ be a maximal ideal of $A$ containing $P$. By the going up theorem, there exists a prime ideal $Q$ of $B$ such that $Q\cap A=P$ and prime ideal $N$ of $B$ such that $Q\subset N$ and $N\cap A=M$. Now $A/M\rightarrow B/N$ is an integral extension from a field to a domain, so $B/N$ is a field. Thus $N$ is a maximal ideal of $B$ and since there are no prime ideals of $B$ properly between $Q$ and $N$ (by \cite[Corollary 5.9]{At}) we have that $\dim B/Q=1$. Let $\overline C$ be the closure of $V(Q)\subset \psi^{-1}(U)$ in $\overline X$. Then $\overline C$ is an integral curve on $X$ which dominates $C$. There exists a field of definition $k'$ of $\overline X$ and $\overline C$ over $k$ which is a subfield of $\overline k$ which is finite over $k$. That is, there exist subvarieties $C'\subset X'$ of $X\otimes_kk'$ such that $X'\otimes_{k'}\overline k=\overline X$ and $C'\otimes_{k'}\overline k=\overline C$. We factor $\psi:\overline X\rightarrow X$ by morphisms
$$
\overline X\stackrel{\alpha}{\rightarrow} X'\stackrel{\phi}{\rightarrow} X
$$
where $\alpha:\overline X=X'\otimes_{k'}\overline k\rightarrow X'$ is the natural projection. The morphism $\phi$ is finite and surjective and $\alpha$ is flat (although it might not be of finite type). Let $H$ be an ample Cartier divisor on $X$.
Then $\phi^*H$ is an ample Cartier divisor on $X'$ (by \cite[Exercise III.5.7(d)]{H}). Thus for some positive integer $m$ we have that global sections of $\mathcal O_{X'}(m\phi^*(H))$ give a closed embedding of $X'$ in ${\NZQ P}^n_{k'}$ for some $n$. Thus
global sections of $\mathcal O_{\overline X}(m\psi^*(H))$ give a closed embedding of $\overline X=X'\otimes_{k'}\overline k$ in ${\NZQ P}^n_{\overline k}$. In particular, we have that $\psi^*(H)$ is an ample Cartier divisor on $\overline X$. We have natural morphisms
$$
N^1(X)\rightarrow N^1(X')\rightarrow N^1(\overline X).
$$
Here $X$ is a $k$-variety and $\overline X$ is a $\overline k$-variety. $X'$ is both a $k$-variety and a $k'$-variety. When we are regarding $X'$ as a $k$-variety we will write $X'_k$ and when we are regarding $X'$ as a $k'$-variety we will write $X'_{k'}$.
We may use the formalism of Kleiman \cite{Kl}, using the Snapper polynomials \cite{Sn} to compute intersection products of Cartier divisors. This is consistent with the intersection products of Fulton \cite{F} by \cite[Example 18.3.6]{F}. This intersection theory is also presented in \cite[Chapter 19]{AG}.
Since $D$ is numerically equivalent to zero as a Weil divisor, we have that
\begin{equation}\label{eq56}
(D\cdot H^{d-1})_X=(D^2\cdot H^{d-2})_X=0.
\end{equation}
We have that
$$
(\psi^*D\cdot \psi^*H^{d-1})_{\overline X}=(\phi^*D\cdot \phi^*H^{d-1})_{X'_{k'}}=\frac{1}{[k':k]}(\phi^* D\cdot \phi^* H^{d-1})_{X'_k}
$$
using \cite[Example 18.3.6]{F} and the fact that
$$
H^i(\overline X,\mathcal O_{\overline X}(\psi^*(mD)+\psi^*(nH)))=H^i(X'_{k'},\mathcal O_{X'}(\phi^*(mD)+\phi^*(nH)))\otimes_{k'}\overline k
$$
for all $m,n$ since $\alpha$ is flat. We thus have that
\begin{equation}\label{eq57}
(\psi^*D\cdot \psi^*H^{d-1})_{\overline X}=\frac{1}{[k':k]}(\phi^* D\cdot \phi^* H^{d-1})_{X'_k}=\frac{\deg(X'/X)}{[k':k]}(D\cdot H^{d-1})_X=0
\end{equation}
by \cite[Proposition 2.3]{F} and (\ref{eq56}). Similarly,
\begin{equation}\label{eq58}
(\psi^*D^2\cdot \psi^*H^{d-2})_{\overline X}=0.
\end{equation}
Since $\overline k$ is algebraically closed and the equations (\ref{eq57}) and (\ref{eq58}) hold, we have that
$$
(\psi^*D\cdot \overline C)_{\overline X}=0
$$
by \cite{M} and \cite[Corollary XIII.7.4]{Kl2}. Thus by \cite[Example 18.3.6 and Proposition 2.3]{F},
$$
\begin{array}{lll}
0&=&(\psi^*D\cdot \overline C)_{\overline X}=(\phi^*D\cdot C')_{X'_{k'}}
=\frac{1}{[k':k]}(\phi^*D\cdot C')_{X'_{k}}\\
&=&\frac{1}{[k':k]}(D\cdot \phi_*C')_X=\frac{\deg(C'/C)}{[k':k]}(D\cdot C)_X,
\end{array}
$$
giving a contradiction to (\ref{eq59}). Thus the map $N^1(X)\rightarrow N_{d-1}(X)$ is injective.
This homomorphism is always an isomorphism if $X$ is nonsingular by \cite[Example 19.1.5]{F}.
\end{proof}
As defined and developed in \cite{Kl}, \cite[Chapter 2]{L}, there are important cones ${\rm Amp}(X)$ (the ample cone), ${\rm Big}(X)$ (the big cone), ${\rm Nef}(X)$ (the nef cone) and ${\rm Psef}(X):=\overline{\rm Eff}(X)$ (the pseudo effective cone) in $N^1(X)$.
If $D$ is a Cartier divisor on the projective variety $X$, then the complete linear system $|D|$ is defined by
\begin{equation}\label{eq30}
|D|=\{{\rm div}(\sigma)\mid \sigma\in \Gamma(X,\mathcal O_X(D))\}.
\end{equation}
Let ${\rm Mov'}(X)$ be the convex cone in $N^1(X)$ generated by the classes of Cartier divisors $D$ such that $|D|$ has no codimension 1 fixed component. Define $\overline {\rm Mov}(X)$ to be the closure of ${\rm Mov'}(X)$ in $N^1(X)$. An ${\NZQ R}$-Cartier divisor $D$ is said to be movable if the class of $D$ is in $\overline {\rm Mov}(X)$. Define ${\rm Mov}(X)$ to be the interior of $\overline{\rm Mov}(X)$. As explained in \cite[page 85]{N}, we have inclusions
$$
{\rm Amp}(X)\subset {\rm Mov}(X)\subset {\rm Big}(X)
$$
and
$$
{\rm Nef}(X)\subset \overline{\rm Mov}(X)\subset {\rm Psef}(X).
$$
\begin{Lemma}\label{Lemma7} Suppose that $X$ is a $d$-dimensional variety over a field $k$, $D$ is a pseudo effective ${\NZQ R}$-Cartier divisor on $X$, $H$ is an ample ${\NZQ Q}$-Cartier divisor on $X$ and $(H^{d-1}\cdot D)_X=0$. Then $D\equiv 0$.
\end{Lemma}
\begin{proof} We will establish the lemma when $k$ is algebraically closed. The lemma will then follow for arbitrary $k$ by the method of the proof of Lemma \ref{Lemma55}.
We consider two operations on varieties. First suppose that $Y$ is a projective variety of dimension $d\ge 2$ over $k$, $\tilde H$ is an ample ${\NZQ Q}$-Cartier divisor and $\tilde D$ is a pseudo effective ${\NZQ R}$-Cartier divisor on $Y$ and $\tilde C$ is an integral curve on $Y$. Let
$\pi:\overline Y\rightarrow Y$ be the normalization of $Y$. Then there exists an integral curve $\overline C$ in $\overline Y$ such that $\pi(\overline C) =\tilde C$ (as in the proof of Lemma \ref{Lemma55}). We have that
$$
(\pi^*(\tilde H)^{d-1}\cdot \pi^*(\tilde D))_{\overline Y}=(\tilde H^{d-1}\cdot \tilde D)_Y
$$
and
$$
(\overline C\cdot \pi^*(\tilde D))_{\overline Y}=\mbox{deg}(\overline C/\tilde C)(\tilde C\cdot \tilde D)_Y.
$$
We further have that $\pi^*(\tilde D)$ is pseudo effective.
For the second operation, suppose that $Y$ is a normal projective variety over $k$. Let $\tilde H$ be an ample ${\NZQ Q}$-Cartier divisor on $Y$ and $\tilde D$ be a pseudo effective ${\NZQ R}$-Cartier divisor on $Y$. Let
$\tilde C$ be an integral curve on $Y$. Let $\phi:Z:=B(\tilde C)\rightarrow Y$ be the blow up of $\tilde C$. Let $E$ be the effective Cartier divisor on $Z$ such that $\mathcal O_Z(-E)=\mathcal I_{\tilde C}\mathcal O_Z$. There exists a positive integer $m$ such that $m\tilde H$ is a Cartier divisor and $\phi^*(m\tilde H)-E$ is very ample on $Z$. Let $L$ be the linear system
$$
L=\{F\in |m\tilde H|\mid \tilde C\subset \mbox{Supp}(F)\}
$$
on $Y$. The base locus of $L$ is $\tilde C$. We have an induced rational map $\Phi_L:Y\dashrightarrow {\NZQ P}^n$ where $n$ is the dimension of $L$. Let $Y'$ be the image of $\Phi_L$. Then $Y'\cong Z$ since $\phi^*(m\tilde H)-E$ is very ample on $Z$. Thus $\dim Y'=d$ and we have equality of function fields $k(Y')=k(Y)$. By the first theorem of Bertini, \cite{M1}, \cite[Section I.7]{Z}, \cite[Theorem 22.12]{AG}, a general member $W$ of $L$ is integral, so that it is a variety. By construction, $\tilde C\subset W$. Let $\alpha:W\rightarrow Y$ be the inclusion. We have that $\alpha^*(\tilde H)$ is ample on $W$. A general member of $L$ is not a component of the support of $\tilde D$ so $\alpha^*(\tilde D)$ is pseudo effective. We have that
$(\alpha^*(\tilde H)^{d-2}\cdot \alpha^*(\tilde D))_W=(\tilde H^{d-1}\cdot \tilde D)_Y$.
Further,
$(\tilde C\cdot \alpha^*(\tilde D))_W=(\tilde C\cdot \tilde D)_Y$.
Suppose that $D$ is not numerically equivalent to zero. We will derive a contradiction. There then exists an integral curve $C$ on $X$ such that $(C\cdot D)_X\ne 0$.
By iterating the above two operations, we construct a morphism of $k$-varieties $\beta:S\rightarrow X$ such that $S$ is a two dimensional projective variety, with an integral curve
$\tilde C$ on $S$, an ample ${\NZQ Q}$-Cartier divisor $\tilde H$ on $S$ and a pseudo effective ${\NZQ R}$-Cartier divisor $\tilde D$ on $S$ such that
$(\tilde H\cdot \tilde D)_S=0$ but $(\tilde D\cdot \tilde C)_S\ne 0$. Let $\gamma:T\rightarrow S$ be a resolution of singularities (which exists by \cite{Ab}, \cite{Li} or \cite{CJS}).
There exists an exceptional divisor $E$ on $T$ and a positive integer $m$ such that $m\tilde H$ is a Cartier divisor on $S$ and $A:=\gamma^*(m\tilde H)-E$ is an ample ${\NZQ Q}$-Cartier divisor. There exists an integral curve $\overline C$ on $T$ such that $\gamma(\overline C)=\tilde C$, and $\gamma^*(\tilde D)$ is a pseudo effective ${\NZQ R}$-Cartier divisor. Since $E$ is exceptional for $\gamma$, we have that
$$
(A\cdot \gamma^*(\tilde D))_T=((\gamma^*(m\tilde H)-E)\cdot \gamma^*(\tilde D))_T=(\gamma^*(m\tilde H)\cdot \gamma^*(\tilde D))_T=m(\tilde H\cdot \tilde D)_S=0
$$
and
$$
(\gamma^*(\tilde D)\cdot \overline C)_T=\deg(\overline C/\tilde C)(\tilde C\cdot \tilde D)_S\ne 0
$$
by \cite[Chapter I]{Kl}, \cite[Proposition 19.8 and Proposition 19.12]{AG}. But this is a contradiction to \cite[Theorem 1, page 317]{Kl}, \cite[Theorem 1.4.29]{L}, since $N^1(T)=N_1(T)$ by Lemma \ref{Lemma55}.
\end{proof}
\subsection{Normal varieties}\label{subsecnorm} In this section we review some material from \cite{FKL}.
Suppose that $X$ is a normal projective variety over a field $k$. The map $D\rightarrow [ D]$ is an inclusion of ${\rm Div}(X)$ into $Z_{d-1}(X)$, and thus induces an inclusion of ${\rm Div}(X)\otimes {\NZQ R}$ into $Z_{d-1}(X)\otimes {\NZQ R}$. We may thus identify a Cartier divisor $D$ on $X$ with its associated Weil divisor $[D]$.
Let $x$ be a real number. Define $\lfloor x\rfloor$ to be the round down of $x$ and $\{x\}=x-\lfloor x\rfloor$.
Let $E$ be an ${\NZQ R}$-Weil divisor on a normal variety $X$ (an element of $Z_{d-1}(X)\otimes {\NZQ R}$). Expand $E=\sum a_iE_i$ with $a_i\in {\NZQ R}$ and $E_i$ prime divisors on $X$. Then we have associated divisors
$$
\lfloor E\rfloor =\sum \lfloor a_i\rfloor E_i\mbox{ and }\{E\}=\sum \{a_i\}E_i.
$$
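For instance, if $E=\frac{3}{2}E_1-\frac{1}{2}E_2$ with $E_1$ and $E_2$ prime divisors, then
$$
\lfloor E\rfloor = E_1-E_2\mbox{ and }\{E\}=\frac{1}{2}E_1+\frac{1}{2}E_2,
$$
since $\lfloor \frac{3}{2}\rfloor =1$, $\{\frac{3}{2}\}=\frac{1}{2}$, $\lfloor -\frac{1}{2}\rfloor=-1$ and $\{-\frac{1}{2}\}=\frac{1}{2}$.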
There is an associated coherent sheaf $\mathcal O_X(E)$ on $X$ defined by
$$
\Gamma(U,\mathcal O_X(E))=\{f\in k(X)^*\mid \mbox{div}(f)+E|_U \ge 0\}\mbox{ for $U$ an open subset of $X$.}
$$
We have that $\mathcal O_X(E)=\mathcal O_X(\lfloor E\rfloor)$.
If $D$ and $D'$ are ${\NZQ R}$-Weil divisors on $X$, then define $D'\sim_{{\NZQ Z}}D$ if $D'-D=\mbox{div}(f)$ for some $f\in k(X)^*$.
Define $D'\sim_{{\NZQ Q}}D$ if there exists $m\in {\NZQ Z}_{>0}$ such that $mD'\sim_{{\NZQ Z}}mD$.
For $D$ an ${\NZQ R}$-Weil divisor, the complete linear system $|D|$ is defined as
$$
|D|=\{\mbox{${\NZQ R}$-Weil divisors }D'\mid D'\ge 0\mbox{ and }D'\sim_{{\NZQ Z}}D\}.
$$
If $D$ is an integral Cartier divisor, then this is in agreement with the definition of (\ref{eq30}). For $D$ an ${\NZQ R}$-Weil divisor, we define
$$
|D|_{{\NZQ Q}}=\{\mbox{${\NZQ R}$-Weil divisors } D'\mid D'\ge 0\mbox{ and }D'\sim_{{\NZQ Q}}D\}.
$$
\subsection{$\sigma$-decomposition}\label{Subsecsigma} In this subsection we assume that $X$ is a nonsingular projective variety over a field $k$. We will restrict our use of $\sigma$-decompositions to this situation. Nakayama defined and developed $\sigma$-decompositions for nonsingular complex projective varieties in Chapter III of \cite{N}. The theory and proofs in this chapter extend to arbitrary fields.
The $\sigma$-decomposition is extended to complete normal projective varieties in \cite{FKL}.
Since $X$ is nonsingular, the map $D\rightarrow [D]$ is an isomorphism from ${\rm Div}(X)$ to $Z_{d-1}(X)$, and thus induces an isomorphism ${\rm Div}(X)\otimes {\NZQ R}\rightarrow Z_{d-1}(X)\otimes {\NZQ R}$. Thus we may identify ${\NZQ R}$-Cartier divisors and ${\NZQ R}$-Weil divisors on $X$, which we will refer to as ${\NZQ R}$-divisors. Since $X$ is normal, we may use the theory of Subsection \ref{subsecnorm}.
Let $D$ be an ${\NZQ R}$-divisor. We define
$$
|D|_{\rm num}=\{\mbox{${\NZQ R}$-divisors $D'$ on $X$}\mid D'\ge 0\mbox{ and }D'\equiv D\}.
$$
Let $D$ be a big ${\NZQ R}$-divisor and $\Gamma$ be a prime divisor on $X$. Then we define
$$
\sigma_{\Gamma}(D)_{{\NZQ Z}}:=\left\{\begin{array}{ll}
\inf\{\mbox{mult}_{\Gamma}\Delta\mid \Delta\in |D|\}&\mbox{ if }|D|\ne \emptyset\\
+\infty&\mbox{ if }|D|=\emptyset,
\end{array}\right.
$$
$$
\sigma_{\Gamma}(D)_{{\NZQ Q}}:=\inf\{\mbox{mult}_{\Gamma}\Delta\mid \Delta\in |D|_{{\NZQ Q}}\},
$$
$$
\sigma_{\Gamma}(D):=\inf\{\mbox{mult}_{\Gamma}\Delta\mid \Delta\in |D|_{\rm num}\}.
$$
These three functions $\sigma_{\Gamma}(D)_*$ satisfy
$$
\sigma_{\Gamma}(D_1+D_2)_*\le \sigma_{\Gamma}(D_1)_*+\sigma_{\Gamma}(D_2)_*.
$$
We have that
\begin{equation}\label{eq37}
\sigma_{\Gamma}(D)_{{\NZQ Q}}=\sigma_{\Gamma}(D)
\end{equation}
by \cite[Lemma III.1.4]{N}.
The function $\sigma_{\Gamma}$ is continuous on ${\rm Big}(X)$ by \cite[Lemma 1.7]{N}.
If $D$ is a pseudo effective ${\NZQ R}$-divisor and $\Gamma$ is a prime divisor, then
$$
\sigma_{\Gamma}(D):=\lim_{t\rightarrow 0^+}\sigma_{\Gamma}(D+tA)
$$
where $A$ is any ample ${\NZQ R}$-divisor on $X$. This limit exists and is independent of the choice of $A$ by \cite[Lemma 1.5]{N}.
By \cite[Corollary 1.11]{N}, there are only finitely many prime divisors $\Gamma$ on $X$ such that $\sigma_{\Gamma}(D)>0$.
For a given pseudo effective ${\NZQ R}$-divisor $D$, the ${\NZQ R}$-divisors
$$
N_{\sigma}(D)=\sum_{\Gamma}\sigma_{\Gamma}(D)\Gamma
\mbox{ and }
P_{\sigma}(D)=D-N_{\sigma}(D)
$$
are defined in \cite[Definition 1.12]{N}. The decomposition $D=P_{\sigma}(D)+N_{\sigma}(D)$ is called the $\sigma$-decomposition of $D$.
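To illustrate with a standard example: let $X$ be the blow up of ${\NZQ P}^2_k$ at a $k$-rational point, let $E$ be the exceptional curve and let $A$ be an ample ${\NZQ R}$-divisor on $X$. For small $t>0$ the big divisor $E+tA$ has Zariski decomposition with negative part $(1-t(A\cdot E))E$ (recall that on a nonsingular projective surface the $\sigma$-decomposition of a big divisor recovers the classical Zariski decomposition), so that
$$
\sigma_E(E)=\lim_{t\rightarrow 0^+}\sigma_E(E+tA)=1,
$$
and hence $N_{\sigma}(E)=E$ and $P_{\sigma}(E)=0$.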
Suppose that $D$ is a pseudo effective ${\NZQ R}$-divisor, $A$ and $H$ are ample ${\NZQ R}$-divisors and $t,\epsilon>0$. Then, since $D+tA+\epsilon H$, $D+\epsilon H$ and $tA$ are big, we have that for any prime divisor $\Gamma$,
$$
\sigma_{\Gamma}(D+tA+\epsilon H)\le \sigma_{\Gamma}(D+\epsilon H)+\sigma_{\Gamma}(tA)=\sigma_{\Gamma}(D+\epsilon H).
$$
Thus
$$
\sigma_{\Gamma}(D+tA)=\lim_{\epsilon\rightarrow 0^+}\sigma_{\Gamma}(D+tA+\epsilon H)
\le \lim_{\epsilon\rightarrow 0^+}\sigma_{\Gamma}(D+\epsilon H)=\sigma_{\Gamma}(D).
$$
In particular, if $\Gamma_1,\ldots,\Gamma_s$ are the prime divisors such that $N_{\sigma}(D)=\sum_{i=1}^s a_i\Gamma_i$
where $a_i>0$ for all $i$, then for all $t>0$, there is an expansion $N_{\sigma}(D+tA)=\sum_{i=1}^sa_i(t)\Gamma_i$ where
$a_i(t)\in {\NZQ R}_{\ge 0}$. Thus $\lim_{t\rightarrow 0^+}N_{\sigma}(D+tA)=N_{\sigma}(D)$ and $\lim_{t\rightarrow 0^+}P_{\sigma}(D+tA)=P_{\sigma}(D)$.
\begin{Lemma}\label{Lemma31} Suppose that $D$ is a pseudo effective ${\NZQ R}$-divisor on a nonsingular projective variety $X$. Then
\begin{enumerate}
\item[1)] $P_{\sigma}(D)$ is pseudo effective.
\item[2)] $\sigma_{\Gamma}(P_{\sigma}(D))=0$ for all prime divisors $\Gamma$ on $X$, so that the class of $P_{\sigma}(D)$ is in $\overline{\rm Mov}(X)$.
\item[3)] $N_{\sigma}(D)=0$ if and only if the class of $D$ is in $\overline{\rm Mov}(X)$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Let $A$ be an ample ${\NZQ R}$-divisor on $X$. For all $\epsilon>0$, $D+\epsilon A$ is big. Thus the class of
$D+\epsilon A-\sum \sigma_{\Gamma}(D+\epsilon A)\Gamma$ is in $\mbox{Big}(X)$.
Thus $P_{\sigma}(D)=\lim_{\epsilon \rightarrow 0^+}\left(D+\epsilon A-\sum \sigma_{\Gamma}(D+\epsilon A)\Gamma\right)$ is pseudo effective.
Statement 2) follows from \cite[Lemma III.1.8]{N} and \cite[Proposition III.1.14]{N}. Statement 3) is \cite[Proposition III.1.14]{N}.
\end{proof}
\subsection{Movable divisors on a normal variety}\label{SubSecMN}
Let $X$ be a normal projective variety over a field, and $\Gamma$ be a prime divisor on $X$. As explained in \cite{FKL},
the definitions of $\sigma_{\Gamma}(D)_{{\NZQ Z}}$ and $\sigma_{\Gamma}(D)_{{\NZQ Q}}$ of Subsection \ref{Subsecsigma} extend to ${\NZQ R}$-Weil divisors $D$ on $X$, as do the inequalities
$$
\sigma_{\Gamma}(D_1+D_2)_{{\NZQ Z}}\le \sigma_{\Gamma}(D_1)_{{\NZQ Z}}+\sigma_{\Gamma}(D_2)_{{\NZQ Z}}
\mbox{ and }
\sigma_{\Gamma}(D_1+D_2)_{{\NZQ Q}}\le \sigma_{\Gamma}(D_1)_{{\NZQ Q}}+\sigma_{\Gamma}(D_2)_{{\NZQ Q}}.
$$
Let $D$ be a big and movable ${\NZQ R}$-Cartier divisor on $X$ and $A$ be an ample ${\NZQ R}$-Cartier divisor on $X$. Then the class of $D+tA$ is in ${\rm Mov}(X)$ for all positive $t$, so that $\sigma_{\Gamma}(D+tA)_{{\NZQ Q}}=0$ for all $t>0$ and all prime divisors $\Gamma$ on $X$. Since $D$ is big, there exists $\delta>0$ such that $D\sim_{{\NZQ Q}}\delta A+\Delta$ where $\Delta$ is an effective ${\NZQ R}$-Cartier divisor. Then for all $\epsilon >0$, $(1+\epsilon)D\sim_{{\NZQ Q}}D+\epsilon\delta A+\epsilon\Delta$ and so
$$
(1+\epsilon)\sigma_{\Gamma}(D)_{{\NZQ Q}}\le \sigma_{\Gamma}(D+\epsilon\delta A)_{{\NZQ Q}}+\epsilon\mbox{mult}_{\Gamma}(\Delta)=\epsilon\mbox{mult}_{\Gamma}(\Delta)
$$
for all $\epsilon>0$. Thus, with our assumption that $D$ is a big and movable ${\NZQ R}$-Cartier divisor, we have that
\begin{equation}\label{eq50}
\sigma_{\Gamma}(D)_{{\NZQ Q}}=0\mbox{ for all prime divisors $\Gamma$ on $X$.}
\end{equation}
\subsection{Positive intersection products} Let $X$ be a $d$-dimensional projective variety over a field $k$. In \cite{C}, we generalize the positive intersection product on projective varieties over an algebraically closed field of characteristic zero defined in \cite{BFJ} to projective varieties over an arbitrary field.
Let $I(X)$ be the directed set of projective varieties $Y$ which have a birational morphism to $X$. If $f:Y'\rightarrow Y$ is a morphism in $I(X)$ and $\mathcal L\in N^1(Y)$, then $f^*\mathcal L\in N^1(Y')$. We may thus define $N^1(\mathcal X)=\lim_{\rightarrow }N^1(Y)$. If $D$ is a Cartier ${\NZQ R}$-divisor on $Y$, we will sometimes abuse notation and identify $D$ with its class in $N^1(\mathcal X)$.
In \cite{C}, $N^1(Y)$ is denoted by $M^1(Y)$.
For $Y\in I(X)$ and $0\le p\le d$, we let $L^p(Y)$ be the real vector space of $p$-multilinear forms on $N^1(Y)$.
Giving the finite dimensional real vector space $L^p(Y)$ the Euclidean topology, we define
\begin{equation}\label{eq300}
L^p(\mathcal X)=\lim_{\leftarrow}L^p(Y).
\end{equation}
$L^p(\mathcal X)$ is a Hausdorff topological real vector space. We define $L^0(\mathcal X)={\NZQ R}$.
The pseudo effective cone ${\rm Psef}(L^p(Y))$ in $L^p(Y)$ is the closure of the cone generated by the natural images of the $p$-dimensional closed subvarieties of $Y$. The inverse limit of the ${\rm Psef}(L^p(Y))$ is then a closed convex and strict cone ${\rm Psef}(L^p(\mathcal X))$ in $L^p(\mathcal X)$, defining a partial order $\ge$ in $L^p(\mathcal X)$. The pseudo effective cone in $L^0(\mathcal X)$ is the set of nonnegative real numbers.
For $Y\in I(X)$, let $\rho_Y:N^1(Y)\rightarrow N^1(\mathcal X)$ and $\pi_Y:L^p(\mathcal X)\rightarrow L^p(Y)$ be the induced continuous linear maps. In \cite{BFJ} they consider a related but different vector space from $L^p(\mathcal X)$.
Suppose that $\alpha_1,\ldots,\alpha_r\in N^1(\mathcal X)$ with $r\le d$. Let $f:Y\rightarrow X\in I(X)$ be such that $\alpha_1,\ldots,\alpha_r$
are represented by classes in $N^1(Y)$ of ${\NZQ R}$-Cartier divisors $D_1,\ldots,D_r$ on $Y$. Then the ordinary intersection product $D_1\cdot\ldots\cdot D_r$ induces a linear map $D_1\cdot\ldots\cdot D_r\in L^{d-r}(\mathcal X)$. If $r=d$, then this linear map is just the intersection number $(D_1\cdot\ldots\cdot D_d)_Y\in {\NZQ R}$ of \cite[Definition 2.4.2]{F}.
If $\alpha_1,\ldots,\alpha_p \in N^1(\mathcal X)$ are big, we define the positive intersection product (\cite[Definition 2.5, Proposition 2.13]{BFJ} in characteristic zero, \cite[Definition 4.4, Proposition 4.12]{C}) to be
\begin{equation}\label{eq33}
\begin{array}{llll}
\langle \alpha_1\cdot \ldots\cdot\alpha_p\rangle &=& {\rm lub}&\{(\alpha_1- D_1)\cdots (\alpha_p- D_p)\in L^{d-p}(\mathcal X)\mid D_i \mbox{ are effective ${\NZQ R}$-Cartier}\\
&&&\mbox{divisors on some $Y_i\in I(X)$ and $\alpha_i- D_i$ are big}\}.
\end{array}
\end{equation}
\begin{Proposition}\label{Prop35}(\cite[Proposition 2.13]{BFJ}, \cite[Proposition 4.12]{C})
If $\alpha_1,\ldots,\alpha_p\in N^1(\mathcal X)$ are big, we have that
$\langle \alpha_1\cdot \ldots\cdot \alpha_p\rangle$ is the least upper bound in $L^{d-p}(\mathcal X)$ of all intersection products
$\beta_1\cdot\ldots\cdot\beta_p$ where $\beta_i$ is the class of a nef ${\NZQ R}$-Cartier divisor such that $\beta_i\le \alpha_i$ for all $i$.
\end{Proposition}
If $\alpha_1,\ldots,\alpha_p\in N^1(\mathcal X)$ are pseudo effective, their positive intersection product is defined (\cite[Definition 2.10]{BFJ}, \cite[Definition 4.8, Lemma 4.9]{C}) as
$$
\lim_{\epsilon\rightarrow 0^+}\langle (\alpha_1+\epsilon H)\cdot \ldots \cdot (\alpha_p+\epsilon H)\rangle
$$
where $H$ is a big ${\NZQ R}$-Cartier divisor on some $Y\in I(X)$.
\begin{Lemma}\label{Lemma36}(\cite[Proposition 2.9, Remark 2.11]{BFJ}, \cite[Lemma 4.13]{C}, \cite[Proposition 4.7]{C}) The positive intersection product $\langle \alpha_1\cdot\ldots\cdot\alpha_p\rangle$ is homogeneous and superadditive in each variable on ${\rm Psef}(\mathcal X)$. Further, it is continuous on the $p$-fold product of the big cone. \end{Lemma}
\begin{Remark}\label{Remark50}
Since a positive intersection product is always in the pseudo effective cone, if $\alpha_1,\ldots,\alpha_d\in N^1(\mathcal X)$ are pseudo effective, then
$\langle \alpha_1\cdot\ldots\cdot\alpha_d\rangle\in {\NZQ R}_{\ge 0}$. Since the intersection product of nef and big ${\NZQ R}$-Cartier divisors is positive, it follows from Proposition \ref{Prop35} that if $\alpha_1,\ldots,\alpha_d\in N^1(\mathcal X)$ are big, then
$\langle \alpha_1\cdot\ldots\cdot\alpha_d\rangle\in {\NZQ R}_{> 0}$.
\end{Remark}
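As a consistency check on these definitions: if $\alpha_1,\ldots,\alpha_d\in N^1(\mathcal X)$ are classes of nef and big ${\NZQ R}$-Cartier divisors on some $Y\in I(X)$, then the positive intersection product recovers the ordinary intersection number,
$$
\langle \alpha_1\cdot\ldots\cdot\alpha_d\rangle =(\alpha_1\cdot\ldots\cdot\alpha_d)_Y.
$$
Indeed, by Proposition \ref{Prop35} the left hand side is the least upper bound of the numbers $\beta_1\cdot\ldots\cdot\beta_d$ with $\beta_i$ nef and $\beta_i\le\alpha_i$, and for such $\beta_i$,
$$
\alpha_1\cdots\alpha_d-\beta_1\cdots\beta_d=\sum_{i=1}^d\beta_1\cdots\beta_{i-1}\cdot(\alpha_i-\beta_i)\cdot\alpha_{i+1}\cdots\alpha_d\ge 0,
$$
each summand being an intersection of nef classes with a pseudo effective class, so the least upper bound is attained at $\beta_i=\alpha_i$.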
\begin{Lemma}\label{Lemma34} Let $H$ be an ample ${\NZQ R}$-Cartier divisor on some $Y\in I(X)$ and let $\alpha\in N^1(\mathcal X)$ be pseudo effective. Then
$$
\langle H^{d-1}\cdot \alpha\rangle = H^{d-1}\cdot\langle \alpha \rangle.
$$
\end{Lemma}
\begin{proof} By Proposition \ref{Prop35}, for all $\epsilon >0$,$$
\langle \left((1+\epsilon) H\right)^{d-1}\cdot(\alpha+\epsilon H)\rangle=(1+\epsilon)^{d-1}\left( H^{d-1}\cdot \langle \alpha+\epsilon H\rangle\right).
$$
Taking the limit as $\epsilon$ goes to zero, we obtain the conclusion of the lemma.
\end{proof}
\begin{Theorem}\label{Theorem17}
Suppose that $X$ is a $d$-dimensional projective variety, $\alpha\in N^1(X)$ is big and $\gamma\in N^1(X)$ is arbitrary. Then
$$
\frac{d}{dt}{\rm vol}(\alpha+t\gamma)=d\langle (\alpha+t\gamma)^{d-1}\rangle\cdot\gamma
$$
whenever $\alpha+t\gamma$ is big.
\end{Theorem}
This is a restatement of \cite[Theorem A]{BFJ}, \cite[Theorem 5.6]{C}. The proof shows that
$$
\lim_{\Delta t\rightarrow 0}\frac{{\rm vol}(\alpha+(t+\Delta t)\gamma)-{\rm vol}(\alpha+t\gamma)}{\Delta t}
=d\langle (\alpha+t\gamma)^{d-1}\rangle\cdot\gamma.
$$
Suppose $\alpha\in N^1(\mathcal X)$ is pseudo effective. Then the formula of \cite[Corollary 3.6]{BFJ} holds for varieties over arbitrary fields:
\begin{equation}\label{eq40}
\langle \alpha^d\rangle =\langle \alpha^{d-1}\rangle\cdot \alpha.
\end{equation}
To establish this formula, first
suppose that $\alpha$ is big.
Then taking the derivative at $t=0$ of $\langle(\alpha+t\alpha)^d\rangle=(1+t)^d\langle\alpha^d\rangle$, we obtain formula (\ref{eq40}) from Theorem \ref{Theorem17}. If $\alpha$ is pseudo effective, we obtain (\ref{eq40}) by regarding $\alpha$ as a limit of the big divisors $\alpha +tH$ where $H$ is an ample ${\NZQ R}$-Cartier divisor.
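In more detail: for $\alpha$ big, homogeneity gives $\langle(\alpha+t\alpha)^d\rangle=(1+t)^d\langle\alpha^d\rangle$, and since ${\rm vol}(\beta)=\langle \beta^d\rangle$ for big $\beta$ (see (\ref{eq44}) below), Theorem \ref{Theorem17} with $\gamma=\alpha$ computes the same derivative at $t=0$ in two ways:
$$
d\langle \alpha^d\rangle =\frac{d}{dt}\Big|_{t=0}(1+t)^d\langle \alpha^d\rangle
=\frac{d}{dt}\Big|_{t=0}{\rm vol}(\alpha+t\alpha)=d\langle \alpha^{d-1}\rangle\cdot \alpha,
$$
which is (\ref{eq40}) after dividing by $d$.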
The natural map $N^1(X)\rightarrow L^{d-1}(X)$ is an injection, as follows from the proof of Lemma \ref{Lemma55}. Let $WN^1(X)$ be the image of the homomorphism of $Z_{d-1}(X)\otimes{\NZQ R}$ to $L^{d-1}(X)$ which associates to $D\in Z_{d-1}(X)\otimes{\NZQ R}$ the natural map $(\mathcal L_1,\ldots,\mathcal L_{d-1})\mapsto (\mathcal L_1\cdot\ldots\cdot \mathcal L_{d-1}\cdot D)_X$.
We have that $WN^1(X)$ is the subspace of $L^{d-1}(X)$ generated by $\mbox{Psef}(X)$. We always have a factorization $N^1(X)\rightarrow N_{d-1}(X)\rightarrow WN^1(X)$.
In this way we can identify the map $D\cdot$ which is the image of an element of $Z_{d-1}(X)\otimes{\NZQ R}$ in $L^{d-1}(X)$ with its class in $WN^1(X)$. If $X$ is nonsingular, then $WN^1(X)=N_{d-1}(X)=N^1(X)$.
\begin{Lemma}\label{Lemma200} Suppose that $X$ is a projective variety and $D$ is a big ${\NZQ R}$-Cartier divisor on $X$. Let $f:Y\rightarrow X\in I(X)$ be such that $Y$ is normal. Then
\begin{equation}\label{eq91}
\pi_Y(\langle D\rangle)=P_{\sigma}(f^*(D)).
\end{equation}
\end{Lemma}
\begin{proof} We may assume that $Y=X$ so that $f^*D=D$.
After replacing $D$ with an ${\NZQ R}$-Cartier divisor numerically equivalent to $D$, we may assume that $D=\sum_{i=1}^r a_iG_i$ is an effective divisor, where $G_i$ are prime divisors and $a_i\in {\NZQ R}_{> 0}$. For $m\in {\NZQ Z}_{>0}$, write $mD=N_m+\sum_{i=1}^r\sigma_{G_i}(mD)_{{\NZQ Z}}G_i$. Then
$|mD|=|N_m|+\sum_{i=1}^r\sigma_{G_i}(mD)_{{\NZQ Z}}G_i$ where $|N_m|$ has no codimension one components in its base locus.
There exists a birational morphism $\phi_m:X_m\rightarrow X$ such that $X_m$ is normal and is a resolution of indeterminacy of the rational map determined by $|N_m|$ on $X$. Thus
$\phi_m^*(mD)=M_m+\sum_{i=1}^r\sigma_{G_i}(mD)_{{\NZQ Z}}\overline G_i+F_m$ where
$M_m$ and $F_m$ are effective, $F_m$ has exceptional support for $\phi_m$, $\overline G_i$ is the proper transform of $G_i$ on $X_m$ and
$|\phi_m^*(mD)|=|M_m|+\sum_{i=1}^r\sigma_{G_i}(mD)_{{\NZQ Z}}\overline G_i+F_m$ where
$|M_m|$ is base point free. Thus $M_m$ is a nef integral Cartier divisor on $X_m$.
Set
$D_m=\sum_{i=1}^r\frac{\sigma_{G_i}(mD)_{{\NZQ Z}}}{m}\overline G_i+\frac{F_m}{m}$, so that $D_m$ is an effective ${\NZQ R}$-Cartier divisor on $X_m$. We have that
$\frac{1}{m}M_m\le \langle D\rangle$ in $L^{d-1}(\mathcal X)$ so that
$\pi_X(\frac{1}{m}M_m)\le \pi_X\langle D\rangle$ in $L^{d-1}(X)$. Now
$$
\begin{array}{lll}
\pi_X(\frac{1}{m}M_m)&=&(\phi_{m})_*(\frac{1}{m}M_m)
=\frac{1}{m}(\phi_m)_*((\phi_m)^*(mD)-\sum_{i=1}^r\sigma_{G_i}(mD)_{{\NZQ Z}}\overline G_i-F_m)\\
&=&D-\sum_{i=1}^r \frac{\sigma_{G_i}(mD)_{{\NZQ Z}}}{m}G_i.
\end{array}
$$
Thus
$$
P_{\sigma}(D)=D-\sum_{i=1}^r\sigma_{G_i}(D)G_i =\lim_{m\rightarrow \infty}(D-\sum_{i=1}^r\frac{\sigma_{G_i}(mD)_{{\NZQ Z}}}{m}G_i)\le \pi_X(\langle D\rangle)
$$
in $L^{d-1}(X)$.
Let $Z\in I(X)$ be normal, with birational map $g:Z\rightarrow X$ and $N$ be a nef and big ${\NZQ R}$-Cartier divisor on $Z$ and $E$ be an effective ${\NZQ R}$-Cartier divisor on $Z$ such that $N+E=g^*(D)$. Let $\Gamma$ be a prime divisor on $Z$. Then
$$
\sigma_{\Gamma}(g^*(D))\le \sigma_{\Gamma}(N)+\mbox{ord}_{\Gamma}(E)=\mbox{ord}_{\Gamma}(E).
$$
Thus $N_{\sigma}(g^*(D))\le E$ and so $N\le P_{\sigma}(g^*(D))$.
Let $\tilde \Gamma$ be a prime divisor on $X$ and let $\Gamma$ be the proper transform of $\tilde \Gamma$ on $Z$. Then $\sigma_{\Gamma}(g^*(D))=\sigma_{\tilde\Gamma}(D)$ so that $\pi_X(N)\le P_{\sigma}(D)$ in $WN^1(X)$.
Thus $\pi_X(\langle D \rangle)\le P_{\sigma}(D)$ in $L^{d-1}(X)$.
\end{proof}
Let $X$ be a projective variety and $L_1,\ldots, L_{d-1}\in N^1(X)$. Suppose that $D$ is a big and movable ${\NZQ R}$-Cartier divisor on $X$. Then the intersection product in $L^0(\mathcal X)={\NZQ R}$ is
\begin{equation}\label{eq90}
\begin{array}{lll}
L_1\cdot \ldots \cdot L_{d-1}\cdot \langle D\rangle
&=&\rho_X(L_1)\cdot\ldots\cdot \rho_X(L_{d-1})\cdot \langle D\rangle
=L_1\cdot\ldots\cdot L_{d-1}\cdot \pi_X(\langle D\rangle)\\
&=&(L_1\cdot\ldots\cdot L_{d-1}\cdot P_{\sigma}(D))_X
=(L_1\cdot\ldots\cdot L_{d-1}\cdot D)_X
\end{array}
\end{equation}
\subsection{Volume of divisors} Suppose that $X$ is a $d$-dimensional projective variety over a field $k$ and $D$ is a Cartier divisor on $X$.
The volume of $D$ is (\cite[Definition 2.2.31]{L})
$$
{\rm vol}(D)=\limsup_{n\rightarrow \infty}\frac{\dim_k\Gamma(X,\mathcal O_X(nD))}{n^d/d!}.
$$
This lim sup is actually a limit. When $k$ is an algebraically closed field of characteristic zero, this is shown in \cite[Example 11.4.7]{L}, as a consequence of Fujita Approximation \cite{F2} (cf. \cite[Theorem 10.35]{L}). The limit is established in
\cite{LM} and \cite{T} when $k$ is algebraically closed of arbitrary characteristic. A proof over an arbitrary field is given in \cite[Theorem 10.7]{C1}.
Since ${\rm vol}$ is a homogeneous function, it extends naturally to a function on
${\NZQ Q}$-divisors, and it extends to a continuous function on $N^1(X)$ (\cite[Corollary 2.2.45]{L}), giving the volume of an arbitrary ${\NZQ R}$-Cartier divisor.
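For example, if $X={\NZQ P}^d_k$ and $H$ is a hyperplane, then
$$
\dim_k\Gamma({\NZQ P}^d_k,\mathcal O_{{\NZQ P}^d_k}(n))=\binom{n+d}{d}=\frac{n^d}{d!}+O(n^{d-1}),
$$
so that ${\rm vol}(H)=1$ and, by homogeneity, ${\rm vol}(mH)=m^d$ for all $m\in {\NZQ R}_{>0}$.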
We have (\cite[Theorem 3.1]{BFJ}, \cite[Theorems 5.2 and 5.3]{C}) that for a pseudo effective ${\NZQ R}$-Cartier divisor $D$ on $X$,
\begin{equation}\label{eq44}
{\rm vol}(D)=\langle D^d\rangle.
\end{equation}
Further, we have by \cite[Theorem 3.5]{FKL} that for an arbitrary ${\NZQ R}$-Cartier divisor $D$ (or even an ${\NZQ R}$-Weil divisor) on a normal variety $X$,
$$
{\rm vol}(D)=\lim_{n\rightarrow \infty}\frac{\dim_k\Gamma(X,\mathcal O_X(nD))}{n^d/d!}.
$$
Thus ${\rm vol}(D)={\rm vol}(P_{\sigma}(D))$ and so if $P_{\sigma}(D)$ is ${\NZQ R}$-Cartier, then ${\rm vol}(D)=\langle P_{\sigma}(D)^d\rangle$.
\begin{Lemma} Suppose that $L$ is an ${\NZQ R}$-Cartier divisor on a $d$-dimensional projective variety $X$ over a field $k$, $Y$ is a projective variety and $\phi:Y\rightarrow X$ is a generically finite morphism. Then
\begin{equation}\label{eq43}
{\rm vol}(\phi^*L)=\deg(Y/X)\,{\rm vol}(L).
\end{equation}
\end{Lemma}
\begin{proof}
First assume that $L$ is a Cartier divisor.
The sheaf $\phi_*\mathcal O_Y$ is a coherent sheaf of $\mathcal O_X$-modules. Let $R$ be the coordinate ring of $X$ with respect to some closed embedding of $X$ in a projective space. Then $R=\oplus_{i\ge 0}R_i$ is a standard graded domain over $R_0$, with $R_0$ a finite extension field of $k$. There exists a finitely generated graded $R$-module $M$ such that the sheafification $\tilde M$ of $M$ is isomorphic to $\phi_*\mathcal O_Y$ (by \cite[Proposition II.5.15 and Exercise II.5.9]{H} or \cite[Theorem 11.46]{AG}).
Let $S$ be the multiplicative set of nonzero homogeneous elements of $R$ and $\eta$ be the generic point of $X$. The ring $R_{(0)}$ is the set of homogeneous elements of degree 0 in the localization $S^{-1}R$ and the $R_{(0)}$-module $M_{(0)}$
is the set of homogeneous elements of degree 0 in the localization $S^{-1}M$.
The function field of $X$ is
$k(X)=\mathcal O_{X,\eta} =R_{(0)}$ and $(\phi_*\mathcal O_Y)_{\eta}=M_{(0)}$ is a $k(X)$-vector space of rank $r=\deg(Y/X)$.
Let $f_1,\ldots,f_r\in M_{(0)}$ be a $k(X)$-basis. Write $f_i=\frac{z_i}{s_i}$ where $z_i \in M$ is homogeneous of some degree $d_i$ and $s_i\in R$ is homogeneous of degree $d_i$. Multiplication by $z_i$ induces a degree 0 graded $R$-module homomorphism $R(-d_i)\rightarrow M$ giving us a degree 0 graded $R$-module homomorphism
$\oplus_{i=1}^rR(-d_i)\rightarrow M$. Let $K$ be the kernel of this homomorphism and $F$ be the cokernel. Let $\tilde K$ be the sheafification of $K$ and $\tilde F$ be the sheafification of $F$. We have a short exact sequence of coherent $\mathcal O_X$-modules
$0\rightarrow \tilde K\rightarrow \oplus_{i=1}^r\mathcal O_X(-d_i)\rightarrow \phi_*\mathcal O_Y\rightarrow \tilde F\rightarrow 0$. Localizing at the generic point, we see that $\tilde K_{\eta}=0$ and $\tilde F_{\eta}=0$ so that the supports of $\tilde K$ and $\tilde F$ have dimension less than $\dim X$, and thus $K=0$ since it is a submodule of a torsion free $R$-module. Tensoring the short exact sequence
$0\rightarrow \oplus_{i=1}^r\mathcal O_X(-d_i)\rightarrow \phi_*\mathcal O_Y\rightarrow \tilde F\rightarrow 0$ with $L^n$, we see that
$$
{\rm vol}(\phi^*L)=\lim_{n\rightarrow \infty}\frac{\dim_k\Gamma(Y,\phi^*L^n)}{n^d/d!}=\lim_{n\rightarrow\infty}\frac{
\dim_k(\oplus_{i=1}^r\Gamma(X,\mathcal O_X(-d_i)\otimes L^n))}{n^d/d!}=\deg(Y/X)\,{\rm vol}(L).
$$
Since volume is homogeneous, (\ref{eq43}) is valid for ${\NZQ Q}$-Cartier divisors, and since volume is continuous on $N^1(X)$ and $N^1(Y)$, (\ref{eq43}) is valid for ${\NZQ R}$-Cartier divisors.
\end{proof}
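As a simple check of (\ref{eq43}), take $X=Y={\NZQ P}^1_k$, let $\phi$ be a finite morphism of degree two and let $L$ be the divisor of a $k$-rational point. Then $\phi^*L$ is a divisor of degree two, and
$$
{\rm vol}(\phi^*L)=\lim_{n\rightarrow\infty}\frac{\dim_k\Gamma({\NZQ P}^1_k,\mathcal O_{{\NZQ P}^1_k}(2n))}{n}=\lim_{n\rightarrow\infty}\frac{2n+1}{n}=2=\deg(Y/X)\,{\rm vol}(L).
$$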
\section{A theorem on volumes}
In this section we generalize \cite[Theorem 4.2]{C2}. The proof given here is a variation of the one given in \cite{C2}, using the theory of divisorial Zariski decomposition of ${\NZQ R}$-Weil divisors on normal varieties of \cite{FKL}.
Let $X$ be a $d$-dimensional normal projective variety over a field $k$. Suppose that $D$ is a big ${\NZQ R}$-Weil divisor on $X$. Let $E$ be a prime divisor on $X$. In \cite[Lemma 4.1]{FKL}, the function $\sigma_E$ of Subsection \ref{Subsecsigma} is generalized by the following definition:
$$
\sigma_E(D)=\lim_{m\rightarrow\infty}\frac{1}{m}\min\{\mbox{mult}_ED'\mid D'\sim_{{\NZQ Z}} mD,\ D'\ge 0\}.
$$
Suppose that $D$ is a big ${\NZQ R}$-Weil divisor and $E_1,\ldots,E_r$ are distinct prime divisors on $X$. Then by \cite[Lemma 4.1]{FKL}, for all $m\in {\NZQ N}$,
\begin{equation}\label{eq70}
\Gamma(X,\mathcal O_X(mD))=\Gamma(X,\mathcal O_X(mD-\sum_{i=1}^rm\sigma_{E_i}(D)E_i)).
\end{equation}
We now recall the method of \cite{LM} to compute volumes of graded linear series on $X$, as extended in \cite{C2} to arbitrary fields. We restrict to the situation of our immediate interest; that is, $D$ is a big ${\NZQ R}$-Weil divisor and $H$ is an ample Cartier divisor on $X$ such that $D\le H$.
Suppose that $p\in X$ is a nonsingular closed point and
\begin{equation}\label{eqGR2}
X=Y_0\supset Y_1\supset \cdots \supset Y_d=\{p\}
\end{equation}
is a flag; that is, the $Y_i$ are subvarieties of $X$ of dimension $d-i$ such that there is a regular system of parameters $b_1,\ldots,b_d$ in $\mathcal O_{X,p}$ for which $b_1=\cdots=b_i=0$ are local equations of $Y_i$ in $X$ for $1\le i\le d$.
The flag determines a valuation $\nu$ on the function field $k(X)$ of $X$ as follows. We have a sequence of natural surjections of regular local rings
\begin{equation}\label{eqGR3} \mathcal O_{X,p}=
\mathcal O_{Y_0,p}\overset{\sigma_1}{\rightarrow}
\mathcal O_{Y_1,p}=\mathcal O_{Y_0,p}/(b_1)\overset{\sigma_2}{\rightarrow}
\cdots \overset{\sigma_{d-1}}{\rightarrow} \mathcal O_{Y_{d-1},p}=\mathcal O_{Y_{d-2},p}/(b_{d-1}).
\end{equation}
Define a rank $d$ discrete valuation $\nu$ on $k(X)$ by prescribing for $s\in \mathcal O_{X,p}$,
$$
\nu(s)=({\rm ord}_{Y_1}(s),{\rm ord}_{Y_2}(s_1),\cdots,{\rm ord}_{Y_d}(s_{d-1}))\in ({\NZQ Z}^d)_{\rm lex}
$$
where
$$
s_1=\sigma_1\left(\frac{s}{b_1^{{\rm ord}_{Y_1}(s)}}\right),
s_2=\sigma_2\left(\frac{s_1}{b_2^{{\rm ord}_{Y_2}(s_1)}}\right),\ldots,
s_{d-1}=\sigma_{d-1}\left(\frac{s_{d-2}}{b_{d-1}^{{\rm ord}_{Y_{d-1}}(s_{d-2})}}\right).
$$
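As a simple illustration with $d=2$: suppose that $p$ is a $k$-rational point and $s=b_1^2(b_2+b_1)\in \mathcal O_{X,p}$. Then ${\rm ord}_{Y_1}(s)=2$, and $s_1=\sigma_1(s/b_1^2)$ is the image of $b_2+b_1$ in $\mathcal O_{Y_1,p}$, which is the image of $b_2$, so that ${\rm ord}_{Y_2}(s_1)=1$ and
$$
\nu(s)=(2,1)\in ({\NZQ Z}^2)_{\rm lex}.
$$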
Let $g=0$ be a local equation of $H$ at $p$. For $m\in {\NZQ N}$, define
$$
\Phi_{mD}:\Gamma(X,\mathcal O_X(mD))\rightarrow {\NZQ N}^d
$$
by $\Phi_{mD}(f)=\nu(fg^m)$. The Okounkov body $\Delta(D)$ of $D$ is the closure of the set
$$
\cup_{m\in {\NZQ N}}\frac{1}{m}\Phi_{mD}(\Gamma(X,\mathcal O_X(mD)))
$$
in ${\NZQ R}^d$.
$\Delta(D)$ is a compact and convex set by \cite[Lemma 1.10]{LM} or the proof of \cite[Theorem 8.1]{C1}.
By the proof of \cite[Theorem 8.1]{C1} and of \cite[Lemma 5.4]{C3} we see that
\begin{equation}\label{GR4}
{\rm Vol}(D)=\lim_{m\rightarrow \infty}\frac{\dim_k\Gamma(X,\mathcal O_X(mD))}{m^d/d!}
=d![\mathcal O_{X,p}/m_p:k]{\rm Vol}(\Delta(D)).
\end{equation}
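For instance (a standard example): if $X={\NZQ P}^2_k$ with $k$ algebraically closed, $D=H$ is a line, and the flag consists of a line $Y_1$ and a point $Y_2=\{p\}$ with $p\in Y_1$, then $\Delta(H)$ is the standard simplex $\{(x_1,x_2)\in {\NZQ R}^2_{\ge 0}\mid x_1+x_2\le 1\}$, and (\ref{GR4}) reads
$$
{\rm Vol}(H)=2!\cdot 1\cdot {\rm Vol}(\Delta(H))=2\cdot\frac{1}{2}=1,
$$
in agreement with $\dim_k\Gamma({\NZQ P}^2_k,\mathcal O_{{\NZQ P}^2_k}(m))=\binom{m+2}{2}$.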
The following proposition is proven with the assumption that the ground field $k$ is perfect in i) implies ii) of Theorem B in \cite{FKL}. The assumption that $k$ is perfect is required in their proof as they use
\cite{T}, which proves that a Fujita approximation exists
to compute the volume of a Cartier divisor when the ground field is perfect. The theorem of \cite{dJ} is used in \cite{FKL} to conclude that a separable alteration exists if the ground field $k$ is perfect.
\begin{Proposition}\label{Prop1} Suppose that $X$ is a normal projective variety over a field $k$
and $D_1,D_2$ are big ${\NZQ R}$-Weil divisors on $X$ such that $D_1\le D_2$ and ${\rm Vol}(D_1)={\rm Vol}(D_2)$.
Then
$$
\Gamma(X,\mathcal O_X(nD_1))=\Gamma(X,\mathcal O_X(nD_2))
$$
for all $n\in {\NZQ N}$.
\end{Proposition}
\begin{proof} Write $D_2=D_1+\sum_{i=1}^r a_iE_i$ where the $E_i$ are prime divisors on $X$ and
$a_i\in {\NZQ R}_{>0}$ for all $i$. Let $H$ be an ample Cartier divisor on $X$ such that $D_2\le H$.
For each $i$ with $1\le i\le r$ choose a flag (\ref{eqGR2})
with $Y_1=E_i$ and with $p$ a closed point which is a nonsingular point of both $X$ and $E_i$, and such that $p\not\in E_j$ for $j\ne i$. Let
$\pi_1:{\NZQ R}^d\rightarrow {\NZQ R}$ be the projection onto the first factor. For $f\in \Gamma(X,\mathcal O_X(mD_j))$,
$$
\frac{1}{m}\mbox{ord}_{E_i}(fg^m)=\frac{1}{m}\mbox{ord}_{E_i}((f)+mD_j)+\mbox{ord}_{E_i}(H-D_j).
$$
Thus
$$
\pi_1^{-1}(\sigma_{E_i}(D_j)+\mbox{ord}_{E_i}(H-D_j))\cap \Delta(D_j)\ne \emptyset
$$
and
$$
\pi_1^{-1}(a)\cap \Delta(D_j)=\emptyset\mbox{ if }a<\sigma_{E_i}(D_j)+\mbox{ord}_{E_i}(H-D_j).
$$
Further, $\Delta(D_1)\subset \Delta(D_2)$ and ${\rm Vol}(D_1)={\rm Vol}(D_2)$, so $\Delta(D_1)=\Delta(D_2)$ by \cite[Lemma 3.2]{C2}.
Thus
$$
\sigma_{E_i}(D_1)+\mbox{ord}_{E_i}(H-D_1)=\sigma_{E_i}(D_2)+\mbox{ord}_{E_i}(H-D_2)
$$
for $1\le i\le r$. We obtain that
$$
D_2-\sum_{i=1}^r\sigma_{E_i}(D_2)E_i=D_1-\sum_{i=1}^r\sigma_{E_i}(D_1)E_i.
$$
By (\ref{eq70}), for all $m\ge 0$,
$$
\Gamma(X,\mathcal O_X(mD_1))=\Gamma(X,\mathcal O_X(mD_2)).
$$
\end{proof}
\begin{Lemma}\label{Lemma2}
Suppose that $X$ is a nonsingular projective variety and $D_1\le D_2$ are big ${\NZQ R}$-divisors on $X$. Then the following are equivalent:
\begin{enumerate}
\item[1)] ${\rm vol}(D_1)={\rm vol}(D_2)$
\item[2)] $\Gamma(X,\mathcal O_X(nD_1))=\Gamma(X,\mathcal O_X(nD_2))$ for all $n\in {\NZQ N}$
\item[3)] $P_{\sigma}(D_1)=P_{\sigma}(D_2)$.
\end{enumerate}
\end{Lemma}
\begin{proof} 1) implies 2) is Proposition \ref{Prop1}. We now assume 2) holds and prove 3). Then
$|nD_2|=|nD_1|+n(D_2-D_1)$ for all $n\ge 0$. Thus
$$
\sigma_{\Gamma}(D_2)=\sigma_{\Gamma}(D_1)+\mbox{ord}_{\Gamma}(D_2-D_1),
$$
and so
$$
\begin{array}{lll}
P_{\sigma}(D_2)&=&D_2-N_{\sigma}(D_2)=D_1+(D_2-D_1)-(N_{\sigma}(D_1)+D_2-D_1)\\
&=& D_1-N_{\sigma}(D_1)=P_{\sigma}(D_1).
\end{array}
$$
Finally, we prove 3) implies 1). Suppose that $P_{\sigma}(D_1)=P_{\sigma}(D_2)$. Then
$$
{\rm vol}(D_1)={\rm vol}(P_{\sigma}(D_1))={\rm vol}(P_{\sigma}(D_2))={\rm vol}(D_2)
$$
by (\ref{eq70}).
\end{proof}
\section{The Augmented Base Locus}
Let $X$ be a normal variety over a field. Let $D$ be a big ${\NZQ R}$-Cartier divisor on $X$. The augmented base locus $B_+(D)$ is defined in \cite[Definition 1.2]{ELM} and extended to ${\NZQ R}$-Weil divisors in \cite[Definition 5.1]{FKL}. $B_+^{\rm div}(D)$ is defined to be the divisorial part of $B_+(D)$. It is shown in \cite[Proposition 1.4]{ELM} that if $D_1$ and $D_2$ are big ${\NZQ R}$-Cartier divisors and $D_1\equiv D_2$ then $B_+(D_1)=B_+(D_2)$. In \cite[Lemma 5.3]{FKL}, it is shown that if $A$ is an ample ${\NZQ R}$-Cartier divisor on $X$, then
\begin{equation}\label{eq61}
B_+^{\rm div}(D)=\mbox{Supp}(N_{\sigma}(D-\epsilon A))
\end{equation}
for all sufficiently small positive $\epsilon$.
The following lemma is the equivalence of $i)$ and $ii)$ of \cite[Theorem B]{FKL}, in the case that $X$ is nonsingular, over an arbitrary field. We use Lemma \ref{Lemma2} to remove the assumption in \cite[Theorem B]{FKL} that the ground field is perfect.
\begin{Lemma}\label{Lemma60} Let $X$ be a nonsingular projective variety over a field. Let $D$ be a big ${\NZQ R}$-divisor on $X$ and $E$ be an effective ${\NZQ R}$-divisor. Then ${\rm vol}(D+E)={\rm vol}(D)$ if and only if $\mbox{Supp}(E)\subset B_+^{\rm div}(D)$.
\end{Lemma}
\begin{proof} Suppose that ${\rm vol}(D+E)={\rm vol}(D)$.
Let $D'$ be an ${\NZQ R}$-divisor such that $D'\equiv D$. Then ${\rm vol}(D'+E)={\rm vol}(D')$.
Lemma \ref{Lemma2} implies
$\Gamma(X,\mathcal O_X(nD'))=\Gamma(X,\mathcal O_X(nD'+sE))$ for all $n>0$ and $0\le s\le n$. Thus $\Gamma(X,\mathcal O_X(nD'))=\Gamma(X,\mathcal O_X(nD'+rE))$ for all $n>0$ and $r\ge 0$ by \cite[Lemma III.1.8, Corollary III.1.9]{N} or \cite[Lemma 4.1]{FKL}. Let $A$ be an ample ${\NZQ R}$-divisor on $X$ and suppose that $F$ is an irreducible component of $E$ and $F\not\subset \mbox{Supp}(N_{\sigma}(D-\epsilon A))$ for $\epsilon$ sufficiently small. By \cite[Lemma 4.9]{FKL}, there exists $m>0$ such that
$$
mD+F=(\frac{1}{2}m\epsilon A+F)+(\frac{1}{2}m\epsilon A+mP_{\sigma}(D-\epsilon A))+mN_{\sigma}(D-\epsilon A)
$$
is numerically equivalent to an effective divisor $G$ that does not contain $F$ in its support. Let $D'=\frac{1}{m}(G-F)\equiv D$. Then for $r$ sufficiently large,
$$
\dim_k\Gamma(X,\mathcal O_X(mD'+rE))\ge \dim_k\Gamma(X,\mathcal O_X(mD'+F))>\dim_k\Gamma(X,\mathcal O_X(mD')),
$$
giving a contradiction, and so by (\ref{eq61}), $\mbox{Supp}(E)\subset B_+^{\rm div}(D)$.
Now suppose that $\mbox{Supp}(E)\subset B_{+}^{\rm div}(D)$. Let $A$ be an ample ${\NZQ R}$-divisor on $X$. By (\ref{eq61}), we have that $\mbox{Supp}(E)\subset \mbox{Supp}(N_{\sigma}(D-\epsilon A))$ for all sufficiently small positive $\epsilon$. By \cite[Lemma 4.13]{FKL}, we have that ${\rm vol}(D+E-\epsilon A)={\rm vol}(D-\epsilon A)$ for all sufficiently small $\epsilon>0$.
Thus ${\rm vol}(D+E)={\rm vol}(D)$ by continuity of volume of ${\NZQ R}$-divisors.
\end{proof}
\section{The Minkowski equality}\label{SecMink}
In this section, we modify the proof of \cite[Proposition 3.7]{LX2} sketched in \cite{LX2} to be valid over an arbitrary field. Characteristic zero is required in the proof in \cite{LX2}: the existence of resolution of singularities is assumed, and an argument using the theory of multiplier ideals is used, which relies on both resolution of singularities and Kodaira vanishing.
\begin{Proposition}\label{Prop3} Let $X$ be a nonsingular projective $d$-dimensional variety over a field $k$. Suppose that $L$ is a big ${\NZQ R}$-divisor on $X$, and $P$ and $N$ are ${\NZQ R}$-divisors on $X$ such that $L\equiv P+N$ where ${\rm vol}(L)={\rm vol}(P)$ and $N$ is pseudo effective. Then $P_{\sigma}(P)\equiv P_{\sigma}(L)$.
\end{Proposition}
\begin{proof} Write $N=P_{\sigma}(N)+N_{\sigma}(N)$.
Since $L$ and $P$ are big ${\NZQ R}$-Cartier divisors, by superadditivity and positivity of intersection products,
$$
\begin{array}{lll}
{\rm vol}(L)&=&\langle L^d\rangle
\ge\langle L^{d-1}\cdot P\rangle+\langle L^{d-1}\cdot N\rangle\\
&=& \langle(P+N)^{d-1}\cdot P\rangle + \langle L^{d-1}\cdot N\rangle\\
& \ge& \langle P^d\rangle +\langle L^{d-1}\cdot N\rangle
= {\rm vol}(P)+\langle L^{d-1}\cdot N\rangle.
\end{array}
$$
Thus $\langle L^{d-1}\cdot N\rangle =0$. Let $A$ be an ample Cartier divisor on $X$. There exists a small real multiple $\overline A$ of $A$ such that $B:=L-\overline A$ is a big ${\NZQ R}$-Cartier divisor.
$$
0=\langle (\overline A+B)^{d-1}\cdot N\rangle \ge \langle \overline A^{d-1}\cdot N\rangle=\langle \overline A^{d-1}\cdot (P_{\sigma}(N)+N_{\sigma}(N))\rangle \ge
\langle \overline A^{d-1}\cdot P_{\sigma}(N)\rangle =\overline A^{d-1}\cdot \langle P_{\sigma}(N)\rangle
$$
by superadditivity and Lemma \ref{Lemma34}.
By Lemma \ref{Lemma31}, $P_{\sigma}(N)+\epsilon \overline A$ is big and movable, so by (\ref{eq90}),
$$
\overline A^{d-1}\cdot \langle P_{\sigma}(N)+\epsilon \overline A\rangle=\overline A^{d-1}\cdot(P_{\sigma}(N)+\epsilon \overline A),
$$
so
$$
\overline A^{d-1}\cdot \langle P_{\sigma}(N)\rangle =\lim_{\epsilon \rightarrow 0} \overline A^{d-1}\cdot \langle P_{\sigma}(N)+\epsilon \overline A\rangle=\overline A^{d-1}\cdot P_{\sigma}(N).
$$
Thus
\begin{equation}\label{eq6}
(A^{d-1}\cdot P_{\sigma}(N))_X=0
\end{equation}
and so $P_{\sigma}(N)\equiv 0$ by Lemma \ref{Lemma7}.
Thus $N\equiv N_{\sigma}(N)$. Hence, replacing $P$ with the numerically equivalent divisor $P+P_{\sigma}(N)$,
we may assume that $N$ is effective. By Lemma \ref{Lemma2}, we have that
$$
P_{\sigma}(P)=P_{\sigma}(P+N)\equiv P_{\sigma}(L).
$$
\end{proof}
\begin{Lemma}\label{Lemma10} Let $X$ be a nonsingular $d$-dimensional projective variety over a field $k$. Suppose that $L_1$ and $L_2$ are big ${\NZQ R}$-divisors on $X$. Let $s$ be the largest real number such that $L_1-sL_2$ is pseudo effective. Then
\begin{equation}\label{eq11}
s^d\le \frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}
\end{equation}
and if equality holds in (\ref{eq11}), then $P_{\sigma}(L_1)\equiv sP_{\sigma}(L_2)$.
\end{Lemma}
\begin{proof} The pseudo effective cone is closed, so $s$ is well defined. We have $L_1\equiv sL_2+\gamma$ where $\gamma$ is pseudo effective. Thus ${\rm vol}(L_1)\ge {\rm vol}(sL_2)=s^d{\rm vol}(L_2)$. If this is an equality, then $sP_{\sigma}(L_2)\equiv P_{\sigma}(L_1)$ by Proposition \ref{Prop3}.
\end{proof}
Let $X$ be a projective variety over a field $k$. An alteration $\phi:Y\rightarrow X$ is a proper and dominant morphism such that $Y$ is a nonsingular projective variety and $[k(Y):k(X)]<\infty$. If $X$ is normal and $D$ is a pseudo effective ${\NZQ R}$-Cartier divisor on $X$, then by \cite[Lemma 4.12]{FKL},
\begin{equation}\label{eqNew20}
\phi_*N_{\sigma}(\phi^*D)=\deg(Y/X)N_{\sigma}(D).
\end{equation}
It is proven in \cite{dJ} that for such $X$, an alteration always exists (although it may be that $k(Y)$ is not separable over $k(X)$ if $k$ is not perfect).
\begin{Lemma}\label{Lemma21} Suppose that $X$ is a projective variety over a field $k$, $\phi:Y\rightarrow X$ is an alteration and $L_1, L_2$ are pseudo effective ${\NZQ R}$-Cartier divisors on $X$.
Suppose that $s\in {\NZQ R}_{>0}$. Then $\phi^*(L_1)-sP_{\sigma}(\phi^*(L_2))$ is pseudo effective if and only if $P_{\sigma}(\phi^*(L_1))-sP_{\sigma}(\phi^*(L_2))$ is pseudo effective.
\end{Lemma}
\begin{proof} Certainly if $P_{\sigma}(\phi^*L_1)-sP_{\sigma}(\phi^*L_2)$ is pseudo effective then $\phi^*(L_1)-sP_{\sigma}(\phi^*L_2)$ is pseudo effective. Suppose $\phi^*(L_1)-sP_{\sigma}(\phi^*(L_2))$ is pseudo effective. Then there exists a pseudo effective ${\NZQ R}$-divisor $\gamma$ on $Y$ such that
$$
P_{\sigma}(\phi^*L_1)+N_{\sigma}(\phi^*L_1)=\phi^*L_1\equiv sP_{\sigma}(\phi^* L_2)+\gamma
=(sP_{\sigma}(\phi^*L_2)+P_{\sigma}(\gamma))+N_{\sigma}(\gamma).
$$
The effective ${\NZQ R}$-divisor $N_{\sigma}(\gamma)$ has the property that $\phi^*(L_1)-N_{\sigma}(\gamma)$ is movable by Lemma \ref{Lemma31}, so
$N_{\sigma}(\gamma)\ge N_{\sigma}(\phi^*L_1)$ by \cite[Proposition III.1.14]{N}. Thus $P_{\sigma}(\phi^*L_1)-sP_{\sigma}(\phi^*L_2)$ is pseudo effective.
\end{proof}
\begin{Lemma}\label{Lemma22}
Let $X$ be a $d$-dimensional projective variety over a field $k$. Suppose that $L_1$ and $L_2$ are big and movable ${\NZQ R}$-Cartier divisors on $X$. Let $s$ be the largest real number such that $L_1-sL_2$ is pseudo effective. Then
\begin{equation}\label{eq23}
s^d\le \frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}
\end{equation}
and if equality holds in (\ref{eq23}), then $L_1$ and $L_2$ are proportional in $N^1(X)$.
\end{Lemma}
\begin{proof} Let $\phi:Y\rightarrow X$ be an alteration.
Let $L$ be a big and movable ${\NZQ R}$-Cartier divisor on $X$. Let $\Gamma\subset Y$ be a prime divisor which is not exceptional for $\phi$. Let $\tilde \Gamma$ be the codimension one subvariety of $X$ which is the support of $\phi_*\Gamma$.
Since $L$ is movable, there exist effective ${\NZQ R}$-Cartier divisors $D_i$ on $X$ such that $\lim_{i\rightarrow \infty}D_i=L$ in $N^1(X)$ and $\tilde\Gamma\not\subset\mbox{Supp}(D_i)$ for all $i$. We thus have that $\phi^*(L)=\lim_{i\rightarrow\infty}\phi^*(D_i)$ in $N^1(Y)$ and $\Gamma\not\subset\mbox{Supp}(\phi^*(D_i))$ for all $i$, so that $\sigma_{\Gamma}(\phi^*(D_i))=0$ for all $i$. Thus $\sigma_{\Gamma}(\phi^*(L))=0$ since $\sigma_{\Gamma}$ is continuous on the big cone of $Y$.
Thus $N_{\sigma}(\phi^*L)$ has exceptional support for $\phi$ and thus $\phi_*(P_{\sigma}(\phi^*L))=\phi_*(\phi^*L)=\deg(Y/X)L$ by (\ref{eq41}).
Let $s_Y$ be the largest real number such that $P_{\sigma}(\phi^*L_1)-s_YP_{\sigma}(\phi^*L_2)$ is pseudo effective. Then $s_Y\ge s$, since $\phi^*L_1-s\phi^*L_2$ is pseudo effective, by Lemma \ref{Lemma21}, and so
$$
s^d\le s_Y^d\le \frac{{\rm vol}(\phi^*L_1)}{{\rm vol}(\phi^*L_2)}=\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}
$$
by Lemma \ref{Lemma10} and (\ref{eq43}).
If $s^d=\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}$, then $P_{\sigma}(\phi^*(L_1))=sP_{\sigma}(\phi^*(L_2))$ in $N^1(Y)$ by Lemma \ref{Lemma10}, and so
$$
\deg(Y/X)(L_1-sL_2)=\phi_*(\phi^*(L_1)-s\phi^*(L_2))=\phi_*(P_{\sigma}(\phi^*(L_1)))-s\phi_*(P_{\sigma}(\phi^*(L_2)))=0
$$
in $N_{d-1}(X)$,
so that $0=L_1-sL_2$ in $N^1(X)$ by Lemma \ref{Lemma55}.
\end{proof}
The following proposition is proven over an algebraically closed field of characteristic zero in \cite[Proposition 3.3]{LX2}.
\begin{Proposition}\label{Prop13}
Suppose that $X$ is a projective $d$-dimensional variety over a field $k$ and $L_1,L_2$
are big and movable ${\NZQ R}$-Cartier divisors on $X$. Then
$$
\langle L_1^{d-1} \rangle\cdot L_2 \ge{\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}
$$
with equality if and only if $L_1$ and $L_2$ are proportional in $N^1(X)$.
\end{Proposition}
\begin{proof}
Let $f:\overline X\rightarrow X$ be the normalization of $X$. Since $\overline X$ has no exceptional divisors for $f$, $f^*L_1$ and $f^*L_2$ are movable. We have that
$\langle f^*L_1^{d-1}\rangle \cdot f^*L_2 =\langle L_1^{d-1}\rangle \cdot L_2$ and ${\rm vol}(f^*L_i)={\rm vol}(L_i)$ for $i=1,2$. Further,
$f^*:N^1(X)\rightarrow N^1(\overline X)$ is an injection, so $L_1$ and $L_2$ are proportional in $N^1(X)$ if and only if $f^*L_1$ and $f^*L_2$ are proportional in $N^1(\overline X)$. We may thus replace $X$ with its normalization $\overline X$, and so we can assume for the remainder of the proof that $X$ is normal.
We construct birational morphisms $\psi_m:Y_m\rightarrow X$ with
numerically effective ${\NZQ R}$-Cartier divisors $A_{i,m}$ and effective ${\NZQ R}$-Cartier divisors $E_{i,m}$ on $Y_m$ such that $A_{i,m}=\psi_m^*(L_i)-E_{i,m}$ and $\langle L_i\rangle =\lim_{m\rightarrow \infty}A_{i,m}$ in $L^{d-1}(\mathcal X)$ for $i=1,2$. We have that $\pi_X(A_{i,m})=\psi_{m,*}(A_{i,m})$ comes arbitrarily close to $\pi_X(\langle L_i\rangle)=P_{\sigma}(L_i)=L_i$ in $L^{d-1}(X)$ by Lemma \ref{Lemma200}.
Let $s_L$ be the largest number such that $L_1-s_LL_2$ is pseudo effective and
let $s_m$ be the largest number such that $A_{1,m}-s_mA_{2,m}$ is pseudo effective.
We will now show that
given $\epsilon>0$, there exists a positive integer $m_0$ such that $m>m_0$ implies $s_m<s_L+\epsilon$. Since ${\rm Psef}(X)$ is closed, there exists $\delta>0$ such that the open ball $B_{\delta}(L_1-(s_L+\epsilon)L_2)$ in $N^1(X)$ of radius $\delta$ centered at $L_1-(s_L+\epsilon)L_2$ is disjoint from ${\rm Psef}(X)$. There exists $m_0$ such that $m\ge m_0$ implies
$\psi_{m*}(A_{1,m})\in B_{\frac{\delta}{2}}(L_1)$ and $\psi_{m*}(A_{2,m})\in B_{\frac{\delta}{(s_L+\epsilon)2}}(L_2)$. Thus
$\psi_{m*}(A_{1,m}-(s_L+\epsilon)A_{2,m})\not\in {\rm Psef}(X)$ for $m\ge m_0$ so that $s_m<s_L+\epsilon$.
By the Khovanskii--Teissier inequalities for nef and big divisors (\cite[Theorem 2.15]{BFJ} in characteristic zero, \cite[Corollary 6.3]{C}),
\begin{equation}\label{eq14}
(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}\ge {\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}
\end{equation}
for all $m$. By Proposition \ref{Prop35}, taking limits as $m\rightarrow \infty$, we have
$$
\langle L_1^{d-1}\cdot L_2\rangle \ge {\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}.
$$
Now for each $m$, we have
$$
A_{1,m}^{d-1}\cdot \psi_m^*(L_2)=A_{1,m}^{d-1}\cdot (A_{2,m}+E_{2,m})\ge A_{1,m}^{d-1}\cdot A_{2,m}
$$
since $E_{2,m}$ is effective and $A_{1,m}$ is nef. Taking limits as $m\rightarrow \infty$, we have
$\langle L_1^{d-1}\rangle \cdot L_2\ge \langle L_1^{d-1}\cdot L_2\rangle$. Thus
\begin{equation}\label{eq15}
\langle L_1^{d-1}\rangle\cdot L_2\ge \langle L_1^{d-1}\cdot L_2\rangle \ge {\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}.
\end{equation}
The Diskant inequality for big and nef divisors, \cite[Theorem 6.9]{C}, \cite[Theorem F]{BFJ} implies
$$
(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}-{\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}
\ge ((A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}})^d.
$$
We have that
$(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}}\ge 0$ since
$s_m^d\le\frac{{\rm vol}(A_{1,m})}{{\rm vol}(A_{2,m})}$ by Lemma \ref{Lemma22} and by (\ref{eq14}).
We have that
$$
\begin{array}{lll}
\left[(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}-{\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}\right]^{\frac{1}{d}}
&\ge& (A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}}\\
&\ge & (A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-(s_L+\epsilon){\rm vol}(A_{2,m})^{\frac{1}{d-1}}\end{array}
$$
for $m\ge m_0$.
Taking the limit as $m\rightarrow\infty$, we have
\begin{equation}\label{eq16}
\langle L_1^{d-1} \cdot L_2\rangle ^{\frac{d}{d-1}}-{\rm vol}(L_1){\rm vol}(L_2)^{\frac{1}{d-1}}
\ge [\langle L_1^{d-1}\cdot L_2\rangle^{\frac{1}{d-1}}-s_L{\rm vol}(L_2)^{\frac{1}{d-1}}]^d.
\end{equation}
If $(\langle L_1^{d-1}\rangle \cdot L_2)^{\frac{d}{d-1}}={\rm vol}(L_1){\rm vol}(L_2)^{\frac{1}{d-1}}$ then
$\langle L_1^{d-1}\rangle \cdot L_2=\langle L_1^{d-1}\cdot L_2\rangle$ by (\ref{eq15}) and
$(\langle L_1^{d-1}\rangle\cdot L_2)^{\frac{1}{d-1}}=s_L{\rm vol}(L_2)^{\frac{1}{d-1}}$, so that $s_L^d=\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}$ and thus $L_1$ and $L_2$ are proportional in $N^1(X)$ by Lemma \ref{Lemma22}.
Suppose $L_1$ and $L_2$ are proportional in $N^1(X)$, so that $L_1\equiv s_LL_2$ and $s_L^d=\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}$. Then
$$
\langle L_1^{d-1}\rangle \cdot L_2=s_L^{d-1}\langle L_2^{d-1}\rangle \cdot L_2=s_L^{d-1}\langle L_2^d\rangle
=\frac{{\rm vol}(L_1)^{\frac{d-1}{d}}}{{\rm vol}(L_2)^{\frac{d-1}{d}}}{\rm vol}(L_2)={\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}
$$
where the second equality is by (\ref{eq40}).
\end{proof}
The following theorem is deduced from Proposition \ref{Prop13} by extracting an argument from the proof of \cite[Theorem 4.11]{LX1}. Over algebraically closed fields of characteristic zero, it is \cite[Proposition 3.7]{LX2}.
\begin{Theorem}\label{Theorem18}
Let $L_1$ and $L_2$ be big and movable ${\NZQ R}$-Cartier divisors on a $d$-dimensional projective variety $X$ over a field $k$. Then
\begin{equation}\label{eq97}
{\rm vol}(L_1+L_2)^{\frac{1}{d}}\ge {\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac {1}{d}}
\end{equation}
with equality if and only if $L_1$ and $L_2$ are proportional in $N^1(X)$.
\end{Theorem}
\begin{proof} By Theorem \ref{Theorem17}, we have that
$$
\frac{d}{dt}{\rm vol}(L_1+tL_2)=d\langle (L_1+tL_2)^{d-1}\rangle\cdot L_2
$$
for $t$ in a neighborhood of $[0,1]$. By Proposition \ref{Prop13},
$$
\langle (L_1+tL_2)^{d-1}\rangle\cdot L_2\ge {\rm vol}(L_1+tL_2)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac {1}{d}}.
$$
Thus, by the fundamental theorem of calculus and the chain rule,
\begin{equation}\label{eq19}
\begin{array}{lll}
{\rm vol}(L_1+L_2)^{\frac{1}{d}}-{\rm vol}(L_1)^{\frac{1}{d}}&=&\int_0^1{\rm vol}(L_1+tL_2)^{\frac{1-d}{d}}\langle(L_1+tL_2)^{d-1}\rangle\cdot L_2\,dt\\
& \ge& \int_0^1{\rm vol}(L_1+tL_2)^{\frac{1-d}{d}}{\rm vol}(L_1+tL_2)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac {1}{d}}\,dt\\
&=& \int_0^1{\rm vol}(L_2)^{\frac{1}{d}}\,dt={\rm vol}(L_2)^{\frac{1}{d}}.
\end{array}
\end{equation}
Since positive intersection products are continuous on big divisors, we have equality in (\ref{eq19}) if and only if
$$
\langle(L_1+tL_2)^{d-1}\rangle\cdot L_2={\rm vol}(L_1+tL_2)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}
$$
for $0\le t\le 1$. Thus if equality holds in (\ref{eq97}), then $L_1$ and $L_2$ are proportional in $N^1(X)$ by Proposition \ref{Prop13}.
Since ${\rm vol}$ is homogeneous, if $L_1$ and $L_2$ are proportional in $N^1(X)$, then equality holds in (\ref{eq97}).
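Indeed, if $L_1\equiv aL_2$ with $a\in {\NZQ R}_{>0}$, then since ${\rm vol}$ depends only on the numerical class,
$$
{\rm vol}(L_1+L_2)^{\frac{1}{d}}={\rm vol}((1+a)L_2)^{\frac{1}{d}}=(1+a){\rm vol}(L_2)^{\frac{1}{d}}={\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac{1}{d}}.
$$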
\end{proof}
The following theorem is proven over algebraically closed fields of characteristic zero in \cite[Theorem 1.6]{LX2}.
\begin{Theorem}\label{Theorem20} Let $X$ be a nonsingular $d$-dimensional projective variety over a field $k$. For any two big ${\NZQ R}$-divisors $L_1$ and $L_2$ on $X$,
$$
{\rm vol}(L_1+L_2)^{\frac{1}{d}}\ge {\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac {1}{d}}
$$
with equality if and only if $P_{\sigma}(L_1)$ and $P_{\sigma}(L_2)$ are proportional in $N^1(X)$.
\end{Theorem}
\begin{proof}
We have ${\rm vol}(P_{\sigma}(L_i))={\rm vol}(L_i)$ for $i=1,2$. Since $L_i=P_{\sigma}(L_i)+N_{\sigma}(L_i)$ for $i=1,2$ where $P_{\sigma}(L_i)$ is pseudo effective and movable and $N_{\sigma}(L_i)$ is effective, we have by superadditivity of positive intersection products of pseudo effective divisors and Theorem \ref{Theorem18} that
$$
{\rm vol}(L_1+L_2)^{\frac{1}{d}}\ge{\rm vol}(P_{\sigma}(L_1)+P_{\sigma}(L_2))^{\frac{1}{d}}
\ge {\rm vol}(P_{\sigma}(L_1))^{\frac{1}{d}}+{\rm vol}(P_{\sigma}(L_2))^{\frac{1}{d}}={\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac{1}{d}}.
$$
Thus if we have the equality ${\rm vol}(L_1+L_2)^{\frac{1}{d}}={\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac{1}{d}}$,
we have
$$
{\rm vol}(P_{\sigma}(L_1)+P_{\sigma}(L_2))^{\frac{1}{d}}={\rm vol}(P_{\sigma}(L_1))^{\frac{1}{d}}+{\rm vol}(P_{\sigma}(L_2))^{\frac{1}{d}}.
$$
Then $P_{\sigma}(L_1)$ and $P_{\sigma}(L_2)$ are proportional in $N^1(X)$ by Theorem \ref{Theorem18}.
Now suppose that $P_{\sigma}(L_1)$ and $P_{\sigma}(L_2)$ are proportional in $N^1(X)$. Then there exists $s\in {\NZQ R}_{>0}$ such that $P_{\sigma}(L_2)\equiv sP_{\sigma}(L_1)$, so that $B_+^{\rm div}(P_{\sigma}(L_1))=B_+^{\rm div}(P_{\sigma}(L_2))$.
Since ${\rm vol}(L_i)={\rm vol}(P_{\sigma}(L_i))$ for $i=1,2$, we have that
$\mbox{Supp}(N_{\sigma}(L_1)), \mbox{Supp}(N_{\sigma}(L_2))\subset B_+^{\rm div}(P_{\sigma}(L_1))$ by Lemma \ref{Lemma60}.
Thus $\mbox{Supp}(N_{\sigma}(L_1)+N_{\sigma}(L_2))\subset B_+^{\rm div}(P_{\sigma}(L_1))$, so that by Lemma \ref{Lemma60},
$$
{\rm vol}(L_1+L_2)={\rm vol}(P_{\sigma}(L_1)+sP_{\sigma}(L_1))=(1+s)^d{\rm vol}(P_{\sigma}(L_1)).
$$
Thus
$$
{\rm vol}(L_1+L_2)^{\frac{1}{d}}=(1+s){\rm vol}(P_{\sigma}(L_1))^{\frac{1}{d}}={\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac{1}{d}}.
$$
\end{proof}
\section{Characterization of equality in the Minkowski inequality}
\begin{Theorem}\label{Theorem21} Let $X$ be a normal $d$-dimensional projective variety over a field $k$. For any two big ${\NZQ R}$-Cartier divisors $L_1$ and $L_2$ on $X$,
$$
{\rm vol}(L_1+L_2)^{\frac{1}{d}}\ge {\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac {1}{d}}.
$$
If equality holds, then $P_{\sigma}(L_1)=sP_{\sigma}(L_2)$ in $N_{d-1}(X)$, where
$s=\left(\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}\right)^{\frac{1}{d}}$.
\end{Theorem}
\begin{proof} Here we use the extension of $\sigma$-decomposition to ${\NZQ R}$-Weil divisors on a normal projective variety of \cite{FKL}.
Let $\phi:Y\rightarrow X$ be an alteration. We have that $\phi^*L_1$ and $\phi^*L_2$ are big ${\NZQ R}$-Cartier divisors. By \cite[Lemma 4.12]{FKL}, for $i=1,2$,
$\phi_*N_{\sigma}(\phi^*L_i)=\deg(Y/X)\,N_{\sigma}(L_i)$. Since $\phi_*\phi^*L_i=\deg(Y/X)\,L_i$ by (\ref{eq41}), we have that
$\phi_*P_{\sigma}(\phi^*L_i)=\deg(Y/X)\,P_{\sigma}(L_i)$. Now ${\rm vol}(\phi^*L_i)=\deg(Y/X)\, {\rm vol}(L_i)$ for $i=1,2$ and
${\rm vol}(\phi^*L_1+\phi^*L_2)=\deg(Y/X)\,{\rm vol}(L_1+L_2)$ by (\ref{eq43}).
Thus the inequality of the statement of the theorem holds for $L_1$ and $L_2$ since it holds for $\phi^*L_1$ and $\phi^*L_2$ by Theorem \ref{Theorem20}.
Suppose that equality holds in the inequality. Then by Theorem \ref{Theorem20}, we have that there exists $s\in {\NZQ R}_{>0}$ such that $P_{\sigma}(\phi^*L_1)=sP_{\sigma}(\phi^*L_2)$ in $N^1(Y)$. Thus $\phi_*P_{\sigma}(\phi^*L_1)=s\phi_*P_{\sigma}(\phi^*L_2)$
in $N_{d-1}(X)$, so that $P_{\sigma}(L_1)=sP_{\sigma}(L_2)$ in $N_{d-1}(X)$. Since volume is homogeneous and $P_{\sigma}(\phi^*L_1)$, $sP_{\sigma}(\phi^*L_2)$ are numerically equivalent ${\NZQ R}$-Cartier divisors,
$$
\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}=\frac{{\rm vol}(\phi^*L_1)}{{\rm vol}(\phi^*L_2)}
=\frac{{\rm vol}(P_{\sigma}(\phi^*L_1))}{{\rm vol}(P_{\sigma}(\phi^*L_2))}=s^d.
$$
\end{proof}
\begin{Theorem}\label{Theorem22} Let $X$ be a $d$-dimensional projective variety over a field $k$. For any two big ${\NZQ R}$-Cartier divisors $L_1$ and $L_2$ on $X$,
\begin{equation}\label{Neweq20}
{\rm vol}(L_1+L_2)^{\frac{1}{d}}\ge {\rm vol}(L_1)^{\frac{1}{d}}+{\rm vol}(L_2)^{\frac {1}{d}}
\end{equation}
with equality if and only if $\langle L_1\rangle $ and $\langle L_2\rangle$ are proportional in $L^{d-1}(\mathcal X)$.
When this occurs, we have that $\langle L_1\rangle =s\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$, where
$s=\left(\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}\right)^{\frac{1}{d}}$.
\end{Theorem}
In the case that $L_1$ and $L_2$ are nef and big, this is proven in \cite[Theorem 2.15]{BFJ} (over an algebraically closed field of characteristic zero) and in \cite[Theorem 6.13]{C} (over an arbitrary field). In this case of nef divisors, the condition that
$\langle L_1\rangle $ and $\langle L_2\rangle$ are proportional in $L^{d-1}(\mathcal X)$ is just that $L_1$ and $L_2$ are proportional in $N^1(X)$.
Theorem \ref{Theorem22} is obtained in the case that $L_1$ and $L_2$ are big and movable and $k$ is an algebraically closed field of characteristic zero in \cite[Proposition 3.7]{LX2}. In this case the condition for equality is that $L_1$ and $L_2$ are proportional in $N^1(X)$. Theorem \ref{Theorem22} is established in the case that $L_1$ and $L_2$ are big ${\NZQ R}$-Cartier divisors and $X$ is nonsingular, over an algebraically closed field $k$ of characteristic zero in \cite[Theorem 1.6]{LX2}. In this case, the condition for equality is that the positive parts of the $\sigma$-decompositions of $L_1$ and $L_2$ are proportional; that is, $P_{\sigma}(L_1)$ and $P_{\sigma}(L_2)$ are proportional in $N^1(X)$.
\begin{proof} Let $f:Y\rightarrow X\in I(X)$ with $Y$ normal. Then ${\rm vol}(f^*(L_1)+f^*(L_2))={\rm vol}(L_1+L_2)$ and ${\rm vol}(f^*L_j)={\rm vol}(L_j)$ for $j=1,2$ so that the inequality (\ref{Neweq20}) holds by Theorem \ref{Theorem21}.
Suppose that equality holds in (\ref{Neweq20}). Let
$s=\left(\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}\right)^{\frac{1}{d}}$.
Then by Theorem \ref{Theorem21}, $P_{\sigma}(f^*L_1)=sP_{\sigma}(f^*L_2)$ in $N_{d-1}(Y)$.
Thus $\pi_Y(\langle L_1\rangle)=s\pi_Y(\langle L_2\rangle)$ by (\ref{eq91}). Since the normal $Y\in I(X)$ are cofinal in $I(X)$, we have that
$\langle L_1\rangle =s\langle L_2\rangle$.
Suppose that $\langle L_1\rangle=s\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$ for some $s\in {\NZQ R}_{>0}$. Then equality holds in (\ref{Neweq20}) by Proposition \ref{Prop35} and the fact that the positive intersection product is homogeneous.
\end{proof}
\begin{Definition} Suppose that $X$ is a projective variety and $\alpha,\beta\in N^1(X)$. The slope of $\beta$ with respect to $\alpha$ is the largest real number $s=s(\alpha,\beta)$ such that $\langle \alpha\rangle \ge s\langle \beta\rangle$.
\end{Definition}
Let $X$ be a projective variety and $f:Z\rightarrow X$ be a resolution of singularities.
Suppose that $L_1$ and $L_2$ are ${\NZQ R}$-Cartier divisors on $X$. Let $\overline L_1=f^*(L_1)$ and $\overline L_2=f^*L_2$.
Suppose that $\phi:Y\rightarrow Z$ is a birational morphism of nonsingular projective varieties and $t\in {\NZQ R}$. We will show that
\begin{equation}\label{eqZ1}
P_{\sigma}(\overline L_1)-tP_{\sigma}(\overline L_2)\mbox{ is pseudo effective if and only if }
P_{\sigma}(\phi^*\overline L_1)-tP_{\sigma}(\phi^*\overline L_2)\mbox{ is pseudo effective.}
\end{equation}
The fact that $P_{\sigma}(\overline L_1)-tP_{\sigma}(\overline L_2)$ pseudo effective implies
$P_{\sigma}(\phi^*\overline L_1)-tP_{\sigma}(\phi^*\overline L_2)$ pseudo effective follows from Lemma \ref{Lemma21}.
If $P_{\sigma}(\phi^*\overline L_1)-tP_{\sigma}(\phi^*\overline L_2)$ is pseudo effective, then
$\phi_*(P_{\sigma}(\phi^*\overline L_1)-tP_{\sigma}(\phi^*\overline L_2))=P_{\sigma}(\overline L_1)-tP_{\sigma}(\overline L_2)$ is pseudo effective.
Let $s=s(L_1,L_2)$. Since the $Y\rightarrow Z$ with $Y$ nonsingular are cofinal in $I(X)$, we have that
\begin{equation}\label{eqZ2}
\begin{array}{l}
\mbox{$s$ is the largest positive number such that
$\pi_Z(\langle L_1\rangle -s\langle L_2\rangle)=P_{\sigma}(\overline L_1)-sP_{\sigma}(\overline L_2)$}\\
\mbox{is pseudo effective.}
\end{array}
\end{equation}
\begin{Proposition}\label{NewProp1} Suppose that $X$ is a $d$-dimensional projective variety over a field of characteristic zero and $L_1$, $L_2$ are big ${\NZQ R}$-Cartier divisors on $X$. Let $s=s(L_1,L_2)$. Then
\begin{equation}\label{Neweq1}
s^d\le\frac{\langle L_1^d\rangle}{\langle L_2^d\rangle}
\end{equation}
and we have equality in this equation if and only if
$\langle L_1\rangle$ is proportional to $\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$. If we have equality, then
$\langle L_1\rangle=s\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$.
\end{Proposition}
\begin{proof} Let $Y\in I(X)$ be nonsingular, with birational morphism $f:Y\rightarrow X$. Then by Lemma \ref{Lemma200},
$$
P_{\sigma}(f^*L_1)-sP_{\sigma}(f^*L_2)=
\pi_Y(\langle L_1\rangle) -s\,\pi_Y(\langle L_2\rangle)\in{\rm Psef}(Y).
$$
Thus by Lemma \ref{Lemma10},
$$
s^d\le \frac{{\rm vol}(P_{\sigma}(f^*L_1))}{{\rm vol}(P_{\sigma}(f^*L_2))}=
\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}=\frac{\langle L_1^d\rangle}{\langle L_2^d\rangle},
$$
and so the inequality (\ref{Neweq1}) holds.
Suppose we have equality in (\ref{Neweq1}).
Let $Y\in I(X)$ be nonsingular with morphism $f:Y\rightarrow X$. We have that
$\pi_Y(\langle L_1\rangle)-s\pi_Y(\langle L_2\rangle)=
P_{\sigma}(f^*L_1)-sP_{\sigma}(f^*L_2)$ is pseudo effective and $s^d=\frac{{\rm vol}(P_{\sigma}(f^*L_1))}{{\rm vol}(P_{\sigma}(f^*L_2))}$, so we have that $P_{\sigma}(f^*L_1)= sP_{\sigma}(f^*L_2)$ in $N^1(Y)$ by (\ref{eqZ2}) and Lemma \ref{Lemma10}. Since the nonsingular $Y$ are cofinal in $I(X)$, we have that $\langle L_1\rangle=s\langle L_2\rangle$ by Lemma \ref{Lemma200} and (\ref{eq300}).
Suppose that $\langle L_1\rangle=t\langle L_2\rangle$ for some $t\in {\NZQ R}_{>0}$. Then $s=t$ and by Proposition \ref{Prop35},
$$
\langle L_1^d\rangle=\langle L_1\rangle\cdot \ldots\cdot \langle L_1\rangle
=\langle sL_2\rangle\cdot \ldots\cdot \langle sL_2\rangle
=s^d \langle L_2\rangle\cdot \ldots\cdot \langle L_2\rangle=s^d\langle L_2^d\rangle .
$$
\end{proof}
\begin{Theorem}\label{PropNew60}(Diskant inequality for big divisors)
Suppose that $X$ is a projective $d$-dimensional variety over a field $k$ of characteristic zero and $L_1,L_2$
are big ${\NZQ R}$-Cartier divisors on $X$. Then
\begin{equation}\label{eq16*}
\langle L_1^{d-1} \cdot L_2\rangle ^{\frac{d}{d-1}}-{\rm vol}(L_1){\rm vol}(L_2)^{\frac{1}{d-1}}
\ge [\langle L_1^{d-1}\cdot L_2\rangle^{\frac{1}{d-1}}-s(L_1,L_2){\rm vol}(L_2)^{\frac{1}{d-1}}]^d.
\end{equation}
\end{Theorem}
The Diskant inequality is proven for nef and big divisors in \cite[Theorem F]{BFJ} in characteristic zero and in \cite[Theorem 6.9]{C} for nef and big divisors over an arbitrary field. In the case that $L_1$ and $L_2$ are nef and big, the condition that $\langle L_1\rangle - s \langle L_2\rangle$ is pseudo effective in $L^{d-1}(\mathcal X)$ is that $L_1-sL_2$ is pseudo effective in $N^1(X)$. The Diskant inequality is proven when $L_1$ and $L_2$ are big and movable divisors and $X$ is a projective variety over an algebraically closed field of characteristic zero in \cite[Proposition 3.3, Remark 3.4]{LX2}. Theorem \ref{PropNew60} is a consequence of \cite[Theorem 3.6]{DF}.
\begin{proof} Let $s=s(L_1,L_2)$. Let $f:Z\rightarrow X$ be a resolution of singularities. After replacing $L_i$ with $f^*L_i$ for $i=1,2$, we may assume that $X$ is nonsingular.
We construct birational morphisms $\psi_m:Y_m\rightarrow X$ with
numerically effective ${\NZQ R}$-Cartier divisors $A_{i,m}$ and effective ${\NZQ R}$-Cartier divisors $E_{i,m}$ on $Y_m$ such that $A_{i,m}=\psi_m^*(L_i)-E_{i,m}$ and $\langle L_i\rangle =\lim_{m\rightarrow \infty}A_{i,m}$ in $L^{d-1}(\mathcal X)$ for $i=1,2$. We have that $\pi_X(A_{i,m})=\psi_{m,*}(A_{i,m})$ comes arbitrarily close to $\pi_X(\langle L_i\rangle)=P_{\sigma}(L_i)$ in $N^1(X)$ by Lemma \ref{Lemma200}.
By (\ref{eqZ2}), $s$ is the largest number such that
$P_{\sigma}(L_1)-sP_{\sigma}(L_2)$ is pseudo effective (in $N^1(X)$).
Let $s_m$ be the largest number such that $A_{1,m}-s_mA_{2,m}$ is pseudo effective (in $N^1(Y_m)$).
We will now show that
given $\epsilon>0$, there exists a positive integer $m_0$ such that $m>m_0$ implies $s_m<s+\epsilon$. Since ${\rm Psef}(X)$ is closed, there exists $\delta>0$ such that the open ball $B_{\delta}(P_{\sigma}(L_1)-(s+\epsilon)P_{\sigma}(L_2))$ in $N^1(X)$ of radius $\delta$ centered at $P_{\sigma}(L_1)-(s+\epsilon)P_{\sigma}(L_2)$ is disjoint from ${\rm Psef}(X)$. There exists $m_0$ such that $m\ge m_0$ implies
$\psi_{m*}(A_{1,m})\in B_{\frac{\delta}{2}}(P_{\sigma}(L_1))$ and $\psi_{m*}(A_{2,m})\in B_{\frac{\delta}{(s+\epsilon)2}}(P_{\sigma}(L_2))$. Thus
$\psi_{m*}(A_{1,m}-(s+\epsilon)A_{2,m})\not\in {\rm Psef}(X)$ for $m\ge m_0$ so that $s_m<s+\epsilon$.
By the Khovanskii-Teissier inequalities for nef and big divisors (\cite[Theorem 2.15]{BFJ} in characteristic zero, \cite[Corollary 6.3]{C}),
\begin{equation}\label{eq14*}
(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}\ge {\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}
\end{equation}
for all $m$. By Proposition \ref{Prop35}, taking limits as $m\rightarrow \infty$, we have
\begin{equation}\label{eq20*}
\langle L_1^{d-1}\cdot L_2\rangle \ge {\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}.
\end{equation}
The Diskant inequality for big and nef divisors, \cite[Theorem 6.9]{C}, \cite[Theorem F]{BFJ} implies
$$
(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}-{\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}
\ge ((A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}})^d.
$$
We have that
$(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}}\ge 0$ since
$s_m^d\le\frac{{\rm vol}(A_{1,m})}{{\rm vol}(A_{2,m})}$ by Lemma \ref{Lemma22} and by (\ref{eq14*}).
We have that
$$
\begin{array}{lll}
\left[(A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{d}{d-1}}-{\rm vol}(A_{1,m}){\rm vol}(A_{2,m})^{\frac{1}{d-1}}\right]^{\frac{1}{d}}
&\ge& (A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-s_m{\rm vol}(A_{2,m})^{\frac{1}{d-1}}\\
&\ge & (A_{1,m}^{d-1}\cdot A_{2,m})^{\frac{1}{d-1}}-(s+\epsilon){\rm vol}(A_{2,m})^{\frac{1}{d-1}}\end{array}
$$
for $m\ge m_0$.
Taking the limit as $m\rightarrow\infty$ and then letting $\epsilon\rightarrow 0$, we have that (\ref{eq16*}) holds.
\end{proof}
\begin{Proposition}\label{Prop13*}
Suppose that $X$ is a projective $d$-dimensional variety over a field $k$ of characteristic zero and $L_1,L_2$
are big ${\NZQ R}$-Cartier divisors on $X$. Then
$$
\langle L_1^{d-1} \cdot L_2\rangle \ge{\rm vol}(L_1)^{\frac{d-1}{d}}{\rm vol}(L_2)^{\frac{1}{d}}.
$$
If equality holds, then $\langle L_1\rangle = s\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$, where $s=s(L_1,L_2)=\left(\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}\right)^{\frac{1}{d}}$.
\end{Proposition}
\begin{proof} The inequality holds by (\ref{eq20*}). Let $s=s(L_1,L_2)$. By (\ref{eq16*}),
if $\langle L_1^{d-1} \cdot L_2\rangle^{\frac{d}{d-1}}={\rm vol}(L_1){\rm vol}(L_2)^{\frac{1}{d-1}}$ then
$\langle L_1^{d-1}\cdot L_2\rangle^{\frac{1}{d-1}}=s{\rm vol}(L_2)^{\frac{1}{d-1}}$, so that $s^d=\frac{{\rm vol}(L_1)}{{\rm vol}(L_2)}$ and thus
$\langle L_1\rangle =s\langle L_2\rangle$ in $L^{d-1}(\mathcal X)$ by Proposition \ref{NewProp1}.
\end{proof}
Suppose that $X$ is a complete $d$-dimensional algebraic variety over a field $k$ and $D_1$, $D_2$ are pseudo effective ${\NZQ R}$-Cartier divisors on $X$. We will write
$$
s_i=\langle D_1^i\cdot D_2^{d-i}\rangle\mbox{ for $0\le i\le d$}.
$$
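In particular, $s_d=\langle D_1^d\rangle$ and $s_0=\langle D_2^d\rangle$, so that $s_d={\rm vol}(D_1)$ and $s_0={\rm vol}(D_2)$ when $D_1$ and $D_2$ are big; this identification will be used without further comment below.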
We have the following generalization of the Khovanskii-Teissier inequalities to positive intersection numbers.
\begin{Theorem} (Minkowski Inequalities)\label{Ineq} Suppose that $X$ is a complete algebraic variety of dimension $d$ over a field $k$ and $D_1$ and $D_2$ are pseudo effective ${\NZQ R}$-Cartier divisors on $X$. Then
\begin{enumerate}
\item[1)] $s_i^2\ge s_{i+1}s_{i-1}$ for $1\le i\le d-1.$
\item[2)] $s_is_{d-i}\ge s_0s_d$ for $1\le i\le d-1$.
\item[3)] $s_i^d\ge s_0^{d-i}s_d^i$ for $0\le i\le d$.
\item[4)] $\langle (D_1+D_2)^d\rangle^{\frac{1}{d}} \ge \langle D_1^d\rangle^{\frac{1}{d}}+\langle D_2^d\rangle^{\frac{1}{d}}$.
\end{enumerate}
\end{Theorem}
\begin{proof} Statements 1) - 3) follow from the inequality of \cite[Theorem 6.6]{C} (\cite[Theorem 2.15]{BFJ} in characteristic zero). Statement 4) follows from 3) and
the super additivity of the positive intersection product.
\end{proof}
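Although 1) - 3) are all contained in \cite[Theorem 6.6]{C}, we note for the reader's convenience that, when all of the $s_i$ are positive, 2) and 3) are formal consequences of 1): the inequalities 1) say that the sequence $\log s_0,\log s_1,\ldots,\log s_d$ is concave, so that for $0\le i\le d$
$$
\log s_i\ge \frac{d-i}{d}\log s_0+\frac{i}{d}\log s_d,
$$
which is 3), and adding this inequality to the corresponding one for $d-i$ gives $\log s_i+\log s_{d-i}\ge \log s_0+\log s_d$, which is 2).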
When $D_1$ and $D_2$ are nef, the inequalities of Theorem \ref{Ineq} are proven by Khovanskii and Teissier \cite{T1}, \cite{T2}, \cite[Example 1.6.4]{L}. In the case that $D_1$ and $D_2$ are nef, we have that $s_i=\langle D_1^i\cdot D_2^{d-i}\rangle=(D_1^i\cdot D_2^{d-i})$ are the ordinary intersection products.
We have the following characterization of equality in these inequalities.
\begin{Theorem} (Minkowski equalities)\label{Minkeq} Suppose that $X$ is a projective algebraic variety of dimension $d$ over a field $k$ of characteristic zero, and $D_1$ and $D_2$ are big ${\NZQ R}$-Cartier divisors on $X$. Then the following are equivalent:
\begin{enumerate}
\item[1)] $s_i^2= s_{i+1}s_{i-1}$ for $1\le i\le d-1.$
\item[2)] $s_is_{d-i}= s_0s_d$ for $1\le i\le d-1$.
\item[3)] $s_i^d= s_0^{d-i}s_d^i$ for $0\le i\le d$.
\item[4)] $s_{d-1}^d=s_0s_d^{d-1}$.
\item[5)] $\langle (D_1+D_2)^d\rangle^{\frac{1}{d}} = \langle D_1^d\rangle^{\frac{1}{d}}+\langle D_2^d\rangle^{\frac{1}{d}}$.
\item[6)] $\langle D_1\rangle$ is proportional to $\langle D_2\rangle$ in $L^{d-1}(\mathcal X)$.
\end{enumerate}
\end{Theorem}
When $D_1$ and $D_2$ are nef and big, Theorem \ref{Minkeq} is proven in \cite[Theorem 2.15]{BFJ} when $k$ has characteristic zero and in \cite[Theorem 6.13]{C} for arbitrary $k$. When $D_1$ and $D_2$ are nef and big, the condition
6) of Theorem \ref{Minkeq} is just that $D_1$ and $D_2$ are proportional in $N^1(X)$.
\begin{proof}
All the numbers $s_i$ are positive by Remark \ref{Remark50}.
Proposition \ref{Prop35} shows that 6) implies 1), 2), 3), 4) and 5).
Theorem \ref{Theorem22} shows that 5) implies 6).
Proposition \ref{Prop13*} shows that 4) implies 6). Since the condition 4) is the special case $i=d-1$ of the condition 3), 3) implies 4), and hence 3) implies 6).
Suppose that 2) holds. By the inequality 3) of Theorem \ref{Ineq} and the equality 2), we have that
$$
s_i^ds_{d-i}^d\ge (s_0^{d-i}s_d^i)(s_0^is_d^{d-i})=(s_0s_d)^d=(s_is_{d-i})^d.
$$
Thus the equalities 3) hold.
Suppose that the equalities 1) hold. Then
$$
\frac{s_{d-1}}{s_0}=\frac{s_{d-1}}{s_{d-2}}\frac{s_{d-2}}{s_{d-3}}\cdots\frac{s_1}{s_0}=\left(\frac{s_d}{s_{d-1}}\right)^{d-1}
$$
so that 4) holds.
\end{proof}
\begin{Remark} The existence of resolutions of singularities is the only place where characteristic zero is used in the proof of Theorem \ref{Minkeq}. Thus the conclusions of Theorem \ref{Minkeq} are valid over an arbitrary field for varieties of dimension $d\le3$ by \cite{Ab2}, \cite{CP}.
\end{Remark}
Generalizing Teissier \cite{T1}, we
define the inradius of $\alpha$ with respect to $\beta$ as
$$
r(\alpha;\beta)=s(\alpha,\beta)
$$
and the outradius of $\alpha$ with respect to $\beta$ as
$$
R(\alpha;\beta)=\frac{1}{s(\beta,\alpha)}.
$$
\begin{Theorem}\label{TheoremG} Suppose that $X$ is a $d$-dimensional projective variety over a field $k$ of characteristic zero, $\alpha,\beta$ are big ${\NZQ R}$-Cartier divisors on $X$ and $s_i=\langle \alpha^i\cdot\beta^{d-i}\rangle$. Then
\begin{equation}\label{eq106}
\frac{s_{d-1}^{\frac{1}{d-1}}-(s_{d-1}^{\frac{d}{d-1}}-s_0^{\frac{1}{d-1}}s_d)^{\frac{1}{d}}}{s_0^{\frac{1}{d-1}}}
\le r(\alpha;\beta)\le \frac{s_d}{s_{d-1}}.
\end{equation}
\end{Theorem}
\begin{proof} Let $s=s(\alpha,\beta)=r(\alpha;\beta)$. Since $\langle \alpha\rangle \ge s\langle \beta\rangle$, we have that $\langle \alpha^d\rangle\ge s\langle \beta\cdot\alpha^{d-1}\rangle$ by Lemma \ref{Lemma36}. This gives us the upper bound. We also have that
\begin{equation}\label{eq110}
\langle \alpha^{d-1}\cdot\beta\rangle^{\frac{1}{d-1}}-s\langle \beta^d\rangle^{\frac{1}{d-1}}\ge 0.
\end{equation}
We obtain the lower bound from Theorem \ref{PropNew60} (using the inequality $s_{d-1}^d\ge s_0s_d^{d-1}$ to ensure that the bound is a positive real number).
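Explicitly, taking $L_1=\alpha$ and $L_2=\beta$ in (\ref{eq16*}), and recalling that $s_d={\rm vol}(\alpha)$ and $s_0={\rm vol}(\beta)$, we obtain
$$
s_{d-1}^{\frac{d}{d-1}}-s_ds_0^{\frac{1}{d-1}}
\ge \left[s_{d-1}^{\frac{1}{d-1}}-s\,s_0^{\frac{1}{d-1}}\right]^d,
$$
where the quantity in brackets is nonnegative by (\ref{eq110}); taking $d$-th roots and solving for $s$ gives the lower bound of (\ref{eq106}).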
\end{proof}
\begin{Theorem}\label{TheoremH} Suppose that $X$ is a $d$-dimensional projective variety over a field $k$ of characteristic zero and $\alpha,\beta$ are big ${\NZQ R}$-Cartier divisors on $X$. Then
\begin{equation}\label{eq107}
\frac{s_{d-1}^{\frac{1}{d-1}}-(s_{d-1}^{\frac{d}{d-1}}-s_0^{\frac{1}{d-1}}s_d)^{\frac{1}{d}}}{s_0^{\frac{1}{d-1}}}
\le r(\alpha;\beta)\le \frac{s_d}{s_{d-1}}\le\frac{s_1}{s_0}\le R(\alpha;\beta)\le
\frac{s_d^{\frac{1}{d-1}}}{s_1^{\frac{1}{d-1}}-(s_1^{\frac{d}{d-1}}-s_d^{\frac{1}{d-1}}s_0)^{\frac{1}{d}}}.
\end{equation}
\end{Theorem}
\begin{proof} By Theorem \ref{TheoremG}, we have that
$$
\frac{s_1^{\frac{1}{d-1}}-(s_1^{\frac{d}{d-1}}-s_d^{\frac{1}{d-1}}s_0)^{\frac{1}{d}}}{s_d^{\frac{1}{d-1}}}
\le s(\beta,\alpha)\le\frac{s_0}{s_1}.
$$
The theorem now follows from the fact that $R(\alpha;\beta)=\frac{1}{s(\beta,\alpha)}$ and Theorem \ref{Ineq}.
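Here Theorem \ref{Ineq} is used through the inequalities 1), which show that the ratios $s_i/s_{i-1}$ are non-increasing in $i$, so that
$$
\frac{s_d}{s_{d-1}}\le \frac{s_{d-1}}{s_{d-2}}\le\cdots\le\frac{s_1}{s_0};
$$
this gives the middle inequality of (\ref{eq107}).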
\end{proof}
This gives a solution to \cite[Problem B]{T1} for big ${\NZQ R}$-Cartier divisors. The inequalities of Theorem \ref{TheoremH} are proven by Teissier in \cite[Corollary 3.2.1]{T1} for divisors on surfaces satisfying some conditions.
In the case that $\alpha$ and $\beta$ are nef and big on a projective variety over a field of characteristic zero, Theorem \ref{TheoremH} follows from the Diskant inequality \cite[Theorem F]{BFJ}. In the case that $\alpha$ and $\beta$ are nef and big on a projective variety over an arbitrary field, Theorem \ref{TheoremH} is proven in \cite[Theorem 6.11]{C}, as a consequence of the Diskant inequality \cite[Theorem 6.9]{C} for nef divisors.
Many works have studied properties of the trajectory of a simple random walk.
These properties include the growth rate of the trajectory's range, the location of the most frequently visited site, the number of visits to this site,
the number of sites that are visited frequently, and so forth.
There remain many interesting unsolved questions concerning these properties.
The most frequently visited site among all the points of the range (of the walk of finite length) is called a favorite site
and a site which is revisited many times (in a certain specified sense) is called a frequently visited site.
About fifty years ago, Erd\H{o}s and Taylor \cite{dvo1} posed a problem concerning a simple random walk in ${\mathbb{Z}^{d}}$:
how many times does the random walk revisit the favorite site (up to a specific time)?
Open problems concerning the favorite site were raised by Erd\H{o}s and R\'ev\'esz \cite{er2}, \cite{er3} and by Shi and T\'oth \cite{shi}, but they remain unsolved so far.
By Lifshits and Shi \cite{shi2} it is known that
the favorite site of a $1$-dimensional random walk tends to be far from the origin,
but almost nothing is known about its location for multi-dimensional walks.
In this paper, we focus on the most frequently visited site among the inner boundary points of the random walk range, rather than among all of the points of the range, and propose the question:
how many times does a random walk revisit the most frequently visited site among the inner boundary points?
Here, we briefly state our result and compare it with known results for the favorite site.
Let $M_0(n)$ be the number of visits to the favorite site by the walk up to time $n$ and
let $M(n)$ be the corresponding number for the most frequently visited site among the inner boundary points.
In Theorem \ref{m1}, we will prove that
for $d\ge2$
\begin{align*}
\lim_{n \to \infty} \frac{M(n)}{\log n}= \frac{1}{- \log P(T_0<T_b)} \quad \text{ a.s.}
\end{align*}
Here, $T_a$ is the first time the random walk started at the origin hits $a$ after time $0$, and
$b$ is a neighbor site of the origin.
To compare, a classical result of
Erd\H{o}s and Taylor \cite{dvo1} says that for $d\ge3$,
\begin{align*}
\lim_{n \to \infty} \frac{M_0(n)}{\log n}= \frac{1}{- \log P(T_0<\infty)} \quad \text{ a.s.},
\end{align*}
and for $d=2$, $M_0(n)/(\log n)^2$ is bounded away from zero and infinity a.s. (the limit exists and is identified \cite{Dembo} as mentioned later in Section 2).
These results illuminate the geometric structure of the random walk range as well as the nature of recurrence or transience of random walks.
We are able to infer that the favorite site is outside the inner boundary from some time onwards with probability one.
This may appear intuitively clear; it seems improbable for the favorite site to continue to be an inner boundary point since it must be visited many times.
However, our result further shows that there are many inner boundary points that are visited many times,
the number of visits being of the same order as that of the favorite site when $d\ge 3$.
In addition, the growth order of $M(n)$ is the same for all $d\geq 2$, meaning that
the phase transition which occurs between $d=2$ and $d\geq3$ for $M_0(n)$ does not occur for $M(n)$.
In Theorem \ref{m2}, which in a sense strengthens Theorem \ref{m1}, we will provide an explicit answer to the question of how many frequently visited sites among the inner boundary points there are.
The upper bounds for both Theorem \ref{m1} and Theorem \ref{m2} are obtained using the idea in \cite{dvo1}.
The Chebyshev inequality and the Borel-Cantelli lemma are also used in the same way as in \cite{dvo1}.
On the other hand, $M(n)$ is not monotone, while $M_0(n)$ is monotone.
We work with the walk and its trajectory at the times $2^k$ and find a process that is monotone and a bit larger than $M(n)$, but with the desired asymptotics.
In contrast, the idea for the proof of the lower bound is different from that for the known results.
In \cite{Dembo}, a Brownian occupation measure was used in the proof.
Rosen \cite{Rosen} provided another proof of the result of \cite{Dembo},
in which he computed a crossing number instead of the number of the frequently visited sites.
In this paper, we use the Chebyshev inequality and the Borel-Cantelli lemma as in \cite{Rosen}, but for the number of the frequently visited sites among the inner boundary points.
In addition, as in the proof of the upper bound, we estimate a quantity
slightly smaller than the number of the frequently visited sites among the inner boundary points.
We conclude this introduction by mentioning some known results about the inner boundary points of the random walk range that are closely related to the present subject.
Let $L_n$ be the number of the inner boundary points up to time $n$.
In \cite{itai}, it is observed that the entropy of a random walk is essentially governed
by the asymptotics of $L_n$.
In \cite{okada}, the law of large numbers for $L_n$ is shown and $\lim L_n/n$ is identified
for a simple random walk on ${\mathbb Z}^d$ for every $d\ge1$.
Let $J_n^{(p)}$ denote the number of $p$-multiplicity points in the inner boundary, defined as
\begin{align*}
J_n^{(p)}=\sharp &\{S_i\in \partial R(n):\sharp\{m:0\le m \le n,S_m=S_i \}=p \},
\end{align*}
where $\partial R(n)$ is the set of the inner boundary of $\{S_0,S_1,...,S_n\}$ and $\sharp A$ is the number of elements in $A$.
In \cite{okada}, it is also shown that for a simple random walk in $\mathbb{Z}^2$, with $p\ge1$,
\begin{align}\notag
\frac{{\pi}^2}{2} \le \liminf_{n\to \infty}EL_n\times\frac{(\log n)^2}{n}
\le \limsup_{n\to \infty}EL_n\times\frac{(\log n)^2}{n} &\le 2{\pi}^2,
\\ \label{iii+}
\frac{\tilde{c}^{p-1}\pi^2}{4} \le \liminf_{n\to \infty}EJ_n^{(p)}\times\frac{(\log n)^2}{n}
\le \limsup_{n\to \infty}EJ_n^{(p)}\times\frac{(\log n)^2}{n}
&\le \tilde{c}^{p-1}\pi^2,
\end{align}
where $\tilde{c}=P(T_0<T_b)$ for a neighbor site $b$ of the origin (the value does not depend on the choice of $b$).
These may be compared with the results for the entire range; according to \cite{Fla},
$\sharp\{S_i:0\le i \le n\}$ in ${\mathbb{Z}}^2$ is asymptotic to $\pi n/\log n$ and
the asymptotic form of the number of $p$-multiplicity points in it is independent of $p$.
\section{Framework and Main Results }
Let $\{S_k\}_{k=0}^{\infty}$ be a simple random walk
on the $d$-dimensional square lattice ${\mathbb Z}^d$.
Let $P^a$ denote the probability of the simple random walk starting at $a$;
we simply write $P$ for $P^0$.
Let ${\mathbb N}=\{1,2,3,...\}$ and for $n\in{\mathbb N}$, set $R(n) = \{S_0,S_1, \ldots, S_n\}$, the random walk range up to time $n$.
We call $z \in \mathbb{Z}^d$ a neighbor of $a\in \mathbb{Z}^d$ if $|a-z|=1$.
Let ${\cal N}(a)$ be the set of all neighbors of $a$ defined as
$${\cal N}(a)=\{z\in{\mathbb Z}^d : |a-z|=1 \}.$$
The inner boundary of $A \subset \mathbb{Z}^d$ is denoted by $\partial A$, that is
$$\partial A =\{x \in A : {\cal N} (x) \not\subset A \}.$$
We denote by $K(n,x)$ the number of visits to $x$ by the walk up to time $n$.
That is,
$$K(n,x)=\sum_{j=0}^n1_{\{S_j=x\}}.$$
Moreover, we set $$M(n):=\max_{x \in \partial R(n)}K(n,x).$$
Clearly, this is the maximal number of visits of the random walk of length $n$ to
$\partial R(n)$, the inner boundary of $R(n)$.
Let $T_x$ denote the first passage time to $x$:
$T_x=\inf\{m\ge1: S_m=x\}$.
We are now ready to state our main theorems.
The first theorem provides us with sharp asymptotic behavior of $M(n)$.
\begin{thm}\label{m1}
For $d\ge2$
\begin{align*}
\lim_{n \to \infty} \frac{M(n)}{\log n}=\beta_d \quad \text{ a.s.},
\end{align*}
where
\begin{align*}
\beta_d=\frac{1}{- \log P(T_0<T_b)}
\end{align*}
for any $b\in {\cal N}(0)$.
Note that $P(T_0<T_b)$ does not depend on the choice of $b\in {\cal N}(0)$, but rather
depends only on the dimension.
\end{thm}
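Although it is not used in the proofs, the constant $\beta_d$ can be guessed by the following first-moment heuristic. Once the walk has visited a site $x$, the strong Markov property shows that the conditional probability that it returns to $x$ at least $m$ more times before visiting the neighbor $x+b$ is
$$
P(T_0<T_b)^m=\exp(-m/\beta_d),
$$
and on this event $x$ is still an inner boundary point at the $m$-th of these returns. Since there are at most $n+1$ sites in $R(n)$, the expected number of such sites is of order one when $(n+1)\exp(-m/\beta_d)\approx 1$, that is, when $m\approx \beta_d\log n$.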
Turning to the second main theorem,
we first define $\Theta_n (\delta)$ for $n \in \mathbb{N}$ and $0<\delta<1$ as
\begin{align*}
\Theta_n(\delta):=\sharp \{ x\in \partial R(n) :
\frac{K(n,x)}{\log n}\ge \beta_d\delta \}.
\end{align*}
This is the number of points in $\partial R(n)$ whose number of visits is at least $\delta$ times the maximal order appearing in Theorem \ref{m1}.
Our second main theorem exhibits the sharp logarithmic asymptotic behavior of $\Theta_n(\delta)$ as $n \to\infty$.
\begin{thm}\label{m2}
For $d\ge2$ and $0<\delta<1$,
\begin{align*}
\lim_{n \to \infty} \frac{\log \Theta_n(\delta)}{\log n}=1-\delta \quad a.s.
\end{align*}
\end{thm}
We compare our main results to the corresponding results for the whole random walk range $R(n)$.
We denote the quantity corresponding to $M(n)$ by $M_0(n)$.
That is, $M_0(n)=\max_{x\in{\mathbb{Z}^{d}}}K(n,x)$;
$M_0(n)$ represents the maximal number of visits of the random walk to a single site in the entire random walk range up to time $n$.
Erd\H{o}s and Taylor \cite{dvo1} showed
that for $d\ge3$
\begin{align*}
\lim_{n \to \infty} \frac{M_0(n)}{\log n}= \frac{1}{-\log P(T_0<\infty)}\quad \text{ a.s.}
\end{align*}
For $d=2$, they obtained
\begin{align*}
\frac{1}{4\pi} \le
\liminf_{n \to \infty} \frac{M_0(n)}{(\log n)^2}\le
\limsup_{n \to \infty} \frac{M_0(n)}{(\log n)^2}\le \frac{1}{\pi} \quad \text{ a.s.},
\end{align*}
and conjectured that the limit exists and equals $1/\pi$ a.s.
Forty years later, Dembo et al. \cite{Dembo} verified this conjecture and also showed how many frequently visited sites
of order $(\log n)^2$ there are in the following sense.
Let $d=2$.
Then, for $0<a<1$, define
$$\Theta_{0,n}=\sharp\{ x\in {\mathbb{Z}^{2}}:
\frac{K(n,x)}{(\log n)^2}\ge \frac{a}{\pi} \}.$$
Then
\begin{align*}
\lim_{n\to \infty} \frac{\log \Theta_{0,n}}{\log n}=1-a \quad \text{ a.s.}
\end{align*}
In view of these results, Theorem \ref{m1} entails the following corollary.
\begin{cor}
For $d\ge2$, the favorite site does not appear in the inner boundary from some time onwards a.s.
In other words, $M_0(n)>M(n)$ for all but finitely many $n$ with probability one.
\end{cor}
The $p$-th hitting times $T^p_x$ for $p=0,1,\ldots$ and the partial ranges $R[l,n]$ that we are now to define play significant roles.
Let $T_x^0=\inf\{j\ge0: S_j=x\}$ and for $p\ge1$,
\begin{align} \label{Tp}
T_x^p=\inf\{ j>T_x^{p-1}: S_j=x\}
\end{align}
with the convention $\inf \emptyset =\infty$.
For integers $l\ge 0$ and $n\ge -1$ let
$$R[l,n]=\{S_l,S_{l+1},...,S_n\}$$
if $n\geq l$,
and $R[l,n]=\emptyset$ if $n<l$; in particular, $R[0,-1]=\emptyset$.
The inner boundary of the random walk range $R[l,n]$ is denoted simply by $\partial R[l,n]$ as it is for $R(n)$.
It is noted that $T_x=T_x^0$ for $x\neq S_0$ and $T_{x}=T_{x}^1$ if $x=S_0$. Also, $R(n)= R[0,n]$.
In the proofs given in the remainder of this paper, we denote by $C$ or $c$ positive constants whose values depend on the context and may change from line to line.
In addition, $\lceil a \rceil$ denotes the smallest integer $n$ with $n\ge a$,
and $A^c$ denotes the complement of a set $A$.
\section{The upper bound in Theorem \ref{m1}}
Here, we prove the following proposition.
\begin{prop}\label{po}
For $d\ge2$
$$\limsup_{n\to \infty} \frac{M(n)}{\log n}\le \beta_d\quad\text { a.s.}
$$
\end{prop}
Unlike the proof of the lower bound below, the proof of Proposition \ref{po} will be performed independently of the dimension $d$.
As mentioned above, neither $M(n)$ nor $\Theta_n(\delta)$ is monotone in $n\in \mathbb{N}$.
To mitigate this issue, we introduce the following random variables.
For $\beta>0$, we set
\begin{align*}
\tilde{\Theta}_n(\beta)=\sharp \{ x\in\partial R(T_x^{\lceil \beta\log n/2\rceil }) :K(n,x)\ge \lceil \beta \log \frac{n}{2}\rceil \}
\end{align*}
($T_x^p$ is defined by (\ref{Tp})).
This is a modification of $\Theta_n(\beta/\beta_d)$ made by relaxing the constraint of being on the inner boundary.
Note that, by Lemma \ref{theta} below and the Chebyshev inequality, $P(\tilde\Theta_n(\beta)\ge 1)\to 0$ as $n\to\infty$ if $\beta>\beta_d$.
\begin{lem}\label{theta}
For $\beta>0$ there exists $C>0$ such that for any $n\in \mathbb{N}$
\begin{align*}
E[\tilde{\Theta}_n(\beta)]\le Cn^{1-\frac{\beta}{\beta_d}}.
\end{align*}
\end{lem}
\begin{proof}
First we note the following elementary property, where we write $R(I)=\{S_j : j\in I\}$ for an interval $I\subset {\mathbb N}\cup\{0\}$:
for any intervals $I_0$, $I_1$, $I_2\subset {\mathbb N}\cup \{0\}$ with $I_0 \subset I_1 \subset I_2$, it holds that
\begin{align}\label{el*}
R(I_0)\cap \partial R (I_2) \subset \partial R(I_1).
\end{align}
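Indeed, if $x\in R(I_0)$ and $x\in \partial R(I_2)$, then some neighbor of $x$ lies outside $R(I_2)$ and hence outside $R(I_1)\subset R(I_2)$, while $x\in R(I_0)\subset R(I_1)$, so that $x\in \partial R(I_1)$.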
Note that we can write
\begin{align}\label{by}
\tilde{\Theta}_n(\beta)
=\sum_{l=0}^n 1_{B_{l,n}},
\end{align}
where
\begin{align*}
B_{l,n}=\{S_l \in R(l-1)^c \cap \partial R(T_{S_l}^{\lceil \beta \log n/2\rceil}),
K(n,S_l)\ge \lceil \beta \log \frac{n}{2} \rceil \}.
\end{align*}
Since $K(l-1,S_l)=0$ on $\{S_l \in R(l-1)^c\}$,
for $l\le n$
\begin{align*}
P(B_{l,n})=
&P(\sum_{j=0}^n1_{\{S_j=S_l\}}\ge \lceil \beta \log \frac{n}{2}\rceil, S_l \in R(l-1)^c \cap \partial R(T_{S_l}^{\lceil \beta \log n/2\rceil}) )\\
=&P(\sum_{j=l}^n1_{\{S_j=S_l\}}\ge \lceil \beta \log \frac{n}{2}\rceil, S_l \in R(l-1)^c \cap \partial R(T_{S_l}^{\lceil \beta \log n/2\rceil}) )\\
\le &P(\sum_{j=l}^n1_{\{S_j=S_l\}}\ge \lceil \beta \log \frac{n}{2}\rceil,
S_l\in \partial R[l,T_{S_l}^{\lceil \beta \log n/2\rceil}])\\
=&P(K(n-l,0)\ge \lceil \beta \log \frac{n}{2}\rceil, 0 \in \partial R(T_{0}^{\lceil \beta \log n/2\rceil-1}) ).
\end{align*}
Here, the inequality comes from (\ref{el*})
with $I_0=\{ l \}$, $I_1=[l, T_{S_l}^{\lceil \beta \log n/2\rceil}]$
and $I_2=[0, T_{S_l}^{\lceil \beta \log n/2\rceil}]$.
The last equality follows from the Markov property and the translation invariance for $S_l$.
In addition, by applying the Markov property repeatedly, we obtain
\begin{align*}
&P(K(n-l,0)\ge \lceil \beta \log \frac{n}{2}\rceil, 0 \in \partial R(T_{0}^{\lceil \beta \log n/2\rceil-1}) )\\
\le &P( T_0^{\lceil \beta \log n/2\rceil-1}<\infty, 0 \in \partial R(T_{0}^{\lceil \beta \log n/2\rceil-1}) )\\
= &P(\cup_{b\in {\cal N}(0)}\{T_0^{\lceil \beta \log n/2\rceil-1 }< T_b\} )\\
\le &2dP(T_0<T_b)^{\lceil \beta \log n/2\rceil-1}
\le C n^{-\frac{\beta}{\beta_d}}.
\end{align*}
Hence, the assertion holds by $E[\tilde{\Theta}_n(\beta)]\le (n+1)\max_{0\le l \le n} P(B_{l,n})$, which follows from (\ref{by}).
\end{proof}
\begin{proof}[Proof of Proposition \ref{po}]
Since $M(n)$ is not monotonically increasing in $n$,
we instead first consider $\tilde{M}(n)=\max_{l\le n}M(l)$.
If $\tilde{M}(n)>m$, there exist $x_1\in \mathbb{Z}^d$ and $n_1\le n$ such that
$\tilde{M}(n)=M(n_1)=K(n_1,x_1)$ and $x_1\in \partial R(n_1)$.
Therefore, for such $n_1$, $m$ and $x_1$, it holds that $T_{x_1}^m\le n_1$ and hence $x_1\in\partial R(n_1)\cap \partial R(T_{x_1}^m)$ by (\ref{el*}).
Further, $K(n_1,x_1)\le K(n,x_1)$.
Accordingly, we have
\begin{align*}
&P(\tilde{M}(n)\ge \lceil \beta \log \frac{n}{2}\rceil)\\
\le & P(\cup_{x \in \mathbb{Z}^d }\{x \in \partial R(T_{x}^{\lceil \beta \log n/2\rceil}), K(n,x)\ge \lceil \beta \log \frac{n}{2}\rceil \} ).
\end{align*}
Thus, by Lemma \ref{theta} and the Chebyshev inequality, we obtain
\begin{align*}
P(\tilde{M}(n)\ge \lceil \beta \log \frac{n}{2}\rceil)
\le P(\tilde{\Theta}_n(\beta)\ge1)
\le Cn^{1-\frac{\beta}{\beta_d}}.
\end{align*}
Since $1-\beta/\beta_d<0$ for $\beta>\beta_d$, the Borel-Cantelli lemma shows that for any $\beta>\beta_d$ the events
$\{\tilde{M}(2^k)>\beta \log 2^{k-1}\}$
happen only finitely often with probability one.
Therefore, it holds that for any $\beta>\beta_d$,
\begin{align}\label{guu*}
\limsup_{k\to\infty} \frac{\tilde{M}(2^k)}{\beta \log 2^{k-1} }\le 1\quad\text{ a.s. }
\end{align}
Now we consider $M(n)$.
For any $k$, $n\in{\mathbb N}$ with $2^{k-1}\le n<2^k$ we have
$$ \frac{M(n)}{\log n}\le\frac{\tilde{M}(2^k)}{\log 2^{k-1}},$$
and so with (\ref{guu*}), for any $\beta>\beta_d$
\begin{align*}
\limsup_{n\to\infty} \frac{M(n)}{\beta\log n}
\le 1 \quad\text{ a.s. }
\end{align*}
Therefore, the proof is completed.
\end{proof}
\section{The lower bound in Theorem \ref{m1}}
\subsection{Reduction of the lower bound of Theorem \ref{m1} to key lemmas}
Our goal in this section is to prove the following Proposition \ref{po1}.
\begin{prop}\label{po1}
For $d\ge2$
\begin{align}\label{kii}
\liminf_{n\to \infty} \frac{M(n)}{ \log n}\ge \beta_d \quad\text{ a.s.}
\end{align}
\end{prop}
Unlike Sections $4.3$ and $4.4$, the argument of Sections $4.1$ and $4.2$ will be performed independently of the dimension $d$.
In what follows, we discuss the proof for each fixed $\beta<\beta_d$.
Let $$u_n=\lceil \exp(n^2)\rceil.$$
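Note that $\log u_n=n^2+o(1)$ and $u_{n-1}\ge \exp((n-1)^2)=e\,\exp(n^2-2n)$; these elementary bounds will be used repeatedly below, for instance to convert factors of the form $u_{n-1}\exp((h-1)n^2)$ into $\exp(hn^2-2n)$.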
In this section, we will reduce the proof of Proposition \ref{po1} to two key lemmas given below
(Lemmas \ref{hh+} and \ref{hh}).
For $b\in {\cal N}(0)$ and
$A \subset \mathbb{Z}^d$, we define $\partial_b A$ as
$$\partial_b A:=\{x\in A : x+b \not\in A \}.$$
We can extend the property (\ref{el*}) in the following way:
for any intervals $I_0$, $I_1$, $I_2\subset {\mathbb N}\cup \{0\}$ with $I_0 \subset I_1 \subset I_2$, it holds that
\begin{align}\label{el}
R(I_0)\cap \partial_b R (I_2) \subset \partial_b R(I_1)\subset \partial R(I_1).
\end{align}
Let us define $\tilde{Q}_n$ as
\begin{align*}
\tilde{Q}_n:=\sharp \{x\in R(u_{n-1}) \cap \partial_b R(u_n) :
T_{x}^{\lceil\beta n^2\rceil}\le u_{n-1}\}.
\end{align*}
We begin by providing, in terms of $\tilde{Q}_n$, a sufficient condition for the inequality (\ref{kii}) asserted in Proposition \ref{po1}.
\begin{lem}\label{fu}
If, for any $\beta\in (0,\beta_d)$,
\begin{align}\label{iii}
P(\tilde{Q}_n\ge 1 \quad\quad \text{ for all but finitely many }n)=1,
\end{align}
then the inequality (\ref{kii}) holds.
\end{lem}
\begin{proof}
Set
$$L(n):=\max_{x \in R(u_{n-1}) \cap \partial_b R(u_n)}K(u_{n-1},x).$$
Note that $L(n) \ge \beta n^2$ if $\tilde{Q}_n\ge 1$.
Hence, it holds that
\begin{align*}
P(L(n)\ge\beta n^2\quad\quad \text{ for all but finitely many }n)=1.
\end{align*}
Moreover, by (\ref{el}), for any $m$, $n \in{\mathbb{N}}$ with $u_{m-1}\le n < u_m$
we have
\begin{align}\label{ooo}
R(u_{m-1}) \cap \partial_b R(u_m)
\subset \partial R(n)
\end{align}
and hence $L(m) \le \max_{x\in \partial R(n)}K(u_{m-1},x)\le \max_{x\in \partial R(n)}K(n,x) \le M(n)$.
Therefore, we conclude that for any $\beta<\beta_d$
\begin{align*}
\liminf_{n\to\infty} \frac{M(n)}{\beta \log n}\ge
\liminf_{m\to\infty} \frac{L(m)}{\beta \log u_m}\ge
1\quad \text{ a.s.,}
\end{align*}
which gives (\ref{kii}).
\end{proof}
In order to verify the condition (\ref{iii}),
we introduce a new quantity $Q_n$ by modifying the definition of $\tilde{Q}_n$.
To do this, we first introduce several notions.
Set
$T_{x,l}^0:=\inf\{j\ge l: S_j=x\}$ and
$T_{x,l}^p:=\inf\{j>T_{x,l}^{p-1}:S_j=x\}$.
Note that $T_{x,0}^a=T_x^a$ for any $a\in{\mathbb{N}}$ and $x\in \mathbb{Z}^d$,
and that $T_{S_l}^p=T_{S_l,l}^p$ holds for each $p$ on the event $S_l \notin R(l-1)$.
Note that $T_{S_l,l}^p$ is a stopping time while $T_{S_l}^p$ is not.
Although we can state the key lemmas without using this notion,
we introduce it for later use.
For $k\in {\mathbb{N}}$, let
$h_k=\beta\log P(T_0<T_b\wedge k)+1$.
Since $\lim_{k\to \infty}h_k=1-\beta/\beta_d>0$, we have $h_k>0$
for all sufficiently large $k$.
We fix such a $k$ and simply denote $h_k$ by $h$ hereafter.
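With this notation, $\log P(T_0<T_b\wedge k)=(h-1)/\beta$, so that
$$
P(T_0<T_b\wedge k)^{\lceil \beta n^2\rceil}
=\exp\Big(\lceil \beta n^2\rceil\,\frac{h-1}{\beta}\Big)
=\exp((h-1)n^2+O(1)).
$$
This is the origin of the factors $\exp((h-1)n^2)$ appearing in the estimates below (see (\ref{t4*})).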
Let $$I_n:=[\frac{u_{n-1}}{n^2}, u_{n-1}-k \lceil \beta_d n^2\rceil]\cap {\mathbb{N}}.$$
For any $l\in I_n$, we introduce the events $E_{l,n}$ and $A_{l,n}$ defined by
\begin{align*}
E_{l,n}:=\{ T_{S_l,l}^{j}-T_{S_l,l}^{j-1}<k \text{ for any }1\le j\le \lceil \beta n^2\rceil \},
\end{align*}
and
\begin{align*}
A_{l,n}:=\{S_l \in R(l-1)^c \cap \partial_b R(u_{n}) \}\cap E_{l,n}.
\end{align*}
Then, we set
\begin{align}\label{yy}
Q_n:=\sum_{l \in I_n }1_{A_{l,n}}.
\end{align}
Although $I_n$, $A_{l,n}$ and $Q_n$ depend on the choice of the parameters $\beta$ and $k$,
we do not indicate this dependence explicitly in the notation.
By the definition of $I_n$,
$I_n \subset [0,u_{n-1}] \cap (\mathbb{N} \cup \{0\})$ and
$l+k \lceil \beta n^2\rceil \le u_{n-1}$ for any $l\in I_n$.
These facts imply $Q_n\le \tilde{Q}_n$.
As we will see, the verification of condition (\ref{iii}) is reduced to the following two estimates for $Q_n$.
\begin{lem} \label{hh+}
Let $\beta<\beta_d$ and take $k\in{\mathbb{N}}$ so that $h=h_k>0$ as above.
Then, there exists $c>0$ such that for any $n\in{\mathbb{N}}$, the following hold:
(i) When $d=2$,
\begin{align*}
EQ_n \ge \frac{c\exp(hn^2-2n)}{n^4}.
\end{align*}
(ii) When $d \ge 3$,
\begin{align*}
EQ_n \ge c\exp(hn^2-2n).
\end{align*}
\end{lem}
\begin{lem} \label{hh}
Let $\beta<\beta_d$ and take $k\in{\mathbb{N}}$ so that $h=h_k>0$ as above.
Then, there exists $C>0$ such that for any $n\in{\mathbb{N}}$, the following hold:
(i) When $d=2$,
\begin{align*}
\mathrm{Var} (Q_n) \le C\bigg(\frac{\exp(hn^2-2n)}{n^4}\bigg)^2 \frac{\log n}{n^2}.
\end{align*}
(ii) When $d \ge 3$,
\begin{align*}
\mathrm{Var} (Q_n) \le C\exp(2hn^2-4n)\times \frac{1}{n^{10}}.
\end{align*}
\end{lem}
Now we give a proof of Proposition \ref{po1} by assuming Lemmas \ref{hh+} and \ref{hh} are true.
\begin{proof}[Deduction of Proposition \ref{po1} from Lemmas \ref{hh+} and \ref{hh}]
If Lemmas \ref{hh+} and \ref{hh} hold,
then we only need to prove the assumption of Lemma \ref{fu} to obtain Proposition \ref{po1}.
Take $k\in {\mathbb{N}}$ and $h>0$ as above.
By the Chebyshev inequality, we have
\begin{align}\label{ine1}
P(|Q_n-EQ_n|>\frac{EQ_n}{2})\le \frac{4\mathrm{Var} (Q_n)}{(EQ_n)^2}.
\end{align}
By Lemmas \ref{hh+} and \ref{hh}, we can see that there exists $C>0$
such that the following is true:
\begin{align*}
\frac{\mathrm{Var} (Q_n)}{(EQ_n)^2}
\begin{cases}
&\le \frac{C\log n}{n^2}\quad \quad\quad \quad \text{if } d=2,\\
&\le \frac{C}{n^{10}} \quad \quad \quad \quad\quad \text{ if } d \ge 3.
\end{cases}
\end{align*}
As a result, the right hand side of (\ref{ine1}) is summable.
Since $|b-a|\le \frac{a}{2}$ implies $b\ge \frac{a}{2}$
for $a,b\ge0$, the Borel-Cantelli lemma yields
\begin{align}\label{rrr}
P(Q_n\ge\frac{1}{2}EQ_n\quad \quad \text{ for all but finitely many }n)=1.
\end{align}
Since $h>0$, Lemma \ref{hh+} implies $EQ_n \ge 2$ for all sufficiently large $n$.
Hence, the assertion of Lemma \ref{fu} holds by combining this fact with (\ref{rrr}).
\end{proof}
\subsection{Preparations for the proof of Lemmas \ref{hh+} and \ref{hh}}
In this section, we estimate $P(A_{l,n})$ using the strong Markov property.
For later use, we will consider more general events than $A_{l,n}$.
For any $n', l , \tilde{n}\in {\mathbb{N}} \cup \{0\}$ with $n'\le l \le \tilde{n}$ and $n\in {\mathbb{N}}$, let
\begin{align*}
F_{n', l, \tilde{n},n}=\{S_l \in R[n',l-1]^c \cap \partial_b R[n',\tilde{n}]\} \cap E_{l,n},
\end{align*}
which we will sometimes denote $F(n', l, \tilde{n},n)$ for typographical reasons.
Note that $F_{0,l,u_n,n}=A_{l,n}$ holds.
\begin{lem}\label{subs}
There are constants $c$, $C>0$ such that for any $n', l , \tilde{n}\in {\mathbb{N}} \cup \{0\}$ with $n'\le l \le \tilde{n}$ and $n,a \in {\mathbb{N}}$
with $l+k\lceil\beta_d n^2\rceil \le a\le \tilde{n}$
\begin{align*}
P(F_{n', l, \tilde{n},n} )
\le P(T_0<T_b\wedge k )^{\lceil \beta n^2\rceil}P(T_0 \wedge T_b >l-n')\times P(T_b>\tilde{n}-a)
\end{align*}
and
\begin{align*}
P(F_{n', l, \tilde{n},n})
\ge P(T_0<T_b\wedge k )^{\lceil \beta n^2\rceil}P(T_0 \wedge T_b >l-n')\times P(T_b>\tilde{n}).
\end{align*}
\end{lem}
\begin{proof}
We first remark that $l<T_{S_l,l}^{\lceil\beta n^2\rceil}$
holds and, hence, ${\cal F}(l) \subset {\cal F}(T_{S_l,l}^{\lceil\beta n^2\rceil})$.
By taking a conditional expectation with respect to ${\cal F}(T_{S_l,l}^{\lceil\beta n^2\rceil})$, we obtain
\begin{align}\notag
P(F_{n', l, \tilde{n},n})
=&E[1_{\{S_l\in R[n',l-1]^c \cap \partial_b R[n',T_{S_l,l}^{\lceil\beta n^2\rceil}] \}\cap E_{l,n}}\\
\label{t*}
&\times P(S_l\in \partial_b R[T_{S_l,l}^{\lceil\beta n^2\rceil},\tilde{n}]
|{\cal F}(T_{S_l,l}^{\lceil\beta n^2\rceil} ))].
\end{align}
On the event $E_{l,n}$, we have
\begin{align*}
0\le T_{S_l,l}^{\lceil\beta n^2\rceil} \le k\lceil \beta_d n^2 \rceil+T_{S_l,l}^0.
\end{align*}
Since $l=T_{S_l,l}^0$, our choice of $a$ and this inequality imply
\begin{align}\label{rr*}
0\le T_{S_l,l}^{\lceil\beta n^2\rceil} \le a.
\end{align}
The Markov property and the translation invariance for $S_l$ yield
\begin{align}\label{t*1}
P(S_l\in \partial_b R[T_{S_l,l}^{\lceil\beta n^2\rceil},\tilde{n}]
|{\cal F}(T_{S_l,l}^{\lceil\beta n^2\rceil} ))
= P(0 \in \partial_b R(\tilde{n}-t))|_{t=T_{S_l,l}^{\lceil\beta n^2\rceil}}.
\end{align}
Substituting (\ref{t*1}) into (\ref{t*})
and keeping (\ref{rr*}) in mind, we obtain
\begin{align}\notag
P(F_{n', l, \tilde{n},n})
\le&P(\{S_l\in R[n',l-1]^c\cap \partial_b R[n',T_{S_l,l}^{\lceil\beta n^2\rceil}]\}
\cap E_{l,n})\\
\label{t1}
&\times P(0 \in \partial_b R(\tilde{n}-a)),
\end{align}
and
\begin{align}\notag
P(F_{n', l, \tilde{n},n})
\ge&P(\{S_l\in R[n',l-1]^c \cap \partial_b R[n',T_{S_l,l}^{\lceil\beta n^2\rceil}]\}
\cap E_{l,n})\\
\label{t2}
&\times P(0 \in \partial_b R(\tilde{n})).
\end{align}
Thus, the proof is reduced to the estimate of the common first factor in the right hand side of (\ref{t1}) and (\ref{t2}).
If we take a conditional expectation with respect to ${\cal F}(l)$,
by the Markov property and the translation invariance for $S_l$, we obtain
\begin{align}\notag
&P(\{S_l\in R[n',l-1]^c \cap \partial_b R[n',T_{S_l,l}^{\lceil\beta n^2\rceil}]\}
\cap E_{l,n})\\
\label{t3}
= &P(S_l\in R[n',l-1]^c \cap \partial_b R[n',l])
\times P(\{0 \in \partial_b R(T_{0}^{\lceil \beta n^2\rceil})\}
\cap E_{0,n}).
\end{align}
By applying the Markov property repeatedly, it holds that
\begin{align}
\label{t4}
P(\{0 \in \partial_b R(T_{0}^{\lceil \beta n^2\rceil})\}
\cap E_{0,n})
=P(T_0<T_b\wedge k )^{\lceil \beta n^2\rceil},
\end{align}
and by the choice of $h$, there exist $c$, $C>0$ such that for any $n\in \mathbb{N}$
\begin{align}\label{t4*}
c\exp((h-1)n^2)
\le P(T_0<T_b\wedge k )^{\lceil \beta n^2\rceil}\le C\exp((h-1)n^2).
\end{align}
By considering a time-reversal, we obtain
\begin{align}
\label{t5}
P(S_l \in R[n',l-1]^c \cap \partial_b R[n',l])
=P(0\in R[1,l-n']^c \cap \partial_b R(l-n')).
\end{align}
In addition, for $m\in \mathbb{N}$, we have
$P(0\in R[1,m]^c \cap \partial_b R(m))=P(T_0 \wedge T_b >m)$, and
$P(0 \in \partial_b R(m))=P(T_b>m)$.
Therefore, by (\ref{t1}), (\ref{t2}), (\ref{t3}), (\ref{t4}) and (\ref{t5}), the desired formulas hold.
\end{proof}
\begin{rem}\label{pl}
We substitute $T_{S_l,l}^{\lceil\beta n^2\rceil}$ for $\tilde{n}$ in $F_{n', l, \tilde{n},n}$.
That is, for any $n', l \in {\mathbb{N}} \cup \{0\}$ with $n'\le l $, we write
\begin{align*}
F(n', l, T_{S_l,l}^{\lceil\beta n^2\rceil},n)
=\{S_l \in R[n',l-1]^c \cap \partial_b R[n',T_{S_l,l}^{\lceil\beta n^2\rceil}]\} \cap E_{l,n} .
\end{align*}
By the same argument, we obtain the following: for any
$n', l \in {\mathbb{N}} \cup \{0\}$ with $n'\le l $
\begin{align}\label{ppp*}
P(F(n', l, T_{S_l,l}^{\lceil\beta n^2\rceil},n) )
\le P(T_0<T_b\wedge k )^{\lceil \beta n^2\rceil}P(T_0 \wedge T_b >l-n').
\end{align}
(See the argument after (\ref{t3}).)
\end{rem}
\begin{cor}\label{al}
For any $l\in I_n$ and all sufficiently large $n\in \mathbb{N}$
with $u_n/n^{11}\le u_n-u_{n-1}$,
\begin{align}\notag
P(A_{l,n} )
\le &P(T_0 \wedge T_b >l)
\times P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}\\
\label{hy*}
&\times P(T_b >\frac{u_n}{n^{11}})
\end{align}
and
\begin{align}\notag
P(A_{l,n})
\ge &P(T_0 \wedge T_b >l)
\times P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}\\
\label{hy}
&\times P(T_b >u_n).
\end{align}
\end{cor}
\begin{proof}
For $l\in I_n$
\begin{align*}
l=T_{S_l,l}^0 \le u_{n-1}- k\lceil \beta_d n^2\rceil
\end{align*}
holds.
Therefore, by using Lemma \ref{subs} with $F_{0,l,u_n,n}=A_{l,n}$ and substituting $u_{n-1}$ for $a$ in the assumption of Lemma \ref{subs} we obtain (\ref{hy}) and
\begin{align*}
P(A_{l,n} )
\le &P(T_0 \wedge T_b >l)
\times P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}\\
&\times P(T_b >u_n-u_{n-1}).
\end{align*}
Therefore, we obtain (\ref{hy*}) for all sufficiently large $n\in \mathbb{N}$.
\end{proof}
\subsection{Proof of Lemmas \ref{hh+} and \ref{hh} for $d\ge3$}\label{e1}
By Corollary \ref{al},
we obtain the following estimate of $P(A_{l,n})$.
\begin{lem}
There exist constants $C$, $c>0$ such that for any $n\in \mathbb{N}$ and $l\in I_n$
\begin{align}\label{ju}
P(A_{l,n})
\le & C\exp((h-1)n^2)\\
\label{ju*}
P(A_{l,n})
\ge &c\exp((h-1)n^2).
\end{align}
Moreover, for any $n\in \mathbb{N}$ and $l\in I_n$
\begin{align}\notag
P(A_{l,n})
= &P(T_0\wedge T_b=\infty)\times P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}\times P(T_b=\infty)\\
\label{ju**}
&+O(\exp((h-1)n^2-cn^2)).
\end{align}
\end{lem}
\begin{proof}
Since (\ref{t4*}) and (\ref{ju**}) yield (\ref{ju}) and (\ref{ju*}),
we only need to prove (\ref{ju**}).
First, we introduce some estimates of hitting times.
Since $P(S_n=0)=O(n^{-\frac{d}{2}})$, we obtain $P(T_0=n)=O(n^{-\frac{d}{2}})$
and, hence, for $d\ge 3$ and $b\in {\cal N}(0)$,
\begin{align}\label{g7}
P(n\le T_0< \infty)&\le \sum_{m=n}^{\infty} O(m^{-\frac{d}{2}} )=O( n^{-\frac{d}{2}+1}).
\end{align}
In addition, by the Markov property and the translation invariance for $b'$ we have
\begin{align}\label{kkk}
P(T_0 \ge a+1)
=\frac{1}{2d}\sum_{b'\in {\cal N}(0)}P^{b'}(T_0 \ge a)
=\frac{1}{2d}\sum_{b'\in {\cal N}(0)}P(T_{-b'} \ge a)
=P(T_{b}\ge a),
\end{align}
for $a\in \mathbb{N}$.
Hence, (\ref{g7}) yields
\begin{align}\label{g8}
P(n\le T_b<\infty)&=O(n^{-\frac{d}{2}+1}).
\end{align}
Moreover, it holds that
\begin{align*}
&P(n\le T_0\wedge T_b<\infty)
\le P(\{n\le T_0<\infty\}\cup\{ n\le T_b<\infty\})\\
=&P(n\le T_0<\infty)+P(n\le T_b<\infty),
\end{align*}
and so with (\ref{g7}) and (\ref{g8}),
\begin{align}
\label{g9}
P(n\le T_0\wedge T_b<\infty)&= O(n^{-\frac{d}{2}+1}).
\end{align}
Therefore, by (\ref{g7}), (\ref{g8}) and (\ref{g9}) there exists $c>0$ such that for any $n\in \mathbb{N}$ and $l \in I_n$
\begin{align}
\label{bb0}
&P(T_0 \wedge T_b>l )
= P(T_0\wedge T_b=\infty)+O(\exp(-cn^2)),\\
\label{bb1}
&P(T_b> u_{n})
=P(T_b=\infty)+O(\exp(-cn^2)),\\
\label{bb2}
&P(T_b >\frac{u_n}{n^{11}})
= P(T_b=\infty)+O(\exp(-cn^2)).
\end{align}
Substituting (\ref{bb0}), (\ref{bb1}) and (\ref{bb2}) for the right hand sides of (\ref{hy*}) and (\ref{hy}),
we obtain the desired formula.
\end{proof}
\begin{proof}[Proof of Lemma \ref{hh+} for $d\ge3$]
Since for all sufficiently large $n\in \mathbb{N}$
\begin{align}\label{lo}
\sharp I_n\ge u_{n-1}-k\lceil \beta_d n^2\rceil -\frac{u_{n-1}}{n^2}\ge c u_{n-1}
\end{align}
holds, by (\ref{ju*}) and the definition of $Q_n$ in (\ref{yy}), we obtain
\begin{align*}
EQ_n\ge \sharp I_n \times \min_{l\in I_n} P(A_{l,n})
\ge c\exp(hn^2-2n),
\end{align*}
as required.
\end{proof}
To prove Lemma \ref{hh},
we decompose $I_n\times I_n$ into three parts $J_{n,j}$, $j = 1,2,3$ defined by
\begin{align*}
&J_{n,1}:=\{(l,l')\in I_n\times I_n:0\le l'-l\le k \lceil \beta_d n^2\rceil \},\\
&J_{n,2}:=\{(l,l')\in I_n\times I_n:k \lceil \beta_d n^2\rceil < l'-l\le 2\lceil \frac{u_{n-1}}{n^{10}} \rceil \},\\
&J_{n,3}:=\{(l,l')\in I_n\times I_n:2\lceil \frac{u_{n-1}}{n^{10}} \rceil< l'-l \}.
\end{align*}
For all sufficiently large $n\in \mathbb{N}$, $k \lceil \beta_d n^2\rceil < 2\lceil u_{n-1}/n^{10} \rceil$ holds and hence
$J_{n,2}$ is non-empty.
By a simple computation,
\begin{align}\notag
\mathrm{Var} (Q_n)=&EQ_n^2-(EQ_n)^2\\
\notag
\le &2\sum_{l,l' \in I_n,l\le l'}
(E[1_{A_{l,n}}1_{A_{l',n}}]-E[1_{A_{l,n}}]E[1_{A_{l',n}}])\\
\notag
\le &2(\sum_{(l,l') \in J_{n,1}} E[1_{A_{l,n}}]
+\sum_{(l,l') \in J_{n,2}}
E[1_{A_{l,n}}1_{A_{l',n}}]\\
\label{formula2}
+&\sum_{(l,l') \in J_{n,3}}
(E[1_{A_{l,n}}1_{A_{l',n}}]-E[1_{A_{l,n}}]E[1_{A_{l',n}}])).
\end{align}
\begin{lem}\label{ku}
There exists $C>0$ such that for any $n\in \mathbb{N}$
\begin{align}\label{oo}
\sum_{(l,l') \in J_{n,1}} E[1_{A_{l,n}}]\le Cn^2\exp(hn^2-2n)
\end{align}
and
\begin{align}\label{o1}
\sum_{(l,l') \in J_{n,2}}
E[1_{A_{l,n}}1_{A_{l',n}}]
\le C\exp(2hn^2-4n)\times \frac{1}{n^{10}}.
\end{align}
\end{lem}
\begin{rem}\label{gh}
This Lemma also holds for $d=2$ by the same proof.
\end{rem}
\begin{proof}
First, we show (\ref{oo}).
By the definition, we have $\sharp J_{n,1} \le k\lceil \beta_d n^2\rceil u_{n-1}$.
Thus, (\ref{ju}) yields
\begin{align*}
\sum_{(l,l') \in J_{n,1}} E[1_{A_{l,n}}]
\le \sharp J_{n,1} \times \max_{l\in I_n}P(A_{l,n})
\le Cn^2\exp(hn^2-2n) .
\end{align*}
Hence, we obtain (\ref{oo}).
To show (\ref{o1}), let us introduce additional notations.
We define
\begin{align}
\label{f1}
\tilde{A}_{l,n}:=\{ S_l\in \partial_b R[l+1,T_{S_l,l}^{\lceil \beta n^2\rceil}] \}
\cap E_{l,n}.
\end{align}
Note that
\begin{align*}
&\tilde{A}_{l,n}=F(l, l, T_{S_l,l}^{\lceil\beta n^2\rceil},n),\\
&\tilde{A}_{l,n}\cap \{ S_l\in R(l-1)^c\cap \partial_b R(l) \}\cap \{ S_l\in \partial_b R[T_{S_l,l}^{\lceil \beta n^2\rceil},u_n] \}
=A_{l,n}
\end{align*}
and
\begin{align*}
\tilde{A}_{l,n}\in \sigma \{ S_j-S_l: j\in [l,T_{S_l,l}^{\lceil \beta n^2\rceil}]\}.
\end{align*}
By Remark \ref{pl}, we obtain
\begin{align}\label{hj}
P(\tilde{A}_{l,n})
=P(F(l, l, T_{S_l,l}^{\lceil\beta n^2\rceil},n))
\le C\exp((h-1)n^2).
\end{align}
We obtain (\ref{o1}) as follows:
by the definition of $A_{l,n}$ and $\tilde{A}_{l,n}$,
we have ${A}_{l,n}\subset \tilde{A}_{l,n}$ and ${A}_{l',n}\subset \tilde{A}_{l',n}$.
In addition, $\tilde{A}_{l',n}$ is independent of $\tilde{A}_{l,n}$ for $(l,l')\in J_{n,2}$.
Thus,
\begin{align*}
E[1_{A_{l,n}}1_{A_{l',n}}]
\le
E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}]
=E[1_{\tilde{A}_{l,n}}]E[1_{\tilde{A}_{l',n}}],
\end{align*}
and so by (\ref{hj}),
\begin{align*}
\sum_{(l,l') \in J_{n,2}}
E[1_{A_{l,n}}1_{A_{l',n}}]
\le &\sharp J_{n,2}\times
C(\exp((h-1)n^2))^2\\
\le &C\exp(2hn^2-4n)\times \frac{1}{n^{10}}.
\end{align*}
Therefore, we obtain (\ref{o1}).
\end{proof}
\begin{proof}[Proof of Lemma \ref{hh} for $d\ge3$]
We estimate the last sum appearing in (\ref{formula2}). To this end, set
\begin{align}\label{f2}
A'_{l,n}:=&\{ S_l\in R(l-1)^c \cap\partial_b R(l+\lceil \frac{u_{n-1}}{n^{10}} \rceil)\} \cap E_{l,n}\\
\label{f6}
A''_{l',n}:=&\{S_{l'}\in R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l'-1]^c\cap \partial_b R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,u_{n}] \} \cap E_{l',n}.
\end{align}
Note that
\begin{align*}
A'_{l,n}=F_{0,l,l+\lceil \frac{u_{n-1}}{n^{10}} \rceil,n},\quad
A''_{l',n}=F_{l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l',u_n,n}.
\end{align*}
By Lemma \ref{subs}, (\ref{bb0}) and (\ref{bb2}),
we can estimate $P(A'_{l,n})$ and $P(A''_{l',n})$ as follows:
for any $l \in I_n$ and all sufficiently large $n\in \mathbb{N}$ with
$ u_{n-1}/n^{11} \le (\lceil u_{n-1}/n^{10}\rceil-k\lceil\beta n^2\rceil) \wedge l$
\begin{align}\notag
&P(A'_{l,n})=P(F_{0,l,l+\lceil \frac{u_{n-1}}{n^{10}} \rceil,n} )\\
\notag
\le &P(T_0\wedge T_b>l )
\times P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}
\times P(T_b> \frac{u_{n-1}}{(n-1)^{11}} )\\
\label{dd}
\le&
P(T_b=\infty)P(T_0\wedge T_b=\infty)
P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}
+O(\exp((h-1)n^2-cn^2))
\end{align}
and for any $l' \in I_n$ and all sufficiently large $n\in \mathbb{N}$ with $u_n/n^{11}\le u_n-(l'+k\lceil\beta n^2\rceil)$ and $u_{n-1}/n^{11} \le\lceil u_{n-1}/n^{10}\rceil\le l'$
\begin{align}\notag
&P(A''_{l',n})=P(F_{l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l',u_n,n} )\\
\notag
\le &P(T_0\wedge T_b>\lceil \frac{u_{n-1}}{n^{10}} \rceil )
\times P(T_0<T_b \wedge k)^{\lceil\beta n^2\rceil}
\times P(T_b> \frac{u_{n}}{n^{11}} )\\
\label{ddd}
\le &P(T_b=\infty)P(T_0\wedge T_b=\infty)
P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}
+O(\exp((h-1)n^2-cn^2)).
\end{align}
Therefore, by (\ref{ju**}), (\ref{dd}) and (\ref{ddd}), we obtain
\begin{align}\notag
&\sum_{(l,l') \in J_{n,3}}
(E[1_{A_{l,n}}1_{A_{l',n}}]-E[1_{A_{l,n}}]E[1_{A_{l',n}}])\\
\notag
\le &\sum_{(l,l') \in J_{n,3}}
(E[1_{A'_{l,n}}]E[1_{A''_{l',n}}]-E[1_{A_{l,n}}]E[1_{A_{l',n}}])\\
\notag
\le &\sum_{(l,l') \in J_{n,3}}
(P(T_b=\infty)^2P(T_0\wedge T_b=\infty)^2P(T_0<T_b \wedge k )^{2\lceil\beta n^2\rceil}\\
\notag
&-P(T_b=\infty)^2P(T_0\wedge T_b=\infty)^2P(T_0<T_b \wedge k )^{2\lceil\beta n^2\rceil}\\
\notag
&+ O(\exp(2(h-1)n^2-cn^2) ))\\
\label{pp3}
\le &C(\exp(hn^2-2n))^2\times\exp(-cn^2).
\end{align}
By (\ref{oo}), (\ref{o1}) and (\ref{pp3}),
the right hand side of (\ref{formula2}) is bounded by
$C\exp(2hn^2-4n)\times 1/n^{10}$ for all sufficiently large $n$.
This completes the proof of Lemma \ref{hh} for $d\ge3$.
\end{proof}
\subsection{Proof of Lemmas \ref{hh+} and \ref{hh} for $d=2$}\label{e2}
First, we state a lemma that is important for our proof of Lemma \ref{hh}.
\begin{lem}
There exists $C>0$ such that for any $n\in \mathbb{N}$ and $x\in {\mathbb{Z}^{2}}$
with $0<|x|< n\sqrt{u_{n-1}}$
\begin{align}\label{fo3*}
P(T_0\wedge T_x> \lceil \frac{u_n}{n} \rceil) \le \frac{\pi}{(n+1)^2}+C\frac{\log n}{n^4}.
\end{align}
Moreover, it holds that for $b \in{\cal N}(0)$
\begin{align}\label{fo3}
P(T_b\wedge T_{x+b}> \lceil \frac{u_n}{n} \rceil) \le \frac{\pi}{(n+1)^2}+C\frac{\log n}{n^4}.
\end{align}
\end{lem}
\begin{proof}
We prove only the first claim since the second one follows from (\ref{fo3*}) by a similar observation as in (\ref{kkk}).
Decomposing the whole event by means of the last exit time from $\{0,x\}$ by time $\lceil u_n/n\rceil$, we obtain
\begin{align}\notag
1=&\sum_{k=0}^{ \lceil u_n/n\rceil}P(S_{k}=0)P^0(0,x \notin R[1,\lceil \frac{u_n}{n} \rceil-k])\\
\label{r1}
+&\sum_{k=0}^{\lceil u_n/n \rceil}P(S_{k}=x)P^x(0,x \notin R[1,\lceil \frac{u_n}{n} \rceil-k]).
\end{align}
By the local central limit theorem (see Theorem $1.2.1$ in \cite{Law}), there exists
$c>0$ such that for any $k\in \mathbb{N}$ and $x\in {\mathbb{Z}^2}$
\begin{align*}
\begin{cases}
&\displaystyle{P(S_k=x) \ge \frac{2}{\pi k} \exp(-\frac{|x|^2}{k})-\frac{c}{k^2}}\quad \quad\text{ if } k \rightleftharpoons x\\
&\displaystyle{P(S_k=x) =0} \quad \quad \quad \quad \quad \quad \quad \quad\text{ if } k+1 \rightleftharpoons x,
\end{cases}
\end{align*}
where, for $x=(x_1,x_2)\in {\mathbb{Z}^2}$, $k\rightleftharpoons x$ means that $k+x_1+x_2$ is even.
Let
\begin{align*}
\gamma(n)=P(T_0\wedge T_x>\lceil \frac{u_n}{n}\rceil)
=P(0,x \notin R[1,\lceil \frac{u_n}{n} \rceil]).
\end{align*}
By the invariance of the walk under the symmetries of ${\mathbb{Z}^2}$,
for $a\le \lceil u_n/n\rceil$
$$\gamma(n) \le P^0(0,x \notin R[1,a])
=P^0(-x,0 \notin R[1,a])
=P^x(0,x \notin R[1,a]).$$
Note that
$\sum_{k=1}^{\lceil u_n/n \rceil}\frac{2}{\pi k} 1_{\{k \rightleftharpoons x\}}
\ge\sum_{m=1}^{\lfloor \lceil u_n/n \rceil/2\rfloor}\frac{1}{\pi m}>\frac{1}{ \pi} \log (\lceil u_n/n \rceil/2)$ holds.
Then, by (\ref{r1}) we obtain
\begin{align*}
1\ge &\bigg(\sum_{k=1}^{\lceil u_n/n \rceil}
(\frac{2}{\pi k} -\frac{c}{k^2})1_{\{k \rightleftharpoons 0\}}
+\sum_{k=1}^{\lceil u_n/n\rceil}
\bigg(\frac{2}{\pi k} \exp(-\frac{|x|^2}{k})-\frac{c}{k^2}\bigg)
1_{\{k \rightleftharpoons x\}}
\bigg)\gamma(n)\\
\ge &\bigg(\frac{n^2}{\pi}-\frac{\log n}{\pi}-c
+\sum_{k=\lceil n|x|^2\rceil}^{\lceil u_n/n \rceil}\frac{2}{\pi k} \exp(-\frac{|x|^2}{k})1_{\{k \rightleftharpoons x\}}
\bigg)\gamma(n)\\
\ge &\bigg(\frac{n^2}{\pi}-\frac{\log n}{\pi}-c
+\sum_{k=\lceil n|x|^2\rceil}^{\lceil u_n/n \rceil}\frac{2}{\pi k} \exp(-\frac{1}{n})1_{\{k \rightleftharpoons x\}}
\bigg)\gamma(n)\\
\ge &\bigg(\frac{n^2}{\pi}-\frac{\log n}{\pi}-c
+\sum_{k=n^3 u_{n-1}}^{\lceil u_n/n \rceil}\frac{2}{\pi k} \exp(-\frac{1}{n})1_{\{k \rightleftharpoons x\}}
\bigg)\gamma(n)\\
\ge &\frac{1}{\pi}\bigg(n^2-\log n-c
+n^2-(n-1)^2-4\log n -(1-\exp(-\frac{1}{n}))2n\bigg)
\gamma(n)\\
\ge &\frac{1}{\pi}\bigg((n+1)^2-5\log n -c\bigg)
\gamma(n).
\end{align*}
Thus the assertion follows from an easy rearrangement.
\end{proof}
To prove Lemma \ref{hh+}, we first introduce the following lemma.
\begin{lem}
There exist constants $C$, $c>0$ such that for any $n\in \mathbb{N}$ and $l\in I_n$
\begin{align}
\label{lu}
P(A_{l,n})&\le \frac{C\exp((h-1)n^2)}{n^4},\\
\label{lu*}
P(A_{l,n})&\ge \frac{c\exp((h-1)n^2)}{n^4}.
\end{align}
In addition, for any integer $n\ge 2$ and $l\in I_n$
\begin{align}
\label{q1}
P(A_{l,n})= \frac{\pi^2 P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil} }{2n^2(n-1)^2}+O(\frac{\exp((h-1)n^2)\log n}{n^6}).
\end{align}
\end{lem}
\begin{proof}
Since (\ref{t4*}) and (\ref{q1}) yield (\ref{lu}) and (\ref{lu*}),
we only prove (\ref{q1}).
First we introduce the following estimates: for any $M>0$ there exists $C>0$ such that for any $n\in \mathbb{N}$
\begin{align}
\label{fo1}
\frac{\pi}{n^2}-C\frac{\log n}{n^4} \le P(T_b>\frac{u_n}{n^M})\le \frac{\pi}{n^2}+C\frac{\log n}{n^4},\\
\label{fo2}
\frac{\pi}{2n^2}-C\frac{\log n}{n^4} \le P(T_0\wedge T_b> \frac{u_n}{n^M})\le \frac{\pi}{2n^2}+C\frac{\log n}{n^4}.
\end{align}
Since we know
$$P(T_0> n)=\frac{\pi}{\log n}+O\bigg(\frac{1}{(\log n)^2}\bigg)$$
(see $(2.5)$ in \cite{dvo1}),
the assertion (\ref{fo1}) follows by a simple calculation using (\ref{kkk}).
For the latter assertion, we already know a weaker estimate of (\ref{fo2}) involving only the leading term.
(See Lemma $3.3$ in \cite{okada}.)
We can obtain the error term of (\ref{fo2}) by modifying the proof in \cite{okada}
along the argument in \cite{dvo1} in a straightforward way.
Thus, we omit the proofs of (\ref{fo1}) and (\ref{fo2}).
From (\ref{fo1}) and (\ref{fo2}),
we already have estimates of each term in (\ref{hy*}) and (\ref{hy}).
Indeed, for any $l\in I_n$ and all sufficiently large $n\in \mathbb{N}$,
(\ref{fo1}) yields
\begin{align}\label{rw}
&P(T_b>\frac{u_{n}}{n^{11}} )=\frac{\pi}{n^2}+O(\frac{\log n}{n^4}),\\
\notag
&P(T_b> u_n)=\frac{\pi}{n^2}+O(\frac{\log n}{n^4}).
\end{align}
Since $l \in I_n$, (\ref{fo2}) implies
\begin{align}\label{n1}
P(T_0 \wedge T_b >l)
=\frac{\pi}{2(n-1)^2}+O(\frac{\log n}{n^4}).
\end{align}
Therefore, by substituting these estimates into the right hand sides of (\ref{hy*}) and (\ref{hy})
we obtain the desired formula.
\end{proof}
\begin{proof}[Proof of Lemma \ref{hh+} for $d=2$]
Recall (\ref{lo}).
Then, (\ref{yy}) and (\ref{lu*}) yield
\begin{align*}
EQ_n \ge \sum_{l\in I_n}
\frac{ c \exp((h-1)n^2) }{n^4}
\ge \frac{c\exp(hn^2-2n)}{n^4}
\end{align*}
for any $n\in \mathbb{N}$, as desired.
\end{proof}
\begin{proof}[Proof of Lemma \ref{hh} for $d=2$]
By the same argument as for $d\ge 3$, we obtain (\ref{formula2}) for $d=2$.
We consider the estimate of the right hand side of (\ref{formula2}).
The first term and the second term of the right hand side of (\ref{formula2}) are already estimated by Lemma \ref{ku}.
(Note Remark \ref{gh}.)
To estimate the third term,
we will give a uniform upper bound of $P(A_{l,n}\cap A_{l',n})$ for $(l,l')\in J_{n,3}$.
Here, uniform means that the bound is independent of the choice of $(l,l')\in J_{n,3}$.
Instead of using $A''_{l',n}$ in (\ref{f6}) as we did when $d\ge 3$,
we use more complicated events.
Let
\begin{align*}
A'''_{l,l',n}:=A''_{l',n}\cap \{S_l\in \partial_b R[T_{S_{l'},l'}^{\lceil\beta n^2\rceil} ,u_{n}]\}.
\end{align*}
Recall the definition of $A'_{l,n}$ in (\ref{f6}).
By the definition of $A_{l,n}$ and ${A'}_{l,n}$,
we have ${A}_{l,n}\subset A'_{l,n}$ and ${A}_{l,n}\cap A_{l',n}\subset A'''_{l,l',n}$.
Note that $A'_{l,n}$ is not independent of $A'''_{l,l',n}$ for $(l,l')\in J_{n,3}$.
Denote the event $0<|S_{l'}-S_l|<n \sqrt{u_{n-1}}$ by $D_1$,
and the event $|S_{l'}-S_l|\ge n\sqrt{u_{n-1}}$ by $D_2$.
Since $S_{l'}\notin R(l'-1)$ on $A_{l',n}$,
we have $\{S_l=S_{l'}\} \cap A_{l',n}=\emptyset$ for $l<l'$.
Thus, $A_{l',n}=(A_{l',n}\cap D_1)\cup(A_{l',n}\cap D_2)$ and therefore
$A_{l,n} \cap A_{l',n}\subset
(A'_{l,n} \cap A'''_{l,l',n}\cap D_1)
\cup (\tilde{A}_{l,n} \cap \tilde{A}_{l',n} \cap D_2)$ holds.
Then, the following holds:
\begin{align}\label{se1}
E[1_{A_{l,n}}1_{A_{l',n}}]
\le
(E[1_{A'_{l,n}}1_{A'''_{l,l',n}}1_{D_1}]
+E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}1_{D_2}]).
\end{align}
Hence, by putting (\ref{rw}) and (\ref{n1}) into the right hand side of the inequalities given in Lemma \ref{subs}, we can see that there exists $C>0$
such that for any $ l\in I_n$ and all sufficiently large $n\in \mathbb{N}$ with
$u_{n-1}/(n-1)^{11}\le (\lceil u_{n-1}/n^{10}\rceil - k\lceil \beta n^2\rceil) \wedge l $
\begin{align}\notag
&P(A'_{l,n})=P(F_{0,l,l+\lceil \frac{u_{n-1}}{n^{10}} \rceil,n})\\
\notag
\le &P(T_0 \wedge T_b >l )
\times P(T_0<T_b\wedge k)^{\lceil \beta n^2\rceil}
\times
P(T_b> \frac{u_{n-1}}{(n-1)^{11}})\\
\label{q2+}
\le& \frac{\pi }{2(n-1)^2}
\times P(T_0<T_b\wedge k)^{\lceil \beta n^2\rceil}
\times \frac{\pi }{(n-1)^2}
+C \frac{\exp((h-1)n^2)\log n}{n^6}.
\end{align}
Taking the conditional probability of the event $A'_{l,n}\cap A'''_{l,l',n} \cap D_1$ given ${\cal F}(T_{S_{l'},l'}^{\lceil\beta n^2\rceil})$
and using (\ref{q2+}), we see that for any $ l,l' \in I_n$,
\begin{align}\notag
&E[1_{A'_{l,n}}1_{A'''_{l,l',n}}1_{D_1}]\\
\notag
=&E[1_{A'_{l,n}} 1_{D_1}
1\{\{S_{l'}\notin R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l'-1],\\
\notag
&S_{l'} \in \partial _b R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,T_{S_{l'},l'}^{\lceil\beta n^2\rceil}]\}\cap E_{l',n}\}\\
\label{eee}
&P(S_{l'},S_l\in\partial_b R[T_{S_{l'},l'}^{\lceil\beta n^2\rceil},u_{n}]
|{\cal F}(T_{S_{l'},l'}^{\lceil \beta n^2\rceil}))].
\end{align}
Note that $T_{x',l}^{\lceil\beta n^2\rceil} <u_{n-1}$ if $(l,l') \in J_{n,3}$.
Hence, by (\ref{fo3}), we see that
for all sufficiently large $n\in \mathbb{N}$
with $u_{n}-u_{n-1}\ge u_n/n$ and $x,x' \in {\mathbb{Z}^2}$ with $0<|x-x'|< n \sqrt{u_{n-1}}$,
it holds that
\begin{align*}
&P(x,x'\in \partial_b R[T_{x',l}^{\lceil\beta n^2\rceil},u_{n}]
|{\cal F}(T_{x',l}^{\lceil \beta n^2\rceil }))\\
=&P(0, x-x' \in \partial_b R(u_{n}-t))|_{t=T_{x',l}^{\lceil\beta n^2\rceil}}\\
&\le \max_{0<|x-x'|< n \sqrt{u_{n-1}} }
P(T_b\wedge T_{x-x'+b}>\lceil \frac{u_n}{n}\rceil )
\le \frac{\pi }{(n+1)^2}+\frac{C\log n}{n^4}.
\end{align*}
Applying the inequalities in the last line with $x=S_l$ and $x'=S_{l'}$,
the right hand side of (\ref{eee}) is bounded by
\begin{align}\notag
&E[1_{A'_{l,n}}
1 \{ \{S_{l'}\in R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l'-1]^c
\cap \partial_b R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,T_{S_{l'},l'}^{\lceil \beta n^2\rceil}]\}\cap E_{l',n}\} ]\\
\label{cl}
&\times \bigg(\frac{\pi }{(n+1)^2}+\frac{C\log n}{n^4}\bigg).
\end{align}
Moreover, it holds that
\begin{align}\notag
&E[1_{A'_{l,n}}1\{ \{S_{l'}\in R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil ,l'-1]^c
\cap \partial_b R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,T_{S_{l'},l'}^{\lceil \beta n^2\rceil}]\}\cap E_{l',n}\}] \\
\notag
=&E[1_{A'_{l,n}}] E[1\{ \{S_{l'}\in R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil ,l'-1]^c
\cap \partial_b R[l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,T_{S_{l'},l'}^{\lceil \beta n^2\rceil}]\}\cap E_{l',n}\}]\\
\label{cl*}
=&E[1_{A'_{l,n}}]
E[1_{F(l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l',T_{S_{l'},l'}^{\lceil \beta n^2\rceil},n )}].
\end{align}
By substituting (\ref{ppp*}) into Remark \ref{pl}, we obtain
\begin{align*}
&P(F(l'-\lceil \frac{u_{n-1}}{n^{10}} \rceil,l',T_{S_{l'},l'}^{\lceil \beta n^2\rceil},n ))\\
\le &\frac{\pi P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}}{2(n-1)^2}
+C\frac{ \exp((h-1)n^2) \log n}{n^{4}}.
\end{align*}
Therefore, (\ref{q2+}), (\ref{cl}) and (\ref{cl*}) yield
\begin{align}\notag
&E[1_{A'_{l,n}}1_{A'''_{l,l',n}}1_{D_1}]\\
\label{smp1}
\le &\frac{\pi^4P(T_0<T_b \wedge k )^{2\lceil\beta n^2\rceil}}{4(n-1)^6(n+1)^2}
+C\frac{ \exp(2(h-1)n^2) \log n}{n^{10}}.
\end{align}
Now, we turn to the estimate of $E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}1_{D_2}]$.
From the large deviation result (see $(11)$ in \cite{Law3}),
there exist $C$, $c>0$ such that
for any $n$, $m\in\mathbb{N} \cap \{1\}^c$ with $m \le u_{n-1}$
\begin{align}\label{hhy}
P(|S_m|\ge n \sqrt{u_{n-1}})
\le Ce^{-cn}.
\end{align}
Thus, by the strong Markov property, we can estimate
$E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}1_{D_2}]$
for any $(l,l') \in J_{n,3}$ as
\begin{align}\notag
E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}1_{D_2}]
= &E[1_{\tilde{A}_{l,n}}1_{D_2}]
E[1_{\tilde{A}_{l',n}}]\\
\notag
=&E[1_{\tilde{A}_{l,n}}
E[1_{D_2}|{\cal F}(T_{S_{l},l}^{\lceil \beta n^2\rceil })]]
E[1_{\tilde{A}_{l',n}}]\\
\notag
=&E[1_{\tilde{A}_{l,n}}
P(|S_{l'-l-t}|\ge n \sqrt{u_{n-1}})|_{t=T_{S_{l}, l}^{\lceil \beta n^2\rceil} }]
E[1_{\tilde{A}_{l',n}}]\\
\notag
\le&E[1_{\tilde{A}_{l,n}}
\max_{|l-l'|\le u_{n-1}}P(|S_{l'-l}|\ge n \sqrt{u_{n-1}})]
E[1_{\tilde{A}_{l',n}}]\\
\notag
=&E[1_{\tilde{A}_{l,n}}]\max_{m\le u_{n-1}}
P(|S_{m}|\ge n \sqrt{u_{n-1}})
E[1_{\tilde{A}_{l',n}}]\\
\label{smp}
\le &C\frac{\exp(2(h-1)n^2)}{e^{cn}}.
\end{align}
The last inequality comes from (\ref{hj}) and (\ref{hhy}).
Finally, by (\ref{q1}), (\ref{smp1}) and (\ref{smp}),
we obtain the following estimate.
Since $\sharp J_{n,3}\le (u_{n-1})^2$, for any $n\in {\mathbb N}\cap \{0\}^c$,
\begin{align}\notag
&\sum_{(l,l') \in J_{n,3}}
(E[1_{A'_{l,n}}1_{A'''_{l,l',n}}1_{D_1}]
+E[1_{\tilde{A}_{l,n}}1_{\tilde{A}_{l',n}}1_{D_2}]
-E[1_{A_{l,n}}]E[1_{A_{l',n}}])\\
\notag
\le &\sum_{(l,l') \in J_{n,3}}
\bigg(\frac{\pi^4 P(T_0<T_b \wedge k)^{2\lceil\beta n^2\rceil}}{4(n-1)^6(n+1)^2}-
\frac{\pi^4 P(T_0<T_b \wedge k )^{2\lceil\beta n^2\rceil}}{4(n-1)^4n^4}\\
\notag
&+C\frac{\exp(2(h-1)n^2)}{e^{cn}}+C\frac{\exp(2(h-1)n^2)\log n}{n^{10}}
\bigg)\\
\notag
\le &C\sum_{(l,l')\in J_{n,3}}
\frac{\exp(2(h-1)n^2)\log n}{n^{10}}\\
\label{pp3+1}
\le&C\bigg(\frac{\exp(hn^2-2n)}{n^4}\bigg)^2\times \frac{\log n}{n^2}.
\end{align}
The second inequality comes from the fact that there exists $C>0$ such that for any $n\in \mathbb{N}\cap \{1\}^c$
\begin{align}\label{nnn}
\frac{1}{(n-1)^6(n+1)^2}-\frac{1}{(n-1)^4n^4}\le \frac{C}{n^{10}}.
\end{align}
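Indeed, (\ref{nnn}) can be checked directly: bringing the two fractions to a common denominator gives
\begin{align*}
\frac{1}{(n-1)^6(n+1)^2}-\frac{1}{(n-1)^4n^4}
=\frac{n^4-(n-1)^2(n+1)^2}{(n-1)^6n^4(n+1)^2}
=\frac{2n^2-1}{(n-1)^6n^4(n+1)^2},
\end{align*}
and since $(n-1)^6\ge n^6/64$ for $n\ge 2$, the right hand side is bounded by $C/n^{10}$.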
By (\ref{oo}), (\ref{o1}) and (\ref{pp3+1}),
the right hand side of (\ref{formula2}) is bounded by a constant multiple of
$(\exp(hn^2-2n)/n^4)^2 \times \log n/n^2$.
This completes the proof of Lemma \ref{hh} for $d=2$.
\end{proof}
\begin{rem}\label{bb}
We observe what happens if we try to estimate the third term of the right hand side of
(\ref{formula2})
in the case
$d=2$ by the same argument as in the case $d\ge3$.
Recall the definition of $A''_{l',n}$ in (\ref{f6}).
Then, by substituting (\ref{ppp*}) into Lemma \ref{subs}, we can see that
for any $ l'\in I_n$,
$$P(A''_{l',n})\le \frac{\pi^2 P(T_0<T_b \wedge k )^{\lceil\beta n^2\rceil}}{2(n-1)^2n^2}+O(\frac{\exp((h-1)n^2)\log n}{n^{6}}).$$
Hence, if we choose $A''_{l',n}$ instead of $A'''_{l,l',n}$ in (\ref{se1}),
\begin{align*}
E[1_{A'_{l,n}}1_{A''_{l',n}}]
=&E[1_{A'_{l,n}}]E[1_{A''_{l',n}}]\\
\le &\frac{\pi^4 P(T_0<T_b \wedge k)^{2\lceil\beta n^2\rceil}}{4(n-1)^6n^2}+O(\frac{\exp(2(h-1)n^2)\log n}{n^{10}}).
\end{align*}
Based on this estimate, we obtain
\begin{align*}
&\sum_{(l,l') \in J_{n,3}}
(E[1_{A'_{l,n}}1_{A''_{l',n}}]
-E[1_{A_{l,n}}]E[1_{A_{l',n}}])\\
\le &\sum_{(l,l') \in J_{n,3}}
\bigg(\frac{\pi^4 P(T_0<T_b \wedge k )^{2\lceil\beta n^2\rceil}}{4(n-1)^6n^2}-
\frac{\pi^4 P(T_0<T_b \wedge k)^{2\lceil\beta n^2\rceil}}{4(n-1)^4n^4}\\
&+O(\frac{\exp(2(h-1)n^2)\log n}{n^{10}})\bigg)\\
\le &C\sum_{(l,l') \in J_{n,3}}
\frac{\exp(2(h-1)n^2)}{n^9}\\
\le&C(\frac{\exp(hn^2-2n)}{n^4})^2\times \frac{1}{n}.
\end{align*}
The second inequality comes from the fact that there exists $C>0$ such that for any $n\in \mathbb{N}\cap \{1\}^c$
\begin{align*}
\frac{1}{(n-1)^6n^2}-\frac{1}{(n-1)^4n^4}\le \frac{C}{n^9}.
\end{align*}
(Compare with (\ref{nnn}).)
Consequently, with the aid of Lemma \ref{hh+}, we obtain
$$\frac{\mathrm{Var} (Q_n)}{c^2 (EQ_n)^2}\le \frac{C}{n}.$$
This estimate is not sufficient to apply the Borel-Cantelli lemma as we did in the proof of Theorem \ref{m1} in Section $4.1$, and so that argument breaks down here.
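Quantitatively (a sketch): Chebyshev's inequality would only give
\begin{align*}
P\Big(Q_n\le \frac{1}{2}EQ_n\Big)\le \frac{4\,\mathrm{Var}(Q_n)}{(EQ_n)^2}\le \frac{C}{n},
\end{align*}
and $\sum_n n^{-1}=\infty$, so the first Borel-Cantelli lemma does not apply, whereas the corresponding bound $C\log n/n^{2}$ obtained via $A'''_{l,l',n}$ in the proof of Lemma \ref{hh} is summable in $n$.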
\end{rem}
\section{Proof of Theorem \ref{m2}}
\subsection{Proof of the upper bound of Theorem \ref{m2} for $d\ge2$}
\begin{proof}
Note that if we substitute $\beta_d \delta$ for $\beta$ in Lemma \ref{theta}, we obtain $E[\tilde{\Theta}_n(\beta_d \delta)]=O(n^{1-\delta})$.
By the Chebyshev inequality, we find that for any $\epsilon>0$ there exists $C>0$ such that
\begin{align*}
P\bigg(\tilde{\Theta}_n(\beta_d\delta)\ge \bigg(\frac{n}{2} \bigg)^{1-\delta+\epsilon}\bigg)
<Cn^{-\epsilon}.
\end{align*}
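In detail, the stated bound is an instance of Markov's (first Chebyshev) inequality combined with the moment estimate above:
\begin{align*}
P\bigg(\tilde{\Theta}_n(\beta_d\delta)\ge \Big(\frac{n}{2}\Big)^{1-\delta+\epsilon}\bigg)
\le \frac{E[\tilde{\Theta}_n(\beta_d\delta)]}{(n/2)^{1-\delta+\epsilon}}
\le \frac{Cn^{1-\delta}}{(n/2)^{1-\delta+\epsilon}}
=C2^{1-\delta+\epsilon}n^{-\epsilon}.
\end{align*}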
Using the Borel-Cantelli lemma, we see that the events
$\{\tilde{\Theta}_{2^k}(\beta_d\delta)\ge 2^{(k-1)(1-\delta+\epsilon)}\}$
happen only finitely often with probability one.
Hence, it holds that for any $\epsilon>0$
\begin{align}\label{hhhj}
\limsup_{k\to \infty } \frac{\log \tilde{\Theta}_{2^k}(\beta_d\delta)}{\log 2^{k-1} }
\le 1-\delta+\epsilon \quad { a.s.}
\end{align}
Note that if $K(n,x)\ge \lceil \beta_d \delta \log n \rceil $,
then for all $k$, $n\in {\mathbb N}$ with $2^{k-1}\le n<2^k$ we have
$T_x^{\lceil \beta_d \delta \log 2^{k-1} \rceil } \le n$ and $K(2^k,x)\ge \lceil \beta_d \delta \log 2^{k-1} \rceil$,
and hence $ {\Theta}_n(\delta) \le \tilde{\Theta}_{2^k}(\beta_d\delta)$ holds.
Thus, for all $k$, $n\in {\mathbb N}$ with $2^{k-1}\le n<2^k$, we have
$$ \frac{\log {\Theta}_n(\delta)}{\log n}
\le \frac{\log \tilde{\Theta}_{ 2^k }(\beta_d\delta)}{\log 2^{k-1} }.$$
Therefore, with (\ref{hhhj}) we obtain for any $\epsilon>0$,
\begin{align*}
\limsup_{n\to \infty } \frac{\log {\Theta}_n(\delta)}{\log n}
\le 1-\delta+\epsilon \quad { a.s.}
\end{align*}
The desired upper bound follows by letting $\epsilon$ tend to $0$ in these bounds.
\end{proof}
\subsection{Proof of the lower bound of Theorem \ref{m2} for $d\ge2$}
\begin{proof}
We closely follow the argument in the proof of Lemma $4.2$ with $\beta=\beta_d \delta$.
Take $k$ and $h_k$ as in Section $4.1$.
Set
\begin{align*}
W_n(\beta_d\delta)=\sharp \{ x\in \partial R(u_n) \cap R(u_{n-1}):
K(u_{n-1},x)\ge \lceil \beta_d\delta n^2\rceil \}.
\end{align*}
Note that $W_n(\beta_d\delta) \ge Q_n$ holds for any $n\in {\mathbb N}$.
Indeed, if $x \in \partial_b R(u_n)$, then $x \in \partial R(u_n)$.
Moreover, if $l\in I_n$, then $l+k\lceil\beta_d\delta n^2\rceil\le u_{n-1}$ holds for all sufficiently large $n\in {\mathbb N}$.
Therefore, by (\ref{rrr}) and Lemma \ref{hh+}, we have
\begin{align*}
P(W_n(\beta_d\delta) \ge \frac{1}{2}EQ_n \ge \frac{c\exp(h_kn^2-2n)}{n^4}
\quad\quad \text{ for all but finitely many }n)=1.
\end{align*}
Let $u_{m-1}\le n < u_m$.
Note that $K(u_{m-1},x)\le K(n,x)$ holds for all $x \in \mathbb{Z}^d$,
and by virtue of (\ref{el*}), $\partial R(u_m) \cap R(u_{m-1}) \subset \partial R(n)$.
Hence, $W_m(\beta_d\delta) \le \Theta_n(\delta)$ holds.
Therefore, it holds that
\begin{align*}
\liminf_{n \to \infty} \frac{\log \Theta_n(\delta)}{\log n}
\ge h_k
\quad \text{ a.s.}
\end{align*}
Since $h_k\to 1-(\beta_d\delta)/\beta_d$ as $k \to\infty$, the desired result holds, completing the proof.
\end{proof}
\section*{}
Graphene (gr) layers on metallic supports have been in the focus of solid state chemistry and physics and materials science for many years~\cite{Tontegode:1991ts,Oshima:1997ek,Batzill:2012,Dedkov:2015kp,Yang:2020fda}. These works, initially motivated by studies of the catalytic activity of transition metal surfaces, received increased attention after the discovery of the fascinating properties of graphene in 2004~\cite{Novoselov:2005es,Zhang:2005gp}. Following these discoveries of the transport properties of graphene, the renewed interest in graphene-metal systems led to several interesting findings. For example, it was shown that huge single- and bi-layer-thick graphene sheets can be synthesised on polycrystalline metal foils and then transferred onto the desired substrate for further applications (with some drawbacks)~\cite{Bae:2010,Ryu:2014fo}; it was proposed that graphene moir\'e structures on close-packed surfaces of $4d$ and $5d$ metals can be used for the fabrication of ordered arrays of clusters~\cite{NDiaye:2009a,MartinezGalera:2014hs,DiezAlbar:2019kq}, which are useful for fundamental studies of the catalytic properties of single clusters. The initially studied protective properties of graphene on metals, where graphene is considered as an inhibitor for surface reactions~\cite{Dedkov:2008d,Dedkov:2008e,Sutter:2010bx,Weatherup:2015cx}, were later extended to studies of the effects of confined catalysis at the graphene-metal interface, where, for example, the quantum effects due to the space confinement at the graphene-Pt(111) interface promote CO oxidation~\cite{Yao:2014hy,Shifa:2019gb}.
One of the interesting areas in the studies of graphene-metal interfaces is the possibility to modify the structural, electronic and magnetic properties of graphene via intercalation of different species. Here, the range of materials spans from metallic elements, like Cu, Ag, Au or Fe~\cite{Dedkov:2001,Dedkov:2008e,Varykhalov:2010a}, to gaseous or molecular species~\cite{Shikin:2000a,Granas:2013tl,Dedkov:2017jn}. Although the mechanism of intercalation in graphene-based interfaces is not fully clear and different pathways for the species penetration have been discussed in the literature~\cite{Emtsev:2011fo,Petrovic:2013vz,Vlaic:2014gx} without any theoretical support, intercalation allows one to create artificial graphene-metal interfaces with interesting structural, electronic and magnetic properties. However, in the case of metal intercalation in a graphene-metal interface, for example, the formation of sharp metallic interfaces vs. surface (interface) alloying always remains an open question in such studies, because in most cases the experimental methods used do not give a clear answer.
Here, we present a combined scanning tunnelling microscopy (STM) and density functional theory (DFT) study of Mn intercalation in the gr/Ru(0001) and gr/Ir(111) systems. The two parent systems are representative examples of strongly and weakly buckled graphene-metal interfaces, respectively, and they demonstrate strongly different behaviour upon interface formation after Mn intercalation. Despite the expected pseudomorphic growth of Mn on Ru(0001) and Ir(111) under graphene, with the formation of strongly buckled graphene above, we found that the formation of the metallic interface differs between the two cases. Whereas for gr-Mn-Ru(0001) the expected behaviour, with the formation of a strongly buckled graphene layer, is confirmed by STM and DFT, the gr-Mn-Ir(111) system surprisingly demonstrates the formation of flat graphene on top. Such behaviour in the latter case is explained by the formation of a buried Mn-Ir alloy underneath the gr/Ir interface bi-layer. These findings are confirmed by DFT calculations, and very good agreement is found between experimental and theoretical data.
Figure~\ref{fig:Mn_and_grIrRu_LargeScaleZoom} compiles the results on Mn intercalation in gr/Ru(0001) and gr/Ir(111) (full graphene layer and graphene islands). Images in columns (i) and (ii) are large-scale and atomically resolved STM images, respectively, of the parent systems. Both graphene-metal interfaces, gr/Ru(0001) and gr/Ir(111), are characterised by a relatively large mismatch between the graphene and metal lattices, which leads to the formation of so-called periodic moir\'e structures, clearly resolved in STM images as long-wave modulations of the atomically-resolved imaging contrast~\cite{Dedkov:2015iza} (Fig.~\ref{fig:Mn_and_grIrRu_LargeScaleZoom}, column ii). Here, several high-symmetry positions for the adsorption of carbon atoms in a graphene layer on close-packed metallic surfaces can be identified. They are named according to the adsorption position on the metal surface surrounded by a carbon ring -- ATOP, HCP, FCC~\cite{Voloshina:2012c,Dedkov:2015kp} -- with the ATOP position having the largest distance between graphene and the metal surface. In STM images collected at small bias voltages ($U_T<|\pm1|$\,V), graphene on Ru(0001) is imaged in the \textit{direct} contrast with the ATOP site as a bright spot~\cite{Stradi:2011be,Voloshina:2016jd}, whereas graphene on Ir(111) is imaged in the \textit{inverted} contrast with the ATOP site as a dark spot~\cite{Voloshina:2013dq}, which is explained by the formation of the respective interface electronic states in the vicinity of the Fermi level.
In our experiments, Mn was evaporated on gr/Ru(0001) and gr/Ir(111) from an e-beam source, and in both cases this led to the formation of ordered arrays of Mn clusters on top of the graphene layer (see Fig.\,S1 in the Supplementary Material for a summary). The STM images presented in Fig.\,S1 for the systems with Mn clusters formed on a graphene moir\'e structure were collected at small bias voltages, which are usually used to obtain atomic resolution of the graphene lattice. The absence of clear atomic resolution in the presented small-scale STM data (Fig.\,S1 in the Supplementary Material) confirms the formation of ordered arrays of Mn clusters in the considered systems. Careful and systematic analysis of the available experimental data allows us to conclude that Mn clusters are adsorbed at the HCP (C$^{top}$-C$^{fcc}$, carbon atoms are located above interfacial Ru atoms and above \textit{fcc} hollow sites) high-symmetry positions of gr/Ru(0001) and gr/Ir(111). In the case of the adsorption of Mn clusters on graphene islands on Ir(111), one finds that the Mn coverage strongly depends on the islands' alignment on the Ir(111) surface and correspondingly on the periodicity of the graphene moir\'e lattice. As was previously shown, this effect leads to different adsorption energies for metallic clusters on the graphene-metal system~\cite{Sutter:2012kb,Zhang:2020ba}, and for some angles (moir\'e periodicities) graphene remains free of metal atoms.
Annealing of the Mn/gr/Ru(0001) and Mn/gr/Ir(111) systems at $T_a=500^\circ$\,C for $15$\,min leads to the penetration of Mn atoms underneath the graphene layer (Fig.~\ref{fig:Mn_and_grIrRu_LargeScaleZoom}(iii,iv) and Fig.\,S2 in the Supplementary Material). However, the systems obtained after Mn intercalation demonstrate a strong difference in the resulting morphology, although in both cases the formation of a strongly corrugated graphene layer is expected after intercalation. This expectation is based on previous experimental results on the intercalation of open $3d$-shell metals, like Fe, Co and Ni, in gr/Ru and gr/Ir interfaces~\cite{Liao:2012jw,Pacile:2013jc,Decker:2013ch,Bazarnik:2013gl,Vlaic:2018fg,Zhao:2018gh}. In all referenced cases a sharp interface between graphene and the intercalated material is formed, where the atoms of the intercalant are pseudomorphically arranged on the close-packed surfaces of Ru(0001) or Ir(111). The resulting corrugation of the graphene layer in such systems is more than $1$\,\AA, with a spatially modulated interaction strength between graphene and the underlying $3d$ metal~\cite{Voloshina:2014jl,Dedkov:2015kp}.
After Mn intercalation in gr/Ru(0001), a strongly corrugated graphene layer is formed above Mn-Ru(0001), whereas a relatively flat graphene layer is formed on top of Mn-Ir(111) (Fig.~\ref{fig:Mn_and_grIrRu_LargeScaleZoom}(iii,iv) and Fig.\,S2 in the Supplementary Material). The corrugation of the graphene layer in these systems as extracted from the STM images is $1$\,\AA\ and $0.15$\,\AA\ for gr/Mn-Ru(0001) and gr/Mn-Ir(111), respectively. Moreover, a \mbox{$(2\times2)$} superstructure for the gr/Mn-Ir(111) system is clearly resolved in the STM images and confirmed by the corresponding Fast Fourier Transform (FFT) analysis (see the respective inset in Fig.~\ref{fig:Mn_and_grIrRu_LargeScaleZoom}). Taking into account all observations for both considered graphene-based interfaces, we can conclude that sharp interfaces between graphene, the Mn layer and the Ru(0001) support are formed in the gr/Mn-Ru(0001) system. In the case of gr/Mn-Ir(111) the situation is not so simple and requires additional analysis.
In order to understand the observed effects, we performed large-scale DFT calculations for different Mn-intercalation systems. In the first case, $1$\,ML Mn is placed between graphene and the close-packed metallic support, Ru(0001) or Ir(111) (Fig.~\ref{fig:DFT_results}(a)), while in the second case $1$\,ML Mn or $1$\,ML of the Mn-Ir(Ru) alloy is buried below the interface Ir(Ru) layer under graphene (Fig.~\ref{fig:DFT_results}(b)). In the latter case, ordered and disordered alloys were considered. As criteria for successful theoretical modelling of the experimental data, the total energies of the two competing systems as well as the agreement between experimental and theoretical STM images (particularly, the extracted graphene corrugation) are taken into account. Although alloying of the metallic intercalant with the metallic support of graphene is always discussed in graphene-metal related studies, this problem has not been studied in detail. In particular, the few available experimental and theoretical works consider the formation of a surface alloy between the atoms of the intercalant and the support~\cite{Drnec:2015kn,Brede:2016fq}, which explains the experimental data obtained in those works, especially the STM images. As will be discussed below, although sharp interfaces are formed in the case of gr/Mn-Ru(0001), such considerations do not reproduce the available experimental data for Mn intercalation in gr/Ir(111). Table\,S3 in the Supplementary Material summarises all information obtained from the DFT calculations (size of the unit cell, number of atoms in the system, total energies, corrugation of graphene, etc.), which is used below for the analysis of the observed effects.
For the gr/Mn-Ru(0001) system, the DFT-calculated total energy difference between gr/Ru/Mn/Ru(0001) and gr/Mn/Ru(0001) is $+13.022$\,eV (Fig.~\ref{fig:DFT_results_energies}), which corresponds to $+12$\,meV per atom in the considered system (assuming the atom alignment adopted from the experimental STM data: gr$^{ATOP}_{gr/Ru}$$\rightarrow$gr$^{HCP}_{gr/Mn/Ru}$ and gr$^{HCP}_{gr/Ru}$$\rightarrow$gr$^{ATOP}_{gr/Mn/Ru}$; the positions of the C atoms are taken with respect to the underlying metal slab). This result immediately shows that the gr/Mn/Ru(0001) system is formed during Mn intercalation in gr/Ru(0001). In this system graphene has a corrugation of $1.519$\,\AA\ with a minimal distance between graphene and the Mn layer of $1.861$\,\AA, i.\,e. the morphology of the gr/Mn/Ru(0001) system obtained in our DFT calculations is similar to that of the parent gr/Ru(0001) system (cf. $1.302$\,\AA\ and $2.123$\,\AA\ for the corrugation and the graphene-Ru distance, respectively). This result clearly supports the experimental observation that the morphology of graphene is very similar for the gr/Ru(0001) and gr/Mn/Ru(0001) areas in the STM images (Fig.~\ref{fig:Mn_and_grIrRu_LargeScaleZoom}). The respective calculated STM images for gr/Ru(0001) and gr/Mn/Ru(0001) are shown in Fig.~\ref{fig:DFT_results}(c).
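As a simple consistency check on these numbers: the per-atom value is the total-energy difference divided by the number of atoms in the supercell, so $13.022\,\mathrm{eV}$ at $+12$\,meV per atom corresponds to a model of roughly $13.022/0.012\approx 1.1\times 10^{3}$ atoms (cf. the supercell sizes listed in Table\,S3 in the Supplementary Material).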
In the case of the gr/Mn-Ir(111) intercalation system the situation is the opposite. The first assumption, describing this system as gr/Mn/Ir(111) with the Mn layer pseudomorphically arranged on Ir(111) and sharp interfaces between the layers, is not supported by the calculated equilibrium crystallographic structure. In such a structure graphene is strongly buckled, with a corrugation of $1.525$\,\AA\ and a minimal distance between graphene and Mn of $1.798$\,\AA. Thus such a structure is very similar to the previously studied gr/Ni/Ir(111)~\cite{Pacile:2013jc}, gr/Co/Ir(111)~\cite{Decker:2013ch}, and gr/Mn/Ru(0001) (see above) systems. This result is in contradiction with the observation from the STM experiments, where a relatively flat graphene layer with a corrugation of only $0.15$\,\AA\ was observed for the gr/Mn-Ir(111) system. The attempt to improve this situation via insertion of a Mn$_x$Ir$_y$ alloy (ordered or disordered) between graphene and Ir(111) does not lead to an acceptable result -- graphene remains strongly buckled, with a corrugation of more than $1.8$\,\AA\ (see Table\,S3 in the Supplementary Material), in strong contradiction with the experimental results.
A significant improvement is achieved only when buried Mn or MnIr monolayers are considered in the modelling of the gr/Mn-Ir(111) intercalation system. In this case the calculated total energies are significantly lower compared to those for the systems where Mn or MnIr is placed directly underneath the graphene layer (see Table\,S3 in the Supplementary Material). The lowering of the total energy ranges between $19.6$\,meV and $38.1$\,meV per atom in the considered systems (corresponding to $11.888$\,eV and $28.272$\,eV (Fig.~\ref{fig:DFT_results_energies}) for systems consisting of more than 600 atoms; see Table\,S3 in the Supplementary Material for the detailed structures). The graphene layer also becomes relatively flat, with a corrugation between $0.277$\,\AA\ and $0.429$\,\AA, comparable to the value of $0.358$\,\AA\ for the parent gr/Ir(111) interface. Taking into account that in an STM measurement the local electronic structure is probed and the graphene corrugation is measured indirectly, one can conclude that rather good agreement between experiment and theory is achieved in this case. Comparing the experimental and calculated STM images, we also conclude that an ordered MnIr alloy is formed, buried below the interface Ir layer under graphene. The respective calculated STM images for gr/Ir(111) and gr/Ir/IrMn/Ir(111) are shown in Fig.~\ref{fig:DFT_results}(d), and the FFT analysis of the calculated STM image of the latter system also clearly indicates the existence of a $(2\times2)$ periodicity in these data (see Fig.~S4 in the Supplementary Material).
The picture presented above for gr/Mn-Ru(0001) and gr/Mn-Ir(111) is supported by general considerations on the formation of surface and sub-surface (buried) alloys. The difference between the formation of the (A+B)/A and A/(A+B)/A systems is connected to the so-called segregation energy, $E_{seg}$, which can be calculated as the difference between the total energies of the system with the impurity in a surface layer and in the bulk~\cite{Ruban:1999kq,Ruban:1999kqa} (A is a close-packed host (Ru or Ir) and B is a solute (Mn)). If $E_{seg}<0$, a surface alloy can be formed [i.\,e. (A+B)/A], and if $E_{seg}>0$, the formation of a sub-surface alloy prevails [i.\,e. A/(A+B)/A]. The theoretical values of $E_{seg}$ for Mn-Ru and Mn-Ir are $-0.40$\,eV and $+0.09$\,eV, respectively, indicating the formation of a sub-surface MnIr alloy in the gr/Mn-Ir(111) system~\cite{Okamoto:1996ku}. In the case of the gr/Mn-Ru(0001) system, alloying of the two metallic components is unfavourable according to their phase diagram~\cite{Hellawell:1959gf}, thus leading to the formation of the well-ordered gr/Mn/Ru(0001) intercalation system with sharp interfaces between the layers (see Fig.\,S5 in the Supplementary Material for the Mn-Ru and Mn-Ir phase diagrams~\cite{Raub:1955aa}). Similar observations were also made for the gr/Mn-Rh(111) and hBN/Mn-Rh(111) systems ($E_{seg}=-0.08$\,eV for Mn-Rh~\cite{Ruban:1999kq,Ruban:1999kqa}), where Mn was found in the surface region directly underneath the graphene layer~\cite{Zhang:2013bw}. However, a clear discrimination between the formation of a single Mn layer and a MnRh surface alloy was not made, although the phase-diagram analysis favours the latter case~\cite{Raub:1955aa}.
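For reference, the segregation criterion used above can be written schematically (in our notation) as
\begin{equation*}
E_{seg}=E_{tot}\,(\text{solute B in the surface layer of host A})-E_{tot}\,(\text{solute B in bulk A}),
\end{equation*}
so that $E_{seg}<0$ favours segregation of the solute towards the surface [formation of (A+B)/A], while $E_{seg}>0$ favours keeping it below the surface layer [formation of A/(A+B)/A].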
In summary, we have studied the intercalation of thin Mn layers in the gr/Ru(0001) and gr/Ir(111) interfaces using systematic STM and DFT approaches. Our results unequivocally demonstrate different final configurations for the two systems. While for the gr/Mn/Ru(0001) system the formation of sharp interfaces between all components is found, the intercalation of Mn in gr/Ir(111) can lead to the formation of a sub-surface (buried) alloy below an Ir interface layer underneath graphene. These findings are understood on the basis of large-scale DFT calculations, which give a significant lowering of the total energy for the system with the buried MnIr layer compared to the one with a surface MnIr alloy under the graphene layer. These results are also supported by general thermodynamic considerations of the phase diagrams of the respective binary systems as well as by general theoretical calculations on impurity segregation in different close-packed metallic hosts. With these new results for graphene-intercalation systems, we also suggest that additional structural and spectroscopic studies of these or similar systems be performed for a further revision of previous experimental data. Our findings shed light on one of the main questions arising in studies of metal intercalation in graphene-based systems and are of paramount importance for the understanding of the structure and electronic properties of graphene-support interfaces. This knowledge will help in the synthesis of desired interfaces for future graphene-based applications.
\section*{Experimental}
\paragraph{STM measurements.}
The STM measurements were performed in constant current mode at room temperature with an SPM Aarhus 150 equipped with a KolibriSensor from SPECS and a Nanonis control system. In these measurements a sharp W tip was used, which was cleaned \textit{in situ} via Ar$^+$ sputtering. In the presented STM images the tunnelling bias voltage, $U_T$, is applied to the sample and the tunnelling current, $I_T$, is collected through the tip, which is virtually grounded. Tunnelling current and voltage values are given in the figure captions. The base pressure in the experimental station is below $8\times10^{-11}$\,mbar. Graphene layers were prepared on Ru(0001) and Ir(111) using C$_2$H$_4$ as a carbon precursor according to the recipes given in Refs.~\citenum{Voloshina:2016jd,Voloshina:2013dq}, respectively. Intercalation of Mn was performed via e-beam deposition and subsequent annealing of thin manganese layers (with a thickness of around $1$\,ML) on top of the graphene layers. The respective annealing temperatures are noted in the text.
\paragraph{DFT calculations.}
DFT calculations based on plane-wave basis sets with a $400$\,eV cut-off energy were performed with the Vienna \textit{ab initio} simulation package (VASP)~\cite{Kresse:1994cp,Kresse:1996kg}. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional~\cite{Perdew:1996abc} was employed. The electron-ion interaction was described within the projector augmented wave (PAW) method~\cite{Blochl:1994fq} with C ($2s$, $2p$), Ir ($5d$, $6s$), Ru ($4d$, $5s$), Mn ($3d$, $4s$) states treated as valence states. The Brillouin-zone integration was performed on $\Gamma$-centred symmetry-reduced Monkhorst-Pack meshes using a Methfessel-Paxton smearing method of first order with $\sigma = 0.2$\,eV. The $k$ mesh for sampling the supercell Brillouin zone is chosen to be at least as dense as $30 \times 30$ when folded up to the simple graphene unit cell. Dispersion interactions were considered by adding a $1/r^6$ atom-atom term as parameterised by Grimme (``D2'' parameterisation)~\cite{Grimme:2006fc}. The systems studied in the present work were considered in the supercell geometry due to the relatively large lattice mismatch between graphene and the underlying metal. Each such supercell is constructed from a slab of five layers of metal, a graphene layer adsorbed on one (top) side of the metal slab and a vacuum region of approximately $20$\,\AA. The lattice constant in the lateral plane was set according to the optimised value for the bulk metal ($a_\mathrm{Ir(111)} = 2.723$\,\AA\ and $a_\mathrm{Ru(0001)}=2.703$\,\AA). The positions ($x$, $y$, $z$-coordinates) of the C atoms and the intercalant as well as the $z$-coordinates of the two topmost layers of the substrate were fully relaxed until the forces became smaller than $0.02$\,eV\,\AA$^{-1}$. The STM images are calculated using the Tersoff-Hamann formalism~\cite{Tersoff:1985}.
\begin{acknowledgement}
The North-German Supercomputing Alliance (HLRN) is acknowledged for providing computer time.
\end{acknowledgement}
\begin{suppinfo}
The following files are available free of charge
\begin{itemize}
\item Additional experimental and theoretical data (PDF) can be downloaded via link: https://pubs.acs.org/doi/10.1021/acs.jpclett.0c03271
\end{itemize}
\end{suppinfo}
\subsubsection*{Acknowledgment.}
This work was partly supported by the Région Bretagne and by the ANSSI (JavaSec project,
see \texttt{http://www.ssi.gouv.fr/site\_article226.html}).
\bibliographystyle{plain}
\def\leavevmode\hbox{$\rm {}^{TM}$}{\leavevmode\hbox{$\rm {}^{TM}$}}
\section{Introduction}
\label{sec:rt-introduction}
The initialization of an information system is usually a critical
phase where essential defense mechanisms are being installed and a
coherent state is being set up. In object-oriented software, granting
access to partially initialized objects is consequently a delicate
operation that should be avoided or at least closely monitored.
Indeed, the CERT recommendation for secure Java
development~\cite{cert_sun_secure_coding_standard} clearly requires to
\emph{not allow partially initialized objects to be accessed}
(guideline OBJ04-J). The CERT has assessed the risk if this
recommendation is not followed and has considered the severity as
\emph{high} and the likelihood as \emph{probable}%
. They consider this recommendation as a first priority on a scale of three
levels.
The Java language and the Java Byte Code Verifier (BCV) enforce some properties
on object initialization, \emph{e.g.} about the order in which constructors of
an object may be executed, but they do not directly enforce the CERT
recommendation. Instead, Sun provides a guideline that enforces the
recommendation. Conversely, failing to apply this guideline may silently lead
to security breaches. In fact, a famous attack~\cite{dean96:java_security} used
a partially initialized class loader for privilege elevation.
We propose a twofold solution:
\begin{inparaenum}[(i)]
\item a modular type system which allows expressing the initialization
policy of a library or program, \emph{i.e.} which methods may access
partially initialized objects and which may not; and
\item a type checker, which can be integrated into the BCV, to statically check
the program at load time.
\end{inparaenum}
To validate our approach, we have
\emph{formalized} our type system, \emph{machine checked} its soundness proof
using the Coq proof assistant, and
\emph{experimentally validated} our solution on a large number of classes from
Sun's Java Runtime Environment (JRE).
Section~\ref{sec:overview} overviews object initialization in Java and its impacts
on security. Section~\ref{sec:our-solution} then informally presents our type
system, which is then formally described in Section~\ref{sec:rt-formalization}.
Section~\ref{sec:experimentations} finally presents the experimental results we
obtained on Sun's JRE.
\section{Related Work}
\label{sec:rt-related-work}
Object initialization has been studied from different points of view.
Freund and Mitchell~\cite{freund03:type_system_java_bytecode_journal} have
proposed a type system that formalizes and enforces the initialization
properties ensured by the BCV, which are not sufficient to ensure that no
partially initialized object is accessed.
Unlike local variables, instance fields have a default value (\lstinline!null!,
\lstinline!false! or \lstinline!0!) which may then be replaced by the program.
The challenge is then to check that the default value has been replaced before
the first access to the field (\emph{e.g.} to ensure that all field reads return
a non-null value). This has been studied in its general form by F\"ahndrich
and Xia~\cite{fahndrich07:delayed_types}, and by Qi and
Myers~\cite{qi09:masked_types}.
Those works focus on enforcing invariants on fields and finely track the
different fields of an object. They also try to follow objects after their
construction to have more information on initialized fields. This is
overkill in our context.
Unkel and Lam studied another property of object initialization: stationary
fields~\cite{unkel08:infererence_stationary_fields}. A field may be stationary
if all its reads return the same value. Their analysis also tracks fields of
objects and not the different initialization stages of an object. In contrast to our
analysis, they stop tracking any object stored into the heap.
\\
Other works have targeted the order in which methods are called. This has been
studied in the context of rare events (\emph{e.g.} to detect anomalies, including
intrusions). We refer the interested reader to the survey of Chandola \emph{et
  al.}~\cite{chandola09:anomaly_detection}. These works are mainly interested in the
order in which methods are called but not in the initialization status of
arguments. While we guarantee that a method taking a fully initialized receiver
is called after its constructor, this policy cannot be locally expressed with an
order on method calls, as the methods (constructors) which need to be called on
an object to initialize it depend on the dynamic type of the object.
\section{Context Overview}
\label{sec:overview}
Fig.~\ref{fig:classloader-original} is an extract of class
\lstinline!ClassLoader! of Sun's JRE as it was before 1997. The security policy
which needs to be ensured is that \lstinline!resolveClass!, a security-sensitive
method, may be called only if the security check at l.~5 has succeeded.
\begin{figure}[t]
\centering
{\footnotesize
\begin{lstlisting}[numbers=left, numberstyle=\tiny, numbersep=5pt]
public abstract class ClassLoader {
private ClassLoader parent;
protected ClassLoader() {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {sm.checkCreateClassLoader();}
this.parent = ClassLoader.getSystemClassLoader();
}
protected final native void resolveClass(Class c);
}
\end{lstlisting}}
\vspace*{-0.25em}{}
\caption{Extract of the ClassLoader of Sun's JRE}
\label{fig:classloader-original}
\vspace*{-.25em}{}
\end{figure}
To ensure this security property, this code relies on the properties enforced
on object initialization by the BCV.
\subsubsection{Standard Java Object Construction.}
\label{sec:standard-object-construction}
In Java, objects are initialized by calling a class-specific constructor
which is supposed to establish an invariant on the newly created object.
The BCV enforces two properties related to
these constructors. These two properties are necessary but, as
we shall see, not completely sufficient to avoid security problems due
to object initialization.
\begin{property}\label{sec:prop-bcv-all-constructors}\sl
Before accessing an object,
\begin{inparaenum}[(i)]
\item a constructor of its dynamic type has been called and
\item each constructor either calls another constructor of the same class or a
constructor of the super-class on the object under construction, except for
\lstinline!java.lang.Object! which has no super-class.
\end{inparaenum}
\end{property}
This implies that
at least one constructor of the dynamic type $C$ and of each super-class of $C$ is called: it is
not possible to bypass a level of constructor. To deal with
exceptional behaviour during object construction, the BCV enforces another
property (concisely described in \emph{The Java Language
  Specification}~\cite{gosling05:JLS_3rd_edition}, Section 12.5, or implied by the
type system described in JSR202~\cite{buckley06:jsr202}).
\begin{property}\sl
If one constructor finishes abruptly, then the whole construction of the
object finishes abruptly.
\end{property}
Thus, if the construction of an object finishes normally, then all constructors
called on this object have finished normally. Failure to implement this
verification properly led to a famous attack~\cite{dean96:java_security} in
which it was exploited that if code such as %
\lstinline!try {super();} catch(Throwable e){}! in a constructor is not
rejected by the BCV, then malicious classes can create security-critical classes
such as class loaders.
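For illustration, the shape of the malicious constructor used in this attack is sketched below (hypothetical attacker code with names of our choosing; this pattern cannot be written in legal Java source and has to be expressed directly in bytecode, which is precisely what the BCV must reject):
{\footnotesize
\begin{lstlisting}
// Hypothetical attacker code: Property 2 forces the BCV to reject it
class EvilLoader extends ClassLoader {
  EvilLoader() {
    try { super(); }       // the security check in super() may throw...
    catch (Throwable e) {} // ...but the exception would be swallowed,
  }                        // leaving a partially initialized ClassLoader
}
\end{lstlisting}}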
\subsubsection{Attack on the class loader and the patch from Sun.}
\label{sec:attack-class-loader-1}
However, even with these two properties enforced, it is not guaranteed that
uninitialized objects cannot be used.
In Fig.~\ref{fig:classloader-original}, if the check fails, the method
\lstinline!checkCreateClassLoader! throws an exception and therefore terminates
the construction of the object, but the garbage collector then calls the
\lstinline!finalize()! method, which is an instance method and has the object to
be collected as receiver (cf. Section 12.6 of~\cite{gosling05:JLS_3rd_edition}).
An attacker could code another class that extends \lstinline!ClassLoader! and
has a \lstinline!finalize()! method. If run in a right-restricted context,
\emph{e.g.} an applet, the constructor of \lstinline!ClassLoader! fails and the
garbage collector then calls the attacker's \lstinline!finalize! method. The
attacker can therefore call the \lstinline!resolveClass! method on it, bypassing
the security check in the constructor and breaking the security of Java.
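The attack can thus be sketched as follows (hypothetical attacker code; the class name and the \lstinline!someClass! placeholder are ours):
{\footnotesize
\begin{lstlisting}
// Hypothetical attacker code against the class of
// Fig. 1; someClass is a placeholder
class EvilLoader extends ClassLoader {
  // In a right-restricted context the constructor of ClassLoader
  // throws a SecurityException, so construction fails...
  protected void finalize() {
    // ...but the garbage collector still calls this method on the
    // partially initialized object, bypassing the security check
    this.resolveClass(someClass);
  }
}
\end{lstlisting}}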
The initialization policy enforced by the BCV is in fact too weak: when a method is
called on an object, there is no guarantee that the construction of that object
has been successfully run. %
An ad-hoc solution to this problem is proposed by Sun~\cite{sun_guidelines} in
its Guideline 4-3 \emph{Defend against partially initialized instances of
non-final classes}: adding a special Boolean field to each class for which the
developer wants to ensure it has been sufficiently initialized. This field, set
to \lstinline!false! by default, should be private and should be set to
\lstinline!true! at the end of the constructor. Then, every method that relies
on the invariant established by the constructor must test whether this field is
set to \lstinline!true! and fail otherwise.
If \lstinline!initialized! is \lstinline!true!, the construction of the object up to
the initialization of \lstinline!initialized! has succeeded. Checking that
\lstinline!initialized! is \lstinline!true! therefore allows one to ensure that sensitive code is
only executed on instances that have been initialized up to the constructor of the
current class.
Fig.~\ref{fig:classloader-patched} shows the same extract as in
Fig.~\ref{fig:classloader-original} but with the needed instrumentation (this is
the current implementation as of JRE 1.6.0\_16).
\begin{figure}[t]
\centering
{\footnotesize
\begin{lstlisting}[numbers=left, numberstyle=\tiny, numbersep=5pt]
public abstract class ClassLoader {
private volatile boolean initialized;
private ClassLoader parent;
protected ClassLoader() {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {sm.checkCreateClassLoader();}
this.parent = ClassLoader.getSystemClassLoader();
this.initialized = true;}
private void check() {
if (!initialized) {
throw new SecurityException(
"ClassLoader object not initialized");}}
protected final void resolveClass(Class c){
this.check();
this.resolveClass0(c);}
private native void resolveClass0(Class c);
}
\end{lstlisting}}
\vspace*{-0.25em}{}
\caption{Extract of the ClassLoader of Sun's JRE}
\label{fig:classloader-patched}
\vspace*{-.25em}{}
\end{figure}
Although there are some exceptions and some methods are designed to access
partially initialized objects (for example to initialize the object), most
methods should not access partially initialized objects. Following the
remediation solution proposed in the CERT recommendation or Sun's Guideline
4-3, a field should be added to almost every class and most methods should start
by checking this field. This is resource consuming and error prone because it
relies on the programmer to keep track of the semantic invariant,
without providing adequate automated software development tools. It may
therefore lead not to functional bugs but to security breaches, which are
harder to detect. In spite of being known since 1997, this pattern is not always
correctly applied at all places where it should be. This has led to security
breaches, see \emph{e.g.} the Secunia Advisory SA10056~\cite{SecuniaSA10056}.
\section{The right way: a type system}
\label{sec:our-solution}
We propose a twofold solution: first, a way to specify the security policy which
is simple and modular, yet more expressive than a single Boolean field; second,
a modular type checker, which could be integrated into the BCV, to check that
the whole program respects the policy.
\subsection{Specifying an Initialization Policy with Annotations.}
\label{sec:initialization-policy}
We rely on Java annotations and on one instruction to specify our initialization
policy. We herein give
the grammar of the annotations we use.
{ \footnotesize
\begin{displaymath}
\begin{array}{rcl}
\texttt{V\_ANNOT} & ::= & \texttt{@Init | @Raw | @Raw(CLASS)}\\
\texttt{R\_ANNOT} & ::= & \texttt{@Pre(V\_ANNOT) | @Post(V\_ANNOT)}
\end{array}
\end{displaymath}
}
We introduce two main annotations: \lstinline!@Init!, which specifies that a
reference can only point to a fully initialized object or the \lstinline!null!
constant, and \lstinline!@Raw!, which specifies that a reference may point to a
partially initialized object. A third annotation, \lstinline!@Raw(CLASS)!,
allows to precise that the object may be partially initialized but that all
constructors up to and including the constructor of \lstinline!CLASS! must have
been fully executed. \emph{E.g.}, when one checks that \lstinline!initialized!
contains \lstinline!true! in \lstinline!ClassLoader.resolveClass!, one checks
that the receiver has the type \lstinline!@Raw(ClassLoader)!. The annotations
produced by the \lstinline!V_ANNOT! rule are used for fields, method arguments
and return values.
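As a small illustration of where these annotations may appear (a sketch, with names of our choosing):
{\footnotesize
\begin{lstlisting}
class Example {
  @Init Object cache;  // field: only fully initialized objects (or null)
  @Init Object read(@Raw(Example) Example e) {...}
    // the argument may be initialized only up to Example,
    // while the returned object is fully initialized
}
\end{lstlisting}}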
In the Java language, instance methods implicitly take another argument: a
receiver ---~reachable through variable \lstinline!this!.
We introduce a \lstinline!@Pre! annotation to specify the type of the receiver
at the beginning of the method. Some methods, usually called from constructors,
are meant to initialize their receiver. We have therefore added the possibility
to express this by adding a \lstinline!@Post! annotation for the type of the
receiver at the end of the method. These annotations take as argument an
initialization level produced by the rule \lstinline!V_ANNOT!.
Fig.~\ref{fig:raw_motivations} shows an example of \texttt{@Raw} annotations.
Class \lstinline!Ex1A! has an instance field \lstinline!f!, a constructor and a
getter \lstinline!getF!.
\begin{figure}[t]
\centering
\begin{multicols}{2}
\footnotesize
\begin{lstlisting}[numbers=left, numberstyle=\tiny, numbersep=5pt]
class Ex1A {
private Object f;
Ex1A(Object o){
securityManagerCheck()
this.f = o;}
@Pre(@Raw(Ex1A))
getF(){return this.f;}
}
\end{lstlisting}
\begin{lstlisting}[numbers=left, firstnumber=9, numberstyle=\tiny, numbersep=5pt]
class Ex1B extends Ex1A{
Ex1B(){
super();
... = this.getF();
}
}
\end{lstlisting}
\end{multicols}
\vspace*{-0.25em}{}
\caption{Motivations for \lstinline!Raw(CLASS)! annotations}
\label{fig:raw_motivations}
\vspace*{-.25em}{}
\end{figure}
This getter requires the object to be initialized at least up to
\lstinline!Ex1A! as it accesses a field initialized in its constructor. The
constructor of \lstinline!Ex1B! uses this getter, but the object is not yet
completely initialized: it has the type \lstinline!Raw(Ex1A)! as it has finished
the constructor of \lstinline!Ex1A! but not yet the constructor of
\lstinline!Ex1B!. If the getter had been annotated with \lstinline!@Init! it
would not have been possible to use it in the constructor of \lstinline!Ex1B!.
Another part of the security policy is the \texttt{SetInit}{} instruction, which mimics
the instruction \lstinline!this.initialized = true! in Sun's guideline. It is
implicitly put at the end of every constructor but it can be explicitly placed
before. It declares that the current object has completed its initialization up
to the current class. Note that the object is not yet considered fully
initialized, as its constructor might be invoked as a parent constructor in a subclass.
The instruction can be used, as in Fig.~\ref{fig:setinit-example}, in a
constructor after checking some properties and before calling some other method.
\begin{figure}[t]
\centering
{\footnotesize
\begin{lstlisting}[numbers=left, numberstyle=\tiny, numbersep=5pt]
public C() {
...
securityManagerCheck(); // perform dynamic security checks
SetInit; // declare the object initialized up C
Global.register(this); // the object is used with a method
} // that only accept Raw(C) parameters
\end{lstlisting}
}\vspace*{-0.25em}{}
\caption{An Example with \texttt{SetInit}{}}
\label{fig:setinit-example}
\vspace*{-.25em}{}
\end{figure}
Fig.~\ref{fig:classloader-annotated} shows class \lstinline!ClassLoader! with
its policy specification. The policy ensured by the current implementation of
Sun is slightly weaker: it does not ensure that the receiver is fully
initialized when invoking \lstinline!resolveClass! but simply checks that the
constructor of \lstinline!ClassLoader! has been fully run.
In this example, we can see that the constructor has the annotations
\lstinline!@Pre(@Raw)!, meaning that the receiver may be completely
uninitialized at the beginning, and \lstinline!@Post(@Raw(ClassLoader))!,
meaning that, on normal return of the method, at least one constructor for each
parent class of \lstinline!ClassLoader! and a constructor of
\lstinline!ClassLoader! have been fully executed.
\begin{figure}[th]
\centering
{\footnotesize
\begin{lstlisting}[numbers=left, numberstyle=\tiny, numbersep=5pt]
public abstract class ClassLoader {
@Init private ClassLoader parent;
@Pre(@Raw) @Post(@Raw(ClassLoader))
protected ClassLoader() {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {sm.checkCreateClassLoader();}
this.parent = ClassLoader.getSystemClassLoader();
}
@Pre(@Init) @Post(@Init)
protected final native void resolveClass(@Init Class c);
}
\end{lstlisting}}
\vspace*{-0.25em}{}
\caption{Extract of the ClassLoader of Sun's JRE}
\label{fig:classloader-annotated}
\vspace*{-.25em}{}
\end{figure}
We define as default values the most precise type that may be used in each
context. This gives a \emph{safe by default} policy and lowers the burden of
annotating a program.
\begin{itemize}
\item Fields, method parameters and return values are fully initialized objects
(written \lstinline!@Init!).
\item Constructors take a receiver uninitialized at the beginning
  (\lstinline!@Pre(@Raw)!) and initialized up to the current class at the end
(written \lstinline!@Post(@Raw(C))! if in the class \lstinline!C!).
\item Other methods take a receiver fully initialized (\lstinline!@Pre(@Init)!).
\item Except for constructors, method receivers have the same type at the end as
  at the beginning of the method (written \lstinline!@Post(A)! if the method has the
annotation \lstinline!@Pre(A)!).
\end{itemize}
If we remove from Fig.~\ref{fig:classloader-annotated} the default annotations,
we obtain the original code in Fig.~\ref{fig:classloader-original}. It shows
that despite choosing the strictest (and safest) initialization policy as
default, the annotation burden can be kept low.
\subsection{Checking the Initialization Policy.}
\label{sec:checking-policy}
We have chosen static type checking for at least two reasons. %
Static type checking allows for better performance (except in some rare cases),
as the complexity of static type checking is linear in the \emph{code size},
whereas the complexity of dynamic type checking is linear in the \emph{execution
time}.
Static type checking also improves the reliability of the code:
if the code passes type checking, then it is correct with respect to its
policy, whereas dynamic type checking only ensures the correctness of a
particular execution.
Reflection in Java allows retrieving code from the network or dynamically
generating code. Thus, the whole code may not be available before actually
executing the program. Instead, code is made available class by class, and
checked by the BCV at linking time, before the first execution of each method.
As the whole program is not available, the type checking must be modular: there
must be enough information in a method to decide if this method is correct and,
if an incorrect method is found, there must exist a safe procedure to end the
program (usually throwing an exception), \emph{i.e.} it must not be too late.
To have a modular type checker while keeping our security policy simple,
method parameters, respectively return values, need to be contra-variant,
respectively co-variant, \emph{i.e.} the policy of the overriding methods needs
to be at least as general as the policy of the overridden method. Note that
this is not surprising: the same applies in the Java language (although Java
imposes the invariance of method parameters instead of the more general
contra-variance), and when a method call is found in a method, it allows relying
on the policy of the resolved method (as all the methods which may actually be
called cannot be known before the whole program is loaded).
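For instance, the following overriding would be accepted (an illustrative sketch using our annotations): the receiver policy of the overriding method is more general, so any call site checked against \lstinline!A! remains safe when the actual receiver is a \lstinline!B!.
{\footnotesize
\begin{lstlisting}
class A {
  @Pre(@Init) void m() {...}   // receiver must be fully initialized
}
class B extends A {
  @Pre(@Raw(A)) void m() {...} // contra-variance: a more general
}                              // receiver policy is allowed
\end{lstlisting}}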
\section{Formal Study of the Type System}
\label{sec:rt-formalization}
The purpose of this work is to provide a type system that enforces at
load time an important security property. The semantic soundness of
such mechanism is hence crucial for the global security of the Java
platform. In this section, we formally define the type system
and prove its soundness with respect to an operational semantics.
All the results of this section have been machine-checked with the Coq
proof assistant\footnote{The development can be downloaded
at \url{http://www.irisa.fr/celtique/ext/rawtypes/}}.
\newcommand{\mathit{Expr}}{\mathit{Expr}}
\newcommand{\mathit{Alloc}}{\mathit{Alloc}}
\newcommand{\mathit{Var}}{\mathit{Var}}
\newcommand{\mathit{Type}}{\mathit{Type}}
\newcommand{\mathit{Field}}{\mathit{Field}}
\newcommand{\mathit{Meth}}{\mathit{Meth}}
\newcommand{\mathit{Class}}{\mathit{Class}}
\newcommand{\mathcal{P}}{\mathcal{P}}
\newcommand{\mathit{Prog}}{\mathit{Prog}}
\mathlig{|->}{\mapsto}
\mathlig{|-}{\vdash}
\mathlig{<-}{\leftarrow}
\mathlig{->}{\rightarrow}
\mathlig{=>}{\implies}
\newcommand{\mathrel{\mathop{::}}=}{\mathrel{\mathop{::}}=}
\newcommand{\mathsf{classes}}{\mathsf{classes}}
\newcommand{\mathsf{fields}}{\mathsf{fields}}
\newcommand{\mathsf{lookup}}{\mathsf{lookup}}
\newcommand{\mathsf{main}}{\mathsf{main}}
\newcommand{\mathsf{super}}{\mathsf{super}}
\newcommand{\mathsf{methods}}{\mathsf{methods}}
\newcommand{\mathsf{init}}{\mathsf{init}}
\newcommand{\rightharpoonup}{\rightharpoonup}
\newcommand{\mathsf{instrs}}{\mathsf{instrs}}
\newcommand{\mathsf{handler}}{\mathsf{handler}}
\newcommand{\mathit{Exc}}{\mathit{Exc}}
\newcommand{\mathsf{pre}}{\mathsf{pre}}
\newcommand{\mathsf{post}}{\mathsf{post}}
\newcommand{\mathsf{rettype}}{\mathsf{rettype}}
\newcommand{\mathsf{argtype}}{\mathsf{argtype}}
\newcommand{\mathit{Init}}{\mathit{Init}}
\newcommand{\mathit{Raw}}{\mathit{Raw}}
\newcommand{\mathit{Instr}}{\mathit{Instr}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathit{ins}}{\mathit{ins}}
\newcommand{\mathit{null}}{\mathit{null}}
\newcommand{\mathit{arg}}{\mathit{arg}}
\newcommand{\mathit{if}}{\mathit{if}}
\newcommand{\mathit{super}}{\mathit{super}}
\newcommand{\mathit{return}}{\mathit{return}}
\newcommand{\mathit{new}}{\mathit{new}}
\newcommand{\mathit{init}}{\mathit{init}}
\newcommand{\mathit{this}}{\mathit{this}}
\newcommand{\mathit{SetInit}}{\mathit{SetInit}}
\newcommand{\mathit{Raw}^{\bot}}{\mathit{Raw}^{\bot}}
\newcommand{\mathbb{L}}{\mathbb{L}}
\newcommand{\mathbb{O}}{\mathbb{O}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathbb{H}}{\mathbb{H}}
\newcommand{\mathsf{np}}{\mathsf{np}}
\newcommand{\mathbb{S}}{\mathbb{S}}
\newcommand{\mathbb{M}}{\mathbb{M}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\state}[1]{\langle #1 \rangle}
\newcommand{\overline{e}}{\overline{e}}
\subsubsection{Syntax}
\label{sec:rt-syntax}
\begin{figure}[t]
\centering
\begin{small}
$
\begin{array}{rrrrl}
\multicolumn{5}{c}{
x,y,r \in \mathit{Var} \quad
f \in \mathit{Field} \quad
e \in \mathit{Exc} \quad
i \in \mathcal{L} = \mathbb{N}} \\
p&\in&\mathit{Prog} & \mathrel{\mathop{::}}= & \{
\begin{array}[t]{l}
\mathsf{classes}\in\mathcal{P}(\mathit{Class}),~
\mathsf{main}\in\mathit{Class},\\
\mathsf{fields}\in\mathit{Field}\to\mathit{Type},~
\mathsf{lookup}\in\mathit{Class}\to\mathit{Meth}\rightharpoonup\mathit{Meth}\}
\end{array}\\
c&\in&\mathit{Class} & \mathrel{\mathop{::}}= & \{
\mathsf{super} \in \mathit{Class}_\bot,~
\mathsf{methods} \in \mathcal{P}(\mathit{Meth}),~
\mathsf{init} \in \mathit{Meth}
\} \\
m&\in&\mathit{Meth} & \mathrel{\mathop{::}}= & \{
\begin{array}[t]{l}
\mathsf{instrs}\in\mathit{Instr}~\mathit{array},~
\mathsf{handler}\in\mathcal{L}\to\mathit{Exc}\to\mathcal{L}_\bot,\\
\mathsf{pre}\in\mathit{Type},~\mathsf{post}\in\mathit{Type},~\mathsf{argtype}\in\mathit{Type},~\mathsf{rettype}\in\mathit{Type}
\}
\end{array}
\\
\tau&\in&\mathit{Type} & \mathrel{\mathop{::}}= & \mathit{Init} \mid \mathit{Raw}(c) \mid \mathit{Raw}^{\bot} \\
e&\in&\mathit{Expr} & \mathrel{\mathop{::}}= & \mathit{null} \mid x \mid e.f\\
\mathit{ins}&\in&\mathit{Instr} & \mathrel{\mathop{::}}= & x <- e \mid x.f <- y \mid
x <- \mathit{new}\ c(y) \mid \mathit{if}\ (\star)\ jmp \mid \\
&&&& \mathit{super}(y) \mid x <- r.m(y) \mid \mathit{return}~x \mid \mathit{SetInit}\\
\end{array}
$
\end{small}
\vspace*{-0.25em}{}
\caption{Language Syntax.}
\label{fig:syntax}
\vspace*{-.25em}{}
\end{figure}
Our language is a simple language in-between Java source and Java bytecode. Our
goal was to have a language close enough to the bytecode in order to easily
obtain, from the specification, a naive implementation at the bytecode level
while keeping a language easy to reason with. It is based on the decompiled
language from Demange \emph{et al.}~\cite{DEMANGE:2009:INRIA-00414099:2} that
provides a stack-less representation of Java bytecode programs.
Fig.~\ref{fig:syntax} shows the syntax of the language. A program is a record
that handles a set of classes, a main class, a type annotation for each field
and a lookup operator. This operator is used to determine, during a virtual call,
the method $(p.\mathsf{lookup}~c~m)$ (if any) that is the first overriding version of a
method $m$ in the ancestor classes of the class $c$. A class is composed of a
super class (if any), a set of methods and a special constructor method
$\mathsf{init}$. A method handles an array of instructions, a handler function such
that $(m.\mathsf{handler}~i~e)$ is the program point (if any) in the method $m$ where
the control flows after an exception $e$ has been thrown at point $i$. Each
method also carries four initialization types: for the initial value of the
variable \lstinline!this! ($m.\mathsf{pre}$), for its final value ($m.\mathsf{post}$), for the type of
its formal parameter\footnote{For the sake of simplicity, each method has a
unique formal parameter $\mathit{arg}$.} ($m.\mathsf{argtype}$) and for the type of its return value
($m.\mathsf{rettype}$). The only expressions are the $\mathit{null}$ constant, local variables
and field reads. For this analysis, arithmetic need not be taken into
account: we only manipulate objects.
The
instructions are the assignment to a local variable or to a field,
object creation ($\mathit{new}$)\footnote{ Here, the same instruction
allocates the object and calls the constructor. At bytecode level
this gives rise to two separate instructions in the program
(allocation and later constructor invocation), but the intermediate
representation generator~\cite{DEMANGE:2009:INRIA-00414099:2} on
which we rely is able to recover such a construct.},
(non-deterministic) conditional jump, super constructor call, virtual
method call, return, and a special instruction that we introduce for
explicit object initialization: $\mathit{SetInit}$.
\subsubsection{Semantic Domains}
\begin{figure}[t]
\centering
\begin{small}
$
\begin{array}{c}
\begin{array}{rclcll}
\overline{\mathit{Exc}} &\owns& \overline{e} & ::= & e \mid \bot & \text{(exception flag)} \\
\mathbb{L} &\owns &l&& & {\text{(location)}}\\
\mathbb{V} &\owns & v & ::= & l \mid \mathit{null} & {\text{(value)}}\\
\mathbb{M} = \mathit{Var} \rightarrow \mathbb{V} &\owns &\rho && &{\text{(local variables)}}\\
\mathbb{O} = \mathit{Class} \times \mathit{Class}_\bot \times (\mathit{Field} \rightarrow \mathbb{V}) &\owns& o
&::=& [c,c_\mathit{init},o] &{\text{(object)}}\\
\mathbb{H} = \mathbb{L} \to \mathbb{O}_\bot &\owns&\sigma&&&{\text{(heap)}} \\
CS &\owns &cs &::=& (m,i,l,\rho,r)::cs \mid \varepsilon &{\text{(call stack)}}\\
\mathbb{S} = \mathit{Meth}\times\mathcal{L}\times\mathbb{M}\times\mathbb{H}\times CS\times \overline{\mathit{Exc}} &\owns&st
&::=& \state{m,i,\rho,\sigma,cs}_{\overline{e}}
&{\text{(state)}}
\\
\end{array}
\end{array}
$
\end{small}\vspace*{-0.25em}{}
\caption{Semantic Domains.}
\label{fig:domains}\vspace*{-.25em}{}
\end{figure}
Fig.~\ref{fig:domains} shows the concrete domain used to model the
program states. The state is composed of the current method $m$, the
current program point $i$ in $m$ (the index of the next instruction to
be executed in $m.\mathsf{instrs}$), a function for local variables, a heap,
a call stack and an exception flag. The heap is a partial function which
associates to a location an object $[c,c_\mathit{init},o]$ with $c$ its type,
$c_\mathit{init}$ its current initialization level and $o$ a map from fields to values
(in the sequel $o$ is sometimes identified with the object itself). An initialization
level $c_\mathit{init}\in\mathit{Class}$ means that a constructor of $c_\mathit{init}$
and of each of its super-classes has been called on the object and has returned without abrupt
termination. The exception flag is used to handle exceptions: a state
$\state{\cdots}_e$ with $e\in\mathit{Exc}$ is reached after an exception $e$
has been thrown. The execution then looks for a handler in the current method and if
necessary in the methods of the current call stack. When equal to $\bot$, the flag is
omitted (normal state). The call stack records the program points of the pending calls together
with their local environments and the variable that will be assigned the result of the
call.
\subsubsection{Initialization types}
We can distinguish three different kinds of initialization
types.
Given a heap $\sigma$ we define a value typing judgment $\sigma\vdash v:\tau$ between
values and types with the following rules.
\begin{small}
\begin{center}
$
\inference{}{\sigma \vdash \mathit{null} : \tau}
\inference{}{\sigma \vdash l : \mathit{Raw}^{\bot}}
\inference{\sigma(l)=[c_{\mathit{dyn}},c_\mathit{init},o] \\
\forall c', c_{\mathit{dyn}} \preceq c' \land c \preceq c' \Rightarrow c_\mathit{init} \preceq c'}{\sigma \vdash l : \mathit{Raw}(c)}
\inference{\sigma(l)=[c,c,o]}{\sigma \vdash l : \mathit{Init}}
$
\end{center}
\end{small}
The relation $\preceq$ here denotes the reflexive transitive closure of
the relation induced by the $\mathit{super}$ element of each class.
$\mathit{Raw}^{\bot}$ denotes a reference to an object which may be
completely uninitialized (as at the very beginning of each constructor).
$\mathit{Init}$ denotes a reference to an object which has been
completely initialized. Between those two ``extreme'' types, a value may be typed as
$\mathit{Raw}(c)$ if at least one constructor of $c$ and of each parent of $c$
has been executed on all objects that may be referenced from this
value. We can derive from this definition the sub-typing relation
$\mathit{Init} \sqsubseteq \mathit{Raw}(c) \sqsubseteq \mathit{Raw}(c') \sqsubseteq \mathit{Raw}^{\bot}$
if $c\preceq c'$. It satisfies the important monotonicity property
\begin{center}
$\forall \sigma\in\mathbb{H},\forall v\in\mathbb{V}, \forall \tau_1,\tau_2\in\mathit{Type},~ \tau_1\sqsubseteq \tau_2\land
\sigma |- v : \tau_1 \Rightarrow \sigma |- v : \tau_2$
\end{center}
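As an illustration (this is not part of the Coq development), deciding the
sub-typing relation only requires walking up the class hierarchy. In the
following Python sketch we use a hypothetical encoding where a type is either
the string \lstinline!"Init"!, a class name standing for $\mathit{Raw}(c)$, or
\lstinline!None! standing for $\mathit{Raw}^{\bot}$, and \lstinline!super_of!
maps each class to its direct super class:
\begin{lstlisting}[language=Python]
def is_subclass(super_of, c1, c2):
    """Reflexive transitive closure of the super relation: c1 <= c2."""
    while c1 is not None:
        if c1 == c2:
            return True
        c1 = super_of.get(c1)
    return False

def subtype(super_of, t1, t2):
    """Decide t1 <= t2 in Init <= Raw(c) <= Raw(c') <= Raw-bottom."""
    if t1 == "Init":
        return True                        # Init is the least element
    if t2 == "Init":
        return False
    if t2 is None:
        return True                        # Raw-bottom is the greatest element
    if t1 is None:
        return False
    return is_subclass(super_of, t1, t2)   # Raw(c) <= Raw(c') iff c <= c'
\end{lstlisting}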
Note that the sub-typing judgment is disconnected from the static
type of objects. In a first approach, one could expect to manipulate a
pair $(c,\tau)$ with $c$ the static type of an object and $\tau$ its
initialization type, and to consider both types $(c,\mathit{Raw}(c))$
and $(c,\mathit{Init})$ as equivalent. Such a choice would however deeply
impact the standard dynamic mechanisms of a JVM: each dynamic cast from $A$ to
$B$ (or each virtual call on a receiver) would require checking that an
object has an initialization level set up not only to $A$ but also
to $B$.
\begin{figure}[t]
\centering
\begin{small}
$
\begin{array}[c]{c}
\inference{ m.\mathsf{instrs}[i] = x <- \mathit{new}~c(y) & x\not=\mathit{this} & \mathit{Alloc}(\sigma,c,l,\sigma')
& \sigma' \vdash \rho(y) : c.\mathsf{init}.\mathsf{argtype}}
{\state{m,i,\rho,\sigma,cs}\Rightarrow
\state{c.\mathsf{init},0,[\cdot\mapsto\mathit{null}][\mathit{this}\mapsto l][\mathit{arg}\mapsto\rho(y)],\sigma',(m,i,\rho,x)::cs}}
\\[1.5em]
\inference{ m.\mathsf{instrs}[i] = \mathit{SetInit}& m = c.\mathsf{init} & \rho(\mathit{this}) = l & \mathit{SetInit}(\sigma,c,l,\sigma')}
{\state{m,i,\rho,\sigma,cs}\Rightarrow
\state{m,i{+1},\rho,\sigma',cs}}
\\[1.5em]
\inference{ m.\mathsf{instrs}[i] = \mathit{return}~x & \rho(\mathit{this}) = l &
((\forall c,~m\not=c.\mathsf{init}) \Rightarrow \sigma=\sigma')\\
(\forall c,~m=c.\mathsf{init} \Rightarrow \mathit{SetInit}(\sigma,c,l,\sigma') \land x=\mathit{this})}
{\state{m,i,\rho,\sigma,(m',i',\rho',r)::cs}\Rightarrow
\state{m',i'{+1},\rho'[r\mapsto \rho(x)],\sigma',cs}}
\end{array}
$
\end{small}\vspace*{-0.25em}{}
\caption{Operational Semantics (excerpt).}
\label{fig:opsem}\vspace*{-.25em}{}
\end{figure}
\begin{figure}[!b]
\begin{small}
\centering
\begin{minipage}[t]{1.0\linewidth}
\bf {Expression typing}
\end{minipage}
$ \begin{array}{c}
\inference
{}
{L |- e.f : (p.\mathsf{fields}~f) }\qquad
\inference
{}
{L |- x : L(x)}\qquad
\inference
{}
{L |- \mathit{null} : \mathit{Init}}
\end{array}$\vspace*{1ex}
\begin{minipage}[t]{1.0\linewidth}
\bf {Instruction typing}
\end{minipage}
$ \begin{array}{c}
\inference%
{L |- e : \tau & x\not=\mathit{this}}%
{m |- x <- e : L -> L[x |-> \tau]} ~
\inference%
{L(y) \sqsubseteq (p.\mathsf{fields}~f)}%
{m |- x.f <- y : L -> L}~
\inference
{}
{\Gamma,m |- \mathit{if}\star\ jmp : L -> L}\\[1em]
\inference%
{L(\mathit{this}) \sqsubseteq m.\mathsf{post} &
L(x) \sqsubseteq m.\mathsf{rettype} &
(\forall c,~ m=c.\mathsf{init} \Rightarrow L(\mathit{this}) \sqsubseteq \mathit{Raw}(c.\mathsf{super}))}%
{m |- \mathit{return}~x : L -> L}\\[1em]
\inference%
{L(y) \sqsubseteq c.\mathsf{init}.\mathsf{argtype} }%
{m |- x <- \mathit{new}~c(y) : L -> L[x|->\mathit{Init}]}\qquad
\inference%
{c' = c.\mathit{super} & L(y) \sqsubseteq c'.\mathsf{init}.\mathsf{argtype} }%
{c.\mathsf{init} |- \mathit{super}(y) : L -> L[\mathit{this}|->\mathit{Raw}(c')]}\\[1em]
\inference%
{L(r) \sqsubseteq m.\mathsf{pre} &
L(y) \sqsubseteq m.\mathsf{argtype} & }%
{m |- x <- r.m'(y) : L -> L[r|->m.\mathsf{post}][x|->m.\mathsf{rettype}]} ~
\inference%
{L(\mathit{this}) \sqsubseteq \mathit{Raw}(c.\mathsf{super})}%
{c.\mathsf{init} |- \mathit{SetInit} : L -> L}\\[1em]
\end{array}$
\end{small}
\label{fig:typerul}
\vspace*{-0.25em}{}
\caption{Flow sensitive type system}
\end{figure}
\subsubsection{Operational Semantics}
We define the operational semantics of our language as a small-step transition relation
over program states. A fixed program $p$ is implicit in the rest of this section.
Fig.~\ref{fig:opsem} presents some selected rules for this relation.
The rule for the $\mathit{new}$ instruction includes both the allocation and the
call to the constructor. We use the auxiliary predicate
$\mathit{Alloc}(\sigma,c,l,\sigma')$ which allocates a fresh location
$l$ in the heap $\sigma$ with type $c$, initialization level equal to $\bot$
and all fields set to $\mathit{null}$. The constraint
$\sigma' \vdash \rho(y) : c.\mathsf{init}.\mathsf{argtype}$ explicitly asks the caller
of the constructor to give a correct argument with respect to the policy of
the constructor. Each call rule of the semantics has similar constraints. The execution
is hence stuck when an attempt is made to call a method with badly typed parameters.
The $\mathit{SetInit}$ instruction updates the initialization level of
the object in $\mathit{this}$. It relies on the predicate
$\mathit{SetInit}(\sigma,c,l,\sigma')$ which specifies that $\sigma'$ is a copy of
$\sigma$ where the object at location $l$ has now the initialization tag set to $c$
if the previous initialization was $c.\mathit{super}$. It
forces the current object (\lstinline!this!) to be
considered as initialized up to the current class (\emph{i.e.} as if the
constructor of the current class had returned, but not necessarily the
constructors of the subsequent classes). This may be used in the constructor,
once all fields that need to be initialized have been initialized and if some
method requiring a non-raw object needs to be called. Note that this
instruction is really sensitive: using this instruction too early in a
constructor may break the security of the application.
The $\mathit{return}$ instruction uses the same predicate when invoked in a constructor. For convenience
we requires each constructor to end with a $\mathit{return}~\mathit{this}$ instruction.
\subsubsection{Typing judgment} Each instruction $\mathit{ins}$ of a method
$m$ is attached a typing rule (given in Fig.~\ref{fig:typerul}) $m
|- \mathit{ins} : L -> L'$ that constrains the types of variables before ($L$)
and after ($L'$) the execution of $\mathit{ins}$.
\begin{definition}[Well-typed Method]
A method $m$ is \emph{well-typed} if there exist flow-sensitive variable
types $L\in\mathcal{L}\to\mathit{Var}\to\mathit{Type}$ such that
\begin{itemize}
\item $m.\mathsf{pre} \sqsubseteq L(0,\mathit{this})$ and $m.\mathsf{argtype} \sqsubseteq L(0,\mathit{arg})$,
\item for every instruction $\mathit{ins}$ at point $i$ in $m$ and every
successor $j$ of $i$, there exists a map of variable types
$L'\in\mathit{Var}\to\mathit{Type}$ such that $L'\sqsubseteq L(j)$ and the typing
judgment $m\vdash \mathit{ins}: L(i) \to L'$ holds. If point $i$ is covered by the handler $j$ of an exception
$e$ (\emph{i.e.} $m.\mathsf{handler}~i~e=j$), then $L(i)\sqsubseteq L(j)$.
\end{itemize}
\end{definition}
The typability of a method can be decided by turning the set of typing rules
into a standard dataflow problem. The approach is
standard~\cite{freund03:type_system_java_bytecode_journal} and not formalized
here.
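For illustration, the corresponding worklist computation can be sketched as
follows (Python; the \lstinline!transfer! function implementing the typing
rules, the pointwise least upper bound \lstinline!lub! and the method record
are assumed to be given):
\begin{lstlisting}[language=Python]
def check_method(method, transfer, lub):
    """Compute a witness L of well-typedness by a worklist iteration.

    transfer(i, L_i) applies the typing rule of the instruction at point i
    and raises an error when no rule applies (the method is rejected).
    """
    L = {0: method.entry_state()}   # this -> m.pre, arg -> m.argtype
    worklist = [0]
    while worklist:
        i = worklist.pop()
        out = transfer(i, L[i])
        # normal successors receive the transfer output, while exception
        # handlers receive L[i] itself, as required by the definition above
        edges = [(j, out) for j in method.successors(i)] \
              + [(j, L[i]) for j in method.handlers(i)]
        for j, flow in edges:
            new = flow if j not in L else lub(L[j], flow)
            if new != L.get(j):
                L[j] = new
                worklist.append(j)
    return L
\end{lstlisting}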
\begin{definition}[Well-typed Program]
A program $p$ is \emph{well-typed} if all its methods are well-typed and the
following constraints hold:
\begin{enumerate}
\item for every method $m$ that is overridden by a method $m'$ (\emph{i.e.}
there exists $c$ such that $(p.\mathsf{lookup}~c~m = m')$),\\
\hspace*{12ex}$
\begin{array}[c]{cccc}
m.\mathsf{pre} \sqsubseteq m'.\mathsf{pre} &\land& m.\mathsf{argtype} \sqsubseteq m'.\mathsf{argtype} &\land \\
m.\mathsf{post} \sqsupseteq m'.\mathsf{post} &\land& m.\mathsf{rettype} \sqsupseteq m'.\mathsf{rettype}
\end{array}
$
\item in each method, the entry point, every jump target and every handler point
contains an instruction, and every instruction (except $\mathit{return}$) has a next instruction,
\item the default constructor $c.\mathsf{init}$ of each class $c$ is unique.
\end{enumerate}
\end{definition}
In this definition, only point 1 is really specific to the current type system. The other points are necessary
to establish the progress theorem of the next section.
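For instance, constraint 1 amounts to the following check (an illustrative
Python sketch, where \lstinline!subtype! is a decision procedure for
$\sqsubseteq$ and the method records are hypothetical):
\begin{lstlisting}[language=Python]
def override_ok(subtype, m, m_over):
    """Check constraint 1 for a method m overridden by m_over."""
    return (subtype(m.pre, m_over.pre)            # receiver: contra-variant
        and subtype(m.argtype, m_over.argtype)    # parameter: contra-variant
        and subtype(m_over.post, m.post)          # receiver after: co-variant
        and subtype(m_over.rettype, m.rettype))   # result: co-variant
\end{lstlisting}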
\subsubsection{Type soundness}
We rely on an auxiliary notion of well-formed states that captures the semantic
constraints enforced by the type system. A state $\state{m,i,\rho,\sigma,cs}$ is
\emph{well-formed} (wf) if there exists a type annotation
$L_p\in(\mathit{Meth}\times\mathcal{L})\to(\mathit{Var}\to\mathit{Type})$ such that
\begin{center}
$
\begin{array}[c]{lr}
\forall l\in\mathbb{L}, \forall o\in\mathbb{O}, \forall f\in\mathit{Field},~ \sigma(l) = o \Rightarrow \sigma |- o(f) : (p.\mathsf{fields}~f) & \text{(wf. heap)}\\
\forall x\in\mathit{Var}, \sigma |- \rho(x) : L_p[m,i](x) & \text{(wf. local variables)}\\
\forall (m',i',\rho',r)\in cs,~ \forall x, \sigma |- \rho'(x) : L_p[m',i'](x) & \text{(wf. call stack)}\\
\end{array}
$
\end{center}
Given a well-typed program $p$ we then
establish two key theorems. First, any valid transition from a
well-formed state leads to another well-formed state
(\emph{preservation}) and then, from every well-formed state there
exists at least a transition (\emph{progress}). As a consequence we
can establish that starting from an initial state (which is always
well-formed) the execution is never stuck, except on final
configuration. This ensures that all initialization constraints given
in the operational semantics are satisfied without requiring any dynamic
verification.
\subsubsection{Limitations}
The proposed language has some limitations compared to the Java (bytecode)
language. Static fields and arithmetic have not been introduced but are handled
by our implementation and do not add particular difficulties. Arrays have not
been introduced in the language either. Our implementation conservatively
handles arrays by allowing only writes of $\mathit{Init}$ references to arrays.
Although this approach seems correct, it has not been proved and it is not
flexible enough (cf. Section~\ref{sec:experimentations}). Multi-threading has
also been left out of the current formalization, but we conjecture the soundness
result still holds with respect to the Java Memory Model because of the flow
insensitive abstraction made on the heap. As for the BCV, native methods may
break the type system. It is their responsibility to respect the policy
expressed in the program.
\section{A Case Study: Sun's JRE}
\label{sec:experimentations}
In order to show that our type system makes it possible to verify legacy code with only a
few annotations, we implemented a standalone prototype, handling the full Java
bytecode, and we tested all classes of packages \lstinline!java.lang!,
\lstinline!java.security! and \lstinline!javax.security! of the JRE1.6.0\_20.
348 classes out of 381 were proven safe \emph{w.r.t.} the default policy without
any modification. By either specifying the actual policy when the default
policy was too strict, or by adding cast instructions (see below) when the type
system was not precise enough, we were able to verify 377 classes, that is to
say 99\% of classes.
We discuss below the 4 remaining classes that are not yet proven correct by our
analysis.
The modifications represent only 55 source lines of code out of 131,486 for the
three packages studied. Moreover most code modifications are to express the
actual initialization policy, which means existing code can be proven safe. Only
45 methods out of 3,859 (1.1\%) and 2 fields out of 1,524 were annotated. Last
but not least, the execution of the type checker takes less than 20 seconds for
the packages studied.
\begin{figure}[t!]\centering
\includegraphics[scale=0.17]{annotations}
\caption{\label{tab:annot-instr} Distribution of the 47 annotations
and 6 instructions added to successfully type the three packages
of the JRE.}
\vspace*{-1em}{}
\end{figure}
\paragraph{Adapting the security policy}
Fig.~\ref{tab:annot-instr} details the annotations and the \lstinline!SetInit!
instructions added to specify the security policy. In the runtime library, a usual pattern
consists in calling methods that initialize fields during construction of the
object. In that case,
a simple annotation \lstinline!@Pre(@Raw(super(C)))! on methods of
class \lstinline!C! is necessary.
These cases represent the majority of the 37 annotations on method receivers.
6 annotations on method arguments are used, notably for some methods of
\lstinline!java.lang.SecurityManager! which check permissions on an object
during its initialization.
The instruction \lstinline!SetInit! is used when a constructor initializes all
the fields of the receiver and then calls methods on the receiver that are not
part of the initialization. In that case the methods called need at least a
\lstinline!Raw(C)! level of initialization, and the \lstinline!SetInit!
instruction makes it possible to express that the constructor has finished the minimum
initialization of the receiver. Only 6 \lstinline!SetInit! instructions are
necessary.
\paragraph{Cast instructions}
Such a static and modular type checking introduces some necessary loss of
precision ---~which cannot be completely avoided because of computability
issues. To be able to use our type system on legacy code without deep
modifications, we introduce two dynamic cast operators: \texttt{\small(Init)}
and \texttt{\small(Raw)}. The instruction \lstinline!y = (Init)x;! makes it possible to
dynamically check that \lstinline!x! points to a fully initialized object: if
the object is fully initialized, then this is a simple assignment to
\lstinline!y!, otherwise it throws an exception. As explained in
Section~\ref{sec:overview}, the invariant needed is often weaker and the
correctness of a method may only need a $\mathit{Raw}(c)$ reference.
\lstinline!y = (Raw(C))x! dynamically checks that \lstinline!x! points to an
object which is initialized up to the constructor of class \lstinline!C!.
Only 4 cast instructions are necessary. They are needed in two particular
cases. The first is when a field must be annotated; annotations were
only necessary on two fields, and they imply the use of 3 \lstinline!(Init)! cast
instructions. The second case is a receiver in a \lstinline!finalize()!
method that checks that some fields are initialized, thereby ensuring that the
object is \lstinline!Raw(C)!, although the type system could not infer this
information. The latter case requires the single \lstinline!(Raw(C))! cast instruction
added.
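For illustration, the intended run-time behavior of the two casts can be
sketched as follows (Python; the object model with its \lstinline!c_dyn! and
\lstinline!c_init! fields and the \lstinline!is_subclass! helper are
hypothetical, and a null reference passes both casts, as in the value typing
rules):
\begin{lstlisting}[language=Python]
class UninitializedObjectError(Exception):
    pass

def cast_init(obj):
    """(Init) x : succeeds iff the object is fully initialized."""
    if obj is not None and obj.c_init != obj.c_dyn:
        raise UninitializedObjectError("object is not fully initialized")
    return obj

def cast_raw(obj, c, is_subclass):
    """(Raw(C)) x : succeeds iff constructors of C and above have returned."""
    if obj is not None and not is_subclass(obj.c_init, c):
        raise UninitializedObjectError("object not initialized up to " + c)
    return obj
\end{lstlisting}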
\paragraph{Remaining classes}
Finally, only 4 classes are not well-typed after the previous modifications.
Indeed, the compiler generates some code when compiling inner classes, and part of
this code needs annotations in 3 classes. These cases could be handled by making
significant changes to the code, by adding new annotations dedicated to inner
classes or by annotating the bytecode directly. The one remaining class is not
typable because of the limited precision of our analysis on arrays: one can only
store \texttt{@Init} values in arrays. To check this last class, our type
system needs to be extended to handle arrays more precisely, but this is left for
future work.
\paragraph{Special case of finalize methods}
As previously exposed, \lstinline!finalize()! methods
may be invoked on a completely uninitialized receiver. Therefore, we study the
case of \lstinline!finalize()! methods in the packages \lstinline!java.*! and
\lstinline!javax.*!. In the classes of those packages there are 28
\lstinline!finalize()! methods, and only 12 of them are well-typed with our
default annotation values. These are either empty or do not use their receiver
at all. For the remaining 16 classes, the
necessary modifications are either the use of cast instructions when the code's
logic guarantees the success of the cast, or the addition of \lstinline!@Pre(@Raw)!
annotations on methods called on the receiver. In the latter case, it is important to
verify that the code of any called method is defensive enough. The
type system therefore forced us to pay attention to the cases that could lead to security
breaches or crashes at run time for \lstinline!finalize()! methods. After a
meticulous check of the code we added the necessary annotations and cast
instructions, which allowed us to verify all 28 classes.
\section{Conclusion and Future Work}
\label{sec:conclusion}
We have proposed herein a solution to enforce a secure initialization of objects
in Java. The solution is composed of a modular type system which makes it possible to
manage uninitialized objects safely when necessary, and of a modular type
checker which can be integrated into the BCV to statically check a program at
load time. The type system has been formalized and proved sound, and the
type-checker prototype has been experimentally validated on more than 300
classes of the Java runtime library.
The experimental results point out that our default annotations minimize the
user intervention needed to type a program and allow focusing on the few
classes where the security policy needs to be stated explicitly. The possible
adaptation of the security policy in critical cases makes it easy to prevent
security breaches and can, in addition, ensure some finer initialization
properties whose violation could lead the program to crash. On the one hand,
the results show that such a static and modular type checking makes it possible
to prove the absence of bugs efficiently. On the other hand, rare cases necessitate the
introduction of dynamic features, and the analysis needs to be extended to handle
arrays more precisely. With such an extension, the checker would be able to prove
more classes correct, but this is left for future work.
On the formalization side, an obvious extension is to establish the
soundness of the approach in presence of multi-threading. We
conjecture the soundness result still holds with respect to the
Java Memory Model because of the flow insensitive abstraction made
on the heap.
The prototype and the Coq formalization and proofs can be downloaded from
\url{http://www.irisa.fr/celtique/ext/rawtypes/}.
\section{Introduction}
About half of the field sdB stars reside in binary systems and can form
through common envelope ejection or stable Roche lobe overflow (Han et al.
2002, 2003).
It is more difficult to explain the formation of a single sdB star.
Two scenarios are possible: the merger of two low-mass helium white dwarfs and
an early hot helium flash; but both are not fully consistent with the
observations.
A recent review on these arguments is given by Heber (2009).
Another possibility, suggested by Soker (1998), is that the huge mass loss
needed to form an sdB star is triggered by low-mass bodies, planets or brown
dwarfs (BDs).
Although this possibility has not been tested by detailed models yet,
the detection of a planet orbiting the pulsating sdB star V391~Peg (Silvotti et al. 2007) and of three
circumbinary planets/BDs around the eclipsing sdB+M systems HW~Vir (Lee et al.
2009) and HS~0705+6700 (Qian et al. 2009) suggests that sdB planets/BDs could
be a relatively common phenomenon (see also Bear \& Soker 2010).
A systematic search for substellar objects around 4 sdB stars using the
timing method is the main goal of the EXOTIME project (Schuh et al.
2010, Benatti et al. 2010).
\vspace{1mm}
V391~Peg (HS~2201+2610) is a particularly interesting system formed by an sdB
star and a 3.2/sin$i$ M$_{Jup}$ planet orbiting the host star in 3.2 years
at a distance of about 1.7 AU (Silvotti et al. 2007).
The sdB star is a hybrid pulsator showing $p$ and $g$-mode oscillations at
the same time ({\O}stensen et al. 2001, Lutz et al. 2009), offering a unique
opportunity to characterize the host star through asteroseismic methods.
A preliminary mode identification of the higher pulsation frequencies
($p$-modes) was proposed in Silvotti et al. (2002): the two main pulsation
periods of 349.5 and 354.1 s could be reproduced with $l$=0 ($k$=1) and $l$=1
($k$=1) respectively.
However this solution was not unique due to the small number of detected
frequencies and other solutions could not be excluded.
\vspace{13.6mm}
\section{Observations, reduction and analysis}
\begin{figure*}[th]
\vspace{12.6cm}
\special{psfile=silvotti_fig1.ps vscale=87 hscale=87 angle=0 voffset=-248
hoffset=-21}
\caption{Upper panels: representative u'g'r' light curves (20 October 2007),
with beating effects clearly visible.
Lower panels: amplitude spectra of the whole 8-nights run showing the two
regions of excited $p$- and $g$-modes.}
\label{f1}
\end{figure*}
V391~Peg was observed for 8 consecutive nights, from October 16 to 23, 2007,
using ULTRACAM (Dhillon et al. 2007) at the 4.2 m William Herschel telescope
(WHT).
In total we collected 260,592 exposures in three
photometric bands ({\it u'g'r'}) of the SLOAN system, with exposure times
between 1.2 and 1.6~s and a dead time of only 25~ms between one frame and the
next.
The reduction was performed using the ULTRACAM pipeline (see, for example,
Littlefair et al. 2008), including bias and flat field correction and aperture
photometry.
Then we computed differential photometry, dividing the target's counts
by the counts of a comparison star; we then binned the data to an effective
exposure time of 10 s and performed a correction for the residual
extinction.
The last step is crucial for the $g$-modes, which have periods of $\sim$0.5--2 h
and are particularly disturbed by extinction variations that occur on similar
time scales.
Finally, we applied the barycentric correction to the times.
More details regarding observations and data reduction will be given in a
forthcoming paper (Silvotti et al. in preparation).
At the end of the process, for each filter we obtained a single file with 2
columns: barycentric julian date and fractional brightness variation.
\begin{figure*}[th]
\vspace{11.0cm}
\special{psfile=silvotti_fig2.ps vscale=69 hscale=69 angle=0 voffset=-136
hoffset=-83}
\special{psfile=silvotti_fig3.ps vscale=69 hscale=69 angle=0 voffset=-136
hoffset=165}
\caption{Fit to the two dominant modes of V391~Peg, including non-adiabatic
effects.}
\label{f2}
\end{figure*}
These files were analyzed using Period04 (Lenz and Breger 2005) in order to
determine the pulsation frequencies and the amplitudes in the different bands.
A portion of the light curves and the amplitude spectra in the three
photometric bands are shown in Fig.~1.
The frequencies obtained were compared with those obtained from previous runs
and indeed we found a perfect agreement except for f4 and f5: the value
2921.816 $\mu$Hz found previously for f4 (Silvotti et al. 2002) is now
2910.272 $\mu$Hz, indicating that the old value was a 1-day alias of the
correct frequency.
The new value of f4 is confirmed by two other independent runs with high
frequency resolution at the 3.6 m TNG (August 2008) and 1.3 m MDM (October
2007) telescopes (Silvotti et al. in preparation).
The frequency f5 (2738.015 $\mu$Hz, Silvotti et al. 2002) is not found in any of the new
observations, and the WHT/ULTRACAM data suggest 2 new low-amplitude
frequencies.
An updated list of frequencies, including also the low-frequency $g$-modes,
will be published (Silvotti et al. in preparation).
Using the improved list of frequencies, we measured the amplitudes of the
various frequencies by means of least-square sinusoidal fits.
In this paper we concentrate only on f1 and f2, for which the errors on the
amplitudes are sufficiently small to obtain significant results.
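Since the frequencies are held fixed in these fits, the amplitude
determination reduces to a linear least-squares problem. A minimal sketch
(in Python/NumPy, assuming consistent time and frequency units) is:
\begin{verbatim}
import numpy as np

def fit_amplitudes(t, flux, freqs):
    # Design matrix with a constant term plus a
    # sine/cosine pair per (fixed) frequency.
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.sin(2 * np.pi * f * t),
                 np.cos(2 * np.pi * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    # Amplitude per frequency: sqrt(a^2 + b^2).
    return np.hypot(coef[1::2], coef[2::2])
\end{verbatim}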
\section{Comparison with theoretical amplitudes and mode identification}
The amplitudes obtained have been compared with theoretical amplitudes
calculated following the same procedure as described in Randall et al. (2005).
As input we took the atmospheric parameters from {\O}stensen et al. 2001
($T_{\rm eff}$=29,300 K, $\log g$=5.4).
With the very small error bars of the ULTRACAM amplitudes, the quality of the
fit is very sensitive to the exact input values of the atmospheric parameters.
However, taking into account the $T_{\rm eff}$\ and $\log g$\ uncertainties, we never obtain
any change in the mode identification.
The same is true when we use slightly different atmospheric parameters
obtained by one of us (G.F.) on the basis of spectra acquired at the
90 inch telescope at Kitt Peak ($T_{\rm eff}$=30,000 K, $\log g$=5.46) or at the MMT
($T_{\rm eff}$=29,900 K, $\log g$=5.51), made available to us by Betsy Green (private
communication).
The monochromatic atmospheric quantities were then convolved over the ULTRACAM
filters, taking into account the filter response curves as well as the quantum
efficiency of the CCDs, and the transparency curve for a representative
observatory site (we used Cerro Tololo, which is at a similar altitude to the
La Palma observatory).
Non-adiabatic effects were computed as they significantly influence the
theoretical colour-amplitudes.
They were estimated using the equations given in Randall et al. (2005) from
the adiabatic and non-adiabatic eigenfunctions calculated from second
generation envelope models (Charpinet et al. 1997), using the following
stellar parameters:
$T_{\rm eff}$=29,300 K, $\log g$=5.4, total stellar mass M$_{\ast}$=0.48 $M_{\odot}$\ and
fractional thickness of the hydrogen-rich envelope $\log q(H)$=$-2.95$, although
the last two values do not really influence the results much, as long as they
take on reasonable values.
The model was then analysed with adiabatic and non-adiabatic pulsation codes,
and the non-adiabatic quantities {\it R} and $\Psi_{\it T}$ (defined in
Randall et al. 2005) were computed for the period spectrum of the model.
Since {\it R} and $\Psi_{\it T}$ do not depend on the degree index, while
they depend quite strongly on the period of the mode in question, their values
were interpolated in period space to find the optimal values for the
observed periods.
The wavelength-integrated atmospheric quantities and the non-adiabatic
parameters were then used to calculate the theoretical amplitudes.
As a last step, the theoretical amplitudes were fit to those observed using
the $\chi^2$ minimisation technique described in Randall et al. (2005).
The results, shown in Fig.~2, indicate a unique solution for the two main
pulsation modes of V391~Peg.
The 349.5~s period is a radial mode:
$\chi^2$($l=0$)=1.5, $\chi^2$($l=1$)=53.6, $\chi^2$($2 \leq l \leq 5$)$>$170.
The 354.1~s period is a dipole mode and again there is only one solution
compatible with the data:
$\chi^2$($l=1$)=2.5, $\chi^2$($l=2$)=17.1, $\chi^2$($l=0$)=47.4,
$\chi^2$($3 \leq l \leq 5$)$>$180.
These numbers translate into a value of the quality-of-fit parameter
(Press et al. 1986) Q $\ll$ 0.001 for both modes when we use an $l$ value
different from 0 and 1 respectively.
\section{Discussion}
Thanks to the high quality of the data, this is the first time that the mode
degree index has been uniquely identified from multicolor photometry for the
two main modes of a star as faint as V391~Peg (V=14.6).
To our knowledge, conclusive results were obtained only for two brighter
stars:
KPD~2109+4401 (V=13.4, Randall et al. 2005, see also Jeffery et
al. 2004) and Balloon 090100001, the brightest known sdBV star with V=12.1
(Baran et al. 2008, Charpinet et al. 2008).
The results reported in this article confirm that multicolor photometry can
set useful identification constraints on the pulsation modes of sdB rapid
pulsators, provided that the data have a high enough quality.
ULTRACAM on a 4~m class (or larger) telescope is an ideal instrument for such
studies.
\acknowledgements
R.S. thanks Stuart Littlefair for his help in data reduction.
R.S. acknowledges support from HELAS for participating in this conference.
\newpage
\section{INTRODUCTION}
Nowadays, most modern cars come with at least some advanced driver-assistance systems (ADAS). Systems like automatic cruise control or a lane keeping assistant are already able to partially take over control of the car and safely steer it along a defined lane. While these problems have been addressed extensively in the scientific literature \cite{ZHENG201416}, lateral control involving lane changes has not been studied as intensively up to now \cite{8005678}. For the challenge of driving fully autonomously, this naturally has to be addressed as well. Additionally, there is great potential and need for driver assistance systems supporting the driver in executing lane changes. Over 90\% of occurring accidents are attributed to human errors \cite{trafficsafetly}, and around 18\% of all accidents happen during the execution of a lane change \cite{tsa}.
The term driving strategy describes planning for autonomous vehicles on different hierarchical levels, from map-based global mission planning to tactical planning, which takes into account driving lanes, other vehicles and obstacles. From a machine learning perspective, one way to approach this problem is the use of supervised learning. One example is behavioral cloning, in which a system learns from the demonstrations of an expert \cite{NIPS2007_3293}. However, it is well known that small inaccuracies and errors accumulate over time \cite{pmlr-v9-ross10a}; furthermore, this method tends to overfit specific expert behavior and lacks the possibility of exploration.
Another possibility is the application of reinforcement learning, which lately has led to great success in different topics \cite{DBLP:journals/corr/MnihKSGAWR13}. This method though often depends on the availability and realism of a simulator, and discards the immense amounts of real data collected by car manufacturers.
We believe a combination of both paradigms will be necessary for creating data-driven algorithms for autonomous cars.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.27]{teaser}
\caption{Visualization of our system assessing a situation. For a timestep $t$ the LSTM network already has a compressed understanding of the past via its internal state, with the IDM several future positions are predicted for enabling the use of a Bidirectional LSTM. The situations are converted to the internal grid representation and fed to the network, the output represents the suitability of a situation for a lane change to the target lane.}
\label{figureteaser}
\end{figure}
An autonomous system is commonly designed in a layered architecture: a perception layer perceives the environment through different sensors (1), the results are fused in a fusion layer (2). Based on this, a situation interpretation is done (3). This is followed by a planning (4) and control layer (5). This work is situated around layer 3, supporting the planning algorithm, thus circumventing the issues mentioned above arising from using machine learning for planning. Our algorithm is able to answer the question whether a given situation is safe and suited for executing a lane change. It is trained in a supervised fashion on actual driving data, and is easily extensible to other discrete driving decisions by simply exchanging the training data. A possible application is not only the integration into fully autonomous cars, but also into ADAS supporting the driver in lane change decisions.
We compare our proposed method to existing ones and show its superiority.
As finding useful data labels is a challenge, we additionally give details about existing labeling methods and propose a new automatic labeling scheme.
Recently, more interest in lateral planning has been sparked \cite{7795631, BALAL201647, 7313255, 7576883}, possibly due to the announcement of several car manufacturers to produce autonomous cars within the next years \cite{bmw21, tesla21}. One big field is reinforcement learning, in which deep networks have been applied to decision making for lane changes \cite{A1}. This though somewhat differs from our task due to our focus on situation assessment without planning using supervised learning.
In work more closely related to ours, mostly either mathematical models \cite{nagel}, rule-based systems \cite{BALAL201647} or ``classical'' machine learning methods like Support Vector Machines (SVMs) or decision trees \cite{7795631, 8005678} are used.
In this paper, we propose the first supervised deep learning approach on non-visual data, namely a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units. We further extend this to a Bidirectional RNN. As Bidirectional RNNs either need to know the future of an observed signal or introduce a significant time lag, we integrate a prediction component, for which we use the Intelligent Driver Model (IDM). Fig. \ref{figureteaser} shows our system evaluating a situation with respect to its suitability for a lane change to the left.
We believe that deep learning greatly helps in the understanding of complex scenes and that the recurrent structure is much better suited for understanding the temporal component of this problem, compared to static frame-by-frame approaches. The evaluation of our method is done on the publicly available NGSIM datasets.
The key contributions of our work are:
\begin{itemize}
\item We propose a method for assessing situations with respect to their suitability for lane changes, which is easily extensible to other driving decisions.
\item Our proposed method consists of a Bidirectional RNN and is the first recurrent approach for this problem. It uses the IDM for obtaining an explicit representation of the beliefs over future states.
\item We propose a novel labeling method for this kind of problem and compare it to the one used in existing works.
\end{itemize}
\section{RELATED WORK}
In recent years first partially feasible algorithms for fully self-driving cars, capable of executing lane changes, have been developed and tested on the road. Mostly these were rule-based systems.
A probabilistic framework for decision making was developed by Ardelt et al. and tested during autonomous drives on highways \cite{6200871}.
Ulbrich and Maurer \cite{7313255} used Dynamic Bayesian Networks (DBNs) to evaluate situations to answer the question whether they are feasible and beneficial with respect to lane changes.
Men\'{e}ndez-Romero et al. introduced a multi-level planning framework which makes use of semantic and numeric reasoning \cite{7995915}.
Mukadam et al. used deep reinforcement learning for making lane change decisions \cite{A1}.
In existing literature a lane change maneuver is often categorized into one of three categories: Mandatory lane changes are lane changes forced upon the driver, e.g. due to traffic conditions like an ending lane.
Discretionary lane changes are performed to improve driving conditions, and anticipatory lane changes are done preemptively to avoid, for example, future traffic congestions. Many lane change decision making models partition the process of a lane change into several phases, which start with the decision to change lanes and end with the acceptance of a suitable gap \cite{7795631, BALAL201647}. Assessing gaps thus is a crucial part, and usually a binary classification problem.
Several approaches have been proposed for modeling and predicting human behavior. Dou et al. \cite{7576883} considered mandatory lane change events at lane drops and predicted driver merging behavior with SVMs and simple Feedforward Neural Networks.
Different algorithms were compared by Motamedidehkordi et al. \cite{8005678} for solving the same problem of learning human driving behavior and predicting lane change decisions. The tested algorithms included, amongst others, SVMs and decision trees.
Another important and related problem is the prediction of future driving situations.
Recently LSTMs have been adopted by the autonomous driving community for this, leading to good results in predicting trajectories and future driving maneuvers, outperforming non-recurrent architectures \cite{7565491}.
Most related and comparable to our work are binary classification problems assessing the suitability of a situation for a lane change, as the previously mentioned work from Ulbrich \cite{7313255}.
Nie et al. \cite{7795631} described a gap assessment model for discretionary lane changes using SVMs. For evaluation purposes a selected subset of the NGSIM US Highway 101 (US 101) dataset was used. Their approach outperformed previous ones, correctly classifying 97.48\% of occurring lane changes.
Balal et al. \cite{BALAL201647} addressed the problem using fuzzy logic rules, with the aim of supporting the driver in lane change decisions. The NGSIM Interstate 80 (I-80) and the US 101 dataset were used while considering only certain lanes and time periods. Correct recommendations were given in 90.50\% of the cases. Jeong et al. used Convolutional Neural Networks (CNNs) to classify neighbored lanes as free or blocked based on camera data \cite{7995938}.
\section{PROBLEM FORMULATION}
Our goal is to classify situations into the categories suited and not suited for a lane change (one can also speak of safe and unsafe). Once a target lane has been identified, for each timestep such a classification has to be done.
We adopt the notation of Nie et al. \cite{7795631} concerning involved cars and input features, see Fig. \ref{figurestreet}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.3]{street}
\caption{Cars involved in a lane change decision.}
\label{figurestreet}
\end{figure}
The ego car, for which we want to conduct the situation assessment, is marked by EGO. Its preceding and following car on the same lane are denoted by PV (preceding vehicle) and RV (rear vehicle), respectively. The preceding and following cars on the target lane are called putative leading vehicle (PLV) and putative following vehicle (PFV). Let $x_c$ be the longitudinal coordinate of vehicle $c$, $c \in \{EGO, PV, RV, PFV, PLV \}$, on the rectified projection of its current driving lane and $v'_c$ its velocity. Then the distance from the ego car to another vehicle $c$ is denoted by $d_c = \vert x_{EGO} - x_{c}\vert$, the relative velocity by $v_c = v'_{EGO} - v'_c$ and the temporal distance by $t_c = \frac{d_c}{v_c}$.
Let $f^t$ be a data frame at timestep $t$. Our algorithm assigns a classification $o^t$ to the frame, where $o^t \in \{1, 0\}$ is a label expressing that in timestep $t$ a lane change is possible (safe) or not. Denote the ground truth with $l^t \in \{1, 0\}$. As evaluation metric we use the average accuracy $acc = \frac{acc_p + acc_n}{2}$, where $acc_p$ and $acc_n$ are the fractions of correctly classified frames with $l^t = 1$ and $l^t = 0$, respectively.
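As a reference, a minimal sketch of this metric (in Python):
\begin{verbatim}
def average_accuracy(labels, outputs):
    # Mean of the per-class accuracies on frames
    # labeled 1 (safe) and 0 (unsafe).
    pos = [o for l, o in zip(labels, outputs) if l == 1]
    neg = [o for l, o in zip(labels, outputs) if l == 0]
    acc_p = sum(o == 1 for o in pos) / len(pos)
    acc_n = sum(o == 0 for o in neg) / len(neg)
    return (acc_p + acc_n) / 2
\end{verbatim}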
\section{A RECURRENT MODEL FOR ASSESSING SITUATIONS}
\label{sectionrnn}
This section covers our main contribution, namely an LSTM network for assessing driving situations and its extension, a Bidirectional LSTM.
Bidirectional RNNs use information from future time steps. For training this is feasible, but not for an online implementation. Thus we include a prediction component in the network. This is essentially a black-box algorithm and can be any valid trajectory prediction algorithm. Here we use the IDM.
We train the models in a sequence-to-sequence fashion, meaning for each timestep $t$ we generate an output $o^t \in \{0, 1\}$, indicating the suitability of the situation for a lane change. As input features only $d_{PV}$, $d_{RV}$, $d_{PFV}$ and $d_{PLV}$ are passed to the network, in contrast to previous models \cite{7795631, BALAL201647}, as we expect the networks to learn an understanding of velocity (and more) out of the positions.
\subsection{Intelligent Driver Model}
To obtain a practically feasible Bidirectional RNN we need a component which is able to predict vehicle positions for the near future.
We use the Intelligent Driver Model (IDM) \cite{2000PhRvE..62.1805T}.
It is a car-following model, which describes a rational driver, who tries to keep a safe distance to the preceding car in dense traffic, but accelerates up to a certain desired velocity when this is feasible.
Denote with $X$ the vehicle in question and with $Y$ the vehicle directly in front of it. Let $x_c$ be the position of vehicle $c$ at time $t$, $v'_c$ its speed and $l_c$ its length, $c \in \{X, Y\}$. Define $s_X = x_Y - x_X - l_Y$ and $v_X = v'_X - v'_Y$. Then the model is described by the following system of differential equations:
\begin{equation}
\begin{split}
\dot{x_{X}} & = \frac{dx_{X}}{dt} = v'_{X} \\
\dot{v}'_{X} & = \frac{dv'_{X}}{dt} = a(1 - (\frac{v'_{X}}{v_0})^{\delta} - (\frac{s^*(v'_{X}, v_{X})}{s_{X}})^2) \\
s^*(v'_{X}, v_{X}) & = s_0 + v'_{X}T + \frac{v'_{X} v_{X}}{2 \sqrt{ab}}
\end{split}
\end{equation}
$v_0$, $s_0$, $T$, $a$, $b$ and $\delta$ are freely choosable model parameters with the following meaning: $v_0$ is the desired velocity,
$s_0$ a minimum spacing,
$T$ is the desired time headway.
$a$ describes the maximal vehicle acceleration,
$b$ the comfortable braking behavior.
$\delta$ is an acceleration exponent.
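As an illustration, a single IDM evaluation can be implemented directly from these equations. The following Python sketch uses common parameter values from the literature as defaults; they are not values we fix in this work:
\begin{verbatim}
import math

def idm_acceleration(x_X, vel_X, x_Y, vel_Y, l_Y,
                     v0=33.0, s0=2.0, T=1.5,
                     a=1.0, b=1.5, delta=4.0):
    # vel_* denote the speeds v'; dv is the
    # approach rate v_X = v'_X - v'_Y.
    s = x_Y - x_X - l_Y          # net gap s_X
    dv = vel_X - vel_Y
    s_star = s0 + vel_X * T \
           + vel_X * dv / (2.0 * math.sqrt(a * b))
    return a * (1.0 - (vel_X / v0) ** delta
                - (s_star / s) ** 2)
\end{verbatim}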
Best prediction results could of course be obtained by considering all cars in a scene; however, this is not feasible, as for an online implementation we can only examine cars seen by the sensors of the ego car. With modern radar sensors, though, it is definitely possible to spot and detect the surrounding vehicles PV, RV, PFV and PLV, and even the cars preceding or trailing these (provided these cars are within a certain distance of the ego car; if not, their influence in the IDM is negligible anyway).
Thus for each needed future timestep we use the IDM to predict the new velocity and position of the vehicles PV, RV, PFV, PLV and EGO. For EGO, RV and PFV our prediction will be more accurate, as their preceding vehicles PV and PLV are part of the prediction model and will be reevaluated in each step. For the vehicles PV and PLV we simply observe their preceding cars at the current moment, and from then on assume a constant velocity over the whole prediction period.
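Building on the previous sketch, this prediction procedure may look as follows (illustrative Python; vehicles are dictionaries with position, speed and length, and the observed leaders of PV and PLV enter as frozen, constant-velocity entries):
\begin{verbatim}
def predict_positions(vehicles, leader_of, frozen,
                      horizon, dt=0.1):
    # Euler rollout of the IDM; 'frozen' vehicles
    # keep a constant velocity.
    trajectory = []
    for _ in range(int(horizon / dt)):
        acc = {}
        for name, veh in vehicles.items():
            if name in frozen:
                acc[name] = 0.0
            else:
                lead = vehicles[leader_of[name]]
                acc[name] = idm_acceleration(
                    veh["x"], veh["v"],
                    lead["x"], lead["v"], lead["l"])
        for name, veh in vehicles.items():
            veh["x"] += veh["v"] * dt
            veh["v"] = max(0.0, veh["v"] + acc[name] * dt)
        trajectory.append(
            {n: v["x"] for n, v in vehicles.items()})
    return trajectory
\end{verbatim}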
\subsection{Long Short-Term Memory Network}
In this subsection we introduce the non-predictive LSTM model; see \cite{gers1999learning} for a definition of LSTMs.
Experiments showed best performance when converting the inputs into an occupancy grid and embedding this before feeding it to the LSTM. The grid is partitioned into four parts - same lane before EGO; same lane behind EGO; target lane before EGO; target lane behind EGO. Each part describes 100 meters of road and is discretized into boxes representing 10 meters each. Only the vehicles PV, RV, PLV and PFV are considered and represented by a 1 in the corresponding vector, the remaining entries are filled with 0s. This way the embedding process resembles the embedding of discrete words, amongst others used in machine translation \cite{DBLP:journals/corr/MikolovSCCD13}, which inspired our decision.
The output of the LSTM is transformed with a softmax function into a probability distribution, from which the maximum is taken as the final classification of the network. The full network is described by the following equations, see Fig. \ref{figurelstm} for a visualization:
\begin{figure}[!t]
\centering
\includegraphics[scale=0.20]{bilstm}
\caption{Visualization of the LSTM network. The observed inputs $d_x$ are converted into the grid representation $g_x$, embedded and fed to the LSTM units. The output $o^t$ is obtained by applying a softmax function to the output of the LSTM units. The standard LSTM network consists only of the red colored LSTM cells, for the Bidirectional LSTM network also the orange ones are used. This additional layer processes information in a reversed temporal order.}
\label{figurelstm}
\end{figure}
\begin{equation}
\begin{split}
e^t & = emb(W_{emb}, b_{emb}, [g^t_{pv}; g^t_{rv}; g^t_{pfv}; g^t_{plv}] )\\
(h^t, c^t) & = LSTM(e^t, h^{t-1}, c^{t-1})\\
o^t & = softmax(W_o h^t + b_o)
\end{split}
\end{equation}
Here $g^t_x$ describes the one-hot vector for vehicle $x$ obtained by the discretization of $d^t_x$ described above, and $emb(W, b, [x_1, x_2, \ldots]) = [W x_1 + b; W x_2 + b; \ldots]$. The function $LSTM$ denotes all calculations done in a single LSTM unit, returned are the unit's output representation $h^t$ and its internal state $c^t$.
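A minimal PyTorch sketch of these equations is given below. The hidden size follows the value reported in Section~\ref{sectionresults}; everything else, in particular the embedding width, is an illustrative assumption:
\begin{verbatim}
import torch
import torch.nn as nn

class LaneChangeLSTM(nn.Module):
    """Per-vehicle linear embedding of the one-hot grid parts,
    an LSTM, and a softmax output, as in the equations above."""
    def __init__(self, cells=10, emb=32, hidden=128):
        super().__init__()
        self.emb = nn.Linear(cells, emb)   # shared (W_emb, b_emb)
        self.lstm = nn.LSTM(4 * emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)    # (W_o, b_o)

    def forward(self, grids):
        # grids: (batch, time, 4, cells), one slice per PV/RV/PFV/PLV
        e = self.emb(grids).flatten(2)     # concatenated embeddings e^t
        h, _ = self.lstm(e)                # h^t for every timestep
        return torch.softmax(self.out(h), dim=-1)   # o^t
\end{verbatim}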
\subsection{Bidirectional Long Short-Term Memory Network}
Bidirectional LSTMs extend standard LSTM networks by adding another LSTM layer which processes information backwards. The network then has access not only to information from the past, but also from the future. In each step, the output of these otherwise separate layers is combined and further processed to obtain the final output.
For training, real future trajectories are used; for testing, trajectories predicted with the help of the IDM. This anticipation helps the algorithm assess traffic situations, as situation assessment is never a static problem. A prediction, which happens only implicitly in, for example, the previously described LSTM or an SVM model, is now done explicitly. Consider a situation in which a fast car approaches the ego vehicle in the target lane from behind, but starts braking strongly. At first a lane change might not seem safe, but it might well be after the braking maneuver. A conventional situation assessment algorithm might decide likewise, due to the detected braking, implicitly calculating future vehicle positions and resulting gaps. Our proposed bidirectional LSTM does this explicitly by querying a prediction component about future states.
We concatenate the outputs of both LSTM units before feeding them to the softmax layer.
The full modified model looks like this, compare Fig. \ref{figurelstm}:
\begin{equation}
\begin{split}
e^t & = emb(W_{emb}, b_{emb}, [g^t_{pv}; g^t_{rv}; g^t_{pfv}; g^t_{plv}] )\\
(h^t_F, c^t_F) & = LSTM(e^t, h^{t-1}, c^{t-1})\\
(h^t_B, c^t_B) & = LSTM(e^t, h^{t+1}, c^{t+1})\\
o^t & = softmax(W_o [h^t_F; h^t_B] + b_o)
\end{split}
\end{equation}
Let $C = \{PV, RV, PLV, PFV\}$.
For training, in each iteration we feed sequences of length $T_F$ to the network, in which $g^t_x$ is derived from real trajectories for each $t \in \{1, \ldots, T_F\}$, $x \in C$. While doing so we reset the internal state of the backward LSTM, $c_B$, after each $T_B$ steps. This way at each timestep the network is limited to seeing at most $T_B$ future steps, which is the prediction horizon of our prediction component. During testing, in each iteration we feed sequences of length $T_B$ to the network, repeating the following for each time step $t \in \{1, \ldots, T_B\}$: $g^t_x$, $x \in C$ is derived from real vehicle positions, and all $g^{t'}_x$, $t' \in \{t+1, \ldots, T_B\}$, $x \in C$ are derived from predicted trajectories. The output $o^t$ is saved and appended to the list of final outputs.
We use $T_F = T_B = 10$ seconds; note though that arbitrary values are possible, in particular $T_F > T_B$. Greater values for $T_F$ and $T_B$ might increase the performance of the algorithm; we recommend setting $T_B$ as large as a reliable prediction from the prediction component can be guaranteed. The test-time procedure is sketched below.
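The following pseudocode-style Python sketch summarizes the test-time loop; the interfaces \texttt{observed} and \texttt{predict} are hypothetical placeholders for the sensor data and the IDM rollout:
\begin{verbatim}
def assess_sequence(model, observed, predict, T_B):
    """At each step t the past frames are real and the future frames
    are filled in by the prediction component (the IDM rollout)."""
    outputs = []
    for t in range(T_B):
        past = observed[:t + 1]            # real frames up to t
        future = predict(t, T_B - t - 1)   # predicted frames t+1..T_B
        o = model(past + future)           # forward + backward pass
        outputs.append(o[t])               # keep the output at step t
    return outputs
\end{verbatim}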
\section{DATA DESCRIPTION AND DATA LABELING}
Since our solution relies on supervised learning, data labeling is necessary. We want to annotate each data frame $f^t$ with a label $l^t \in \{0, 1\}$, indicating whether $f^t$ is suited for a lane change or not.
In this section we first briefly introduce the used NGSIM datasets. Then we describe a labeling approach used in previous works and briefly discuss its drawbacks. Eventually we propose an improvement of this principle and additionally introduce a new labeling method.
\subsection{NGSIM Dataset}
The Next Generation Simulation (NGSIM) project contains several publicly available traffic data sets \cite{ngsim}. Here we use the Interstate 80 Freeway Dataset (I-80) and the US Highway 101 Dataset (US 101). In both cases, traffic information of a specific road segment was recorded for a period of time. The available information contains trajectories of every recorded vehicle and additional information, like lane boundaries. There are 6 highway lanes available, as well as an off- and on-ramp. See Fig. \ref{figureus101} for a visualization. Measurements were updated 10 times a second.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.3]{us101}
\caption{Area of study in US 101 dataset (size proportions not realistic). The I-80 dataset looks similar, except it is missing the off-ramp.}
\label{figureus101}
\end{figure}
\subsection{Action-Based Labeling}
To the best of our knowledge, all previous works use a principle which could be called ``action-based labeling'' to generate meaningful labels for data frames \cite{7795631, BALAL201647, 7576883}. The idea is to look for actually occurring lane changes and in this way learn from the behavior of the human drivers. A lane change maneuver begins once a vehicle moves towards the lane boundary without oscillations and with a certain speed (0.213 m/s), and terminates when the vehicle crosses the lane boundary, in accordance with the definition in Nie et al. \cite{7795631}. This period of time is called $T_P$, and all included data frames are labeled with $l^t = 1$. Unfortunately, negative samples are neither as well defined nor as easily identified, since the drivers' interior motives are not observable. Existing works thus assume that before the execution of a lane change the driver takes a certain period of time $T_N$ to analyze the situation, assessing it as not suited for the lane change until eventually a suited situation occurs. All data frames in $T_N$ are labeled with $l^t = 0$.
We extend this approach by filtering out examples which present no information gain for any machine learning algorithm.
Intuitively, we only want to include situations in our learning set in which there was a significant change from the negatively labeled data frame to the positively labeled one, and the situation changed from probably less suited for a lane change to probably more suited (see Fig. \ref{figuresituation}). Additionally, we require a minimum time gap of 1 second between samples with different labels (which translates to an unweighted ``maybe'' class for the LSTM networks).
\begin{figure}[!t]
\centering
\includegraphics[scale=0.45]{situation}
\caption{In both scenarios the lane change of the target car (displayed in green) happens in the second frame. In a) the situation several seconds before the lane change, i.e. in $T_N$, looks very similar, thus we deem this example unsuited. In scenario b) the situation changed significantly, thus this example has a higher chance of having a useful label.}
\label{figuresituation}
\end{figure}
This gives the following formal definition: Let $t_n \in T_N$ and $t_p \in T_P$. Then
\begin{equation}
\begin{split}
ad &= d_{PV} + \gamma\, d_{PLV} + d_{RV} + \gamma\, d_{PFV} \\
&+ \beta \left(\vert v_{PV} \vert + \gamma \vert v_{PLV} \vert + \vert v_{RV} \vert + \gamma \vert v_{PFV} \vert\right) \\
sd &= v_{PV} + \gamma\, v_{PLV} - v_{RV} - \gamma\, v_{PFV} - ad
\end{split}
\end{equation}
with $\gamma = 2$ and $\beta = 1.8$. These values were chosen to represent the relative importance of the two lanes, as well as average driving behavior on highways.
The sample pair $(t_n, t_p)$ is only included in the learning set if $\frac{\vert ad_{t_n} - ad_{t_p} \vert}{ad_{t_n}} \ge 0.35$ (condition 1) and $sd_{t_n} \ge sd_{t_p}$ (condition 2). Condition 1 ensures a sufficient relative change between the situations, condition 2 the better suitability of the positive example. That $sd_t$ is indeed an approximation of the degree of suitability can easily be verified by inserting different variable values.
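For concreteness, the filter may be implemented as follows; this is a sketch with hypothetical data structures mapping vehicle names to the distances and relative velocities of a frame:
\begin{verbatim}
GAMMA, BETA = 2.0, 1.8

def ad(d, v):
    return (d["PV"] + GAMMA * d["PLV"] + d["RV"] + GAMMA * d["PFV"]
            + BETA * (abs(v["PV"]) + GAMMA * abs(v["PLV"])
                      + abs(v["RV"]) + GAMMA * abs(v["PFV"])))

def sd(d, v):
    return (v["PV"] + GAMMA * v["PLV"] - v["RV"] - GAMMA * v["PFV"]
            - ad(d, v))

def keep_pair(frame_n, frame_p):
    """Conditions 1 and 2 on a (t_n, t_p) sample pair; each frame
    is a (d, v) tuple of per-vehicle dictionaries."""
    ad_n, ad_p = ad(*frame_n), ad(*frame_p)
    return (abs(ad_n - ad_p) / ad_n >= 0.35    # condition 1
            and sd(*frame_n) >= sd(*frame_p))  # condition 2
\end{verbatim}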
We call the NGSIM datasets annotated with this method ``action-based dataset''.
When using recurrent models, one concern with this approach is a possible overfitting of the networks to the specific temporal structure exhibited by the created sequences, which always start with negative labels and end with positive ones. To prevent this we augment our data to make the distribution more random and diverse by adding prolonged sequences, starting and ending at random time points and occasionally containing only positive or negative labels.
{
\setlength\extrarowheight{2pt}
\begin{table}[b!]
\caption{Results on both datasets. A denotes the action-based dataset, B the automatic.}
\label{table_results}
\begin{center}
\tabcolsep=0.15cm
\begin{tabular}{| c | c | c | c | c | c | c |}
\hline
& IDM & SVM & SVM$^*$ & LSTM & Bi-LSTM$^*$ & Bi-LSTM\\
\hline
A & - & 77.24\% & 78.62\% & 88.76\% & 92.59\% & 88.19\% \\
\hline
B & 61.10\% & 80.70\% & 57.90\% & 83.08\% & 88.49\% & 87.03\% \\
\hline
\end{tabular}
\end{center}
\end{table}
}
\subsection{Automatic Labeling}
As an alternative to the method described in the previous subsection we propose an automatic labeling scheme. For this, we run twice over the entire data set, once setting the left lane as hypothetical target lane, once the right lane. For each data frame $t$ we examine all data frames $f^i$ up to 3 seconds in the future and evaluate the observed time gaps in the target lane ($t_{PFV}$ and $t_{PLV}$). If $t_{PFV} \ge 1$ and $t_{PLV} \ge 1$ hold in every examined data frame $f^i$, we set $l^t = 1$, otherwise $l^t = 0$. The idea behind this labeling method is that a situation is deemed suitable for a lane change if the lane change can be executed while all participants continue driving close to their original speed, i.e. no hard acceleration or braking is necessary.
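A sketch of the labeler for one pass (per hypothetical target lane) is given below; the time gaps are assumed to be available as arrays sampled at 10\,Hz, so a 3-second horizon corresponds to 30 frames:
\begin{verbatim}
def automatic_label(t_pfv, t_plv, t, horizon=30, min_gap=1.0):
    """t_pfv[i], t_plv[i]: observed time gaps (s) to PFV and PLV
    in the target lane at frame i."""
    future = range(t, min(t + horizon + 1, len(t_pfv)))
    ok = all(t_pfv[i] >= min_gap and t_plv[i] >= min_gap
             for i in future)
    return 1 if ok else 0
\end{verbatim}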
Note that this method cannot be used as a simple and perfect (with respect to this labeling) situation assessment algorithm, as it uses information about future states, which are not available in an online application.
We call the NGSIM datasets annotated with this scheme ``automatic dataset''.
When comparing data frames labeled by the action-based scheme to the labels given by the automatic scheme, about 75\% of the labels match.
One advantage of the automatic labeling method is the number of labeled samples, which now comprises the full data set, compared to only the data frames around lane change events in the action-based labeling scheme. Further, we avoid the problem of wrong negative labels due to the unobservability of the drivers' intentions, and we are able to model more aggressive or more passive drivers by changing the minimum time gaps.
On the downside, the labels are created by a theoretical concept rather than by actual human behavior. Further, in dense traffic situations fixed time gaps are sometimes not applicable in practice: in order to manage a lane change, a driver might have to aggressively push into a gap and expect other drivers to react.
\section{RESULTS}
\label{sectionresults}
In this section we present our findings from evaluating the different algorithms. First we briefly describe an SVM implementation for a competitive evaluation of our novel algorithm.
We test all models on both the action-based and the automatic dataset, and also extend the SVM approach with future trajectory data to obtain a fair comparison.
For the predictive approaches we test with real future data as a theoretical upper bound for performance, as well as use the IDM for a realizable prediction to show that this bound can (almost) be reached.
\subsection{Support Vector Machine Model}
A reimplementation of the SVM model presented by Nie et al. \cite{7795631} is used. The examples are chosen accordingly by random but balanced sampling from all labeled data frames. The approach is extended by equipping each sample with future data frames (5 and 10 seconds later), in order to examine the influence of the prediction component.
\subsection{Evaluation}
The algorithms under examination are the standard SVM approach (SVM), the SVM approach making use of real future trajectories (SVM$^*$), the presented LSTM network (LSTM), the Bidirectional LSTM network with real future trajectories (Bi-LSTM$^*$) and the Bidirectional LSTM with trajectories predicted by the IDM (Bi-LSTM). For the automatic dataset, a baseline using only the IDM is also given: here the prediction is the resulting automatic label when the IDM is used to estimate future vehicle positions in the following 3 seconds.
For evaluation both the datasets I-80 and US 101 were merged and used. 5-fold cross-validation was used for the LSTM networks. For the SVM models the average results of 5 runs with an 80:20 ratio of training and test examples are shown, since the execution of the SVM approaches involves random sampling from the dataset.
During the splitting into training and test set every track and lane change maneuver was always fully assigned to either the training or the test set, never partially to both sets.
For the SVM models, a Gaussian radial basis function worked best. A grid search was performed to find the best values for the relevant parameters. Different experiments led to the used LSTM parameters. A single layer was used consisting of 128 hidden units, the regularization parameter was set to 0.001.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.27]{results1}
\caption{Visualization of different scenarios from the automatic dataset. The ego car is drawn in green, the important surrounding vehicles in white, which indicates the target lane. The situation fed to the algorithms is the one at $t=0$. For each scenario the development after 2 seconds is shown. Next to the scenarios the correct label and the predicted labels from the SVM, the LSTM and the Bidirectional LSTM are shown in this order. A ``1'' defines a situation which is suitable for a lane change, a ``0'' one that is not.}
\label{figureresult1}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.4]{results2}
\caption{A closer analysis of situation c) from Fig. \ref{figureresult1}. The situation is drawn 2 and 4 seconds later, once as predicted by the IDM and once as actually unfolded.}
\label{figureresult2}
\end{figure}
The results are shown in Table \ref{table_results}.
As can be seen, already the first LSTM approach significantly outperforms the SVM methods on both datasets. For the SVM approach only the real future trajectories were passed as inputs and the method was not tried in combination with the IDM, since even this theoretically perfect prediction model does not produce better results than the LSTM approaches. We also see that the Bidirectional LSTM with real future trajectories further improves the classification score on both datasets. When substituting these with predictions from the IDM, a small deterioration in accuracy is observed, which is to be expected. For the action-based dataset the results are comparable to those of the non-predictive LSTM network, indicating a relatively accurate prediction. For the automatic dataset the Bidirectional LSTM with predicted trajectories proves to be an improvement over the non-predictive case.
Possible explanations are the better suitability of the long continuously labeled tracks of the automatic dataset, and the lack of questionable labels compared to the action-based dataset.
The IDM baseline alone performs poorly, proving that the combination of situation assessment with deep learning and prediction is needed.
Fig. \ref{figureresult1} shows three scenarios from the automatic dataset. Scenario a) is recognized by the SVM, the LSTM and the Bidirectional LSTM (using predicted trajectories) approach correctly as suited for a lane change. In scenario b) the SVM fails, as it does not weigh the velocity of the approaching car in the target lane enough. In scenario c) also the LSTM approach fails. While the situation looks safe at $t=0$, the ego vehicle will have approached the preceding car in the target lane 2 seconds later. This is accurately predicted by the IDM, as can be seen in Fig. \ref{figureresult2}. Note that at $t=4$ EGO will have overtaken PFV, which is not considered in our prediction model: indeed, including these dynamics in our model represents an interesting future research direction.
Fig. \ref{fullres} shows the temporal development of an exemplary situation over 10 seconds. The scene starts in congested traffic, where the situation is unsuited for a lane change. As the cars move, a suitable gap forms around the ego vehicle, which is anticipated by the Bidirectional LSTM, although a bit too early. Eventually the gap closes again due to a fast approaching car from behind, which is assessed relatively accurately by the Bidirectional LSTM. The SVM performs worse; it is less accurate and does not handle the temporal changes well. Note that a classification accuracy of 88\% does not mean that 12\% of the sequences are completely mistaken, but instead that 12\% of all frames are misclassified (see Fig. \ref{fullres}: the prediction output of the Bi-LSTM closely matches the ground truth, except that the exact moments of label change do not align perfectly). By using, for example, delayed or conservative action planning, or an additional filter over multiple frames on top, one can expect very safe driving behavior.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{fullres}
\caption{Temporal development of a situation. The images are each taken 1 second apart. In the diagram below, the correct label of each frame is displayed as well as the predictions from the SVM and the Bidirectional LSTM. Note that for the latter the probability $p^t$ for $o^t=1$ is shown (the output of the softmax layer), compared to the discrete outputs 0 and 1 for the SVM and the ground truth; the predicted output is $1$ if $p^t > 0.5$.}
\label{fullres}
\end{figure*}
\section{CONCLUSIONS AND FUTURE WORK}
In this paper we considered the problem of assessing situations with respect to their suitability for a lane change maneuver. We proposed a novel deep learning architecture based on LSTM networks. We also introduced a novel labeling technique and discussed drawbacks and limitations of the approach used by previous works. We tested our method on the publicly available NGSIM dataset and achieved an average accuracy of 83.08\%\,--\,88.76\%, outperforming existing algorithms. By extending our model to a Bidirectional LSTM with an integrated prediction component, for which we used the IDM, our accuracy improved to 87.03\%\,--\,88.19\%.
Possible applications are the integration of our system into ADAS or fully self-driving cars, supporting these in making decisions when to safely execute a lane change.
We believe our architecture is applicable to a wide range of problems by simply switching the underlying dataset, giving a method to use supervised learning with recorded driving behavior to assess the feasibility of many different maneuvers.
In the future we would like to extend and apply our method to these different situations, e.g. include rural and urban scenarios for lane changes and test different maneuvers like yielding and merging at crossroads and roundabouts.
Additionally, we would like to create a dataset labeled by humans according to their situation assessment and compare this to the two labeling methods used here.
Another interesting line of work is the improvement of the prediction component: here, instead of the IDM, many different methods, among others LSTMs, are conceivable, possibly improving accuracy and narrowing the gap between realizable prediction and the upper bound of a perfect prediction.
Further, integrating additional sensor readings could improve performance of the LSTM model, which could easily be done by filling the occupancy grid with all detected objects.
{\small
\bibliographystyle{IEEEtran}
\label{sec:intro}
Flavor studies of semi-leptonic $B$ decay processes have been developed in recent years to test the electroweak sector of the Standard Model (SM) and to look for evidence of New Physics (NP) effects.
In particular, the charged current is described as
\begin{align}
\mathcal H_\text{eff} = 2\sqrt2 G_F V_{qb} C_\text{model} \, (\bar q \Gamma^\mu b) (\bar l \Gamma_\mu \nu) \,,
\end{align}
with $C_\text{SM} = 1$ and $\Gamma^\mu = \gamma^\mu P_L$ in the SM, where $G_F$, $V_{qb}$, and $P_{L/R}$ denote the Fermi coupling constant, the Cabibbo-Kobayashi-Maskawa~\cite{Cabibbo:1963yz,Kobayashi:1973fv} (CKM) matrix element, and chiral projection operators $(1\mp\gamma_5)/2$, respectively.
Throughout this paper, $l$ indicates all the charged leptons ($l=e,\mu,\tau$), whereas $\ell$ represents the light charged leptons ($\ell = e,\mu$).
Then, the NP effect in the Wilson coefficient (WC) $C_\text{NP}$, with an arbitrary Lorentz structure $\Gamma$, can be analyzed from the $B$ decay observables.
The exclusive processes of $\bar B \to {D^{(*)}} \ell \bar\nu$ and $\bar B \to \pi \ell \bar\nu$ are used to determine $|V_{cb}|$ and $|V_{ub}|$, respectively.
In Ref.~\cite{Iguro:2020cpg}, the authors extended the fit analysis for the $|V_{cb}|$ determination to include NP effects on the decay distributions with the help of the updated treatment of the form factors~\cite{Bordone:2019vic}.
Then, it turns out that non-zero NP contributions with the size of $C_\text{NP} \sim \mathcal O (\%)$ are possible in the $b \to c\ell\nu$ current, as will be shown later.
A similar study for $b \to u\ell\nu$ would be available in the future.
On the other hand, the tau modes $\bar B \to {D^{(*)}} \tau \bar\nu$ are of particular interest
because of the excesses in the experimental measurements of $R_{{D^{(*)}}} = \mathcal B (\bar B \to {D^{(*)}} \tau\bar\nu) / \mathcal B (\bar B \to {D^{(*)}} \ell\bar\nu)$ compared with the SM predictions.
The current measurements are summarized as $R_{\scalebox{0.7}{$D$}}^\text{\scalebox{0.7}{exp:WA}} = 0.340(27)(13) $ and $R_{\scalebox{0.7}{$D^*$}}^\text{\scalebox{0.7}{exp:WA}} = 0.295(11)(8)$
while several SM predictions are reported, {\it e.g.}, $R_{\scalebox{0.7}{$D$}}^\text{\scalebox{0.7}{SM}} = \{0.299(4),\, 0.297(6),\, 0.289(4)\}$ and $R_{\scalebox{0.7}{$D^*$}}^\text{\scalebox{0.7}{SM}} = \{0.258(5),\, 0.245(4),\, 0.248(1)\}$,
where the first ones are from HFLAV~\cite{Amhis:2019ckw} and the latter two are from Ref.~\cite{Iguro:2020cpg} based on two different sets for the form factors.
Then, many studies point out that the excesses can be explained by several types of NP currents in $b \to c\tau\nu$ with $C_\text{NP} \sim \mathcal O (10\%)$.
There also exists the NP study for $b \to u\tau\nu$ as in Ref.~\cite{Tanaka:2016ijq}, and then the typical size of the constraint is $\mathcal O (10\%)$ as well.
In light of the above situation, it is natural to ask if we can access such NP effects of $1\,\text{--}\,10\%$ order in searches at the Large Hadron Collider (LHC).
In Ref.~\cite{Greljo:2018tzh}, $\tau + \text{missing}$ searches by ATLAS~\cite{Aaboud:2018vgh} and CMS~\cite{Sirunyan:2018lbg} have been applied to put the LHC bound on $C_\text{NP}$ in the $b \to c\tau\nu$ current.
Then, they found that the current LHC data constrains NP scenarios addressing the $R_{{D^{(*)}}}$ excesses.
See also Refs.~\cite{Dumont:2016xpj,Altmannshofer:2017poe,Iguro:2017ysu,Abdullah:2018ets,Iguro:2018fni,Baker:2019sli,Marzocca:2020ueu} for other studies.
One can expect that the $\ell + \text{missing}$ search by ATLAS with $139~\text{fb}^{-1}$~\cite{Aad:2019wvl} can be used to obtain LHC bounds on $b \to q \ell\nu$ with $q = c, u$ as well.
In this work, we will obtain the LHC bounds for all the possible types of the NP currents in $b \to q l\nu$.
{\it A novel point} of this work lies, however, not only in such a comprehensive analysis, but rather in pointing out the difference between the Effective-Field-Theory (EFT) description and actual NP models, as shown below.
A common outlook on these LHC analyses is that NP constraints are obtained only from the high $p_T$ tail of an SM background (BG) due to a $W$ resonance.
To be precise, the transverse mass with $m_T \sim 2\,\text{--}\,3\,\text{TeV}$ is sensitive to the NP contributions.
In this case, the EFT description is not always appropriate for actual NP models, whose effect is usually encoded in $C_\text{NP}$, as will be shown in this work.
We will clarify this point and show that the LHC bound on a WC depends on the mass of the underlying NP particle.
For this purpose, we focus on NP models with non-resonant $m_T$ distribution, namely, leptoquark (LQ) models.
Eventually, we will show that the EFT limit is {\it not} applicable for LQ particle masses less than $\mathcal O (10) \,\text{TeV}$ due to the angular and energy dependence of the charged lepton $l^\pm$, which has not received much attention so far.
Our paper is organized as follows.
In Sec.~\ref{sec:EFT} and Sec.~\ref{sec:LQ}, we define the model independent NP interactions in $b \to q l\nu$ and the corresponding LQ models, along with the summary of the current flavor bounds.
In Sec.~\ref{sec:collider}, we show the $m_T$ distribution of the cross section, and see how the EFT and the LQ model differ at the high $p_T$ tail.
Then we present our analysis for the LHC bound on the NP contribution.
In Sec.~\ref{sec:discussion}, we compare the flavor and LHC bounds, and indicate significance of non-EFT constraints.
Finally, our summary and discussion are given in Sec.~\ref{sec:conclusion}.
\section{\boldmath Effective Field Theory and flavor bound}
\label{sec:EFT}
In this work, we start with the weak EFT basis Hamiltonian for the semi-leptonic process $b \to ql\nu$:
\begin{align}
\label{Eq:effH}
{\mathcal H}_{\rm{eff}}=2 \sqrt 2 G_FV_{qb} \Bigl[
& (1 + C_{V_1}^{ql}) (\overline{q} \gamma^\mu P_Lb)(\overline{l} \gamma_\mu P_L \nu_{l}) \notag \\
& +C_{V_2}^{ql} (\overline{q} \gamma^\mu P_Rb)(\overline{l} \gamma_\mu P_L \nu_{l}) \notag \\[0.2em]
& +C_{S_1}^{ql} (\overline{q} P_Rb)(\overline{l} P_L \nu_{l}) \notag \\[0.2em]
& +C_{S_2}^{ql} (\overline{q} P_Lb)(\overline{l} P_L \nu_{l}) \\
& +C_{T}^{ql} (\overline{q} \sigma^{\mu\nu}P_Lb)(\overline{l} \sigma_{\mu\nu} P_L \nu_{l}) \Bigr] + \text{h.c.} \notag
\end{align}
Then, the NP contributions are involved in the WCs $C_X^{ql}$ with the SM normalization $2 \sqrt 2 G_FV_{qb}$.
In this work, we take $| V_{cb} | = 0.0410(14)$ and $| V_{ub} | = 0.00382(24)$ from PDG2020~\cite{Zyla:2020zbs}.
We do not consider the right-handed neutrinos.
The $B$ meson decays are described with respect to the WCs at a low energy scale $\mu = m_b$ while a NP model is set at a scale $\mu = \Lambda$.
In flavor physics, the EFT limit $q^2 \ll \Lambda^2$ is a good approximation for $\Lambda \gtrsim \mathcal O(100)\,\text{GeV}$.
In this case, the corresponding WC is given as
\begin{align}
\label{eq:WCEFT}
2 \sqrt 2 G_FV_{qb} C_X (\Lambda) = N_X {h_1 h_2 \over M_\text{NP}^2} \,,
\end{align}
where $M_\text{NP}$ is the mass of a NP particle and $h_{1,2}$ are its couplings to the SM fermions; one may typically take $\Lambda = M_\text{NP}$.
The numerical factor $N_X$ depends on the Lorentz structure of the EFT operator.
Then, the WCs at these two scales, $C_X (m_b)$ and $C_X (\Lambda)$, are connected by renormalization-group equation (RGE).
In this work we follow Ref.~\cite{Iguro:2018vqb} for the formula.
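For numerical orientation, the tree-level matching of Eq.~\eqref{eq:WCEFT} can be evaluated as in the following minimal Python sketch; the example couplings and mass are arbitrary:
\begin{verbatim}
import math

G_F = 1.1663787e-5            # Fermi constant in GeV^-2
V_CB, V_UB = 0.0410, 0.00382  # CKM inputs used in this work

def wilson_coefficient(h1, h2, m_np_gev, n_x, v_qb):
    """C_X(Lambda) = N_X h1 h2 / M_NP^2, normalized by the SM
    factor 2*sqrt(2)*G_F*V_qb (the matching relation above)."""
    return (n_x * h1 * h2 / m_np_gev**2
            / (2 * math.sqrt(2) * G_F * v_qb))

# A V_1-type LQ (N_X = 1) with unit couplings at M_LQ = 2 TeV:
print(wilson_coefficient(1.0, 1.0, 2000.0, 1.0, V_CB))  # ~ 0.18
\end{verbatim}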
Existing flavor bounds on $C_X (m_b)$ are summarized as below.
\begin{itemize}
\item
$b \to c \ell\nu$:
the comprehensive fit analysis~\cite{Iguro:2020cpg} of the semi-leptonic processes $\bar B \to {D^{(*)}} \ell\bar\nu$ points to non-zero preferred values of the WCs for $V_2$ and $T$.
The fit along with the form factors leads to $C_{V_2}^{c\ell} (m_b) = +0.02(1)$, $+0.05(1)$ and $C_{T}^{c\ell} (m_b) = +0.02(1)$ depending on the form factor description.
See Ref.~\cite{Iguro:2020cpg} for detail. \\[-0.7em]
%
\item
$b \to c \tau\nu$:
the excesses of the $R_{{D^{(*)}}}$ measurements have been studied with the EFT approach, and it has been indicating the possibility of NP explanations.
Based on the form factor in Ref.~\cite{Iguro:2020cpg}, we derive updated allowed regions for $C_{X}^{c\tau} (m_b)$ from the aforementioned latest experimental results, assuming $C_{X}^{c\ell} (m_b)=0$.
The fit result for each NP scenario can be written as $C_{V_1}^{c\tau} (m_b) = +0.09(3)$, $C_{V_2}^{c\tau} (m_b) = \pm0.42(7) i$, and $\text{Re} C_{T}^{c\tau} (m_b) = +0.15(7)$ with $\text{Im} C_{T}^{c\tau} (m_b)$ to be fixed as $\pm 0.19$.
We will also show allowed contour plots on the complex plane later (see Fig.~\ref{Fig:contour}) along with the LHC bound.
Note that our present analysis excludes the scalar scenarios $S_{1,2}$.
In particular, the $S_{2}$ solution to the excesses is not consistent with the condition $\mathcal B(B_c \to \tau\nu) \lesssim 30\%$, extrapolated from the $B_c$ lifetime~\cite{Alonso:2016oyd}.\footnote{
As is well known, the $S_{1}$ scenario has no solution to the excesses, which results in more than $99.8\%$ CL exclusion.} \\[-0.7em]
%
\item
$b \to u \tau\nu$:
there is the NP study for $\bar B \to \pi \tau\bar\nu$ and $\bar B \to \tau\bar\nu$ in Ref.~\cite{Tanaka:2016ijq}.
We update the bounds as
$C_{V_1}^{u\tau} (m_b) = +0.03(15)$, $C_{V_2}^{u\tau} (m_b) = +0.02(15)$, $C_{S_{1/2}}^{u\tau} (m_b) = 0.00(4)$, $\mp0.53(4)$, and $C_{T}^{u\tau} (m_b) = +0.17(25)$, $-0.94(29)$,
which are zero-consistent within $1\sigma$, although the latter two have degenerated results.
\end{itemize}
In addition to them, we also evaluate NP constraints from $B_c \to \ell\nu$ and $B \to \ell\nu$.
The former fills missing pieces of the $C_{S_{1,2}}^{c\ell}$ constraints for $b \to c\ell\nu$, while the latter gives the flavor bound for $b \to u\ell\nu$, which is not shown above.
Note that these two-body decays are not affected by the $T$ operator because of its Lorentz structure.
\begin{itemize}
\item
The way to derive $\mathcal B(B_c \to \tau\nu) \lesssim 30\%$ from the $B_c$ lifetime is indeed independent of the lepton flavor.
Therefore, the same condition can be employed to constrain the NP effect in $b \to c\ell\nu$.
Given the condition, the $S_{1,2}$ contributions are constrained as $| C_{S_{1,2}}^{ce, c\mu} (m_b) | < 0.8$ in the electron and muon modes.
For both lepton modes, the $V_{1,2}$ bounds are very loose.\footnote{
It is obtained as $| C_{V_{1,2}}^{c\ell} (m_b) | < \mathcal O(100)$.
We treat such a case as ``unbound'' in this paper.
} \\[-0.7em]
\item
The branching ratio of $B \to \mu\nu$ has been measured for the first time,
and the result is given as $\mathcal B (B \to \mu\nu) = (6.46 \pm 2.22 \pm 1.60) \times 10^{-7}$ ~\cite{Sibidanov:2017vph}.
Then, we can obtain $C_{V_{1/2}}^{u\mu} = \pm 0.2(4)$ and $C_{S_{1/2}}^{u\mu} = \pm 3(6) \times 10^{-3}$, $\mp 35(6) \times 10^{-3}$.
On the other hand, the upper limit of the branching ratio for $B \to e\nu$ is so far obtained as $< 9.8 \times 10^{-7}$ (90\% CL), which can be compared with the SM value, $(9.8 \pm 1.4) \times 10^{-12}$.
Even in this case, the $S_{1,2}$ scenarios are restricted as $| C_{S_{1,2}}^{ue} | < 0.02$.
\end{itemize}
With the help of these evaluations, we have the exhaustive list of the flavor bounds as in Table~\ref{Tab:flavorbound}.
These existing and newly evaluated flavor bounds will be compared with the LHC bounds obtained in this work.
\begin{table}[t]
\renewcommand{\arraystretch}{1.3}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{lccccc}
\hline\hline
& $V_1$ & $V_2$ & $S_1$ & $S_2$ & $T$ \\
\hline
$C_X^{c e}$ & -- & $0.02(1)$ & $|<0.8|$ & $|<0.8|$ & $0.02(1)$ \\
\hline
$C_X^{c \mu}$ & -- & $0.02(1)$ & $|<0.8|$ & $|<0.8|$ & $0.02(1)$ \\
\hline
$C_X^{c \tau}$ & $0.09(3)$ & $\pm0.42(7)i$ & excluded & excluded & $0.15(7)^{**}$ \\
\hline
$C_X^{u e}$ & -- & -- & $|<0.02|$ & $|<0.02|$ & -- \\
\hline
$C_X^{u \mu}$ & $0.2(4)$ & $-0.2(4)$ & $3(6) \!\times\! 10^{-3} \,^*$ & $-3(6) \!\times\! 10^{-3} \,^*$ & -- \\
\hline
$C_X^{u \tau}$ & $0.03(15)$ & $0.02(15)$ & $0.00(4)^*$ & $0.00(4)^*$ & $0.17(25)^*$ \\
\hline\hline
\end{tabular}
}
\caption{
Status of applicable flavor bounds.
``$|< n|$'' means $|C_X| < n$ where those for $C_{S_{1,2}}^{ce,c\mu}$ are the bounds from the theoretical estimates whereas that for $C_{S_{1,2}}^{ue}$ is obtained from the $90\%$ CL upper limit.
The results with $\!\,^*$ indicate that there exist another best fit point outside of $C_X \sim 0$.
The $C_T^{c \tau}$ result $(^{**})$ is given real part by fixing the imaginary part to be $\text{Im} C_T^{c \tau} = \pm 0.19$.
See the main text for these details.
}
\label{Tab:flavorbound}
\end{center}
\end{table}
\section{\boldmath EFT realizations: leptoquark models }
\label{sec:LQ}
In collider physics, on the other hand, the EFT description is not always applicable.
It rather depends on details of the NP model and of the analysis to be used.
In this work, we will test the LQ models to see at which scale the EFT limit of Eq.~\eqref{Eq:effH} becomes a good approximation for the present and future LHC searches.
The LQ interactions are classified in Ref.~\cite{Buchmuller:1986zs} with the general $SU(3)_c \times SU(2)_L \times U(1)_Y$ invariant form.
In this work, we leave aside details of the model constructions and consider only the explicit interactions relevant for the present study.
In the following subsections, we introduce LQ interactions that generate each operator in Eq. (\ref{Eq:effH}).
The SM gauge representations for the LQ fields are summarized in Appendix~\ref{sec:LQrepresentation}.
\subsection{\boldmath ${V_1}$ operator}
\label{sec:CV1}
The ${V_1}$ operator is constructed by the vector leptoquark $\text{U}_1^\mu$.
The interaction term of interest is written as
\begin{align}
\label{eq:V1op}
\mathcal L_{V_1} = h_\text{LQ}^{ij} \Big(\bar u_i \gamma_\mu P_L \nu_j + \bar d_i \gamma_\mu P_L \ell_j \Big) \text{U}_{1}^\mu + \text{h.c.},
\end{align}
and then the corresponding WC for $b \to q_m \ell_n\bar\nu_n$ is obtained as
\begin{align}
\label{eq:LQV1}
&2\sqrt{2}G_FV_{qb} C_{V_1}^{q_m\ell_n} = + { h_{\text{LQ}}^{mn} h_{\text{LQ}}^{*3n} \over M_{\text{LQ}}^2 } \,,&
\end{align}
for $q_{1,2} =(u, c)$ and $\ell_{1,2,3}=(e,\mu,\tau)$ by performing the Fierz transformation.
A similar realization for $C_{V_1}$ is possible by means of the other LQs of $\text{S}_1$, ${\mathbf S}_3$, and ${\mathbf U}_3^\mu$ as shown in Ref.~\cite{Sakaki:2013bfa}.
See also Appendix~\ref{sec:LQrepresentation}.
\subsection{\boldmath ${V_2}$ operator}
\label{sec:CV2}
The ${V_2}$ operator is constructed by the scalar leptoquark $\text{R}_2^{2/3}$, where $2/3$ denotes the electromagnetic charge.
The interaction term
\begin{align}
\label{eq:V2op}
\mathcal L_{V_2} = \left( h_{\text{LQ}_1}^{ij} \bar u_{i} P_L \nu_{j} + h_{\text{LQ}_2}^{ij} \bar d_{i} P_L \ell_{j} \right) \text{R}_2^{2/3} + \text{h.c.} \,,
\end{align}
leads to
\begin{align}
\label{eq:LQV2}
&2\sqrt{2}G_FV_{qb} C_{V_2}^{q_m\ell_n} = + { h_{\text{LQ}_1}^{mn} h_{\text{LQ}_2}^{*3n} \over 2M_\text{LQ}^2 } \,.&
\end{align}
Concerning a practical model, we need two $SU(2)$ doublet LQ fields $\text{R}_2 = (\text{R}_2^{5/3}~\text{R}_2^{2/3})$ and $\text{R}'_2 = (\text{R}_2^{\prime 2/3}~\text{R}_2^{\prime -1/3})$ to construct the SM gauge invariant form,
as written in Appendix~\ref{sec:LQrepresentation}.
Since the LHC signature for the current process is unchanged, we do not consider such a case.
\subsection{\boldmath ${S_1}$ operator}
\label{sec:CS1}
The ${S_1}$ operator is constructed by the vector leptoquark $\text{U}_1^\mu$ such that
\begin{align}
\label{eq:S1op}
\mathcal L_{S_1} = \left( h_{\text{LQ}_1}^{ij} \bar u_{i} \gamma_\mu P_L \nu_{j} + h_{\text{LQ}_2}^{ij} \bar d_{i} \gamma_\mu P_R \ell_{j} \right) \text{U}_{1}^\mu + \text{h.c.} \,,
\end{align}
and we have
\begin{align}
\label{eq:LQS1}
&2\sqrt{2}G_FV_{qb} C_{S_1}^{q_m\ell_n} = -{2 h_{\text{LQ}_1}^{mn} h_{\text{LQ}_2}^{*3n} \over M_\text{LQ}^2 }\,.&
\end{align}
Another realization is given by the $\text{V}_2^\mu$ LQ~\cite{Sakaki:2013bfa}.
\subsection{\boldmath ${S_2}$ and $T$ operators}
\label{sec:CS2}
The ${S_2}$ and $T$ operators are always connected due to the properties of the Fierz transformation.
They are constructed by the two scalar leptoquarks $\widetilde{\text{R}}_2^{2/3}$ and $\widetilde{\text{S}}_1$.
Note that for $\widetilde{\text{LQ}}$ the index of the (quark, lepton) pair is flipped to (lepton, quark) in our notation, purely for calculational convenience.
The single ${S_2}$ and $T$ terms are respectively realized from
\begin{align}
\label{eq:S2op}
\mathcal L_{S_2} =
& \left(-\tilde h_{\text{LQ}_1}^{ji} \bar \nu_{j} P_R d_{i}^c + \tilde h_{\text{LQ}_2}^{ji} \bar \ell_{j} P_L u_{i}^c \right) \widetilde{\text{S}}_1 \notag \\
& +\left( \tilde h_{\text{LQ}_2}^{ji} \bar \nu_{j} P_R u_{i} - \tilde h_{\text{LQ}_1}^{ji} \bar \ell_{j} P_L d_{i} \right) \widetilde{\text{R}}_2^{2/3} + \text{h.c.} \,, \\
%
\label{eq:LQS2}
& \text{with}~~ 2\sqrt{2}G_FV_{qb} C_{S_2}^{q_m\ell_n} = - { \tilde h_{\text{LQ}_1}^{n3*} \tilde h_{\text{LQ}_2}^{nm} \over M_\text{LQ}^2 } \,,
\end{align}
and
\begin{align}
\label{eq:Top}
\mathcal L_{T} =
& \left(-\tilde h_{\text{LQ}_1}^{ji} \bar \nu_{j} P_R d_{i}^c + \tilde h_{\text{LQ}_2}^{ji} \bar \ell_{j} P_L u_{i}^c \right) \widetilde{\text{S}}_1 \notag \\
& -\left( \tilde h_{\text{LQ}_2}^{ji} \bar \nu_{j} P_R u_{i} + \tilde h_{\text{LQ}_1}^{ji} \bar \ell_{j} P_L d_{i} \right) \widetilde{\text{R}}_2^{2/3} + \text{h.c.} \,, \\
%
\label{eq:LQT}
& \text{with}~~ 2\sqrt{2}G_FV_{qb} C_{T}^{q_m\ell_n} = + { \tilde h_{\text{LQ}_1}^{n3*} \tilde h_{\text{LQ}_2}^{nm} \over 4M_\text{LQ}^2 } \,,
\end{align}
where $u^c$ and $d^c$ denote charge conjugated quarks.
Again, a practical model requires the $SU(2)$ doublet LQ field $\text{R}_2 = (\text{R}_2^{5/3}~\text{R}_2^{2/3})$.
\subsection{\boldmath Mass scale restriction}
\label{sec:MassScale}
LQ searches have been performed in the QCD pair production and single production channels, with subsequent decays into a pair of quark and lepton.
Recent studies~\cite{Sirunyan:2018jdk,Aaboud:2019bye,Aad:2020iuy} obtain a lower limit on the LQ mass of $\sim 1.5~\text{TeV}$ by assuming a 100\% branching ratio for the subsequent decay.
On the other hand, the present high $p_T$ searches with the $l + \text{missing}$ events can access a larger LQ mass region since LQs produce non-resonant signals in this case.
\begin{figure}[t!]
\begin{center}
\includegraphics[viewport=0 0 620 311, width=26em]{Fig_MassScale_v4.pdf}
\caption{
\label{Fig:MassScaleCheck}
Relations between the LQ mass and the WC at the NP scale by fixing the LQ coupling.
}
\end{center}
\end{figure}
Once $C_X$, at any scale, is given from flavor observables and/or LHC studies, the corresponding LQ mass is restricted, as long as the LQ coupling is kept small enough for perturbation theory to apply \cite{DiLuzio:2017chi}.
In Fig.~\ref{Fig:MassScaleCheck}, we plot the relation between $C_X (\Lambda)$ and $M_\text{NP}$ of Eq.~\eqref{eq:WCEFT}.
The numerical factor $N_X$ is fixed as explained in the legend.
The Lorentz structures correspond to $N_{S_1}=2$; $N_{V_1,S_2} = 1$; $N_{V_2}=1/2$; and $N_T=1/4$ for the present LQ models.
Then, one can check the accessible LQ mass from the plot.
For instance, $|C_X^{cl} (\Lambda)| \sim \mathcal O(0.01)$ is produced with $M_\text{LQ} \lesssim 10\,\text{TeV}$ for $h_\text{LQ} \lesssim 1$ in $ b \to c l \nu$.
The mass could be $M_\text{LQ} \sim 50\,\text{TeV}$ at maximum if we allow the coupling as large as $h_\text{LQ} =4$.
\section{\boldmath Collider study}
\label{sec:collider}
At the LHC, the EFT operators of Eq.~\eqref{Eq:effH} contribute to $pp \to l^\pm + \text{missing}$ from $b \bar q \to l^-\bar\nu$ and $\bar b q \to l^+\nu$ for $q = u, c$.
The SM contribution is dominantly given by the $W^\pm$ exchange and generates a resonant structure at $M_{W^{\pm}}$ in the distribution for the transverse mass
\begin{align}
m_T = \sqrt{2p_\text{T}^l E_\text{T}^\text{miss} (1-\cos \phi_{l\nu})} \,,
\end{align}
where $\phi_{l\nu}$ is the azimuthal angle between $l$ and the missing momentum.
On the other hand, the present NP effects are off resonant and widely spread in the $m_T$ distribution.
Thus one can expect that a tail of the resonance, namely a larger $m_T$ range, is sensitive to the NP effect.
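For reference, a minimal Python sketch of this observable reads as follows; the numerical example is an illustrative back-to-back configuration near the $W$ Jacobian peak:
\begin{verbatim}
import math

def m_T(pt_lep, met, dphi):
    """Transverse mass of the lepton + missing-momentum system."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# Back-to-back configuration with pT = MET = 40 GeV:
print(m_T(40.0, 40.0, math.pi))   # 80 GeV, near m_T ~ M_W
\end{verbatim}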
\subsection{\boldmath EFT limit}
\label{sec:Xsl}
To see NP effects on signal event distributions, here we show analytic forms of the parton-level cross sections for $b \bar c \to e^- \bar\nu$ in the LQ models,
and then see the EFT limit of the models.
Defining the four momenta as
\begin{align}
p_c^\mu & = E(1, 0, 0, 1) \,, \label{Eq:pc} \\
p_b^\mu & = E(1, 0, 0, -1) \,, \label{Eq:pb} \\
p_e^\mu & = E(1, \sin\theta, 0, \cos\theta) \,, \\
p_\nu^\mu & = E(1, -\sin\theta, 0, -\cos\theta) \,,
\end{align}
we obtain
\begin{align}
{ d\sigma_X \over d\cos\theta } = {1 \over 3} {|\mathcal M_X|^2 \over 128\pi E^2} \,,
\end{align}
with the spin-averaged squared matrix element $|\mathcal M_X|^2$ (the factor ${1 \over 4}\sum_\text{spin}$ is left implicit), written as
\begin{align}
| \mathcal M_{V_1}^\text{LQ} |^2 =
& \,4\, (h_\text{LQ}^{21} h_\text{LQ}^{31*})^2 E^4 \hat C_{t}^2 (1 - \cos\theta)^2 \,, \label{Eq:XsV1} \\[0.5em]
%
| \mathcal M_{V_2}^\text{LQ} |^2 =
& \,(h_{\text{LQ}_1}^{21} h_{\text{LQ}_2}^{31*})^2 E^4 \hat C_{t}^2 (1 + \cos\theta)^2 \,, \label{Eq:XsV2} \\[0.5em]
%
| \mathcal M_{S_1}^\text{LQ} |^2 =
& \,16\, ( h_{\text{LQ}_1}^{21} h_{\text{LQ}_2}^{31*})^2 E^4 \hat C_{t}^2 \,, \\[0.5em]
%
| \mathcal M_{S_2/T}^\text{LQ} |^2 =
& \,( \tilde h_{\text{LQ}_2}^{12*} \tilde h_{\text{LQ}_1}^{13} )^2 E^4 \left[ \hat C_{t}^2 (1 + \cos\theta)^2 \right. \label{Eq:XsS2T} \\
&\left.+ \hat C_{u}^2 (1 - \cos\theta)^2 \pm 2 \hat C_{t} \hat C_{u} (1-\cos^2\theta) \right] \,, \notag
\end{align}
where $\hat C_{t}$ and $\hat C_{u}$ involve the LQ propagator written as
\begin{align}
\label{eq:Prop_t}
\hat C_{t} & = \left[ 2 E^2 (1 + \cos\theta) + M_\text{NP}^2 \right]^{-1} \,, \\
%
\label{eq:Prop_u}
\hat C_{u} & = \left[ 2 E^2 (1 - \cos\theta) + M_\text{NP}^2 \right]^{-1} \,.
\end{align}
In the EFT limit, the angular and energy dependence of the propagator is suppressed such that $\hat C_{t} \simeq \hat C_{u} \simeq 1/M_\text{NP}^2$, and thus we have
\begin{align}
| \mathcal M_{V_1}^\text{LQ} |^2 \simeq
& \,4\, { (h_\text{LQ}^{21} h_\text{LQ}^{31*})^2 \over M_\text{LQ}^4} E^4 (1 - \cos\theta)^2 \,, \\
%
| \mathcal M_{V_2}^\text{LQ} |^2 \simeq
& \, { (h_{\text{LQ}_1}^{21} h_{\text{LQ}_2}^{31*})^2 \over M_\text{LQ}^4} E^4 (1 + \cos\theta)^2 \,, \\
%
| \mathcal M_{S_1}^\text{LQ} |^2 \simeq
& \,16\, { (h_\text{LQ}^{21} h_\text{LQ}^{31*})^2 \over M_\text{LQ}^4} E^4 \,, \\
%
| \mathcal M_{S_2}^\text{LQ} |^2 \simeq
& \,4\, {( \tilde h_{\text{LQ}_2}^{12*} \tilde h_{\text{LQ}_1}^{13} )^2 \over M_\text{LQ}^4} E^4 \,, \\
%
| \mathcal M_{T}^\text{LQ} |^2 \simeq
& \,4\, {( \tilde h_{\text{LQ}_2}^{12*} \tilde h_{\text{LQ}_1}^{13} )^2 \over M_\text{LQ}^4} E^4 \cos^2\theta \,.
\end{align}
We can see that the relations of Eqs.~\eqref{eq:LQV1}, \eqref{eq:LQV2}, \eqref{eq:LQS1}, \eqref{eq:LQS2}, \eqref{eq:LQT} are achieved in this limit.
The parton energy $E$ fluctuates in the proton, and is distributed as $0 < E < \sqrt{s}/2$.
The energy distribution is weighted with the parton distribution function (PDF), according to which $b$ and $\bar c$ quarks tend to have low energy.
Namely, the distribution in the high energy range is suppressed, and hence the EFT limit is a good approximation even for $M_\text{NP} \lesssim \mathcal O(1) \,\text{TeV}$ as far as the total cross section is concerned.
However, this is not the case for our analysis.
For the present process, the NP sensitivity is gained at large $p_T$ due to the large SM background at lower $p_T$.
In other words, the LHC bound for NP is provided from the signal event distribution with high $p_T$, which arises from the high energy parton at the cost of the PDF suppression.
To be precise, $m_T \sim 2\,\text{--}\,3\,\text{TeV} (\sim\,E)$ is most sensitive in the present case.
Therefore, the EFT limit $E^2 \ll M_\text{NP}^2$ should be valid only for $M_\text{NP} > \mathcal O(10) \,\text{TeV}$.
If $E^2 \lesssim M_\text{NP}^2$ is the case, the angular and energy dependence in the propagators $\hat C_{t}$ and $\hat C_{u}$ is of critical importance since it affects the $m_T$ distribution,
as will be seen below.
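To quantify this, one can compare the full $t$-channel propagator factor of Eq.~\eqref{eq:Prop_t} with its EFT limit $1/M_\text{NP}^2$. A minimal Python sketch, with an illustrative parton energy in the sensitive region, reads:
\begin{verbatim}
def prop_ratio(E, cos_t, m_np):
    """Ratio of the full t-channel factor C_t to its EFT limit
    1/M_NP^2 for parton energy E (GeV) and angle theta."""
    return m_np**2 / (2 * E**2 * (1 + cos_t) + m_np**2)

E = 1500.0                           # illustrative parton energy
for m in (2e3, 5e3, 1e4, 1e5):       # NP masses in GeV
    print(m, prop_ratio(E, 0.0, m))  # 0.47, 0.85, 0.96, ~1.00
\end{verbatim}
The ratio deviates from unity by roughly a factor of two for $M_\text{NP} = 2\,\text{TeV}$ but is already close to one for $M_\text{NP} \gtrsim 10\,\text{TeV}$, in line with the mass dependence found below.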
\subsection{\boldmath Numerical analysis}
\label{sec:analysis}
Here we perform numerical analyses to obtain constraints on the WCs from the $pp \to l^\pm + \text{missing}$ searches at the 13~TeV LHC by means of the aforementioned LQ models described in Sec.~\ref{sec:LQ}.
In this work, we apply numerical setup of the $\ell\nu$ and $\tau\nu$ searches by ATLAS~\cite{Aad:2019wvl} and CMS~\cite{Sirunyan:2018lbg}, respectively.
\subsubsection{Simulation setup}
\label{sec:setup}
We generate 100K signal events for every LQ mass $M_\text{LQ} = 2$, $3$, $5$, $7.5$, $10$, $20$, and $100\,\text{TeV}$ in each model,
by using {\tt Madgraph5}~\cite{Alwall:2014hca} with {\tt NNPDF2.3}~\cite{Ball:2012cx} through {\tt PYTHIA8}~\cite{Sjostrand:2014zea} within the five flavor scheme.
These generated events are interfaced to {\tt DELPHES3}~\cite{deFavereau:2013fsa} for the fast detector simulation.
Then, we apply the following sets of the selection cuts.
For the $\ell=e,\mu$ modes, we require
$n_{\ell} = 1$, $p_{T,\ell}> 65~\text{GeV}$, $|\eta_{\ell}| < 2.5$,
and $E\!\!\!/_T > 55~\text{GeV}$ ($\mu$ mode), $65~\text{GeV}$ ($e$ mode)
following the ATLAS search with 139~fb$^{-1}$ at 13~TeV as in Ref.~\cite{Aad:2019wvl}.
Regarding the $\tau$ mode, we require exactly one hadronically decaying tau,
$n_{\tau_h} = 1$, with $p_{T,\tau} > 80~\text{GeV}$ and $|\eta_\tau| < 2.1$,
no isolated $e$ or $\mu$,
$E\!\!\!/_T > 200~\text{GeV}$, $0.7< p_{T,\tau}/E\!\!\!/_T < 1.3$,
and $|\Delta \phi(p_{\tau},E\!\!\!/_T)|>2.4$,
following the CMS search with 35.9~fb$^{-1}$ at 13~TeV as in Ref.~\cite{Sirunyan:2018lbg}.
Figure~\ref{Fig:Distributions} shows the $m_T$ distributions of $pp \to e^\pm + \text{missing}$ for the tensor type NP in the $bu\to e \nu$ mode,
fixing $C_T^{ue} (\Lambda_\text{LHC}) = 1$ but with different values of $M_\text{LQ}$ as an illustration.
One can see that the distributions converge in the low $m_T$ region, which implies that the EFT limit is valid there, as it is for the flavor phenomenology.
On the other hand, the discrepancy among them at the high $m_T$ region is significant.
When the WC is fixed, more events are expected as $M_\text{LQ}$ increases,
because the relative importance of the energy-dependent terms in the $t/u$-channel propagators of Eqs.~(\ref{eq:Prop_t}) and (\ref{eq:Prop_u}) becomes smaller for the same $m_T$ value.
Similar tendencies are observed for all the types of operators, and also for the $bc\to l\nu$ cases.
For the $bc\to l\nu$ cases, however, the size of the discrepancy among the masses is relatively small,
since the initial $c$-parton contributing to those processes is less energetic than the $u$-parton.
Nevertheless, it generates a significant difference in the LHC bound.
\begin{figure}[t!]
\begin{center}
\includegraphics[viewport=0 0 567 568, width=20em]{Fig_mTdistribution_CTue.pdf}
\caption{
\label{Fig:Distributions}
The simulated $m_T$ distributions of $pp \to e^\pm + \text{missing}$ for the tensor type NP in the $bu\to e\nu$ mode with $M_\text{LQ}=2, 3, 5, 7.5, 10, 20$ and $100\,\text{TeV}$ by fixing $C_T^{ue} (\Lambda_\text{LHC})=1$.
}
\end{center}
\end{figure}
\subsubsection{LHC bound on $C_{X} (\Lambda_\text{LHC})$}
\label{sec:bound}
After the event selection cuts, we obtain the $m_T$ distributions and extract the constraints on the WCs based on them.
For the $e, \mu$ modes, we use the $95\%$ confidence level (CL) upper bounds on model-independent NP contributions with $m_T > m_{T,\min}$,
provided in Table 4 and Table 5 of Ref.~\cite{Aad:2019wvl}.
We take the $m_{T,\min}$ threshold value providing the strongest constraint for each model. Note that for the electron mode,
a deficit of events in the tail region is observed, which results in stronger constraints than expected.
For the $\tau$ mode, we perform the same analysis based on the background $m_T$ distribution in Ref.~\cite{Sirunyan:2018lbg}.
To be conservative, we assign a $30$\,--\,$50\%$ systematic uncertainty for $m_{T,\min}=$\,1.5\,--\,3~TeV.
Then, an upper limit on the LQ coupling(s) $h_{\text{LQ}_{(i)}}$ is derived for every $M_\text{LQ}$.
Finally, we translate it into $C_X (\Lambda_\text{LHC})$.
In this work, we fix the LHC scale to be $\Lambda_\text{LHC} = 1\,\text{TeV}$ for simplicity.
Note that we found the present data to be sensitive to the NP signal events under consideration in the region of $m_T \sim 2\,\text{--}\,3\,\text{TeV}$.
\begin{figure*}[t!]
\begin{center}
\includegraphics[viewport=0 0 972 432, width=46em]{Fig_bcLHCbound_v3.pdf} \\
\includegraphics[viewport=0 0 972 432, width=46em]{Fig_buLHCbound_v3.pdf}
\caption{
\label{Fig:LHCbound}
The 95\% CL upper bounds on $C_X^{ql} (\Lambda_\text{LHC})$ obtained from the $\ell\nu$ and $\tau\nu$ search data by ATLAS~\cite{Aad:2019wvl} and CMS~\cite{Sirunyan:2018lbg}, respectively,
where we fix $\Lambda_\text{LHC} = 1\,\text{TeV}$.
}
\end{center}
\end{figure*}
In Fig.~\ref{Fig:LHCbound}, we show the upper bounds on $C_X^{ql} (\Lambda_\text{LHC})$ at 95\% CL with respect to the fixed $M_\text{NP}$ for all the combinations of $X = (V_1$, $V_2$, $S_1$, $S_2$, $T)$, $q=(c,u)$, and $l = (e, \mu, \tau)$.
One can clearly see that our LHC bounds depend on the LQ mass in the region $M_\text{LQ} < 10\,\text{TeV}$, while they do not for larger LQ masses, as expected.
In particular, a lower LQ mass results in a milder LHC bound on the WCs,\footnote{
For NP models with an $s$-channel mediator, {\it e.g.} a charged scalar or vector boson, the EFT description usually gives weaker bounds at the LHC, which is opposite to the present case.
}
which is straightforwardly inferred from Fig.~\ref{Fig:Distributions}.
We also found that the mass dependence for the $T$ type NP is more significant than that for the other types of NP.
This is a non-trivial feature from the angular dependence as in Eq.~\eqref{Eq:XsS2T}.
One also finds that the EFT results ($M_\text{LQ} > 10\,\text{TeV}$) are independent of the chirality of the fermions, and only sensitive to the Lorentz structure.
However, this does not hold when the EFT limit is not valid.
This is because the angular and energy dependence in the propagators $\hat C_{t,u}$ of Eqs.~\eqref{Eq:XsV1} -- \eqref{Eq:XsS2T} affects the $m_T$ distribution.
One may be interested in the LQ scenario with respect to the single $\widetilde{\text{R}}_2$ ($\widetilde{\text{S}}_1$) LQ particle
that generates the relation $C_{S_2} = +4C_T$ ($C_{S_2} = -4C_T$) at the LQ scale in the EFT limit.
Indeed, the differential cross section for the $\widetilde{\text{R}}_2$ ($\widetilde{\text{S}}_1$) scenario is easily derived from Eq.~\eqref{Eq:XsS2T} by replacing $\hat C_u (\hat C_t) \to 0$.
Then, we can see that the expression coincides with Eq.~\eqref{Eq:XsV2} of the $V_2$ operator (Eq.~\eqref{Eq:XsV1} of $V_1$) up to a scaling factor of 1 (4) in the EFT limit.
Thus, the LHC bounds of these two scenarios, for the $e$ and $\mu$ modes, can be read off from those of $C_{V_1}$ and $C_{V_2}$ in Fig.~\ref{Fig:LHCbound}.
For the $\tau$ mode, on the other hand, the $\tau_L$/$\tau_R$ difference in the effective operators for $C_{V_{1,2}}$ and $C_{S_2}=\pm4C_T$ affects the analysis due to the $\tau$ decay property at the LHC~\cite{Papaefstathiou:2011kd}.
We will see this point in Sec.~\ref{sec:discussion}.
\subsection{\boldmath Future prospects}
\label{sec:prospect}
Next, we discuss future sensitivities of the NP searches with $pp \to l^\pm + \text{missing}$ at the high luminosity (HL)-LHC with 3 ab$^{-1}$ of data.
Re-scaling the BG information used to derive the current bounds, including the BG uncertainties,
we obtain the prospects for the HL-LHC bounds on the WCs in Table~\ref{Tab:LHCsummary}, denoted as ``(w/sys)''.
In the table, the results for the $(2\,\text{TeV} \text{\,--\,} 100\,\text{TeV})$ mass of the LQ particle are summarized.
Since the BG uncertainty in the tail region is significant,
we also show the optimistic case in which only the statistical error is taken into account and the theory uncertainty is assumed to be controlled (negligible) in the future, given in the ``(no~sys)'' rows.
Furthermore, since the SM background at the tail of the $m_T$ distribution is dominated by the $W^\pm$-contribution \cite{Sirunyan:2018lbg,Aad:2019wvl},
we test $S/B$ improvement by selecting $l^-$ events as explained below.
Since the luminosity functions for $u\bar{d}$ and $d\bar{u}$ are not identical, with a ratio $L(u\bar{d})/L(d\bar{u}) \gtrsim 4$ above $2\,\text{TeV}$,
more $l^+$ than $l^-$ events are observed in the SM.
Thus, the ratio of $l^+$ to $l^-$ events is expected to be $N_{l^+}/N_{l^-}\gtrsim 4$ in the tail region for the BG contribution.
On the other hand, no charge asymmetry is expected between the $c\bar{b}$ and $b\bar{c}$ cases, namely $L(c\bar{b})/L(b\bar{c}) \sim 1$.
Therefore, selecting only the $l^-$ events would improve the significance for the $b \to c l\nu$ process.
The results obtained by selecting $l^-$ events are given in the ``(sys, $l^-$)'' rows,
where we assume that the BG contributions are reduced to $1/4$
and the $S/B$ factor is improved by about a factor of two.
It turned out that selecting only the $l^-$ events can potentially improve the sensitivity to $C_X^{cl}$ by $30\,\text{--}\,40\%$ as seen in the table.
We have also checked the effect of selecting $l^-$ events
when no BG systematic uncertainty is assumed. The results improve slightly,
but almost the same numbers as in the ``(no sys)'' rows are obtained; they are thus not shown in the table.
The reason is that, in principle, the selection cut improves neither $S/\sqrt{B}$ nor the resulting sensitivity if the BG uncertainty is already well controlled.
In other words, the $S/B$ improvement by the selection cut
minimizes the effect of the BG uncertainty.
For the case of the $b \to u l\nu$ process, the signal charge asymmetry turns out to be larger than that for the SM BG due to the large ratio of
$L(u\bar{b})/L(\bar{u}b)$.
Hence, selecting $l^+$ is efficient for this case, but the improvement would be limited.
In any case, we think that the charge asymmetry, defined as $A_{l}= (N_{l^+}-N_{l^-})/(N_{l^+}+N_{l^-})$, would be
useful for a more dedicated study of the
$(m_T,A_{l})$ distribution.
If the systematic error is well controlled,
the $m_T$ bins with a large number of events will determine the
bounds, and thus
the smaller $m_T$ bins will become more relevant.
On the other hand, when the systematic error is large,
the bins closer to the tail will be more
effective in setting the bounds, since the number of background
events should be non-negative.
We found that
the $m_{T,\min}$ value providing the strongest bound
lies in $2$\,--\,$3$~TeV even for the HL-LHC.
Therefore, the mass dependence will remain important, as long as the systematic error is non-negligible.
The detailed statistical analysis procedures for the future prospects are as follows.
For each threshold bin $i$, we compute the value of $S_{i}^{95}$ based on the Poisson
distribution satisfying the following criterion.
\begin{align}
\int_0^{B^0_i} dB_i f(B_i) P(S_{i}^{95} + B_i, N_{i,{\rm BG}}) = 0.05,
\end{align}
where $P(S, N)$ is the probability of observing $N$ or fewer events for a Poisson distribution with mean $S$,
and $f(B_i)$ is the probability distribution of $B_i$, the BG contribution for the
threshold bin $i$, taken as a Gaussian distribution with mean $B_i^0$ and width $\Delta B_i$,
restricted to the range $0\leq B_i \leq B_i^0$ and normalized as $\int_0^{B^0_i} dB_i
f(B_i)=1$.
We take the number of events $N_{i, \rm BG}$ to be observed in the future
as the most frequent value of the BG distribution.
Based on the above procedure, the corresponding upper bound on
$C_X$ at 95\% CL for each threshold bin $i$ is obtained.
The minimum value over $i$ is taken as the 95\% CL upper bound on $C_X$.
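A compact numerical sketch of this procedure, using SciPy and purely illustrative bin inputs, reads:
\begin{verbatim}
from scipy.stats import poisson, norm
from scipy.integrate import quad
from scipy.optimize import brentq

def s95(b0, db, n_obs):
    """95% CL signal limit solving the criterion above: the Poisson
    probability P(<= n_obs | S + B) averaged over the truncated
    Gaussian BG distribution f(B) on [0, B0]."""
    z = quad(lambda b: norm.pdf(b, b0, db), 0.0, b0)[0]  # normalization
    def cl(s):
        f = lambda b: norm.pdf(b, b0, db) * poisson.cdf(n_obs, s + b)
        return quad(f, 0.0, b0)[0] / z - 0.05
    return brentq(cl, 0.0, 100.0 + 10.0 * b0)  # bracketed root search

print(s95(b0=3.0, db=1.5, n_obs=3))   # illustrative threshold bin
\end{verbatim}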
Another possibility for future NP searches is to utilize the pseudorapidity distribution.
We discuss it at length in Appendix~\ref{sec:angulardistribution}.
\section{\boldmath Combined constraints on $C_{X} (m_b)$}
\label{sec:discussion}
Here we summarize all the constraints on the WCs at the low energy scale $\mu = m_b$, from both the LHC and the flavor bounds evaluated in this work.
The RGE running effect of $C_X$ from $\Lambda_\text{LHC} = 1\,\text{TeV}$ to $m_b = 4.2\,\text{GeV}$ is numerically presented as
\begin{align}
\begin{pmatrix}
C_{V_1} (m_b) \\
C_{V_2} (m_b) \\
C_{S_1} (m_b) \\
C_{S_2}(m_b) \\
C_T (m_b)
\end{pmatrix}
\simeq
\begin{pmatrix}
1 & 0 &0&0&0\\
0 & 1&0&0&0\\
0 & 0&1.71&0&0\\
0 & 0&0&1.71&-0.27\\
0 & 0&0&0&0.84
\end{pmatrix}
\begin{pmatrix}
C_{V_1} (\rm{\Lambda_{LHC}}) \\
C_{V_2} (\rm{\Lambda_{LHC}}) \\
C_{S_1} (\rm{\Lambda_{LHC}}) \\
C_{S_2} (\rm{\Lambda_{LHC}}) \\
C_T (\rm{\Lambda_{LHC}})
\end{pmatrix} \,,
\label{eq:RGE}
\end{align}
independently of the $(ql)$ index, following the formula in Ref.~\cite{Iguro:2018vqb} (see also Refs.~\cite{Jenkins:2013wua,Alonso:2013hga,Gonzalez-Alonso:2017iyc}).
Hence, the LHC bounds of $C_X^{ql} (m_b)$ are easily obtained by rescaling our results in Fig.~\ref{Fig:LHCbound}.
The $S_2$-$T$ mixing feeds both $C_{S_2} (\rm{\Lambda_{LHC}})$ and $C_T (\rm{\Lambda_{LHC}})$ into $C_{S_2} (m_b)$, as illustrated below.
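For illustration, the running matrix above can be applied directly to a set of WCs; the following minimal Python sketch, with the entries read off Eq.~\eqref{eq:RGE}, shows how a pure tensor coefficient at $\Lambda_\text{LHC}$ generates a scalar coefficient at $m_b$ through this mixing.
\begin{verbatim}
import numpy as np

# RGE matrix from Lambda_LHC = 1 TeV down to m_b = 4.2 GeV, Eq. (RGE);
# basis order (C_V1, C_V2, C_S1, C_S2, C_T), independent of the (ql) index.
RGE = np.array([
    [1.0, 0.0, 0.0,  0.0,   0.0 ],
    [0.0, 1.0, 0.0,  0.0,   0.0 ],
    [0.0, 0.0, 1.71, 0.0,   0.0 ],
    [0.0, 0.0, 0.0,  1.71, -0.27],
    [0.0, 0.0, 0.0,  0.0,   0.84],
])

def run_down(c_at_lhc):
    # Map Wilson coefficients at Lambda_LHC to mu = m_b.
    return RGE @ np.asarray(c_at_lhc)

# A pure tensor coefficient C_T = 0.1 at Lambda_LHC yields, at m_b,
# C_S2 = -0.027 and C_T = 0.084 via the S2-T mixing:
print(run_down([0.0, 0.0, 0.0, 0.0, 0.1]))
\end{verbatim}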
Regarding the flavor bounds, we derived the updated values as written in Sec.~\ref{sec:EFT}
with the use of the recent experimental data and theoretical input~\cite{Amhis:2019ckw,Zyla:2020zbs,Aoki:2019cca}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[viewport=0 0 1080 386, width=48em]{Fig_combined_bc_v5.pdf} \\[0.5em]
\includegraphics[viewport=0 0 1080 386, width=48em]{Fig_combined_bu_v4.pdf}
\caption{
\label{Fig:combined}
Summary of the flavor and LHC bounds of the WCs at $\mu = m_b$ along with the future prospects at the HL-LHC with $3\,\text{ab}^{-1}$ from Table~\ref{Tab:LHCsummary}.
Unless the flavor bound indicates a favored direction on the complex plane, the WC is taken as real.
Note that the prospect for $\text{Re}\, C_T^{c\tau}$ cannot be drawn since the assumed value $\text{Im}\, C_T^{c\tau} =0.19$ already exceeds the prospective bound.
}
\end{center}
\end{figure*}
In Fig.~\ref{Fig:combined}, we show the LHC and flavor bounds on $C_X^{ql}(m_b)$ for $q = (c,u)$ and $l = (e,\mu,\tau)$.
The LHC bounds at $95\%\,\text{CL}$ for the EFT (valid for $M_\text{LQ}>10\,\text{TeV}$) and the $M_\text{LQ}=2\,\text{TeV}$ LQs are displayed in blue and cyan, respectively.
Regarding the flavor bounds, the WC constraints from the semi-leptonic and leptonic processes are given in red and yellow, respectively.
The WCs are taken as real unless there exists a specific direction on the complex plane of the WC, favored by the flavor bound such as $C_{V_2}^{c\tau} (m_b)$ and $C_{T}^{c\tau} (m_b)$.
Note that the ``fit results'' (best point with $1\sigma$ uncertainty) are distinguished from the ``upper limit'' in the figure.
We comment on each mode in turn.
\begin{itemize}
\item
$b \to c \ell \nu$:
the NP effect on this process is significant since $|V_{cb}|$ is extracted from the distribution data of this very process.
According to the previous analysis in Ref.~\cite{Iguro:2020cpg}, non-zero NP contributions of $C_{V_2, T}^{c\ell} (m_b)$ are possible at the $1$--$2\sigma$ level, as shown in the figure.
As for the $V_2$ scenario, the present LHC bounds are generally milder than the flavor ones.
The scalar scenarios for these light lepton modes are bounded by $\mathcal B(B_c \to \ell\nu) < 30\%$ in the same way as the tau lepton mode.
We can see that the LHC results for $C_T^{c\ell}$ and $C_{S_{1,2}}^{c\ell}$ are well competitive with the flavor bounds.
\\[-0.7em]
%
\item
$b \to c \tau \nu$:
the current $R_{D^{(*)}}$ excesses are explained by the $V_1$, $V_2$, and $T$ scenarios as shown in the red bars,
and the LHC bounds are very comparable to these flavor-favored regions.
Our constraints in the EFT limit are weaker than those of Ref.~\cite{Greljo:2018tzh} but consistent with Ref.~\cite{Marzocca:2020ueu}.
It is thus quite significant whether or not the EFT limit is applicable to the corresponding LQ model.
In particular, we found that the $V_2$ and $T$ solutions to the excesses are almost excluded by the LHC bounds in the EFT limit,
whereas these scenarios are still allowed in the LQ models with $M_\text{LQ} = 2\,\text{TeV}$.
For the scalar scenarios, see Sec.~\ref{sec:EFT}.
\\[-0.7em]
%
\item
$b \to u \ell \nu$:
At present, the flavor bound is available only from $B \to \ell\nu$.
For both the electron and muon modes, the scalar scenarios with $|C_{S_i}^{u\ell}| \lesssim \mathcal O(0.01)$ are allowed by the flavor bound, which is much more stringent than the LHC bound.
Regarding the vector scenarios, on the other hand, the flavor and LHC bounds are comparable for the muon mode.
Note that the other NP scenarios, $C_{V_{1,2},T}^{ue}$ and $C_{T}^{u\mu}$, are more strongly constrained by our LHC study than by the leptonic processes.
See also Ref.~\cite{Colangelo:2019axi}.
A comprehensive fit analysis for $B \to (\pi,\rho,a_1)\ell\bar\nu$ in the presence of the NP effects for the flavor bound is left for future work.
\\[-0.7em]
%
\item
$b \to u \tau \nu$:
the LHC bounds are loose, naively given as $|C_{X}^{u\tau} (m_b)| \lesssim \mathcal O(1)$.
Nevertheless, the LHC bound is already significant for the tensor scenario, for which it excludes a part of the region allowed by the flavor bound.
We can also confirm the importance of the non-EFT analysis, as for the other currents.
\end{itemize}
We also present the maximum reaches at the HL-LHC in the ``no sys'' scenario of Table~\ref{Tab:LHCsummary} for comparison.
As can be seen, the HL-LHC sensitivity is significant for the NP scenarios.
For instance, the $V_2$ and $T$ solutions to the current $R_{D^{(*)}}$ excesses can be excluded.
We note that the mass of the NP mediator, namely the LQ in the present case, is theoretically restricted independently of the above bounds.
For instance, once we employ the unitarity bound for NP in $b \to c \tau\nu$ to explain the $R_{D^{(*)}}$ excesses,
the NP mass is restricted as $\lesssim 9\,\text{TeV}$~\cite{DiLuzio:2017chi}.\footnote{
A loose restriction is obtained from Fig.~\ref{Fig:MassScaleCheck}.
}
If this is the case, the EFT description is no longer appropriate for the LHC analysis with the high-$p_T$ tail; rather, it provides an overestimated bound.
To be precise, the EFTs of the $V_{1,2}$ and $S_{1,2}$ types give $> 30 \%$ $(\sim 10\%)$ more stringent LHC bounds than the corresponding LQ cases with $M_\text{LQ} \sim 2\, (5)\,\text{TeV}$.
As for the $T$ type, the difference between the EFT and the $2\,\text{TeV}$ LQ is much larger, as seen in Fig.~\ref{Fig:combined}.
Therefore, our non-EFT study is of critical importance and useful for practical NP scenarios with an NP mediator mass of $\mathcal O(1)\,\text{TeV}$.
\begin{figure*}[t!]
\begin{center}
\includegraphics[viewport=0 0 1180 350, width=48em]{Fig_contour_v3.pdf} \\
\includegraphics[viewport=0 0 1180 350, width=48em]{Fig_contourR2S1.pdf}
\caption{\label{Fig:contour}
The $R_{{D^{(*)}}}$ favored region (red:$1\sigma$\,/\,light-red:$2\sigma$) on the complex plane of $C_X^{c\tau}$
superimposing the LHC bounds at $95\%\,\text{CL}$ for the EFT, 2\,TeV LQ, and future prospects shown in blue, cyan, and gray, respectively.
The left/middle/right panel is for the $V_1/V_2/T$ scenario (upper), while the left/middle for the single $\widetilde{\text{S}}_1$/$\widetilde{\text{R}}_2$ scenario and right for their LHC bounds (lower).
}
\end{center}
\end{figure*}
Lastly, in Fig.~\ref{Fig:contour}, we provide the favored regions on the complex plane of $C_X^{c\tau} (m_b)$ to explain the $R_{D^{(*)}}$ excesses and compare them with the current and prospective LHC bounds.
The single $\widetilde{\text{R}}_2$ and $\widetilde{\text{S}}_1$ scenarios are also presented here since they also provide solutions to the excesses.
Note that the ratio $C_{S_2}/C_T$ at the $m_b$ scale varies with the LQ mass, which affects the allowed region.\footnote{
For the present cases, $C_{S_2} (m_b)/C_T (m_b) \simeq \{ 8.2, 10.2\}$ for $\widetilde{\text{R}}_2$ with $M_\text{LQ} = \{2, 100\}\,\text{TeV}$, while $\simeq \{ -8.7, -11.2\}$ for $\widetilde{\text{S}}_1$.
}
One finds that the $\widetilde{\text{R}}_2$ solution is almost excluded by the LHC bound for the EFT case whereas it is still viable for $M_\text{LQ} \gtrsim 2\,\text{TeV}$.
For both cases, the HL-LHC could test this scenario.
We can also see that the LHC bound for the $\widetilde{\text{R}}_2$ ($\widetilde{\text{S}}_1$) scenario in terms of $C_T$ is slightly more stringent than what is translated from that for the $V_2$ ($V_1$) operator via $C_{V_{2(1)}} \to 4C_T$
(unlike the $e$ and $\mu$ cases mentioned in Sec.~\ref{sec:bound}).
This is due to the fact~\cite{Papaefstathiou:2011kd} that the fraction of $\tau_R$ in the visible $\tau$ decay is larger than that of $\tau_L$ at the LHC.
Thus, by determining the $\tau$ polarization we can distinguish a model that generates the $V_2$ ($V_1$) operator from $\widetilde{\text{R}}_2$ ($\widetilde{\text{S}}_1$), which generates the $S_2$-$T$ combination, although the two have (almost) the same $2\to2$ scattering kinematics.
A similar feature can be seen in measuring the $\tau$ polarization of $\bar B \to D^{(*)}\tau\bar\nu$ at Belle~II, which could distinguish the type of LQ responsible for the $R_{D^{(*)}}$ excesses, see, {\it e.g.}, Ref.~\cite{Iguro:2018vqb}.
We can conclude that one could discover the NP signal even for a heavier LQ mass of $\sim \mathcal O(10)\,\text{TeV}$, if the $R_{D^{(*)}}$ excesses are truly caused by the NP contribution.
Otherwise, these NP scenarios will be excluded.
\section{\boldmath Summary and Discussion}
\label{sec:conclusion}
\begin{table*}[t!]
\renewcommand{\arraystretch}{1.3}
\begin{center}
\scalebox{1.2}{
\begin{tabular}{llccccc}
\hline\hline
& & $S_1$ & $S_2$ & $V_1$ & $V_2$ & $T$ \\
\hline
$C_X^{c e} (\Lambda_\text{LHC})$ & current & $0.30$ -- $ 0.20$ & $0.33$ -- $ 0.20$ & $0.25$ -- $ 0.18$ & $0.30$ -- $ 0.18$ & $0.32$ -- $ 0.13$ \\
&exp (w/sys) & $0.35$ -- $ 0.21$ & $0.37$ -- $ 0.21$ & $0.29$ -- $ 0.19$ & $0.35$ -- $ 0.19$ & $0.37$ -- $ 0.15$ \\
&exp (w/sys, $l^-$) & $0.26$ -- $ 0.16$ & $0.27$ -- $ 0.16$ & $0.21$ -- $ 0.14$ & $0.26$ -- $ 0.14$ & $0.26$ -- $ 0.11$ \\
&exp (no sys) & $0.12$ -- $ 0.09$ & $0.13$ -- $ 0.09$ & $0.10$ -- $ 0.08$ & $0.12$ -- $ 0.08$ & $0.11$ -- $ 0.05$ \\
\hline
$C_X^{c \mu} (\Lambda_\text{LHC})$ &
current & $0.41$ -- $ 0.27$ & $0.45$ -- $ 0.28$ & $0.34$ -- $ 0.25$ & $0.42$ -- $ 0.25$ & $0.35$ -- $ 0.18$ \\
&exp (w/sys) & $0.34$ -- $ 0.22$ & $0.37$ -- $ 0.22$ & $0.28$ -- $ 0.20$ & $0.34$ -- $ 0.20$ & $0.32$ -- $ 0.14$ \\
&exp (w/sys, $l^-$) & $0.24$ -- $ 0.16$ & $0.26$ -- $ 0.17$ & $0.20$ -- $ 0.15$ & $0.25$ -- $ 0.15$ & $0.23$ -- $ 0.10$ \\
&exp (no sys) & $0.10$ -- $ 0.08$ & $0.11$ -- $ 0.08$ & $0.09$ -- $ 0.07$ & $0.11$ -- $ 0.07$ & $0.10$ -- $ 0.05$ \\
\hline
$C_X^{c \tau} (\Lambda_\text{LHC})$ &
current & $ 0.45 $ -- $ 0.32 $ & $ 0.47 $ -- $ 0.32 $ & $ 0.42 $ -- $ 0.32 $ & $ 0.51 $ -- $ 0.33 $ & $ 0.42 $ -- $ 0.20 $ \\
&exp (w/sys) & $ 0.41 $ -- $ 0.20 $ & $ 0.43 $ -- $ 0.22 $ & $ 0.38 $ -- $ 0.19 $ & $ 0.48 $ -- $ 0.25 $ & $ 0.48 $ -- $ 0.15 $ \\
&exp (w/sys, $l^-$) & $ 0.30 $ -- $ 0.18 $ & $ 0.32 $ -- $ 0.18 $ & $ 0.28 $ -- $ 0.18 $ & $ 0.35 $ -- $ 0.19 $ & $ 0.35 $ -- $ 0.12 $ \\
&exp (no sys) & $ 0.13 $ -- $ 0.10 $ & $ 0.13 $ -- $ 0.10 $ & $ 0.12 $ -- $ 0.10 $ & $ 0.14 $ -- $ 0.10 $ & $ 0.09 $ -- $ 0.06 $ \\
\hline\hline
$C_X^{u e} (\Lambda_\text{LHC})$ &
current & $0.72$ -- $ 0.37$ & $0.77$ -- $ 0.35$ & $0.59$ -- $ 0.34$ & $0.75$ -- $ 0.34$ & $0.77$ -- $ 0.23$ \\
&exp (w/sys) & $0.78$ -- $ 0.38$ & $0.84$ -- $ 0.37$ & $0.66$ -- $ 0.36$ & $0.82$ -- $ 0.35$ & $0.91$ -- $ 0.25$ \\
&exp (no sys) & $0.33$ -- $ 0.20$ & $0.36$ -- $ 0.20$ & $0.27$ -- $ 0.19$ & $0.34$ -- $ 0.19$ & $0.29$ -- $ 0.12$ \\
\hline
$C_X^{u \mu} (\Lambda_\text{LHC})$ &
current & $0.99$ -- $ 0.58$ & $1.07$ -- $ 0.57$ & $0.83$ -- $ 0.54$ & $1.04$ -- $ 0.53$ & $0.95$ -- $ 0.35$ \\
&exp (w/sys) & $0.81$ -- $ 0.45$ & $0.86$ -- $ 0.44$ & $0.67$ -- $ 0.42$ & $0.84$ -- $ 0.41$ & $0.83$ -- $ 0.27$ \\
&exp (no sys) & $0.27$ -- $ 0.18$ & $0.29$ -- $ 0.18$ & $0.22$ -- $ 0.17$ & $0.28$ -- $ 0.17$ & $0.25$ -- $ 0.11$ \\
\hline
$C_X^{u \tau} (\Lambda_\text{LHC})$ &
current & $ 1.17 $ -- $ 0.65 $ & $ 1.27 $ -- $ 0.66 $ & $ 1.08 $ -- $ 0.72 $ & $ 1.39 $ -- $ 0.70 $ & $ 1.09 $ -- $ 0.41 $ \\
&exp (w/sys) & $ 0.88 $ -- $ 0.31 $ & $ 0.95 $ -- $ 0.30 $ & $ 0.87 $ -- $ 0.35 $ & $ 1.05 $ -- $ 0.34 $ & $ 1.15 $ -- $ 0.22 $ \\
&exp (no sys) & $ 0.36 $ -- $ 0.23 $ & $ 0.39 $ -- $ 0.23 $ & $ 0.33 $ -- $ 0.24 $ & $ 0.42 $ -- $ 0.24 $ & $ 0.29 $ -- $ 0.14 $ \\
\hline\hline
\end{tabular}
}
\caption{
Summary of the current LHC bounds with $139$ fb$^{-1}$/$35.9$ fb$^{-1}$ for the $\ell$/$\tau$ modes, and the future HL-LHC potential with $3$ ab$^{-1}$, for the LQ cases of the $(2\,\text{TeV} \text{\,--\,} 100\,\text{TeV})$ masses.
Note that the latter case corresponds to the EFT limit.
See the main text for details.
}
\label{Tab:LHCsummary}
\end{center}
\end{table*}
With the help of the recent developments of the $\bar B \to {D^{(*)}}$ form factors, the flavor fit analysis for the $|V_{cb}|$ determination has indicated the possibility of non-zero NP contributions to the $b \to c \ell\nu$ current.
On the other hand, the experimental excesses in the $R_{D^{(*)}}$ measurements have been providing indirect evidence for NP in the $b \to c \tau\nu$ current.
A similar study of the $b \to u l\nu$ current also yields upper limits on the NP effects.
These situations naturally lead us to direct searches at the LHC.
In this paper, we have considered both the Effective-Field-Theory description and the leptoquark models for all the types of the NP currents in $b \to ql\nu$ for $q = (c,u)$ and $l = (e,\mu,\tau)$, and then obtained the comprehensive flavor and LHC bounds with respect to the Wilson coefficients $C_X^{ql}$ defined as in Eq.~\eqref{Eq:effH}.
The $l^\pm + \text{missing}$ searches have been applied for this purpose, where the high $p_T$ tail of the SM background can be used to obtain the NP constraints.
A significant point is that the EFT description is not always valid for constraining actual NP models with the present LHC searches,
since the NP sensitivity is gained from the transverse mass distribution at $m_T \sim 2\,\text{--}\,3\,\text{TeV}$,
and therefore the EFT limit breaks down when the NP mass is of the same order as the $m_T$ bin.
We have shown the clear mass dependence of the $m_T$ distribution in the LQ model for the fixed WC as in Fig.~\ref{Fig:Distributions}.
Our investigation is based on the ATLAS~\cite{Aad:2019wvl} and CMS~\cite{Sirunyan:2018lbg} analyses for $l = (e,\mu)$ and $l =\tau$, respectively.
The LHC bounds of our results are summarized in Fig.~\ref{Fig:LHCbound} and Table~\ref{Tab:LHCsummary}.
Then, we have seen the LQ mass dependence of the LHC bounds.
Furthermore, we have confirmed that the EFT limit is a good approximation for $M_\text{LQ} \gtrsim 10\,\text{TeV}$,
while the vector and scalar type EFTs provide $>30\%$ ($\sim10\%$) overestimated bounds for the smaller masses of $\sim 2\, (5)\,\text{TeV}$.
Regarding the tensor type, the difference is even larger, exceeding $60\%$.
We have also evaluated the potential of the $l^\pm + \text{missing}$ searches at the HL-LHC with $3\,\text{ab}^{-1}$, and obtained the future projections of the HL-LHC sensitivity.
We found that selecting only the $l^-$ events can potentially improve the sensitivity to $C_X^{cl}$ by $30\,\text{--}\,40\%$.
We conclude that the maximum reaches for the WC sensitivity at the HL-LHC are $|C_X^{cl}| \sim 0.1$ and $|C_X^{ul}| \sim 0.1\,\text{--}\,0.3$ mostly independent of the lepton flavor ($l=e,\mu,\tau$) and of the type of the NP operators ($X=V_1,V_2,S_1,S_2,T$).
Finally, we have presented the combined summary of the LHC and flavor bounds on the WCs at the low energy scale $\mu = m_b$ in Figs.~\ref{Fig:combined} and \ref{Fig:contour}.
For some cases, one finds that the current LHC bounds are comparable with the flavor bounds.
Here, we would like to stress again that the LHC bounds for the LQ models with $M_\text{LQ} < 10\,\text{TeV}$ become milder than those for the EFT,
which is quite significant for some of the LQ scenarios when compared with the flavor bounds.
In particular, the $V_2$, $T$, and $\widetilde{\text{R}}_2$-LQ (generating the $S_2$-$T$ mixed operators) solutions to the $R_{D^{(*)}}$ excesses are almost excluded by the LHC bounds in the EFT limit
(as first pointed out in Ref.~\cite{Greljo:2018tzh}),
whereas they are still allowed in the LQ models with $M_\text{LQ} \gtrsim 2\,\text{TeV}$.
This is a key point of our work.
Note that the $V_1$ type NP effects for $e$ and $\mu$ can be absorbed by scaling $V_{qb}$ by $(1 + C_{V_1}^{q\ell})$ since the measurements of these processes determine the CKM elements.
Thus, it is usually hard to constrain them from the flavor observables.
On the other hand, the unambiguous bound on $C_{V_1}^{q\ell}$ is obtained at the LHC, thanks to the distinct $m_T$ distribution.
As closing remarks, we would like to propose some ideas for further improvements of the $l^\pm + \text{missing}$ searches.
A more dedicated analysis including an additional $b$-tag in the $pp\to b l \nu$ mode, along with the LQ mass dependence, could be effective for the NP study.
Studying the multi-dimensional signal distribution
in terms of $(m_T, A_l, \eta)$ could enhance the NP sensitivity.
Therefore, we would encourage both experimental collaborations
to provide the $(m_T, \eta)$ distributions
for each lepton charge separately.
\section*{\boldmath Acknowledgement }
\label{sec:acknowledgement}
We thank A.~Greljo, J.~Martin Camalich, J.~D.~Ruiz-Álvarez, T.~Kitahara, and M.~Endo for valuable comments and discussions on the analysis.
S.\,I. would like to thank KEK, where he stayed during this work, for the warm hospitality.
He acknowledges financial support from JSPS (Grant No.~19J10980).
M.\,T. is supported in part by the JSPS Grant-in-Aid for Scientific Research No.~18K03611 and 19H04613.
S.\,I. and M.\,T. are also supported by the JSPS Core-to-Core Program (Grant No.~JPJSCCA20200002).
R.\,W. is partially supported by the INFN grant ‘FLAVOR’ and the PRIN 2017L5W2PT.
\section{}
A locally compact group $G$ is said to be \emph{amenable} if and only if it has an invariant mean \cite{vN}, that is, there exists $\mu\in \ell^\infty(G)^*$ such that $\mu(1)=1$ and $g\mu=\mu$ for all $g\in G$.
For a countable discrete group this is equivalent to the Reiter condition \cite{Reiter}:
For each $g\in G$ and each $n\in\mathbb{N}$ there is an element $f_n(g)\in \text{Prob}(G)$ of finite support with
\begin{enumerate}
\item $hf_n(g)=f_n(hg)$, and
\item for all $g_0,g_1$, \quad
$\|f_n(g_1) - f_n(g_0)\|_{\ell^1}{\rightarrow} 0 \text{ as }{n\rightarrow \infty}.$
\end{enumerate}
Ringrose and Johnson proved the following characterisation of amenability for a locally compact group, in terms of bounded cohomology.
\begin{thm*}
A group $G$ is amenable if and only if $H^q_b(G,V^*)=0$ for all $q\geq 1$ and all Banach $G$-modules $V$.
\end{thm*}
Moreover amenability is characterised by the vanishing of a specific class in $H^1_b(G,(\ell^\infty/\mathbb{C})^*)$, namely the Johnson class $\mathcal{J}(g_0,g_1)=\delta_{g_1}-\delta_{g_0}$. Here $\delta_g$ denotes the Dirac delta function supported at $g$ in $\ell^1(G)$; the difference $\delta_{g_1}-\delta_{g_0}$ lies in $\ell^1_0(G)$, which is included in $\ell^1_0(G)^{**}\cong (\ell^\infty/\mathbb{C})^*$ in the usual way. In \cite{ourpaper} we observed that in fact this characterisation also applies in classical (unbounded) group cohomology $H^1(G,(\ell^\infty/\mathbb{C})^*)$.
Nigel Higson posed the question of whether there is a corresponding result for metric spaces and property A. Property A was introduced by Yu \cite{Yu}, as a non-equivariant analogue of amenability. The group action in the definition of amenability is replaced by a controlled support condition.
In this paper we answer Higson's question affirmatively by constructing analogues of group cohomology, bounded cohomology and the Johnson class, for a metric space $X$. These cohomologies have coefficients in a geometric module over $X$.
We use the following definition of property A. This is equivalent to Yu's original definition for spaces of bounded geometry \cite{HR}.
\begin{defn}
\label{propAdef}
A metric space $X$ is said to have property A if for each $x\in X$ and each $n\in\mathbb{N}$ there is an element $f_n(x)\in \text{Prob}(X)$ and a sequence $S_n$ such that $\supp(f_n(x))\subseteq B_{S_n}(x)$ and for any $R\geq 0$
\[
\|f_n(x_1) - f_n(x_0)\|_{\ell^1}{\rightarrow} 0 \text{ as }{n\rightarrow \infty},
\]
uniformly on the set $\{ (x_0,x_1)\mid d(x_0,x_1)\leq R\}$.
\end{defn}
Note: This is an analogue of the Reiter condition for amenability. In Reiter's condition, uniform convergence and the controlled support condition follow from the pointwise convergence and finite support by equivariance. We view the convergence condition as asymptotic invariance in $x$ of the sequence $f_n(x)$.
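For a concrete illustration of the definition (a standard example, not needed for the sequel), take $X=\mathbb{Z}$ and let $f_n(x)$ be the uniform probability measure on the ball $B_n(x)$; then $S_n=n$ and $\|f_n(x_1)-f_n(x_0)\|_{\ell^1}=2\,d(x_0,x_1)/(2n+1)$ whenever $d(x_0,x_1)\leq 2n+1$, which tends to $0$ uniformly on $R$-controlled pairs. By translation equivariance the same measures also form a Reiter sequence for the group $\mathbb{Z}$. The following Python sketch checks the decay numerically.
\begin{verbatim}
def f_n(x, n):
    # Uniform probability measure on the ball B_n(x) in Z.
    return {y: 1.0 / (2 * n + 1) for y in range(x - n, x + n + 1)}

def l1_dist(f, g):
    keys = set(f) | set(g)
    return sum(abs(f.get(k, 0.0) - g.get(k, 0.0)) for k in keys)

# ||f_n(3) - f_n(0)||_1 = 6/(2n+1) -> 0, uniformly over pairs at distance 3:
for n in (1, 10, 100, 1000):
    print(n, l1_dist(f_n(0, n), f_n(3, n)))
\end{verbatim}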
Let $X$ be a metric space. An \emph{$X$-module} is a triple $\mathcal V= (V, \|\cdot \|, Supp)$ where the pair $(V, \|\cdot \|)$ is a Banach space, and $\text{Supp}$ is a function from $V$ to the power set of $X$ satisfying the following axioms:
\begin{itemize}
\item $\text{Supp}(v)=\emptyset$ if $v=0$,
\item $\text{Supp}(v+w)\subseteq \text{Supp}(v)\cup \text{Supp}(w)$ for every $v,w\in V$,
\item $\text{Supp}(\lambda v)=\text{Supp}(v)$ for every $v\in V$ and every $\lambda\not=0$.
\item if $v_n$ is a sequence converging to $v$ then $\supp(v)\subseteq \bigcup\limits_n \supp(v_n)$. \skipit {optional extra!}
\end{itemize}
\begin{example}
Let $V=\ell^1(X)$ equipped with the $\ell^1$- norm and for $f\in \ell^1(X)$ let $\text{Supp}(f)=\{x\in X\mid f(x)\not=0\}$.
\end{example}
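As a toy sanity check of the axioms in this example (no part of the argument depends on it), one can model finitely supported elements of $\ell^1(X)$ as dictionaries and test the first three axioms on random samples:
\begin{verbatim}
import random

def supp(f):
    # Support of a finitely supported f in l^1(X), modelled as a dict x -> f(x).
    return {x for x, v in f.items() if v != 0}

def add(f, g):
    return {x: f.get(x, 0) + g.get(x, 0) for x in set(f) | set(g)}

random.seed(0)
for _ in range(1000):
    f = {x: random.choice([-1, 0, 1]) for x in range(5)}
    g = {x: random.choice([-1, 0, 1]) for x in range(5)}
    lam = random.choice([-3, 0.5, 7])           # nonzero scalars
    assert supp({}) == set()                    # Supp(0) is empty
    assert supp(add(f, g)) <= supp(f) | supp(g) # subadditivity (can be proper)
    assert supp({x: lam * v for x, v in f.items()}) == supp(f)
\end{verbatim}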
Note that if $(W, \|\cdot\|)$ is a closed subspace of $(V, \|\cdot \|)$ then any function $\text{Supp}:V\to 2^X$ satisfying the first three conditions above restricts to a function $\text{Supp}|_W$ so that $(W, \|\cdot\|, \text{Supp}|_W)$ is an $X$-module. We will consider the special case of the subspace $\ell^1_0(X)$ of $\ell^1(X)$ consisting of functions $f$ such that $\sum\limits_{x\in X}f(x)=0$, by analogy with the characterisation of amenability in terms of the vanishing of $\mathcal{J}$.
If $X$ is equipped with a $G$-action for some group $G$, then we may also consider the notion of a $G$-equivariant $X$-module. This is an $X$-module $(V, \|\cdot\|, \text{Supp})$ equipped with an isometric action of $G$ such that $g\text{Supp}(v)=\text{Supp}(gv)$ for every $g\in G$ and every $v\in V$.
\skipit
{
In answer to a question of Nigel Higson we give several cohomological characterisations of Yu's property A via the introduction of suitable controlled cohomologies. The theories have equivariant and a non-equivariant versions, with the equivariant version characterising amenability for a group. We form a bicomplex $\mathcal E^{p,q}(X,\mathcal V)$ associated to a space $X$ and an $X$-module $\mathcal{V}$ (where these may or may not be equipped with a group action). We then proceed to `complete' the bicomplex in two different ways yielding two new cohomologies. We exploit the interaction between the two to give cohomological characterisations of property A in both theories, and to give another characterisation in terms of the existence of an asymptotically invariant mean on the space. These cohomology theories are in some sense analogous to group cohomology, and we also construct sub-complexes of both theories which correspond to bounded cohomology.
The equivariant theory is analogous to bounded cohomology in that it is zero if and only if the group is amenable, and both theories parallel Johnson's construction of bounded cohomology for a Banach module.
There is an alternative way to characterise amenability using Block and Weinberger's uniformly finite homology, \cite{BlockWeinberger}. In section \ref{exact} we explore the interaction between this homology and the classical bounded cohomology in the context of a discrete group and give purely cohomological proofs of Johnson's and Block and Weinberger's results in this context. \skipit{DO WE WANT TO SAY SOMETHING HERE LIKE: In a later paper we will generalise these ideas to study Yu's property A by introducing an analogous homology theory.}
}
\skipit{
In simplest terms the following diagram illustrates the components of the construction, with each row providing a cochain complex, with the final row providing the cohomology of our theory.
\bigskip
\begin{center}
$\begin{CD}
\EE{m-1}{0}@>>> \EE{m}{0} @>>> \EE{m+1}{0}\\
@VVV @VVV @VVV \\
\EE{m-1}{ai}@>>> \EE{m}{ai} @>>> \EE{m+1}{ai}\\
@VVV @VVV @VVV \\
\EE{m-1}{ai}/ \EE{m-1} {0}@>>> \EE{m-1}{ai}/\EE{m}{0}@>>>\EE{m+1}{ai}/\EE{m-1}{0}\\
\end{CD}$
\end{center}
\bigskip
As we will later see this diagram is related to a family of natural bicomplexes which provide finer information about property A and provide a bridge between bounded cohomology and uniformly finite homology.
A curious feature of the setup is the introduction of the notion of an abstract support function for a Banach space to give it a ``module'' structure over a metric space. This is necessary since Yu's theory leverages control over supports to capture the local structure as well as the global coarse geometry. This stands in the tradition of controlled propagation, which, following the work of Roe and others, has done so much to illuminate K-theoretic aspects of coarse geometry.
}
We will construct two cohomology theories $H_{QA}^*(X,\mathcal V),H_{WA}^*(X,\mathcal V)$. Both of these are analogues of bounded cohomology and they share many properties. The former has the virtue that it is straightforward to construct explicit cocycles, while the latter is more theoretical. For the $X$-module $\ell^1_0(X)$, both cohomology groups contain a Johnson class, and the vanishing of this class in either theory characterises property A, and moreover guarantees the vanishing of $H_{QA}^q(X,\mathcal V),H_{WA}^q(X,\mathcal V)$ for all $q\geq 1$ and all $X$-modules $\mathcal{V}$; see Theorem \ref{MainTheorem2} and Theorem \ref{MainTheorem3}.
Vanishing of the Johnson class in $H_{QA}^1(X,\ell^1_0(X))$ yields an asymptotically invariant Reiter sequence as in the above definition of property A, while its vanishing in $H_{WA}^1(X,\ell^1_0(X))$ yields a new characterisation of property A in terms of the existence of an asymptotically invariant mean on the space $X$. As in the case of bounded cohomology and amenability, the vanishing theorem follows from an averaging argument, which utilises this asymptotically invariant mean.
A key step in the construction of $H_{QA}^*(X,\mathcal V),H_{WA}^*(X,\mathcal V)$ is to construct two corresponding analogues of classical group cohomology $H_Q^*(X,\mathcal V),H_W^*(X,\mathcal V)$. As in the case of a group, there are forgetful maps $H_{QA}^*(X,\mathcal V)\to H_Q^*(X,\mathcal V)$ and $H_{WA}^*(X,\mathcal V)\to H_W^*(X,\mathcal V)$, and the vanishing of the images of the Johnson classes again characterises property A; see Theorem \ref{MainTheorem1}.
As in \cite{ourpaper}, the equivalence of property A and vanishing of the Johnson elements in $H_Q^*(X,\mathcal V),H_W^*(X,\mathcal V)$, is exhibited using a long exact sequence in cohomology, arising from a short exact sequence of coefficients:
$$0\to \ell^1_0(X)\to \ell^1(X)\to \mathbb{C}\to 0.$$
In each case, the Johnson class appears naturally as the image of the constant function 1 under the connecting map in the long exact sequence. The asymptotically invariant mean is a 0-cocycle for $H_W^*(X,\ell^1(X))$, which is in the image of the forgetful map from $H_{WA}^*(X,\ell^1(X))$.
There are also equivariant versions of our cohomologies, when $X$ is a $G$-space and $V$ a Banach $G$-module, for some group $G$. For convenience of exposition, we will assume throughout that we have such a $G$-action, allowing the possibility that $G=\{e\}$, which recovers the non-equivariant case described above. In the case that $X=G$ is a countable discrete group with a proper left-invariant metric, equipped with the usual left-action, the cohomologies detect amenability.
The results in this paper are related in spirit to, but independent from, the more recent results appearing in \cite{DN}. In their paper Douglas and Nowak prove that exactness of a finitely generated group $G$ (which by \cite{Oz} is equivalent to property A) implies vanishing of the bounded cohomology $H^q_b(G,V)$, for so-called Hopf $G$-modules of continuous linear operators with values in $\ell^\infty(G)$. They also give a characterisation of exactness in terms of the existence of an invariant conditional expectation on the group.
\skipit{
\begin{mainthm
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $X$ has property $A$.
\item $[\mathbf 1_Q]\in \Im \pi_*$ in $H_Q^0(X, \mathbb{C})$.
\item $D[\mathbf 1_Q]=0$ in $H_Q^1(X, \ell^1_0(X))$.
\item $D[\mathbf 1_W]=0$ in $H_W^1(X, \ell^1_0(X))$.
\item $[\mathbf 1_W]\in \Im \pi_*$ in $H_W^0(X, \mathbb{C})$.
\item $X$ admits an asymptotically invariant mean.
\end{enumerate}
\end{mainthm}
\begin{mainthmtwo}
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $H_{QA}^q(X, \mathcal V)= 0$ for all $q\geq 1$ and all modules $\mathcal V$ over $X$.
\item $[\mathcal J_Q^{0,1}]=0$ in $H_{QA}^1(X, \mathcal \ell^1_0(X))$.
\item $X$ has property $A$.
\end{enumerate}
\end{mainthmtwo}
\begin{mainthmthree}
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $H_{WA}^q(X, \mathcal V)= 0$ for all $q\geq 1$ and all modules $\mathcal V$ over $X$.
\item $[\mathcal J_W^{0,1}]=0$ in $H_{WA}^1(X, \mathcal \ell^1_0(X))$.
\item $X$ has property $A$.
\end{enumerate}
\end{mainthmthree}
}
\skipit{
In fact the two theories will be defined in exactly the same way apart from the fact that the second invokes equivariance in every place where it makes sense to do so. The cohomology theory $H^m_B(G,\mathcal V)$ will be in some sense equivalent to the standard bounded cohomology theory once we have chosen appropriate modules. It is apparent that in fact it is sufficient for a particular cohomology group, indeed a particular class, to be trivial in order to establish that the group is amenable. Classically this class lives in a space of means on the group, and vanishing of this class provides an invariant mean establishing amenability directly. In our theory vanishing of a specific class furnishes the group with a F\o lner sequence, which for a discrete group is equivalent to amenability. In the case of $H^1_A(X, \ell^1_0X)$, vanishing of the class $\delta_z-\delta_y$ furnishes the space with a family of probability measures which exhibit property A. We note in passing that these functions are cochains in $E^{-1}_{ai}(X, \ell^1X)$.
}
\skipit{
\begin{section}{Homological characterisations of amenability}\label{exact}
The purpose of this section is to illuminate the relationship between the two following remarkable characterisations of amenability for a group. The definitions will follow the statements.
\begin{thm} (Ringrose and Johnson) A group $G$ is amenable if and only if $H^1_b(G,(\ell^{\infty}(G)/\mathbb{C})^*)=0$
\end{thm}
\begin{thm} (Block and Weinberger) A group $G$ is amenable if and only if $H_0(G, \ell^\infty G)\not = 0$.
\end{thm}
It should be noted that both statements are part of a much larger picture. In the case of bounded cohomology vanishing of the first cohomology with the given coefficients is guaranteed by the triviality of a particular cocycle, and furthermore this ensures triviality of bounded cohomology with any coefficients in dimensions greater than or equal to $1$. In the case of Block and Weinberger's uniformly finite homology, vanishing of the zero dimensional homology group is guaranteed by the triviality of a fundamental class. It should also be noted that the definition they gave applies in the much wider context of an arbitrary metric space.
Recall the following definitions.
\begin{defn}
A \emph{mean} on a group $G$ is a positive linear functional $\mu$ on $\ell^\infty G$ such that $\norm{\mu}=1$. A group $G$ is \emph{amenable} if it admits a $G$-invariant mean.
\end{defn}
Recall that for a Banach space $V$ equipped with an isometric action of a group $G$, $C_b^m(G,V^*)$ denotes the $G$-module of equivariant bounded cochains $\phi:G^{m+1}\rightarrow V^*$. (Here bounded is defined by the Banach norm on the dual space $V^*$). This yields a cochain complex $(C_b^m(G,V^*), d)$ where $d$ denotes the natural differential induced by the homogeneous bar resolution. The cohomology of this complex is the bounded cohomology of the group with coefficients in $V^*$, denoted $H^*_b(G, V^*)$. For $V=\ell^\infty G/\mathbb{C}$ there is a particular class in dimension $1$which detects amenability which we will call the Johnson element. This is represented by the function
\[
J(g_0, g_1)=\delta_{g_1}-\delta_{g_0},
\]
where $\delta_g$ denotes the Dirac delta function supported at $g$. Note that $J(g_0, g_1)$ lies in $\ell^1_0(G)$ which we view as a subspace of its double dual, $V^*$.
Dually we have the chain complex $(C_m^{\ell^1}(G,V), \partial)$, where $C_m^{\ell^1}(G,V)$ consists of equivariant functions $c:G^{m+1}\rightarrow V$ which are $\ell^1$ on the subspace $\{e\}\times G^m$. The boundary map is defined by
\[
\partial c(g_0, \ldots, g_{m-1})=\sum\limits_{g\in G, i\in\{0,\ldots, m\} } (-1)^ic(g_0, \ldots, g_{i-1}, g, g_i, \ldots, g_{m-1}).
\]
The homology of this complex is the $\ell^1$-homology of the group with coefficients in $V$, denoted $H^{\ell^1}_*(G, V)$. Note that there is a map $H_*(G, V)\rightarrow H^{\ell^1}_*(G, V)$ given by the forgetful map. The fundamental class of Block and Weinberger in $H_0(G, \ell^\infty G)$ is represented by the cycle $c:G\rightarrow \ell^\infty G$ defined by $c(g)(h)=1$ for all $g,h\in G$. Applying the forgetful functor we obtain an element of $H^{\ell^1}_0(G, \ell^\infty G)$, and we will see that non-vanishing of this also characterises amenability.
We note that the pairing of $V^*$ with $V$, denoted $\langle-. -\rangle_V$ induces a pairing of $H^m_b(G, V^*)$ with $H_m^{\ell^1}(G, V)$ defined by
\[
\langle[\phi], [c]\rangle = \sum\limits_{g_1,\ldots, g_m\in G}\langle\phi(e, g_1, \ldots, g_m), c(e, g_1, \ldots, g_m)\rangle_V.
\]
It is clear that the pairing is defined at the level of cochains. To verify that it is well defined on classes one checks that the differential $d$ is the adjoint of the boundary map $\partial$.
The proof of the following result is a standard application of the snake lemma:
\begin{prop}\label{ses}
The short exact sequence of $G$-modules
$$0\to \mathbb{C} \xrightarrow{\iota} \ell^\infty G \xrightarrow{\pi} \ell^\infty G/\mathbb{C} \to 0.$$
induces a short exact sequence of chain complexes
$$0\to C_m^{\ell^1}(G,\mathbb{C})\xrightarrow{\iota} C_m^{\ell^1}(G,\ell^\infty G) \xrightarrow{\pi} C_m^{\ell^1}(G,\ell^\infty G/\mathbb{C})\to 0$$
and hence a long exact sequence of $\ell^1$-homology groups.
The short exact sequence of $G$-modules
$$0\to (\ell^\infty G/\mathbb{C})^* \xrightarrow{\pi^*} \ell^\infty G^* \xrightarrow{\iota^*} \mathbb{C}\to 0$$
induces a short exact sequence of cochain complexes
$$0\to C^m_{b}(G,(\ell^\infty G/\mathbb{C})^*)\xrightarrow{\pi^*} C^m_{b}(G,\ell^\infty G^*) \xrightarrow{\iota^*} C^m_{b}(G,\mathbb{C})\to 0$$
and hence a long exact sequence of bounded cohomology groups.
\qed
\end{prop}
Let $\mathbf 1$ denote the constant function $G\rightarrow \mathbb C$ which takes the value $1$ at every $g\in G$. This function represents classes in all of the following objects: $H^0_b(G, \mathbb{C})$, $H_0(G, \mathbb{C})$, $H^{\ell^1}_0(G, \mathbb{C})$. Our point of view is that the Block-Weinberger fundamental class is $i[\mathbf 1]\in H_0(G, \ell^\infty G)$, while the Johnson cocycle is $d[\mathbf 1]\in H^1_b(G, (\ell^\infty G/\mathbb{C})^*)$, where $d$ denotes the connecting map $H^0_b(G, \mathbb{C})\rightarrow H^1_b(G, (\ell^\infty G/\mathbb{C})^*)$. The first of these observations is elementary. For the second, note that $d[\mathbf 1]$ is obtained by lifting $\mathbf 1$ to the element $g\mapsto \delta_g$ in $C^0_b(G,(\ell^\infty G)^*)$ and taking the coboundary. This produces the Johnson cocycle $J(g_0,g_1)=\delta_{g_1}-\delta_{g_0}$.
By exploiting the connecting maps arising in Proposition \ref{ses} together with these observations we will obtain a new proof that $G$ is amenable if and only if the Johnson cocycle in bounded cohomology vanishes, and that this is equivalent to non-vanishing of the Block-Weinberger fundamental class. The first hint of the interaction is provided by the observation that dualising $H_0(G, \ell^\infty G)$ we obtain $H^0(G, \ell^\infty (G)^*)$ and that this is equal to $H^0_b(G, \ell^\infty (G)^*)$ since equivariance ensures that 0-cochains are bounded. The non-vanishing of $H^0_b(G, \ell^\infty (G)^*)$ is equivalent to amenability, since elements of $H^0_{b}(G,\ell^\infty G^*)$ are maps $\phi:G\rightarrow \ell^\infty G^*$, which are $G$-equivariant and also, since they are cocycles, constant on $G$. Hence the value of a cocycle $\phi$ at any (and hence all) $g\in G$ is a $G$-invariant linear functional on $\ell^\infty G$. If $\phi$ is non-zero then taking its absolute value and normalising we obtain an invariant mean on the group. Conversely any invariant mean on the group is an invariant linear functional on $\ell^\infty G$ and hence gives a non-zero element of $H^0_{b}(G,\ell^\infty G^*)$.
\begin{thm}\label{beautiful}
Let $G$ be a countable discrete group. The following are equivalent:
\begin{enumerate}
\item \label{amen} $G$ is amenable.
\item \label{inv-mean} $\iota^*:H^0_{b}(G,\ell^\infty G^*) \to H^0_{b}(G,\mathbb{C})$ is surjective.
\item \label{johnson} The Johnson class $d[\mathbf 1]$ vanishes in $H^1_{b}(G,(\ell^\infty G/\mathbb{C})^*)$.
\item \label{pairing} $\langle d[\mathbf 1],[c]\rangle =0$ for all $[c]$ in $H_1^{\ell^1}(G,\ell^\infty G/\mathbb{C})$. (Hence for a non-amenable group, the non-triviality of $d[\mathbf 1]$ is detected by the pairing.)
\item \label{hom-last} $\iota[\mathbf 1] \in H_0^{\ell^1}(G,\ell^\infty G)$ is non-zero.
\item \label{B-W}The Block-Weinberger fundamental class $\iota[\mathbf 1] \in H_0(G,\ell^\infty G)$ is non-zero.
\end{enumerate}
\end{thm}
\begin{proof}
(\ref{amen})$\implies$ (\ref{inv-mean}) since $H^0_{b}(G,\mathbb{C})=\mathbb{C}$, and for $\mu$ an invariant mean $i^*[\mu]=[\mathbf 1]$.
(\ref{inv-mean}) $\iff$ (\ref{johnson}): By exactness, surjectivity of $\iota^*$ is equivalent to vanishing of $d$, hence in particular this implies $d[\mathbf 1]=0$. The converse follows from the fact that $[\mathbf 1]$ generates $H^0_{b}(G,\mathbb{C})$, so if $d[\mathbf 1]=0$ then $d=0$ and $\iota^*$ is surjective.
The implication
(\ref{johnson})$\implies$ (\ref{pairing}) is trivial.
(\ref{pairing}) $\implies$ (\ref{hom-last}): (\ref{pairing}) is equivalent to $\langle [\mathbf 1],\partial[c]\rangle =0$ for all $[c]$ in $H_1^{\ell^1}(G,\ell^\infty G/\mathbb{C})$ by duality. We note that the space of 0-cycles in $C_0^{\ell^1}(G,\mathbb{C})$ is $\mathbb{C}$, and noting that the pairing of the class $[\mathbf 1]$ in $H^0_{b}(G,\mathbb{C})$ with the class $[\mathbf 1]$ in $H_0^{\ell^1}(G,\mathbb{C})$ is $\langle [\mathbf 1],[\mathbf 1]\rangle =1$, we see that $[\mathbf 1]\in H_0^{\ell^1}(G,\mathbb{C})$ is not a boundary. Thus $H_0^{\ell^1}(G,\mathbb{C})=\mathbb{C}$ and the pairing with $H^0_{b}(G,\mathbb{C})$ is faithful so $\langle [\mathbf 1],\partial[c]\rangle =0$ for all $[c]$ implies $\partial=0$. From this we deduce that $\iota$ is injective by exactness, hence we have (\ref{hom-last}): $\iota[\mathbf 1]$ is non-zero.
(\ref{hom-last})$\implies$ (\ref{B-W}) since $\iota[\mathbf 1] \in H_0^{\ell^1}(G,\ell^\infty G)$ is the image of the corresponding element of $H_0(G,\ell^\infty G)$ under the forgetful map.
(\ref{B-W})$\implies$ (\ref{amen}): We will use an argument due to Nowak. Let $\delta:C^0_{}(G,\ell^1(G))\to C^1_{}(G,\ell^1(G))$ denote the restriction of $d$. This is the pre-dual of $\partial$. First we note that $\delta$ is not bounded below, since if it were then $\partial=\delta^*$ would be surjective and $H_0(G, \ell^\infty G)$ would vanish giving $\iota[\mathbf 1]=0$, which is a contradiction.
The fact that $\delta$ is not bounded below is precisely the assertion that there is a Reiter sequence for the group and that therefore it is amenable. \skipit{Do we need to say what a Reiter sequence is?}
\end{proof}
As an example of this approach we give a proof of non-amenability for $F_2$ by constructing an explicit element $[c]\in H_1^{\ell^1}(G,\ell^\infty G/\mathbb{C})$ for which $\langle d[\mathbf 1],[c]\rangle\not= 0$.
Let $\{a,b\}$ be a free basis for $F_2$, and let $\Gamma$ denote the Cayley graph of $F$ with respect to this generating set. $\Gamma$ is a tree and the action of $G$ on $\Gamma$ extends to the Gromov boundary. We choose a point $p$ in the Gromov boundary of $\Gamma$. For the sake of definiteness we set $p$ to be the endpoint of the ray $(a^n)$ where $n$ ranges over the positive integers, though this is not essential.
For a generator $g$ of $F_2$ (or its inverse) we set $c(e,g)(h)= 1$ if $(e, g)$ is the first edge on the geodesic from $e$ to $hp$ and set $c(e, g)(h)=0$ otherwise. Extending the definition by equivariance we obtain a function $c$ defined on the edges of $\Gamma$ with values in $\ell^\infty G$ and this represents an element $\overline c\in\ell^\infty G/\mathbb C$.
Now consider $\partial c(e)=\sum\limits_{g\in\{a^{\pm 1}, b^{\pm 1}\}}c(g,e)-c(e,g)$.
For a given $h$ exactly one of the edges $(e,a), (e,b), (e, a^{-1}), (e, b^{-1})$ is the first edge on the geodesic $[e, hp]$, so the sum $c(e,a)+ c(e,b)+ c(e, a^{-1})+c(e, b^{-1})$ is the constant function $\mathbf 1$ on $G$.
On the other hand for a generator $g$, $c(g,e)(h)=1$ if and only if the edge $(g, e)$ is the first edge on the geodesic from $g$ to $hp$. We now consider the function $c(a,e)+c(b,e)+c(a^{-1}, e)+c(b^{-1},e)$. For a given $h\in G$ there is a unique point in the set $\{a,b, a^{-1}, b^{-1}\}$ which lies on the geodesic from $e$ to $hp$, and this is the only one for which the corresponding term of the sum takes the value $0$, so the sum $c(a,e)+c(b,e)+c(a^{-1}, e)+c(b^{-1},e)$ is the constant function $\mathbf 3$.
Hence $\partial c(e)=\mathbf 3-\mathbf 1=\mathbf 2$. Now by equivariance $\partial c(k)=\mathbf 2$ for all $k$, hence $\partial \overline c$ vanishes in $\ell^\infty (G)/\mathbb C$, so $\overline c$ is a cycle and therefore represents an element $[\overline c]\in H_1^{\ell^1}(G,\ell^\infty G/\mathbb{C})$.
We now compute the pairing $\langle d[\mathbf 1], [\overline c]\rangle$.
\[
\langle d[\mathbf 1], [\overline c]\rangle=\langle [\mathbf 1], \partial [\overline c]\rangle=\langle [\mathbf 1], [\partial c]\rangle = \langle [\mathbf 1], [\mathbf 2]\rangle=2.
\]
Hence $F_2$ is not amenable.
We conclude this section by noting that amenability is also equivalent to vanishing of the Johnson class as an element of $H^1(G,(\ell^\infty G/\mathbb{C})^*)$. To see this, replace the pairing of $H^1_{b}(G,(\ell^\infty G/\mathbb{C})^*)$ and $H_1^{\ell^1}(G,\ell^\infty G/\mathbb{C})$ in the proof of Theorem \ref{beautiful} with the standard pairing of $H^1(G,(\ell^\infty G/\mathbb{C})^*)$ and $H_1(G,\ell^\infty G/\mathbb{C})$, hence deducing that vanishing of the Johnson element in $H^1(G,(\ell^\infty G/\mathbb{C})^*)$ implies non-vanishing of the Block-Weinberger fundamental class:
\begin{thm}\label{unboundedbeauty}
Let $G$ be a countable discrete group. The following are equivalent:
\begin{enumerate}
\item $G$ is amenable.
\item $\mathbf 1$ lies in the image of $i^*:H^0_{}(G,\ell^\infty G^*) \to H^0_{}(G,\mathbb{C})$.
\item The Johnson class $d[\mathbf 1]$ vanishes in $H^1_{}(G,(\ell^\infty G/\mathbb{C})^*)$.
\end{enumerate}
\end{thm}
\end{section}
}
\begin{section}{The cochain complex}
Let $X$ be a metric space, $G$ be a group acting by isometries on $X$ and $\mathcal V=(V, \|\cdot\|_V, \text{Supp})$ be a $G$-equivariant $X$-module. Associated to this data we will construct a bicomplex $\mathcal E^{p,q}(X,\mathcal V)$. This bicomplex also depends on the group $G$, however for concision we will generally omit $G$ from our notation.
This bicomplex is in some sense too small to detect property A, and we will construct two `completions' of the bicomplex to rectify this, yielding two cohomology theories $H_Q^p(X,\mathcal V)$ and $H_W^p(X,\mathcal V)$.
There are two principal cases of interest. When $G$ is trivial, we obtain non-equivariant cohomologies detecting property A. When $X=G$ is a group acting on itself by left multiplication and equipped with a proper left invariant metric, the cohomologies detect amenability for $G$.
For $\mathbf x\in X^{p+1}, \mathbf y \in X^{q+1}$, we make the standard convention that coordinates of $\mathbf x, \mathbf y$ are written $x_0,\dots,x_p$ and $y_0,\dots,y_q$.
For a positive real number $R$ let $\Delta_R^{p+1}$ denote the set $\{\mathbf x\in X^{p+1} \mid d(x_i, x_j)\leq R, \, \forall i,j\}$, and let $\Delta_R^{p+1,q+1}$ denote the set
$$\bigl\{(\bx,\by)\in X^{p+1}\times X^{q+1} \mid d(u,v)\leq R, \, \forall u,v\in\{x_0,\dots x_p,y_0,\dots,y_q\}\bigr\}.$$
Identifying $X^{p+1}\times X^{q+1}$ with $X^{p+q+2}$ in the obvious way, $\Delta_R^{p+1,q+1}$ can be identified with $\Delta_R^{p+q+2}$.
Given a function $\phi:X^{p+1}\times X^{q+1} \rightarrow V$ we set
\[\|\phi\|_R=\sup_{\mathbf x \in \Delta_R^{p+1}, \mathbf y\in X^{q+1}} \|\phi(\mathbf x, \mathbf y)\|_V.\]
We say that a function $\phi$ is of controlled supports if for every $R>0$ there exists $S>0$ such that whenever $(\mathbf x,\mathbf y)\in \Delta^{p+1,q+1}_R$ then $\text{Supp}(\phi(\bx,\by))$ is contained in $B_{S}(x_i)$ and $B_{S}(y_j)$ for all $i,j$.
The action of $G$ on $X$ extends to give diagonal actions on $X^{p+1}$ and $X^{q+1}$, and a function $\phi$ as above is said to be equivariant if for every $g\in G$
\[
g(\phi(\mathbf x,\mathbf y))=\phi(g\mathbf x, g\mathbf y).
\]
We define
\[
\mathcal E^{p,q}(X,\mathcal V)=\{\phi:X^{p+1}\times X^{q+1} \rightarrow V \mid \text{ $\phi$ is equivariant, of controlled supports and } \|\phi\|_R< \infty\; \forall R> 0 \}.
\]
We equip the space $\mathcal E^{p,q}(X,\mathcal V)$ with the topology arising from the semi-norms $\|\cdot\|_R$. It seems natural to allow $R$ to range over all positive values; however, we note that the resulting topology is the same as the topology arising from the countable family of seminorms $\|\cdot\|_R$ for $R\in\mathbb{N}$.
The usual boundary map $\partial: X^{m+1}\to X^m$ induces a pair of anti-commuting coboundary maps which yields a bicomplex
\[
\begin{CD}
@.@A{D}AA @A{D}AA @A{D}AA\\
\qquad\qquad @.\mathcal E^{2,0} @>{d}>> \mathcal E^{2,1} @>{d}>> \mathcal E^{2,2} @>{d}>>\\
@A{p}AA @A{D}AA @A{D}AA @A{D}AA\\
@.\mathcal E^{1,0} @>{d}>> \mathcal E^{1,1} @>{d}>> \mathcal E^{1,2} @>{d}>>\\
@.@A{D}AA @A{D}AA @A{D}AA\\
@.\mathcal E^{0,0} @>{d}>> \mathcal E^{0,1} @>{d}>> \mathcal E^{0,2} @>{d}>>\vspace{1.5ex}\\
@.@. @>>{q}>
\end{CD}
\]
Specifically, $D: \mathcal E^{p,q}\rightarrow \mathcal E^{p+1,q}$ is given by
\[
D\phi \left((x_0, \dots, x_{p+1}), \mathbf{y}\right) = \sum_{i=0}^{p+1}(-1)^{i}
\phi((x_0, \ldots, \widehat{x_i}, \ldots, x_{p+1}), \mathbf{y})
\]
while $d: \mathcal E^{p,q}\rightarrow \mathcal E^{p,q+1}$ is
\[
d\phi(\mathbf{x}, (y_0, \dots, y_{q+1})) = \sum_{i=0}^{q+1} (-1)^{i+p} \phi(\mathbf{x}, (y_0, \dots, \widehat{y_i}, \dots, y_{q+1})).
\]
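As an illustrative toy computation (not part of the theory), the following Python sketch models a cochain as a real-valued function on tuples over a finite set, ignoring the support and equivariance structure, and verifies numerically that $D$ and $d$ anti-commute with the sign conventions above.
\begin{verbatim}
import itertools, random

random.seed(1)
X, p, q = [0, 1, 2], 1, 1
phi = {(xs, ys): random.random()
       for xs in itertools.product(X, repeat=p + 1)
       for ys in itertools.product(X, repeat=q + 1)}
f = lambda xs, ys: phi[xs, ys]

def D(g, deg_p):
    # Coboundary in the x variables: E^{p,q} -> E^{p+1,q}.
    return lambda xs, ys: sum((-1) ** i * g(xs[:i] + xs[i+1:], ys)
                              for i in range(deg_p + 2))

def d(g, deg_p, deg_q):
    # Coboundary in the y variables, with the (-1)^p twist: E^{p,q} -> E^{p,q+1}.
    return lambda xs, ys: sum((-1) ** (i + deg_p) * g(xs, ys[:i] + ys[i+1:])
                              for i in range(deg_q + 2))

dD = d(D(f, p), p + 1, q)   # apply D first, then d at bidegree (p+1, q)
Dd = D(d(f, p, q), p)       # apply d first, then D
for xs in itertools.product(X, repeat=p + 2):
    for ys in itertools.product(X, repeat=q + 2):
        assert abs(dD(xs, ys) + Dd(xs, ys)) < 1e-12
print("dD + Dd = 0 on the toy example")
\end{verbatim}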
In Proposition \ref{acyclicrows} we will show that the rows of our bicomplex are acyclic, and for this reason it makes sense to consider an augmentation of the rows making them exact at $q=0$. We note that the definition of $\mathcal E^{p,q}$ and the maps $D:\mathcal E^{p,q}\to \mathcal E^{p+1,q}, d:\mathcal E^{p,q}\to \mathcal E^{p,q+1}$ make sense not just for $p,q\geq 0$ but also when one of $p$ or $q$ is $-1$. We will be interested in $\mathcal E^{p,-1}(X,\mathcal V)$ which we will identify as the augmentation of row $p$. The augmentation map is the differential $d:\mathcal E^{p,-1}\to \mathcal E^{p,0}$.
Elements of $\mathcal E^{p,-1}(X,\mathcal V)$ are maps $\phi:X^{p+1}\times X^0\to V$; for convenience of notation, we will suppress the $X^0$ factor, and write $\phi(\mathbf x)$ for $\phi(\mathbf x,())$. We note that the differential $d:\mathcal E^{p,-1}\to \mathcal E^{p,0}$ is defined by $d\phi(\mathbf x,(y))=\phi(\mathbf x,(\widehat y))$. Suppressing the empty vector we see that $d\phi(\mathbf x,(y))=\phi(\mathbf x)$, i.e.\ $d$ is the inclusion of $\mathcal E^{p,-1}(X,\mathcal V)$ into $\mathcal E^{p,0}(X,\mathcal V)$ as functions which are constant in the $y$ variable.
\begin{lemma}
The maps $D$ and $d$ are well-defined, continuous, anti-commuting differentials.
\end{lemma}
\begin{proof}
The fact that $D$ and $d$ are anti-commuting differentials on the larger space of \emph{all} equivariant functions from $X^{p+1}\times X^{q+1}$ to $V$ is standard. We must show that $D,d$ preserve finiteness of the semi-norms, and controlled supports. We note that $\|D\phi\|_R \leq (p+2) \|\phi\|_R$ by the triangle inequality, and a corresponding estimate holds for $\|d\phi\|_R$. Hence $D,d$ are continuous, and the semi-norms are finite as required.
For $\phi$ of controlled supports we now show that $D\phi$ is of controlled supports. Given $R>0$, take $(\bx,\by) \in \Delta^{p+2,q+1}_R$. Since $\phi$ is of controlled supports, there exists $S$ such that $\supp(\phi((x_0, \ldots, \widehat{x_i}, \ldots, x_{p+1}), \mathbf{y}))$ is contained in $B_{S}(x_{i'})$ and $B_S(y_j)$ for all $i'\neq i$, and for all $j$. Since for any $i'\neq i$ we have $d(x_i,x_{i'})\leq R$ we deduce that $\supp(\phi((x_0, \ldots, \widehat{x_i}, \ldots, x_{p+1}), \mathbf{y}))$ lies in $B_{S+R}(x_{i'})$ for all $i'$. By the axioms for $\supp$ the support of $D\phi$ is contained in $B_{S+R}(x_{i'})$ and $B_S(y_j)$ for all $i'$ and all $j$, since this holds for the summands.
The argument for $d\phi$ is identical, exchanging the roles of $\mathbf x,\mathbf y$.
\end{proof}
\bigskip
Let $H_\E^*(X,\mathcal V)$ denote the cohomology of the totalisation of the bicomplex $\mathcal E^{p,q}, p,q\geq 0$, with the differentials $D,d$.
\begin{remark}
If $X$ is equipped with two coarsely equivalent $G$-invariant metrics $d,d'$ then for any module over $X$ the controlled support conditions arising from these metrics are the same. Moreover the families of semi-norms are equivalent in the sense that for each $R$ there is an $S$ such that $\|\cdot\|_{R,d}\leq \|\cdot\|_{S,d'}$, and for each $R$ there is an $S$ such that $\|\cdot\|_{R,d'}\leq \|\cdot\|_{S,d}$. Hence the bicomplexes and the cohomology we obtain from each metric are identical. This applies in particular if $X=G$ is a countable group and the two metrics are both left-invariant proper metrics on $G$.
\end{remark}
We will now demonstrate exactness of the rows. This allows the cohomology of the totalisation to be computed in terms of the left-hand column.
\begin{prop}\label{acyclicrows}
For each $p\geq 0$ the augmented row $(\mathcal E^{p,*},d)$, $*\geq -1$, is exact.
Specifically, for all $p\geq 0$ there is a continuous splitting $s:\mathcal E^{p,q}\to \mathcal E^{p,q-1}$ given by
\[
s\phi((x_0,\dots,x_p),(y_0,\dots,y_{q-1}))= (-1)^p\phi((x_0,\dots,x_p),(x_0,y_0,\dots,y_{q-1})).
\]
We have $(ds+sd)\phi=\phi$ for $\phi\in \mathcal E^{p,q}$ with $q\geq 0$, and $sd\phi=\phi$ for $\phi$ in $\mathcal E^{p,-1}$.
\end{prop}
\begin{proof}
The fact that $s$ defines a splitting on the larger space of all equivariant functions from $X^{p+1}\times X^{q+1}$ to $V$ is standard homological algebra. We must verify that $s$ is continuous, from which it immediately follows that $s\phi$ has bounded $R$-norms, and that if $\phi$ is of controlled supports then so is $s\phi$.
Continuity is straightforward. For each $R\geq 0$ we have $\|s\phi\|_R\leq \|\phi\|_R$; this is immediate from the observation that if $(x_0,\dots,x_p)$ lies in $\Delta^{p+1}_R$ then
$$\|\phi((x_0,\dots,x_p),(x_0,y_0,\dots,y_{q-1}))\|_V\leq \|\phi\|_R.$$
It remains to verify that $s\phi$ is of controlled supports. Given $R>0$, since $\phi$ is of controlled supports we know there exists $S$ such that if $(\bx,\by)\in \Delta^{p+1,q+1}_R$ then $\supp(\phi(\bx,\by))$ is contained in $B_{S}(x_i)$ and $B_{S}(y_j)$ for all $i,j$. If $((x_0,\dots,x_p),(y_0,\dots,y_{q-1}))\in \Delta^{p+1,q}_R$ then we have $((x_0,\dots,x_p),(x_0,y_0,\dots,y_{q-1}))\in \Delta^{p+1,q+1}_R$, hence $\supp(s\phi((x_0,\dots,x_p),(y_0,\dots,y_{q-1})))$ is also contained in $B_{S}(x_i)$ and $B_{S}(y_j)$ for all $i,j$.
This completes the proof.
\end{proof}
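In the same toy setting as before (real coefficients over a finite set, supports ignored), the homotopy identity $(ds+sd)\phi=\phi$ can also be checked numerically; a minimal sketch:
\begin{verbatim}
import itertools, random

random.seed(2)
X, p, q = [0, 1], 1, 1
phi = {(xs, ys): random.random()
       for xs in itertools.product(X, repeat=p + 1)
       for ys in itertools.product(X, repeat=q + 1)}
f = lambda xs, ys: phi[xs, ys]

def d(g, deg_q):
    # Coboundary in the y variables on E^{p,deg_q}, with the (-1)^p twist.
    return lambda xs, ys: sum((-1) ** (i + p) * g(xs, ys[:i] + ys[i+1:])
                              for i in range(deg_q + 2))

def s(g):
    # s phi(x, (y_0..y_{q-1})) = (-1)^p phi(x, (x_0, y_0..y_{q-1})).
    return lambda xs, ys: (-1) ** p * g(xs, (xs[0],) + ys)

hom = lambda xs, ys: d(s(f), q - 1)(xs, ys) + s(d(f, q))(xs, ys)
for xs in itertools.product(X, repeat=p + 1):
    for ys in itertools.product(X, repeat=q + 1):
        assert abs(hom(xs, ys) - f(xs, ys)) < 1e-12
print("(ds + sd) = id on the toy example")
\end{verbatim}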
We remark that the corresponding statement is false for the vertical differential $D$, since for $\phi\in \mathcal E^{p,q}(X,\mathcal V)$, the function $((x_0,\dots,x_{p-1}),(y_0,\dots,y_q)) \mapsto \phi((y_0,x_0,\dots,x_{p-1}),(y_0,\dots,y_{q}))$ is only guaranteed to be bounded on sets of the form $\bigl\{((x_0,\dots,x_{p-1}),(y_0,\dots,y_q)) \mid d(u,v)\leq R\text{ for all }u,v \in \{x_0,\dots,x_{p-1},y_0\}\bigr\}$, and not on $\Delta^{p}_R\times X^{q+1}$. Hence the vertical `splitting' would not map $\mathcal E^{p,q}$ into $\mathcal E^{p-1,q}$.
\begin{corollary}
\label{acorollary}
The cohomology $H_\E^*(X,\mathcal V)$ is isomorphic to the cohomology of the cochain complex $(\mathcal E^{*,-1},D)$.
\end{corollary}
\begin{proof}
This follows from the exactness of the augmented rows of the bicomplex. Each cocycle in $\mathcal E^{p,q}(X,\mathcal V)$ is cohomologous to a cocycle in $\mathcal E^{p+q,0}(X,\mathcal V)$, whence $H_\E^*(X,\mathcal V)$ is isomorphic to the cohomology of the complex $\ker(d:\mathcal E^{p,0}\to \mathcal E^{p,1})$ with the differential $D$. The augmentation map $d:\mathcal E^{p,-1}\to \mathcal E^{p,0}$ yields an isomorphism from $(\mathcal E^{p,-1},D)$ to the kernel $\ker(d:\mathcal E^{p,0}\to \mathcal E^{p,1})$, and as $D,d$ anti-commute, the differential $D$ on the kernels is identified with the differential $-D$ on $\mathcal E^{p,-1}$. We note however that the change of sign does not affect the cohomology, so $H_\E^*(X,\mathcal V)$ is isomorphic to the cohomology of $(\mathcal E^{*,-1},D)$ as claimed.
\end{proof}
The cohomology $H_\E(X,\mathcal V)$ is not sufficiently subtle to detect property A. In the following section we will introduce two completion procedures which we will apply to the bicomplex $\mathcal E$, obtaining more refined cohomologies.
\end{section}
\section{Generalised completions}
\defpre-Fr\'echet {pre-Fr\'echet }
Let $\mathcal E$ be a vector space equipped with a countable family of seminorms $\|\cdot\|_i$ which separates points. We will call such a space a pre-Fr\'echet space. We have in mind that $\mathcal E=\mathcal E^{p,q}(X,\mathcal V)$, for some $p,q,X,G$ and $\mathcal V$.
If $\mathcal E$ is not complete then one constructs the classical completion of $\mathcal E$ as follows. Let $E_{\mathrm{cs}}$ denote the space of Cauchy sequences in $\mathcal E$ (i.e.\ sequences which are Cauchy with respect to each semi-norm), and let $E_0$ denote the space of sequences in $\mathcal E$ which converge to 0. Then the completion of $\mathcal E$ is precisely the quotient space $E_{\mathrm{cs}}/E_0$. As the topology of $\mathcal E$ is given by a countable family of seminorms, this completion is a Fr\'echet space.
In this section we will define two generalised completions which are somewhat larger than the classical one, and we will demonstrate various properties of the completions, and relations between the two.
The first completion is motivated by the classical case.
\begin{defn}
The \emph{quotient completion} of $\mathcal E$, denoted $\mathcal E_Q$ is the quotient space $E/E_0$ where $E$ denotes the space of bounded sequences in $\mathcal E$ and $E_0$ denotes the space of sequences in $\mathcal E$ which converge to 0.
\end{defn}
The second completion comes from functional analysis.
\begin{defn}
The \emph{weak-* completion} of $\mathcal E$, denoted $\mathcal E_W$ is the double dual of $\mathcal E$.
\end{defn}
The space $\mathcal E$ is not assumed to be complete; however, this does not matter, as the dual of $\mathcal E$ is the same as the dual of its completion $\overline{\mathcal E}=E_{\mathrm{cs}}/E_0$. Indeed we note that both $\mathcal E_Q$ and $\mathcal E_W$ contain $\overline{\mathcal E}$, and $\mathcal E_Q,\mathcal E_W$ are respectively isomorphic to the quotient and weak-* completions of $\overline{\mathcal E}$.
Since the space $\mathcal E$ need not be a normed space, we recall some basic theory of duals of Fr\'echet spaces. For simplicity we assume that the seminorms on $\mathcal E$ are monotonic, i.e.\ $\|\cdot \|_i\leq \|\cdot \|_j$ for $i<j$, this being easy to arrange.
For $\alpha\in\mathcal E^*$, we can define $\|\alpha\|^i=\sup\{|\langle\alpha,\phi\rangle| \mid \|\phi\|_i\leq 1\}$. We note that $\|\alpha\|^i$ takes values in $[0,\infty]$, and $\|\cdot \|^i\geq \|\cdot \|^j$ for $i<j$. The condition that $\alpha$ is continuous is the condition that $\|\alpha\|^i$ is finite for some $i$. For any sequence $r_1,r_2,\dots$ the set $\{\alpha \in \mathcal E^* \mid \|\alpha\|^i<r_i \text{ for some } i\}$ is a neighbourhood of $0$, and every neighbourhood of 0 contains such a set. Hence these sets determine the topology on $\mathcal E^*$.
Having equipped $\mathcal E^*$ with this topology, we can then form the space $\mathcal E^{**}$ of continuous linear functionals on $\mathcal E^*$. A linear functional $\eta$ on $\mathcal E^*$ is continuous if for all $i$, we have $\|\eta\|_i=\sup\{|\langle\eta,\alpha\rangle| \mid \|\alpha\|^i\leq 1\}<\infty$.
The space $\mathcal E_W=\mathcal E^{**}$ will be equipped with the weak-* topology. It follows by the Banach-Alaoglu theorem that all bounded subsets of $\mathcal E_W$ are relatively compact. In the language of Bourbaki, if $A\subseteq \mathcal E_W$ is bounded, i.e.\ there exists a sequence $r_i$ such that $\|\eta\|_i\leq r_i$ for all $i$, then $A$ is contained in the polar of $\{\alpha\in \mathcal E^* \mid \exists i,\,\|\alpha\|^i\leq 1/r_i\}$, which is compact.
\begin{remark}
From an abstract perspective, the weak-* completion is a natural way to enlarge $\mathcal E$. On the other hand, from the point of view of explicitly constructing elements of the space, the quotient completion is more tractable.
\end{remark}
\begin{defn}
We say that a short exact sequence $0\to\mathcal E\xrightarrow i\mathcal E'\xrightarrow \pi\mathcal E''\to 0$ of locally convex topological vector spaces is \emph{topologically exact} if the maps $i,\pi$ are open (for the injection $i$, open as a map onto its image).
\end{defn}
Note that if the spaces are complete then the requirement that $i,\pi$ are open is automatic by the open mapping theorem.
\begin{prop}
\label{exact-completion}
Let $\mathcal E,\mathcal E'$ be pre-Fr\'echet spaces. Then a continuous map $T:\mathcal E\to\mathcal E'$ extends to give maps $T_Q:\mathcal E_Q\to \mathcal E_Q'$ and $T_W:\mathcal E_W\to \mathcal E_W'$. Moreover this process respects compositions, and takes short topologically exact sequences to short exact sequences.
\end{prop}
\begin{proof}
For the quotient completion, continuity of the map $T:\mathcal E\to \mathcal E'$ guarantees that applying $T$ to each term of a bounded sequence $\phi_n$ in $\mathcal E$ we obtain a bounded sequence $T\phi_n$ in $\mathcal E'$. If $\phi_n\to 0$ then $T\phi_n\to 0$ by continuity, hence we obtain a map $T_Q:\mathcal E_Q\to \mathcal E_Q'$. It is clear that this respects compositions.
Now suppose $0\to\mathcal E\xrightarrow i\mathcal E'\xrightarrow \pi\mathcal E''\to 0$ is a short exact sequence.
If $i_Q$ vanishes on a coset $[(\phi_n)]\in \mathcal E_Q$ then $i\phi_n\to 0$. Since $i$ is open and injective, $i\phi_n\to 0$ implies $\phi_n\to 0$. Hence $[(\phi_n)]=0$ and we have shown that $i_Q$ is injective.
If $[(\phi'_n)]\in \mathcal E_Q'$ with $\pi_Q[(\phi'_n)]=0$ then $\pi\phi_n'\to 0$. The map $\pi$ induces a map $\mathcal E'/i\mathcal E\to \mathcal E''$ which is an isomorphism of pre-Fr\'echet spaces, hence the class of $\phi_n'$ tends to 0 in the quotient $\mathcal E'/i\mathcal E$. That is, there exists a sequence $\psi'_n$ in $i\mathcal E$ such that $\phi_n'-\psi_n'\to 0$. We have $\psi'_n=i\psi_n$ for some $\psi_n\in \mathcal E$, and since $\psi'_n=\phi'_n-(\phi'_n-\psi'_n)$ is bounded and $i$ is open and injective, the sequence $\psi_n$ is bounded. Now $[(\phi'_n)]=[(\psi'_n)]=[(i\psi_n)]$ in $\mathcal E_Q'$, hence we deduce that $[(\phi'_n)]$ is in the image of $i_Q$.
Finally, for surjectivity of $\pi_Q$ we note that if $[(\phi''_n)]\in \mathcal E_Q''$ then there exists $\phi'_n$ such that $\phi''_n=\pi\phi'_n$. By openness of $\pi$, the sequence $\phi'_n$ can be chosen to be bounded.
In the case of the weak-* completion, the maps $T_W,i_W,\pi_W$ are simply the double duals of $T,i,\pi$. The fact that this respects composition is then standard. The hypothesis that $i,\pi$ are open ensures that the corresponding sequence of classical completions is exact, whence exactness of the double duals is standard functional analysis.
\end{proof}
We now give a connection between the two completions.
\begin{prop}
\label{e_omega}
Let $\omega$ be a non-principal ultrafilter on $\mathbb{N}$. Then for any pre-Fr\'echet space $\mathcal E$ there is a linear map $e_\omega:\mathcal E_Q\to \mathcal E_W$, defined by $\langle e_\omega(\phi),\alpha \rangle=\lim\limits_{\omega}\langle\alpha,\phi_n\rangle$. Moreover for $T:\mathcal E\to \mathcal E'$ we have $e_\omega \circ T_Q=T_W\circ e_\omega$. If $I_Q,I_W$ denote the inclusions of $\mathcal E$ in $\mathcal E_Q,\mathcal E_W$ then $I_W=e_\omega \circ I_Q$ for all $\omega$.
\end{prop}
\begin{proof}
We view an element $\phi$ of $E$ as a map $\phi:\mathbb{N} \to \mathcal E\subseteq \mathcal E_W$ with bounded range. Hence the closure of the range is compact in the weak-* topology on $\mathcal E_W$ by the Banach-Alaoglu theorem. By the universal property of the Stone-\v Cech compactification it follows that $\phi$ extends to a map $\overline{\phi}:\beta\mathbb{N} \to \mathcal E_W$, which is continuous with respect to the weak-* topology on $\mathcal E_W$. We define $e_\omega(\phi)=\overline{\phi}(\omega)$. Note that if $\phi_n\to 0$ in $\mathcal E$ then $\lim_\omega\langle\alpha,\phi_n\rangle=0$ for every $\alpha\in\mathcal E^*$, so $e_\omega$ descends to a well-defined linear map on $\mathcal E_Q=E/E_0$.
Continuity of $\overline{\phi}$ guarantees that for each $\alpha\in \mathcal E^*$, we have $\langle \overline{\phi}(\cdot),\alpha\rangle$ a continuous function on $\beta\mathbb{N}$. This is the extension to $\beta\mathbb{N}$ of the bounded function $n\mapsto \langle\alpha,\phi_n\rangle$, hence evaluating at $\omega$ we have
$$\langle e_\omega(\phi),\alpha \rangle=\langle \overline{\phi}(\omega),\alpha\rangle=\lim\limits_{\omega}\langle\alpha,\phi_n\rangle.$$
The fact that $e_\omega \circ T_Q=T_W\circ e_\omega$ is now easily verified as
$$\langle e_\omega(T_Q\phi),\alpha\rangle=\lim\limits_{\omega}\langle\alpha,T\phi_n\rangle=\lim\limits_{\omega}\langle T^*\alpha,\phi_n\rangle=\langle e_\omega(\phi),T^*\alpha\rangle=\langle T_We_\omega(\phi),\alpha\rangle$$
for all $\alpha \in \mathcal E^*$. The final assertion is simply the observation that the extension to $\beta\mathbb{N}$ of a constant sequence is again constant.
\end{proof}
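For orientation, consider the simplest case $\mathcal E=\mathbb C$ (a standard example): here $\mathcal E_Q=\ell^\infty/c_0$, $\mathcal E_W=\mathbb C$, and $e_\omega$ is the ultralimit along $\omega$. For the class represented by $\phi_n=(-1)^n$ we have
$$e_\omega(\phi)=\lim_\omega (-1)^n=\begin{cases}1, & \text{if the set of even integers lies in } \omega,\\ -1, & \text{otherwise,}\end{cases}$$
so $e_\omega$ genuinely depends on $\omega$; on classes of convergent sequences, however, every $e_\omega$ returns the ordinary limit, consistently with the final assertion of Proposition \ref{e_omega}.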
\medskip
We are now in a position to define our cohomology theories.
For $p\geq 0,q\geq -1$, let $\mathcal E_Q^{p,q}(X,\mathcal V)$ denote the quotient completion of $\mathcal E^{p,q}(X,\mathcal V)$, and let $\mathcal E_W^{p,q}(X,\mathcal V)$ denote the weak-* completion of $\mathcal E^{p,q}(X,\mathcal V)$. As $(D,d)$ are continuous anti-commuting differentials, the extensions of these to the completions (which we will also denote by $D,d$) are again anti-commuting differentials, hence taking $p,q\geq 0$ we have bicomplexes $(\mathcal E_Q^{p,q}(X,\mathcal V),(D,d))$ and $(\mathcal E_W^{p,q}(X,\mathcal V),(D,d))$.
Let $H_Q^*(X,\mathcal V)$ denote the cohomology of the totalisation of the bicomplex $\mathcal E_Q^{p,q}(X,\mathcal V), p,q\geq 0$, and let $H_W^*(X,\mathcal V)$ denote the cohomology of the totalisation of the bicomplex $\mathcal E_W^{p,q}(X,\mathcal V), p,q\geq 0$.
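Here the totalisation is formed in the usual way; since $D$ and $d$ anti-commute, no auxiliary signs are required (this is the convention behind expressions such as $(D\oplus d)\psi$ appearing below):
$$\mathrm{Tot}^k=\bigoplus_{\substack{p+q=k\\ p,q\geq 0}}\mathcal E_Q^{p,q}(X,\mathcal V),\qquad \partial=D+d,\qquad \partial^2=D^2+(Dd+dD)+d^2=0,$$
and similarly for $\mathcal E_W$.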
Since the splitting $s$ is continuous it also extends to the completions and we deduce that the augmented rows of the completed bicomplexes are exact. This gives rise to the following.
\begin{corollary}
\label{anothercorollary}
The cohomologies $H_Q^*(X,\mathcal V),H_W^*(X,\mathcal V)$ are isomorphic respectively to the cohomologies of the cochain complexes $(\mathcal E_Q^{*,-1},D)$ and $(\mathcal E_W^{*,-1},D)$.
\end{corollary}
The argument is identical to that of Corollary \ref{acorollary}.
We note that the extension of $s$ to the completions ensures that taking the kernel of $d:\mathcal E^{p,0}\to\mathcal E^{p,1}$ and then completing (in either way) yields the same result as first completing and then taking the kernel; one obtains the completion of $\mathcal E^{p,-1}$. The corresponding statement for $D$ would be false. The kernel of $D:\mathcal E_Q^{0,q}\to\mathcal E_Q^{1,q}$ will typically be much larger than the completion of the kernel of $D:\mathcal E^{0,q}\to\mathcal E^{1,q}$, and similarly for $\mathcal E_W$. These vertical kernels are of great interest and we will study them in Section \ref{ai-section}.
We now make a connection between the three cohomology theories $H_\E,H_Q,H_W$.
\begin{thm}\label{IQIW}
The inclusions of $\mathcal E^{p,q}(X,\mathcal V)$ in $\mathcal E_Q^{p,q}(X,\mathcal V)$ and $\mathcal E_W^{p,q}(X,\mathcal V)$ and the map $e_\omega:\mathcal E_Q^{p,q}(X,\mathcal V)\to \mathcal E_W^{p,q}(X,\mathcal V)$ induce maps at the level of cohomology:
$$\begin{matrix}
\vspace{1.5ex} H_\E^*(X,\mathcal V) & \xrightarrow{I_Q} & H_Q^*(X,\mathcal V)\\
\vspace{1.5ex} & \searrow^{I_W} & \downarrow {\scriptstyle{e_\omega}}\\
& & H_W^*(X,\mathcal V)
\end{matrix}$$
The above diagram commutes.
Moreover the kernels $\ker I_Q$ and $\ker I_W$ are equal, that is, a cocycle in $\mathcal E^{p,q}(X,\mathcal V)$ is a coboundary in $\mathcal E_Q^{p,q}(X,\mathcal V)$ if and only if it is a coboundary in $\mathcal E_W^{p,q}(X,\mathcal V)$.
\end{thm}
\begin{proof}
The existence of the maps at the level of cohomology follows from the fact that $D,d$ commute with the inclusion maps and $e_\omega$. The diagram commutes at the level of cochains by Proposition \ref{e_omega}. It is then immediate that $\ker I_Q\subseteq \ker I_W$. It remains to prove that if $\phi$ is a cocycle in $\mathcal E^{p,q}(X,\mathcal V)$ with $I_W\phi$ a coboundary, then $I_Q\phi$ is also a coboundary.
By exactness of the rows, every cocycle in $\mathcal E^{p,q}(X,\mathcal V)$ is cohomologous to an element of $\mathcal E^{p+q,0}(X,\mathcal V)$, hence without loss of generality we may assume that $q=0$. Moreover any cocycle in $\mathcal E^{p,0}(X,\mathcal V)$ is $d\phi$ for some $\phi$ in $\mathcal E^{p,-1}(X,\mathcal V)$, and the images of $d\phi$ under $I_Q,I_W$ will be coboundaries if and only if $I_Q\phi,I_W\phi$ are coboundaries in the completions of the complex $(\mathcal E^{p,-1}(X,\mathcal V),D)$.
Suppose that $I_W\phi$ is a coboundary, that is, viewing $\phi$ as an element of the double dual $\mathcal E_W^{p,-1}$, there exists $\psi$ in $\mathcal E_W^{p-1,-1}$ such that $D\psi=\phi$. We now appeal to the weak-* density of $\mathcal E^{p-1,-1}$ in $\mathcal E_W^{p-1,-1}$ to deduce that there is a net $\theta_\lambda$ in $\mathcal E^{p-1,-1}$ converging to $\psi$ in the weak-* topology. By continuity of $D$ we have that $D\theta_\lambda \to D\psi=\phi$. As $D\theta_\lambda$ and $\phi$ lie in $\mathcal E^{p,-1}$, this convergence is in the weak topology on $\mathcal E^{p,-1}$. On any locally convex topological vector space, a convex set is closed in the locally convex topology if and only if it is closed in the associated weak topology. Hence (as the locally convex topology of $\mathcal E^{p,-1}$ is metrizable) there is a sequence $\theta_n$ of convex combinations of elements of the net $\theta_\lambda$ such that $D\theta_n$ converges to $\phi$ in the topology on $\mathcal E^{p,-1}$ given by the $R$-norms. Thus $D[\theta_n]=I_Q\phi$ in $\mathcal E_Q^{p,-1}$.
Hence $I_Q\phi$ is also a coboundary, as required.
\end{proof}
\section{Morphisms, change of coefficients and the long exact sequence in cohomology}
Now we consider the effect on cohomology of varying the coefficient module. Let $X$ be a metric space, $G$ be a group acting by isometries on $X$ and let $\mathcal U=(U, |\cdot|_U, \supp_U)$ and $\mathcal V=(V, |\cdot|_V, \supp_V)$ be $G$-equivariant $X$-modules.
\begin{defn}\label{morphisms}
A $G$-equivariant \emph{$X$-morphism} from $\mathcal U$ to $\mathcal V$ is an equivariant bounded linear map $\Psi:U\rightarrow V$ for which there exists $S\geq 0$ such that for all $u\in U$, $\supp_V(\Psi(u))\subseteq B_S(\supp_U(u))$. When the group action is clear from the context, in particular when $G$ is trivial, we will simply refer to this as an $X$-morphism.
An $X$-morphism $\Psi$ is said to be a \emph{monomorphism} if it is injective and if there exists $T\geq 0$ such that for all $u\in U$, $\supp_U(u)\subseteq B_T(\supp_V(\Psi(u)))$.
An $X$-morphism $\Psi$ is said to be an \emph{epimorphism} if it is surjective and there exists $M\geq 0$ such that for all $R\geq 0$ there exists $S\geq 0$ such that for all $v\in V$, if $\supp_V(v)\subseteq B_R(x)$ then there exists $u\in \Psi^{-1}(v)$ such that $\|u\|_U\leq M\|v\|_V$ and $\supp_U(u)\subseteq B_S(x)$.
An $X$-morphism $\Psi$ is said to be an \emph{isomorphism} if it is both an epimorphism and a monomorphism.
\end{defn}
Note that a surjective monomorphism is automatically an epimorphism and therefore an isomorphism.
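To sketch why (using that the underlying spaces of $X$-modules are Banach spaces, as in the conventions above): if $\Psi$ is a surjective monomorphism then $\Psi^{-1}$ is bounded by the open mapping theorem, and for $v\in V$ with $\supp_V(v)\subseteq B_R(x)$ the unique preimage $u=\Psi^{-1}(v)$ satisfies
$$\|u\|_U\leq \|\Psi^{-1}\|\,\|v\|_V,\qquad \supp_U(u)\subseteq B_T(\supp_V(\Psi(u)))=B_T(\supp_V(v))\subseteq B_{T+R}(x),$$
so the conditions in the definition of an epimorphism hold with $M=\|\Psi^{-1}\|$ and $S=T+R$.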
\skipit{Probably there is an injective epi that is not an isomorphism}
In this section we will use the usual convention that the term morphism refers to an $X$-morphism when both the space $X$ and the group $G$ are clear from the context.
We note that the concept of a monomorphism is constructed to ensure that $U$ may be viewed in some sense as a sub-module of $V$, while the concept of an epimorphism is designed to provide controlled splittings, providing, respectively, some notion of injectivity and surjectivity for supports.
Given a space $X$, and a group $G$ acting by isometries on $X$, a short exact sequence of $X$-modules is a short exact sequence of Banach spaces
\[
0\rightarrow U\xrightarrow{\iota} V\xrightarrow{\pi} W\rightarrow 0
\]
\noindent each with the structure of a $G$-equivariant $X$-module and where $\iota$ is a monomorphism of $X$-modules and $\pi$ is an epimorphism.
\begin{lemma} An $X$-morphism $\Psi:\mathcal U\rightarrow \mathcal V$ induces a continuous linear map $\Psi_*:\mathcal E^{p,q}(X, \mathcal U)\rightarrow \mathcal E^{p,q}(X, \mathcal V)$ commuting with both differentials. This extends to give maps on both completed bicomplexes.
A short exact sequence of $X$-modules induces a short exact sequence of bicomplexes for $\mathcal E,\mathcal E_Q$ and $\mathcal E_W$. Hence, by the snake lemma, we obtain long exact sequences in cohomology for $H_\E^{*}(X, -)$,$H_Q^{*}(X, -)$ and $H_W^{*}(X, -)$.
\end{lemma}
\begin{proof}
Given an element $\phi\in \mathcal E^{p,q}(X,\mathcal U)$, we need to check that $\Psi_*(\phi)=\Psi\circ\phi$ lies in $\mathcal E^{p,q}(X,\mathcal V)$. Equivariance of $\Psi_*(\phi)$ follows from equivariance of $\Psi$ and $\phi$.
Note that for any $R\geq 0$, the $R$-norm $\|\Psi\circ\phi\|_R$ in $V$ is at most $\|\Psi\|$ times the $R$-norm of $\phi$ in $U$, hence the map is continuous.
As $\phi$ has controlled supports in $\mathcal U$, for any $R>0$ there exists $S>0$ such that if $(\bx,\by)\in \Delta_R$ then $\supp_U(\phi(\mathbf x, \mathbf y))\subseteq B_S(x_i),B_S(y_j)$ for all $i,j$. It follows that $\supp_V(\Psi\circ\phi(\mathbf x, \mathbf y))\subseteq B_{S+S'}(x_i),B_{S+S'}(y_j)$ for all $i,j$, where $S'$ is the constant given in the definition of a morphism, so $\Psi\circ\phi$ has controlled supports. Combining this with continuity we deduce that $\Psi\circ\phi$ lies in $\mathcal E^{p,q}(X,\mathcal V)$.
The fact that $\Psi_*$ commutes with the differentials is immediate from linearity of $\Psi$ and the definitions of $D,d$. As the map $\Psi_*$ is continuous, it induces maps on both completions.
Now suppose we are given a short exact sequence of $X$-modules
\[
0\rightarrow U\xrightarrow{\iota} V\xrightarrow{\pi} W\rightarrow 0.
\]
We will show that the sequence
\[
0\rightarrow \mathcal E^{p,q}(X,\mathcal U)\xrightarrow{\iota_*} \mathcal E^{p,q}(X,\mathcal V)\xrightarrow{\pi_*} \mathcal E^{p,q}(X,\mathcal W)\rightarrow 0
\]
is topologically exact, i.e.\ it is exact and the maps are open.
Injectivity and openness of $\iota_*$ follow directly from the corresponding properties of $\iota$; as $\iota$ has closed range, it is open by the open mapping theorem.
Exactness at the middle term follows from the observation that if $\pi_*(\phi)=0$ then $\phi=\iota\circ \phi'$ for some function $\phi':X^{p+1}\times X^{q+1}\rightarrow U$, where $\phi'$ is uniquely defined by injectivity of $\iota$. We need to verify that $\phi'$ is an element of $\mathcal E^{p,q}(X,\mathcal U)$. Openness of $\iota$ yields the required norm estimates, whereas the support condition is satisfied because $\iota$ is a monomorphism, hence $\supp_U(\phi')\subseteq B_T(\supp_V(\iota\circ\phi'))= B_T(\supp_V(\phi))$ for some $T\geq 0$.
Surjectivity of $\pi_*$ follows from the definition of an epimorphism: given $\phi: X^{p+1}\times X^{q+1}\rightarrow W$ which is an element of $\mathcal E^{p,q}(X, \mathcal W)$, for each $R>0$ there exists $S>0$ such that $(\bx,\by)\in\Delta_R$ implies that $\supp_W(\phi(\mathbf x, \mathbf y))\subseteq B_{S}(x_i),B_{S}(y_j)$ for all $i,j$. Since $\pi$ is an epimorphism, there exist $M,T>0$ such that for each $(\mathbf x, \mathbf y)$ there exists an element of $\pi^{-1}(\phi(\mathbf x, \mathbf y))$, which we denote $\phi'(\mathbf x, \mathbf y)$, such that $\|\phi'(\mathbf x, \mathbf y)\|_V\leq M\|\phi(\mathbf x, \mathbf y)\|_W$ and $\supp_V(\phi'(\mathbf x, \mathbf y))\subseteq B_{T}(x_i),B_{T}(y_j)$ for each $i,j$, so $\phi'$ is of controlled supports and has finite $R$-norms as required. These estimates for the $R$-norms also ensure that $\pi_*$ is open.
Hence by Proposition \ref{exact-completion} we obtain short exact sequences for both the $\mathcal E_Q$ and $\mathcal E_W$ bicomplexes. It is now immediate from the snake lemma that we obtain long exact sequences in cohomology:
\[
0\rightarrow H_\E^0(X,\mathcal U)\rightarrow H_\E^0(X,\mathcal V)\rightarrow H_\E^0(X,\mathcal W)\rightarrow H_\E^1(X,\mathcal U)\rightarrow H_\E^1(X,\mathcal V)\rightarrow H_\E^1(X,\mathcal W)\rightarrow \cdots
\]
\[
0\rightarrow H_Q^0(X,\mathcal U)\rightarrow H_Q^0(X,\mathcal V)\rightarrow H_Q^0(X,\mathcal W)\rightarrow H_Q^1(X,\mathcal U)\rightarrow H_Q^1(X,\mathcal V)\rightarrow H_Q^1(X,\mathcal W)\rightarrow \cdots
\]
\[
0\rightarrow H_W^0(X,\mathcal U)\rightarrow H_W^0(X,\mathcal V)\rightarrow H_W^0(X,\mathcal W)\rightarrow H_W^1(X,\mathcal U)\rightarrow H_W^1(X,\mathcal V)\rightarrow H_W^1(X,\mathcal W)\rightarrow \cdots
\]
\end{proof}
As an example we consider the following short exact sequence of $X$-modules, where we are taking the group $G$ to be trivial:
\[
0\rightarrow \ell^1_0(X)\xrightarrow{\iota} \ell^1(X)\xrightarrow{\pi} \mathbb{C}\rightarrow 0
\]
The function spaces are equipped with their usual support functions $\supp(f)=\{x\in X\mid f(x)\not = 0\}$ and $\mathbb{C}$ is equipped with the trivial support function $\supp(\lambda)=\emptyset$ for all $\lambda\in \mathbb{C}$. The map $\iota$ is the standard ``forgetful'' inclusion of $\ell^1_0(X)$ into $\ell^1(X)$ and is easily seen to be a monomorphism. The map $\pi$ is evaluation of the $\ell^1$ sum, and this is an epimorphism. To see this we argue as follows: since the support of any $\lambda\in \mathbb{C}$ is empty, it lies within distance $R$ of any point $x\in X$. We choose the scaled Dirac delta function $\lambda\delta_x\in \ell^1(X)$, which clearly maps to $\lambda$, has norm $|\lambda|$ and $\supp(\lambda\delta_x)=\{x\}$, so putting $M=1$ and $S=0$ satisfies the conditions.
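For the reader's convenience we record the maps explicitly (as follows from the description of $\pi$ just given):
$$\pi(f)=\sum_{x\in X}f(x),\qquad \ell^1_0(X)=\ker\pi=\Big\{f\in\ell^1(X)\ \Big|\ \sum_{x\in X}f(x)=0\Big\},$$
which makes the exactness of the sequence transparent.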
It follows that we obtain a long exact sequence of cohomology:
\[
0\rightarrow H_Q^0(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_Q^0(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_Q^0(X,\mathbb{C} )\xrightarrow{D} H_Q^1(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_Q^1(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_Q^1(X,\mathbb{C} )\xrightarrow{D} \cdots.
\]
\[
0\rightarrow H_W^0(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_W^0(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_W^0(X,\mathbb{C} )\xrightarrow{D} H_W^1(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_W^1(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_W^1(X,\mathbb{C} )\xrightarrow{D} \cdots.
\]
\section{A cohomological characterisation of property A}
As an application of the long exact sequence we give our first cohomological characterisation of Yu's property A.
Let $X$ be a metric space, and let $G$ be the trivial group. Recall we have a short exact sequence
\[
0\rightarrow \ell^1_0(X)\xrightarrow{\iota} \ell^1(X)\xrightarrow{\pi} \mathbb{C}\rightarrow 0
\]
where $\mathbb{C}$ is given the trivial support function $\supp(\lambda)=\emptyset$ for all $\lambda$ in $\mathbb{C}$, and $\ell^1_0(X),\ell^1(X)$ are given the usual support functions. Let $\mathbf 1\in \mathcal E^{0,-1}(X,\mathbb C)$ denote the constant function 1 on $X$.
\begin{lemma}\label{amazing}
A space $X$ has property $A$ if and only if $\mathcal E_Q^{0,-1}(X,\ell^1X)$ contains an element $\phi$ such that $D\phi=0$ and $\pi_*\phi=I_Q\mathbf 1$, where $\mathbf 1\in \mathcal E^{0,-1}(X,\mathbb C)$ denotes the constant function 1.
\end{lemma}
\begin{proof}
Given a sequence of functions $f_n$ as in the definition of property A (writing $f_n(x)=f^n_x$), we note that the class $\phi=[(f_n)]$ has the required properties: the fact that each $f_n(x)$ is a probability measure ensures that $\pi f_n(x)=1$ for all $x,n$, that is, $\pi_*\phi=I_Q\mathbf 1$. The other hypotheses of Definition \ref{propAdef} are precisely the assertions that $\phi$ is of controlled supports and that $D\phi=0$ in $\mathcal E_Q^{1,-1}(X, \ell^1X)$.
Conversely, given an element $\phi\in \mathcal E_Q^{0,-1}(X,\ell^1X)$ such that $D\phi=0$ and $\pi_*\phi=I_Q\mathbf 1$, represented by a sequence $\phi_n$, we may assume that $\pi\phi_n(x)=1$ for all $x,n$, replacing $\phi_n(x)$ by $\phi_n(x)+(1-\pi\phi_n(x))\delta_x$ if necessary. We set $f_n(x)(z)=\frac{|\phi_n(x)(z)|}{\|\phi_n(x)\|_{\ell^1}}$. Since $1=\pi\phi_n(x)\leq\|\phi_n(x)\|_{\ell^1}$ we have $\frac 1{\|\phi_n(x)\|_{\ell^1}}\leq 1$. As an element of $\ell^1(X)$, $f_n(x)$ has the same support as $\phi_n(x)$; in particular $f=[(f_n)]$ is of controlled supports. The verification that $\|f_n(x_1) - f_n(x_0)\|_{\ell^1}$ tends to 0 uniformly on $\{ (x_0,x_1)\mid d(x_0,x_1)\leq R\}$ follows from the fact that $D\phi=0$ and the estimate $\frac 1{\|\phi_n(x)\|_{\ell^1}}\leq 1$, as we spell out below.
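In detail (a routine estimate, using $\|\phi_n(x)\|_{\ell^1}\geq \pi\phi_n(x)=1$): writing $a=\phi_n(x_1)$ and $b=\phi_n(x_0)$ we have
\begin{align*}
\|f_n(x_1)-f_n(x_0)\|_{\ell^1}&=\Big\|\frac{|a|}{\|a\|_{\ell^1}}-\frac{|b|}{\|b\|_{\ell^1}}\Big\|_{\ell^1}\leq \frac{\big\||a|-|b|\big\|_{\ell^1}}{\|a\|_{\ell^1}}+\Big|\frac{1}{\|a\|_{\ell^1}}-\frac{1}{\|b\|_{\ell^1}}\Big|\,\|b\|_{\ell^1}\\
&\leq \|a-b\|_{\ell^1}+\frac{\big|\|b\|_{\ell^1}-\|a\|_{\ell^1}\big|}{\|a\|_{\ell^1}}\leq 2\,\|a-b\|_{\ell^1}=2\,\|D\phi_n((x_0,x_1))\|_{\ell^1},
\end{align*}
which tends to 0 uniformly on $\{(x_0,x_1)\mid d(x_0,x_1)\leq R\}$ since $D\phi=0$ in $\mathcal E_Q^{1,-1}(X,\ell^1X)$.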
\end{proof}
Note that $\mathbf 1\in \mathcal E^{0,-1}(X,\mathbb C)$ is a cocycle. Hence (applying $I_Q$, $I_W$) it represents an element $[\mathbf 1_Q]\in H_Q^0(X, \mathbb{C})$, and another element $[\mathbf 1_W]\in H_W^0(X, \mathbb{C})$.
Recalling the long exact sequences in $H_Q,H_W$
\[
0\rightarrow H_Q^0(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_Q^0(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_Q^0(X,\mathbb{C} )\xrightarrow{D} H_Q^1(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_Q^1(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_Q^1(X,\mathbb{C} )\xrightarrow{D} \cdots.
\]
\[
0\rightarrow H_W^0(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_W^0(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_W^0(X,\mathbb{C} )\xrightarrow{D} H_W^1(X,\mathcal \ell^1_0)\xrightarrow{\iota_*} H_W^1(X,\mathcal \ell^1)\xrightarrow{\pi_*} H_W^1(X,\mathbb{C} )\xrightarrow{D} \cdots.
\]
we have classes $D[\mathbf 1_Q]$ in $H_Q^1(X, \ell^1_0(X))$, and $D[\mathbf 1_W]$ in $H_W^1(X, \ell^1_0(X))$.
We make the following definition.
\begin{defn}
An \emph{asymptotically invariant mean} for $X$ is an element $\mu$ in $\mathcal E_W^{0,-1}(X,\ell^1(X))$ such that $D\mu=0$ and $\pi_*(\mu)=\mathbf 1_W$.
\end{defn}
Let $\delta$ denote the map $X\to \ell^1(X)$, $x\mapsto \delta_x$. We note that as $\pi_*(I_W\delta)=\mathbf 1_W$, by exactness we have $\pi_*(\mu)=\mathbf 1_W$ if and only if $I_W\delta-\mu$ lies in the image of $\mathcal E_W^{0,-1}(X,\ell^1_0(X))$.
We now characterise property A as follows:
\begin{thm} \label{D[1]=0}\label{MainTheorem1}
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $X$ has property $A$.
\item $[\mathbf 1_Q]\in \Im \pi_*$ in $H_Q^0(X, \mathbb{C})$.
\item $D[\mathbf 1_Q]=0$ in $H_Q^1(X, \ell^1_0(X))$.
\item $D[\mathbf 1_W]=0$ in $H_W^1(X, \ell^1_0(X))$.
\item $[\mathbf 1_W]\in \Im \pi_*$ in $H_W^0(X, \mathbb{C})$.
\item $X$ admits an asymptotically invariant mean.
\end{enumerate}
\end{thm}
\begin{proof}
We first show (1)$\iff$(2). Identifying $H_Q^0(X, \mathbb{C})$ with the kernel of $D:\mathcal E_Q^{0,-1}(X,\mathbb{C})\to \mathcal E_Q^{1,-1}(X,\mathbb{C})$, we have $[\mathbf 1_Q]\in \Im \pi_*$ if and only if $\mathcal E_Q^{0,-1}(X,\ell^1X)$ contains an element $\phi$ such that $D\phi=0$ and $\pi_*\phi=I_Q\mathbf 1$. This is equivalent to property A by Lemma \ref{amazing}, hence we have shown (1)$\iff$(2).
Conditions (2) and (3) are equivalent by exactness of the long exact sequence in cohomology, while (3) is equivalent to (4) by Theorem \ref{IQIW}. Conditions (4), (5) are equivalent by a further application of the long exact sequence (this time for the weak-* completion). Finally the equivalence of (5) and (6) is immediate from the definition of asymptotically invariant mean.
\end{proof}
To place this in context we consider the equivariant analog for a group. Let $G$ be a countable group equipped with a left invariant proper metric. Again we have two long exact sequences in cohomology:
\[
0\rightarrow H_Q^0(G,\mathcal \ell^1_0(G))\xrightarrow{\iota_*} H_Q^0(G,\mathcal \ell^1(G))\xrightarrow{\pi_*} H_Q^0(G,\mathbb{C} )\xrightarrow{D} H_Q^1(G,\mathcal \ell^1_0(G))\xrightarrow{\iota_*} \cdots
\]
\[
0\rightarrow H_W^0(G,\mathcal \ell^1_0(G))\xrightarrow{\iota_*} H_W^0(G,\mathcal \ell^1(G))\xrightarrow{\pi_*} H_W^0(G,\mathbb{C} )\xrightarrow{D} H_W^1(G,\mathcal \ell^1_0(G))\xrightarrow{\iota_*} \cdots
\]
\begin{thm} \label{amenableD[1]=0}
The following are equivalent:
\begin{enumerate}
\item $G$ is amenable.
\item $[\mathbf 1_Q]\in \Im \pi_*$ in $H_Q^0(G,\mathbb{C})$.
\item $D[\mathbf 1_Q]=0$ in $H_Q^1(G, \ell^1_0(G))$.
\item $D[\mathbf 1_W]=0$ in $H_W^1(G, \ell^1_0(G))$.
\end{enumerate}
\end{thm}
\begin{proof}
Suppose that $[\mathbf 1_Q]\in \Im \pi_*$. Identifying $H_Q^0(G,\mathcal \ell^1(G))$ with the kernel of $D:\mathcal E_Q^{0,-1}\rightarrow \mathcal E_Q^{1,-1}$ we see that there exists a sequence of equivariant functions $\phi_n:G \rightarrow\ell^1(G)$ which represents a cocycle and for which (after adjusting the representative as in the proof of Lemma \ref{amazing}) $1=\sum\limits_h \phi_n(g)(h)\leq \sum\limits_h |\phi_n(g)(h)| =\|\phi_n(g)\|_{\ell^1}$ for every $g\in G$ and every $n\in \mathbb{N}$. Set $f_g^n=|\phi_n(g)|/\|\phi_n(g)\|_{\ell^1}$ to obtain an equivariant element of $\text{Prob}(G)$.
For given $g$ and $n$ we have $\supp(f_g^n)=\supp\big(|\phi_n(g)|/\|\phi_n(g)\|_{\ell^1}\big)=\supp(\phi_n(g))$. Since $\phi_n$ is of controlled supports there exists $S>0$ such that $\supp(\phi_n(g))\subseteq B_{S}(g)$ for all $g$. Hence $\supp(f^n_g)\subseteq B_{S}(g)$ as required.
Since $\phi_n$ represents a cocycle, $\|D\phi_n((g_0,g_1))\|_{\ell^1}$ converges to zero uniformly on the set $\{(g_0, g_1)\mid d(g_0, g_1)\leq R\}$, so
\[
\|f^n_{g_1} - f^n_{g_0}\|_{\ell^1}{\rightarrow} 0 \text{ as }{n\rightarrow \infty},
\]
uniformly on the set $\{ (g_0, g_1)\mid d(g_0,g_1)\leq R\}$, hence (2) implies (1). The converse is easy.
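To sketch the converse, one may use the standard characterisation of amenability via Reiter sequences: there are $f^n\in\text{Prob}(G)$, which we may take to be finitely supported, with $\|s\cdot f^n-f^n\|_{\ell^1}\to 0$ for every $s\in G$. Setting $\phi_n(g)=g\cdot f^n$ gives equivariant functions with $\pi\phi_n(g)=1$ and, by left invariance of the metric, $\supp(\phi_n(g))=g\,\supp(f^n)\subseteq B_{S_n}(g)$ whenever $\supp(f^n)\subseteq B_{S_n}(e)$, so each $\phi_n$ has controlled supports. Moreover
$$\|D\phi_n((g_0,g_1))\|_{\ell^1}=\|g_1\cdot f^n-g_0\cdot f^n\|_{\ell^1}=\|(g_0^{-1}g_1)\cdot f^n-f^n\|_{\ell^1}\rightarrow 0$$
uniformly on $\{(g_0,g_1)\mid d(g_0,g_1)\leq R\}$, since $\{g_0^{-1}g_1\mid d(g_0,g_1)\leq R\}=B_R(e)$ is finite by properness of the metric. Hence $[(\phi_n)]$ is a cocycle with $\pi_*[(\phi_n)]=I_Q\mathbf 1$, that is, $[\mathbf 1_Q]\in\Im\pi_*$.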
Conditions (2) and (3) are equivalent by exactness of the long exact sequence in cohomology, while (3) is equivalent to (4) by Theorem \ref{IQIW}.
\end{proof}
This result should be compared with the statement and proof of Theorem \ref{unboundedbeauty}, parts (1), (2), (3), which invoked the long exact sequence:
\[
0\rightarrow H^0(G,( \ell^\infty G/\mathbb C)^*)\xrightarrow{\pi^*} H^0(G,\ell^\infty G^*)\xrightarrow{\iota^*} H^0(G,\mathbb{C} )\xrightarrow{d} H^1(G, ( \ell^\infty G/\mathbb C)^*)\xrightarrow{\pi^*} \cdots
\]
induced by the short exact sequence
\[
0\rightarrow ( \ell^\infty G/\mathbb C)^*\xrightarrow{\pi^*} \ell^\infty G^*\xrightarrow{\iota^*} \mathbb{C}\rightarrow 0
\]
We therefore consider the cohomology groups $H_{\sim}^0(G,\mathcal \ell^1(G)), H_{\sim}^0(G,\mathbb{C} ), H_{\sim}^1(G,\mathcal \ell^1_0(G))$, where $\sim$ denotes $Q$ or $W$, as asymptotic analogs of the groups $H^0(G,\ell^\infty G^*),H^0(G,\mathbb{C} ), H^1(G, ( \ell^\infty G/\mathbb C)^*)$ respectively.
\skipit{
\section{Transfer of module structures by uniform embeddings}
Here we consider transfer of module structures induced by an equivariant uniform embedding of metric spaces, and in particular establish that the ($G$-equivariant) cohomology of a space is an invariant up to ($G$-equivariant) coarse equivalence.
Let $f:X\rightarrow Y$ be a $G$-equivariant uniform embedding of metric spaces and let $\mathcal V$ be a $G$-equivariant $Y$-module, $(V, \|\cdot\|, \supp_Y)$. We define the coarse pullback $f^*\mathcal V:=(V, \|\cdot\|, \supp_X)$, by $\supp_X(v)=f^{-1}p\supp_Y(v)$ where $p:2^Y\rightarrow 2^{f(X)}$ is defined by
\[p(A)=\{y\in f(X)\mid \exists a\in A \text{ with }\lceil d_Y(y,a)\rceil\leq \lceil d_Y(y',a)\rceil, \forall y'\in f(X)\}.
\]
The map $p$ is a coarse version of the nearest point map; when the metric takes values in $\mathbb{N}$, e.g., the edge metric on the vertices of a graph, $p(A)$ is the set of nearest points in $f(X)$ to points in $A$.
It is easy to verify that $f^*\mathcal V$ is an $X$-module. The transfer map $f^*$ also induces a map on cohomology as follows:
Given a cochain $\phi\in E^{p,q}(Y, \mathcal V)$ we obtain an element $f^*\phi\in E^{p,q}(X, f^*\mathcal V)$ by setting $f^*\phi(\mathbf x, \mathbf x', n)=\phi(f(\mathbf x), f(\mathbf x'), n)$. Coarseness of $f$ ensures that for any $R\geq 0$ there exists $S\geq 0$ such that $\|f^*\phi\|_R\leq \|\phi\|_S$ and for all $n$, $\|f^*\phi\|_R^n\leq \|\phi\|_S^n$ .
Given $\phi\in E^{p,q}(Y, \mathcal V)$ and a proper sequence $R_n$ there exists a proper sequence $S_n$ such that $d(y_i, y_j')\leq R_n$ implies that $\supp_Y\phi(\mathbf y, \mathbf y', n)\subseteq B_{S_n}(y_i)$ for each $i$. Now consider $f^*\phi$. Given a proper sequence $R_n'$, coarseness of $f$ yields a proper sequence $R_n$ such that $d(x_i, x_j')\leq R_n'$ implies that $d(f(x_i), f(x_j'))\leq R_n$, so there exists a proper sequence $S_n$ such that $d(x_i, x_j')\leq R_n'$ implies that $\supp_Y\phi(f(\mathbf x), f(\mathbf x'), n)\subseteq B_{S_n}(f(x_i))$ for each $i$. Hence
\[
p(\supp_Y\phi(f(\mathbf x), f(\mathbf x'), n))\subseteq p(B_{S_n}(f(x_i)))\subseteq B_{2S_n+1}(f(x_i)).
\]
As $f$ is a uniform embedding it follows that there is a further proper sequence $S_n'$ such that the preimage of $B_{2S_n+1}(f(x_i))$ is contained in $B_{S_n'}(x_i)$, so $\supp_X(f^*\phi(\mathbf x, \mathbf x', n))\subseteq B_{S_n'}(x_i)$ for each $i$. Hence $f^*\phi$ is of controlled supports.
Hence $f^*\phi\in E^{p,q}(X, f^*\mathcal V)$ as required. Moreover if $\phi\in E^{p,q}_0(Y, \mathcal V)$ then $f^*\phi\in E^{p,q}_0(X, f^*\mathcal V)$.
We note that $Df^*\phi=f^*D\phi$ and $df^*\phi=f^*d\phi$ by the usual argument so we obtain a map on bicomplexes inducing maps
\begin{align*}
f^*&:H_{E_0}^*(Y, \mathcal V)\rightarrow H_{E_0}^*(X, f^*\mathcal V)\\
f^*&:H_E^*(Y, \mathcal V)\rightarrow H_E^*(X, f^*\mathcal V)\\
f^*&:H_Q^*(Y, \mathcal V)\rightarrow H_Q^*(X, f^*\mathcal V)\\
\end{align*}
In the special case where $f$ is the inclusion of a subspace $X$ in $Y$ and the group $G$ is trivial we obtain the well known result:
\begin{lemma}
If $X$ is a subspace of a metric space $Y$ and $Y$ has property A then $X$ has property A.
\end{lemma}
\begin{proof}
The transfer map on cocycles is restriction and maps the Johnson element in $H_Q^1(Y, \ell^1_0(Y))$ to the Johnson element in $H_Q^1(X, f^*\ell^1_0(Y))$. If $Y$ has property A then its Johnson element is trivial implying that its image is trivial in $H_Q^1(X, f^*\ell^1_0(Y))$. But $f^*\ell^1_0(Y)=\ell^1_0(X)$, so this implies that $X$ has property A.
\end{proof}
}
\section{The asymptotically invariant complex}\label{ai-section}
We pause for a moment to recall the classical definition of bounded cohomology for a group. One first takes the homogeneous bar resolution wherein the $k$-dimensional cochains consist of all bounded functions from $G^{k+1}$ to $\mathbb C$. This cochain complex is exact, so has trivial cohomology. This is exhibited by taking a basepoint splitting which is induced by the map $G^k\rightarrow G^{k+1}$ given by inserting the basepoint as an additional (first) co-ordinate. Now one takes the $G$-invariant part of this complex, where $G$ acts diagonally and $\mathbb C$ is equipped with the trivial action of $G$. Since the splitting is not equivariant the corresponding cochain complex is not necessarily exact. When the group $G$ is amenable one can average the splitting over orbits using the invariant mean, and this produces an equivariant splitting which therefore kills all the cohomology in dimensions greater than or equal to $1$.
In this section we will carry out an analogous process for property A and a metric space. Replacing the classical (split) cochain complex by the first row of the $\mathcal E_Q$ bicomplex, $(\mathcal E_Q^{0,q},d)$, (which is acyclic since $(\mathcal E^{0,q},d)$ is acyclic by Proposition \ref{acyclicrows}) we then take the kernels under the vertical differential $D$ to produce a new cochain complex, which we will call the \emph{asymptotically invariant subcomplex of $\mathcal E_Q$}. The splitting $s$ of the horizontal differential $d$ does not restrict to this cochain complex leaving room for interesting cohomology. This is the analogue of taking the invariant parts in group cohomology. Similarly $(\mathcal E_W^{0,q},d)$ is acyclic, but taking the kernels under the vertical differential $D$ we obtain the \emph{asymptotically invariant subcomplex of $\mathcal E_W$}. We will then show that if the space $X$ has property A, one can asymptotically average the splitting to obtain a splitting of the asymptotically invariant complexes. Hence we deduce (analogously to the case of bounded cohomology) that if $X$ has property $A$ then the cohomologies of both asymptotically invariant complexes vanish in all dimensions greater than 0.
\begin{defn}
We say that an element $\phi$ of $\mathcal E_Q^{0,q}$ (respectively $\mathcal E_W^{0,q}$) is \emph{asymptotically invariant} if $D\phi=0$ in $\mathcal E_Q^{1,q}$ (respectively $\mathcal E_W^{1,q}$). Let $\mathcal E_{QA}^q$, $\mathcal E_{WA}^q$ denote the sets of asymptotically invariant elements in $\mathcal E_Q^{0,q}$ and $\mathcal E_W^{0,q}$ respectively. We note as usual that this is defined for $q\geq -1$.
\end{defn}
For notational convenience when considering elements of $\mathcal E^{0,q}$ we will write $\phi(x,(y_0,\dots,y_q))$, suppressing the parentheses around the single $x$ variable.
The term asymptotically invariant is motivated by the case of $\mathcal E_Q^{0,q}$. An element of $\mathcal E_Q^{0,q}$ is asymptotically invariant if it is represented by a sequence $\phi_n:X\times X^{q+1} \to V$ which is asymptotically invariant in the $x$ variable in the following sense: for all $R>0$ the difference $\phi_n(x_1,\mathbf y)-\phi_n(x_0,\mathbf y)$ tends to zero uniformly on $\{((x_0,x_1),\mathbf y) \mid d(x_0,x_1)\leq R, \mathbf y\in X^{q+1}\}$.
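Equivalently, with the conventions for the $R$-norms used above, this says precisely that $\|D\phi_n\|_R\to 0$ for every $R>0$, since
$$\|D\phi_n\|_R=\sup\big\{\|\phi_n(x_1,\mathbf y)-\phi_n(x_0,\mathbf y)\|_V \,\big|\, d(x_0,x_1)\leq R,\ \mathbf y\in X^{q+1}\big\}.$$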
We remark that it is essential that we first complete the complex $\mathcal E$ and then take the kernels of $D$, not the other way around. If we were to take the kernel of $D:\mathcal E^{0,q}\to \mathcal E^{1,q}$ we would get functions $\phi(x,(y_0,\dots,y_q))$ which are constant in the $x$ variable, that is, we have invariant rather than asymptotically invariant elements. The kernel of $D:\mathcal E_Q^{0,q}\to\mathcal E_Q^{1,q}$ will typically be much larger than the completion of these constant functions, and similarly for $\mathcal E_W$.
We now make the following elementary observation.
\begin{prop}
The differential $d$ maps $\mathcal E_{QA}^q(X,\mathcal V)$ to $\mathcal E_{QA}^{q+1}(X,\mathcal V)$, and maps $\mathcal E_{WA}^q(X,\mathcal V)$ to $\mathcal E_{WA}^{q+1}(X,\mathcal V)$. Hence $(\mathcal E_{QA}^q(X,\mathcal V),d)$, $(\mathcal E_{WA}^q(X,\mathcal V),d)$ are complexes.
\end{prop}
\begin{proof}
This is immediate from anti-commutativity of the differentials $D,d$.
\end{proof}
Recall that there is a splitting $s:\mathcal E^{0,q}\to \mathcal E^{0,q-1}$ extending to both generalised completions. Note however that $s$ does not necessarily map either asymptotically invariant complex into itself, as illustrated by the following simple example.
\begin{example}
Let $X$ be the integers equipped with the subspace metric from the reals, and let $\mathcal V$ be the $X$-module $\ell^1\mathbb{Z}$. For elements $x,y_0,y_1\in \mathbb{Z}$ set $\phi(x,(y_0,y_1))=\delta_{y_1}-\delta_{y_0}$, where $\delta_y$ denotes the Dirac function with support $\{y\}$. Clearly $\phi\in \mathcal E^{0,1}$, and since $\phi$ is independent of $x$, $D\phi=0$, so $I_Q\phi$ lies in $\mathcal E_{QA}^1$, and $I_W\phi$ lies in $\mathcal E_{WA}^1$. However
$$Ds\phi((x_0,x_1),(y_0))=s\phi(x_1,(y_0))-s\phi(x_0,(y_0))=(\delta_{y_0}-\delta_{x_1})-(\delta_{y_0}-\delta_{x_0})=\delta_{x_0}-\delta_{x_1}$$
which has $\ell^1$-norm equal to 2 for all $x_0\neq x_1$. Hence $DsI_Q\phi=I_QDs\phi\neq 0$ and $DsI_W\phi=I_WDs\phi\neq 0$, so neither $sI_Q\phi$ nor $sI_W\phi$ is asymptotically invariant.
\end{example}
\begin{defn}
Let $H_{QA}^*(X,\mathcal V)$ denote the cohomology of the complex $\mathcal E_{QA}^*(X,\mathcal V)$ and let $H_{WA}^*(X,\mathcal V)$ denote the cohomology of the complex $\mathcal E_{WA}^*(X,\mathcal V)$.
\end{defn}
We will refer to the inclusions of $\mathcal E_{QA}^q(X,\mathcal V)\hookrightarrow \mathcal E_Q^{0,q}(X,\mathcal V)$ and $\mathcal E_{WA}^q(X,\mathcal V)\hookrightarrow \mathcal E_W^{0,q}(X,\mathcal V)$ as \emph{augmentation maps}. By analogy with the horizontal case, we will denote these by $D$.
\begin{lemma}
\label{injective-augmentation}
The augmentation maps induce maps on cohomology $H_{QA}^q(X,\mathcal V)\rightarrow H_Q^{q}(X,\mathcal V)$ and $H_{WA}^q(X,\mathcal V)\rightarrow H_W^{q}(X,\mathcal V)$, which are isomorphisms for $q=0$, and are injective for $q=1$.
\end{lemma}
\begin{proof}
Let $\phi$ be a cocycle in $\mathcal E_{QA}^q(X,\mathcal V)$. Then $D\phi=0$ since $\phi \in \mathcal E_{QA}^q(X,\mathcal V)$, and $d\phi=0$ since $\phi$ is a cocycle. Hence including $\mathcal E_{QA}^q(X,\mathcal V)$ into $\mathcal E_Q^{0,q}(X,\mathcal V)$, the element $\phi$ is a cocycle in the totalisation of the bicomplex, yielding a map $H_{QA}^q(X,\mathcal V)\to H_Q^{q}(X,\mathcal V)$. In degree 0 there are no coboundaries, and in both complexes the condition that $\phi$ is a cocycle is that $D\phi=0$ and $d\phi=0$, whence the map is an isomorphism. In degree 1, if $\phi\in \mathcal E_{QA}^{1}(X,\mathcal V)$ is a coboundary in the totalisation of the bicomplex then there is an element $\psi$ of $\mathcal E_Q^{0,0}(X,\mathcal V)$ such that $(D\oplus d)\psi$ is $(0\oplus \phi)$ in $\mathcal E_Q^{1,0}(X,\mathcal V)\oplus \mathcal E_Q^{0,1}(X,\mathcal V)$. That is, $D\psi=0$, so $\psi$ is an element of $\mathcal E_{QA}^{0}(X,\mathcal V)$, and $d\psi=\phi$. Hence $\phi$ is also a coboundary in $\mathcal E_{QA}^1(X,\mathcal V)$, and the inclusion of $\mathcal E_{QA}^1(X,\mathcal V)$ into $\mathcal E_Q^{0,1}(X,\mathcal V)$ gives an injection on cohomology.
The proof for $\mathcal E_W$ is identical.
\end{proof}
Now we restrict to the case where $G$ is trivial, and $\mathcal V$ is $\ell^1_0(X)$, equipped with the $\ell^1$ norm and the usual support function. Consider the Johnson elements $\mathcal J^{0,1}(x,(y_0,y_1))=\delta_{y_1}-\delta_{y_0}$ in $\mathcal E^{0,1}(X,\mathcal V)$ and $\mathcal J^{1,0}((x_0,x_1),(y))=\delta_{x_1}-\delta_{x_0}$ in $\mathcal E^{1,0}(X,\mathcal V)$. Since $D\mathcal J^{0,1}=0$ and $d\mathcal J^{0,1}=0$ we deduce that $I_Q\mathcal J^{0,1}$, $I_W\mathcal J^{0,1}$ give classes in $H_{QA}^1(X,\mathcal V)$ and $H_{WA}^1(X,\mathcal V)$, which we will denote $[\mathcal J_Q^{0,1}]$ and $[\mathcal J_W^{0,1}]$.
Applying the augmentation map to $[\mathcal J_Q^{0,1}]$ we obtain an element of $H_Q^1(X,\mathcal V)$. We will verify that this is cohomologous to $D[\mathbf 1_Q]$, hence the vanishing of $[\mathcal J_Q^{0,1}]$ is equivalent to property A. Similarly $[\mathcal J_W^{0,1}]$ is cohomologous to $D[\mathbf 1_W]$, and the vanishing of $[\mathcal J_W^{0,1}]$ is equivalent to property A.
\begin{thm}\label{PropA} The augmentation map for $H_Q$ takes $[\mathcal J_Q^{0,1}]$ in $H_{QA}^1(X, \mathcal \ell^1_0(X))$ to $D[\mathbf 1_Q]$ in $H_Q^1(X, \mathcal \ell^1_0(X))$, and the augmentation map for $H_W$ takes the class $[\mathcal J_W^{0,1}]$ in $H_{WA}^1(X, \mathcal \ell^1_0(X))$ to the class $D[\mathbf 1_W]$ in $H_W^1(X, \mathcal \ell^1_0(X))$. Hence the following are equivalent:
\begin{enumerate}
\item $X$ has property $A$.
\item $[\mathcal J_Q^{0,1}]=0$ in $H_{QA}^1(X, \mathcal \ell^1_0(X))$.
\item $[\mathcal J_W^{0,1}]=0$ in $H_{WA}^1(X, \mathcal \ell^1_0(X))$.
\end{enumerate}
\end{thm}
\begin{proof}
The Johnson classes $D[\mathbf 1_Q],D[\mathbf 1_W]$ are given by respectively $I_Q[\mathcal J^{1,0}]$ and $I_W[\mathcal J^{1,0}]$. We will show that $\mathcal J^{1,0}$ is cohomologous to $\mathcal J^{0,1}$ in the totalisation of $\mathcal E^{*,*}(X, \mathcal \ell^1_0(X))$. From this it is immediate that we have $[\mathcal J_Q^{0,1}]=[\mathcal J_Q^{1,0}]=D[\mathbf 1_Q]$ in $H_Q^1(X, \mathcal \ell^1_0(X))$ and $[\mathcal J_W^{0,1}]=[\mathcal J_W^{1,0}]=D[\mathbf 1_W]$ in $H_W^1(X, \mathcal \ell^1_0(X))$.
Let $\phi(x,(y))=\delta_y-\delta_x$. We note that this lies in $\mathcal E^{0,0}(X,\ell^1_0(X))$. Computing the coboundaries we have $D\phi((x_0,x_1),(y))=-\delta_{x_1}+\delta_{x_0}=-\mathcal J^{1,0}((x_0,x_1),(y))$, while $d\phi(x,(y_0,y_1))=\delta_{y_1}-\delta_{y_0}=\mathcal J^{0,1}(x,(y_0,y_1))$. Hence $(D\oplus d)\phi=-\mathcal J^{1,0}\oplus \mathcal J^{0,1}$, whence $[\mathcal J^{1,0}]=[\mathcal J^{0,1}]$ in $H_\E^1(X, \mathcal \ell^1_0(X))$.
By Lemma \ref{injective-augmentation}, $[\mathcal J_Q^{0,1}]$ is zero in $H_{QA}^1(X, \mathcal \ell^1_0(X))$ if and only if it is zero in $H_Q^1(X, \mathcal \ell^1_0(X))$, and we have seen that the latter is $D[\mathbf 1_Q]$. By Theorem \ref{D[1]=0}, this vanishes if and only if $X$ has property A.
Similarly $[\mathcal J_W^{0,1}]$ is zero in $H_{WA}^1(X, \mathcal \ell^1_0(X))$ if and only if it is zero in $H_W^1(X, \mathcal \ell^1_0(X))$, and $[\mathcal J_W^{0,1}]=D[\mathbf 1_W]=0$ in $H_W^1(X, \mathcal \ell^1_0(X))$ if and only if $X$ has property A.
\end{proof}
\section{Vanishing theorems}
We have seen that the map $s$ does not split the coboundary map $d$ in the complexes $\mathcal E_{QA}^*,\mathcal E_{WA}^*$; however, if $X$ has property A then we can use the probability functions given by the definition to asymptotically average the elements $s\phi$. Having done so we will obtain an actual splitting for the cochain complex, demonstrating the vanishing of the cohomology.
We will make use of the following convolution operator.
\begin{defn}
For $f\in\mathcal E^{p,-1}(X,\ell^1(X))$ and $\theta\in \mathcal E^{0,q}(X,V)$, define $f*\theta$ by
$$(f*\theta)(\mathbf{x},\mathbf y) = \sum_{z} f(\mathbf x)(z)\theta(z,\mathbf{y}).$$
\end{defn}
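As a basic example, take $f=\delta$, the element of $\mathcal E^{0,-1}(X,\ell^1(X))$ given by $x\mapsto\delta_x$. Then
$$(\delta*\theta)(x,\mathbf y)=\sum_z \delta_x(z)\,\theta(z,\mathbf y)=\theta(x,\mathbf y),$$
so convolution with $\delta$ is the identity map on $\mathcal E^{0,q}(X,V)$; this observation will be used again at the end of the proof of Theorem \ref{TrivW}.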
We make the following estimate:
$$
\|f*\theta\|_R\leq \sup\limits_{\mathbf{x} \in \Delta^{p+1}_R, \mathbf{y}\in X^{q+1}}\sum\limits_{z\in X}|f(\mathbf{x})(z)|\|\theta(z,\mathbf{y})\|_V\leq \sup\limits_{\mathbf{x}\in \Delta_R^{p+1}}\sum\limits_z|f(\mathbf x)(z)|\,\norm{\theta} = \norm{f}_R\norm{\theta}.
$$
We remark that as $\theta$ lies in the bottom row of the bicomplex, its $R$-norm does not in fact depend on $R$, hence we suppress it from the notation.
This estimate shows that for each $f$ the map $\theta\mapsto f*\theta$ is continuous, and for each $\theta$ the map $f\mapsto f*\theta$ is continuous.
We note that $D(f*\phi)(\mathbf{x},\mathbf{y})=\sum\limits_{z}\sum\limits_{i}(-1)^i f((x_0,\dots,\widehat{x_i},\dots,x_p))(z)\phi(z,\mathbf y)=(Df)*\phi(\mathbf{x},\mathbf{y})$, by exchanging the order of summation.
Similarly $d(f*\phi)(\mathbf{x},\mathbf{y})=(f* d\phi)(\mathbf{x},\mathbf{y})$.
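To spell out the second identity (with the simplicial formula for $d$ used in the computations above): for $\mathbf y\in X^{q+2}$,
\begin{align*}
d(f*\phi)(\mathbf x,\mathbf y)&=\sum_{j=0}^{q+1}(-1)^j (f*\phi)(\mathbf x,(y_0,\dots,\widehat{y_j},\dots,y_{q+1}))\\
&=\sum_z f(\mathbf x)(z)\sum_{j=0}^{q+1}(-1)^j \phi(z,(y_0,\dots,\widehat{y_j},\dots,y_{q+1}))=(f*d\phi)(\mathbf x,\mathbf y).
\end{align*}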
\medskip
The convolution extends in an obvious way to the quotient completion. For $f\in \mathcal E_Q^{p,-1}(X,\ell^1(X))$, $\phi\in \mathcal E_Q^{0,q}(X,V)$ we define $f*\phi\in\mathcal E_Q^{p,q}(X,V)$ by $(f*\phi)_n = f_n*\phi_n$. We note that if either of the (bounded) sequences $f_n,\phi_n$ tends to 0 as $n\to\infty$, then $(f*\phi)_n$ tends to 0, by the above norm estimate. Hence the convolution is a well defined map
$$\mathcal E_Q^{p,-1}(X,\ell^1(X)) \times \mathcal E_Q^{0,q}(X,V)\to \mathcal E_Q^{p,q}(X,V),$$
i.e.\ as an element of $\mathcal E_Q^{p,q}(X,V)$, the convolution $f*\phi$ does not depend on the choice of sequences representing $f,\phi$.
Since the convolution is defined term-by-term in $n$, the identities $D(f*\phi)=(Df)*\phi$ and $d(f*\phi)=f*d\phi$ carry over to the quotient completion.
We recall that by Lemma \ref{amazing}, property A is equivalent to the existence of an element $f$ of $\mathcal E_Q^{0,-1}(X,\ell^1(X))$ with $Df=0$ and $\pi_*(f)=I_Q\mathbf 1$. The idea of the proof of the following theorem is that convolving with $f$ allows us to average the splitting $s\phi$, to get an asymptotically invariant element.
\begin{thm}\label{Triv} If $X$ is a metric space satisfying Yu's property A, then for every $q\geq 1$, $H_{QA}^q(X, \mathcal V)=0$ for every $X$-module $\mathcal V$.
\end{thm}
\begin{proof}
Let $\phi\in \mathcal E_{QA}^q(X,\mathcal V)$ with $q\geq 1$. The element $\phi$ is represented by a sequence $\phi_n$ in $\mathcal E^{0,q}(X,\mathcal V)$ and $s\phi$ is represented by the sequence
\[
s\phi_n(x,(y_0, \ldots, y_{q-1}))= \phi_n(x,(x,y_0, \ldots, y_{q-1})).
\]
Since $D\phi=0$, the sequence $D\phi_n$ tends to zero, that is for all $R>0$, $\|D\phi_n\|_R\to 0$ as $n\to\infty$. By a diagonal argument, if $S_n$ is a sequence tending to infinity sufficiently slowly, then $\|D\phi_n\|_{S_n}\to 0$ as $n\to\infty$. We choose a sequence $S_n$ with this property.
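In detail (one standard diagonal argument): since the $R$-norms are non-decreasing in $R$, we may choose $N_1<N_2<\cdots$ such that $\|D\phi_n\|_k\leq 1/k$ for all $n\geq N_k$, and set $S_n=\max\{k \mid N_k\leq n\}$ for $n\geq N_1$ (and $S_n=1$ for $n<N_1$). Then $S_n\to\infty$ and $\|D\phi_n\|_{S_n}\leq 1/S_n\to 0$.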
Take an element $f$ in $\mathcal E_Q^{0,-1}(X,\ell^1(X))$ with $Df=0$ and $\pi_*(f)=I_Q\mathbf 1$, and let $f_n$ be a sequence representing $f$. For simplicity we assume that $\pi(f_n(x))=1$ for all $x,n$; if $f_n'$ is a sequence representing $f$, then $f_n'(x)+(1-\pi(f_n(x)))\delta_x$ also represents $f$, and has sum 1 as required.
By repeating the terms of the sequence $f_n$ we can arrange that $\supp(f_n(x))\subseteq B_{S_n}(x)$ for all $x,n$. Note that our choice of $f$ therefore depends on $S_n$ and hence on $\phi$.
As a remark in passing, we note that taking such a `supersequence' of $f_n$ corresponds in some sense to taking a subsequence of $\phi_n$. If we were working in the classical completion $E_{\mathrm{cs}}/E_0$, then the subsequence would represent the same element of $E_{\mathrm{cs}}/E_0$, however for $\mathcal E_Q$ this need not be true.
For each $q'$ we now define $s_f:\mathcal E_Q^{0,q'}(X,V)\to \mathcal E_Q^{0,q'-1}(X,V)$ by $s_f\psi=f*s\psi$. We first note that for any $\psi$ the element $s_f\psi$ is asymptotically invariant. This follows from asymptotic invariance of $f$, since $Ds_f\psi=D(f*s\psi)=(Df)*s\psi=0$. Hence in fact we have a map $s_f:\mathcal E_Q^{0,q'}(X,V)\to \mathcal E_{QA}^{q'-1}(X,V)$ which restricts to the asymptotically invariant complex.
We claim that for our given $\phi$ we have $(ds_f+s_fd)\phi=\phi$. We have $ds_f\phi=d(f*s\phi)=f*ds\phi$, while $s_fd\phi=f*sd\phi$ by definition. Hence $(ds_f+s_fd)\phi=f*(ds+sd)\phi=f*\phi$ since $ds+sd=1$. It thus remains to show that $f*\phi=\phi$. Notice that since $\sum\limits_{z\in X}f_n(x)(z)=1$ we have $\phi_n(x,\mathbf y)=\sum\limits_{z\in X}f_n(x)(z)\phi_n(x, \mathbf y)$, so we have
\begin{align*}
(f_n*\phi_n-\phi_n)(x,\mathbf y)&=\sum\limits_{z\in X}f_n(x)(z)(\phi_n(z,\mathbf y)-\phi_n(x,\mathbf y))\\
&=\sum\limits_{z\in X}f_n(x)(z)D\phi_n((x,z),\mathbf y).\\
\end{align*}
Taking norms we have $\norm{f_n*\phi_n-\phi_n}\leq \norm{f_n}\norm{D\phi_n}_{S_n}$, since if $d(x,z)>S_n$ then $f_n(x)(z)$ vanishes. We know that $\norm{D\phi_n}_{S_n}\to 0$ as $n\to\infty$, hence we conclude that $f*\phi-\phi=0$ in $\mathcal E_{QA}^q(X,\mathcal V)$.
We have shown that for every element $\phi \in \mathcal E_{QA}^q(X,\mathcal V)$ with $q\geq 1$, we can construct maps $s_f:\mathcal E_{QA}^{q'}(X,\mathcal V)\to \mathcal E_{QA}^{q'-1}(X,\mathcal V)$ such that $(ds_f+s_fd)\phi=\phi$. (We remark that the choice of $f$, and hence the map $s_f$ depends on the element $\phi$ in question.) It follows that if $\phi$ is a cocycle then $\phi=(ds_f+s_fd)\phi=ds_f\phi$, so every cocycle is a coboundary. Thus we deduce that $H_{QA}^q(X, \mathcal V)=0$ for $q\geq 1$.
\end{proof}
Combining this with Theorem \ref{PropA} we obtain the following.
\begin{thm}\label{MainTheorem2}
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $H_{QA}^q(X, \mathcal V)= 0$ for all $q\geq 1$ and all modules $\mathcal V$ over $X$.
\item $[\mathcal J_Q^{0,1}]=0$ in $H_{QA}^1(X, \mathcal \ell^1_0(X))$.
\item $X$ has property $A$.
\end{enumerate}
\end{thm}
\bigskip
We will now prove a corresponding result for the weak-* completion. The role of $f$ in the above argument will be replaced by an asymptotically invariant mean $\mu$ in $\mathcal E_W^{0,-1}(X,\ell^1(X))$.
We begin by extending the convolutions to the weak-* completions. First we define $f*\phi$ for $f\in\mathcal E^{p,-1}(X,\ell^1(X))$ and $\phi\in \mathcal E_W^{0,q}(X,V)$. This is defined via its pairing with an element $\alpha$ of $\mathcal E^{p,q}(X,V)^*$:
$$\langle f*\phi,\alpha \rangle=\langle \phi,\alpha_f \rangle, \text{ where } \langle \alpha_f,\theta\rangle = \langle \alpha,f*\theta\rangle, \text { for all } \theta \in \mathcal E^{0,q}(X,V).$$
In other words the operator $\phi\mapsto f*\phi$ on $\mathcal E_W^{0,q}(X,V)$ is the double dual of the operator $\theta\mapsto f*\theta$ on $\mathcal E^{0,q}(X,V)$.
We have $|\langle \alpha_f,\theta\rangle| \leq \norm{\alpha}^R\norm{f*\theta}_R \leq\norm{\alpha}^R\norm{f}_R\norm{\theta}$ for some $R$ (depending on $\alpha$). Hence for each $\alpha$ there exists $R$ such that
$$|\langle f*\phi,\alpha\rangle|\leq \norm{\phi}_R\norm{\alpha}^R\norm{f}_R$$
so $f*\phi$ is a continuous linear functional.
We now want to further extend the convolution to define $\eta*\phi$ in $\mathcal E_W^{p,q}(X,V)$, for $\eta\in\mathcal E_W^{p,-1}(X,\ell^1(X))$ and $\phi\in \mathcal E_W^{0,q}(X,V)$. The definition is motivated by the requirement that $(I_Wf)*\phi = f*\phi$. Hence for $\alpha$ in $\mathcal E^{p,q}(X,V)^*$ we will require
$$\langle (I_Wf)*\phi,\alpha \rangle = \langle f*\phi,\alpha\rangle.$$
For $\phi\in \mathcal E_W^{0,q}(X,V)$, $\alpha \in \mathcal E^{p,q}(X,V)^*$, define $\sigma_{\phi,\alpha} \in \mathcal E^{p,-1}(X,\ell^1(X))^*$ by
$$\langle \sigma_{\phi,\alpha}, f\rangle = \langle f*\phi,\alpha\rangle=\langle \phi,\alpha_f\rangle.$$
The above inequalities ensure that $\sigma_{\phi,\alpha}$ is a continuous linear functional.
We observe that $f*\phi$ is determined by the property that $\langle f*\phi,\alpha\rangle=\langle \sigma_{\phi,\alpha},f\rangle=\langle I_Wf, \sigma_{\phi,\alpha}\rangle$. We use this to give the general definition: For $\eta\in\mathcal E_W^{p,-1}(X,\ell^1(X))$ and $\phi\in \mathcal E_W^{0,q}(X,V)$, we define $\eta*\phi$ in $\mathcal E_W^{p,q}(X,V)$ by
$$\langle \eta*\phi, \alpha\rangle = \langle \eta, \sigma_{\phi,\alpha}\rangle$$
for all $\alpha$ in $\mathcal E^{p,q}(X,V)^*$.
\begin{lemma}\label{identities}
For $\eta\in \mathcal E_W^{p,-1}(X,\ell^1(X))$ and $\phi\in \mathcal E_W^{0,q}(X,\mathcal V)$ we have $D(\eta*\phi)=(D\eta)*\phi$ and $d(\eta*\phi)=\eta*d\phi$.
\end{lemma}
\begin{proof}
The elements $D(\eta*\phi),d(\eta*\phi)$ are defined by their pairings with respectively $\alpha$ in $\mathcal E^{p+1,q}(X,\mathcal V)^*$ and $\beta$ in $\mathcal E^{p,q+1}(X,\mathcal V)^*$. These are given by pairing $\eta$ with respectively $\sigma_{\phi,D^*\alpha}$ and $\sigma_{\phi,d^*\beta}$.
Since for $f\in \mathcal E^{p,-1}(X,\ell^1(X))$ we have $\langle \sigma_{\phi,D^*\alpha},f\rangle=\langle \phi, (D^*\alpha)_f\rangle$ and $\langle \sigma_{\phi,d^*\beta},f\rangle=\langle \phi, (d^*\beta)_f\rangle$, we must determine $(D^*\alpha)_f$ and $(d^*\beta)_f$. Pairing these with an element $\theta$ in $\mathcal E^{0,q}(X,\mathcal V)$ we have
$$\langle (D^*\alpha)_f,\theta\rangle = \langle\alpha,D(f*\theta)\rangle = \langle\alpha,(Df)*\theta)\rangle, \text{ and } \langle (d^*\beta)_f,\theta\rangle = \langle\beta,d(f*\theta)\rangle = \langle\beta,f*d\theta\rangle.$$
Hence $(D^*\alpha)_f=\alpha_{Df}$ and $(d^*\beta)_f=d^*(\beta_f)$, so we have $\sigma_{\phi,D^*\alpha}=D^*\sigma_{\phi,\alpha}$ and $\sigma_{\phi,d^*\beta}=\sigma_{d\phi,\beta}$. It follows that $D(\eta*\phi)=(D\eta)*\phi$ and $d(\eta*\phi)=\eta*d\phi$ as required.
\end{proof}
Before proceeding with the proof of the vanishing theorem we first establish the following result.
\begin{lemma}\label{TrivialPairing}
If $\eta \in \mathcal E_W^{0,-1}(X,\ell^1(X))$ is in the image of $\mathcal E_W^{0,-1}(X,\ell^1_0(X))$, and $\phi\in \mathcal E_W^{0, q}(X, V)$ with $D\phi=0$ then $\eta*\phi=0$.
\end{lemma}
\begin{proof}
The statement that $\eta*\phi=0$, amounts to the assertion that $\langle \eta, \sigma_{\phi,\alpha}\rangle=0$ for all $\alpha$ in $\mathcal E^{0,q}(X,V)^*$. Since the image of $I_W$ is dense in $\mathcal E_W^{0,-1}(X,\ell^1_0(X))$ in the \text{weak-*}\ topology, it suffices to show that $\langle \sigma_{\phi,\alpha}, f\rangle = 0$ for all $f\in \mathcal E^{0,-1}(X,\ell^1_0(X))$. We note that
$$\langle \sigma_{\phi,\alpha}, f\rangle = \langle f*\phi,\alpha\rangle = \langle \phi, \alpha_f \rangle.$$
We will show that $\alpha_f$ is a `boundary', that is, $\alpha_f$ is in the range of $D^*$. As $D\phi=0$ it will follow that the pairing is trivial.
We define a boundary map $\partial:\ell^1(X\times X)\to \ell^1_0(X)$ by $(\partial H)(z_0)=\sum\limits_{z_1\in X} H(z_1,z_0)-H(z_0,z_1)$. Equivalently, we can write $\partial H=\sum\limits_{z_0,z_1\in X} H(z_0,z_1)(\delta_{z_1}-\delta_{z_0})$.
We note that $\partial$ is surjective: For $h\in \ell^1_0(X)$ and $x$ in $X$, let $H(z_0,z_1)=h(z_1)$ if $z_0=x,z_1\neq x$ and let $H(z_0,z_1)=0$ otherwise. Then $\partial H=h$. We note that $\norm{H}\leq \norm{h}$, and $\supp(H)\subseteq \{x\}\times \supp(h)$. For each $x$, let $F(x)$ be the lift of $f(x)$ constructed in this way, so that $\norm{F(x)}\leq \norm{f(x)}$ for all $x$, and as $f$ is of controlled supports there exists $R$ such that if $F(x)(z_0,z_1)\neq 0$ then $z_0,z_1\in B_R(x)$.
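As a quick check that $\partial H=h$ (using $\sum_{z\in X}h(z)=0$): for $w\neq x$ the first sum below contributes only the term $H(x,w)=h(w)$ and the second sum vanishes, while for $w=x$ the first sum vanishes and the second contributes $\sum_{z_1\neq x}h(z_1)=-h(x)$; thus
$$(\partial H)(w)=\sum_{z_1\in X}\big(H(z_1,w)-H(w,z_1)\big)=h(w)\quad\text{for all } w\in X.$$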
Writing $(\partial F)(x)=\partial(F(x))$, for $\theta\in \mathcal E^{0, q}(X, V)$, we have
$$\langle \alpha_f,\theta \rangle = \langle \alpha, f*\theta\rangle = \langle \alpha, (\partial F)*\theta\rangle.$$
Now compute $(\partial F)*\theta$. We have
\begin{align*}
((\partial F)*\theta)(x,\mathbf{y})&=\sum_z \partial F(x)(z)\theta(z,\mathbf{y})
=\sum_{z,z_0,z_1} F(x)(z_0,z_1)(\delta_{z_1}(z)-\delta_{z_0}(z))\theta(z,\mathbf{y})\\
&=\sum_{z_0,z_1}F(x)(z_0,z_1)D\theta((z_0,z_1),\mathbf{y})
\end{align*}
We define $T_F:\mathcal E^{1, q}(X, V)\to \mathcal E^{0, q}(X, V)$ by $(T_F\zeta)(x,\mathbf y)= \sum\limits_{z_0,z_1}F(x)(z_0,z_1)\zeta((z_0,z_1),\mathbf{y})$. As $F(x)(z_0,z_1)\neq 0$ implies $z_0,z_1$ lie in the ball $B_R(x)$, we have the estimate
$$
\|T_F\zeta\|\leq \sup\limits_{x\in X, \mathbf{y}\in X^{q+1}}\sum\limits_{z_0,z_1\in X}|F(x)(z_0,z_1)|\|\zeta((z_0,z_1),\mathbf{y})\|_V\leq \sup_{x\in X}\norm{F(x)}\norm{\zeta}_R\leq \norm{f}\norm{\zeta}_R.
$$
Hence $T_F$ is continuous.
We conclude that
$$\langle \alpha_f,\theta\rangle = \langle \alpha, (\partial F)*\theta\rangle = \langle \alpha, T_FD\theta\rangle=\langle D^*T_F^*\alpha,\theta\rangle$$
for all $\theta$, hence $\alpha_f=D^*T_F^*\alpha$, so that
$$\langle \phi, \alpha_f\rangle=\langle \phi, D^*T_F^*\alpha\rangle=\langle D\phi, T_F^*\alpha\rangle=0.$$
This completes the proof.
\end{proof}
We now prove the vanishing theorem.
\begin{thm}\label{TrivW} If $X$ is a metric space satisfying Yu's property A, then for every $q\geq 1$, $H_{WA}^q(X, \mathcal V)=0$ for every $X$-module $\mathcal V$.
Specifically, if $\mu$ is an asymptotically invariant mean then $s_\mu\phi=\mu*s\phi$ defines a splitting of the asymptotically invariant complex.
\end{thm}
\begin{proof}
By Theorem \ref{D[1]=0}, property A guarantees the existence of an asymptotically invariant mean $\mu$, that is, an element $\mu$ of $\mathcal E_{WA}^{-1}(X,\ell^1(X))$ (so $D\mu=0$) such that $\pi_*(\mu)=\mathbf 1_W$.
We define $s_\mu:\mathcal E_W^{0,q}(X,\mathcal V)\to\mathcal E_W^{0,q-1}(X,\mathcal V)$ by $s_\mu\phi=\mu*s\phi$. By Lemma \ref{identities} we have $Ds_\mu\phi=D(\mu*s\phi)=(D\mu)*s\phi$. Since $\mu$ is asymptotically invariant, $D\mu=0$, so $s_\mu\phi$ is also asymptotically invariant. Hence $s_\mu$ restricts to a map $s_\mu:\mathcal E_{WA}^{q}(X,\mathcal V)\to\mathcal E_{WA}^{q-1}(X,\mathcal V)$.
We must now verify that $s_\mu$ is a splitting. By Lemma \ref{identities}, and using the fact that $ds+sd=1$ we have
$$(ds_\mu+s_\mu d)\phi = d(\mu*s\phi)+\mu*sd\phi=\mu*ds\phi+\mu*sd\phi=\mu*\phi.$$
It thus remains to show that $\mu*\phi=\phi$.
Let $\delta$ denote the map $X\to \ell^1(X)$, $x\mapsto \delta_x$. We have $\pi_*(I_W \delta)=\mathbf 1_W=\pi_*(\mu)$, so for $\eta=I_W\delta-\mu$ we have $\pi_*(\eta)=0$. Hence $\eta$ is in the image of $\mathcal E_W^{0,-1}(X,\ell^1_0(X))$. As $D\phi=0$, it follows from Lemma \ref{TrivialPairing} that $\eta*\phi=0$. Thus $\mu*\phi=(I_W\delta)*\phi=\delta*\phi$. It is easy to see that convolution with $\delta$ yields the identity map on $\mathcal E^{0,q}(X,\mathcal V)$, hence its double dual is again the identity map. Thus $\mu*\phi=\delta*\phi=\phi$ as required.
This completes the proof.
\end{proof}
Combining this with Theorem \ref{PropA} we obtain the following.
\begin{thm}\label{MainTheorem3}
Let $X$ be a discrete metric space. Then the following are equivalent:
\begin{enumerate}
\item $H_{WA}^q(X, \mathcal V)= 0$ for all $q\geq 1$ and all modules $\mathcal V$ over $X$.
\item $[\mathcal J_W^{0,1}]=0$ in $H_{WA}^1(X, \mathcal \ell^1_0(X))$.
\item $X$ has property $A$.
\end{enumerate}
\end{thm}
\skipit{
\section{Amenability}
\skipit{EDIT THIS TEXT-By invoking equivariance throughout, in definitions and assertions, where this makes sense, we obtain a variation on Johnson's theorem as follows, concerning the characterisation of amenability in terms of the vanishing of bounded cohomology. Where Johnson's theorem proceeds in large part by demonstrating that vanishing of a certain cocycle yields the existence of an invariant mean on the group, vanishing in our theory yields a F\o lner sequence.}
Now recall that given a group $G$ acting by isometries on a metric space $X$ an $X$-module $\mathcal V=(V, \|\cdot\|, \supp)$ is said to be $G$-equivariant if it is equipped with an isometric action of $G$ by bounded linear maps, such that $\supp(gv)=g\supp(v)$ for all $g\in G$ and all $v\in V$. The action of $G$ extends in the usual way to a diagonal action on the cochain spaces in the bicomplex $E^{p,q}(X, \mathcal V)$ defined by $(g\cdot\phi)(\mathbf x, \mathbf y, n)=g(\phi(g^{-1}(\mathbf x), g^{-1}(\mathbf y), n))$. The action commutes with the differentials and preserves the subcomplex $E^{p,q}_0(X, \mathcal V)$. The $G$-invariant part, which comprises the $G$-equivariant cochains is preserved by the differentials in both bicomplexes. We now take the quotient complex in which we factor out the $G$-invariant asymptotically vanishing cochains in the $G$-invariant cochains. As in the non-equivariant case the rows of the bicomplex are acyclic since the splitting given in that argument is $G$-equivariant. The spaces ${E^{p,-1}(X, \mathcal V)^G\over E_0^{p,-1}(X, \mathcal V)^G}$ are the augmentation of this complex and equipping these with the restriction of $D$ they form a cochain complex whose cohomology is isomorphic to the cohomology of the totalisation.
As in the non-equivariant case we also consider the augmentation of the columns given by the kernels of
\[
D:{E^{0,q}(X, \mathcal V)^G\over E_0^{0,q}(X, \mathcal V)^G}\rightarrow {E^{1,q}(X, \mathcal V)^G\over E_0^{1,q}(X, \mathcal V)^G}.
\]
The horizontal differential $d$ restricts to these kernels hence we obtain a new cochain complex. The cohomology groups are denoted $H^q_B(X, V)$.
\begin{remark}
If $X$ is equipped with two coarsely equivalent metrics and $G$ acts by isometries on both, then for any $G$-equivariant module $\mathcal V$ over $X$ the equivariant bicomplexes we obtain from each metric are identical. This follows from the corresponding remark in the non-equivariant case. In particular, taking $X=G$ to be a countable discrete group acting on itself by left multiplication and equipped with a proper left invariant metric, we observe, invoking uniqueness of the metric up to coarse equivalence, that the cohomology does not depend on the metric, and we denote it by $H^q_B(G, V)$.
\end{remark}
\begin{thm}\label{MainTheorem2}
Let $G$ be a discrete group. The following are equivalent:
\begin{enumerate}
\item $H^q_B(G, V)= 0$ for all $q\geq 1$ and all $G$-equivariant modules $V$ over $G$.
\item $H^1_B(G, \ell^1_0G)= 0$.
\item The class $[J]=[\delta_{y_1}-\delta_{y_0}]\in H^1_B(G, \ell^1_0G)$ is zero.
\item $G$ is amenable.
\end{enumerate}
\end{thm}
\begin{proof}
$(1)\implies (2)\implies (3)$ is trivial.
$(3)\implies (4)$: As in the proof of Theorem ?? part ???, the cohomology $H^1_B(G, \ell^1_0(G))$ maps to the cohomology of the totalisation of the bicomplex, which we identify with the cohomology of the complex $\left ({E^{p,-1}(G, \ell^1_0(G))^G\over E^{p,-1}_0(G, \ell^1_0(G))^G}, D\right)$.
The image of $[J]$ is cohomologous to $D[\mathbf 1]$. The latter vanishes if and only if $\mathbf 1$ is the image of a cocycle in ${E^{0,-1}(G, \ell^1(G))^G\over E^{0,-1}_0(G, \ell^1(G))^G}$. Such an element is represented by a $G$-equivariant function $\phi:G\times \mathbb{N}\rightarrow \ell^1(G)$ such that for any $R\geq 0$ the norm $\|\phi(g,n)-\phi(g',n)\|_{\ell^1}\to 0$ as $n\to \infty$ uniformly on those $g,g'$ with $d(g,g')\leq R$. Note that for all $g\in G$ we have $\|g\phi(e,n)-\phi(e,n)\|_{\ell^1}=\|\phi(g,n)-\phi(e,n)\|_{\ell^1}$ which tends to $0$ as $n\to \infty$ so setting $f^n:={|\phi(e,n)|\over\|\phi(e,n)\|_{\ell^1}}$ we obtain a Reiter sequence for the group.
$(4)\implies(1)$. We follow the same argument as in Theorem \ref{Triv}. Given a Reiter sequence $f^n$, let $f^n_g=gf^n$. Using this equivariant family to construct a splitting of the differential $d$ as in the proof of Theorem \ref{Triv}, the splitting we produce is now equivariant. Hence $H^q_B(G, V)$ vanishes for all $q\geq 1$.
\end{proof}
}
\section{Introduction}
Privacy is becoming a paramount concern for machine learning and data analysis
tasks, which often operate on personal data. For just one example of
the tension between machine learning and data privacy, Netflix released an
anonymized dataset of user movie ratings for teams competing to develop an
improved recommendation mechanism. The competition was a great success (the
winning team improved on the existing recommendation system by more than
$10\%$), but the ad hoc anonymization was not as successful: \citet{NV08} were
later able to re-identify individuals in the dataset, leading to a
lawsuit and the cancellation of subsequent competitions.
{\em Differentially private query release} is an attempt to solve this problem.
Differential privacy is a strong formal privacy guarantee (that, among other
things, provably prevents re-identification attacks), and the problem of {\em
query release} is to release accurate answers to a set of statistical queries.
As observed early on by \citet{BDMN05}, performing private query release is
sufficient to simulate any learning algorithm in the {\em statistical query
model} of \citet{Kearns98}.
Since then, the query release problem has been extensively studied in the
differential privacy literature. While simple perturbation can
privately answer a small number of queries \citep{DMNS06}, more sophisticated
approaches can accurately answer nearly exponentially many queries in the size
of the private database \citep{BLR08,DNRRV09,DRV10,RR10,HR10,GRU12,HLM12}. A
natural approach, employed by many of these algorithms, is to
answer queries by generating \emph{synthetic data}: a safe version of the
dataset that approximates the real dataset on every statistical query of
interest.
Unfortunately, even the most efficient approaches for query release have a
per-query running time linear in the size of the \emph{data universe}---the
number of possible kinds of records. If each record is specified by a set of
attributes, the size of the data universe is exponential in the number of
attributes \citep{HR10}. Moreover, this running
time is necessary in the worst case \citep{Ullman13,UV11}.
This exponential runtime has hampered practical evaluation of query release
algorithms. One notable exception is
due to \citet{HLM12}, who perform a thorough experimental evaluation of one
such algorithm, which they called \mbox{{\sf MWEM}}\xspace (Multiplicative Weights Exponential
Mechanism). They find that \mbox{{\sf MWEM}}\xspace has quite good accuracy in practice and scales
to higher dimensional data than suggested by a theoretical (worst-case)
analysis. Nevertheless, running time remains a problem, and the approach does
not seem to scale to high dimensional data (with more than $30$ or so attributes
for general queries, and more when the queries have more structure\footnote{%
\citet{HLM12} are able to scale up to $1000$ features on synthetic data when the
features are partitioned into a number of small buckets, and the queries
depend on at most one feature per bucket.}).
The critical bottleneck is the size of the state maintained by the algorithm:
\mbox{{\sf MWEM}}\xspace, like many query release algorithms, needs to manipulate an object that has
size linear in the size of the data universe. This
quickly becomes impractical for records with even a modest number of attributes.
We present \mbox{{\sf DualQuery}}\xspace, an alternative algorithm which is {\em dual} to \mbox{{\sf MWEM}}\xspace in a sense
that we will make precise. Rather than manipulating an object of exponential
size, \mbox{{\sf DualQuery}}\xspace solves a concisely represented (but NP-hard) optimization
problem. Critically, the optimization step does not require a solution that is
private or exact, so it can be handled by existing, highly optimized
solvers. Except for this step, all parts of
our algorithm are efficient. As a result, \mbox{{\sf DualQuery}}\xspace requires (worst-case)
space and (in practice) time only linear in the number of {\em queries} of
interest, which is often significantly smaller than the size of the data
universe. Like existing algorithms for query release, \mbox{{\sf DualQuery}}\xspace has a provable
accuracy guarantee and satisfies the strong differential privacy guarantee. Both
\mbox{{\sf DualQuery}}\xspace and \mbox{{\sf MWEM}}\xspace generate \emph{synthetic data}: the output of both algorithms is a
data set from the same domain as the input data set, and can thus be used as the
original data set would have been used. It is important to note that
the output data set is guaranteed to be
similar to the input data set only with respect to the query class; the
synthetic data might not be similar to the
real data in other respects.
We evaluate \mbox{{\sf DualQuery}}\xspace on a variety of datasets by releasing {\em 3-way marginals}
(also known as {\em conjunctions} or {\em contingency tables}), demonstrating
that it solves the query release problem accurately and efficiently even when
the data includes hundreds of thousands of features. We know of no other
algorithm that can perform accurate, private query release for this class of
queries on real data with more than even $100$ features.
\subsection*{Related work}
Differentially private learning has been studied since \citet{BDMN05} showed how
to convert learning algorithms in the \emph{SQ model} of \citet{Kearns98} into
differentially private learning algorithms with similar accuracy guarantees.
Since then, private machine learning has become a very active field with both
foundational sample complexity results \citep{KLNRS08,CH11,BNS13,DJW13} and
numerous efficient algorithms for particular learning problems
\citep{CM08,CMS11,RBHT09,KST12,CSS12,TS13}.
In parallel, there has been a significant amount of work on privately releasing
synthetic data based on a true dataset while preserving the answers to large
numbers of statistical queries \citep{BLR08,DNRRV09,RR10,DRV10,HR10,GRU12}.
These results are extremely strong in an information theoretic sense: they
ensure the consistency of the synthetic data with respect to an exponentially
large family of statistics. But, all of these algorithms (including the notable
multiplicative weights algorithm of \citet{HR10}, which achieves the
theoretically optimal accuracy and runtime) have running time exponential in the
dimension of the data. Under standard cryptographic assumptions, this is
necessary in the worst case for mechanisms that answer arbitrary statistical
queries \citep{Ullman13}.
Nevertheless, there have been some experimental evaluations of these approaches
on real datasets. Most related to our work is the evaluation of the \mbox{{\sf MWEM}}\xspace
mechanism by \citet{HLM12}, which is based on the private multiplicative weights
mechanism \citep{HR10}. This algorithm is inefficient---it manipulates a
probability distribution over a set exponentially large in the dimension of the
data space---but with some heuristic optimizations, \citet{HLM12} were able to
implement the multiplicative weights algorithm on real datasets with up
to 77 attributes (and even more when the queries are restricted to take positive
values only on a small number of disjoint groups of features). However, it seems
difficult to scale this approach to higher dimensional data.
Another family of query release algorithms are based on the Matrix Mechanism
\citep{CHRMM10,LM12}. The runtime guarantees of the matrix mechanism are
similar to the approaches based on multiplicative weights---the algorithm
manipulates a ``matrix'' of queries with dimension exponential in the number of
features. \citet{CPSY12} evaluate an approach based on this family of
algorithms on low dimensional datasets, but scaling to high dimensional data
also seems challenging. A recent work by \citet{PrivBayes} proposes
a low-dimensional approximation for high-dimensional data distribution by
privately constructing Bayesian networks, and shows that such a
representation gives good accuracy on some real datasets.
Our algorithm is inspired by the view of the synthetic data generation problem
as a zero-sum game, first proposed by \citet{HRU13}. In this interpretation,
\citet{HLM12} solves the game by having a {\em data player} use a no-regret
learning algorithm, while the {\em query player} repeatedly best responds by
optimizing over queries. In contrast, our algorithm swaps the roles of the two
players: the query player now uses the no-regret learning algorithm, whereas the
data player now finds best responses by solving an optimization problem. This is
reminiscent of ``Boosting for queries,'' proposed by \citet{DRV10}; the
main difference is that our optimization problem is over single records
rather than sets of records. As a result, our optimization can be
handled non-privately.
There is also another theoretical approach to query release due to Nikolov,
Talwar, and Zhang (and later specialized to marginals by Dwork, Nikolov, and
Talwar) that shares a crucial property of the one we present here---namely that
the computationally difficult step does not need to be solved privately
\citep{NTZ13,DNT14}. The benefit of our approach is that it yields synthetic data
rather than just query answers, and that the number of calls to the optimization
oracle is smaller. However, the approach of \citet{NTZ13,DNT14} yields
theoretically optimal accuracy bounds (ours does not), and so that approach
certainly merits future empirical evaluation.
\section{Differential privacy background}
Differential privacy has become a standard algorithmic notion
for protecting the privacy of individual records in a statistical
database. It formalizes the requirement that the addition or removal
of a data record does not change the probability of any outcome of the
mechanism by much.
To begin, databases are multisets of elements from an abstract domain
$\cX$, representing the set of all possible data records. Two databases $D,
D'\subset \cX$ are {\em neighboring} if they differ in a single data element
($| D \triangle D'| \leq 1$).
\begin{definition}[\citet{DMNS06}] A mechanism
$M\colon \cX^n \rightarrow R$ satisfies $(\varepsilon, \delta)$-differential
privacy if for every $S\subseteq R$ and for all neighboring databases
$D, D'\in \cX^n$, the following holds:
\[
\Pr[M(D)\in S] \leq e^\varepsilon \Pr[M(D') \in S] + \delta
\]
If $\delta =0$ we say $M$ satisfies $\varepsilon$-differential privacy.
\end{definition}
\begin{definition}
The \emph{(global) sensitivity} of a query $q$ is its maximum difference
when evaluated on two neighboring databases:
\[
GS_q = \max_{D, D' \in \cX^n : |D\triangle D'| = 1} |q(D) - q(D')|.
\]
\end{definition}
In this paper, we consider the private release of information for the
class of linear queries, which have sensitivity $1/n$.
\begin{definition}
For a predicate $\varphi\colon \cX\rightarrow \{0,1\}$, the \emph{linear
query} $q_\varphi\colon \cX^n \rightarrow [0,1]$ is defined by
\[
q_\varphi(D) = \frac{\sum_{x\in D} \varphi(x)}{|D|} .
\]
We will often represent linear queries in a different form, as a vector $q \in \{
0, 1\}^{|\cX|}$ explicitly encoding the predicate $\varphi$:
\[
q(D) = \frac{\sum_{x \in D} q_x}{|D|} .
\]
\end{definition}
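To make the vector representation concrete, here is a minimal sketch (ours, not from the paper; the two-attribute universe and the counts are illustrative assumptions), evaluating a linear query from its explicit $0/1$ encoding with the database stored as a histogram over $\cX$:
\begin{verbatim}
# Illustrative sketch: a linear query as a 0/1 vector over a tiny
# data universe, evaluated on a database stored as a histogram.
import numpy as np

universe = [(a, b) for a in (0, 1) for b in (0, 1)]  # |X| = 4

# Predicate phi(x): both attributes equal 1; q is its 0/1 encoding.
q = np.array([1 if x == (1, 1) else 0 for x in universe])

# A database of n = 5 records, as counts per universe element.
D = np.array([2, 1, 1, 1])

def answer(q, D):
    # q(D) = (1/n) * sum_{x in D} phi(x), from the histogram.
    return q.dot(D) / D.sum()

print(answer(q, D))  # 0.2
\end{verbatim}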
We will use a fundamental tool for private data analysis: we can bound the
privacy cost of an algorithm as a function of the privacy costs of its
subcomponents.
\begin{lemma}[\citet{DRV10}] \label{composition}
Let $M_1,\ldots , M_k$ be such that each $M_i$ is $(\varepsilon_i,0)$-private with
$\varepsilon_i\leq \varepsilon'$. Then $M(D) = (M_1(D), \ldots , M_k(D))$ is
$(\varepsilon,0)$-private for
$ \varepsilon = \sum_{i = 1}^k \varepsilon_i,$
and $(\varepsilon, \delta)$-private for
\[
\varepsilon = \sqrt{2 \log(1 / \delta) k}\varepsilon' + k\varepsilon'(e^{\varepsilon'} - 1)
\]
for any $\delta \in (0, 1)$. The sequence of mechanisms can be chosen
adaptively, i.e., later mechanisms can take outputs from previous mechanisms as
input.
\end{lemma}
\section{The query release game} \label{sec:qr:game}
The analysis of our algorithm relies on the interpretation of query release as a two
player, zero-sum game~\citep{HRU13}. In the present section, we review this idea
and related tools.
\subsection*{Game definition}
Suppose we want to answer a set of queries $\cQ$. For each query $q \in \cQ$, we
can form the {\em negated query} $\bar{q}$, which takes values $\bar{q}(D) = 1 -
q(D)$ for every database $D$. %
Equivalently, for a linear query defined by a
predicate $\varphi$, the negated query is defined by the negation
$\neg \varphi$ of the predicate.
For the remainder, we will assume that $\cQ$ is closed under negation; if not,
we may add negated copies of each query to $\cQ$.
Let there be two players, whom we call the {\em data} player and {\em query}
player. The data player has action set equal to the data universe $\cX$, while
the query player has action set equal to the query class $\cQ$. Given a play $x
\in \cX$ and $q \in \cQ$, we let the payoff be
\begin{equation} \label{eq:payoff}
A(x, q) := q(D) - q(x),
\end{equation}
where $D$ is the true database. To define the zero-sum game, the data player
will try to minimize the payoff, while the query player will try to maximize the
payoff.
\subsection*{Equilibrium of the game}
Let $\Delta(\cX)$ and $\Delta(\cQ)$ be the set of probability distributions over
$\cX$ and $\cQ$. We consider how well each player can do if they randomize over
their actions, i.e., if they play from a probability distribution over their
actions. By von Neumann's minimax theorem,
\[
\min_{u \in \Delta(\cX)} \max_{w \in \Delta(\cQ)} A(u, w) =
\max_{w \in \Delta(\cQ)} \min_{u \in \Delta(\cX)} A(u, w),
\]
for any two player zero-sum game, where
\[
A(u, w) := \mathbb{E}_{x \sim u, q \sim w} A(x, q)
\]
is the expected payoff. The common value is called the {\em value of the game},
which we denote by $v_A$.
Intuitively, von Neumann's theorem states that there is no advantage in a
player going first: the minimizing player can always force payoff at most $v_A$,
while the maximizing player can always force payoff at least $v_A$.
This suggests that each player can play an optimal strategy, assuming best play
from the opponent---this is the notion of equilibrium strategies, which we now
define. We will soon interpret these strategies as solutions to the query
release problem.
\begin{definition}
Let $\alpha > 0$. Let $A$ be the payoffs for a two player, zero-sum game with
action sets $\cX, \cQ$. Then, a pair of strategies $u^* \in \Delta(\cX)$
and $w^* \in \Delta(\cQ)$ form an {\em $\alpha$-approximate mixed Nash
equilibrium} if
\[
A(u^*, w) \leq v_A + \alpha
\qquad \text{and} \qquad
A(u, w^*) \geq v_A - \alpha
\]
for all strategies $u \in \Delta(\cX)$ and $w \in \Delta(\cQ)$.
\end{definition}
If the true database $D$ is normalized to be a distribution $\hat{D}$ in
$\Delta(\cX)$, then $\hat{D}$ always has zero payoff:
\[
A(\hat{D}, w) = \mathbb{E}_{x \sim \hat{D}, q \sim w} [q(D) - q(x)] = 0.
\]
Hence, the value of the game $v_A$ is at most $0$. Also, for any
data strategy $u$, the payoff of query $q$ is the negated payoff of the negated
query $\bar{q}$:
\begin{align*}
A(u, \bar{q}) &= \mathbb{E}_{x \sim u} [\bar{q}(D) - \bar{q}(x)]
= \mathbb{E}_{x \sim u} [q(x) - q(D)],
\end{align*}
which is $-A(u, q)$. Thus, any query strategy that places equal weight on
$q$ and $\bar{q}$ has expected payoff zero, so $v_A$ is at least $0$. Hence,
$v_A = 0$.
Now, let $(u^*, w^*)$ be an $\alpha$-approximate equilibrium. Suppose that the
data player plays $u^*$, while the query player always plays query $q$. By the
equilibrium guarantee, we then have $A(u^*, q) \leq \alpha$, but the expected
payoff on the left is simply $q(D) - q(u^*)$. Likewise, if the query player
plays the negated query $\bar{q}$, then
\[
-q(D) + q(u^*) = A(u^*, \bar{q}) \leq \alpha,
\]
so $q(D) - q(u^*) \geq -\alpha$. Hence for every query $q \in \cQ$, we know
$|q(u^*) - q(D)| \leq \alpha$. This is precisely what we need for query release:
we just need to privately calculate an approximate equilibrium.
\subsection*{Solving the game}
To construct the approximate equilibrium, we will use the multiplicative weights update
algorithm (MW).\footnote{%
The MW algorithm has wide applications; it has been rediscovered in various
guises several times. More details can be found in the comprehensive survey
by \citet{AHK12}.}
This algorithm maintains a distribution over actions (initially uniform) over a
series of steps. At each step, the MW algorithm receives a (possibly adversarial)
loss for each action. Then, MW reweights the distribution to favor actions with
less loss.
The algorithm is presented in \Cref{alg:MW}.
\begin{algorithm}[h!]
\begin{algorithmic}
\STATE{Let $\eta > 0$ be given, let $\cA$ be the action space}
\STATE{Initialize $\tilde{A}^1$ uniform distribution on $\cA$}
\STATE{For $t = 1,2,\dots,T$:}
\INDSTATE[1]{Receive loss vector $\ell^t$}
\INDSTATE[1]{{\bf For each} $a \in \cA${\bf :}}
\INDSTATE[2]{Update $A^{t+1}_a = e^{- \eta \ell^t_a} \tilde{A}^{t}_a$}
\INDSTATE[1]{Normalize $\tilde{A}^{t+1} = \frac{A^{t + 1}}{\sum_i A^{t + 1}_i}$}
\end{algorithmic}
\caption{The Multiplicative Weights Algorithm}
\label{alg:MW}
\end{algorithm}
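For reference, the following short sketch (an illustrative rendering of \Cref{alg:MW}, not code from the paper) implements the update rule above for losses supplied one round at a time:
\begin{verbatim}
# A minimal sketch of the multiplicative weights algorithm over m
# actions, with loss vectors supplied by the environment each round.
import numpy as np

def multiplicative_weights(losses, eta):
    # losses: T x m array; row t is the loss vector l^t.
    T, m = losses.shape
    w = np.ones(m) / m              # uniform initial distribution
    history = []
    for t in range(T):
        history.append(w.copy())
        w = w * np.exp(-eta * losses[t])  # downweight lossy actions
        w = w / w.sum()                   # renormalize
    return history
\end{verbatim}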
For our purposes, the most important application of MW is to solving zero-sum
games. \citet{FS96} showed that if one player maintains a distribution over
actions using MW, while the other player selects a {\em best-response} action
versus the current MW distribution (i.e., an action that maximizes his expected
payoff), the average MW distribution and empirical best-response distributions
will converge to an approximate equilibrium rapidly.
\begin{theorem}[\citet{FS96}]
\label{thm:mw-eq}
Let $\alpha > 0$, and let $A(i, j) \in [-1, 1]^{m \times n}$ be the payoff
matrix for a zero-sum game. Suppose the first player uses multiplicative
weights over their actions to play distributions $p^1, \dots, p^T$, while
the second player plays $(\alpha/2)$-approximate best responses $x^1, \dots,
x^T$, i.e.,
\[
A(x^t, p^t) \geq \max_x A(x, p^t) - \alpha/2.
\]
Setting $T = 16 \log n /\alpha^2$ and $\eta = \alpha/4$ in the MW algorithm,
the empirical distributions
\[
\frac{1}{T}\sum_{i = 1}^T p^i
\quad \text{and} \quad
\frac{1}{T} \sum_{i = 1}^T x^i
\]
form an $\alpha$-approximate mixed Nash equilibrium.
\end{theorem}
\section{Dual query release}
Viewing query release as a zero-sum game, we can interpret the algorithm of
\citet{HR10} (and the \mbox{{\sf MWEM}}\xspace algorithm of \citet{HLM12}) as solving the game by
using MW for the data player, while the query player plays best responses. To
guarantee privacy, their algorithm selects the query best-responses privately
via the exponential mechanism of \citet{MT07}. Our algorithm simply reverses
the roles: while \mbox{{\sf MWEM}}\xspace uses a no-regret algorithm to maintain the data player's
distribution, we will instead use a no-regret algorithm for the query player's
distribution; instead of finding a maximum payoff query at each round, our
algorithm selects a minimum payoff record at each turn.
In a bit more detail, we maintain a distribution $Q$ over the queries in
$\cQ$, initially uniform. At each step $t$, we first simulate the data player
making a best response against $Q$, i.e., selecting a record $x^t$ that
maximizes the expected payoff if the query player draws a query according to
$Q$. As we discuss below, for privacy considerations we cannot work directly
with the distribution $Q$. Instead, we sample $s$ queries $\{ q_i \}$ from
$\cQ$ according to $Q$, and form the ``average'' query $\tilde{q}$, which we
take as an estimate of $Q$. The data player then best-responds against
$\tilde{q}$. Then, we simulate the query player updating the distribution $Q$
after seeing the selected record $x^t$---we will reduce weight on queries that
have similar answers on $x^t$ and the database $D$, and we will increase
weight on queries that have very different answers on $x^t$ and $D$. After
running for $T$ rounds, we output the set of records $\{ x^t \}_t$ as the
synthetic database. Note that this is a data set from the same domain as the
original data set, and so can be used in any application that the original
data set can be used in---with the caveat, of course, that except with respect
to the query class used in the algorithm, there are no guarantees about how
the synthetic data resembles the real data. The full
algorithm can be found in \Cref{alg:dualquery}; the main parameters are $\alpha
\in (0, 1)$, which is the target maximum additive error of \emph{any} query on
the final output, and $\beta \in (0, 1)$, which is the probability that the
algorithm may fail to achieve the accuracy level $\alpha$. These two parameters
control the number of steps we take ($T$), the update rate of the query
distribution ($\eta$), and the number of queries we sample at each step ($s$).
Our privacy argument differs slightly from the analysis for \mbox{{\sf MWEM}}\xspace. There, the
data distribution is public, and finding a query with high error requires access
to the private data. Our algorithm instead maintains a distribution $Q$ over
queries which depends directly on the private data, so we cannot use $Q$
directly. Instead, we argue that \emph{queries sampled from $Q$} are privacy
preserving. Then, we can use a non-private optimization method to find a
minimal error record versus queries sampled from $Q$. We then trade off
privacy (which degrades as we take more samples) with accuracy (which improves
as we take more samples, since the distribution of sampled queries converges to
$Q$).
Given known hardness results for the query release problem \citep{Ullman13}, our
algorithm must have worst-case runtime polynomial in the universe size $|\cX|$,
so it is not theoretically more efficient than prior approaches. In fact, even
compared to prior work on query release (e.g., \citet{HR10}), our algorithm has
a weaker accuracy guarantee. However, our approach has an important practical
benefit: the computationally hard step can be handled with standard, non-private
solvers (we note that this is common also to the approach of \cite{NTZ13,DNT14}).
The iterative structure of our algorithm, combined with our use of constraint
solvers, also allows for several heuristic improvements. For instance, we may run
for fewer iterations than predicted by theory. Or, if the optimization problem
turns out to be hard (even in practice), we can stop the solver early at a
suboptimal (but often still good) solution. These heuristic tweaks can improve
accuracy beyond what is guaranteed by our accuracy theorem, while always
maintaining a strong \emph{provable} privacy guarantee.
\begin{algorithm}
\begin{algorithmic}
\STATE{\textbf{Parameters:} Target accuracy level $\alpha \in (0, 1)$,
target failure probability $\beta \in (0, 1)$.}
\STATE{\textbf{Input:}
Database $D \in \R^{|\cX|}$ (normalized) and linear queries
$q_1, \dots, q_k \in \{0, 1\}^{|\cX|}$.}
\STATE{\textbf{Initialize:}
Let $\cQ = \bigcup_{j = 1}^{k} \{ q_j, \bar{q_j} \}$,
$Q^1$ be a uniform distribution on $\cQ$,
\[
T = \frac{16 \log |\cQ|}{\alpha^2},
\qquad
\eta = \frac{\alpha}{4},
\qquad \text{and} \qquad
s = \frac{48 \log \left({2 |\cX| T}/{\beta} \right)}{\alpha^2} .
\]
Let the payoff function $A_D : \cX \times [0, 1]^{|\cX|} \to \R$ be:
\[
A_D(x, \tilde{q}) = \tilde{q}(D) - \tilde{q}(x) ,
\qquad
\text{where}
\qquad
\tilde{q}(D) = \frac{1}{|D|} \sum_{x \in D} \tilde{q}_x .
\]
}
\STATE{For $t = 1,\dots,T$:}
\INDSTATE[1]{Sample $s$ queries $\{q_i\}$ from $\cQ$ according to
$Q^{t}$.}
\INDSTATE[1]{Let $\tilde{q} := \frac{1}{s} \sum_i q_i$.}
\INDSTATE[1]{Find $x^t$ with $A_D(x^t, \tilde{q}) \geq \max_x
A_D(x, \tilde{q}) - \alpha/4$.}
\INDSTATE[1]{\textbf{Update:} For each $q \in \cQ$:}
\INDSTATE[2]{$Q^{t+1}_q := \exp(-\eta A_D(x^t, q))
\cdot Q^{t}_q.$}
\INDSTATE[1]{Normalize $Q^{t+1}$.}
\STATE{Output synthetic database $\hat{D} := \bigcup_{t = 1}^T x^t.$}
\end{algorithmic}
\caption{\mbox{{\sf DualQuery}}\xspace}
\label{alg:dualquery}
\end{algorithm}
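For readers who prefer code, the following condensed sketch mirrors \Cref{alg:dualquery}. It is illustrative only: the best-response oracle (in our experiments, an integer program handed to a solver) is abstracted as a function argument, and it need only be approximately optimal.
\begin{verbatim}
# Condensed sketch of DualQuery. `queries` is a k x |X| 0/1 matrix
# (closed under negation), `D` a normalized histogram over the
# universe, and `best_response(q_avg)` returns a record index
# (approximately) maximizing q_avg[x].
import numpy as np

def dual_query(queries, D, T, s, eta, best_response, seed=0):
    rng = np.random.default_rng(seed)
    k = len(queries)
    Q = np.ones(k) / k                    # distribution over queries
    answers = queries.dot(D)              # q(D) for every q
    synthetic = []
    for t in range(T):
        sample = rng.choice(k, size=s, p=Q)  # the only private step
        q_avg = queries[sample].mean(axis=0)
        x = best_response(q_avg)             # non-private solver
        synthetic.append(x)
        payoff = answers - queries[:, x]     # A_D(x, q) for every q
        Q = Q * np.exp(-eta * payoff)
        Q = Q / Q.sum()
    return synthetic
\end{verbatim}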
\subsection*{Privacy}
The privacy proofs are largely routine, based on the composition theorems.
Rather than fixing $\varepsilon$ and solving for the other parameters as is typical in
the literature, we present the
privacy cost $\varepsilon$ as function of parameters $T, s, \eta$. This form will lead
to simpler tuning of the parameters in our experimental evaluation.
We will use the privacy of the following mechanism (result due to \citet{MT07})
as an ingredient in our privacy proof.
\begin{lemma}[\citet{MT07}]
\label{def:expmech}
Given some arbitrary output range $R$, the {\em exponential mechanism} with
score function $S$ selects and outputs an element $r\in R$ with probability
proportional to
\[
\exp\left(\frac{\varepsilon S(D, r)}{2\cdot GS_S}\right),
\]
where $GS_S$ is the {\em sensitivity} of $S$, defined as
\[
GS_S = \max_{D, D': |D\triangle D'| = 1, r\in R} |S (D, r) - S (D', r)|.
\]
The exponential mechanism is $\varepsilon$-differentially private.
\end{lemma}
We first prove pure $\varepsilon$-differential privacy.
\begin{theorem}
\label{thm:privacy2}
\mbox{{\sf DualQuery}}\xspace is $\varepsilon$-differentially private for
\[
\varepsilon = \frac{\eta T(T-1)s}{n}.
\]
\end{theorem}
\begin{proof}
We will argue that sampling from $Q^t$ is equivalent to running the
exponential mechanism with some quality score. At round $t$, let $\{x^i\}$
for $i \in [t-1]$ be the best responses for the previous rounds. Let $r(D, q)$
be defined by
\[
r(D, q) = \sum_{i = 1}^{t - 1} ( q(D) - q(x^i) ),
\]
where $q \in \cQ$ is a query and $D$ is the true database. This function is
evidently $((t-1)/n)$-sensitive in $D$: changing $D$ changes each $q(D)$ by
at most $1/n$. Now, note that sampling from $Q^t$ is simply sampling from the
exponential mechanism, with quality score $r(D, q)$. Thus, the privacy cost of
each sample in round $t$ is $\varepsilon_t' = 2\eta (t-1)/n$ by \Cref{def:expmech}.
By the standard composition theorem (\Cref{composition}), the total privacy cost is
\[
\varepsilon = \sum_{t = 1}^{T} s\varepsilon_t' = \frac{2 \eta s}{n} \cdot
\sum_{t = 1}^{T} (t-1) = \frac{\eta T(T - 1)s}{n}.
\]
\end{proof}
We next show that \mbox{{\sf DualQuery}}\xspace is $(\varepsilon, \delta)$-differentially private, for a much
smaller $\varepsilon$.
\begin{theorem}
\label{thm:privacy}
Let $0 < \delta < 1$. \mbox{{\sf DualQuery}}\xspace is $(\varepsilon, \delta)$-differentially private for
\[
\varepsilon = \frac{2 \eta (T-1)}{n} \cdot \left[ \sqrt{2s(T-1) \log(1/\delta)} + s(T-1) \left(
\exp\left( \frac{2 \eta (T-1)}{n} \right) - 1 \right) \right].
\]
\end{theorem}
\begin{proof}
Let $\varepsilon$ be defined by the above equation. By the advanced composition
theorem (\Cref{composition}), running a composition of $k$ $\varepsilon'$-private
mechanisms is $(\varepsilon, \delta)$-private for
\[
\varepsilon = \sqrt{2k \log(1/\delta)} \varepsilon' + k \varepsilon' (\exp(\varepsilon') -
1).
\]
Again, note that sampling from $Q^t$ is simply sampling from the exponential
mechanism, with a $(T-1)/n$-sensitive quality score. Thus, the privacy cost of
each sample is at most $\varepsilon' = 2\eta (T-1)/n$ by \Cref{def:expmech}. We
compose $k = s(T-1)$ samples: the samples in the first round are drawn from the
uniform distribution $Q^1$, which does not depend on the data, so they are
$0$-differentially private.
\end{proof}
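Since our experiments fix $(T, s, \eta)$ and then read off the resulting privacy cost, a small helper transcribing the bounds of \Cref{thm:privacy2} and \Cref{thm:privacy} is convenient; the example values below are arbitrary:
\begin{verbatim}
# Privacy cost as a function of the parameters (T, s, eta, n),
# transcribing the two privacy theorems above.
import math

def eps_pure(T, s, eta, n):
    return eta * T * (T - 1) * s / n

def eps_approx(T, s, eta, n, delta):
    e1 = 2 * eta * (T - 1) / n      # per-sample cost
    k = s * (T - 1)                 # number of composed samples
    return (math.sqrt(2 * k * math.log(1 / delta)) * e1
            + k * e1 * (math.exp(e1) - 1))

print(eps_approx(T=100, s=1000, eta=0.5, n=500000, delta=1e-3))
# ~0.24
\end{verbatim}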
\subsection*{Accuracy}
The accuracy proof proceeds in two steps. First, we show that ``average query''
formed from the samples is close to the true weighted distribution $Q^t$. We
will need a standard Chernoff bound.
\begin{lemma}[Chernoff bound] \label{chernoff}
Let $X_1, \dots, X_N$ be IID random variables with mean $\mu$, taking
values in $[0, 1]$. Let $\bar{X} = \frac{1}{N} \sum_i X_i$ be the empirical
mean. Then,
\[
\Pr[ |\bar{X} - \mu| > \tau ] < 2 \exp (-N\tau^2/3)
\]
for any $\tau \in (0, 1)$.
\end{lemma}
\begin{lemma}
\label{lem:sampling}
Let $\beta \in (0, 1)$, and let $p$ be a distribution over queries. Suppose we draw
\[
s = \frac{48 \log \left(\frac{2 |\cX|}{\beta} \right)}{\alpha^2}
\]
samples $\{\hat{q_i}\}$ from $p$, and let $\bar{q}$ be the aggregate query
\[
\bar{q} = \frac{1}{s} \sum_{i = 1}^s \hat{q_i}.
\]
Define the true weighted answer $Q(x)$ to be
\[
Q(x) = \sum_{i = 1}^{|\cQ|} p_i q_i(x).
\]
Then with probability at least $1 - \beta$, we have $|\bar{q}(x) - Q(x) | <
\alpha/4$ for every $x \in \cX$.
\end{lemma}
\begin{proof}
For any fixed $x$, note that $\bar{q}(x)$ is the average of random variables
$\hat{q_1}(x), \dots ,\hat{q_s}(x)$. Also, note that $\mathbb{E}[\bar{q}(x)] =
Q(x)$. Thus, by the Chernoff bound (\Cref{chernoff}) and our choice of $s$,
\[
\Pr[ |\bar{q}(x) - Q(x)| > \alpha/4 ] < 2 \exp (-s\alpha^2/48) =
\beta / |\cX|.
\]
By a union bound over $x \in \cX$, this equation holds for all $x \in \cX$
with probability at least $1 - \beta$.
\end{proof}
Next, we show that we compute an approximate equilibrium of the query release
game. In particular, the record best responses form a synthetic database that
answers all queries in $\cQ$ accurately. Note that our algorithm does not
require an exact best response for the data player; an approximate best
response will do.
\begin{theorem}
\label{thm:utility}
With probability at least $1 - \beta$, \mbox{{\sf DualQuery}}\xspace finds a synthetic database that
answers all queries in $\cQ$ within additive error
$\alpha$.
\end{theorem}
\begin{proof}
As discussed in \Cref{sec:qr:game}, it suffices to show that the distribution
of best responses $x^1, \dots, x^T$ forms an $\alpha$-approximate
equilibrium strategy in the query release game. First, we set the number of
samples $s$ according to \Cref{lem:sampling} with failure probability
$\beta / T$. By a union bound over $T$ rounds, sampling is successful for
every round with probability at least $1 - \beta$; condition on this event.
Since we are finding an $\alpha/4$ approximate best response to the sampled
aggregate query $\bar{q}$, which differs from the true distribution by at most
$\alpha/4$ (by \Cref{lem:sampling}), each $x^i$ is an $\alpha/4 + \alpha/4 =
\alpha/2$ approximate best response to the true distribution $Q^t$. Since $q$
takes values in $[0, 1]$, the payoffs are all in $[-1, 1]$. Hence,
\Cref{thm:mw-eq} applies; setting $T$ and $\eta$ accordingly gives the result.
\end{proof}
\begin{remark}
The guarantee in \Cref{thm:utility} may seem a little unusual, since the
convention in the literature is to treat $\varepsilon, \delta$ as inputs to the
algorithm. We can do the same: from \Cref{thm:privacy} and plugging in for $T,
\eta, s$, we have
\begin{align*}
\varepsilon = \frac{4\eta T \sqrt{2 sT \log(1/\delta)}}{n}
= \frac{256 \log^{3/2} |\cQ| \sqrt{6 \log (1/\delta)
\log(2|\cX|T/\beta)}}{\alpha^3 n}.
\end{align*}
Solving for $\alpha$, we find
\[
\alpha = O \left( \frac{\log^{1/2}|\cQ| \log^{1/6} (1/\delta)
\log^{1/6}(2|\cX|/\gamma)}{n^{1/3} \varepsilon^{1/3}} \right),
\]
for $\gamma < \beta/T$.
\end{remark}
\section{Case study: 3-way marginals}
In our algorithm, the computationally difficult step is finding the data
player's approximate best response against the query player's distribution. As
mentioned above, the form of this problem depends on the particular query class
$\cQ$. In this section, we first discuss the optimization problem in general,
and then specifically for the well-studied class of \emph{marginal} queries. For instance, in a database of medical information
in binary attributes, a particular marginal query may be: What fraction of the
patients are over 50, smoke, and exercise?
\subsection*{The best-response problem}
Recall that the query release game has payoff $A(x, q)$ defined by
\Cref{eq:payoff}; the data player tries to minimize the payoff, while the query
player tries to maximize it. If the query player has distribution $Q^t$ over
queries, the data player's best response minimizes the expected loss:
\[
\argmin_{x \in \cX} \Ex{q \leftarrow Q^t}{q(D) - q(x)}.
\]
To ensure privacy, the data player actually plays against the distribution of
samples $\hat{q_1}, \dots, \hat{q_s}$. Since the database $D$ is fixed and
$\hat{q_i}$ are linear queries, the best-response problem is
\[
\argmin_{x \in \cX} \frac{1}{s} \sum_{i = 1}^s \hat{q_i}(D) - \hat{q_i}(x)
= \argmax_{x\in \cX} \sum_{i=1}^s \hat{q_i}(x).
\]
By \Cref{thm:utility} it even suffices to find an approximate maximizer, in
order to guarantee accuracy.
\subsection*{3-way marginal queries}
To look at the precise form of the best-response problem, we consider {\em
3-way marginal} queries. We think of records as having $d$ binary attributes,
so that the data universe $|\cX|$ is all bitstrings of length $d$. We write
$x_i$ for $x \in \cX$ to mean the $i$th bit of record $x$.
\begin{definition}
Let $\cX = \{0, 1\}^d$. A {\em 3-way marginal query} is a linear query
specified by 3 integers $a \neq b \neq c \in [d]$,
taking values
\[
q_{abc}(x) = \left\{
\begin{array}{ll}
1 &: x_a = x_b = x_c = 1 \\
0 &: \text{otherwise.}
\end{array}
\right.
\]
\end{definition}
Though everything we do will apply to general $k$-way marginals, for
concreteness we will consider $k = 3$: 3-way marginals.
Recall that the query class $\cQ$ includes each query and its negation. So, we also
have negated conjunctions:
\[
\bar{q_{abc}}(x) = \left\{
\begin{array}{ll}
0 &: x_a = x_b = x_c = 1 \\
1 &: \text{otherwise.}
\end{array}
\right.
\]
Given sampled conjunctions $\{\hat{u_i}\}$ and negated conjunctions
$\{\hat{v_i}\}$, the best-response problem is
\[
\argmax_{x \in \cX} \sum_i \hat{u_i}(x) + \sum_j \hat{v_j}(x).
\]
In other words, this is a MAXCSP problem---we can associate a clause to each
conjunction:
\[
q_{abc} \Rightarrow (x_a \wedge x_b \wedge x_c)
\quad \text{and} \quad
\bar{q_{abc}} \Rightarrow (\bar{x_a} \vee \bar{x_b} \vee \bar{x_c}),
\]
and we want to find $x \in \{0, 1\}^d$ satisfying as many clauses as
possible.\footnote{%
Note that this is almost a MAX3SAT problem, except there are also
``negative'' clauses.}
Since most solvers do not directly handle MAXCSP problems, we convert this
optimization problem into a more standard, integer program form. We introduce
a binary variable $x_i$ for each attribute, and a binary variable $c_i$ for
each sampled query, positive or negated. Then, we form the following integer
program encoding the best-response MAXCSP problem.
\begin{align*}
\max &\sum_i c_i & \\
\text{such that } & x_a + x_b + x_c \geq 3c_i &\text{ for each } \hat{u_i} = q_{abc} \\
& (1 - x_a) + (1 - x_b) + (1 - x_c) \geq
c_j &\text{ for each } \hat{v_j} = \bar{q_{abc}} \\
& x_i, c_i \in \{0, 1\}
\end{align*}
Note that the expressions $x_i, 1 - x_i$ encode the literals $x_i, \bar{x_i}$,
respectively, and the clause variable $c_i$ can be set to $1$ exactly when the
respective clause is satisfied. Thus, the objective is the number of satisfied
clauses. The resulting integer program can be solved using any standard solver;
we use
\mbox{{\sf CPLEX}}\xspace.
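As an illustration (a sketch under stated assumptions, not our production code, which calls \mbox{{\sf CPLEX}}\xspace from OCaml), the program above can be written in a few lines with the open-source PuLP modeling library; here \texttt{pos} and \texttt{neg} hold the sampled conjunctions and negated conjunctions as triples of attribute indices:
\begin{verbatim}
# Sketch of the best-response integer program with PuLP; any MIP
# solver can stand in for CPLEX here.
import pulp

def best_response_marginals(d, pos, neg):
    prob = pulp.LpProblem("best_response", pulp.LpMaximize)
    x = [pulp.LpVariable("x_%d" % i, cat="Binary") for i in range(d)]
    m = len(pos) + len(neg)
    c = [pulp.LpVariable("c_%d" % j, cat="Binary") for j in range(m)]
    prob += pulp.lpSum(c)                        # satisfied clauses
    for j, (a, b, e) in enumerate(pos):          # x_a & x_b & x_e
        prob += x[a] + x[b] + x[e] >= 3 * c[j]
    for j, (a, b, e) in enumerate(neg, len(pos)):
        prob += (1 - x[a]) + (1 - x[b]) + (1 - x[e]) >= c[j]
    prob.solve()
    return [int(pulp.value(v)) for v in x]
\end{verbatim}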
\section{Case study: Parity queries}
In this section, we show how to apply \mbox{{\sf DualQuery}}\xspace to another well-studied class of
queries: {\em parities}. Each is specified by a subset $S$ of features; such
a query compares the number of records with an even number of bits set in $S$
to the number of records with an odd number of bits set in $S$.
\begin{definition}
Let $\cX = \{-1, +1\}^d$. A {\em $k$-wise parity query} is a linear query
specified by a subset of features $S \subseteq [d]$ with $|S| = k$, taking
values
\[
q_S(x) = \left\{
\begin{array}{ll}
+1 &: \text{even number of } x_i = +1 \text{ for } i \in S \\
-1 &: \text{otherwise.}
\end{array}
\right.
\]
Like before, we can define a {\em negated $k$-wise parity query}:
\[
\bar{q_S}(x) = \left\{
\begin{array}{ll}
+1 &: \text{odd number of } x_i = +1 \text{ for } i \in S \\
-1 &: \text{otherwise.}
\end{array}
\right.
\]
\end{definition}
For the remainder, we specialize to $k = 3$.
Given sampled parity queries $\{ \hat{u_i} \}$ and negated parity queries $\{
\hat{v_i} \}$, the best response problem is to find the record $x \in \cX$ that
takes value $1$ on as many of these queries as possible. We can construct an
integer program for this task: introduce $d$ binary variables $x_p$, and two
variables $c_i, d_i$ for each sampled query. The following integer program
encodes the
best-response problem.
\begin{align*}
  \max &\sum_i c_i & \\
  \text{such that } & \sum_{p \in S_i} x_p = 2 d_i + c_i - 1 & \text{for each } \hat{u_i} = q_{S_i} \\
  & \sum_{p \in S_i} x_p = 2 d_i + c_i & \text{for each } \hat{v_i} = \bar{q_{S_i}} \\
  & x_p, c_i \in \{0, 1\}, \quad d_i \in \{ 0, 1, 2 \} &
\end{align*}
Consider the (non-negated) parity queries first. The idea is that the variable
$c_i$ can be set to $1$ exactly when the corresponding parity query takes value
$1$, i.e., when $x$ has an even number of bits in $S_i$ set to $+1$. Since
$|S_i| = 3$, this even number is either $0$ or $2$, hence equal to $2d_i$ for
$d_i \in \{0, 1\}$. A similar argument holds for the negated parity queries.
\begin{figure}[h]
\centering
\begin{center}
\begin{tabular}{ | l || l | l |l |}
\hline
Dataset & Size & Attributes & Binary attributes \\
\hline
Adult & 30162 & 14 & 235 \\
KDD99 & 494021 & 41 & 396 \\
Netflix & 480189 & 17,770 & 17,770 \\
\hline
\end{tabular}
\end{center}
\caption{Test Datasets}
\label{fig:data}
\end{figure}
\section{Experimental evaluation}
\begin{figure*}[ht]
\centering
\subfloat{
\includegraphics[width=0.325\textwidth]{adult-eps-error}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{kdd99-eps-error}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{netflix-eps-error}}
\caption{Average max error of $(\varepsilon, 0.001)$-private \mbox{{\sf DualQuery}}\xspace on $500{,}000$
$3$-way marginals versus $\varepsilon$.}
\label{fig:accuracy}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat{
\includegraphics[width=0.325\textwidth]{kdd-q-avg-err}%
\label{fig:q:kdd-avg}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{kdd-q-max-err}%
\label{fig:q:kdd-max}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{kdd-q-rt}%
\label{fig:q:kdd-rt}}
\caption{Error and runtime of $(1, 0.001)$-private \mbox{{\sf DualQuery}}\xspace on KDD99 versus number
of queries.}
\label{fig:queries}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat{
\includegraphics[width=0.325\textwidth]{ran-attr-avgerr}%
\label{fig:att-error}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{ran-attr-err}%
\label{fig:att-max}}
\hfill
\subfloat{
\includegraphics[width=0.325\textwidth]{ran-attr-rt}
\label{fig:att-rt}}
\caption{Error and runtime of $(1, 0.001)$-private \mbox{{\sf DualQuery}}\xspace on $100{,}000$ $3$-way
marginal queries versus number of attributes.}
\label{fig:attributes}
\end{figure*}
\begin{figure*}[h]
\centering
\subfloat{
\includegraphics[width=0.35\textwidth]{compareMWEM-avg-NEW}}
\subfloat{
\includegraphics[width=0.35\textwidth]{compareMWEM-max-NEW}}
\caption{Comparison of the accuracy performance between \mbox{{\sf DualQuery}}\xspace and
\mbox{{\sf MWEM}}\xspace on a low-dimensional dataset with 17 attributes. Both
algorithms answer 10,000 queries for the Adult dataset under
$(\varepsilon, 0)$-differential privacy, where $\varepsilon$ ranges from 1 to
5. In both plots, we show the average and maximum error as a
function of the privacy parameter $\varepsilon$.}
\label{fig:mwem}
\end{figure*}
We evaluate \mbox{{\sf DualQuery}}\xspace on a large collection of $3$-way marginal queries on
several real datasets (\Cref{fig:data}) and high dimensional synthetic
data. Adult (census data) and KDD99 (network packet data) are from the
UCI repository~\citep{uci}, and have a mixture of discrete (but
non-binary) and continuous attributes, which we discretize into binary
attributes. We also use the (in)famous \citet{netflix} movie ratings
dataset, with more than 17{,}000 binary attributes. More precisely, we
can consider each attribute (corresponding to a movie) to be $1$ if a
user has watched that movie, and $0$ otherwise.
Rather than set the parameters as in \Cref{alg:dualquery}, we
experiment with a range of parameters. For instance, we frequently run
for fewer rounds (lower $T$) and take fewer samples (lower $s$). As
such, the accuracy guarantee (\Cref{thm:utility}) need not hold for
our parameters. However, we find that our algorithm gives good error,
often much better than predicted. In all cases, our parameters satisfy
the privacy guarantee \Cref{thm:privacy}.
We will measure the accuracy of the algorithm using two
measures:~\emph{average error} and~\emph{max error}. Given a
collection of queries $Q=\{q_1, \ldots, q_k\}$, input database $D$ and
the synthetic database $\hat D$ output by our query release algorithm,
the average error is defined as
\[
\frac{1}{|Q|} \sum_j |q_j(D) - q_j(\hat D)|
\]
and the max error is defined as
\[
\max_j |q_j(D) - q_j(\hat D)|.
\]
We will run the algorithm multiple times and take the average of
both error statistics. While our algorithm uses ``negated'' $3$-way marginal
queries internally, all error figures are for normal, ``positive'' $3$-way
marginal queries only.
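In the query-vector representation used earlier, both statistics are immediate; the following sketch (illustrative, with the same conventions as the earlier snippets) is essentially what the error reporting computes:
\begin{verbatim}
# Average and max error, with queries as a k x |X| 0/1 matrix and
# databases as normalized histograms over the universe.
import numpy as np

def errors(queries, D, D_hat):
    diffs = np.abs(queries.dot(D) - queries.dot(D_hat))
    return diffs.mean(), diffs.max()
\end{verbatim}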
\subsection*{Accuracy} \label{sec:accuracy}
We evaluate the accuracy of the algorithm on $500{,}000$ $3$-way marginals on
Adult, KDD99 and Netflix. We report maximum error in \Cref{fig:accuracy},
averaged over $5$ runs. (Marginal queries have range $[0, 1]$, so error $1$ is
trivial.) Again, all error figures are for normal, ``positive'' $3$-way marginal
queries only.
The runs are $(\varepsilon, 0.001)$-differentially private, with $\varepsilon$ ranging from
$0.25$ to $5$.\footnote{%
Since our privacy analysis follows from
\Cref{composition}, our algorithm actually satisfies $(\varepsilon,
\delta)$-privacy for smaller values of $\delta$. For example, our
algorithm is also $(\sqrt{2}\varepsilon, \delta')$-private for $\delta' =
10^{-6}$. Similarly, we could choose any arbitrarily small value of
$\delta$, and \Cref{composition} would tell us that our algorithm
was $(\varepsilon', \delta)$-differentially private for an appropriate
value $\varepsilon'$, which depends only sub-logarithmically on $1/\delta$.
}
For the Adult and KDD99 datasets, we set step size $\eta = 2.0$ and sample
size $s = 1000$, while varying the number of steps $T$ according to the privacy budget
$\varepsilon$, using the formula from \Cref{thm:privacy}. For the Netflix dataset, we
adopt the same heuristic except we set $s$ to be $5000$.
The accuracy improves noticeably when $\varepsilon$ increases from $0.25$ to
$1$ across all three datasets, and the improvement diminishes gradually with larger
$\varepsilon$. With their larger sizes, the KDD99 and Netflix datasets allow \mbox{{\sf DualQuery}}\xspace to
run with more steps and achieve significantly better error.
\subsection*{Scaling to More Queries}
Next, we evaluate accuracy and runtime when varying the number of queries. We
use a set of $40{,}000$ to $2$ million randomly generated marginals $\cQ$ on the
KDD99 dataset and run \mbox{{\sf DualQuery}}\xspace with $(1, 0.001)$-privacy.
For all experiments, we use the same set of parameters: $\eta = 1.2, T = 170$
and $s = 1750$. By \Cref{thm:privacy}, each run of the experiment satisfies $(1,
0.001)$-differential privacy. These parameters give stable performance as the
query class $\cQ$ grows.
As shown in \Cref{fig:queries}, both average and max error remain
mostly stable, demonstrating improved error compared to simpler
perturbation approaches. For example, the Laplace mechanism's error
growth rate is $O(\sqrt{|\cQ|})$ under $(\varepsilon, \delta)$-differential
privacy.
The runtime grows almost linearly in the number of queries, since we maintain a
distribution over all the queries.
\subsection*{Scaling to Higher Dimensional Data}
We also evaluate accuracy and runtime behavior for data dimension ranging
from $50$ to $512{,}000$. We evaluate \mbox{{\sf DualQuery}}\xspace under $(1, 0.001)$-privacy on
$100{,}000$ $3$-way marginals on synthetically generated datasets. We report
runtime, max, and average error over $3$ runs in \Cref{fig:attributes}; note the
logarithmic scale for attributes axis. We do not include query evaluation in our
time measurements---this overhead is common to all approaches that answer a set
of queries.
When generating the synthetic data, one possibility is to set each attribute to
be $0$ or $1$ uniformly at random. However, this generates very uniform
synthetic data: a record satisfies any $3$-way marginal with probability $1/8$,
so most marginals will have value near $1/8$. To generate more challenging and
realistic data, we pick a separate bias $p_i \in [0, 1]$ uniformly at random for
each attribute $i$. For each data point, we then set attribute $i$ to be $1$
independently with probability equal to $p_i$. As a result, different $3$-way
marginals have different answers on our synthetic data.
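A sketch of this generator (illustrative; the sizes are placeholders):
\begin{verbatim}
# Synthetic data: one uniform bias per attribute, then i.i.d.
# Bernoulli records with those biases.
import numpy as np

def gen_synthetic(n, d, seed=0):
    rng = np.random.default_rng(seed)
    p = rng.uniform(size=d)          # bias p_i per attribute
    data = (rng.uniform(size=(n, d)) < p).astype(np.int8)
    return data, p

data, biases = gen_synthetic(n=10000, d=512)
\end{verbatim}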
For parameters, we fix step size $\eta$ to be $0.4$, and increase the
sample size $s$ with the dimension of the data (from $200$ to
$50{,}000$) at the expense of running fewer steps. For these
parameters, our algorithm is $(1, 0.001)$-differentially private by
\Cref{thm:privacy}. With this set of parameters, we are able to
obtain $8\%$ average error in an average of $10$ minutes of runtime,
excluding query evaluation.
\subsection*{Comparison with a Simple Baseline}
When interpreting our results, it is useful to compare them to a simple baseline
solution as a sanity check. Since our real data sets are sparse, we would
expect the answer of most queries to be small. A natural baseline is
the \emph{zeros data set}, where all records have all attributes set to $0$.
For the experiments reporting max-error on real datasets, shown
in~\Cref{fig:accuracy}, the accuracy of~\mbox{{\sf DualQuery}}\xspace out-performs the zeros
data set: the zeros data set has a max error of $1$ on both Adult
and KDD99 data sets, and a max error of $0.51$ on the Netflix
data set. For the experiments reporting average error in~\Cref{fig:queries}, the
zeros data set also has worse accuracy---it has an average error of
$0.11$ on the KDD dataset, whereas $\mbox{{\sf DualQuery}}\xspace$ has an average error below $0.01$.
For synthetic data we can also compare to the zeros dataset, but it is a poor
benchmark since the dataset is typically not sparse. Hence, we also compare to a
\emph{uniform data set} containing one of each possible record.\footnote{%
Note that this is the distribution that the MWEM algorithm starts with
initially, so this corresponds to the error of MWEM after running it for 0
rounds (running it for a non-zero number of rounds is not feasible given our
data sizes).}
Recall that we generate our
synthetic data by first selecting a bias $p_i$ uniformly at random from $[0, 1]$
for each attribute $i$, then setting each record's attribute $i$ to be $1$ with
probability $p_i$ (independently across the attributes).
Given this data generation model, we can analytically compute the expected
average error on both benchmarks. Consider a query $q$ on
literals $a, b, c$. The probability that $q$ evaluates to $1$ on a randomly
generated record is $p_a \cdot p_b \cdot p_c$. So, the expected value of $q$
when evaluated on the synthetic data is
\[
\mathbb{E} [ p_a \cdot p_b \cdot p_c ] = 0.125
\]
since $p_a, p_b, p_c$ are drawn independently and uniformly from $[0, 1]$.
On the zeros data set, $q$ takes value $0$ unless it is of the form $\neg a
\land \neg b \land \neg c$, when it takes value $1$. If we average over all
queries $q$, this occurs on exactly $1/8$ of the queries. Thus, the expected
average error of the zeros database is
\[
1/8 \cdot \mathbb{E} [ 1 - p_a \cdot p_b \cdot p_c ]
+
7/8 \cdot \mathbb{E} [ p_a \cdot p_b \cdot p_c ]
=
7/32 = 0.21875 .
\]
Similarly, the max-error will tend to $1$ as the size of the data set and number
of queries performed grows large. In practice, the max-error in our experiments
was above $0.98$.
We can perform a similar calculation with respect to the uniform benchmark. Fix any
query $q$ on three literals $a,b,$ and $c$. Every query $q$ evaluates to exactly
$1/8$ on the uniform benchmark. Thus, the expected error of the uniform
benchmark on $q$ is
\[
\mathbb{E}[ | 1/8 - p_a \cdot p_b \cdot p_c | ] ,
\]
where $p_a, p_b, p_c$ are drawn independently and randomly from $[0, 1]$. This
error is
\[
\mathbb{E}[ | 1/8 - p_a \cdot p_b \cdot p_c | ]
=
1/256 \cdot (7 + 18 \ln 2 + 2 (\ln 8)^2) \approx 0.11 ,
\]
by a direct calculation.
Similarly, the max-error will tend to $1 - 1/8 = 0.875$ as the size of the data
set and number of queries performed grows large. In practice, the max-error in
our experiments was above $0.85$. Thus, we see that \mbox{{\sf DualQuery}}\xspace outperforms both of
these baseline comparisons on synthetic data.
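As a sanity check on the two closed forms above, a quick Monte Carlo estimate (our own check, not part of the evaluation) reproduces both expected average errors:
\begin{verbatim}
# Monte Carlo check of the baseline calculations: 7/32 = 0.21875
# for the zeros data set and about 0.11 for the uniform one.
import numpy as np

rng = np.random.default_rng(1)
p = rng.uniform(size=(1000000, 3)).prod(axis=1)  # p_a * p_b * p_c

zeros_err = (1 - p).mean() / 8 + 7 * p.mean() / 8
uniform_err = np.abs(0.125 - p).mean()
print(zeros_err, uniform_err)   # ~0.21875, ~0.110
\end{verbatim}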
\subsection*{Comparison with \mbox{{\sf MWEM}}\xspace}
Finally, we give a brief comparison between the \mbox{{\sf MWEM}}\xspace algorithm of
\citet{HLM12} and \mbox{{\sf DualQuery}}\xspace in terms of accuracy. We tested
both algorithms on the Adult dataset with a selection of $17$ attributes
to answer $10{,}000$ queries. The accuracy performance of \mbox{{\sf MWEM}}\xspace is better
than \mbox{{\sf DualQuery}}\xspace on this low-dimensional dataset (shown
in~\Cref{fig:mwem}). Note that this matches the theoretical
guarantees---our accuracy bounds are worse than those of \mbox{{\sf MWEM}}\xspace, as
shown by \citet{HR10,HLM12}. More generally, for low dimensional
datasets for which it is possible to maintain a distribution over
records, \mbox{{\sf MWEM}}\xspace likely remains the state of the art. Our work
complements \mbox{{\sf MWEM}}\xspace by allowing private data analysis on
higher-dimensional data sets. In this experiment, we fix the step size
of $\mbox{{\sf DualQuery}}\xspace$ to be $\eta = 0.4$, and set the sample size and number of
steps to be $(s, T) = (35, 47), (40, 62), (45 , 71), (50, 78),(55,
83)$ to achieve privacy levels of $\varepsilon = 1 , 2,3,4,5$
respectively. For the instantiation of \mbox{{\sf MWEM}}\xspace, we set the number of
steps to be $T = 15$ (see~\cite{HLM12} for more details of the
algorithm).
\subsection*{Methodology}
In this section, we discuss our experimental setup in more detail.
\paragraph*{Implementation details.}
The implementation is written in OCaml, using the \mbox{{\sf CPLEX}}\xspace constraint
solver. We ran the experiments on a mid-range desktop
machine with a 4-core Intel Xeon processor and 12 GB of
RAM. Heuristically, we set a timeout for each \mbox{{\sf CPLEX}}\xspace call to $20$
seconds, accepting the best current solution if we hit the
timeout. For the experiments shown, the timeout was rarely reached.
\paragraph*{Data discretization.}
We discretize KDD99 and Adult datasets into binary attributes by
mapping each possible value of a discrete attribute to a new binary
feature. We bucket continuous attributes, mapping each bucket to a new
binary feature. We also ensure that our randomly generated $3$-way marginal
queries are sensible (i.e., they do not require an original attribute to take two
different values).
\paragraph*{Setting free attributes.}
Since the collection of sampled queries may not involve all of the attributes,
\mbox{{\sf CPLEX}}\xspace often finds solutions that leave some attributes unspecified.
We set these {\em free} attributes heuristically:
for real data, we set the attributes to $0$ as these datasets are fairly
sparse;%
\footnote{%
The adult and KDD99 datasets are sparse due to the way we discretize
the data; for the Netflix dataset, most users have only viewed a
tiny fraction of the $17{,}000$ movies.}
for synthetic data, we set attributes to $0$ or $1$ uniformly at random.%
\footnote{%
For a more principled way to set these free
attributes, the sparsity of the dataset could be estimated at a
small additional cost to privacy.}
\paragraph*{Parameter tuning.}
\mbox{{\sf DualQuery}}\xspace has three parameters that can be set in a wide variety of configurations
without altering the privacy guarantee (\Cref{thm:privacy}): number of
iterations ($T$), number of samples ($s$), and learning rate ($\eta$), which
controls how aggressively to update the distribution. For a fixed level of
$\varepsilon$ and $\delta$, there are many feasible private parameter settings.
Performance depends strongly on the choice of parameters: the impact of $T$
is obvious, as runtime scales linearly with $T$. Other parameters have a more
subtle impact on performance; for instance, increasing $s$ increases the
number of constraints in the integer program for \mbox{{\sf CPLEX}}\xspace. We have
investigated a range of parameters, and for the experiments we used informal
heuristics derived from our observations. We briefly describe some of them
here. For sparse datasets like the ones in \Cref{fig:data}, a larger $\eta$
(in the range of 1.5 to 2.0) and a smaller $s$ (on the same order as the
number of attributes) give good accuracy when we set the free attributes to
$0$. But for denser data like our synthetic data, we get better accuracy with
a smaller $\eta$ (around 0.4) and randomly set free attributes. As for
runtime, we observe that \mbox{{\sf CPLEX}}\xspace finishes quickly (in seconds) when the
sample size is within a small factor (2 or 3) of the number of attributes.
Ultimately, our parameters are chosen on a case-by-case basis for each
dataset, a process that is itself not differentially private; a truly
realistic evaluation would tune parameters under differential privacy, which
we do not do. Overall, we do not know of an approach that is both principled
and practical to handle this issue end-to-end; private parameter tuning is an
area of active research (see, e.g., \citet{CV13}).
\section{Discussion and conclusion}
We have given a new private query release mechanism that can handle datasets
with dimensionality multiple orders of magnitude larger than what was previously
possible.
Indeed, it seems we have not reached the limits of our approach: even
on synthetic data with more than $500{,}000$ attributes, \mbox{{\sf DualQuery}}\xspace continues to
generate useful answers with about $30$ minutes of overhead on top of query
evaluation (which by itself is on the scale of hours). We believe that \mbox{{\sf DualQuery}}\xspace
makes private analysis of high-dimensional data practical for the first time.
There is still plenty of room for further research. For example, can our
approach be improved to match the optimal theoretical accuracy guarantees for
query release? We do not know of any fundamental obstacles, but we also do
not know how to give a better analysis of our current methods. The
projection-based approach of \citet{NTZ13,DNT14} seems promising: it achieves
theoretically optimal bounds, and, as in our approach, the ``hard''
projection step does not need to be solved privately. It deserves to be
studied empirically. On the other hand, this approach does not produce
synthetic data.
\subsection*{Acknowledgments}
We thank Adam Smith, Cynthia Dwork and Ilya Mironov for productive discussions,
and for suggesting the Netflix dataset. This work was supported in part by NSF
grants CCF-1101389 and CNS-1065060.
Marco Gaboardi has been supported by the
European Community's Seventh Framework Programme FP7/2007-2013 under
grant agreement No. 272487.
This work was partially supported by a grant from the Simons Foundation (\#360368
to Justin Hsu).
\bibliographystyle{plainnat}
\ifCLASSOPTIONcompsoc
\IEEEraisesectionheading{\section{Introduction}\label{Introduction}}
\else
\section{Introduction}
\label{Introduction}
\fi
\IEEEPARstart{D}{uring} the past years, panoramic video \cite{neumann2000immersive} has become increasingly popular due to its immersive and interactive experience.
To achieve this experience, humans wearing head-mounted displays can control the field of view (FoV) within the full range of $360^{\circ} \times 180^{\circ}$ when watching panoramic video.
In other words, humans are able to freely move their heads within a sphere to make their FoVs focus on attractive content (see Figure \ref{fig-one} for an example).
The content outside the FoV cannot be observed, i.e., it receives no attention from the viewer.
Consequently, head movement (HM) plays a key role in deploying human attention on panoramic video.
HM prediction thus emerges as an increasingly important problem in modeling attention on panoramic video.
In fact, human attention on panoramic video is composed of two parts: HM and eye fixations.
HM determines the FoV, i.e., the region to be seen in panoramic video, through the position of HM sampled at each frame, called the HM position in this paper. Meanwhile, eye fixations decide which region within the FoV is captured at high resolution by the fovea.
Accordingly, HM prediction is the first step towards modeling human attention. Given the predicted HM, the eye fixations within the FoV can be further estimated using conventional saliency detection methods \cite{borji2013state} for 2D video.
As with traditional 2D video, the attention model can be extensively utilized in many areas of panoramic video, such as region-of-interest compression \cite{de2016video}, visual quality assessment \cite{gaddam2016tiling, Xu17}, rendering \cite{stengel2016gaze}, synopsis \cite{Pritch08}, and automatic cinematography \cite{su2016pano2vid}.
Unfortunately, few approaches have been proposed in modeling human attention on panoramic video, especially predicting the HM.
Benefiting from the most recent success of deep reinforcement learning (DRL) \cite{mnih2016asynchronous}, this paper proposes a DRL-based HM prediction (DHP) approach for modeling attention on panoramic video.
The proposed approach applies DRL rather than supervised learning, because DRL maximizes the accumulated \textit{reward} of the \textit{agent's} \textit{actions}, such that the predicted HM scanpaths can simulate the long-term HM behavior of humans.
In fact, HM prediction can be classified into two categories: offline and online prediction. In this paper, the offline HM prediction is used for modeling attention of multiple subjects on panoramic video, whereas the online prediction is used to predict the next HM position of a single subject, based on the ground-truth of his/her HM positions at the current and previous frames.
In this paper, our DHP approach includes both offline and online HM prediction, named offline-DHP and online-DHP, respectively.
The codes for our offline-DHP and online-DHP approaches are downloadable from \url{https://github.com/YuhangSong/DHP}.
\begin{figure*}
\begin{center}
\subfigure[]{\includegraphics[width=0.56\columnwidth]{Head.pdf}}
\subfigure[]{\includegraphics[width=1.34\columnwidth]{fig_one.pdf}}
\caption{\footnotesize{(a) Illustration for head movement (HM) when viewing panoramic video. (b) Demonstration for FoVs and HM positions across different subjects. The heat map of HM positions from all subjects is also shown, which is defined as the HM map.}}
\label{fig-one}
\end{center}
\end{figure*}
To our best knowledge, there exists no offline work to predict the HM positions of multiple subjects in viewing panoramic video. The closest work is saliency detection on 2D video \cite{borji2013state}. The earliest approach for saliency detection was proposed by \textit{Itti et al.} \cite{itti1998model}, in which the features of color, intensity and orientation are combined to generate the saliency map of an image. Later, \textit{Itti et al.} \cite{itti2004automatic} proposed adding two features to \cite{itti1998model}, namely, motion and flicker contrast, for video saliency detection. Recently, several advanced approaches have been proposed for video saliency prediction. These advanced works include the earth mover's distance approach \cite{lin2013visual} and the Boolean map-based saliency model (BMS) \cite{zhang2016exploiting}.
Most recently, deep learning has been successfully applied in the works of saliency detection, such as SALICON \cite{huang2015salicon} and Liu's approach \cite{Liu2017cvpr}.
Saliency detection differs from the offline prediction of HM positions in two aspects.
(1) The input to saliency detection is 2D video in a plane, whereas panoramic video resides on a sphere (Figure \ref{fig-one}).
Saliency detection can be applied to panoramic video that is projected from sphere to 2D plane, but projection normally causes distortion or content discontinuity, degrading the performance of predicting HM positions.
(2) More importantly, saliency detection in 2D video assumes that humans are able to view all the content of each video frame.
However, this assumption does not hold for panoramic video, as subjects can only see a limited range of the FoV at a single sight, rather than the full panoramic range of $360^{\circ} \times 180^{\circ}$.
In fact, different FoVs of panoramic video are accessible to subjects via changing the positions of HM \cite{lowe2015visualization}.
In this paper, we find that different subjects are highly consistent in terms of HM positions.
This finding is based on establishing and analyzing a new database, which consists of the HM data of 58 subjects viewing 76 panoramic video sequences.
Then, we propose the offline-DHP approach to predict the consistent HM positions on panoramic video via generating the HM map for each single frame.
The HM maps are in the form of a sphere, and the positions in the HM maps are thus represented by the longitude and latitude in the geographic coordinate system (GCS) \cite{Goodchild2007}. This paper visualizes the spherical HM maps by projecting them onto the 2D plane.
The offline prediction of Figure \ref{fig-one}-(b) demonstrates an example of the ground-truth HM map for a panoramic video frame. Similar to the saliency maps of 2D video, the HM maps of panoramic video are obtained by convolving the HM positions with a 2D Gaussian filter\footnote{The two dimensions of the Gaussian filter are longitude and latitude, respectively.}.
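For concreteness, a minimal sketch of this map construction in Python is
given below; the grid resolution and filter width are illustrative, the map
is accumulated on an equirectangular grid, and the wrap-around at the
longitude seam as well as the exact spherical geometry near the poles are
ignored for simplicity:
\begin{verbatim}
# Sketch: build an HM map by smoothing HM positions with a 2D
# Gaussian. Positions are (longitude, latitude) in degrees on an
# equirectangular W x H grid; grid size and sigma are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def hm_map(positions, W=360, H=180, sigma=5.0):
    m = np.zeros((H, W))
    for lon, lat in positions:  # lon in [-180,180), lat in [-90,90)
        x = int((lon + 180.0) % 360.0)
        y = int(np.clip(lat + 90.0, 0, H - 1))
        m[y, x] += 1.0
    m = gaussian_filter(m, sigma)   # approximate spherical smoothing
    return m / (m.max() + 1e-12)    # normalize to [0, 1]
\end{verbatim}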
Specifically, our offline-DHP approach yields the HM maps of panoramic video via predicting the HM scanpaths of multiple \textit{agents}, since subjects interactively control their HM along with some scanpaths according to video content.
First, we find from our database that the HM scanpaths of different subjects are highly consistent.
Meanwhile, subjects are normally initialized to view the center of the front region in the beginning frames of panoramic video.
Therefore, the HM positions at the subsequent frames can be yielded on the basis of the predicted scanpaths.
Additionally, we find from our database that the magnitudes and directions of HM scanpaths are similar across subjects.
In light of these findings, our offline-DHP approach models both the magnitudes and directions of HM scanpaths as the \textit{actions} of multiple DRL \textit{agents} and takes the viewed panoramic content as the \textit{observation} of the \textit{environment}.
As such, the DRL model can be learned to predict HM positions.
In training the DRL model, a \textit{reward} is designed to measure the difference of \textit{actions} between the DRL \textit{agents} and subjects, indicating how well the \textit{agents} imitate humans in terms of HM scanpaths.
Then, the \textit{reward} is optimized to learn the parameters in the DRL model.
Given the learned model, the HM maps of panoramic video are generated upon the predicted HM positions, obtained from the scanpaths of several \textit{agents} in multiple DRL workflows.
For online HM prediction, the latest work of \cite{hu2017deep} proposed a deep 360 pilot, which automatically shifts viewing direction (equivalent to the HM position) when watching panoramic video. Specifically, the salient object is detected and tracked across panoramic video frames, via leveraging a region-based convolutional neural network (RCNN) \cite{ren2015faster} and recurrent neural network. Given the detected salient object and previous HM positions, the deep 360 pilot predicts to transit the HM position by learning a regressor. Since the deep 360 pilot relies heavily on one salient object, it is only suitable for some specific scenes that include one salient object, e.g., the sports scenes in \cite{hu2017deep}. It is still challenging to predict HM positions online for generic panoramic video, which may include more than one salient object (e.g., the panoramic video in the online prediction of Figure \ref{fig-one}-(b)). In this paper, we propose an online approach, namely online-DHP, to predict the HM positions on generic panoramic video. In contrast to \cite{hu2017deep}, our online-DHP approach does not need to detect the salient object using the RCNN. Rather, our online-DHP approach is based on attention-related content by leveraging the learned model of our offline-DHP approach. Then, a DRL algorithm is developed in our online-DHP approach to predict the HM positions in an online manner. Specifically, in the DRL algorithm, the \textit{agent} decides the \textit{action} of the HM scanpath in the next frame, according to the ground-truth of the previous HM scanpath and \textit{observation} of video content. Consequently, the HM positions at the incoming frames can be predicted for our online-DHP approach.
This paper is the first attempt to apply the DRL algorithm in modeling human attention on panoramic video. The main contributions of this paper are three-fold:
\begin{itemize}
\item We establish a new panoramic video database that consists of HM positions of 58 subjects across 76 panoramic video sequences, with a thorough analysis of their HM data.
\item We propose an offline-DHP approach to detect HM maps of panoramic video, and this approach predicts the consistent HM positions of multiple subjects.
\item We develop an online-DHP approach to predict the HM position of one subject at the next frame, based on the video content and HM scanpath till the current frame.
\end{itemize}
\section{Related work}
\subsection{Saliency detection}
The only approach for predicting the HM positions of panoramic video is the most recent work of \cite{su2016pano2vid}, in which Pano2Vid was proposed to obtain the FoV at each panoramic video frame.
However, Pano2Vid primarily focuses on virtually generating a potential HM position at one frame, rather than modeling HM maps of multiple subjects at this frame.
The closest work on predicting HM maps is saliency detection for 2D video, which is briefly reviewed in the following.
Saliency detection aims to predict the visual attention of humans on 2D video, by generating saliency maps of video frames. Computational studies of visual saliency date back to 1998, when Itti \textit{et al.} \cite{itti1998model} showed that the features of intensity, color and orientation in an image can be employed to detect its saliency map. Subsequently, they extended their work to video saliency detection \cite{itti2004automatic}, in which two dynamic features of motion and flicker contrast are combined with \cite{itti1998model} to detect saliency in 2D video. Both \cite{itti1998model} and \cite{itti2004automatic} are heuristic approaches for detecting saliency, since they utilize the understanding of the human vision system (HVS) to develop the computational models. Recently, some advanced heuristic approaches, e.g., \cite{itti2009bayesian, boccignone2008nonparametric, zhang2009sunday, guo2010novel, ren2013regularized, lin2013visual, zhang2016exploiting, hossein2015many, xu2017learning}, have been proposed to detect saliency in 2D video.
Specifically, \cite{itti2009bayesian} proposed a novel feature called \textit{surprise}, which measures how the visual change attracts human observers, based on the Kullback-Leibler divergence between spatio-temporal posterior and prior beliefs.
Given the feature of \textit{surprise}, a Bayesian framework was developed in \cite{itti2009bayesian} for video saliency detection.
Some other Bayesian frameworks \cite{boccignone2008nonparametric, zhang2009sunday} were also developed to detect video saliency.
Besides, Lin \textit{et al.} \cite{lin2013visual} quantified the earth mover's distance to measure the center-surround difference in spatio-temporal receptive field, generating saliency maps for 2D video.
Zhang \textit{et al.} \cite{zhang2016exploiting} explored the surround cue for saliency detection, by characterizing a set of binary images with random thresholds on color channels.
Recently, \cite{hossein2015many} and \cite{xu2017learning} have investigated that some features (e.g., motion vector) in compressed domain are of high correlation with human attention, and these features are thus explored in video saliency detection.
Benefiting from the most recent success of deep learning, deep
neural networks (DNNs) \cite{Vig_2014_CVPR, huang2015salicon, kruthiventi2015deepfix, Liu_2015_CVPR, wang2016RCNN, bazzani2016recurrent, SalGAN_2017, Liu2017cvpr, bak2016two,wang2017deep, Kummerer_2017_ICCV, Palazzi_2017_ICCV} have also been developed to detect 2D video saliency, rather than
exploring the HVS-related features as in heuristic saliency detection
approaches. These DNNs can be viewed as data-driven approaches.
For static saliency detection, SALICON \cite{huang2015salicon} fine-tuned the existing
convolutional neural networks (CNNs), with a new saliency-related
loss function. In \cite{Liu_2015_CVPR}, the architecture of multi-resolution CNN was developed for detecting saliency of images.
In \cite{Kummerer_2017_ICCV}, a readout architecture was proposed to predict human attention on static images, in which both DNN features and low-level (isotropic contrast) features are considered.
For dynamic saliency detection, \cite{bazzani2016recurrent} leveraged a
deep convolutional 3D network to learn the representations
of human attention on 16 consecutive frames, and then a long short-term
memory (LSTM) network connected with a mixture density
network was learned to generate saliency maps using Gaussian
mixture distribution. Similarly, Liu et al. \cite{Liu2017cvpr} combined a CNN
and multi-stream LSTM to detect saliency in video with multiple
faces. Moreover, other DNN structures have been developed to
detect either static saliency \cite{Vig_2014_CVPR, kruthiventi2015deepfix, wang2016RCNN, SalGAN_2017} or dynamic saliency \cite{bak2016two,bazzani2016recurrent,wang2017deep, Palazzi_2017_ICCV}.
Although saliency detection has been thoroughly studied in predicting eye movement in 2D video, no work has been developed to predict HM positions on panoramic video.
Similar to saliency detection for 2D video, this paper proposes generating HM maps that represent the HM positions of multiple subjects.
To obtain the HM maps of panoramic video, the HM positions are predicted by estimating the HM scanpaths of several \textit{agents}. Similarly, in the saliency detection area, there exist some works \cite{Wang2011, Sun12, Liu_2013_ICCV, Jiang2016, Assens_2017_ICCV, Shao2017} that predict eye movement scanpaths for static images.
In \cite{Wang2011}, a computational model was developed to simulate the scanpaths of eye movement in natural images. The proposed model embeds three factors to guide eye movement sequentially, including reference sensory responses, fovea periphery resolution discrepancy, and visual working memory. Sun \textit{et al.} \cite{Sun12} proposed modeling both saccadic scanpaths and visual saliency of images, on the basis of super Gaussian component (SGC) analysis. Recently, data-driven approaches have been proposed to learn the scanpaths of eye movement in static images, such as the hidden Markov model in \cite{Liu_2013_ICCV} and least-squares policy iteration (LSPI) in \cite{Jiang2016}.
Most recently, deep learning has been utilized in \cite{Assens_2017_ICCV, Shao2017} for predicting the eye movement scanpaths in static images.
However, to our best knowledge, there is no work on predicting the HM scanpaths on panoramic video.
In this paper, a DRL approach is developed for predicting the actions of the HM scanpaths from multiple \textit{agents}.
The actions are decided based on the environment of the panoramic video content, the features of which are automatically learned and then extracted by a DNN. Thus, our approach takes advantage
of both deep learning and reinforcement learning, driven by the HM data of our panoramic video database. Note that although few works apply DRL to predict human attention, the attention model is widely used in the opposite direction to improve the performance of reinforcement learning, e.g., \cite{minut2001reinforcement, mnih2014recurrent, jaderberg2016reinforcement, wang2016dueling}.
\subsection{Virtual cinematography}
Virtual cinematography of panoramic video, which directs an imaginary camera to virtually capture natural FoV, was proposed in \cite{foote2000flycam, sun2005region, su2016pano2vid, hu2017deep, lin2017tell}. In general, virtual cinematography attempts to agree with the HM positions of humans at each panoramic video frame. The early work of \cite{foote2000flycam} proposed cropping the object-of-interest in panoramic video, such that the natural FoV can be generated for virtual cinematography. Later, in \cite{sun2005region}, the cropped object-of-interest is tracked across frames by a Kalman filter, for automatically controlling the virtual camera in virtual cinematography of panoramic video. The approach of \cite{sun2005region} can work in both compressed and uncompressed domains, as two methods were developed to detect the object-of-interest in each domain. The works of \cite{foote2000flycam, sun2005region} were both designed for the task of online virtual cinematography. These works can be considered heuristic approaches, which are not trained or even evaluated on the ground-truth HM data of human subjects.
Most recently, data-driven approaches have boosted the development of virtual cinematography for panoramic video. Specifically, Pano2Vid \cite{su2016pano2vid} learns to generate natural FoV at each panoramic frame. However, the learning mechanism of Pano2Vid is offline. In fact, natural FoV can be estimated at each frame in an online manner, using the observed HM positions of the previous frames to correct the estimation of natural FoV at the current frame. To this end, online virtual cinematography \cite{hu2017deep, lin2017tell} has been studied in a data-driven way.
Specifically, a state-of-the-art virtual cinematography approach, the deep 360 pilot, was proposed in \cite{hu2017deep}, which is a deep-learning-based \textit{agent} that smoothly tracks the object-of-interest for panoramic video. In other words, the \textit{agent} transits the HM position across video frames to track the key object detected by the RCNN, given the observed HM positions at previous frames. Consequently, natural FoV can be generated online for automatically displaying the object-of-interest in virtual cinematography of panoramic video. In fact, object-of-interest tracking in panoramic video refers to continuously focusing and refocusing on the intended targets. Both focusing and refocusing require a subject to catch up with the object. Such a task is challenging in extreme-sports video, as the object-of-interest may be moving fast. Therefore, Lin \textit{et al.} \cite{lin2017tell} investigated two focus assistance techniques to help the subject track the key object in viewing panoramic video, in which the potential HM position attended to the object-of-interest needs to be determined and provided for the subject.
The above approaches of \cite{foote2000flycam, sun2005region, su2016pano2vid, hu2017deep, lin2017tell} all depend on the detector of the object-of-interest. Thus, they can only be applied to specific panoramic videos with salient objects, such as video conferencing or classroom scenes in \cite{foote2000flycam, sun2005region} and the sports video in \cite{su2016pano2vid, hu2017deep, lin2017tell}. Different from these conventional approaches, our online-DHP approach is based on the learned model of our offline approach, which encodes HM-related content rather than detecting the object-of-interest. Consequently, our approach is object-free and thus more suitable for generic panoramic video.
\section{Database establishment and findings}
\label{Database_establishment_and_analysis}
In this section, we collect a new database that includes 76 panoramic video sequences with the HM data of 58 subjects, called the PVS-HM database.
Along with the HM data, the eye fixation data of 58 subjects are also obtained in our PVS-HM database.
Our PVS-HM database allows quantitative analysis of subjects' HM on panoramic video, and it can also be used for learning to predict where humans look in panoramic video. Our database is available at \url{https://github.com/YuhangSong/dhp} for facilitating future research. In the following, we present how we conducted the experiment to obtain the PVS-HM database.
First, we selected 76 panoramic video sequences from YouTube and VRCun, with resolutions ranging from 3K to 8K.
As shown in Table 1 of the supplemental material, the content of these sequences is diverse, including computer animation, driving, action sports, movies, video games, scenery, and so forth.
Then, the duration of each sequence was cut to be from 10 to 80 seconds (26.9 seconds on average), such that fatigue can be reduced when viewing panoramic video.
To ensure video quality, all panoramic video sequences were compressed using H.265 \cite{Sullivan2013Overview} with their bit-rates unchanged.
Note that the audio tracks were removed to avoid the impact of acoustic information on visual attention.
In our experiment, 58 subjects (41 males and 17 females, ranging in age from 18 to 36) wore the head-mounted display of an HTC Vive to view all 76 panoramic video sequences at a random display order.
When watching panoramic video, the subjects were seated on a swivel chair and were allowed to turn around freely, such that all panoramic regions are accessible.
To avoid eye fatigue and motion sickness, the subjects had a 5-minute rest after viewing each session of 19 sequences.
With the support of the software development kit of the HTC Vive, we recorded the posture data of each subject as they viewed the panoramic video sequences.
Based on the recorded posture data, the HM data of all 58 subjects at each frame of the panoramic video sequences were obtained and stored for our PVS-HM database, in terms of longitude and latitude in the GCS.
In addition to the recorded HM data, the eye fixations were also captured by the VR eye-tracking module aGlass\footnote{When subjects view panoramic video, the aGlass device is able to capture the eye fixations within the FoV at less than $0.5^{\circ}$ error. See \url{http://www.aglass.com/?lang=en} for more details about this device.}, which was embedded in the head-mounted display of the HTC Vive.
Then, we mine our PVS-HM database to analyze the HM data of different subjects across panoramic video sequences.
Specifically, we have the following five findings, the analysis of which is presented in the supplemental material.
1) The HM positions on panoramic video possess front center bias (FCB).
2) When watching panoramic video, different subjects are highly consistent in HM positions.
3) The magnitude of HM scanpaths is similar across subjects, when viewing the same regions of panoramic video.
4) The direction of HM scanpaths on panoramic video is highly consistent across subjects.
5) Almost $50\%$ of subjects are consistent in one HM scanpath direction (among 8 uniformly quantized directions), and over $85\%$ of subjects are consistent within three directions for HM scanpaths.
\begin{figure*}
\begin{center}
\vspace{-1em}
\centerline{\includegraphics[width=2.0\columnwidth]{main_framework}}
\vspace{-1em}
\caption{\footnotesize{Overall framework of the offline-DHP approach.}}
\label{main-framework}
\end{center}
\vspace{-1em}
\end{figure*}
\section{Offline-DHP approach}\label{sec::offline-DHP}
\subsection{Framework of offline-DHP}
\label{framework}
In this section, we present our offline-DHP approach, in light of our findings in Section \ref{Database_establishment_and_analysis}.
Figure \ref{main-framework} shows the overall framework of our approach, in which the multiple DRL workflows are embedded to generate the HM maps of input panoramic video frames.
The procedure and notations of Figure \ref{main-framework} are presented in the following.
As shown in Figure \ref{main-framework}, the input to our offline-DHP approach is the panoramic video frames $\{\mathbf{F}_t\}_{t=1}^{T}$ with frame number $t$ ranging from $1$ to $T$.
Since \textit{Finding 2} has shown that the HM positions are highly consistent across different subjects, we propose to generate the HM maps $\{\mathbf{H}_t\}_{t=1}^{T}$ for modeling human attention on panoramic video, viewed as the output of our offline-DHP approach. The HM map $\mathbf{H}_t$ of frame $t$ represents the probability of each pixel being the HM position.
Assuming that $\{(\hat{x}^n_t, \hat{y}^n_t)\}_{n=1}^{N}$ are the HM positions at the $t$-th frame, $\mathbf{H}_t$ is obtained by convolving $\{(\hat{x}^n_t, \hat{y}^n_t)\}_{n=1}^{N}$ with a 2D Gaussian filter, similar to the saliency maps of 2D video.
Here, $n$ means the $n$-th HM position and $N$ is the total number of HM positions.
Because \textit{Finding 5} has indicated that the HM scanpaths of different subjects are consistent in more than one direction, the HM positions $\{({x}^m_t, {y}^m_t)\}_{m=1}^{M}$ of subjects $\{m\}_{m=1}^{M}$ may be different from each other. Accordingly, this paper assumes that the number of predicted HM positions $N$ is equivalent to $M$ at each frame, for predicting the HM positions of all subjects.
In other words, to obtain $(\hat{x}^n_t, \hat{y}^n_t)$, our offline-DHP approach applies one DRL workflow to estimate the HM positions of one subject.
Then, $N$ DRL workflows are run to obtain $N$ HM positions $\{(\hat{x}^n_t, \hat{y}^n_t)\}_{n=1}^{N}$ at frame $t$, simulating the ground-truth HM positions of $M$ ($=N$) subjects at this frame.
At a panoramic frame, each of the DRL workflows works independently to generate an HM position by randomly sampling actions based on a learned \textit{policy} $\pi_t$, which is modeled as the predicted probability distribution of the HM direction at frame $t$.
Note that all DRL workflows share the same \textit{policy} $\pi_t$ in our approach.
Let $\hat{\alpha}^n_t$ and $\hat{\nu}^n_t$ be the direction and magnitude of the predicted HM scanpath at frame $t$, obtained from the $n$-th DRL workflow.
They are both viewed as the $actions$ of the DRL workflow.
In a single DRL workflow, $\{(\hat{x}^n_t, \hat{y}^n_t)\}_{t=1}^T$ can be modeled by determining a series of \textit{actions}: $\{\hat{\alpha}^n_t\}_{t=1}^T$ and $\{\hat{\nu}^n_t\}_{t=1}^T$.
It is worth pointing out that $\{\hat{\alpha}^n_t\}_{t=1}^{T}$ and $\{\hat{\nu}^n_t\}_{t=1}^{T}$ are predictable as the \textit{actions} of the DRL workflow, since \textit{Findings 3} and \textit{4} have indicated that subjects are consistent in the magnitudes and directions of HM scanpaths.
The direction and magnitude of the ground-truth HM scanpath are denoted by $\alpha^m_t$ and $\nu^m_t$ for the $m$-th subject at frame $t$.
As can be seen in Figure \ref{main-framework}, in each workflow, one HM scanpath is generated through the interaction between the FoV extractor\footnote{Note that the extracted FoV is $103^{\circ} \times 60^{\circ}$, which is the same as the setting of the head-mounted display.} and HM scanpath predictor.
Specifically, $\{\mathbf{o}^n_t\}_{t=1}^{T}$ denotes the FoVs of frames from $1$ to $T$ in the $n$-th DRL workflow. Figure \ref{main-framework} shows that FoV $\mathbf{o}^n_t$ is extracted via making its center locate at the HM position $(\hat{x}^n_t,\hat{y}^n_t)$, in which $(\hat{x}^n_t,\hat{y}^n_t)$ is generated by the predicted \textit{action} of HM scanpath $(\hat{\alpha}^n_{t-1},\hat{\nu}^n_{t-1})$ at the previous video frame.
Then, the content of the extracted FoV works as the \textit{observation} of DRL, for predicting the next \textit{action} of HM scanpath $(\hat{\alpha}^n_{t},\hat{\nu}^n_{t})$.
The HM scanpath generated by each DRL workflow is forwarded to obtain HM positions at incoming frames.
Subsequently, the HM positions from multiple DRL workflows are integrated, and then smoothed by a 2D Gaussian filter.
Finally, the HM maps $\{\mathbf{H}_t\}_{t=1}^{T}$ of the panoramic video are obtained, which model the heat maps for the HM positions at each frame.
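The interaction loop of one workflow can be summarized by the following
Python sketch; \texttt{extract\_fov}, \texttt{policy\_step} and \texttt{move}
are placeholders for the FoV extractor, the shared policy network and the GCS
position update of Figure \ref{main-framework}, not actual functions of our
released code:
\begin{verbatim}
# Sketch of one DRL workflow: roll out an HM scanpath over T frames.
# extract_fov, policy_step and move are placeholder helpers for the
# FoV extractor, the policy network and the GCS position update.
def rollout(frames, policy_step, x0=0.0, y0=0.0):
    x, y, path = x0, y0, []
    state = None                    # LSTM state of the policy network
    for F in frames:
        obs = extract_fov(F, x, y)  # 103 x 60 degree FoV at (x, y)
        (alpha, nu), state = policy_step(obs, state)
        x, y = move(x, y, alpha, nu)  # apply the scanpath action
        path.append((x, y))
    return path
\end{verbatim}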
\subsection{DRL model of the offline-DHP approach}
\label{train}
As described in Section \ref{framework}, the DRL workflow is a key component in our offline-DHP framework, which aims to predict the HM scanpaths.
This section presents how to train the DRL model of each workflow for predicting the HM maps.
Note that our offline-DHP approach runs multiple workflows to train one global-shared model, the same as the asynchronous DRL method \cite{mnih2016asynchronous}.
In this section, we take the $n$-th workflow as an example.
Figure \ref{train-framework} shows the framework of training the DRL model.
As shown in this figure, the FoV of the input video frame is extracted based on the \textit{action} of the HM scanpath predicted at the previous frame.
The extracted FoV, as the \textit{observation}, is then fed into the DRL network.
The structure of the DRL network follows \cite{mnih2016asynchronous}, which has four 32-filter convolutional layers (size: $21\times 21$, $11\times 11$, $6\times 6$ and $3\times 3$), one flatten layer (size: $288$) and LSTM cells (size: 256).
Then, the 256-dimensional LSTM feature $\mathbf{f}^n_{t}$ is output at frame $t$, as part of the \textit{observed state} in the $n$-th DRL workflow.
In addition, the \textit{reward}, which measures the similarity between the predicted and ground-truth HM scanpaths, is estimated to evaluate the \textit{action} made by the DRL model. Then, the \textit{reward} is used to guide the decision on the \textit{action} of the DRL model, i.e., the HM scanpath at the current frame.
In this paper, we denote $r^{\alpha}_{n,t}$ and $r^{\nu}_{n,t}$ as the \textit{rewards} for evaluating \textit{actions} $\hat{\alpha}^n_t$ and $\hat{\nu}^n_t$, respectively, in the $n$-th DRL workflow.
Finally, the \textit{environment} of our DRL model is composed of the \textit{observation} of the extracted FoV and the \textit{reward} of HM scanpath prediction.
In training the DRL model, the \textit{environment} interacts with the HM scanpath predictor.
The interaction is achieved in our DRL model through the following procedure.\\
(1) At frame $t$, the FoV extractor obtains the current $\textit{observation}$ $\mathbf{o}^n_t$ ($103^{\circ} \times 60^{\circ}$) from the input video frame $\mathbf{F}_t$, according to the predicted HM position $(\hat{x}^n_t,\hat{y}^n_t)$.
In our work, $\mathbf{o}^n_{t}$ is projected onto the 2D region and is then down-sampled to $42\times42$.\\
(2) The current $\mathbf{o}^n_t$ and the LSTM feature $\mathbf{f}^n_{t-1}$ from the last frame are delivered to the DRL network in the HM scanpath predictor.
In our work, the DRL network contains four convolutional layers and one LSTM layer \cite{hausknecht2015deep}, which are used to extract the spatial and temporal features, respectively. The details about the architecture of the DRL network can be found in Figure \ref{train-framework}.\\
(3) At frame $t$, the DRL network produces the LSTM feature $\mathbf{f}^n_{t}$, HM scanpath magnitude $\hat{\nu}^n_{t}$ and policy $\pi_{t}$. Here, $\pi_{t}$ is modeled by the probability distribution over the \textit{actions} of HM scanpath directions.\\
(4) Given $\pi_{t}$, the HM scanpath predictor randomly samples an \textit{action} $\hat{\alpha}^n_t$ with standard deviation $\varepsilon$, such that exploration is ensured in decision making (see the sketch after this procedure). Here, $\hat{\alpha}^n_t$ takes one of 8 discrete directions in GCS: $\{ 0^{\circ}, 45^{\circ}, \cdots, 315^{\circ} \}$.\\
(5) \textit{Environment} is updated using $\hat{\nu}^n_t$ and $\hat{\alpha}^n_t$, leading to $(\hat{x}^n_t, \hat{y}^n_t)\longrightarrow (\hat{x}^n_{t+1},\hat{y}^n_{t+1})$. The FoV extractor returns a new \textit{observation} $\mathbf{o}^n_{t+1}$ according to the HM position $(\hat{x}^n_{t+1},\hat{y}^n_{t+1})$. The \textit{reward} estimator returns the \textit{rewards} $r^{\nu}_{n,t}$ and $r^{\alpha}_{n,t}$ in predicting $\hat{\nu}^n_t$ and $\hat{\alpha}^n_t$, based on the ground-truth HM scanpaths of $\{\nu^m_t\}_{m=1}^{M}$ and $\{\alpha^m_t\}_{m=1}^{M}$. \\
(6) A set of experiences $\{ \mathbf{o}^n_{t}, \! \mathbf{f}^n_{t-1},\! \hat{\nu}^n_t,\! \hat{\alpha}^n_t,\! r^{\nu}_{n,t},\! r^{\alpha}_{n,t} \}$ are stored in an experience buffer for frame $t$.
In addition, $\mathbf{o}^n_{t+1}$ and $\mathbf{f}^n_{t}$ are preserved for processing frame $t+1$.\\
(7) Once $t$ meets the termination condition of exceeding the maximum frame number $T$, all experiences in the buffer are delivered to the optimizer for updating the DRL network.
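As an illustration of step (4), sampling the direction \textit{action} from
policy $\pi_t$ can be sketched as follows, where $\pi_t$ is represented as a
length-8 probability vector over the quantized directions:
\begin{verbatim}
# Sketch of step (4): sample a direction action from the policy's
# probability vector over the 8 quantized directions in GCS.
import numpy as np

DIRECTIONS = np.arange(8) * 45.0   # {0, 45, ..., 315} degrees

def sample_direction(pi):
    # pi: length-8 probability vector output by the DRL network;
    # random sampling keeps exploration in decision making.
    idx = np.random.choice(8, p=pi)
    return DIRECTIONS[idx]
\end{verbatim}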
\begin{figure*}
\begin{center}
\centerline{\includegraphics[width=1.8\columnwidth]{training_framework_3}}
\caption{\footnotesize{Framework of training the DRL model to obtain each DRL workflow of the offline-DHP approach (Figure \ref{main-framework}).}}
\label{train-framework}
\end{center}
\end{figure*}
\textbf{Reward Estimation.}
Next, we focus on modeling the \textit{rewards} $r^{\alpha}_{n,t}$ and $r^{\nu}_{n,t}$ in determining the \textit{actions} of HM scanpaths.
When training the DRL model, our goal is to make the prediction of $\hat{\alpha}^n_{t}$ and $\hat{\nu}^n_{t}$ approach the ground-truth HM scanpaths.
Thus, the \textit{rewards} $r^{\alpha}_{n,t}$ and $r^{\nu}_{n,t}$ can be represented by the differences from $\hat{\alpha}^n_{t}$ to $\{{\alpha}^m_{t}\}_{m=1}^M$ and from $\hat{\nu}^n_{t}$ to $\{{\nu}^m_{t}\}_{m=1}^M$, respectively.
In our approach, these differences are measured by Gaussian distributions.
We further consider the distances from predicted HM position $(\hat{x}^n_t,\hat{y}^n_t)$ to $\{(x^{m}_{t},y^{m}_{t})\}_{m=1}^M$ in calculating the \textit{rewards} of $r^{\alpha}_{n,t}$ and $r^{\nu}_{n,t}$, which are also modeled by the 2D Gaussian distribution.
This consideration is because only the consistent HM regions have similar HM scanpaths, according to the analysis of \textit{Finding 4}.
Then, $r^{\alpha}_{n,t}$ can be written as
\begin{equation}
\label{reward-alpha}
r^{\alpha}_{n,t} = \frac{1}{M}\sum_{m=1}^{M} e^{-\frac{1}{2}\left(\frac{D_d(\hat{\alpha}^n_{t}, \alpha^m_{t})}{\rho}\right)^2} e^{-\frac{1}{2}\left(\frac{D_s((\hat{x}^n_{t},\hat{y}^n_{t}),(x^m_{t},y^m_{t}))}{\varrho}\right)^2}.
\end{equation}
In \eqref{reward-alpha}, $D_d$ defines the phase difference, and $D_s$ denotes the \textit{great-circle distance} \cite{shumaker1984astronomical}. Moreover, $\rho$ and $\varrho$ are the standard deviations of Gaussian distributions, as the hyper-parameters.
In \eqref{reward-alpha}, the similarity score of $e^{-\frac{1}{2}\left(\frac{D_d(\hat{\alpha}^n_{t}, \alpha^m_{t})}{\rho}\right)^2}$ measures the similarity of HM direction between the ground-truth and \textit{agent action},
while $e^{-\frac{1}{2}\left(\frac{D_s((\hat{x}^n_{t},\hat{y}^n_{t}),(x^m_{t},y^m_{t}))}{\varrho}\right)^2}$ qualifies the validity of the corresponding similarity score in calculating the reward.
Then, given the HM direction, we can estimate its corresponding magnitude through reward $r^{\nu}_{n,t}$. Similar to \eqref{reward-alpha}, we have
\begin{equation}
\label{reward-nu}
r^{\nu}_{n,t} \!=\! \frac{1}{M}\!\sum_{m=1}^{M}\!
e^{\!-\frac{1}{2}\!\left(\!{\frac{\hat{\nu}^n_{t}-\nu^{m}_{t}}{\varsigma}}\!\right)\!^2} \! e^{\!-\frac{1}{2}\!\left(\!\frac{D_d(\hat{\alpha}^n_{t}, \alpha^m_{t})}{\rho}\!\right)\!^2} \!e^{\!-\frac{1}{2}\!\left(\!\frac{D_s((\hat{x}^n_{t},\hat{y}^n_{t}),(x^m_{t},y^m_{t}))}{\varrho}\!\right)\!^2},
\end{equation}
where $\varsigma$ is the hyper-parameter for the standard deviation of the HM scanpath magnitude.
As defined in \eqref{reward-nu}, $e^{\!-\frac{1}{2}\left(\!{\frac{\hat{\nu}^n_{t}-\nu^{m}_{t}}{\varsigma}}\!\right)^2}$ is the similarity score of the HM scanpath magnitude.
Reward $r^{\nu}_{n,t}$ is valid in predicting the magnitude only if both the predicted HM position and direction are similar to the ground-truth.
Thus, $e^{\!-\frac{1}{2}\left(\!\frac{D_d(\hat{\alpha}^n_{t}, \alpha^m_{t})}{\rho}\!\right)^2}$ and $e^{\!-\frac{1}{2}\left(\!\frac{D_s((\hat{x}^n_{t},\hat{y}^n_{t}),(x^m_{t},y^m_{t}))}{\varrho}\!\right)^2}$ are introduced in \eqref{reward-nu} to determine the validity of the similarity score.
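In NumPy, the two \textit{rewards} can be sketched as below, with the
per-subject quantities vectorized over the $M$ subjects; \texttt{angle\_diff}
(the phase difference $D_d$) and \texttt{great\_circle} (the distance $D_s$)
are assumed helpers, averaging corresponds to the $1/M$ factor in
\eqref{reward-alpha} and \eqref{reward-nu}, and the default hyper-parameter
values follow Section \ref{sec:settings}:
\begin{verbatim}
# Sketch of the rewards in Eqs. (reward-alpha) and (reward-nu).
# angle_diff and great_circle are assumed helpers computing D_d and
# D_s per subject; inputs *_gt are arrays over the M subjects.
import numpy as np

def reward_alpha(a_hat, p_hat, a_gt, p_gt, rho=42.0, varrho=0.7):
    w_dir = np.exp(-0.5 * (angle_diff(a_hat, a_gt) / rho) ** 2)
    w_pos = np.exp(-0.5 * (great_circle(p_hat, p_gt) / varrho) ** 2)
    return (w_dir * w_pos).mean()

def reward_nu(n_hat, a_hat, p_hat, n_gt, a_gt, p_gt,
              varsigma=1.0, rho=42.0, varrho=0.7):
    s = np.exp(-0.5 * ((n_hat - n_gt) / varsigma) ** 2)
    w_dir = np.exp(-0.5 * (angle_diff(a_hat, a_gt) / rho) ** 2)
    w_pos = np.exp(-0.5 * (great_circle(p_hat, p_gt) / varrho) ** 2)
    return (s * w_dir * w_pos).mean()
\end{verbatim}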
\textbf{Optimization.}
Next, we need to optimize the \textit{rewards} $r^{\alpha}_{n,t}$ and $r^{\nu}_{n,t}$, when learning the network parameters of our DRL model in Figure \ref{train-framework}.
Our offline-DHP approach applies the asynchronous DRL method \cite{mnih2016asynchronous} to learn the DRL parameters with optimized \textit{rewards}.
Hence, multiple workflows are run to interact with multiple \textit{environments} with workflow-specific parameter vectors $\{ \theta^{n}_{\nu}, \theta^{n}_{\pi}, \theta^{n}_{V} \}$, producing $\hat{\nu}^n_t$, $\hat{\pi}^n_t$ and $V$.
Here, $V$ denotes the \textit{state value} output by the DRL network, which is obtained using the same way as \cite{mnih2016asynchronous}.
Meanwhile, global-shared parameter vectors $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$\footnote{As can be seen in Figure \ref{train-framework}, $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$ share all CNN and LSTM layers in our offline-DHP approach, but they are separated at the output layer.} are updated via an accumulating gradient.
For more details about the workflow-specific and global-shared parameter vectors, refer to \cite{mnih2016asynchronous}.
In our approach, \textit{reward} $r^{\nu}_{n,t}$ is optimized to train $\theta_{\nu}$ as follows:
\begin{equation}
\label{opt-1}
d \theta_{\nu} \leftarrow d \theta_{\nu} + \nabla_{\theta_{\nu}^{n}} \sum_{t=1}^{T} r^{\nu}_{n,t}.
\end{equation}
Moreover, we can optimize \textit{reward} $r^{\alpha}_{n,t}$ by
\begin{equation}
\label{opt-2}
\small d \theta_{V} \leftarrow d \theta_{V} + \nabla_{\theta_{V}^{n}} \sum_{t = 1}^{T} (\sum_{i=t}^{T} \gamma^{i-t} r^{\alpha}_{n,i} - V(\mathbf{o}^n_{t}, \mathbf{f}^n_{t-1} ; \theta_{V}^{n}))^2,
\end{equation}
\begin{small}
\begin{eqnarray}
\label{opt-3}
\hspace{0.1cm} \nonumber d \theta_{\pi} \leftarrow d \theta_{\pi} +\nabla_{\theta_{\pi}^{n}} \sum_{t = 1}^{T} \log \pi( \hat{\alpha}^n_{t} | \mathbf{o}^n_{t}, \mathbf{f}^n_{t-1} ; \theta_{\pi}^{n})\cdot \\
(\sum_{i = t}^{T} \gamma^{i-t} r^{\alpha}_{n,i} - V(\mathbf{o}^n_{t}, \mathbf{f}^n_{t-1} ; \theta_{V}^{n})),
\end{eqnarray}
\end{small}
where $\gamma$ is the discount factor of Q-learning \cite{watkins1992q}.
In addition, $V(\mathbf{o}^n_{t}, \mathbf{f}^n_{t-1} ; \theta_{V}^{n})$ denotes state value $V$ obtained by $\mathbf{o}^n_{t}, \mathbf{f}^n_{t-1}$ and $\theta_{V}^{n}$; $\pi( \hat{\alpha}^n_{t} | \mathbf{o}^n_{t}, \mathbf{f}^n_{t-1} ; \theta_{\pi}^{n})$ stands for the probability of \textit{action} $\hat{\alpha}^n_{t}$ that is made by policy $\pi_t$ from $\mathbf{o}^n_{t}, \mathbf{f}^n_{t-1}$ and $\theta_{\pi}^{n}$.
Finally, based on the above equations, RMSProp \cite{tieleman2012lecture} is applied to optimize \textit{rewards} in the training data. Consequently, the workflow-specific and global-shared parameter vectors can be learned to predict HM scanpaths.
These learned parameter vectors are then used to determine the scanpaths and positions of HM through each DRL workflow in our offline-DHP approach.
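For illustration, the objectives behind \eqref{opt-2} and \eqref{opt-3}
can be sketched as standard advantage actor-critic losses; the following
is a simplified NumPy version of the per-workflow computation, and an
automatic-differentiation framework is assumed for taking the gradients:
\begin{verbatim}
# Sketch of the objectives underlying the gradient updates: discounted
# returns, a squared value loss and a policy-gradient loss. Gradients
# of these losses w.r.t. the network parameters give the accumulated
# updates; an autodiff framework is assumed.
import numpy as np

def returns(rewards, gamma=0.99):
    R, out = 0.0, []
    for r in reversed(rewards):
        R = r + gamma * R
        out.append(R)
    return np.array(out[::-1])       # sum_i gamma^{i-t} r_i

def ac_losses(log_pi, values, r_alpha, gamma=0.99):
    adv = returns(r_alpha, gamma) - values   # advantage estimate
    value_loss = np.sum(adv ** 2)            # cf. the value update
    policy_loss = -np.sum(log_pi * adv)      # cf. the policy update
    return policy_loss, value_loss
\end{verbatim}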
\section{Online-DHP approach}\label{sec:online-approach}
\begin{figure}
\vspace{-2em}
\begin{center}
\centerline{\includegraphics[width=1.05\columnwidth]{whole_framework_online_5}}
\vspace{-1em}
\caption{\footnotesize{Framework of the online-DHP approach.}}
\label{online-framework}
\end{center}
\vspace{-2em}
\end{figure}
In this section, we present our online-DHP approach.
The online-DHP approach refers to predicting a specific subject's HM position $(\hat{x}_{t+1},\hat{y}_{t+1})$ at frame $t+1$, given his/her HM positions $\{(x_{1},y_{1}),\ldots, (x_{t},y_{t})\}$ till frame $t$.
Note that the definitions of the notations in this section are similar to those in Section \ref{sec::offline-DHP}, and the only difference is that $n$ and $m$ are removed in all notations because there is only one subject/workflow in online-DHP.
Additionally, we define the subject as the \textit{viewer}, whose HM positions need to be predicted online.
Figure \ref{online-framework} shows the framework of our online-DHP approach.
It is intuitive that the current HM position is correlated with the previous HM scanpaths and video content.
Therefore, the input to our online-DHP framework is the \textit{viewer's} HM scanpath $\{(\alpha_1,\nu_1),\ldots, (\alpha_{t-1},\nu_{t-1})\}$ and frame content $\{\mathbf{F}_1, \ldots, \mathbf{F}_t \}$, and the output is the predicted HM position $(\hat{x}_{t+1},\hat{y}_{t+1})$ at the next frame for the \textit{viewer}.
This can be viewed as online prediction of HM positions $\{(\hat{x}_{t},\hat{y}_{t})\}_{t=1}^{T}$. To this end, our online-DHP consists of two stages: the training and prediction stages.
In the first stage, the parameters of the DRL network are trained.
In the second stage, the \textit{action} of the HM scanpath is generated from the trained DRL network, to predict the HM position online.
In the following, we discuss these two stages in more detail.
\subsection{Stage I: Training}
At the beginning frame, the HM position $(\hat{x}_1,\hat{y}_1)$ of the \textit{viewer} is initialized to be the center of the front region, which is the general setting of the panoramic video player. Then, the trained DRL network of offline-DHP is loaded as the initial DRL network for online prediction, both sharing the same structure.
The reason for loading the offline-DHP network is that it encodes the knowledge of HM-related features.
Later, this initial DRL network is fine-tuned by the \textit{viewer's} HM scanpath at incoming frames.
Next, we focus on the algorithm for training the DRL network in our online-DHP approach. As previously mentioned, the initial parameters of the DRL network at the first frame are directly from those of offline-DHP. At each of the incoming frames, several episodes are run to update the DRL network for online-DHP. The following summarizes the procedure of one episode at frame $t+1$.
\begin{enumerate}
\item Iterate the following steps from $i=1$ to $t$. At each iteration, $(\hat{\alpha}_i,\hat{\nu}_i)$ and $(\alpha_i,\nu_i)$ are the predicted and ground-truth \textit{actions}, respectively, of the HM scanpath for the \textit{viewer}, and $\mathbf{o}_i$ is the \textit{observation} of the FoV content.
\item Take the \textit{action} of $(\hat{\alpha}_i,\hat{\nu}_i)$ using the DRL network, given the current \textit{observation} $\{\mathbf{o}_1, \ldots, \mathbf{o}_i\}$ till frame $i$. The \textit{action} of $\hat{\alpha}_i$ selects one among 8 discrete HM scanpath directions, i.e., $\{ 0^{\circ}, 45^{\circ}, \cdots, 315^{\circ} \}$. The \textit{action} of $\hat{\nu}_i$ is a scalar of HM scanpath magnitude.
\item Calculate \textit{rewards} $(r^{\alpha}_{i}, r^{\nu}_{i})$ from the \textit{reward} estimator with \eqref{reward-alpha} and \eqref{reward-nu}, which measures how close the \textit{action} $(\hat{\alpha}_i,\hat{\nu}_i)$ is to the ground-truth HM scanpath $(\alpha_i,\nu_i)$. Here, the sums in \eqref{reward-alpha} and \eqref{reward-nu} are not required for the \textit{reward} calculation, since the ground-truth HM scanpath of online prediction is from a single \textit{viewer}, rather than from all subjects.
\item Generate new \textit{observation} ${\mathbf{o}_{i+1}}$ from the FoV extractor with the above \textit{action} $(\hat{\alpha}_i,\hat{\nu}_i)$, and then input it to the DRL network.
\item Update the DRL network using \eqref{opt-1}, \eqref{opt-2} and \eqref{opt-3} and stop iterations, if the iteration number $i$ is equivalent to $t$. Otherwise, proceed to step 2) for the next iteration.
\end{enumerate}
Here, the definitions of \textit{action}, \textit{reward} and \textit{observation} are the same as those in Section \ref{train}.
The above iterations follow the same implementation as the training of the DRL model in offline-DHP, which was already presented in Section \ref{train}.
Once the above iterations are terminated, our algorithm moves to the next episode.
After a number of episodes, the training stage ends for frame $t+1$, when meeting the termination conditions.
In our approach, there are two termination conditions. The first condition is the maximum number $E$ of episodes.
The second condition is based on the metric of mean overlap (MO), which measures how close the predicted HM position is to the ground-truth HM position.
MO ranges from 0 to 1, and a larger MO indicates a more precise prediction.
Specifically, MO is defined as,
\begin{equation}
\label{mo-defination}
\textrm{MO} =\frac{A(\textrm{FoV}_{p} \cap \textrm{FoV}_{g})}{A(\textrm{FoV}_{p} \cup \textrm{FoV}_{g})},
\end{equation}
where $\textrm{FoV}_{p}$ and $\textrm{FoV}_{g}$ represent the FoVs at the predicted and ground-truth HM positions, respectively.
In \eqref{mo-defination}, $A$ represents the area of a panoramic region, measured in number of pixels.
Then, the MO result of \eqref{mo-defination} at each episode is compared with a threshold $th_{\text{MO}}$ to determine whether the training stage is terminated.
Finally, the trained DRL network can be obtained at frame $t+1$, once satisfying one of the above termination conditions.
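For reference, MO can be computed as an intersection-over-union of binary FoV
masks in the equirectangular plane, matching the pixel-count areas of
\eqref{mo-defination}; in the sketch below, \texttt{fov\_mask}, which
rasterizes the $103^{\circ} \times 60^{\circ}$ FoV at a given HM position, is
an assumed helper:
\begin{verbatim}
# Sketch of the MO metric: intersection-over-union of the FoVs at the
# predicted and ground-truth HM positions. fov_mask is an assumed
# helper rasterizing a 103 x 60 degree FoV as a boolean H x W map.
import numpy as np

def mean_overlap(pred_pos, gt_pos, W=360, H=180):
    fov_p = fov_mask(pred_pos, W, H)
    fov_g = fov_mask(gt_pos, W, H)
    inter = np.logical_and(fov_p, fov_g).sum()
    union = np.logical_or(fov_p, fov_g).sum()
    return inter / union   # in [0, 1]; larger is more precise
\end{verbatim}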
Algorithm \ref{online-DHP-algorithm-training} presents the summary of the training stage in online-DHP.
\begin{algorithm}
\caption{\hspace{-.3em}: Algorithm for the training stage of online-DHP to predict the HM position at frame $t+1$.}
\label{online-DHP-algorithm-training}
\footnotesize
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Panoramic video frames $\{\mathbf{F}_1, \ldots, \mathbf{F}_t \}$, and the ground-truth HM positions of the \textit{viewer} $\{(x_{1},y_{1}),\ldots, (x_{t},y_{t})\}$.
\STATE Initialize the DRL network of online-DHP with parameter vectors $\{ \theta_{{\nu}}, \theta_{{\pi}}, \theta_{V} \}$, by loading the network of offline-DHP.
\FOR{$e=1$ {\bfseries to} $E$}
\STATE Initialize the HM position to be the center of the front region: $\hat{x}_1=0,\hat{y}_1=0$.
\STATE Initialize the LSTM feature to be the zero vector: $\mathbf{f}_0=\mathbf{0}$.
\FOR{$i=1$ {\bfseries to} $t-1$}
\STATE Extract \textit{observation} $\mathbf{o}_i$ (i.e., FoV) from $\mathbf{F}_i$ according to $(\hat{x}_{i},\hat{y}_{i})$.
\STATE Obtain \textit{policy} $\pi_{i}$ and LSTM feature $\mathbf{f}_{i}$ using the DRL network with $\{\mathbf{o}_i, \mathbf{f}_{i-1}, \theta_{\pi}\}$.
\STATE Select \textit{action} $\hat{\alpha}_{i}$ according to the $\epsilon$-greedy policy of $\pi_{i}$.
\STATE Generate \textit{action} $\hat{\nu}_{i}$ using the DRL network given $\mathbf{o}_{i}, \mathbf{f}_{i-1}$ and $ \theta_{\nu}$.
\STATE Calculate $(\hat{x}_{i+1}, \hat{y}_{i+1})$ with regard to $\hat{\alpha}_{i},\hat{\nu}_{i}$, and $(\hat{x}_{i}, \hat{y}_{i})$.
\STATE Estimate \textit{rewards} $r^{\nu}_{i}$ and $r^{\alpha}_{i}$ through \eqref{reward-alpha} and \eqref{reward-nu} for $(\hat{\alpha}_{i},\hat{\nu}_{i})$.
\STATE Calculate the MO between $(\hat{x}_{i},\hat{y}_{i})$ and $(x_{i},y_{i})$, denoted as $\text{MO}_i$.
\STATE Store a set of experiences: $\{ \mathbf{o}_{i}, \! \mathbf{f}_{i-1},\! \hat{\nu}_{i},\! \hat{\alpha}_{i},\! r^{\nu}_{i},\! r^{\alpha}_{i} \}$.
\STATE $i \leftarrow i+1$.
\ENDFOR
\STATE Update $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$ according to \eqref{opt-1}, \eqref{opt-2}, \eqref{opt-3}, in which $\{ \theta^{n}_{\nu}, \theta^{n}_{\pi}, \theta^{n}_{V} \}$ are replaced by $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$.
\STATE $e \leftarrow e+1$.
\STATE Calculate the average MO through $\text{MO} = \frac{\sum_{i=1}^{t-1} \text{MO}_{i}}{t-1}$.
\IF{$\text{MO}> th_{\text{MO}}$}
\STATE \textbf{break}
\ENDIF
\ENDFOR
\STATE {\bfseries Return:} The trained parameter vectors: $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$.
\end{algorithmic}
\end{algorithm}
\vspace{-1em}
\subsection{Stage II: Prediction}
When the average MO is larger than threshold $th_{\text{MO}}$, the switch of Figure \ref{online-framework} is turned to ``predict'', and the DRL network makes an action of the HM scanpath at frame $t+1$.
Note that if the number of training episodes exceeds $E$, then ``predict'' is also switched on, such that the training episodes end within a limited time.
When entering the prediction stage, the DRL model trained in the first stage is used to produce the HM position as follows.
First, the LSTM features $\{\mathbf{f}_i\}_{i=1}^{t-1}$ are sequentially updated from frame $1$ to $t-1$, based on the observed FoVs $\{\mathbf{o}_i\}_{i=1}^{t-1}$ and the DRL parameters $\theta_{\pi}$ of the training stage. Note that the LSTM feature is initialized with the zero vector $\mathbf{0}$ at frame $1$. Then, $\{\mathbf{o}_t, \mathbf{f}_{t-1}, \theta_{\pi}\}$ produce action $\hat{\alpha}_t$ of the HM scanpath direction. In addition, the HM scanpath magnitude $\hat{\nu}_t$ is generated using $\{\mathbf{o}_t, \mathbf{f}_{t-1}, \theta_{\nu}\}$, in which the parameters of $\theta_{\nu}$ are obtained at the training stage. Afterwards, the HM position $(\hat{x}_{t+1}, \hat{y}_{t+1})$ at frame $t+1$ can be predicted, given the ground-truth HM position $(x_{t}, y_{t})$ and the estimated HM scanpath $(\hat{\alpha}_t, \hat{\nu}_t)$ at frame $t$. Algorithm \ref{online-DHP-algorithm-predicting} presents the summary of the prediction stage in online-DHP. Finally, online-DHP is achieved by alternating between the training and prediction stages, up to the current frame.
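The final position update, shared by both stages, moves the HM position by
magnitude $\hat{\nu}_t$ in direction $\hat{\alpha}_t$; a simplified sketch in
longitude/latitude is given below (the direction convention is one possible
choice, and the exact spherical geometry near the poles is ignored):
\begin{verbatim}
# Sketch of the HM position update in GCS: move from (x, y) by
# magnitude nu (degrees) in direction alpha. Longitude wraps around;
# latitude is clamped, ignoring exact spherical geometry near poles.
import math

def move(x, y, alpha, nu):
    a = math.radians(alpha)
    x_new = (x + nu * math.cos(a) + 180.0) % 360.0 - 180.0
    y_new = max(-90.0, min(90.0, y + nu * math.sin(a)))
    return x_new, y_new
\end{verbatim}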
\begin{algorithm}
\caption{\hspace{-.3em}: Algorithm for the prediction stage of online-DHP at frame $t+1$.}
\label{online-DHP-algorithm-predicting}
\footnotesize
\begin{algorithmic}[1]
\STATE {\bfseries Input:} The trained parameter vectors: $\{ \theta_{\nu}, \theta_{\pi}, \theta_{V} \}$ from the training stage, panoramic video frames $\{\mathbf{F}_1, \ldots, \mathbf{F}_t \}$, and the ground-truth HM positions of the \textit{viewer} $\{(x_{1},y_{1}),\ldots, (x_{t},y_{t})\}$.
\STATE Initialize the LSTM feature with the zero vector: $\mathbf{f}_0=\mathbf{0}$.
\FOR{$i=1$ {\bfseries to} $t-1$}
\STATE Extract \textit{observation} $\mathbf{o}_i$ (i.e., FoV) from $\mathbf{F}_i$ according to $(x_{i},y_{i})$.
\STATE Obtain LSTM feature $\mathbf{f}_{i}$ using the DRL network with $\{\mathbf{o}_{i},\!\mathbf{f}_{i-1},\! \theta_{\pi}\}$.
\STATE $i \leftarrow i+1$.
\ENDFOR
\STATE Extract \textit{observation} $\mathbf{o}_t$ (i.e., FoV) from $\mathbf{F}_t$ according to $(x_{t},y_{t})$.
\STATE Obtain \textit{policy} $\pi_{t}$ using the DRL network with $\{\mathbf{o}_{t},\!\mathbf{f}_{t-1},\! \theta_{\pi}\}$.
\STATE Choose \textit{action} $\hat{\alpha}_{t}$ using the greedy policy based on $\pi_{t}$.
\STATE Generate HM magnitude $\hat{\nu}_{t}$ using the DRL network with $\{\mathbf{o}_{t}, \mathbf{f}_{t-1}, \theta_{\nu}\}$.
\STATE Estimate HM position $(\hat{x}_{t+1},\hat{y}_{t+1})$ at frame $t+1$, upon $\hat{\alpha}_{t},\hat{\nu}_{t}$ and $(x_{t},y_{t})$.
\STATE {\bfseries Return:} The HM position at frame $t+1$: $(\hat{x}_{t+1},\hat{y}_{t+1})$.
\end{algorithmic}
\end{algorithm}
\vspace{-1em}
\section{Experimental results}
This section presents the experimental results for validating the effectiveness of our offline-DHP and online-DHP approaches. In Section \ref{sec:settings}, we discuss the settings of both offline-DHP and online-DHP in our experiments. Section \ref{abl_ex} presents the results of ablation experiments. Sections \ref{sec:evaluation_offline} and \ref{sec:evaluation_online} compare the performance of our offline-DHP and online-DHP approaches with those of other approaches in predicting HM positions, in the offline and online scenarios, respectively.
\subsection{Settings}\label{sec:settings}
For evaluating the performance of offline-DHP, we randomly divided all 76 panoramic sequences of our PVS-HM database into a training set (61 sequences) and a test set (15 sequences). In training the DRL model, the hyper-parameters $\rho$, $\varrho$ and $\varsigma$ of \eqref{reward-alpha} and \eqref{reward-nu} were tuned over the training set, when estimating the \textit{reward} of HM scanpath prediction. As a result, $\rho$, $\varrho$ and $\varsigma$ were set to be $42$, $0.7$ and $1.0$, respectively. In addition, we followed \cite{mnih2016asynchronous} to set the other hyper-parameters of DRL. For example, we set the discount factor $\gamma$ of \eqref{opt-2} and \eqref{opt-3} to be $0.99$ for \textit{reward} optimization. In our experiments, all 61 training sequences, each of which corresponds to a local DRL network, were used to update the global network as the trained DRL model.
The number of DRL workflows $N$ in the offline-DHP framework was set to be 58, which is the same as the number of subjects in our PVS-HM database.
Similar to \cite{matin1974saccadic}, the HM positions predicted by the 58 DRL workflows were convolved with a 2D Gaussian filter at each panoramic frame, to generate the HM map.
In our experiments, the HM maps in a panorama were projected to a 2D plane for facilitating visualization.
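To make this map-generation step concrete, the following minimal sketch (in Python with NumPy and SciPy; the map resolution, the variable names and the filter width are illustrative assumptions rather than the exact settings of our implementation) rasterizes the predicted HM positions and smooths them into an HM map:
\begin{verbatim}
# Minimal sketch: HM map from the HM positions of all workflows.
# Assumptions: positions are (row, col) pixel indices on the 2D
# equirectangular plane of size (H, W); sigma is given in pixels.
import numpy as np
from scipy.ndimage import gaussian_filter

def hm_map_from_positions(positions, H=256, W=512, sigma=8.0):
    hmap = np.zeros((H, W))
    for r, c in positions:
        hmap[int(r) % H, int(c) % W] += 1.0  # accumulate HM hits
    # 2D Gaussian filter ("wrap" treats the map as periodic)
    hmap = gaussian_filter(hmap, sigma=sigma, mode="wrap")
    if hmap.max() > 0:
        hmap /= hmap.max()  # normalize to [0, 1]
    return hmap
\end{verbatim}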
For evaluation, we measure the prediction accuracy of HM maps in terms of the correlation coefficient (CC), the normalized scanpath saliency (NSS) and the area under the receiver operating characteristic curve (AUC), which are three effective evaluation metrics \cite{Li_2015_ICCV} in saliency detection.
Here, the shuffled-AUC is applied, in order to remove the influence of FCB in the evaluation.
Note that larger values of CC, NSS and shuffled-AUC correspond to a more accurate prediction of HM maps.
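For completeness, CC and NSS can be computed as in the following sketch (Python; the definitions follow the standard saliency-evaluation literature, and the variable names are our own):
\begin{verbatim}
import numpy as np

def cc(pred, gt):
    # Pearson linear correlation between two HM maps.
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return float((p * g).mean())

def nss(pred, fix):
    # Mean of the z-normalized predicted map at the recorded HM
    # positions; `fix` is a binary map marking those positions.
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    return float(p[fix > 0].mean())
\end{verbatim}
The shuffled-AUC is computed analogously to the standard AUC, except that the negative samples are drawn from the HM positions of other sequences rather than uniformly at random, which removes the FCB effect.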
For evaluating the performance of online-DHP, we compared our approach with \cite{hu2017deep} and two baseline approaches.
Following \cite{hu2017deep}, the MO of \eqref{mo-defination} is measured as the metric to evaluate the accuracy of online prediction of HM positions. Note that a larger value of MO means a more accurate online prediction of HM positions.
Since the DRL network of offline-DHP was learned over 61 training sequences and used as the initial model of online-DHP, our comparison was conducted on all 15 test sequences of our PVS-HM database. In our experiments, the comparison was further performed over all test sequences of the database presented in \cite{hu2017deep}, in order to test the generalization ability of our online-DHP approach.
In our online-DHP approach, the hyperparameters $\rho$, $\varrho$, $\varsigma$ and $\gamma$ were set to the same values as those of the DRL workflows of offline-DHP.
The other hyperparameters were identical to those in the most recent DRL work of \cite{mnih2016asynchronous}.
In addition, the maximum number of episodes and the MO threshold were set to be 30 and 0.7, respectively, as the termination conditions in the training stage of online-DHP. Note that the MO threshold ensures the accuracy of HM position prediction, while the maximum episode number constrains the computational time of online-DHP.
\begin{figure}
\vspace{-1em}
\begin{center}
\centerline{\includegraphics[width=.6\columnwidth]{N_Ablation_new}}
\vspace{-1em}
\caption{\footnotesize{Performance of offline-DHP at different numbers of workflows.}}
\label{N_Ablation}
\end{center}
\vspace{-2em}
\end{figure}
\begin{figure*}
\vspace{-1em}
\begin{center}
\centerline{\includegraphics[width=1.8\columnwidth]{Mean_MO}}
\vspace{-2em}
\caption{\footnotesize{MO results between the online-DHP approaches with and without the trained offline-DHP network.}}
\label{Online_compare}
\end{center}
\vspace{-2em}
\end{figure*}
\begin{figure*}
\begin{center}
\vspace{-1em}
\centerline{\includegraphics[width=1.8\columnwidth]{mo}}
\vspace{-1.5em}
\caption{\footnotesize{MO results for Deep 360 Pilot, our online-DHP approach, and online-DHP without ground-truth HM positions of previous frames.}}
\label{MO_result_1}
\end{center}
\vspace{-1em}
\end{figure*}
\begin{table*}
\vspace{-1em}
\begin{center}
\caption{$\Delta$CC, $\Delta$NSS, $\Delta$S-AUC and $\Delta$MO between offline-DHP/online-DHP and the corresponding supervised baseline over 15 test sequences.}
\label{CC_NSS_MO_table}
\vspace{-1.5em}
\tiny
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc*{16}c}
\tabincell{c}{\rotatebox{45}{}} & \rotatebox{45}{}& \rotatebox{45}{}
& \rotatebox{45}{StarryPolar} & \rotatebox{45}{Symphony} & \rotatebox{45}{SpaceWar} & \rotatebox{45}{RioOlympics} & \rotatebox{45}{InsideCar}
& \rotatebox{45}{SpaceWar2} & \rotatebox{45}{Sunset} & \rotatebox{45}{BlueWorld} & \rotatebox{45}{Waterfall} & \rotatebox{45}{Dancing}
& \rotatebox{45}{CMLauncher2} & \rotatebox{45}{Guitar} & \rotatebox{45}{KingKong} & \rotatebox{45}{BTSRun} & \rotatebox{45}{WaitingForLove}
& \rotatebox{45}{\textbf{Average}}
\\
\toprule
\multirow{3}{*}{Offline}
&\multicolumn{2}{c}{$\Delta$CC}
& -0.475 & -0.076 & 0.041 & 0.007 & 0.441 & 0.236 & 0.093 & 0.178 & 0.109 & 0.416 & 0.079 & 0.302 & 0.101 & 0.001 & -0.089 & 0.091
\\
&\multicolumn{2}{c}{$\Delta$NSS}
& -0.524 & 0.834 & 0.534 & 0.566 & 0.500 & 2.625 & 0.735 & 1.469 & 1.014 & 3.768 & 1.013 & 2.342 & 0.242 & 0.813 & 0.062 & 1.066
\\
&\multicolumn{2}{c}{$\Delta$S-AUC}
& -0.025 & 0.024 & -0.025 & 0.156 & 0.330 & 0.112 & -0.056 & 0.113 & 0.025 & 0.267 & 0.255 & 0.040 & 0.133 & 0.320 & 0.101 & 0.118
\\
\midrule
\multirow{1}{*}{Online}&
\multicolumn{2}{c}{$\Delta$MO}
& 0.06 & 0.03 & 0.06 & 0.05 & 0.04 & 0.07 & 0.05 & 0.05 & 0.05 & 0.03 & 0.04 & 0.06 & 0.04 & 0.03 & 0.03 & 0.05
\\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-2em}
\end{table*}
\subsection{Ablation experiments}\label{abl_ex}
\textbf{Ablation on the workflow number in offline-DHP.} Our offline-DHP approach generates the HM maps of panoramic video from the HM positions predicted by multiple workflows. Thus, we conducted ablation experiments to investigate the performance of offline-DHP at different numbers of workflows.
Figure \ref{N_Ablation} shows the results of CC, NSS and shuffled-AUC for our offline-DHP approach, when the number of workflows $N$ varies from 1 to 118. Note that the results in this figure are averaged over all 15 test panoramic sequences.
We can see from Figure \ref{N_Ablation} that CC approximately converges at $N \geq 48$, and that NSS and shuffled-AUC approximately converge at $N \geq 58$.
Thus, we set the number of workflows $N$ to be 58 in our experiments.
\begin{table*}
\vspace{-.5em}
\begin{center}
\caption{CC results of offline HM map prediction by our and other approaches over 15 test sequences.}
\vspace{-1.5em}
\label{CC-table}
\begin{threeparttable}
\tiny
\resizebox{\textwidth}{!}{
\begin{tabular}{cc*{16}{c}c}
\tabincell{c}{\rotatebox{45}{CC}} & \rotatebox{45}{Method}
& \rotatebox{45}{StarryPolar} & \rotatebox{45}{Symphony} & \rotatebox{45}{SpaceWar} & \rotatebox{45}{RioOlympics} & \rotatebox{45}{InsideCar}
& \rotatebox{45}{SpaceWar2} & \rotatebox{45}{Sunset} & \rotatebox{45}{BlueWorld} & \rotatebox{45}{Waterfall} & \rotatebox{45}{Dancing}
& \rotatebox{45}{CMLauncher2} & \rotatebox{45}{Guitar} & \rotatebox{45}{KingKong} & \rotatebox{45}{BTSRun} & \rotatebox{45}{WaitingForLove}
& \rotatebox{45}{\textbf{Average}}
\\
\toprule
\multirow{4}{*}{\rotatebox{45}{Non-FCB}}
\abovestrut{0.01in}
& Our
& 0.185 & \textbf{0.710} & \textbf{0.573} & \textbf{0.717} & \textbf{0.783} & \textbf{0.673} & \textbf{0.673} & \textbf{0.678} & \textbf{0.763} & \textbf{0.837} & \textbf{0.585} & \textbf{0.645} & \textbf{0.751} & \textbf{0.764} & \textbf{0.471} & \textbf{0.654}
\\
& BMS
& \textbf{0.450} & 0.167 & 0.274 & 0.228 & 0.331 & 0.067 & 0.463 & 0.169 & 0.393 & 0.121 & 0.203 & 0.328 & 0.105 & 0.105 & 0.223 & 0.242
\\
& OBDL
& 0.107 & 0.184 & 0.028 & 0.190 & 0.260 & 0.100 & 0.308 & 0.027 & 0.025 & 0.176 & 0.117 & 0.066 & 0.125 & 0.047 & 0.222 & 0.132
\\
\belowstrut{-0.01in}
& SALICON $^{\ast}$
& 0.168 & 0.216 & 0.106 & 0.189 & 0.292 & 0.291 & 0.235 & 0.255 & 0.393 & 0.281 & 0.220 & 0.365 & 0.217 & 0.285 & 0.288 & 0.253
\\
\midrule
\multirow{4}{*}{\rotatebox{45}{FCB}}
\abovestrut{0.01in}
& Our
& 0.497 & \textbf{0.816} & \textbf{0.574} & \textbf{0.768} & \textbf{0.712} & \textbf{0.655} & \textbf{0.810} & \textbf{0.748} & \textbf{0.797} & \textbf{0.764} & \textbf{0.747} & \textbf{0.652} & \textbf{0.673} & \textbf{0.679} & \textbf{0.677} & \textbf{0.704}
\\
& BMS
& \textbf{0.692} & 0.567 & 0.520 & 0.494 & 0.495 & 0.368 & 0.711 & 0.500 & 0.655 & 0.414 & 0.546 & 0.494 & 0.311 & 0.322 & 0.503 & 0.506
\\
& OBDL
& 0.510 & 0.540 & 0.321 & 0.441 & 0.496 & 0.455 & 0.638 & 0.464 & 0.434 & 0.408 & 0.468 & 0.461 & 0.410 & 0.288 & 0.598 & 0.462
\\
\belowstrut{-0.01in}
& SALICON
& 0.642 & 0.670 & 0.552 & 0.629 & 0.539 & 0.527 & 0.745 & 0.530 & 0.621 & 0.453 & 0.651 & 0.496 & 0.445 & 0.431 & 0.622 & 0.570
\\
\midrule
\multicolumn{2}{c}{FCB Only}
\abovestrut{0.01in}\belowstrut{-0.01in}
& 0.557 & 0.747 & 0.317 & 0.403 & 0.292 & 0.239 & 0.585 & 0.477 & 0.583 & 0.387 & 0.735 & 0.356 & 0.271 & 0.201 & 0.497 & 0.443
\\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\item[] $\ast$ The DNN-based method was fine-tuned on our database with its default settings.
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table*}
\begin{table*}
\begin{center}
\vspace{-1.5em}
\caption{NSS results of offline HM map prediction by our and other approaches over 15 test sequences.}
\vspace{-1em}
\label{NSS-table}
\tiny
\resizebox{\textwidth}{!}{
\begin{tabular}{cc*{16}{c}c}
\tabincell{c}{\rotatebox{45}{NSS}} & \rotatebox{45}{Method}
& \rotatebox{45}{StarryPolar} & \rotatebox{45}{RioOlympics} & \rotatebox{45}{SpaceWar2} & \rotatebox{45}{Symphony} & \rotatebox{45}{SpaceWar}
& \rotatebox{45}{Waterfall} & \rotatebox{45}{Sunset} & \rotatebox{45}{BlueWorld} & \rotatebox{45}{Guitar} & \rotatebox{45}{Dancing}
& \rotatebox{45}{InsideCar} & \rotatebox{45}{CMLauncher2} & \rotatebox{45}{WaitingForLove} & \rotatebox{45}{BTSRun} & \rotatebox{45}{KingKong}
& \rotatebox{45}{\textbf{Average}}
\\
\toprule
\multirow{4}{*}{\rotatebox{45}{Non-FCB}}
\abovestrut{0.01in}
& Our
& 0.899 & \textbf{2.806} & \textbf{2.237} & \textbf{3.346} & \textbf{2.180} & \textbf{3.765} & \textbf{2.529} & \textbf{3.196} & \textbf{3.461} & \textbf{5.297} & \textbf{4.402} & \textbf{3.529} & \textbf{2.278} & \textbf{4.572} & \textbf{3.334} & \textbf{3.189}
\\
& BMS
& \textbf{1.313} & 0.772 & 0.137 & 0.710 & 0.807 & 1.673 & 1.613 & 0.841 & 1.497 & 0.670 & 1.657 & 1.034 & 0.997 & 0.546 & 0.119 & 0.959
\\
& OBDL
& 0.126 & 0.637 & 0.301 & 0.260 & 0.064 & 0.073 & 1.015 & 0.035 & 0.393 & 0.980 & 1.375 & 0.660 & 0.964 & 0.215 & 0.107 & 0.480
\\
\belowstrut{-0.01in}
& SALICON
& 0.628 & 0.584 & 0.396 & 1.093 & 1.348 & 1.528 & 1.194 & 0.877 & 1.167 & 1.541 & 0.876 & 1.265 & 0.858 & 1.121 & 1.362 & 1.056
\\
\midrule
\multirow{4}{*}{\rotatebox{45}{FCB}}
\abovestrut{0.01in}
& Our
& 1.825 & \textbf{2.911} & \textbf{2.064} & \textbf{3.756} & \textbf{2.031} & \textbf{3.755} & \textbf{2.943} & \textbf{3.393} & \textbf{3.395} & \textbf{4.608} & \textbf{3.816} & \textbf{4.463} & \textbf{3.351} & \textbf{3.931} & \textbf{2.883} & \textbf{3.275}
\\
& BMS
& \textbf{2.206} & 1.779 & 1.063 & 2.537 & 1.667 & 2.891 & 2.507 & 2.280 & 2.386 & 2.366 & 2.508 & 3.136 & 2.434 & 1.771 & 1.288 & 2.188
\\
& OBDL
& 1.712 & 1.572 & 1.371 & 2.368 & 1.055 & 1.920 & 2.225 & 2.007 & 2.377 & 2.319 & 2.556 & 2.777 & 2.912 & 1.580 & 1.693 & 2.030
\\
\belowstrut{-0.01in}
& SALICON
& 2.008 & 2.219 & 1.503 & 2.799 & 1.669 & 2.736 & 2.522 & 2.218 & 2.385 & 2.568 & 2.794 & 3.766 & 3.038 & 2.358 & 1.709 & 2.419
\\
\midrule
\multicolumn{2}{c}{FCB Only}
\abovestrut{0.01in}\belowstrut{-0.01in}
& 2.388 & 1.613 & 0.699 & 4.123 & 1.190 & 3.191 & 2.406 & 2.286 & 1.828 & 2.151 & 1.387 & 5.764 & 2.600 & 1.095 & 1.020 & 2.249
\\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-1em}
\end{table*}
\textbf{Reinforcement learning vs. supervised learning.}
Here, we evaluate the effectiveness of the reinforcement learning applied in our approach by comparing it with a supervised learning baseline.
In the supervised learning baseline, the reinforcement learning component of our approach is replaced by a regressor and a classifier.
Specifically, the input to the supervised learning baseline is the FoV at the current frame, the same as in our DHP approach. Then, the supervised learning baseline predicts the continuous magnitude of the HM scanpath through a regressor.
Additionally, the supervised learning baseline incorporates a classifier to predict the HM scanpath direction among 8 discrete directions in GCS: $\{ 0^{\circ}, 45^{\circ}, \cdots, 315^{\circ} \}$.
This ensures that the output of the baseline is the same as that of our DHP approach.
For a fair comparison, the DNN architecture of our DHP approach is used for the regressor and classifier, which have the same convolutional layers and LSTM cells as our approach.
The magnitude regressor is trained with the MSE loss function,
while the direction classifier is trained with the cross-entropy loss function.
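A minimal sketch of these two heads is provided below (PyTorch-style Python; the feature extractor is abstracted away, and the layer sizes are illustrative assumptions rather than the exact configuration of the baseline):
\begin{verbatim}
import torch.nn as nn

class BaselineHeads(nn.Module):
    # Shared DNN feature -> (i) classifier over the 8 discrete
    # directions {0, 45, ..., 315} degrees, (ii) magnitude regressor.
    def __init__(self, feat_dim=256):
        super().__init__()
        self.direction = nn.Linear(feat_dim, 8)
        self.magnitude = nn.Linear(feat_dim, 1)

    def forward(self, feat):
        return self.direction(feat), self.magnitude(feat)

ce_loss = nn.CrossEntropyLoss()  # trains the direction classifier
mse_loss = nn.MSELoss()          # trains the magnitude regressor
\end{verbatim}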
First, we compare the supervised learning baseline with our offline-DHP approach.
To this end, the same as our offline-DHP approach, the supervised learning baseline runs 58 workflows to predict different HM positions for each panoramic frame.
In each workflow, the baseline randomly samples one direction for the possible HM scanpath at each frame, according to the probabilities of directions by the trained classifier.
Then, several HM positions are obtained from the HM directions of all workflows, given the magnitudes predicted by the trained regressor.
Finally, the HM map is produced by convolving these HM positions with a 2D Gaussian filter.
Table \ref{CC_NSS_MO_table} reports the increases in CC, NSS and shuffled-AUC ($\Delta$CC, $\Delta$NSS and $\Delta$S-AUC) of our offline-DHP approach over the supervised learning baseline.
We can see that the proposed offline-DHP approach performs much better than the supervised learning baseline.
This validates the effectiveness of the reinforcement learning applied in offline-DHP.
Second, we compare the supervised learning baseline with our online-DHP approach.
The baseline predicts the HM position at the next frame using the trained magnitude regressor and direction classifier.
In contrast, our online-DHP approach predicts HM positions, based on reinforcement learning as introduced in Section 5.
Table \ref{CC_NSS_MO_table} tabulates the MO improvement ($\Delta$MO) of our online-DHP approach over the supervised learning baseline.
As seen in this table, our online-DHP approach outperforms the supervised learning baseline in all sequences.
Therefore, reinforcement learning is also effective in online-DHP.
\textbf{Influence of the offline DRL network on online-DHP.} It is interesting to analyze the benefits of incorporating the DRL network of offline-DHP in our online-DHP approach, since the online-DHP approach is based on the offline DRL network. Figure \ref{Online_compare} shows the MO results of our online-DHP approach with and without the offline DRL network. As observed in this figure, the offline DRL network is able to increase the MO results of our online-DHP approach for all 15 sequences. In addition, the MO value increases from 0.50 to 0.75 on average when the offline DRL network is incorporated in online-DHP. Therefore, the learned DRL network of offline-DHP also benefits the online prediction of HM positions in online-DHP.
\textbf{Performance of online-DHP w/o previous ground-truth HM positions.} For each test sequence, our online-DHP takes as input the ground-truth HM positions of previous frames to predict subsequent HM positions.
The online-DHP approach belongs to online machine learning, in contrast to the batch learning of Deep 360 Pilot \cite{hu2017deep}, which generates the predictor by learning on the entire training dataset at once.
Note that there is no other online machine learning approach for predicting HM positions, so we can only compare with Deep 360 Pilot.
For a fair comparison with Deep 360 Pilot, Figure \ref{MO_result_1} shows the results of our online-DHP approach using previously predicted HM positions as input, i.e., online-DHP without ground-truth HM positions of previous frames.
As observed in Figure \ref{MO_result_1}, our online-DHP approach (MO = 0.57) performs considerably better than Deep 360 Pilot (MO = 0.40), when previous ground-truth HM positions are not available to either approach.
In addition, the ground-truth HM positions of previous frames can improve the performance of online-DHP, with MO increasing from 0.57 to 0.75 on average.
\subsection{Performance evaluation on offline-DHP}\label{sec:evaluation_offline}
\label{compare}
\begin{figure*}
\vspace{-1em}
\begin{center}
\centerline{\includegraphics[width=2\columnwidth]{objective_result_1}}
\vspace{-1em}
\caption{\footnotesize{HM maps of several frames selected from two test sequences in our PVS-HM database, all visualized in 2D coordinates. The second row shows the ground-truth HM maps, which are generated upon the HM positions of all 58 subjects. The third to sixth rows show the HM maps of our approach, BMS \cite{zhang2016exploiting}, OBDL \cite{hossein2015many} and SALICON \cite{huang2015salicon}.}}
\label{figure-object}
\end{center}
\vspace{-2em}
\end{figure*}
\begin{table*}
\begin{center}
\caption{Shuffled-AUC results of HM map prediction by our and other approaches (without FCB) over 15 test sequences.}\label{Table-s-AUC}
\vspace{-1.2em}
\tiny
\resizebox{\textwidth}{!}{
\begin{tabular}{cc*{15}{c}c}
& \rotatebox{45}{Method}
& \rotatebox {45} {KingKong} & \rotatebox {45} {SpaceWar2} & \rotatebox {45} {StarryPolar} & \rotatebox {45} {Dancing} & \rotatebox {45} {Guitar}
& \rotatebox {45} {BTSRun} & \rotatebox {45} {InsideCar} & \rotatebox {45} {RioOlympics} & \rotatebox {45} {SpaceWar} & \rotatebox {45} {CMLauncher2}
& \rotatebox {45} {Waterfall} & \rotatebox {45} {Sunset} & \rotatebox {45} {BlueWorld} & \rotatebox {45} {Symphony} & \rotatebox {45} {WaitingForLove}
& \rotatebox{45}{\textbf{Average}}
\\
\toprule
& Our
& \textbf{0.72} & 0.63 & 0.46 & \textbf{0.82} & \textbf{0.73} & 0.84 & 0.80 & \textbf{0.69} & 0.60 & 0.76 & \textbf{0.72} & 0.64 & \textbf{0.70} & 0.70 & 0.68 & \textbf{0.70}
\\
& SALICON
& 0.62 & 0.57 & 0.42 & 0.66 & 0.62 & 0.76 & 0.56 & 0.54 & \textbf{0.64} & \textbf{0.79} & 0.66 & 0.62 & 0.65 & 0.70 & 0.71 & 0.64
\\
& BMS
& 0.65 & \textbf{0.65} & 0.43 & 0.74 & 0.54 & \textbf{0.88} & \textbf{0.83} & 0.53 & 0.62 & 0.63 & 0.66 & \textbf{0.70} & 0.49 & \textbf{0.77} & 0.69 & 0.65
\\
& OBDL
& 0.70 & 0.60 & \textbf{0.55} & 0.81 & 0.68 & 0.86 & 0.62 & 0.68 & 0.57 & 0.56 & 0.47 & 0.69 & 0.47 & 0.75 & \textbf{0.74} & 0.65
\\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-1.5em}
\end{table*}
\begin{table*}
\begin{center}
\caption{MO results of online HM position prediction by our and other approaches.}
\vspace{-1.5em}
\label{table-result}
\begin{threeparttable}
\tiny
\resizebox{\textwidth}{!}{
\begin{tabular}{cc*{16}{c}c}
\tabincell{c}{\rotatebox{45}{Method}}
& \rotatebox{45}{KingKong} & \rotatebox{45}{SpaceWar2} & \rotatebox{45}{StarryPolar} & \rotatebox{45}{Dancing} & \rotatebox{45}{Guitar}
& \rotatebox{45}{BTSRun} & \rotatebox{45}{InsideCar} & \rotatebox{45}{RioOlympics} & \rotatebox{45}{SpaceWar} & \rotatebox{45}{CMLauncher2}
& \rotatebox{45}{Waterfall} & \rotatebox{45}{Sunset} & \rotatebox{45}{BlueWorld} & \rotatebox{45}{Symphony} & \rotatebox{45}{WaitingForLove} & \rotatebox{45}{{\textbf{Average}}}
\\
\toprule
\multirow{1}{*}{\rotatebox{0}{Online$^{\ast}$}}
\abovestrut{0.01in}
& \bf{0.81} & \bf{0.76} & \bf{0.55} & \bf{0.86} & \bf{0.79} & \bf{0.88} & \bf{0.85} & \bf{0.82} & \bf{0.63} & \bf{0.76} & \bf{0.67} & \bf{0.66} & \bf{0.69} & 0.75 & \bf{0.84} & \bf{0.75}
\\
\multirow{1}{*}{\rotatebox{0}{Deep 360 Pilot}}
& 0.34 & 0.21 & 0.32 & 0.54 & 0.54 & 0.62 & 0.21 & 0.27 & 0.39 & 0.21 & 0.08 & 0.46 & 0.35 & \bf{0.93} & 0.52 & 0.40
\\
\multirow{1}{*}{\rotatebox{0}{Baseline 1$^{\ast}$}}
& 0.20 & 0.21 & 0.16 & 0.22 & 0.20 & 0.21 & 0.22 & 0.20 & 0.21 & 0.21 & 0.20 & 0.20 & 0.21 & 0.20 & 0.21 & 0.20
\\
\belowstrut{-0.01in}
\multirow{1}{*}{\rotatebox{0}{Baseline 2$^{\ast}$}}
& 0.22 & 0.23 & 0.20 & 0.22 & 0.23 & 0.24 & 0.23 & 0.23 & 0.22 & 0.25 & 0.25 & 0.21 & 0.23 & 0.22 & 0.23 & 0.23
\\
\bottomrule
\end{tabular}%
}
\begin{tablenotes}
\item[] $\ast$ The online-DHP approach and the baselines make predictions based on the ground-truth HM positions of previous frames.
\vspace{-1em}
\end{tablenotes}
\end{threeparttable}
\end{center}
\vspace{-1em}
\end{table*}
Now, we evaluate the performance of our offline-DHP approach in predicting the HM maps of all 15 test sequences from the PVS-HM database. To the best of our knowledge, there is no work on predicting the HM maps of panoramic video, and saliency prediction is the closest field.
Therefore, we compare our offline-DHP approach to three state-of-the-art saliency detection approaches: OBDL \cite{hossein2015many}, BMS \cite{zhang2016exploiting} and SALICON \cite{huang2015salicon}, which are applied to panoramic frames mapped from sphere to plane using equirectangular projection. In particular, OBDL and BMS are the latest saliency detection approaches for videos and
images, respectively. SALICON is a state-of-the-art DNN approach for saliency detection. For a fair comparison, we retrained the DNN model of SALICON by fine-tuning it on the training set of our database. Note that OBDL and BMS were not retrained because they are not trainable.
In addition to the above three approaches, we also compare our approach to the FCB baseline, since \textit{Finding 1} argues that human attention is normally biased toward the front-center regions of panoramic video.
Here, we model FCB using a 2D Gaussian distribution, similar to the center bias of saliency detection.
Appendix A presents the details of the FCB modeling.
In the field of saliency detection, the center bias \cite{borji2013state} is normally combined with saliency maps to improve the saliency detection accuracy. Hence, we further report the results of HM maps combined with the FCB feature, for our and other approaches.
See Appendix A for more details about the combination of FCB.
Tables \ref{CC-table} and \ref{NSS-table} tabulate the results of CC and NSS in predicting the HM maps of 15 test sequences, for our and other approaches.
In these tables, the results of CC and NSS are averaged over all frames for each test sequence.
As shown in these tables, when FCB is not integrated, our offline-DHP approach performs best in terms of CC and NSS, compared with the other three approaches and the FCB baseline.
More importantly, once integrated with FCB, all approaches improve in performance, and our approach still performs considerably better than the others.
Specifically, our offline-DHP approach increases the average CC value by 0.242, 0.198 and 0.134, compared with OBDL, BMS and SALICON, respectively.
Additionally, the increase in the average NSS value is 1.245, 1.087 and 0.856 for our approach, in comparison with OBDL, BMS and SALICON.
In summary, our offline-DHP approach is effective in predicting the HM maps of panoramic video, performing much better than the other approaches and the FCB baseline.
Additionally, Table \ref{Table-s-AUC} compares the performance of our and other approaches in terms of shuffled-AUC.
Note that FCB is not embedded in any of the approaches here, since the shuffled-AUC metric is immune to FCB.
In terms of the average shuffled-AUC, our approach has better performance than other approaches.
This indicates that, even without considering the influence of FCB, our approach again outperforms the other approaches.
It is worth mentioning that the shuffled-AUC of our offline-DHP approach ranks top in 6 out of 15 test sequences, while SALICON, BMS and OBDL achieve the highest shuffled-AUC in 2, 5 and 2 sequences, respectively.
The probable reasons are as follows. (1) In the evaluation, shuffled-AUC removes the influence of FCB, which can be learned by our offline-DHP approach. (2) The shuffled-AUC can be high even when the HM maps are non-sparse, i.e., far from the ground truth. In contrast, our approach yields sparser HM maps than the other approaches, close to the ground truth (see Figure \ref{figure-object}).
Next, we compare the subjective results. Figure \ref{figure-object} shows several frames from two selected sequences and their ground-truth HM maps.
In Figure \ref{figure-object}, we further visualize the HM maps generated by our and other approaches. Here, the predicted HM maps are integrated with FCB, since the FCB feature can improve the performance of all approaches (as presented in Tables \ref{CC-table} and \ref{NSS-table}).
From this figure, one can observe that the HM maps of our approach are considerably closer to the ground-truth HM maps, compared with other approaches.
This result indicates that our offline-DHP approach is capable of better locating the HM positions of different subjects on panoramic video.
\begin{figure*}
\begin{center}
\centerline{\subfigure[Dancing]{\includegraphics[width=1.5\columnwidth]{scanpath_Dancing}}}
\vspace{-1em}
\centerline{\subfigure[KingKong]{\includegraphics[width=1.5\columnwidth]{scanpath_Kingkong}}}
\vspace{-1em}
\caption{\footnotesize{Visualization of the HM scanpaths generated by one subject and by the online-DHP approach, for sequences \textit{Dancing} and \textit{KingKong}. Note that the HM scanpaths of one subject (among the 58 subjects) are randomly selected and plotted, together with the corresponding HM scanpaths predicted by online-DHP.}}
\label{scan-path-example}
\end{center}
\end{figure*}
\subsection{Performance evaluation on online-DHP}\label{sec:evaluation_online}
\label{online-compare}
This section evaluates the performance of our online-DHP approach for predicting HM positions in the online scenario.
The online scenario refers to predicting the HM position of one subject at each panoramic frame based on the observed HM positions of this subject at the previous frames.
In our experiments, we compare the performance of online-DHP with the state-of-the-art Deep 360 Pilot \cite{hu2017deep}, which is the only existing approach for the online prediction of HM positions in panoramic video.
We also compare our online-DHP approach with two baselines. The first baseline (called baseline 1) keeps the HM scanpath of the current frame the same as that at the previous frame, such that the online HM position at each frame can be generated.
The second baseline (called baseline 2) produces the HM positions, using the randomly generated HM scanpaths.
Table \ref{table-result} compares the MO results of our and other approaches for the 15 test sequences of our PVS-HM database. Note that the MO results of each sequence are averaged over the predicted HM positions of all 58 subjects in our database. As observed in this table, our online-DHP approach is significantly superior to two baselines, indicating the effectiveness of applying DRL to predict HM positions online.
Table \ref{table-result} also shows that our online-DHP approach performs considerably better than Deep 360 Pilot \cite{hu2017deep}, with an increase of 0.35 in average MO.
In addition, as shown in Table \ref{table-result}, our approach outperforms \cite{hu2017deep} on almost all sequences.
The performance improvement of our approach arises because (1) the online DRL model of our approach is capable of generating accurate \textit{actions} of HM scanpaths, and (2) the DRL network of offline-DHP is incorporated into our online prediction as prior knowledge. Moreover, our approach is also effective for generic panoramic sequences, while \cite{hu2017deep} fails on scenery panoramic video. For example, the MO result of \cite{hu2017deep} for the sequence Waterfall is $0.08$, far less than the $0.67$ MO of online-DHP. This is primarily because Deep 360 Pilot \cite{hu2017deep} relies heavily on RCNN-based object detection.
Moreover, we visualize the ground-truth and predicted HM scanpaths, for subjective evaluation.
Specifically, Figure \ref{scan-path-example} plots the HM scanpaths by one subject and by the online-DHP approach, for the panoramic sequences of Dancing and KingKong.
As shown in this figure, online-DHP is able to obtain scanpaths similar to those of the subject, such that the HM positions can be accurately predicted online for each panoramic frame.
In conclusion, our subjective evaluation, together with the above objective MO comparison, illustrates that the proposed online-DHP approach is effective in predicting HM positions in an online manner.
To test the generalizability of our approach, we further evaluate the performance of our approach, Deep 360 Pilot \cite{hu2017deep} and the two baselines on the sports-360 dataset of \cite{hu2017deep}. For this evaluation, our online-DHP is still based on the offline DRL network learned from the training sequences of our PVS-HM database. The MO results are presented in Table \ref{table-result-on360}. From this table, one may observe that our online-DHP approach again outperforms \cite{hu2017deep} and the two baselines, despite being tested on the test set of \cite{hu2017deep}. In particular, the MO result of our approach is 0.90 for the dance sequences, an increase of 0.11 MO over \cite{hu2017deep}. Additionally, our approach improves the MO results of \cite{hu2017deep} by 0.07, 0.01, 0.01 and 0.10 for the skateboarding, parkour, BMX and basketball sequences, respectively. In other words, the online-DHP approach is superior to the state-of-the-art approach \cite{hu2017deep} in the online prediction of HM positions, over almost all classes of panoramic video sequences. Therefore, the generalization capability of our online-DHP approach is confirmed.
\begin{table}
\begin{center}
\caption{MO results for online prediction of HM positions over the sports-360 dataset.}
\vspace{-1em}
\label{table-result-on360}
\tiny
\resizebox{.47\textwidth}{!}{
\begin{tabular}{c c c c c c}
Method
& Skateboarding & Parkour & BMX & Dance & Basketball
\\
\toprule
DHP
\abovestrut{0.01in}
& \bf{0.78} & \bf{0.75} & \bf{0.72} & \bf{0.90} & \bf{0.77}
\\
Deep 360 Pilot
& 0.71 & 0.74 & 0.71 & 0.79 & 0.67
\\
Baseline 1
& 0.15 & 0.17 & 0.16 & 0.17 & 0.17
\\
Baseline 2
& 0.22 & 0.19 & 0.18 & 0.22 & 0.18
\\
\bottomrule
\end{tabular}
}
\end{center}
\vspace{-2em}
\end{table}
\section{Conclusion}
In this paper, we have proposed the DHP approach for predicting HM positions on panoramic video. First, we established a new database named PVS-HM, which includes the HM data of 58 subjects viewing 76 panoramic sequences. We found from our database that the HM positions are highly consistent across humans. Thus, the consistent HM positions on each panoramic frame can be represented in the form of an HM map, which encodes the possibility of each pixel being the HM position. Second, we proposed the offline-DHP approach to estimate HM maps in an offline manner. Specifically, our offline-DHP approach leverages DRL to make decisions on \textit {actions} of HM scanpaths, via optimizing the \textit{reward} of imitating the way that humans view panoramic video.
Subsequently, the HM scanpaths of several \textit{agents} from multiple DRL workflows are integrated to obtain the final HM maps.
Third, we developed the online-DHP approach, which predicts the HM positions of one subject online. In online-DHP, the DRL algorithm was developed to determine the HM positions of one \textit{agent} at the incoming frames, given the \textit{observation} of previous HM scanpaths and the current video content. The DRL algorithm is based on the learned model of offline-DHP in extracting the spatio-temporal features of attention-related content. Finally, the experimental results showed that both offline-DHP and online-DHP are superior to other conventional approaches, in the offline and online tasks of HM prediction for panoramic video.
Humans always perceive the world around them in a panorama, rather than on a 2D plane. Therefore, modeling attention on panoramic video is an important component in establishing human-like computer vision systems. An interesting direction for future work is to apply imitation learning for modeling attention on panoramic video. In particular, the \textit{reward} of DHP may be learned from ground-truth HM data, via inverse reinforcement learning, which is a main category of imitation learning.
Moreover, our work at the current stage mainly focuses on predicting HM positions, as the first step toward attention modeling of panoramic video. Future work should further predict eye fixations within the FoV regions of panoramic video. Exploring the potential applications of our approach is another promising direction for future work. For example, the online-DHP approach may be embedded in robotics, to mimic the way in which humans perceive the real world. Besides, panoramic video has large perceptual redundancy, since most panoramic regions cannot be seen by humans. It is thus possible to use the offline-DHP approach to remove such perceptual redundancy, so that the bit-rates of panoramic video coding can be dramatically reduced.
\appendices
\section{Analysis of FCB Combined in HM Maps}
The saliency detection literature \cite{judd2009learning} has argued that human attention has a strong center bias in images and videos, and that the incorporation of center bias can improve the performance of saliency detection. Similarly, FCB exists when viewing panoramic video, as discussed in \textit{Finding 1}. Hence, this appendix presents the combination of the FCB feature with the offline-DHP approach. Here, we apply the FCB feature as an additional channel in generating the HM maps of panoramic video. Specifically, assume that $\mathbf{H}^f$ is the HM map generated by the channel of the FCB feature. Similar to the center bias feature of image saliency detection \cite{judd2009learning}, we apply the following 2D Gaussian distribution to model $\mathbf{H}^f$ at each frame:
\begin{equation}
\label{Gauss_sigma}
\mathbf{H}^f(u,v)= \exp\left({-\frac{(u-u_f)^2+(v-v_f)^2}{\sigma_f^2}}\right),
\end{equation}
where $(u,v)$ are the longitude and latitude of the GCS location in the map, and $(u_f,v_f)$ are the longitude and latitude of the front center position in GCS. In addition, $\sigma_f$ is the standard deviation of the 2D Gaussian distribution.
Next, we need to combine $\mathbf{H}^f$ with the predicted HM map $\mathbf{H}_t$ by
\begin{equation}
\label{optmization_x}
\mathbf{H}^c_t = w_1\cdot \mathbf{H}^f+w_2\cdot \mathbf{H}_t,
\end{equation}
for each panoramic frame. In the above equation, $\mathbf{H}^c_t$ is the HM map integrated with the FCB feature for frame $t$; $w_1$ and $w_2$ are the weights corresponding to the channels of $\mathbf{H}^f$ and $\mathbf{H}_t$, respectively. Given \eqref{Gauss_sigma} and \eqref{optmization_x}, the following optimization formulation is applied to obtain the values of $\sigma_f$, $w_1$ and $w_2$:
\begin{equation}
\label{optmization_w}
\max_{\sigma_f,w_1,w_2} \sum_{t=1}^{T} \text{CC}(\mathbf{H}^c_t, \mathbf{H}^g_t), \quad \text{s.t.} \quad w_1+w_2=1.
\end{equation}
In \eqref{optmization_w}, $\mathbf{H}^g_t$ is the ground-truth HM map of each frame; $\text{CC}(\cdot,\cdot)$ denotes the CC value of two maps. Then, we solve the above optimization formulation by least-squares fitting of CC over all training data of our PVS-HM database. Consequently, the optimal values of $\sigma_f$, $w_1$ and $w_2$ are $21.1^\circ$, $0.48$ and $0.52$, respectively. These values are used to integrate the FCB feature in our offline-DHP approach. Note that the same procedure is applied to obtain the weights $w_1$ and $w_2$ when combining the FCB feature with the other approaches.
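The FCB channel of \eqref{Gauss_sigma} and the combination \eqref{optmization_x} can be summarized by the following sketch (Python; the grid resolution and the coordinate conventions are illustrative assumptions, not the exact settings of our implementation):
\begin{verbatim}
import numpy as np

def fcb_map(H=180, W=360, sigma_f=21.1):
    # 2D Gaussian FCB channel on a (latitude, longitude) grid in
    # degrees, centered at the front-center position (u_f,v_f)=(0,0).
    v, u = np.meshgrid(np.linspace(-90, 90, H),
                       np.linspace(-180, 180, W), indexing="ij")
    return np.exp(-(u ** 2 + v ** 2) / sigma_f ** 2)

def combine_with_fcb(hm_map, w1=0.48, w2=0.52):
    # Convex combination of the FCB channel and the predicted HM
    # map, with the weights fitted over the PVS-HM training set.
    return w1 * fcb_map(*hm_map.shape) + w2 * hm_map
\end{verbatim}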
Figure \ref{fitting_surface} shows the CC results between the predicted and ground-truth HM maps at various values of $\sigma_f$ and $w_1$. From this figure, we can see that the CC results vary from $0.44$ to $0.70$ as $w_1$ increases from $0$ to $1$, reaching the maximum value at $w_1=0.48$ given $\sigma_f=21.1^\circ$. This indicates that both the FCB feature and our offline-DHP approach are effective in predicting the HM maps of panoramic video, and that the effectiveness of the FCB feature differs across combination weights. In addition, as shown in Figure \ref{fitting_surface}, at $w_1=0.48$, the CC value increases from 0.66 to 0.70 when $\sigma_f$ grows from $7^\circ$ to $21.1^\circ$, and then decreases to 0.63 as $\sigma_f$ reaches $43.6^\circ$. Thus, the standard deviation of the 2D Gaussian distribution in \eqref{Gauss_sigma} is set to $21.1^\circ$ for the FCB feature in our experiments.
\begin{figure}
\vspace{-1em}
\begin{center}
\centerline{\includegraphics[width=.75\columnwidth]{Fitting}}
\vspace{-1em}
\caption{\footnotesize{The fitting surface of CC results between the predicted and ground-truth HM maps at various $\sigma_f$ and $w_1$. The dark dots in this figure represent the CC results at each specific value of $\sigma_f$ and $w_1$, which are used to fit the surface. Note that the CC results are obtained over all training data of the PVS-HM database.}}
\label{fitting_surface}
\end{center}
\vspace{-2.5em}
\end{figure}
\ifCLASSOPTIONcompsoc
\else
\fi
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
In this paper we introduce and investigate linear models for the Poincar\'{e}
series and the radial limit set of normal subgroups of Kleinian groups of Schottky type. Here, a linear model means a linear graph directed Markov system (GDMS) associated to the free group $\mathbb{F}_{d}=\langle g_1,\dots, g_d \rangle$ on $d\ge2$ generators. Precise definitions are given in Section \ref{sub:Graph-Directed-Markov}, but briefly, such a system $\Phi$ is given by the vertex set $V:=\{ g_1,g_1^{-1},\dots, g_d,g_d^{-1}\}$, the edge set $E:=\{ (v,w)\in V^2:v\neq w^{-1}\}$ and a family of contracting similarities $\{ \phi_{\left(v,w\right)} : \left(v,w\right)\in E \}$ of a Euclidean space $\mathbb{R}^{D}$, for some $D\ge1$, such that for each $\left(v,w\right)\in E$
the contraction ratio of the similarity $\phi_{\left(v,w\right)}$ is independent
of $w$. We denote this ratio by $c_{\Phi}(v)$. Also, we say that $\Phi$ is \emph{symmetric
}if $c_{\Phi}\left(g\right)=c_{\Phi}\left(g^{-1}\right)$ for all
$g\in V$. In order to state our first two main results, we must also make two further definitions. For this, we extend $c_{\Phi}$ to a function $c_{\Phi}:\mathbb{F}_{d}\rightarrow \mathbb{R}$ by setting $c_{\Phi}\left(g\right):=\prod_{i=1}^{n}c_{\Phi}\left(v_{i}\right)$,
where $n\in \mathbb{N}_{0}$ and $\left(v_{1},\dots,v_{n}\right)\in V^{n}$ refers to the unique representation of $g$ as a reduced word, with the convention that the empty product is equal to one. Also, for each subgroup $H$ of $\mathbb{F}_d$, we introduce the \emph{Poincar\'{e}
series of $H$ }and the \emph{exponent of convergence of $H$ with
respect to $\Phi$} which are defined for $s\ge 0$ by
\[
P\left(H,\Phi,s\right):=\sum_{h\in H}\left(c_{\Phi}\left(h\right)\right)^{s} \quad \text{ and } \quad\delta\left(H,\Phi\right):=\inf\left\{ t\ge 0:P\left(H,\Phi,t\right)<\infty\right\} .
\]
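To illustrate these notions, consider the special case in which $c_{\Phi}\left(v\right)=c$ for all $v\in V$ and some $0<c<1$. Since the number of reduced words of length $n\ge1$ in $\mathbb{F}_{d}$ is equal to $2d\left(2d-1\right)^{n-1}$, we then have
\[
P\left(\mathbb{F}_{d},\Phi,s\right)=1+\sum_{n\ge1}2d\left(2d-1\right)^{n-1}c^{ns},
\]
which is finite if and only if $\left(2d-1\right)c^{s}<1$. Hence, in this special case, $\delta\left(\mathbb{F}_{d},\Phi\right)=\log\left(2d-1\right)/\log\left(1/c\right)$.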
Our first main result gives a relation between amenability and the exponent of convergence.
\begin{thm}
\label{thm:lineargdms-amenability-dichotomy}Let $\Phi$ be a symmetric
linear GDMS associated to $\mathbb{F}_{d}$. For every normal
subgroup $N$ of $\mathbb{F}_{d}$, we have that
\[
\delta\left(\mathbb{F}_{d},\Phi\right)=\delta\left(N,\Phi\right)\,\,\textrm{if and only if }\,\,\mathbb{F}_{d}/N\textrm{ is amenable}.
\]
\end{thm}
Our second main result gives a lower bound for the exponent of convergence $\delta(N,\Phi)$.
\begin{thm}
\label{thm:lineargdms-lowerhalfbound}Let $\Phi$ be a symmetric linear
GDMS associated to $\mathbb{F}_{d}$. For every non-trivial normal subgroup
$N$ of $\mathbb{F}_{d}$, we have that
\[
\delta\left(N,\Phi\right)>\delta\left(\mathbb{F}_{d},\Phi\right)/2.
\]
\end{thm}
Our next results study certain limit sets which provide fractal models of radial limit sets of Kleinian groups. More precisely, for a GDMS $\Phi$ associated to $\mathbb{F}_{d}$ and a subgroup $H$ of $\mathbb{F}_{d}$, we will consider the \emph{radial limit set }$\Lr(H,\Phi)$ \emph{of $H$} and the \emph{uniformly radial limit set }$\Lur(H,\Phi)$ \emph{of $H$ with respect to $\Phi$} (see Definition \ref{def:gdms-associated-to-freegroup-and-radiallimitsets}).
\begin{prop}
\label{pro:lineargdms-brooks}Let $\Phi$ be a linear GDMS associated
to $\mathbb{F}_{d}$. For every normal subgroup $N$ of $\mathbb{F}_{d}$,
we have that
\[
\delta\left(N,\Phi\right)=\dim_{H}\left(\Lur(N,\Phi)\right)=\dim_{H}\left(\Lr(N,\Phi)\right).
\]
\end{prop}
The following corollary is an immediate consequence of Theorem
\ref{thm:lineargdms-amenability-dichotomy}, Theorem \ref{thm:lineargdms-lowerhalfbound}
and Proposition \ref{pro:lineargdms-brooks}.
\begin{cor}
\label{main-cor}
Let $\Phi$ be a symmetric linear GDMS associated to $\mathbb{F}_{d}$. For every normal subgroup $N$ of $\mathbb{F}_{d}$, we have that
\[
\dim_{H}\left(\Lr(N,\Phi)\right)=\dim_{H}\left(\Lr(\mathbb{F}_d,\Phi)\right)\mbox{ if and only if }\mathbb{F}_{d}/N\mbox{ is amenable}.
\]
Moreover, if $N$ is non-trivial, then we have that
\[
\dim_{H}\left(\Lr(N,\Phi)\right)\,>\,\dim_{H}\left(\Lr(\mathbb{F}_d,\Phi)\right)/2.
\]
\end{cor}
Let us now briefly summarize the corresponding results for normal subgroups of Kleinian groups, which served as the motivation for our main results in this paper. A more
detailed discussion of Kleinian groups and how these relate to the concept of
a GDMS will be given in Section \ref{sec:Kleinian-groups}. We start by giving a short introduction to Kleinian groups.
Recall that, for $m\in\mathbb{N}$, an $\left(m+1\right)$-dimensional hyperbolic manifold can be described
by the hyperbolic $\left(m+1\right)$-space $\mathbb{D}^{m+1}:=\left\{ z\in\mathbb{R}^{m+1}:\left|z\right|<1\right\} $
equipped with the hyperbolic metric $d$ and quotiented by the action
of a Kleinian group $G$. The \emph{Poincar\'{e}
series of $G$} and the \emph{exponent
of convergence of
$G$} are for $s\ge 0$ given by
\[
P\left(G,s\right):=\sum_{g\in G}\mathrm{e}^{-sd\left(0,g\left(0\right)\right)} \quad \text{ and }\quad \delta\left(G\right):=\inf\left\{ t\ge 0:P\left(G,t\right)<\infty\right\}.
\]
A normal subgroup $N$ of a Kleinian group $G$ gives
rise to an intermediate covering of the associated hyperbolic manifold $\mathbb{D}^{m+1}/G$.
It was shown by Brooks in \cite{MR783536}
that if $N$ is a normal subgroup of a convex cocompact Kleinian group
$G$ such that $\delta\left(G\right)>m/2$, then we have that
\begin{equation}
\delta\left(N\right)=\delta\left(G\right)\mbox{ if and only if }G/N\mbox{ is amenable.}\label{eq:intro-brooks}
\end{equation}
Moreover, Falk and Stratmann \cite{MR2097162} showed that for
every non-trivial normal subgroup $N$ of a non-elementary Kleinian
group $G$ we have $\delta\left(N\right)\ge\delta\left(G\right)/2$.
Using different methods, Roblin (\cite{MR2166367}) proved that if
$G$ is of $\delta\left(G\right)$-divergence type, that is, if $P\left(G,\delta\left(G\right)\right)=\infty$,
then we have
\begin{equation}
\delta\left(N\right)>\delta\left(G\right)/2.\label{eq:intro-roblin}
\end{equation}
Another proof of (\ref{eq:intro-roblin}) can be found in \cite{Bonfert-Taylor2012}
for a convex cocompact Kleinian group $G$, where it was also shown that $\delta(N)$ can be arbitrarily close to $\delta(G)/2$.
Note that our results stated in Theorem \ref{thm:lineargdms-amenability-dichotomy} and Theorem \ref{thm:lineargdms-lowerhalfbound} extend the assertions given in (\ref{eq:intro-brooks}) and (\ref{eq:intro-roblin}) for Kleinian groups.
\begin{rem*}
Note that in Theorem \ref{thm:lineargdms-amenability-dichotomy} there is no restriction on $\delta\left(\mathbb{F}_{d},\Phi\right)$
whereas for the proof of
(\ref{eq:intro-brooks}) it was vital to assume that $\delta\left(G\right)>m/2$.
It was conjectured by Stratmann \cite{MR2191250} that this assumption can be removed from
Brooks' Theorem. In fact, it was shown by
Sharp in \cite[Theorem 2]{MR2322540} that if $G$ is a finitely generated Fuchsian group, that is for $m=1$, and if $N$ is a normal subgroup of $G$, then amenability of $G/N$ implies $\delta\left(G\right)=\delta\left(N\right)$.
Recently, Stadlbauer \cite{Stadlbauer11} showed that the equivalence
in (\ref{eq:intro-brooks}) extends to the class of essentially free
Kleinian groups with arbitrary exponent of convergence $\delta\left(G\right)$.
\end{rem*}
Finally, let us turn our attention to limit sets of Kleinian groups. For a Kleinian group $G$, the \emph{radial limit set $L_{\mathrm{r}}\left(G\right)$} and
the \emph{uniformly radial limit set $L_{\mathrm{ur}}\left(G\right)$}
(see Definition \ref{def:radiallimitsets-fuchsian}) are both subsets
of the boundary $\mathbb{S}:=\left\{ z\in\mathbb{R}^{m+1}:\left|z\right|=1\right\} $
of $\mathbb{D}^{m+1}$. By a theorem of Bishop and Jones (\cite[Theorem 1.1]{MR1484767},
cf. \cite{MR2087134}), we have for every Kleinian group $G$ that
\begin{equation}
\delta\left(G\right)=\dim_{H}\left(L_{\mathrm{ur}}\left(G\right)\right)=\dim_{H}\left(L_{\mathrm{r}}\left(G\right)\right),\label{eq:bishop-jones}
\end{equation}
where $\dim_{H}$ denotes the Hausdorff dimension with respect to
the Euclidean metric on $\mathbb{S}$. Combining (\ref{eq:intro-brooks})
and (\ref{eq:bishop-jones}) then shows that for every
normal subgroup $N$ of a convex cocompact Kleinian group $G$ for which
$\delta\left(G\right)>m/2$, we have
\begin{equation}
\dim_{H}\left(L_{\mathrm{r}}\left(N\right)\right)=\dim_{H}\left(L_{\mathrm{r}}\left(G\right)\right)\mbox{ if and only if }G/N\mbox{ is amenable.}\label{eq:brooks-via-hausdorffdimension}
\end{equation}
We would like to point out that there is a close analogy between the results on radial limit sets of Kleinian groups stated in (\ref{eq:bishop-jones}) and (\ref{eq:brooks-via-hausdorffdimension}), and our results in the context of linear GDMSs associated to free groups stated in Proposition \ref{pro:lineargdms-brooks} and Corollary \ref{main-cor}.
Let us now further clarify the relation between GDMSs associated to free groups and Kleinian groups of Schottky type (see Definition \ref{def:kleinian-of-schottkytype}). For this, recall that a Kleinian group of Schottky type $G=\langle g_{1},\dots,g_{d}\rangle$ is isomorphic to a free group. In Definition \ref{def:canonical-model-kleinianschottky} we introduce
a \emph{canonical GDMS $\Phi_{G}$} \emph{associated to} $G$. We will then show in Proposition \ref{pro:canonicalgdms-gives-radiallimitset} that for every non-trivial normal subgroup $N$ of $G$ we have that
\[
L_{\mathrm{r}}\left(N\right)=\Lr(N,\Phi_G)\mbox{ and }L_{\mathrm{ur}}\left(N\right)=\Lur(N,\Phi_G).
\]
This shows that our fractal models of radial limit sets of Kleinian groups of Schottky type can be thought of as a replacement of the conformal generators of the Kleinian group by similarity maps. Our main results show that several important properties of Kleinian groups extend to these fractal models.
Let us now end this introductory section by briefly summarizing the methods used to obtain our results and how this paper is organized.
Theorem \ref{thm:lineargdms-amenability-dichotomy} and
Theorem \ref{thm:lineargdms-lowerhalfbound} are based on and extend
results of Woess \cite{MR1743100} and Ortner and Woess \cite{MR2338235},
which in turn refer back to work of P\'{o}lya \cite{MR1512028} and Kesten \cite{MR0109367,MR0112053}.
Specifically, we provide generalizations of \cite{MR2338235} for
weighted graphs. Our new thermodynamic formalism for group-extended
Markov systems (see Section \ref{sec:Thermodynamic-Formalism-grpextension})
characterizes amenability of discrete groups in terms of topological
pressure and the spectral radius of the Perron-Frobenius operator
acting on a certain $L^{2}$-space.
The paper is organized as follows. In Section \ref{sec:Preliminaries}
we collect the necessary background on thermodynamic formalism, GDMSs and
random walks on graphs. In Section \ref{sec:Thermodynamic-Formalism-grpextension}, we prove a thermodynamic formalism for group-extended Markov systems,
which is also of independent interest. Using the results
of Section \ref{sec:Thermodynamic-Formalism-grpextension} we prove
our main results in Section \ref{sec:Proofs}. Finally, in Section \ref{sec:Kleinian-groups} we
provide the background on Kleinian groups of Schottky type, which has motivated our results.
After having finished this paper, Stadlbauer (\cite{Stadlbauer11}) proved a
partial extension of Theorem \ref{thm:amenability-dichotomy-markov} (see
Remark \ref{proof-comment-stadlabuer}).
Moreover, in \cite{Jaerisch12a} the author has extended Lemma \ref{lem:delta-half-divergencetype}
and Theorem \ref{thm:lineargdms-lowerhalfbound} in order to give
a short new proof of (\ref{eq:intro-roblin}) for Kleinian groups.
\begin{acknowledgement*}
Parts of this paper constitute certain parts of the author's doctoral
thesis supervised by Marc Kesseb\"ohmer at the University of Bremen.
The author would like to express his deep gratitude to Marc Kesseb\"ohmer
and Bernd Stratmann for their support and many fruitful discussions. The author thanks an anonymous referee for the careful reading of the manuscript and for valuable comments on the exposition of this paper. Final thanks go to Sara Munday for helping to improve the presentation of the paper significantly.
\end{acknowledgement*}
\section{Preliminaries\label{sec:Preliminaries}}
\subsection{Symbolic Thermodynamic Formalism}
Throughout, the underlying symbolic space for the symbolic thermodynamic formalism will be a
\emph{Markov shift $\Sigma$ }, which is given by
\[
\Sigma:=\left\{ \omega:=\left(\omega_{1},\omega_{2},\ldots\right)\in I^{\mathbb{N}}:\; a\left(\omega_{i},\omega_{i+1}\right)=1\,\,\mbox{for all }i\in\mathbb{N}\right\} ,
\]
where $I$ denotes a finite or countable infinite \emph{alphabet}, the matrix $A=\left(a\left(i,j\right)\right)\in\left\{ 0,1\right\} ^{I\times I}$
is the \emph{incidence matrix} and the \emph{shift map} $\sigma:\Sigma\rightarrow\Sigma$
is defined by $\sigma(\left(\omega_{1},\omega_{2},\ldots\right)):=\left(\omega_{2},\omega_{3},\ldots\right)$, for each $\left(\omega_{1},\omega_{2},\ldots\right)\in \Sigma$.
We always assume that for each $i\in I$ there exists $j\in I$ such
that $a\left(i,j\right)=1$. The set of \emph{$A$-admissible words} of length
$n\in\mathbb{N}$ is given by
\[
\Sigma^{n}:=\left\{(\omega_1, \dots, \omega_n)\in I^{n}:\,\, a\left(\omega_{i},\omega_{i+1}\right)=1\mbox{ for all }i\in\left\{ 1,\dots,n-1\right\} \right\}
\]
and we set $\Sigma^{0}:=\left\{ \emptyset\right\} $, where $\emptyset$
denotes the empty word. Note that $\emptyset$ will also be used to denote the
empty set. The set of all finite $A$-admissible words is
denoted by
\[
\Sigma^{*}:=\bigcup_{n\in\mathbb{N}}\Sigma^{n}.
\]
Let us also define the \emph{word length function} $\left|\cdot\right|:\,\Sigma^{*}\cup\Sigma\cup\left\{ \emptyset\right\} \rightarrow\mathbb{N}_{0}\cup\left\{ \infty\right\} $,
where for $\omega\in\Sigma^{*}$ we set $\left|\omega\right|$ to
be the unique $n\in\mathbb{N}$ such that $\omega\in\Sigma^{n}$, for $\omega\in\Sigma$
we set $\left|\omega\right|:=\infty$ and $\emptyset$ is the unique
word of length zero. For each $\omega\in\Sigma^{*}\cup\Sigma\cup\left\{ \emptyset\right\} $
and $n\in\mathbb{N}_{0}$ with $n\le\left|\omega\right|$, we define $\omega_{|n}:=\left(\omega_{1},\dots,\omega_{n}\right)$.
For $\omega,\tau\in\Sigma$, we set $\omega\wedge\tau$ to be the longest common initial block of $\omega$ and $\tau$, that is, $\omega\wedge\tau:=\omega_{|l}$,
where $l:=\sup\left\{ n\in\mathbb{N}_{0}:\omega_{|n}=\tau_{|n}\right\} $.
For $\omega\in\Sigma^{n}$, $n\in\mathbb{N}_{0}$, the \emph{cylinder set} $[\omega]$ defined by $\omega$ is given by $\left[\omega\right]:=\left\{ \tau\in\Sigma:\tau_{|n}=\omega\right\} $.
Note that $\left[\emptyset\right]=\Sigma$.
If $\Sigma$ is the Markov shift with alphabet $I$ whose incidence
matrix consists entirely of $1$s, then we have that $\Sigma=I^{\mathbb{N}}$
and $\Sigma^{n}=I^{n}$, for all $n\in\mathbb{N}$. Then we set $I^{*}:=\Sigma^{*}$
and $I^{0}:=\left\{ \emptyset\right\} $. For $\omega,\tau\in I^{*}\cup\left\{ \emptyset\right\} $, let $\omega\tau\in I^{*}\cup\left\{ \emptyset\right\} $ denote
the \emph{concatenation} of $\omega$ and $\tau$, which is defined
by $\omega\tau:=\left(\omega_{1},\dots,\omega_{\left|\omega\right|},\tau_{1},\dots,\tau_{\left|\tau\right|}\right)$,
for $\omega,\tau\in I^{*}$, and if $\omega\in I^{*}\cup\left\{ \emptyset\right\} $ then we define $\omega\emptyset:=\emptyset\omega:=\omega$. Note that $I^{*}$
is the free semigroup over the set $I$ which satisfies the following
universal property: For each semigroup $S$ and for every map $u:I\rightarrow S$,
there exists a unique semigroup homomorphism $\hat{u}:I^{*}\rightarrow S$
such that $\hat{u}\left(i\right)=u\left(i\right)$, for all $i\in I$
(see \cite[Section 3.10]{MR1650275}).
Moreover, we equip $I^{\mathbb{N}}$ with the product topology of the discrete topology
on $I$ and the Markov shift $\Sigma\subset I^{\mathbb{N}}$ is equipped with
the subspace topology. The latter topology on $\Sigma$ is the weakest topology on $\Sigma$
such that for each $j\in\mathbb{N}$ the \emph{canonical projection on the
$j$-th coordinate} $p_{j}:\Sigma\rightarrow I$ is continuous. A
countable basis for this topology on $\Sigma$ is given by the cylinder
sets $\left\{ \left[\omega\right]:\omega\in\Sigma^{*}\right\} $.
We will use the following metric generating the topology
on $\Sigma$. For $\alpha>0$ fixed, we define the metric $d_{\alpha}$
on $\Sigma$ given by
\[
d_{\alpha}\left(\omega,\tau\right):=\mathrm{e}^{-\alpha\left|\omega\wedge\tau\right|},\mbox{ for all }\omega,\tau\in\Sigma.
\]
For a function $f:\Sigma\rightarrow\mathbb{R}$ and $n\in\mathbb{N}_{0}$, we use the notation $S_{n}f:\Sigma\rightarrow\mathbb{R}$
to denote the \emph{ergodic sum} of $f$ with respect to the left-shift map $\sigma$, in other words, $S_{n}f:=\sum_{i=0}^{n-1}f\circ\sigma^{i}$.
Furthermore, the following function spaces will be crucial throughout.
\begin{defn}
We say that a function $f:\Sigma\rightarrow\mathbb{R}$ is {\em bounded} whenever $\Vert f\Vert_{\infty}:=\sup_{\omega\in\Sigma}\left|f\left(\omega\right)\right|$ is finite.
We denote by $C_{b}\left(\Sigma\right)$ the real vector space of
bounded continuous functions on $\Sigma$. We say that $f:\Sigma\rightarrow\mathbb{R}$
is \emph{$\alpha$-H\"older continuous}, for some $\alpha>0$, if
\[
V_{\alpha}\left(f\right):=\sup_{n\ge1}\left\{ V_{\alpha,n}\left(f\right)\right\} <\infty,
\]
where for each $n\in\mathbb{N}$ we let
\[
V_{\alpha,n}\left(f\right):=\sup\left\{ \mathrm{e}^{-\alpha}\frac{\left|f\left(\omega\right)-f\left(\tau\right)\right|}{d_{\alpha}\left(\omega,\tau\right)}:\omega,\tau\in\Sigma,\left|\omega\wedge\tau\right|\ge n\right\} .
\]
The function \emph{$f$ }is called \emph{ H\"older continuous} if there exists
$\alpha>0$ such that $f$ is $\alpha$-H\"older continuous.
For $\alpha>0$ we also introduce the real vector space
\[
H_{\alpha}\left(\Sigma\right):=\left\{ f\in C_{b}\left(\Sigma\right):\, f\textrm{ is }\alpha-\textrm{H\"older continuous}\right\} ,
\]
which we assume to be equipped with the norm $\Vert \cdot \Vert_{\alpha}$ which is given by
\[
\Vert f\Vert_{\alpha}:=\Vert f\Vert_{\infty}+V_{\alpha}\left(f\right).
\]
\end{defn}
We need the following notion of pressure, which was originally introduced in \cite[Definition 1.1]{JaerischKessebohmer10}.
\begin{defn}
\label{def:induced-topological-pressure}For
$\varphi,\psi:\Sigma\rightarrow\mathbb{R}$ with $\psi\ge0$, $\mathcal{C}\subset\Sigma^{*}$
and $\eta>0$, the $\psi$\emph{-induced pressure of} $\varphi$ (with
respect to $\mathcal{C}$) is given by
\[
\mathcal{P}_{\psi}\left(\varphi,\mathcal{C}\right):=\limsup_{T\rightarrow\infty}\frac{1}{T}\log\sum_{{\omega\in\mathcal{C}\atop T-\eta<S_{\omega}\psi\le T}}\exp S_{\omega}\varphi,
\]
where we have set $S_{\omega}\varphi:=\sup_{\tau\in\left[\omega\right]}S_{\left|\omega\right|}\varphi\left(\tau\right)$. Note that $\mathcal{P}_{\psi}\left(\varphi,\mathcal{C}\right)$ is an element of $\overline{\mathbb{R}}:=\mathbb{R}\cup\left\{-\infty, +\infty\right\} $. \end{defn}
\begin{rem*}
It was shown in \cite[Theorem 2.4]{JaerischKessebohmer10} that the
definition of $\mathcal{P}_{\psi}\left(\varphi,\mathcal{C}\right)$
is in fact independent of the choice of $\eta>0$. For this reason,
we do not refer to $\eta>0$ in the definition of the induced pressure. \end{rem*}
\begin{notation*}
If $\psi$ and/or $\mathcal{C}$ is left out in the notation
of induced pressure, then we tacitly assume that $\psi=1$ and/or
$\mathcal{C}=\Sigma^{*}$, that is, we let $\mathcal{P}(\varphi):=\mathcal{P}_{1}\left(\varphi,\Sigma^*\right)$.
\end{notation*}
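For instance, if $\Sigma=I^{\mathbb{N}}$ is the full shift over a finite alphabet $I$ and $\varphi\equiv c$ is constant, for some $c\in\mathbb{R}$, then $\sum_{\omega\in\Sigma^{n}}\exp S_{\omega}\varphi=\left(\mathrm{card}\left(I\right)\right)^{n}\mathrm{e}^{nc}$, and one immediately verifies that $\mathcal{P}\left(\varphi\right)=\log\mathrm{card}\left(I\right)+c$.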
The following fact is taken from \cite[Remark 2.11, Remark 2.7]{JaerischKessebohmer10}.
\begin{fact}
\label{fac:criticalexponents-via-pressure}Let $\Sigma$ be a Markov
shift over a finite alphabet. If $\varphi,\psi:\Sigma\rightarrow\mathbb{R}$ are two functions such that
$\psi\ge c>0$, for some $c>0$, and if $\mathcal{C}\subset\Sigma^{*}$
then $\mathcal{P}_{\psi}\left(\varphi,\mathcal{C}\right)$ is equal to the
unique real number $s\in\mathbb{R}$ for which $\mathcal{P}\left(\varphi-s\psi,\mathcal{C}\right)=0$.
Moreover, we have that
\[
\mathcal{P}_{\psi}\left(\varphi,\mathcal{C}\right)=\inf\left\{ s\in\mathbb{R}:\sum_{\omega\in\mathcal{C}}\mathrm{e}^{S_{\omega}\left(\varphi-s\psi\right)}<\infty\right\} .
\]
\end{fact}
The next definition goes back to the work of Ruelle and Bowen (\cite{MR0289084,bowenequilibriumMR0442989}).
\begin{defn}
Let $\varphi:\Sigma\rightarrow\mathbb{R}$ be continuous. We say
that a Borel probability measure \emph{$\mu$ is a Gibbs measure for
$\varphi$ }if there exists a constant $C>0$ such that
\begin{equation}
C^{-1}\le\frac{\mu\left[\omega\right]}{\mathrm{e}^{S_{\left|\omega\right|}\varphi\left(\tau\right)-\left|\omega\right|\mathcal{P}\left(\varphi\right)}}\le C,\mbox{ for all }\omega\in\Sigma^{*}\mbox{ and }\tau\in\left[\omega\right].\label{eq:gibbs-equation}
\end{equation}
\end{defn}
The Perron-Frobenius operator, which we are going to define now, provides a useful tool for guaranteeing the
existence of Gibbs measures and for deriving some of the stochastic
properties of these measures (see \cite{MR0289084,bowenequilibriumMR0442989}).
\begin{defn}\label{def:perron-frobenius}
Let $\Sigma$ be a Markov shift over a finite alphabet and let $\varphi:\Sigma\rightarrow \mathbb{R}$ be continuous. The \emph{Perron-Frobenius operator associated to $\varphi$} is the operator $\mathcal{L}_{\varphi}:C_b(\Sigma)\rightarrow C_b(\Sigma)$ which is given, for each $f\in C_b(\Sigma)$ and $x\in \Sigma$, by
\[
\mathcal{L}_{\varphi}(f)(x):=\sum_{y\in \sigma^{-1}\left\{ x\right\} }\mathrm{e}^{\varphi\left(y\right)}f\left(y\right).
\]
\end{defn}
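For instance, if $\Sigma$ is the full shift over $I=\left\{ 1,\dots,N\right\} $ and $\varphi\equiv c$ is constant, then $\mathcal{L}_{\varphi}\left(f\right)\left(x\right)=\mathrm{e}^{c}\sum_{i\in I}f\left(ix\right)$. In particular, the constant function $\mathbbm{1}$ satisfies $\mathcal{L}_{\varphi}\left(\mathbbm{1}\right)=N\mathrm{e}^{c}\mathbbm{1}=\mathrm{e}^{\mathcal{P}\left(\varphi\right)}\mathbbm{1}$, since $\mathcal{P}\left(c\right)=c+\log N$ in this case; this is in accordance with Theorem \ref{thm:perron-frobenius-thm-urbanski} below.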
The following theorem summarizes some of the main results of
the thermodynamic formalism for a Markov shift $\Sigma$ with a finite alphabet
$I$ (see for instance \cite{MR648108} and \cite[Section 2]{MR2003772}).
Here, $\Sigma$ is called \emph{irreducible} if for all $i,j\in I$
there exists $\omega\in\Sigma^{*}\cup\left\{ \emptyset\right\} $
such that $i\omega j\in\Sigma^{*}$. Moreover, for $k\in\mathbb{N}_{0}$, the $\sigma$-algebra generated by $\left\{ \left[\omega\right]:\omega\in\Sigma^{k}\right\} $ is denoted by $\mathcal{C}(k)$, and we say that $f:\Sigma \rightarrow \mathbb{R}$ is $\mathcal{C}(k)$-\emph{measurable} if $f^{-1}(A)\in \mathcal{C}(k)$ for every $A\in \mathcal{B}\left(\mathbb{R}\right)$, where $\mathcal{B}\left(\mathbb{R}\right)$ denotes the Borel $\sigma$-algebra on $\mathbb{R}$.
\begin{thm}
\label{thm:perron-frobenius-thm-urbanski}Let $\Sigma$ be an irreducible
Markov shift over a finite alphabet and let $\varphi:\Sigma\rightarrow\mathbb{R}$
be $\alpha$-H\"older continuous, for some $\alpha>0$. Then there exists
a unique Borel probability measure $\mu$ supported on $\Sigma$ such
that $\int\mathcal{L}_{\varphi}\left(f\right)\, d\mu=\mathrm{e}^{\mathcal{P}\left(\varphi\right)}\int f\, d\mu$, for all $f\in C_{b}\left(\Sigma\right)$. Furthermore, $\mu$ is a Gibbs
measure for $\varphi$ and there exists a unique $\alpha$-H\"older
continuous function $h:\Sigma\rightarrow\mathbb{R}^{+}$ such that $\int h\,d\mu=1$ and $\mathcal{L}_{\varphi}\left(h\right)=\mathrm{e}^{\mathcal{P}\left(\varphi\right)}h$.
The measure $h\,d\mu$ is the unique $\sigma$-invariant Gibbs measure
for $\varphi$ and will be denoted by $\mu_{\varphi}$. If $\varphi:\Sigma\rightarrow\mathbb{R}$ is $\mathcal{C}(k)$-measurable, for some $k\in \mathbb{N}_0$,
then $h$ is $\mathcal{C}\!\left(\max\left\{ k-1,1\right\} \right)$-measurable.
\end{thm}
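As a basic example, let $\Sigma$ be the full shift over $I=\left\{ 1,\dots,N\right\} $ and let $\varphi\equiv0$. Then $\mathcal{P}\left(\varphi\right)=\log N$ and $h\equiv\mathbbm{1}$, and one verifies directly from (\ref{eq:gibbs-equation}) that $\mu_{\varphi}$ is the uniform Bernoulli measure, which assigns mass $N^{-n}$ to each cylinder $\left[\omega\right]$ with $\omega\in\Sigma^{n}$.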
\subsection{Graph Directed Markov Systems\label{sub:Graph-Directed-Markov}}
In this section we will first recall the definition of a graph directed
Markov system (GDMS), which was introduced by Mauldin and Urba\'nski
\cite{MR2003772}. Subsequently, we will introduce the notion of a linear GDMS associated to a free group and certain radial limit sets.
\begin{defn}
\label{def:gdms}
A \emph{graph directed Markov system (GDMS)} $\Phi:=\left(V,\left(X_{v}\right)_{v\in V},E,i,t,\left(\phi_{e}\right)_{e\in E},A\right)$
consists of a finite vertex set $V$, a family of nonempty compact
metric spaces $\left(X_{v}\right)_{v\in V}$, a countable edge set
$E$, the maps $i,t:E\rightarrow V$ defining the initial and terminal
vertex of an edge, a family of injective contractions $\phi_{e}:X_{t\left(e\right)}\rightarrow X_{i\left(e\right)}$
with Lipschitz constants bounded by some $0<s<1$, and an edge incidence
matrix $A=\left(a\left(e,f\right)\right)\in\left\{ 0,1\right\} ^{E\times E}$
such that $a\left(e,f\right)=1$ implies $t\left(e\right)=i\left(f\right)$,
for all $e,f\in E$. For a GDMS $\Phi$ there exists a canonical \emph{coding
map} $\pi_{\Phi}:\Sigma_{\Phi}\rightarrow\oplus_{v\in V}X_{v}$, which is defined by
\[
\bigcap_{n\in\mathbb{N}}\phi_{\omega|_{n}}\left(X_{t\left(\omega_{n}\right)}\right)=\left\{ \pi_{\Phi}\left(\omega\right)\right\},
\]
where $\oplus_{v\in V}X_{v}$ denotes the disjoint union of the sets
$X_{v}$, $\phi_{\omega|_n}:=\phi_{\omega_1}\circ\cdots\circ\phi_{\omega_n}$ and $\Sigma_{\Phi}$ denotes the Markov shift with alphabet
$E$ and incidence matrix $A$. We set
\[
J\left(\Phi\right):=\pi_{\Phi}\left(\Sigma_{\Phi}\right),\quad J^{*}\left(\Phi\right):=\bigcup_{F\subset E,\card\left(F\right)<\infty}\pi_{\Phi}\left(\Sigma_{\Phi}\cap F^{\mathbb{N}}\right),
\]
and refer to $J\left(\Phi\right)$ as the \emph{limit set of $\Phi$}.
\end{defn}
The following notion was introduced in \cite[Section 4]{MR2003772}.
\begin{defn}
\label{def:cgdms}The GDMS $\Phi=\left(V,\left(X_{v}\right)_{v\in V},E,i,t,\left(\phi_{e}\right)_{e\in E},A\right)$
is called \emph{conformal }if the following conditions are satisfied.
\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate}
\item \label{enu:cgdms-a-phasespace}For $v\in V$, the \emph{phase space}
$X_{v}$ is a compact connected subset of a Euclidean space $\left(\mathbb{R}^{D},\Vert\cdot\Vert\right)$,
for some $D\geq1$, such that $X_{v}$ is equal to the closure of
its interior, that is $X_{v}=\overline{\Int(X_{v})}$.
\item \label{enu:cgdms-b-osc}(\textit{Open set condition} (OSC)) For all $a,b\in E$ with $a\ne b$, we have
that
\[
\phi_{a}\left(\Int(X_{t\left(a\right)})\right)\cap\phi_{b}\left(\Int(X_{t\left(b\right)})\right)=\emptyset.
\]
\item \label{enu:cgdms-c-conformalextension}For each vertex $v\in V$ there
exists an open connected set $W_{v}\supset X_{v}$ such that the map
$\phi_{e}$ extends to a $C^{1}$ conformal diffeomorphism of $W_{v}$
into $W_{i\left(e\right)}$, for every $e\in E$ with $t\left(e\right)=v$.
\item \label{enu:cgdms-d-coneproperty}(\textit{Cone property})
There exist $l>0$ and $0<\gamma<\pi/2$ such that for each $x\in X:=\oplus_{v\in V}X_{v}$
there exists an open cone $\Con(x,\gamma,l)\subset\Int(X)$ with vertex
$x$, central angle of measure $\gamma$ and altitude $l$.
\item \label{enu:cgdms-e-hoelderderivative}There are two constants $L\geq1$
and $\alpha>0$ such that for each $e\in E$ and $x,y\in X_{t\left(e\right)}$
we have
\[
\big|\left|\phi_{e}'(y)\right|-\left|\phi_{e}'(x)\right|\big|\leq L\inf_{u\in W_{t\left(e\right)}}\left|\phi_{e}'\left(u\right)\right|\Vert y-x\Vert^{\alpha}.
\]
\end{enumerate}
\end{defn}
The \emph{associated geometric potential $\zeta_{\Phi}:\Sigma_{\Phi}\rightarrow\mathbb{R}^{-}$
of a conformal GDMS $\Phi$} is defined by
\[
\zeta_{\Phi}\left(\omega\right):=\log\left|\phi_{\omega_{1}}'\left(\pi_{\Phi}\left(\sigma\left(\omega\right)\right)\right)\right|,\mbox{ for all }\omega\in\Sigma_{\Phi}.
\]
A Markov shift $\Sigma$ with a finite or countable alphabet $I$ is called \emph{finitely irreducible}
if there exists a finite set $\Lambda\subset\Sigma^{*}$ such that
for all $i,j\in I$ there exists a word $\omega\in\Lambda\cup\left\{ \emptyset\right\} $
such that $i\omega j\in\Sigma^{*}$ (see \cite[Section 2]{MR2003772}). Note that if $I$ is finite, then $\Sigma$ is finitely irreducible if and only if $\Sigma$ is irreducible.
The following result from \cite[Theorem 3.7]{MR2413348} shows that
in the sense of Hausdorff dimension, the limit set of a conformal
GDMS with a finitely irreducible incidence matrix can be exhausted
by its finitely generated subsystems. The last equality in
Theorem \ref{thm:cgdms-bowen-formula} follows from \cite[Corollary 2.10]{JaerischKessebohmer10}
since $-\zeta_{\Phi}$, the negative of the associated geometric potential of the conformal GDMS $\Phi$, is bounded from below by $-\log\left(s\right)>0$, where $s$ denotes the uniform bound of the Lipschitz constants of the contractions of $\Phi$ (see Definition \ref{def:gdms}).
\begin{thm}
[Generalized Bowen's formula]\label{thm:cgdms-bowen-formula}Let
$\Phi$ be a conformal GDMS such that $\Sigma_{\Phi}$ is finitely
irreducible. We then have that
\[
\dim_{H}\left(J\left(\Phi\right)\right)=\dim_{H}\left(J^{*}\left(\Phi\right)\right)=\inf\left\{ s\in\mathbb{R}:\mathcal{P}\left(s\zeta_{\Phi}\right)\le0\right\} =\mathcal{P}_{-\zeta_{\Phi}}\left(0,\Sigma_{\Phi}^{*}\right).
\]
\end{thm}
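As a toy example, suppose that $\Phi$ is a conformal GDMS whose contractions are similarities with a common contraction ratio $0<r<1$ and for which $\Sigma_{\Phi}$ is the full shift over the finite edge set $E$. Then $\zeta_{\Phi}\equiv\log r$, hence $\mathcal{P}\left(s\zeta_{\Phi}\right)=\log\card\left(E\right)+s\log r$ and Theorem \ref{thm:cgdms-bowen-formula} gives $\dim_{H}\left(J\left(\Phi\right)\right)=\log\card\left(E\right)/\left(-\log r\right)$. For the middle-third Cantor set, where $\card\left(E\right)=2$ and $r=1/3$, this recovers the classical value $\log2/\log3$.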
Let us now give the definition of a GDMS $\Phi$ associated
to the free group $\mathbb{F}_{d}$ of rank $d\geq2$ and introduce the radial limit set of a normal subgroup
$N$ of $\mathbb{F}_{d}$ with respect to $\Phi$.
\begin{defn}
\label{def:gdms-associated-to-freegroup-and-radiallimitsets} Let $\Phi=\left(V,\left(X_{v}\right)_{v\in V},E,i,t,\left(\phi_{e}\right)_{e\in E},A\right)$ be a GDMS and let $d\ge2$. The GDMS $\Phi$ is said to be \emph{associated to $\mathbb{F}_{d}=\langle g_{1},\dots,g_{d}\rangle$}, if $V=\left\{ g_1, g_{1}^{-1},\dots,g_d, g_{d}^{-1}\right\} $,
$E=\left\{ \left(v,w\right)\in V^{2}:v\neq w^{-1}\right\} $, the maps $i,t:E\rightarrow V$
are given by $i\left(v,w\right)=v$ and $t\left(v,w\right)=w$, for each $(v,w)\in E$, and
the incidence matrix $A=\left(a\left(e,f\right)\right)\in\left\{ 0,1\right\} ^{E\times E}$
satisfies $a\left(e,f\right)=1$ if and only if $t\left(e\right)=i\left(f\right)$,
for all $e,f\in E$. If additionally $\Phi$ is a conformal GDMS such that, for each $(v,w)\in E$, the map $\phi_{\left(v,w\right)}$ is a similarity for which the contraction ratio is independent
of $w$, then $\Phi$ is called a \emph{linear GDMS associated to} $\mathbb{F}_d$.
For a subgroup $H$ of $\mathbb{F}_{d}$ and a GDMS $\Phi$ associated
to $\mathbb{F}_{d}$, the\emph{ radial }and the\emph{ uniformly radial limit
set of $H$ with respect to $\Phi$} are respectively given by
\begin{align*}
\Lr(H,\Phi) & :=\pi_{\Phi}\left\{ \left(v_{i},w_{i}\right)\in\Sigma_{\Phi}:\exists\gamma\in\mathbb{F}_{d}\mbox{ such that for infinitely many }n\in\mathbb{N},\, v_{1}\cdot\dots\cdot v_{n}\in H\gamma\right\}
\\
\text{and}
\\
\Lur(H,\Phi) & :=\pi_{\Phi}\left\{ \left(v_{i},w_{i}\right)\in\Sigma_{\Phi}:\exists\Gamma\subset\mathbb{F}_{d}\mbox{ finite such that for all }n\in\mathbb{N},\, v_{1}\cdot\dots\cdot v_{n}\in H \Gamma \right\} .
\end{align*}
\end{defn}
\begin{rem*}It is clear that if $\Phi$ is a GDMS generated by a family of similarity maps, then $\Phi$ automatically satisfies (c) and (e) in Definition \ref{def:cgdms} of a conformal GDMS.
\end{rem*}
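For instance, for $H=\mathbb{F}_{d}$ both defining conditions are trivially satisfied (choose $\gamma:=\id$, respectively $\Gamma:=\left\{ \id\right\} $), so that $\Lr(\mathbb{F}_{d},\Phi)=\Lur(\mathbb{F}_{d},\Phi)=J\left(\Phi\right)$. For a proper subgroup $H$, these sets single out those limit points whose coding visits a fixed coset of $H$ infinitely often, respectively stays in finitely many cosets of $H$.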
\subsection{Random Walks on Graphs and Amenability}
In this section we collect some useful definitions and results concerning random walks on graphs. We will mainly follow \cite{MR1743100}.
\begin{defn}
\label{def:graphs}A \emph{graph $X=\left(V,E\right)$} consists of
a countable \emph{vertex set} $V$ and an \emph{edge set }$E\subset V\times V$
such that $\left(v,w\right)\in E$ if and only if $\left(w,v\right)\in E$.
We write $v\sim w$ if $\left(v,w\right)\in E$, which defines a
symmetric relation on $V$. For all $v,w\in V$ and $k\in\mathbb{N}_{0}$,
a \emph{path of length $k$ from $v$ to $w$} is a sequence $\left(v_{0},\dots,v_{k}\right)\in V^{k+1}$
such that $v_{0}=v$, $v_{k}=w$ and $v_{i-1}\sim v_{i}$ for all
$1\le i\le k$. For all $v\in V$, let $\mathrm{deg}\left(v\right):=\card\left\{ w\in V:w\sim v\right\} $
denote the \emph{degree }of the vertex $v$. The graph $\left(V,E\right)$
is called \emph{connected} if for all $v,w\in V$ with $v\neq w$,
there exists $k\in\mathbb{N}$ and a path of length $k$ from $v$ to $w$.
For a connected graph $X=\left(V,E\right)$ and $v,w\in V$ we let
$d_{X}\left(v,w\right)$ denote the minimal length of all paths from
$v$ to $w$, which defines the \emph{graph metric }$d_{X}\left(\cdot,\cdot\right):V\times V\rightarrow\mathbb{N}_{0}$.
The graph $\left(V,E\right)$ is said to have \emph{bounded geometry}
if it is connected and if $\sup_{v\in V}\left\{ \mathrm{deg}\left(v\right)\right\} <\infty$.
For each set of vertices $A\subset V$ we define the (inner) boundary $dA:=\left\{ v\in A:\exists w\in V\setminus A\mbox{ such that }v\sim w\right\} $.
\end{defn}
We now recall an important property of groups, which was introduced
by von Neumann \cite{vonNeumann1929amenabledef} under the German
name {\em messbar}. Later, groups with this property were renamed {\em amenable
groups} by Day \cite{day1949amenabledef} and also referred to as {\em groups with full
Banach mean value} by F\o lner \cite{MR0079220}.
\begin{defn}
\label{def:amenable-group}A discrete group\emph{
$G$ }is said to be\emph{ amenable} if there exists a finitely additive probability
measure $\nu$ on the set of all subsets of $G$ which is invariant
under left multiplication by elements of $G$, that is, $\nu\left(A\right)=\nu\left(g\left(A\right)\right)$
for all $g\in G$ and $A\subset G$.
\end{defn}
We will also require the concept of an amenable graph, which extends the concept of amenability for groups (see Proposition \ref{pro:groupamenable-iff-graphamenable}
below).
\begin{defn}
\label{def:amenable-graph}A graph $X=\left(V,E\right)$
with bounded geometry is called \emph{amenable} if and only if there
exists no $\kappa>0$ such that $\card\left(A\right)\le\kappa\,\card\left(dA\right)$ for all
finite sets $A\subset V$; equivalently, if $\inf\left\{ \card\left(dA\right)/\card\left(A\right):A\subset V\mbox{ finite},\, A\neq\emptyset\right\} =0$.
\end{defn}
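To briefly illustrate this definition, consider the graph with vertex set $\mathbb{Z}$ in which $v\sim w$ if and only if $\left|v-w\right|=1$. For $A_{n}:=\left\{ 1,\dots,n\right\} $ we have $\card\left(dA_{n}\right)\le2$, so $\card\left(dA_{n}\right)/\card\left(A_{n}\right)$ tends to zero and this graph is amenable. In contrast, for the $q$-regular tree with $q\ge3$ it is well known that there exists $\kappa>0$ such that $\card\left(A\right)\le\kappa\,\card\left(dA\right)$ for all finite sets $A$ of vertices, so these trees are not amenable.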
For the study of graphs in terms of amenability, the following definition
is useful.
\begin{defn}
A \emph{rough isometry (or quasi-isometry) between
two metric spaces} $\left(Y,d_{Y}\right)$ and $\left(Y',d_{Y'}\right)$
is a map $\varphi:Y\rightarrow Y'$ which has the following properties.
There exist constants $A,B>0$ such that for all $y_{1},y_{2}\in Y$
we have
\[
A^{-1}d_{Y}\left(y_{1},y_{2}\right)-A^{-1}B\le d_{Y'}\left(\varphi\left(y_{1}\right),\varphi\left(y_{2}\right)\right)\le Ad_{Y}\left(y_{1},y_{2}\right)+B
\]
and for all $y'\in Y'$ we have
\[
d_{Y'}\left(y',\varphi\left(Y\right)\right)\le B.
\]
Two metric spaces \emph{$\left(Y,d_{Y}\right)$ }and\emph{ $\left(Y',d_{Y'}\right)$
}are said to be \emph{roughly isometric} if there exists a rough isometry between
$\left(Y,d_{Y}\right)$ and $\left(Y',d_{Y'}\right)$. For connected
graphs $X=\left(V,E\right)$ and $X'=\left(V',E'\right)$ with graph
metrics $d_{X}$ and $d_{X'}$ we say that the graphs $X$ and $X'$\emph{
}are\emph{ roughly isometric} if the metric spaces $\left(V,d_{X}\right)$
and $\left(V',d_{X'}\right)$ are roughly isometric.
\end{defn}
The next theorem states that amenability of graphs is invariant under rough isometries
(\cite[Theorem 4.7]{MR1743100}).
\begin{thm}
\label{thm:amenability-is-roughisometry-invariant}Let $X$ and $X'$
be graphs with bounded geometry such that $X$ and $X'$ are roughly
isometric. We then have that
$X$ is amenable if and only if $X'$ is amenable.
\end{thm}
The Cayley graph of a group provides the connection between groups
and graphs.
\begin{defn}
We say that a set $S\subset G$ is a \emph{symmetric
set of generators of the group} $G$ if $\left\langle S\right\rangle =G$
and if $g^{-1}\in S$, for all $g\in S$. For a group
$G$ and a symmetric set of generators $S$, the \emph{Cayley graph
of $G$ with respect to $S$} is the graph with vertex set $G$ and
edge set $E:=\left\{ \left(g,g'\right)\in G\times G:g^{-1}g'\in S\right\} $.
We denote this graph by $X\left(G,S\right)$.
\end{defn}
The next proposition shows that the notions of amenability for groups and for graphs are compatible
(\cite[Proposition 12.4]{MR1743100}).
\begin{prop}
\label{pro:groupamenable-iff-graphamenable}A finitely generated group
$G$ is amenable if and only if one (and hence every) Cayley graph $X\left(G,S\right)$
of $G$ with respect to a finite symmetric set of generators $S\subset G$
is amenable.
\end{prop}
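For example, the Cayley graph of the free group $\mathbb{F}_{d}$, $d\ge2$, with respect to the symmetric set of generators $\left\{ g_{1}^{\pm1},\dots,g_{d}^{\pm1}\right\} $ is the $2d$-regular tree, which is a non-amenable graph; hence, $\mathbb{F}_{d}$ is a non-amenable group by Proposition \ref{pro:groupamenable-iff-graphamenable}. On the other hand, it is well known that all abelian and, more generally, all solvable groups are amenable.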
Let us now relate amenability of graphs to spectral properties
of transition operators.
\begin{defn}
For a finite or countably infinite discrete vertex
set $V$, we say that the matrix $P=\left(p\left(v,w\right)\right)\in\mathbb{R}^{V\times V}$
is a \emph{transition matrix on $V$} if $p\left(v,w\right)\ge0$
and $\sum_{u\in V}p\left(v,u\right)=1$, for all $v,w\in V$. A Borel
measure $\nu$ supported on $V$ is\emph{ $P$-invariant }if we have
$\sum_{u\in V}\nu\left(u\right)p\left(u,w\right)=\nu\left(w\right)$,
for all $w\in V$.
\end{defn}
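A canonical example is the simple random walk on a graph $X=\left(V,E\right)$ with bounded geometry, for which $p\left(v,w\right):=1/\mathrm{deg}\left(v\right)$ if $v\sim w$, and $p\left(v,w\right):=0$ otherwise. The measure $\nu$ given by $\nu\left(v\right):=\mathrm{deg}\left(v\right)$ is then $P$-invariant, since $\sum_{u\in V}\nu\left(u\right)p\left(u,w\right)=\card\left\{ u\in V:u\sim w\right\} =\mathrm{deg}\left(w\right)=\nu\left(w\right)$, for all $w\in V$.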
The following definition introduces two properties which make a transition matrix adapted to a given graph (see \cite[(1.20, 1.21)]{MR1743100}).
\begin{defn}
\label{def:uniformly-irred-bounded-range}For
a connected graph $X=\left(V,E\right)$ and a transition matrix $P=\left(p\left(v,w\right)\right)\in\mathbb{R}^{V\times V}$
on $V$, we say that \emph{$P$ is uniformly irreducible with respect
to $X$} if there exist $K\in\mathbb{N}$ and $\epsilon>0$ such that
for all $v,w\in V$ satisfying $v\sim w$ there exists $k\in\mathbb{N}$ with
$k\le K$ such that $p^{\left(k\right)}\left(v,w\right)\ge\epsilon$.
We say that $P$ has \emph{bounded range with respect to $X$} if
there exists $R>0$ such that $p\left(v,w\right)=0$ whenever $d_{X}\left(v,w\right)>R$.
\end{defn}
Let $P=\left(p\left(v,w\right)\right)\in\mathbb{R}^{V\times V}$ be a transition
matrix on $V$ with $P$-invariant Borel measure $\nu$ on $V$. It
is well-known that $P$ defines a linear operator on $\ell^{2}\left(V,\nu\right)$
through the equation
\[
Pf\left(v\right):=\sum_{w\in V}p\left(v,w\right)f\left(w\right),\quad\mbox{for all }v\in V\mbox{ and }f\in\ell^{2}\left(V,\nu\right)
\]
and that the norm of this operator is less than or equal to one. For the
spectral radius $\rho\left(P\right)$ of the operator $P$ on $\ell^{2}\left(V,\nu\right)$
we cite the following result from \cite{MR2338235}. This result has
a rather long history going back to \cite{MR0109367,MR0112053} (see
also \cite{MR0159230,MR678175,MR743744,MR894523,MR938257,MR943998,MR1245225,MR1743100}).
\begin{thm}
[Ortner, Woess]\label{thm:woess-amenability-randomwalk-characterization}Let
$X=\left(V,E\right)$ be a graph with bounded geometry and let $P$
denote a transition matrix on $V$ such that $P$ is uniformly irreducible
with respect to $X$ and has bounded range with respect
to $X$. If there exists a $P$-invariant Borel measure $\nu$ on
$V$ and a constant $C\ge1$ such that $C^{-1}\le\nu\left(w\right)\le C$,
for all $w\in V$, then we have that $\rho\left(P\right)=1$ if and only if $X$ is amenable.
\end{thm}
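For instance, for the simple random walk on the graph of $\mathbb{Z}$ considered above one has $\rho\left(P\right)=1$, in accordance with the amenability of this graph, whereas a classical result of Kesten \cite{MR0109367} shows that for the simple random walk on the $q$-regular tree, $q\ge3$, we have $\rho\left(P\right)=2\sqrt{q-1}/q<1$.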
\section{Thermodynamic Formalism for Group-extended Markov Systems\label{sec:Thermodynamic-Formalism-grpextension}}
Throughout this section our setting is as follows.
\begin{enumerate}
\item $\Sigma$ is a Markov shift with finite alphabet $I$ and left-shift map
$\sigma:\Sigma\rightarrow\Sigma$.
\item $G$ is a countable discrete group with Haar measure (that is, counting
measure) $\lambda$.
\item $\Psi:I^{*}\rightarrow G$ is a semigroup homomorphism such that the
following property holds. For all $a,b\in I$ there exists $\gamma\in\left(\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)\cup\left\{ \emptyset\right\} $
such that $a\gamma b\in\Sigma^{*}$.
\item $\varphi:\Sigma\rightarrow\mathbb{R}$ denotes a H\"older continuous potential
with $\sigma$-invariant Gibbs measure $\mu_{\varphi}$, $\mathcal{L}_{\varphi}:C_b(\Sigma)\rightarrow C_b(\Sigma) $ denotes the Perron-Frobenius operator associated to $\varphi$, and $h:\Sigma \rightarrow \mathbb{R}$ denotes the unique H\"older continuous eigenfunction of $\mathcal{L}_{\varphi}$ with corresponding eigenvalue $\mathrm{e}^{\mathcal{P}(\varphi)}$ whose existence is guaranteed by Theorem \ref{thm:perron-frobenius-thm-urbanski}.
\end{enumerate}
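A guiding example to keep in mind is the following: take $\Sigma$ to be the full shift over a finite symmetric set of generators $I=S$ of a countable group $G$, and let $\Psi:I^{*}\rightarrow G$ be the evaluation homomorphism given by $\Psi\left(\omega_{1}\dots\omega_{n}\right):=\omega_{1}\cdot\dots\cdot\omega_{n}$. In this case, property (3) is trivially satisfied with $\gamma:=\emptyset$.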
In this section we will address the following problem. \begin{problem}
\label{mainproblem}How is amenability of $G$ reflected in the relationship
between $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)$
and $\mathcal{P}\left(\varphi\right)$?
\end{problem}
It turns out that in order to investigate Problem \ref{mainproblem}
it is helpful to consider group-extended Markov systems (defined below), which were
studied in \cite{MR1803461,MR1906436} for certain abelian groups.
\begin{defn}
\label{def:skew-product-dynamics}The
skew-product dynamics on $\left(\Sigma\times G,\sigma\rtimes\Psi\right)$, for which the transformation $\sigma\rtimes\Psi:\Sigma\times G\rightarrow\Sigma\times G$ is given by
\[
\left(\sigma\rtimes\Psi\right)\left(\omega,g\right):=\left(\sigma\left(\omega\right),g\Psi\left(\omega_{1}\right)\right),\ \text{ for all }
\left(\omega,g\right)\in\Sigma\times G,
\]
is called a \emph{group-extended Markov system}. We let $\pi_{1}:\Sigma\times G\rightarrow\Sigma\mbox{ and }\pi_{2}:\Sigma\times G\rightarrow G$
denote the projections to the first and to the second factor of $\Sigma\times G$. \end{defn}
\begin{rem*}
Throughout, we assume that $\Sigma\times G$ is equipped with the product topology. Note that by item
(3) of our standing assumptions we have that the group-extended Markov
system $\left(\Sigma\times G,\sigma\rtimes\Psi\right)$ is topologically
transitive if and only if $\Psi\left(\Sigma^{*}\right)=G$.
\end{rem*}
\subsection{Perron-Frobenius Theory\label{sub:Perron-Frobenius-Theory}}
In this section, we investigate the relationship
between the pressure $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)$
and the spectral radius of a Perron-Frobenius operator associated
to $\left(\Sigma\times G,\sigma\rtimes\Psi\right)$, which will be introduced in Definition \ref{def:Koopman-M-PF} below. Combining this with
results concerning transition operators of random walks on graphs, which will be given in Section
\ref{sec:Random-Walks-Application}, we are able to give a complete answer
to Problem \ref{mainproblem} for potentials $\varphi$ depending only on a finite number
of coordinates (see Theorem \ref{thm:amenability-dichotomy-markov}).
Let us begin by stating the following lemma. The proof is straightforward and is thus left to the reader.
\begin{lem}
\label{lem:mu-phi-prod-counting-is-invariant}The measure $\mu_{\varphi}\times\lambda$
is $\left(\sigma\rtimes\Psi\right)$-invariant.
\end{lem}
Next, we define the Koopman operator (\cite{koopman31,MR1244104})
and the Perron-Frobenius operator associated to the group-extended
Markov system $\left(\Sigma\times G,\sigma\rtimes\Psi\right)$. Note
that the previous lemma ensures that these operators are well-defined. We denote by $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$ the Hilbert space of real-valued functions on $\Sigma\times G$ which are square-integrable with respect to $\mu_{\varphi}\times \lambda$.
\begin{defn}
\label{def:Koopman-M-PF}The \emph{Koopman operator} $U:L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$ is given by
\[
U\left(f\right):= f\circ\left(\sigma\rtimes\Psi\right),
\]
and the \emph{Perron-Frobenius operator} $\mathcal{L}_{\varphi \circ \pi_1}:L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
is given by
\[
\mathcal{L}_{\varphi \circ \pi_1}:= \mathrm{e}^{\mathcal{P}\left(\varphi\right)}M_{h\circ\pi_{1}}\circ U^{*}\circ\left(M_{h\circ\pi_{1}}\right)^{-1},
\]
where the \emph{multiplication operator} $M_{h\circ\pi_{1}}:L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
is given by
\[
M_{h\circ\pi_{1}}\left(f\right):=f\cdot\left(h\circ\pi_{1}\right)
\]
and $U^{*}:L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$ denotes the adjoint of $U$.
\end{defn}
The proof of the next lemma is straightforward and therefore omitted.
\begin{lem}
\label{fac:pf-fact}For the bounded linear operators $U,\mathcal{L}_{\varphi \circ \pi_1}:L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$,
the following properties hold.
\begin{enumerate}
\item \label{enu:UisIsometry}$U$ is an isometry, so we have that $\Vert U\Vert=\rho\left(U\right)=1$, where $\rho$ denotes the spectral radius of $U$.
\item \label{enu:Uadjoint-is-PF}For $f\in L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
and $\left(\mu_{\varphi}\times\lambda\right)$-almost every $\left(\omega,g\right)\in\Sigma\times G$
we have that \textup{
\[
\mathcal{L}_{\varphi\circ\pi_{1}}\left(f\right)\left(\omega,g\right)=\sum_{i\in I:i\omega_{1}\in\Sigma^{2}}\mathrm{e}^{\varphi\left(i\omega\right)}f\left(i\omega,g\Psi\left(i\right)^{-1}\right).
\]
}
\item \label{enu:.pf-fact-spectralradius-pressure}For the spectral radius
of $\mathcal{L}_{\varphi\circ\pi_{1}}$ we obtain that $\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\right)=\mathrm{e}^{\mathcal{P}\left(\varphi\right)}$.
\end{enumerate}
\end{lem}
\global\long\def\pr#1{\mathbbm{1}_{\left\{ \pi_{2}=#1\right\} }}
\begin{rem*} The representation of $\mathcal{L}_{\varphi\circ\pi_{1}}$ in Lemma \ref{fac:pf-fact} (\ref{enu:Uadjoint-is-PF}) extends Definition \ref{def:perron-frobenius} of the Perron-Frobenius operator for Markov shifts with a finite alphabet.
\end{rem*}
The next lemma gives relationships between $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)$
and $\mathcal{L}_{\varphi\circ\pi_{1}}$. Before stating the lemma, let us fix some notation. We write $\mathbbm{1}_{A}$ for the
characteristic function of a set $A$ and we use $\left\{ \pi_{2}=g\right\} $
to denote the set $\pi_{2}^{-1}\left\{ g\right\}$, for each $g\in G$. Further, let $\mathcal{B}\left(\Sigma\times G\right)$ denote the Borel $\sigma$-algebra on $\Sigma \times G$.
\begin{lem}
\label{lem:perronfrobenius-pressure}For all sets $A,B\in\mathcal{B}\left(\Sigma\times G\right)$
and for each $n\in\mathbb{N}$ we have that
\[
\frac{\min h}{\max h}\mu_{\varphi}\left(A\cap\left(\sigma\rtimes\Psi\right)^{-n}\left(B\right)\right)\le\mathrm{e}^{-n\mathcal{P}\left(\varphi\right)}\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{A}\right),\mathbbm{1}_{B}\right)\le\frac{\max h}{\min h}\mu_{\varphi}\left(A\cap\left(\sigma\rtimes\Psi\right)^{-n}\left(B\right)\right).
\]
Moreover, for all $g,g'\in G$ we have that
\[
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\pr g\right),\pr{g'}\right)=\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right).
\]
\end{lem}
\begin{proof}
For the first assertion, observe that by the definition of $\mathcal{L}_{\varphi\circ\pi_{1}}$ we have that
\begin{eqnarray*}
\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{A}\right),\mathbbm{1}_{B}\right) & = & \mathrm{e}^{n\mathcal{P}\left(\varphi\right)}\left(M_{h\circ\pi_{1}}\circ\left(U^{*}\right)^{n}\circ\left(M_{h\circ\pi_{1}}\right)^{-1}\left(\mathbbm{1}_{A}\right),\mathbbm{1}_{B}\right)\\
& = & \mathrm{e}^{n\mathcal{P}\left(\varphi\right)}\left(\left(M_{h\circ\pi_{1}}\right)^{-1}\left(\mathbbm{1}_{A}\right),\left(M_{h\circ\pi_{1}}\left(\mathbbm{1}_{B}\right)\right)\circ\left(\sigma\rtimes\Psi\right)^{n}\right).
\end{eqnarray*}
Since the continuous function $h:\Sigma\rightarrow\mathbb{R}^{+}$ is bounded
away from zero and infinity on the compact set $\Sigma$, the first
assertion follows.
The second assertion follows from the first, if we set $A:=\left\{ \pi_{2}=g\right\} $
and $B:=\left\{ \pi_{2}=g'\right\} $ and use the Gibbs property (\ref{eq:gibbs-equation})
of $\mu_{\varphi}$.
\end{proof}
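To indicate the connection with random walks, consider for the moment the special case in which $\Sigma$ is the full shift over $I$ and $\varphi\equiv-\log\card\left(I\right)$, so that $\mathcal{P}\left(\varphi\right)=0$, $h\equiv\mathbbm{1}$ and $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$. A straightforward computation based on Lemma \ref{fac:pf-fact} (\ref{enu:Uadjoint-is-PF}) shows that
\[
\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\pr{\id}\right),\pr g\right)=\card\left(I\right)^{-n}\card\left\{ \omega\in I^{n}:\Psi\left(\omega\right)=g\right\} ,
\]
which is the $n$-step transition probability from $\id$ to $g$ of the random walk on $G$ whose increments are independent and uniformly distributed on the family $\left(\Psi\left(i\right)\right)_{i\in I}$. In this special case, the second assertion of Lemma \ref{lem:perronfrobenius-pressure} thus identifies $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)$ with the exponential decay rate of these transition probabilities.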
As an immediate consequence of the previous lemma, we obtain the following
upper bound for $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)$
in terms of the spectral radius of $\mathcal{L}_{\varphi\circ\pi_{1}}$.
\begin{cor}
\label{cor:upperboundviaspectralradius}Let $V\subset L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
be a closed $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant linear
subspace such that $\pr g, \pr{g'}\in V$, for some $g,g'\in G$. We then have that
\[
\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)\le\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right).
\]
\end{cor}
\begin{proof}
By the Cauchy-Schwarz inequality and Gelfand's formula (\cite[Theorem 10.13]{MR0365062}) for the spectral
radius, we have that
\[
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\left(\pr g\right),\pr{g'}\right)\le\limsup_{n\rightarrow\infty}\frac{1}{n}\log\Vert\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\Vert=\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right).
\]
Combining the above inequality with the second assertion of Lemma \ref{lem:perronfrobenius-pressure} completes
the proof.
\end{proof}
Recall that for a closed linear subspace $V\subset L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$,
a bounded linear operator $T:V\rightarrow V$ is called {\em positive} if
$T\left(V^{+}\right)\subset V^{+}$, where the positive cone $V^{+}$
is defined by $V^{+}:=\left\{ f\in V:f\ge0\right\} $.
The following lemma will be crucial in order to obtain equality in the inequality stated in Corollary \ref{cor:upperboundviaspectralradius}.
The lemma extends a result of Gerl (see \cite{MR938257} and also
\cite[Lemma 10.1]{MR1743100}).
\begin{lem}
\label{lem:lowerbound-selfadjoint}Let $V$ be a closed linear subspace
of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
such that $\left\{ \mathbbm{1}_{\left\{ \pi_{2}=g\right\} }:g\in G\right\} \subset V$.
Let $T:V\rightarrow V$ be a self-adjoint bounded linear
operator on $V$ which is positive and which satisfies $\ker\left(T\right)\cap V^{+}=\left\{ 0\right\} $.
We then have that
\[
\sup_{g,g'\in G}\left\{ \limsup_{n\rightarrow\infty}\left|\left(T^{n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} =\Vert T\Vert=\rho\left(T\right).
\]
\end{lem}
\begin{proof}
Since $T$ is self-adjoint, it follows that $\Vert T\Vert=\rho\left(T\right)$.
As in the proof of Corollary \ref{cor:upperboundviaspectralradius},
one immediately verifies that
\[
\sup_{g,g'\in G}\left\{ \limsup_{n\rightarrow\infty}\left|\left(T^{n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} \le\rho\left(T\right).
\]
Let us first give an outline for the proof of the opposite inequality.
We will first prove that for all $f\in V^{+}$ with $f\neq0$, the sequence $\left(\left(T^{n+1}f,T^{n+1}f\right)/\left(T^{n}f,T^{n}f\right)\right)_{n\in\mathbb{N}_{0}}$,
is non-decreasing. This will then imply that the following limits
exist and are equal:
\begin{equation}
\lim_{n\rightarrow\infty}\frac{\left(T^{n+1}f,T^{n+1}f\right)}{\left(T^{n}f,T^{n}f\right)}=\lim_{n\rightarrow\infty}\left(T^{n}f,T^{n}f\right)^{1/n}.\label{eq:lowerboundselfadjoint-1a}
\end{equation}
From this we obtain for every $f\in V^{+}$ with $f\neq0$ that
\begin{equation}
\frac{\left(Tf,Tf\right)}{\left(f,f\right)}\le\lim_{n\rightarrow\infty}\left(T^{n}f,T^{n}f\right)^{1/n}.\label{eq:lowerboundselfadjoint-1aa}
\end{equation}
Subsequently, we make use of the fact that
\[
D':=\left\{ f\in L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\cap L^{\infty}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right):\,\, f\big|_{\left\{ \pi_{2}=g\right\} }=0\textrm{ for almost every }g\in G\right\}
\]
is dense in $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
and hence, $D:=D'\cap V$ is dense in $V$. For $f\in D$ we show
that
\[
\lim_{n\rightarrow\infty}\left(T^{n}f,T^{n}f\right)^{1/n}\le\sup_{g,g'\in G}\left\{ \limsup_{n\rightarrow\infty}\left|\left(T^{2n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} .
\]
Combining this with (\ref{eq:lowerboundselfadjoint-1aa}) applied
to $\left|f\right|$, we conclude for $f\in D$ with $f\neq0$ that
\begin{equation}
\frac{\left(Tf,Tf\right)}{\left(f,f\right)}\le\frac{\left(T\left|f\right|,T\left|f\right|\right)}{\left(\left|f\right|,\left|f\right|\right)}\le\sup_{g,g'\in G}\left\{ \limsup_{n\rightarrow\infty}\left|\left(T^{2n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} .\label{eq:lowerboundselfadjoint-1}
\end{equation}
Since $D$ is dense in $V$, there exists a sequence $(f_n)_{n\in \mathbb{N}}\in D^{\mathbb{N}}$ such that $(f_n,f_n)=1$, for each $n\in \mathbb{N}$, and $\lim_n(Tf_n,Tf_n)=\Vert T \Vert^{2}$. Combining this observation with the estimate in (\ref{eq:lowerboundselfadjoint-1}), we conclude that $\Vert T\Vert^{2}\le\sup_{g,g'\in G}\left\{ \limsup_{n}\left|\left(T^{2n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} $, and hence that $\Vert T\Vert\le\sup_{g,g'\in G}\left\{ \limsup_{n}\left|\left(T^{2n}\left(\pr g\right),\pr{g'}\right)\right|^{1/(2n)}\right\} \le\sup_{g,g'\in G}\left\{ \limsup_{n}\left|\left(T^{n}\left(\pr g\right),\pr{g'}\right)\right|^{1/n}\right\} $.
Let us now turn to the details. We first verify that for every $f\in V^{+}$
with $f\neq0$, the sequence $\left(a_{n}\right)_{n\in\mathbb{N}_{0}}$ of
positive real numbers, given for $n\in\mathbb{N}_{0}$ by $a_{n}:=\left(T^{n+1}f,T^{n+1}f\right)/\left(T^{n}f,T^{n}f\right)$ is non-decreasing. Using that $T$ is self-adjoint
and applying the Cauchy-Schwarz inequality, we have for $n\in\mathbb{N}_{0}$
that
\begin{eqnarray}
\left(T^{n+1}f,T^{n+1}f\right)^{2} & = & \left(T^{n}f,T^{n+2}f\right)^{2}\le\left(T^{n}f,T^{n}f\right)\left(T^{n+2}f,T^{n+2}f\right).\label{eq:monotony-spectral-radius}
\end{eqnarray}
Since $\left(T^{n}f,T^{n}f\right)\neq0$ for all $n\in\mathbb{N}_{0}$ by
our hypothesis, we can multiply both sides of (\ref{eq:monotony-spectral-radius})
by $\left(T^{n+1}f,T^{n+1}f\right)^{-1}\left(T^{n+2}f,T^{n+2}f\right)^{-1}$,
which proves that $\left(a_{n}\right)_{n\in\mathbb{N}_{0}}$ is non-decreasing.
Hence, we have that $\lim_{n\rightarrow \infty}a_{n}\in\mathbb{R}^{+}\cup\left\{ \infty\right\} $
exists. Observing that $\log\left(T^{n}f,T^{n}f\right)$ is equal
to the telescoping sum $\log\left(f,f\right)+\sum_{j=0}^{n-1}\log a_{j}$
and using that $\lim_{n\rightarrow \infty}\log\left(a_{n}\right)$ is equal to its Ces\`{a}ro
mean, we deduce that
\[
\lim_{n\rightarrow\infty}\frac{1}{n}\log\left(T^{n}f,T^{n}f\right)=\lim_{n\rightarrow\infty}\frac{1}{n}\log\left(f,f\right)+\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{j=0}^{n-1}\log a_{j}=\lim_{n\rightarrow\infty}\log a_{n},
\]
which proves (\ref{eq:lowerboundselfadjoint-1a}). Since $\left(T^{n}f,T^{n}f\right)^{1/n}\le\Vert T\Vert^{2}\max\left\{ \Vert f\Vert_{2}^2,1\right\} $,
for all $n\in\mathbb{N}$, we have that the limits in (\ref{eq:lowerboundselfadjoint-1a})
are both finite.
It remains to prove that (\ref{eq:lowerboundselfadjoint-1}) holds
for every $f\in D$ with $f\neq0$. By definition of $D$, there exists
a finite set $G_{0}\subset G$ such that $f=\sum_{g\in G_{0}}f\pr g$.
Since $T$ is positive and self-adjoint, we conclude that
\begin{eqnarray*}
\left(T^{n}f,T^{n}f\right) & \le & \left(T^{n}\left|f\right|,T^{n}\left|f\right|\right)=\left(T^{2n}\left|f\right|,\left|f\right|\right)=\sum_{g,g'\in G_{0}}\left(T^{2n}\left|f\pr g\right|,\left|f\pr{g'}\right|\right)\\
& \le & \sum_{g,g'\in G_0}\Vert f\Vert_{L^{\infty}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)}^{2}\left(T^{2n}\pr g,\pr{g'}\right).
\end{eqnarray*}
Finally, raising both sides of the previous inequality to the power
$1/n$ and letting $n$ tend to infinity gives
\[
\lim_{n\rightarrow\infty}\left(T^{n}f,T^{n}f\right)^{1/n}\le\max_{g,g'\in G_{0}}\limsup_{n\rightarrow\infty}\left|\left(T^{2n}\pr g,\pr{g'}\right)\right|^{1/n},
\]
and the estimate in (\ref{eq:lowerboundselfadjoint-1}) follows. The
proof is complete.
\end{proof}
Regarding the requirements of the previous lemma, we prove the
following for $\mathcal{L}_{\varphi\circ\pi_{1}}$.
\begin{lem}
\label{fac:positivity-injectivitiy-of-adjoint-of-pf}Let $V$ be a
closed $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant linear subspace
of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
and suppose that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$. Then, $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$
is a positive operator for which $\ker\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)\cap V^{+}=\left\{ 0\right\} .$ Further, if $\left\{ f^{-}:f\in V\right\} \subset V$ then $\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}$
is a positive operator and if there exists $g\in V$ with $g>0$,
then we have that $\ker\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\right)\cap V^{+}=\left\{ 0\right\} .$ \end{lem}
\begin{proof}
Clearly, by definition of $\mathcal{L}_{\varphi\circ\pi_{1}}$,
we have that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$ is positive.
Now let $f\in\ker\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)\cap V^{+}$. Since $\mu_{\varphi}$ is a fixed point of $\mathcal{L}_{\varphi}^*$, one deduces by the monotone convergence theorem and by the definition of $\mathcal{L}_{\varphi\circ\pi_{1}}$ that $\int fd\!\left(\mu_{\varphi}\times\lambda\right)=\int\mathcal{L}_{\varphi\circ\pi_{1}}\left(f\right)d\!\left(\mu_{\varphi}\times\lambda\right)$. Hence, $f\in\ker\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)\cap V^{+}$
implies $\int fd\!\left(\mu_{\varphi}\times\lambda\right)=0$ and so, $f=0$.
We now turn our attention to the adjoint operator $\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}$. Let $f\in V^+$. Since $\left\{ f^{-}:f\in V\right\} \subset V$ and using that $\mathcal{L}_{\varphi\circ\pi_{1}}$ is positive, we obtain that
\[
0\ge\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right),\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right)\right)^{-}\right)=\left(f,\mathcal{L}_{\varphi\circ\pi_{1}}\left(\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right)\right)^{-}\right)\right)\ge0.
\]
Thus, $0=\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right),\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right)\right)^{-}\right)=-\Vert\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right)\right)^{-}\Vert_{2}^{2}$
and so, $\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}$
is positive. Now let $f\in\ker\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\right)\cap V^{+}$
be given and assume that there exists $g\in V$ with $g>0$. We then have that
\[
0=\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\left(f\right),g\right)=\left(f,\mathcal{L}_{\varphi\circ\pi_{1}}\left(g\right)\right).
\]
Since $g>0$, we have $\mathcal{L}_{\varphi\circ\pi_{1}}\left(g\right)>0$,
which implies that $f=0$. The proof is complete.
\end{proof}
It turns out that the Perron-Frobenius operator is not self-adjoint in general. In fact, as we will see in the following remark, this operator is self-adjoint only in very special cases. Therefore, we will introduce the notion of an asymptotically self-adjoint operator in Definition \ref{def:asymptotically-sefadjoint} below.
\begin{rem*} We observe that the requirement that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$
is self-adjoint, for some closed linear subspace $V$ of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$,
is rather restrictive. Indeed, suppose that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$ is self-adjoint
for a closed linear subspace $V$ of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
satisfying $\left\{ \mathbbm{1}_{\left[i\right]\times\left\{ g\right\} }:i\in I,g\in G\right\} \subset V$.
It follows that $ji\in\Sigma^{2}$ and $\Psi\left(i\right)=\Psi\left(j\right)^{-1}$, for all $i,j\in I$ such that $ij\in\Sigma^{2}$.
In particular, we have that $\Psi\left(\Sigma^{*}\right)$ has at
most two elements. To prove this, let $ij\in\Sigma^{2}$ be given. By the Gibbs property (\ref{eq:gibbs-equation}) of $\mu_\varphi$ we have that $\mu_{\varphi}[ij]>0$. Setting $C:=\frac{\max h}{\min h}\mathrm{e}^{-\mathcal{P}(\varphi)}$ we deduce from Lemma \ref{lem:perronfrobenius-pressure} that
\[
0<\left(\mu_{\varphi}\times\lambda\right)\left(\left(\left[i\right]\times\left\{ \id\right\} \right)\cap\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[j\right]\times\left\{ \Psi(i)\right\} \right)\right)\le C \left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\left(\mathbbm{1}_{\left[i\right]\times\left\{ \id\right\} }\right),\mathbbm{1}_{\left[j\right]\times\left\{ \Psi\left(i\right)\right\} }\right).
\]
Using that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$ is self-adjoint and again by Lemma \ref{lem:perronfrobenius-pressure}, we conclude that
\[
\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\left(\mathbbm{1}_{\left[i\right]\times\left\{ \id\right\} }\right),\mathbbm{1}_{\left[j\right]\times\left\{ \Psi\left(i\right)\right\} }\right)\le C \left(\mu_{\varphi}\times\lambda\right)\left(\left(\left[j\right]\times\left\{ \Psi\left(i\right)\right\} \right)\cap\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[i\right]\times\left\{ \id\right\} \right)\right).
\]
Combining the previous two estimates, we conclude that $\left(\left[j\right]\times\left\{ \Psi\left(i\right)\right\} \right)\cap\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[i\right]\times\left\{ \id\right\} \right)$
is nonempty, hence $ji\in\Sigma^{2}$ and $\Psi\left(i\right)\Psi\left(j\right)=\id$.
\end{rem*}
The following definition introduces a concept which is slightly weaker than self-adjointness.
\begin{defn}\label{def:asymptotically-sefadjoint}Let $V$ be a closed linear subspace of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$ and let $T:V\rightarrow V$ be a positive bounded linear operator. We say that $T$ is \emph{asymptotically self-adjoint} if there exist sequences
$\left(c_{m}\right)_{m\in\mathbb{N}}\in\left(\mathbb{R}^{+}\right)^{\mathbb{N}}$ and $\left(N_{m}\right)_{m\in\mathbb{N}}\in\mathbb{N}_0^{\mathbb{N}}$
with the property that $\lim_{m\to\infty}\left(c_{m}\right)^{1/m}=1$ and $\lim_{m\to\infty}m^{-1}N_{m}=0$, and such that for all non-negative functions $f,g\in V$ and for all $n\in\mathbb{N}$ we have
\begin{equation}
\left(T^{n}f,g\right)\le c_{n}\sum_{i=0}^{N_{n}}\left(f,T^{n+i}g\right).
\label{eq:asymptotically-selfadjoint-condition}
\end{equation}
\end{defn}
\begin{rem*}
Note that $T$ is asymptotically self-adjoint if and only if $T^{*}$ is
asymptotically self-adjoint. We also remark that it clearly suffices to verify
(\ref{eq:asymptotically-selfadjoint-condition}) on a norm-dense subset
of non-negative functions in $V$.
\end{rem*}
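For example, every positive self-adjoint bounded linear operator $T:V\rightarrow V$ is asymptotically self-adjoint with the constant sequences $c_{m}:=1$ and $N_{m}:=0$, since then (\ref{eq:asymptotically-selfadjoint-condition}) holds with equality.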
The next proposition shows that if $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$
is asymptotically self-adjoint, for some closed linear subspace $V$ of $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$,
then we can relate the supremum of $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)$,
for $g\in G$, to the spectral radius of $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$.
The proof, which makes use of Lemma \ref{lem:lowerbound-selfadjoint}
and Lemma \ref{fac:positivity-injectivitiy-of-adjoint-of-pf}, is
inspired by \cite[Proposition 1.6]{MR2338235}.
\begin{prop}
\label{pro:lowerbound-asymptselfadjoint}Suppose that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$
and let $V\subset L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
be a closed linear $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant
subspace such that $\left\{ f^{-}:f\in V\right\} \subset V$ and $\left\{ \pr g:g\in G\right\} \subset V$.
If \textup{$\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$ } is asymptotically
self-adjoint, then we have that
\[
\sup_{g\in G}\left\{ \mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)\right\} =\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right).
\]
\end{prop}
\begin{proof}
By Corollary \ref{cor:upperboundviaspectralradius}, we have $\sup_{g\in G}\left\{ \mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)\right\} \le\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)$.
Let us turn to the proof of the converse inequality. Using that $\Vert\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{m}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{m}\big|_{V}\Vert=\Vert\mathcal{L}_{\varphi\circ\pi_{1}}^{m}\big|_{V}\Vert^{2}$
for each $m\in\mathbb{N}$, it follows from Gelfand's formula (\cite[Theorem 10.13]{MR0365062}) that
\begin{equation}
\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)=\lim_{n\rightarrow\infty}\Vert\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\Vert^{1/(2n)}.\label{eq:lowerbound-asymptselfadjoint-1}
\end{equation}
Our next aim is to apply Lemma \ref{lem:lowerbound-selfadjoint} to the self-adjoint operator
$T_n:=\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}$,
for each $n\in\mathbb{N}$. We have to verify that $T_n$
is positive and that $\ker(T_n)\cap V^{+}=\left\{ 0\right\} $,
for each $n\in\mathbb{N}$. By Lemma \ref{fac:positivity-injectivitiy-of-adjoint-of-pf},
we have that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$ is positive
and $\ker\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)\cap V^{+}=\left\{ 0\right\} $.
Fix some arbitrary order for the elements in $G$, say $G=\left\{ g_{i}:i\in\mathbb{N}\right\} $. Using that $V$ is a closed linear subspace containing $\left\{ \pr{g_{i}}:i\in\mathbb{N}\right\} $,
we obtain that $g:=\sum_{j\in\mathbb{N}}2^{-j}\mathbbm{1}_{\left\{ \pi_{2}=g_{j}\right\} }>0$
is an element of $V$. Since $\left\{ f^{-}:f\in V\right\} \subset V$
by our assumptions, Lemma \ref{fac:positivity-injectivitiy-of-adjoint-of-pf}
implies that $\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}$
is positive with $\ker\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)^{*}\right)\cap V^{+}=\left\{ 0\right\} $.
Hence, for each $n\in\mathbb{N}$ we have that $T_n$
is positive and $\ker(T_n)\cap V^{+}=\left\{ 0\right\} $.
Consequently, it follows from Lemma \ref{lem:lowerbound-selfadjoint} that for each $n\in\mathbb{N}$ we have
\begin{eqnarray}
& & \Vert\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\Vert=\sup_{g,g'\in G}\left\{ \limsup_{k\rightarrow\infty}\left(\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{k}\left(\pr g\right),\pr{g'}\right)^{1/k}\right\} .\label{eq:lowerbound-asymptselfadjoint-2}
\end{eqnarray}
Let $g,g'\in G$ be given. Using that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}$
is asymptotically self-adjoint, with sequences $\left(c_{m}\right)_{m\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}$
and $\left(N_{m}\right)_{m\in\mathbb{N}}\in\mathbb{N}_0^{\mathbb{N}}$ as in Definition \ref{def:asymptotically-sefadjoint},
we estimate for all $n\in\mathbb{N}$ that
\begin{eqnarray*}
& & \limsup_{k\rightarrow\infty}\big(\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{k}\left(\pr g\right),\pr{g'}\big)^{1/k}\\
& \le & \limsup_{k\rightarrow\infty}\big(c_{n}^{k}\sum_{i_{1}=0}^{N_{n}}\sum_{i_{2}=0}^{N_{n}}\dots\sum_{i_{k}=0}^{N_{n}}\big(\mathcal{L}_{\varphi\circ\pi_{1}}^{2nk+\sum_{j=1}^{k}i_{j}}\big|_{V}\left(\pr g\right),\pr{g'}\big)\big)^{1/k}\\
& \le & c_{n}\limsup_{k\rightarrow\infty}\big(\left(N_{n}+1\right)^{k}\max_{\left(i_{1},\dots,i_{k}\right)\in\left\{ 0,\dots,N_{n}\right\} ^{k}}\big\{\big(\mathcal{L}_{\varphi\circ\pi_{1}}^{2nk+\sum_{j=1}^{k}i_{j}}\big|_{V}\left(\pr g\right),\pr{g'}\big)\big\}\big)^{1/k}.
\end{eqnarray*}
Let $\epsilon>0$. Since we have $\limsup_{m\rightarrow\infty}\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{m}\big|_{V}\left(\pr g\right),\pr{g'}\right)^{1/m}=\mathrm{e}^{\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)}$
by Lemma \ref{lem:perronfrobenius-pressure}, we obtain that
\begin{align*}
& \limsup_{k\rightarrow\infty}\big(\max_{\left(i_{1},\dots,i_{k}\right)\in\left\{ 0,\dots,N_{n}\right\} ^{k}}\big\{\big(\mathcal{L}_{\varphi\circ\pi_{1}}^{2nk+\sum_{j=1}^{k}i_{j}}\big|_{V}\left(\pr g\right),\pr{g'}\big)\big\}\big)^{1/k}\le\\
& \limsup_{k\rightarrow\infty}\big(\max_{\left(i_{1},\dots,i_{k}\right)\in\left\{ 0,\dots,N_{n}\right\} ^{k}}\max\big\{\mathrm{e}^{\left(2nk+kN_{n}\right)\left(\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)+\epsilon\right)},\mathrm{e}^{2nk\left(\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)+\epsilon\right)}\big\}\big)^{1/k}.
\end{align*}
Since $\epsilon>0$ was chosen to be arbitrary, our previous estimates
imply that for each $n\in\mathbb{N}$ and for all $g,g'\in G$ we have
\begin{eqnarray}
& & \limsup_{k\rightarrow\infty}\big(\left(\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{k}\left(\pr g\right),\pr{g'}\big)^{1/k}\label{eq:lower-bound-estimate}\\
& \le & c_{n}\left(N_{n}+1\right)\max\left\{ \mathrm{e}^{\left(2n+N_{n}\right)\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)},\mathrm{e}^{2n\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)}\right\} .\nonumber
\end{eqnarray}
Combining (\ref{eq:lowerbound-asymptselfadjoint-1}), (\ref{eq:lowerbound-asymptselfadjoint-2})
and (\ref{eq:lower-bound-estimate}), we obtain that
\begin{eqnarray*}
& & \rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V}\right)=\lim_{n\rightarrow\infty}\Vert\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\right)^{*}\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\big|_{V}\Vert^{1/(2n)}\\
& \le & \limsup_{n\rightarrow\infty}\big(c_{n}\left(N_{n}+1\right)\sup_{g,g'\in G}\max\big\{\mathrm{e}^{\left(2n+N_{n}\right)\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)},\mathrm{e}^{2n\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g^{-1}g'\right\} \cap\Sigma^{*}\right)}\big\}\big)^{1/(2n)}\\
& \le & \lim_{n\rightarrow\infty}\left(c_{n}\left(N_{n}+1\right)\right)^{1/(2n)}\sup_{g\in G}\max\big\{\mathrm{e}^{\left(1+N_{n}/(2n)\right)\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)},\mathrm{e}^{\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)}\big\}.
\end{eqnarray*}
Since $\lim_{n\rightarrow \infty}\left(c_{n}\right)^{1/n}=1$ and $\lim_{n\rightarrow \infty}n^{-1}N_{n}=0$,
the proof is complete.
\end{proof}
In the following definition, we introduce certain important closed linear subspaces of the space $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$.
\begin{defn}
\label{def:C-k-measurable-subspaces} For $j\in\mathbb{N}_{0}$, let $V_{j}\subset L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
denote the subspace consisting of all $f\in L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
which possess a $\mathcal{C}(j)\otimes\mathcal{B}\left(G\right)$-measurable
representative in $L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$, where $\mathcal{C}(j)\otimes\mathcal{B}\left(G\right)$ denotes
the product $\sigma$-algebra of $\mathcal{C}(j)$ and the Borel $\sigma$-algebra $\mathcal{B}\left(G\right)$ on $G$.
\end{defn}
Note that $V_{j}$ is a Hilbert space for each $j\in\mathbb{N}_{0}$. The next
lemma gives an invariance property for $V_{j}$ with respect to $\mathcal{L}_{\varphi\circ\pi_{1}}$
for $\mathcal{C}(k)$-measurable potentials $\varphi$.
\begin{lem}
\label{lem:vp_invariantsubspaces}Let $\varphi:\Sigma\rightarrow\mathbb{R}$
be $\mathcal{C}(k)$-measurable for some $k\in\mathbb{N}_{0}$. Then $V_{j}$
is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant for each $j\in\mathbb{N}$
with $j\ge k-1$. Moreover, for all $j\in\mathbb{N}_{0}$ we have that $U\left(V_{j}\right)\subset V_{j+1}$. \end{lem}
\begin{proof}
If $f\in V_{j}$ possesses a $\mathcal{C}\left(j\right)\otimes\mathcal{B}\left(G\right)$-measurable representative, for some $j\in \mathbb{N}_0$, then it follows from Lemma \ref{fac:pf-fact}
(\ref{enu:Uadjoint-is-PF}) that $\mathcal{L}_{\varphi\circ\pi_{1}}\left(f\right)$
is given by
\[
\mathcal{L}_{\varphi\circ\pi_{1}}\left(f\right)\left(\omega,g\right)=\sum_{i\in I:i\omega_{1}\in\Sigma^{2}}\mathrm{e}^{\varphi\left(i\omega\right)}f\left(i\omega,g\Psi\left(i\right)^{-1}\right).
\]
Note that the right-hand side of the previous equation depends only on
$g\in G$ and on the elements $\omega_{1},\dots,\omega_{\max\left\{ k-1,j-1,1\right\} }\in I$.
Consequently, for $j\in\mathbb{N}$ with $j\ge k-1$, we have that $V_{j}$
is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant.
The remaining assertion follows immediately from the definition of
$U$.
\end{proof}
We need the following notion of symmetry.
\begin{defn}\label{def:symmetry} We say that $\varphi:\Sigma\rightarrow\mathbb{R}$
is \emph{asymptotically symmetric with respect to $\Psi$ }if there exist
sequences $\left(c_{m}\right)_{m\in\mathbb{N}}\in\left(\mathbb{R}^{+}\right)^{\mathbb{N}}$
and $\left(N_{m}\right)_{m\in\mathbb{N}}\in\mathbb{N}_0^{\mathbb{N}}$ with the property that
$\lim_{m}\left(c_{m}\right)^{1/m}=1$ and $\lim_{m}m^{-1}N_{m}=0$, and
such that for each $g\in G$ and for all $n\in\mathbb{N}$ we have
\begin{equation}
\sum_{\omega\in\Sigma^{n}:\Psi\left(\omega\right)=g}\mathrm{e}^{S_{\omega}\varphi}\le c_{n}\sum_{i=0}^{N_{n}}\sum_{\tau\in\Sigma^{n+i}:\Psi\left(\tau\right)=g^{-1}}\mathrm{e}^{S_{\tau}\varphi}.\label{eq:symmetry-definition}
\end{equation}
\end{defn}
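As a simple example, suppose that $\Sigma$ is the full shift over a finite symmetric set of generators $I=S$ of $G$, that $\Psi$ is the evaluation homomorphism, and that $\varphi\left(\omega\right)=\tilde{\varphi}\left(\omega_{1}\right)$ for some $\tilde{\varphi}:I\rightarrow\mathbb{R}$ with $\tilde{\varphi}\left(i\right)=\tilde{\varphi}\left(i^{-1}\right)$, for all $i\in I$. The bijection of $\Sigma^{n}$ given by $\omega_{1}\dots\omega_{n}\mapsto\omega_{n}^{-1}\dots\omega_{1}^{-1}$ then maps $\left\{ \omega\in\Sigma^{n}:\Psi\left(\omega\right)=g\right\} $ onto $\left\{ \tau\in\Sigma^{n}:\Psi\left(\tau\right)=g^{-1}\right\} $ and leaves $S_{\omega}\varphi$ unchanged, so that (\ref{eq:symmetry-definition}) holds with $c_{n}:=1$ and $N_{n}:=0$.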
\begin{rem}
\label{rem:symmetry-invariant-coboundary}If $\varphi$ is asymptotically
symmetric with respect to $\Psi$, then it is straightforward to verify
that, for each H\"older continuous $\psi:\Sigma\rightarrow\mathbb{R}^{+}$ and each
$c\in\mathbb{R}$, the function $\varphi+\log\psi-\log\psi\circ\sigma+c$ is also asymptotically
symmetric with respect to $\Psi$. Using the Gibbs property (\ref{eq:gibbs-equation})
of $\mu_{\varphi}$, an equivalent way to state that $\varphi$ is
asymptotically symmetric with respect to $\Psi$ is the following: there exist
sequences $\left(c_{m}'\right)_{m\in\mathbb{N}}\in\left(\mathbb{R}^{+}\right)^{\mathbb{N}}$
and $\left(N_{m}'\right)_{m\in\mathbb{N}}\in\mathbb{N}_0^{\mathbb{N}}$ with the property that
$\lim_{m}\left(c_{m}'\right)^{1/m}=1$, $\lim_{m}m^{-1}N_{m}'=0$
and such that for each $g\in G$ and for all $n\in\mathbb{N}$ we have
\[
\sum_{\omega\in\Sigma^{n}:\Psi\left(\omega\right)=g}\mu_{\varphi}\left(\left[\omega\right]\right)\le c_{n}'\sum_{i=0}^{N_{n}'}\sum_{\tau\in\Sigma^{n+i}:\Psi\left(\tau\right)=g^{-1}}\mu_{\varphi}\left(\left[\tau\right]\right).
\]
\end{rem}
The next lemma gives a necessary and sufficient condition for $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$
to be asymptotically self-adjoint.
\begin{lem}
\label{lem:weaklysymmetry-is-weakselfadjoint}Let $\varphi:\Sigma\rightarrow\mathbb{R}$
be $\mathcal{C}(k)$-measurable, for some $k\in\mathbb{N}_{0}$. For each
$j\in\mathbb{N}$ with $j\ge k-1$, we then have that $\varphi$ is asymptotically symmetric with respect to $\Psi$ if and only if $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$ is asymptotically self-adjoint.
\end{lem}
\begin{proof}
We first observe that by Lemma \ref{lem:perronfrobenius-pressure} and by the Gibbs property (\ref{eq:gibbs-equation}) of $\mu_{\varphi}$, there exists $K>0$ such that for all $n\in \mathbb{N}$ and for all $g,g' \in G$ we have
\begin{equation}
K^{-1}\le\frac{\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} }\right),\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)}{\sum_{\tau\in\Sigma^{n}:g\Psi\left(\tau\right)=g'}\mathrm{e}^{S_{\tau}\varphi}}\le K,\label{eq:perronfrobenius-sums}
\end{equation}
unless the numerator and the denominator in (\ref{eq:perronfrobenius-sums}) are both equal to zero.
From (\ref{eq:perronfrobenius-sums}) we obtain that $\varphi$ is asymptotically symmetric with respect to $\Psi$ if and only if there exist sequences $\left(c_{m}\right)\in\left(\mathbb{R}^{+}\right)^{\mathbb{N}}$ and $\left(N_{m}\right)\in\mathbb{N}_0^{\mathbb{N}}$, as in Definition \ref{def:symmetry}, such that for all $n\in\mathbb{N}$ and $g,g'\in G$ we have
\begin{equation}
\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} }\right),\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)\le c_{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} },\sum_{i=0}^{N_{n}}\mathcal{L}_{\varphi\circ\pi_{1}}^{n+i}\left(\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)\right).\label{eq:symmetry-via-perronfrobenius-sums}
\end{equation}
Since $V_0 \subset V_j$ for each $j\in \mathbb{N}_0$, we obtain that if $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$ is asymptotically self-adjoint, then $\varphi$ is asymptotically symmetric with respect to $\Psi$.
For the opposite implication, let $j\in \mathbb{N}$, $j\ge k-1$ and suppose that $\varphi$ is asymptotically symmetric with respect to $\Psi$. By Lemma \ref{lem:vp_invariantsubspaces}, we have that $V_{j}$ is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant. Next, we note that since $\varphi$ is asymptotically symmetric with respect to $\Psi$, we have that, for each $\omega \in \Sigma^j$, there exists $\kappa(\omega)\in \Sigma^*$ such that $\Psi(\omega)\Psi(\kappa(\omega))=\id$. Combining this with item (3) of our standing assumptions, we conclude that for all $\omega,\omega'\in \Sigma^j$ there exists a finite-to-one map which maps $\tau\in\Sigma^*$ to an element $\omega\gamma_1 \kappa(\omega)\gamma_2 \tau \gamma_3 \omega' \in \Sigma^*$, where $\Psi(\gamma_i)=\id$ and $\gamma_i$ depends only on the preceding and successive symbol, for each $i\in \{ 1,2,3\}$. Hence, in view of Lemma \ref{lem:perronfrobenius-pressure} and the Gibbs property (\ref{eq:gibbs-equation}) of $\mu_{\varphi}$, and by using that $\Sigma^j$ is finite, we conclude that there exist $N\in\mathbb{N}$ and $C>0$ (depending on $j$) such that for all $n\in\mathbb{N}$, $g,g'\in G$ and for all $\omega, \omega' \in \Sigma^j$ we have
\begin{equation}
\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} }\right),\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)\le C \sum_{r=0}^N{\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n+r}\left(\mathbbm{1}_{[\omega]\times\left\{ g\right\} }\right),\mathbbm{1}_{[\omega']\times\left\{ g'\right\} }\right)}.
\label{eq:symmetry-via-perronfrobenius-sums-2}
\end{equation}
By first using (\ref{eq:symmetry-via-perronfrobenius-sums}) and then (\ref{eq:symmetry-via-perronfrobenius-sums-2}), we deduce that for all $n\in\mathbb{N}$, $g,g' \in G$ and for all $\omega,\omega'\in \Sigma^j$,
\begin{align*}
\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} }\right),\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right) & \le c_{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} },\sum_{i=0}^{N_{n}}\mathcal{L}_{\varphi\circ\pi_{1}}^{n+i}\left(\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)\right) \\
& \le c_{n} C\sum_{i=0}^{N_{n}} \sum_{r=0}^N{\left( \mathbbm{1}_{[\omega]\times\left\{ g\right\} },\mathcal{L}_{\varphi\circ\pi_{1}}^{n+i+r}\left(\mathbbm{1}_{[\omega']\times\left\{ g'\right\} }\right) \right)} \\
& \le c_{n} CN \sum_{i=0}^{N_{n}+N} {\left( \mathbbm{1}_{[\omega]\times\left\{ g\right\} },\mathcal{L}_{\varphi\circ\pi_{1}}^{n+i}\left(\mathbbm{1}_{[\omega']\times\left\{ g'\right\} }\right) \right)}.
\end{align*}
Since $\left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{[\omega]\times\left\{ g\right\} }\right),\mathbbm{1}_{[\omega']\times\left\{ g'\right\} }\right) \le \left(\mathcal{L}_{\varphi\circ\pi_{1}}^{n}\left(\mathbbm{1}_{\Sigma\times\left\{ g\right\} }\right),\mathbbm{1}_{\Sigma\times\left\{ g'\right\} }\right)$ for all $\omega, \omega' \in \Sigma^j$, it follows that $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$ is asymptotically self-adjoint with respect to the sequences $\left(c_{m}'\right)\in\left(\mathbb{R}^{+}\right)^{\mathbb{N}}$ and $\left(N_{m}'\right)\in\mathbb{N}_0^{\mathbb{N}}$, which are given by $c_{m}':=c_{m}CN$ and $N_{m}':=N_{m}+N$. The proof is complete.
\end{proof}
The following corollary is a consequence of Proposition \ref{pro:lowerbound-asymptselfadjoint}
and clarifies the relation between $\sup_{g\in G}\left\{ \mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)\right\} $
and the spectral radius of $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$
provided that $\varphi$ is asymptotically symmetric with respect to $\Psi$.
\begin{cor}
\label{cor:pressure-is-spectralradius}Let $\varphi:\Sigma\rightarrow\mathbb{R}$
be $\mathcal{C}(k)$-measurable, for some $k\in\mathbb{N}_{0}$, and suppose
that $\varphi$ is asymptotically symmetric with respect to $\Psi$. For
each $j\in\mathbb{N}$ with $j\ge k-1$, we then have that
\[
\sup_{g\in G}\left\{ \mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)\right\} =\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right).
\]
\end{cor}
\begin{proof}
Fix $j\in\mathbb{N}$ with $j\ge k-1$. By Lemma \ref{lem:vp_invariantsubspaces},
we then have that $V_{j}$ is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant.
Let us first verify that without loss of generality we may assume
that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$. Otherwise, by Theorem \ref{thm:perron-frobenius-thm-urbanski},
there exists a $\mathcal{C}\!\left(\max\left\{ k-1,1\right\} \right)$-measurable
function $h:\Sigma\rightarrow\mathbb{R}^{+}$ with $\mathcal{L}_{\varphi}\left(h\right)=\mathrm{e}^{\mathcal{P}\left(\varphi\right)}h$.
For $\tilde{\varphi}:=\varphi+\log h-\log h\circ\sigma-\mathcal{P}\left(\varphi\right)$,
we then have that $\mathcal{L}_{\tilde{\varphi}}\mathbbm{1}=\mathbbm{1}$, $\mathcal{P}\left(\tilde{\varphi}\right)=0$
and, for each $g\in G$,
\[
\mathcal{P}\left(\tilde{\varphi},\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)=\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)-\mathcal{P}\left(\varphi\right).
\]
Further, we have that
$\tilde{\varphi}$ is asymptotically symmetric with respect to $\Psi$, by Remark \ref{rem:symmetry-invariant-coboundary}. It
remains to show that $V_{j}$ is $\mathcal{L}_{\tilde{\varphi}\circ\pi_{1}}$-invariant
and that
\begin{equation}
\log\rho\left(\mathcal{L}_{\tilde{\varphi}\circ\pi_{1}}\big|_{V_{j}}\right)=\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)-\mathcal{P}\left(\varphi\right).\label{eq:spectrum-under-cocycle}
\end{equation}
Since $h$ is $\mathcal{C}\!\left(\max\left\{ k-1,1\right\} \right)$-measurable,
we have that $V_{j}$ is $M_{h\circ\pi_{1}}$-invariant and, by the definition of the Perron-Frobenius operator, we obtain that
\[
\mathcal{L}_{\tilde{\varphi}\circ\pi_{1}}\big|_{V_{j}}=\mathrm{e}^{-\mathcal{P}\left(\varphi\right)}\left(M_{h\circ\pi_{1}}\big|_{V_{j}}\right)^{-1}\circ\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)\circ\left(M_{h\circ\pi_{1}}\big|_{V_{j}}\right).
\]
We conclude that $V_{j}$ is $\mathcal{L}_{\tilde{\varphi}\circ\pi_{1}}$-invariant
and that $\mathcal{L}_{\tilde{\varphi}\circ\pi_{1}}\big|_{V_{j}}$ and $\mathrm{e}^{-\mathcal{P}\left(\varphi\right)}\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$
have the same spectrum. The latter fact gives the equality in (\ref{eq:spectrum-under-cocycle}). Hence, we may assume without loss of generality that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$.
By Lemma \ref{lem:weaklysymmetry-is-weakselfadjoint}, we have that
$\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$
is asymptotically self-adjoint. Since the closed linear subspace $V_{j}\subset L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$
satisfies $\left\{ f^{-}:f\in V_{j}\right\} \subset V_{j}$ and $\left\{ \mathbbm{1}_{\left\{ \pi_{2}=g\right\} }:g\in G\right\} \subset V_{j}$,
the assertion of the corollary follows from Proposition \ref{pro:lowerbound-asymptselfadjoint}.
\end{proof}
\begin{rem*}
Note that, in particular, under the assumptions of the previous corollary
we have that $\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)$
is independent of the choice of $j\in\mathbb{N}$ with $j\ge k-1$.
\end{rem*}
\subsection{Random Walks on Graphs and Amenability\label{sec:Random-Walks-Application}}
In this section we relate the Perron-Frobenius operator to the transition
operator of a certain random walk on a graph. We start by introducing
the following graphs.
\begin{defn}
\label{def:k-cylinder-graphs}For each $j\in\mathbb{N}_{0}$, the \emph{$j$-step
graph of $\left(\Sigma\times G,\sigma\rtimes\Psi\right)$} consists
of the vertex set $\Sigma^{j}\times G$ where two vertices $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$
are connected by an edge in $X_{j}$ if and only if
\[
\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega\right]\times\left\{ g\right\} \right)\cap\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\neq\emptyset\,\,\,\mbox{or}\,\,\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\cap\left(\left[\omega\right]\times\left\{ g\right\} \right)\neq\emptyset.
\]
We use $X_{j}\left(\Sigma\times G,\sigma\rtimes\Psi\right)$ or simply
$X_{j}$ to denote this graph.
\end{defn}
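To illustrate Definition \ref{def:k-cylinder-graphs}, consider the following toy example, which is not part of our standing setting and serves purely as an illustration. Let $\Sigma$ be the full shift on $I:=\left\{ a,b\right\} $, let $G:=\mathbb{Z}$ and let $\Psi:I^{*}\rightarrow\mathbb{Z}$ be the semigroup homomorphism determined by $\Psi\left(a\right):=1$ and $\Psi\left(b\right):=-1$. Since every transition is admissible, two vertices $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{1}\times\mathbb{Z}$ are connected by an edge in $X_{1}$ if and only if $g'=g+\Psi\left(\omega\right)$ or $g=g'+\Psi\left(\omega'\right)$. Hence, the edges of $X_{1}$ are generated by the relations
\[
\left(a,n\right)\sim\left(a,n+1\right),\quad\left(b,n\right)\sim\left(b,n-1\right),\quad\left(a,n\right)\sim\left(b,n+1\right),\quad n\in\mathbb{Z},
\]
so $X_{1}$ consists of two cross-linked copies of $\mathbb{Z}$ and is roughly isometric to the Cayley graph of $\mathbb{Z}$ with respect to $\left\{ \pm1\right\} =\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}$, in accordance with Lemma \ref{lem:roughisometric-withcayleygraph} below.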
Provided that $\Psi\left(\Sigma^{*}\right)=G$, we have that each
$j$-step graph of \emph{$\left(\Sigma\times G,\sigma\rtimes\Psi\right)$}
is connected. The next lemma shows that each of these graphs is roughly
isometric to the Cayley graph of $G$ with respect to $\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}$
denoted by $X\!\left(G,\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}\right)$.
For a similar argument, see \cite{MR2338235}.
\begin{lem}
\label{lem:roughisometric-withcayleygraph}Suppose that $\Psi\left(\Sigma^{*}\right)=G$
and let $j\in\mathbb{N}_{0}$. We then have that the graphs $X_{j}\left(\Sigma\times G,\sigma\rtimes\Psi\right)$
and $X\negthinspace\left(G,\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}\right)$
are roughly isometric. \end{lem}
\begin{proof}
By identifying $\Sigma^{0}\times G$ with $G$, we clearly have that $X_{0}$ is isometric to $X\!\left(G,\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}\right)$. Suppose now that $j\in\mathbb{N}$.
We show that the map $\pi_{2}:\Sigma^{j}\times G\rightarrow G$, given
by $\pi_{2}\left(\omega,g\right):=g$, for all $\left(\omega,g\right)\in\Sigma^{j}\times G$,
defines a rough isometry between the metric spaces $\left(\Sigma^{j}\times G,d_{j}\right)$
and $\left(G,d\right)$, where $d_{j}$ denotes the graph metric on
$X_{j}$ and $d$ denotes the graph metric on $X\!\left(G,\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}\right)$.
Clearly, we have that $\pi_{2}$ is surjective. Further, by the
definition of the edge set of $X_{j}$, we have that if two vertices
$\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$
are connected by an edge in $X_{j}$, then $g$ and $g'$ are connected
by an edge in $X\!\left(G,\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}\right)$.
Hence, for all $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$
we have that $d\left(\pi_{2}\left(\omega,g\right),\pi_{2}\left(\omega',g'\right)\right)\le d_{j}\left(\left(\omega,g\right),\left(\omega',g'\right)\right)$.
It remains to show that there exist constants $A,B>0$ such that for
all $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$,
\begin{equation}
d_{j}\left(\left(\omega,g\right),\left(\omega',g'\right)\right)\le Ad\left(\pi_{2}\left(\omega,g\right),\pi_{2}\left(\omega',g'\right)\right)+B.\label{eq:rough-isometry-inequality}
\end{equation}
First note that by our assumptions, there exists a finite set $F\subset\Sigma^{*}$
with the following properties.
\begin{enumerate}
\item For all $\tau\in\Sigma^{j}$ there exists $\kappa\left(\tau\right)\in F$
such that $\Psi\left(\tau\right)\Psi\left(\kappa\left(\tau\right)\right)=\id$,
and for all $h\in\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}$ there
is $\alpha\in F$ such that $\Psi\left(\alpha\right)=h$. (We used
that $\card\left(I\right)<\infty$ and hence, $\card\left(\Sigma^{j}\right)<\infty$,
and that $\Psi\left(\Sigma^{*}\right)=G$.)
\item For all $a,b\in I$ there exists $\gamma\in\left(F\cap\Psi^{-1}\left\{ \id\right\} \right)\cup\left\{ \emptyset\right\} $
such that $a\gamma b\in\Sigma^{*}$. (We used $\card\left(I\right)<\infty$
and item (3) of our standing assumptions.)
\end{enumerate}
Setting $L:=\max_{\gamma\in F}\left|\gamma\right|$, $A:=2L$ and
$B:=3L+j$, we will show that (\ref{eq:rough-isometry-inequality})
holds. Let $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$
be given. First suppose that $d\left(\pi_{2}\left(\omega,g\right),\pi_{2}\left(\omega',g'\right)\right)=m\in\mathbb{N}$.
Hence, there exist $h_{1},\dots,h_{m}\in\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}$
such that $gh_{1}\cdot\dots\cdot h_{m}=g'$. By property (1) above,
there exist $\alpha_{1},\dots,\alpha_{m}\in F$ such that $\Psi\left(\alpha_{i}\right)=h_{i}$
for all $1\le i\le m$, and there exists $\kappa(\omega)\in F$ such that $\Psi\left(\omega\right)\Psi\left(\kappa\left(\omega\right)\right)=\id$. Then property (2) implies the existence of
$\gamma_{0},\gamma_{1},\dots,\gamma_{m+1}\in\left(F\cap\Psi^{-1}\left\{ \id\right\} \right)\cup\left\{ \emptyset\right\} $
such that $\omega\gamma_{0}\kappa\left(\omega\right)\gamma_{1}\alpha_{1}\gamma_{2}\alpha_{2}\cdot\dots\cdot\gamma_{m}\alpha_{m}\gamma_{m+1}\omega'\in\Sigma^{*}$
and hence,
\[
\left[\omega\gamma_{0}\kappa\left(\omega\right)\gamma_{1}\alpha_{1}\gamma_{2}\alpha_{2}\cdot\dots\cdot\gamma_{m}\alpha_{m}\gamma_{m+1}\omega'\right]\subset\left(\left[\omega\right]\times\left\{ g\right\} \right)\cap\left(\sigma\rtimes\Psi\right)^{-l}\left(\left[\omega'\right]\times\left\{ g'\right\} \right),
\]
where we have set $l:=\left|\omega\gamma_{0}\kappa\left(\omega\right)\gamma_{1}\alpha_{1}\gamma_{2}\alpha_{2}\cdot\dots\cdot\gamma_{m}\alpha_{m}\gamma_{m+1}\right|\le\left(2m+3\right)L+j$.
The inequality in (\ref{eq:rough-isometry-inequality}) follows. Finally,
if $d\left(\pi_{2}\left(\omega,g\right),\pi_{2}\left(\omega',g'\right)\right)=0$
then $g=g'$ and there exist $\gamma_0, \gamma_1 \in \left(F\cap \Psi^{-1}\{\id\}\right)\cup \{\emptyset\}$ such that $\omega \gamma_0 \kappa(\omega) \gamma_1 \omega' \in \Sigma^*$, which proves $d_{j}\left(\left(\omega,g\right),\left(\omega',g'\right)\right)\le B$.
The proof is complete.
\end{proof}
In the following proposition we let $\mathbb{E}\left(\cdot|\mathcal{C}(j)\right):L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)\rightarrow V_{j}$
denote the conditional expectation given $\mathcal{C}(j)$.
\begin{prop}
\label{pro:transitionmatrix-via-pfadjoint}Suppose that $\Psi\left(\Sigma^{*}\right)=G$.
Let $\varphi:\Sigma\rightarrow\mathbb{R}$ be $\mathcal{C}(k)$-measurable
for some $k\in\mathbb{N}_{0}$, such that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$. The
following holds for all $j\in\mathbb{N}$ with $j\ge k-1$. For the bounded
linear operator $\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right):V_{j}\rightarrow V_{j}$
we have that
\[
\rho\left(\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)\right)\le\Vert\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)\Vert=1
\]
with equality if and only if $G$ is amenable. In particular, we have that
\[
\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)\le\Vert\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\Vert=1
\]
with equality if and only if $G$ is amenable. \end{prop}
\begin{proof}
Fix $j\in\mathbb{N}$ with $j\ge k-1$. We first observe that for each $f\in V_{j}$
we have that $\mathbb{E}\left(U\left(f\right)|\mathcal{C}\left(j\right)\right)$
is the unique element in $V_{j}$, such that $\left(\mathbb{E}\left(U\left(f\right)|\mathcal{C}\left(j\right)\right),g\right)=\left(U\left(f\right),g\right)$ for all $g\in V_{j}$.
Since $\left(U\left(f\right),g\right)=\left(f,\mathcal{L}_{\varphi\circ\pi_{1}}\left(g\right)\right)$ and $V_{j}$ is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant
by Lemma \ref{lem:vp_invariantsubspaces}, we conclude that $\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)$
is the adjoint of $\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}$.
Since $U\left(V_{0}\right)\subset V_{1}\subset V_{j}$, we have that
the restriction of $\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)$
to $V_{0}$ is equal to $U\big|_{V_{0}}$. Because $U$ is an isometry
by Lemma \ref{fac:pf-fact} (\ref{enu:UisIsometry}), we conclude that
$\Vert\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\Vert=\Vert\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)\Vert=1$.
In order to prove the amenability dichotomy for $\rho\left(\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)\right)$
we aim to apply Theorem \ref{thm:woess-amenability-randomwalk-characterization}
to a transition matrix on the vertex set $\Sigma^{j}\times G$ of
the graph $X_{j}$. Since $\left\{ \mathbbm{1}_{\left[\omega\right]\times\left\{ g\right\} }:\left(\omega,g\right)\in\Sigma^{j}\times G\right\} $
is a basis of $V_{j}$, we obtain a Hilbert space isomorphism between
$V_{j}$ and $\ell^{2}\left(\Sigma^{j}\times G,\nu_{j}\right)$ by
setting $\nu_{j}\left(\omega,g\right):=\left(\mu_{\varphi}\times\lambda\right)\left(\left[\omega\right]\times\left\{ g\right\} \right)$
for every $\left(\omega,g\right)\in\Sigma^{j}\times G$. Using this
isomorphism and with respect to the canonical basis of $\ell^{2}\left(\Sigma^{j}\times G,\nu_{j}\right)$,
we have that $\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)$
is represented by the matrix $P=\left(p\left(\left(\omega,g\right),\left(\omega',g'\right)\right)\right)$
given by
\begin{equation}
p\left(\left(\omega,g\right),\left(\omega',g'\right)\right)=\left(U\mathbbm{1}_{\left[\omega'\right]\times\left\{ g'\right\} },\mathbbm{1}_{\left[\omega\right]\times\left\{ g\right\} }\right)\left(\left(\mu_{\varphi}\times\lambda\right)\left(\left[\omega\right]\times\left\{ g\right\} \right)\right)^{-1}.\label{eq:transitionmatrix-representation}
\end{equation}
Note that we have chosen the matrix $P$ to act on the left. Summing
over $\left(\omega',g'\right)\in\Sigma^{j}\times G$ in the previous
line, we obtain that $P$ is a transition matrix on $\Sigma^{j}\times G$.
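To spell this out (a computation we include for convenience), note that, by monotone convergence and (\ref{eq:transitionmatrix-representation}), for each $\left(\omega,g\right)\in\Sigma^{j}\times G$ we have
\[
\sum_{\left(\omega',g'\right)\in\Sigma^{j}\times G}p\left(\left(\omega,g\right),\left(\omega',g'\right)\right)=\frac{\left(U\mathbbm{1},\mathbbm{1}_{\left[\omega\right]\times\left\{ g\right\} }\right)}{\left(\mu_{\varphi}\times\lambda\right)\left(\left[\omega\right]\times\left\{ g\right\} \right)}=1.
\]
Here we have used that $U\mathbbm{1}=\mathbbm{1}$, which follows since $\left(U\mathbbm{1},f\right)=\left(\mathbbm{1},\mathcal{L}_{\varphi\circ\pi_{1}}\left(f\right)\right)=\left(\mathbbm{1},f\right)$ for all $f\in L^{2}\left(\Sigma\times G,\mu_{\varphi}\times\lambda\right)$, by the assumption $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$ and the $\left(\sigma\rtimes\Psi\right)$-invariance of $\mu_{\varphi}\times\lambda$ (Lemma \ref{lem:mu-phi-prod-counting-is-invariant}).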
Using that $\mu_{\varphi}\times\lambda$ is $\left(\sigma\rtimes\Psi\right)$-invariant
by Lemma \ref{lem:mu-phi-prod-counting-is-invariant}, one then deduces
from (\ref{eq:transitionmatrix-representation}) that $\nu_{j}$ is
$P$-invariant. Let us now verify that Theorem \ref{thm:woess-amenability-randomwalk-characterization}
is applicable to the transition matrix $P$ acting on the vertex set
$\Sigma^{j}\times G$ of $X_{j}$. Since $\card\left(I\right)<\infty$,
we have that $X_{j}$ has bounded geometry. Further, it follows immediately
from the definition of $X_{j}$ that $p\left(\left(\omega,g\right),\left(\omega',g'\right)\right)>0$
implies that $\left(\omega,g\right)\sim\left(\omega',g'\right)$ in
$X_{j}$ and hence, $P$ has bounded range ($R=1$) with respect to
$X_{j}$. It is also clear from the definition of $\nu_{j}$ that
\[
0<\min_{\omega\in\Sigma^{j}}\mu_{\varphi}\left(\left[\omega\right]\right)=\inf_{\left(\omega,g\right)\in\Sigma^{j}\times G}\nu_{j}\left(\omega,g\right)\le\sup_{\left(\omega,g\right)\in\Sigma^{j}\times G}\nu_{j}\left(\omega,g\right)=\max_{\omega\in\Sigma^{j}}\mu_{\varphi}\left(\left[\omega\right]\right)<\infty.
\]
It remains to verify that $P$ is uniformly irreducible with respect
to $X_{j}$. Let $\left(\omega,g\right),\left(\omega',g'\right)\in\Sigma^{j}\times G$
denote a pair of vertices which is connected by an edge in $X_{j}$.
By definition, we then have that $\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\cap\left(\left[\omega\right]\times\left\{ g\right\} \right)\neq\emptyset\mbox{ or }\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega\right]\times\left\{ g\right\} \right)\cap\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\neq\emptyset$.
In the first case, we have that
\begin{eqnarray*}
p\left(\left(\omega,g\right),\left(\omega',g'\right)\right) & = & \left(\mu_{\varphi}\times\lambda\right)\left(\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\cap\left(\left[\omega\right]\times\left\{ g\right\} \right)\right)\left(\mu_{\varphi}\left(\left[\omega\right]\right)\right)^{-1}\\
& = & \mu_{\varphi}\left(\left[\omega \omega'_j\right]\right)\left(\mu_{\varphi}\left(\left[\omega\right]\right)\right)^{-1}\ge\min_{\tau\in\Sigma^{j+1}}\mu_{\varphi}\left(\left[\tau\right]\right)>0.
\end{eqnarray*}
Next we consider the second case in which $\left(\sigma\rtimes\Psi\right)^{-1}\left(\left[\omega\right]\times\left\{ g\right\} \right)\cap\left(\left[\omega'\right]\times\left\{ g'\right\} \right)\neq\emptyset$
and thus, $g'\Psi\left(\omega'_{1}\right)=g$. Similarly as in the
proof of Lemma \ref{lem:roughisometric-withcayleygraph} one can verify
that there exists a finite set $F\subset\Sigma^{*}$ with the following
properties. Firstly, for all $\tau\in\Sigma^{j}\cup I$ there exists
$\kappa\left(\tau\right)\in F$ such that $\Psi\left(\tau\right)\Psi\left(\kappa\left(\tau\right)\right)=\id$
and secondly, for all $a,b\in I$ there exists $\gamma\in F\cap\Psi^{-1}\left\{ \id\right\} \cup\left\{ \emptyset\right\} $
such that $a\gamma b\in\Sigma^{*}$. Hence, there exist $\gamma_{1},\gamma_{2},\gamma_{3}\in F$
such that
\[
\left(\left[\omega\gamma_{1}\kappa\left(\omega\right)\gamma_{2}\kappa\left(\omega'_{1}\right)\gamma_{3}\omega'\right]\times\left\{ g\right\} \right)\subset\left(\left[\omega\right]\times\left\{ g\right\} \right)\cap\left(\sigma\rtimes\Psi\right)^{-l}\left(\left[\omega'\right]\times\left\{ g'\right\} \right),
\]
where we have set $l:=\left|\omega\gamma_{1}\kappa\left(\omega\right)\gamma_{2}\kappa\left(\omega'_{1}\right)\gamma_{3}\right|\le j+5\max_{\gamma\in F}\left|\gamma\right|$.
Consequently,
\[
p^{\left(l\right)}\left(\left(\omega,g\right),\left(\omega',g'\right)\right)\ge \left(\min_{\tau\in\Sigma^{j+1}}\mu_{\varphi}\left(\left[\tau\right]\right)\right)^{j+5\max_{\gamma\in F}\left|\gamma\right|} >0.
\]
Hence, with $K:=j+5\max_{\gamma\in F}\left|\gamma\right|$ and $\epsilon:=\left(\min_{\tau\in\Sigma^{j+1}}\mu_{\varphi}\left(\left[\tau\right]\right)\right)^{j+5\max_{\gamma\in F}\left|\gamma\right|} >0$
we have that $P$ is uniformly irreducible with respect to $X_{j}$.
We are now in the position to apply Theorem \ref{thm:woess-amenability-randomwalk-characterization} to the transition matrix $P$,
which gives that $\rho\left(P\right)=1$ if and only if $X_{j}$
is amenable. Since $X_j$ is roughly isometric to the Cayley graph of $G$ with respect to $\Psi\left(I\right)\cup\Psi\left(I\right)^{-1}$ by Lemma \ref{lem:roughisometric-withcayleygraph}, it follows from Theorem \ref{thm:amenability-is-roughisometry-invariant} that $X_j$ is amenable if and only if $G$ is amenable (cf. Proposition \ref{pro:groupamenable-iff-graphamenable}). Finally, since $\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)$
and $P$ are conjugate under an isomorphism of Hilbert spaces, we have $\rho\left(\mathbb{E}\left(U\left(\cdot\right)|\mathcal{C}\left(j\right)\right)\right)=\rho\left(P\right)$, which completes the proof.
\end{proof}
Summarizing the outcomes of this section, we obtain the following
main result.
\begin{thm}
\label{thm:amenability-dichotomy-markov}Suppose that $\Psi\left(\Sigma^{*}\right)=G$
and let $\varphi:\Sigma\rightarrow\mathbb{R}$ be $\mathcal{C}(k)$-measurable
for some $k\in\mathbb{N}_{0}$. The following holds for all $j\in\mathbb{N}$ with
$j\ge k-1$. We have
\begin{equation}
\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)\le\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)\le\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\right)=\mathcal{P}\left(\varphi\right),\label{eq:amenability-dichotomy-1}
\end{equation}
with equality in the second inequality if and only if $G$ is amenable.
Moreover, if $\varphi$ is asymptotically symmetric with respect to $\Psi$,
then
\begin{equation}
\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)=\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right)\label{eq:amenability-dicotomy-2}
\end{equation}
and so, $G$ is amenable if and only if $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id \right\} \cap\Sigma^{*}\right)=\mathcal{P}\left(\varphi\right)$. \end{thm}
\begin{proof}
Fix $j\in\mathbb{N}$ with $j\ge k-1$, which implies that $V_{j}$ is $\mathcal{L}_{\varphi\circ\pi_{1}}$-invariant
by Lemma \ref{lem:vp_invariantsubspaces}. As shown in the proof of
Corollary \ref{cor:pressure-is-spectralradius} we may assume without
loss of generality that $\mathcal{L}_{\varphi}\mathbbm{1}=\mathbbm{1}$ and thus $\mathcal{P}\left(\varphi\right)=0$.
The first inequality in (\ref{eq:amenability-dichotomy-1}) follows
from Corollary \ref{cor:upperboundviaspectralradius} applied to $V=V_{j}$.
The second inequality in (\ref{eq:amenability-dichotomy-1}) is an
immediate consequence of the definition of the spectrum. The amenability
dichotomy follows from Proposition \ref{pro:transitionmatrix-via-pfadjoint}.
The equality $\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\right)=\mathcal{P}\left(\varphi\right)$
follows from Lemma \ref{fac:pf-fact} (\ref{enu:.pf-fact-spectralradius-pressure}).
In order to complete the proof, we now address (\ref{eq:amenability-dicotomy-2})
under the assumption that $\varphi$ is asymptotically symmetric with respect
to $\Psi$. By Corollary \ref{cor:pressure-is-spectralradius}, we
then have that
\[
\sup_{g\in G}\left\{ \mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)\right\} =\log\rho\left(\mathcal{L}_{\varphi\circ\pi_{1}}\big|_{V_{j}}\right).
\]
Using that $\Psi\left(\Sigma^{*}\right)=G$ and item (3) of our standing assumptions, one easily verifies that the pressure $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ g\right\} \cap\Sigma^{*}\right)$
is independent of $g\in G$, which completes the proof.
\end{proof}
\begin{cor}
\label{cor:amenable-implies-fullpressure}
Let $\varphi:\Sigma\rightarrow\mathbb{R}$ be $\mathcal{C}(k)$-measurable,
for some $k\in\mathbb{N}_{0}$ and assume that $\varphi$ is asymptotically symmetric with respect to $\Psi$. If $G$ is amenable, then $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id \right\} \cap\Sigma^{*}\right)=\mathcal{P}\left(\varphi\right)$.
\end{cor}
\begin{proof}
Using item (3) of our standing assumptions and that $\varphi$ is asymptotically symmetric with respect to $\Psi$, one verifies that $G':=\Psi(\Sigma^*)$ is a subgroup of $G$. Since $G$ is amenable, it is well known that $G'$ is also amenable (see e.g. \cite[Theorem 12.2 (c)]{MR1743100}), and the corollary follows from Theorem \ref{thm:amenability-dichotomy-markov}.
\end{proof}
\begin{rem}
\label{proof-comment-stadlabuer}It is not difficult to extend Corollary \ref{cor:amenable-implies-fullpressure} to arbitrary H\"older
continuous potentials by approximating a H\"older
continuous potential by a $\mathcal{C}\left(k\right)$-measurable
potential and then letting $k$ tend to infinity. One obtains that, for an amenable group
$G$ and for an asymptotically symmetric H\"older continuous potential $\varphi$, we have $\mathcal{P}\left(\varphi,\Psi^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)=\mathcal{P}\left(\varphi\right)$. This was proved by the author in \cite[Theorem 5.3.11]{JaerischDissertation11}, and independently, by Stadlbauer \cite[Theorem 4.1]{Stadlbauer11} in a slightly different setting.
The reverse implication of Corollary \ref{cor:amenable-implies-fullpressure}
was proved recently in \cite[Theorem 5.4]{Stadlbauer11} by extending ideas of Day (\cite{MR0159230}). A generalization of (\ref{eq:amenability-dicotomy-2}) in Theorem \ref{thm:amenability-dichotomy-markov} to arbitrary H\"older continuous potentials still seems to be open.
\end{rem}
\section{Proof of the Main Results\label{sec:Proofs}}
For a linear GDMS $\Phi$ associated to $\mathbb{F}_{d}=\langle g_{1},\dots,g_{d}\rangle$,
$d\ge2$, we set $I:=\left\{ g_1,g_{1}^{-1},\dots,g_d,g_{d}^{-1}\right\} $
and we consider the Markov shift $\Sigma$, given by
\[
\Sigma:=\left\{ \omega\in I^{\mathbb{N}}:\,\,\omega_{i}\neq\left(\omega_{i+1}\right)^{-1}\,\,\mbox{for all }i\in\mathbb{N}\right\} .
\]
The involution $\kappa:\Sigma^{*}\rightarrow\Sigma^{*}$ is given
by $\kappa\left(\omega\right):=\left(\omega_{n}^{-1},\omega_{n-1}^{-1},\dots,\omega_{1}^{-1}\right)$,
for all $n\in\mathbb{N}$ and $\omega\in\Sigma^{n}$.
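To fix ideas, we include a small worked example of the involution. For $d=2$ and $\omega=\left(g_{1},g_{2},g_{1}^{-1}\right)\in\Sigma^{3}$ we obtain that
\[
\kappa\left(\omega\right)=\left(\left(g_{1}^{-1}\right)^{-1},g_{2}^{-1},g_{1}^{-1}\right)=\left(g_{1},g_{2}^{-1},g_{1}^{-1}\right)\in\Sigma^{3}.
\]
In general, the product in $\mathbb{F}_{d}$ of the entries of $\kappa\left(\omega\right)$ is the inverse of the product of the entries of $\omega$; this elementary observation underlies the symmetry argument in the proof of Theorem \ref{thm:lineargdms-amenability-dichotomy} below.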
For a normal subgroup $N$ of $\mathbb{F}_{d}$, we let $\Psi_{N}:I^{*}\rightarrow\mathbb{F}_{d}/N$
denote the unique semigroup homomorphism such that $\Psi_{N}\left(g\right)=g\mbox{ mod }N$
for all $g\in I$. Clearly, we have that
\begin{equation}
\Psi_{N}\left(\Sigma^{*}\right)=\mathbb{F}_{d}/N.\label{eq:proof-psi-onto}
\end{equation}
Since the assertions in Theorem \ref{thm:lineargdms-amenability-dichotomy} and Proposition \ref{pro:lineargdms-brooks} are clearly satisfied in the case that $N=\left\{ \id\right\}$, we will from now on assume that $N\neq\left\{ \id\right\} $. Using that $N$ is a normal subgroup of $\mathbb{F}_{d}$ and $d\ge2$, one easily verifies that there exists a
finite set $F\subset\Sigma^{*}\cap\Psi_{N}^{-1}\left\{ \id\right\} $
with the following property:
\begin{equation}
\mbox{For all }i,j\in I\mbox{ there exists }\tau\in F\cup\left\{ \emptyset\right\} \mbox{ such that }i\tau j\in\Sigma^{*}.\label{eq:proof-mixingproperty}
\end{equation}
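For a concrete illustration of (\ref{eq:proof-mixingproperty}) (which plays no role in the proofs), suppose that $d=2$ and that $N$ is the normal closure of $g_{2}$ in $\mathbb{F}_{2}$. Then one may take
\[
F:=\left\{ g_{2},\, g_{2}^{-1},\, g_{1}g_{2}g_{1}^{-1}\right\} \subset\Sigma^{*}\cap\Psi_{N}^{-1}\left\{ \id\right\} .
\]
Indeed, for $i,j\in I$ with $j\neq i^{-1}$ the word $ij$ is already admissible, so $\tau=\emptyset$ works, whereas, for instance, $\tau=g_{2}$ yields $g_{1}g_{2}g_{1}^{-1}\in\Sigma^{3}$ for the pair $\left(i,j\right)=\left(g_{1},g_{1}^{-1}\right)$, and $\tau=g_{1}g_{2}g_{1}^{-1}$ yields $g_{2}g_{1}g_{2}g_{1}^{-1}g_{2}^{-1}\in\Sigma^{5}$ for the pair $\left(i,j\right)=\left(g_{2},g_{2}^{-1}\right)$; the reversed pairs are handled in the same way.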
Note that (\ref{eq:proof-mixingproperty}) implies that the group-extended Markov system $\left(\Sigma\times\left(\mathbb{F}_{d}/N\right),\sigma\rtimes\Psi_{N}\right)$
satisfies item (3) of our standing
assumptions at the beginning of Section 4. Hence, the results of
Section 3 are applicable to the $\mathcal{C}\left(1\right)$-measurable potential $\varphi:\Sigma\rightarrow\mathbb{R}$, given by $\varphi_{|\left[g\right]}=\log\left(c_{\Phi}\left(g\right)\right)$
for all $g\in I$.
\begin{proof}
[Proof of Theorem \ref{thm:lineargdms-amenability-dichotomy}] Our
aim is to apply Theorem \ref{thm:amenability-dichotomy-markov} to
the group-extended Markov system $\left(\Sigma\times\left(\mathbb{F}_{d}/N\right),\sigma\rtimes\Psi_{N}\right)$
and the $\mathcal{C}\left(1\right)$-measurable potential $s\varphi:\Sigma\rightarrow\mathbb{R}$,
for each $s\in\mathbb{R}$. By (\ref{eq:proof-psi-onto}) and (\ref{eq:proof-mixingproperty}),
we are left to show that $s\varphi$ is asymptotically symmetric with respect
to $\Psi_{N}$. Since $\Phi$ is symmetric we
have that $c_{\Phi}\left(\omega\right)=c_{\Phi}\left(\kappa\left(\omega\right)\right)$,
for all $\omega\in\Sigma^{*}$. Hence, for all $s\in\mathbb{R}$, $n\in\mathbb{N}$
and $g\in \mathbb{F}_d / N$, we have that
\begin{eqnarray*}
\sum_{\omega\in\Sigma^{n}:\,\Psi_{N}\left(\omega\right)=g}\exp\left(sS_{\omega}\varphi\right) & = & \sum_{\omega\in\Sigma^{n}:\,\Psi_{N}\left(\omega\right)=g}\left(c_{\Phi}\left(\omega\right)\right)^{s}=\sum_{\omega\in\Sigma^{n}:\,\Psi_{N}\left(\omega\right)=g}\left(c_{\Phi}\left(\kappa\left(\omega\right)\right)\right)^{s}\\
& = & \sum_{\omega\in\Sigma^{n}:\,\Psi_{N}\left(\omega\right)=g^{-1}}\left(c_{\Phi}\left(\omega\right)\right)^{s}=\sum_{\omega\in\Sigma^{n}:\,\Psi_{N}\left(\omega\right)=g^{-1}}\exp\left(sS_{\omega}\varphi\right),
\end{eqnarray*}
which proves that $s\varphi$ is asymptotically symmetric with respect to
$\Psi_{N}$. We are now in the position to apply Theorem \ref{thm:amenability-dichotomy-markov},
which gives that amenability of $\mathbb{F}_{d}/N$ is
equivalent to
\[
\mathcal{P}\left(s\varphi,\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)=\mathcal{P}\left(s\varphi\right).
\]
Since
$\delta\left(N,\Phi\right)$ is equal to the unique zero of $s\mapsto\mathcal{P}\left(s\varphi,\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}\right)$
and $\delta\left(\mathbb{F}_{d},\Phi\right)$ is equal to the unique zero of $s\mapsto\mathcal{P}\left(s\varphi\right)$ by Fact \ref{fac:criticalexponents-via-pressure}, we conclude that
\[
\delta\left(\mathbb{F}_{d},\Phi\right)=\delta\left(N,\Phi\right)\,\,\,\textrm{if and only if }\mathbb{F}_{d}/N\textrm{ is amenable}.
\]
The proof is complete.
\end{proof}
For the proof of Theorem \ref{thm:lineargdms-lowerhalfbound} we need
the following lemma.
\begin{lem}
\label{lem:delta-half-divergencetype}Let $\Phi$ be a symmetric linear GDMS
associated to $\mathbb{F}_{d}$. For every non-trivial normal subgroup
$N$ of $\mathbb{F}_{d}$, we have that
\[
\sum_{h\in N}\left(c_{\Phi}\left(h\right)\right)^{\delta\left(\mathbb{F}_{d},\Phi\right)/2}=\infty.
\]
In particular, we have that $\delta\left(N,\Phi\right)\ge\delta\left(\mathbb{F}_{d},\Phi\right)/2$. \end{lem}
\begin{proof}
First observe that $N$ and $\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}$
are in one-to-one correspondence, which implies that
\[
\sum_{h\in N}\left(c_{\Phi}\left(h\right)\right)^{\delta\left(\mathbb{F}_{d},\Phi\right)/2}=\sum_{\omega\in\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}}\exp\left(\left(\delta\left(\mathbb{F}_{d},\Phi\right)/2\right)S_{\omega}\varphi\right).
\]
For each $\omega\in\Sigma^{*}$, we can choose $\tau\left(\omega\right)\in F$
such that $\omega\tau\left(\omega\right)\kappa\left(\omega\right)\in\Sigma^{*}$
by making use of property (\ref{eq:proof-mixingproperty}). Further, we define the
map $\Theta:\Sigma^{*}\rightarrow\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}$,
$\Theta(\omega):=\omega\tau\left(\omega\right)\kappa\left(\omega\right)$,
which is at most $\card\left(F\right)$-to-one. Moreover, setting $C:=\min\left\{ S_{\tau}\varphi/2:\tau\in F\right\} >-\infty$
and using that $\Phi$ is symmetric, we observe that $S_{\omega}\varphi+C=S_{\omega}\varphi/2+S_{\kappa\left(\omega\right)}\varphi/2+C\le S_{\Theta\left(\omega\right)}\varphi/2$, for each $\omega\in\Sigma^{*}$.
Consequently, we have that
\begin{eqnarray}
\sum_{\omega\in\Psi_{N}^{-1}\left\{ \id\right\} \cap\Sigma^{*}}\exp\left(\left(\delta\left(\mathbb{F}_{d},\Phi\right)/2\right)S_{\omega}\varphi\right) & \ge &\card\left(F\right)^{-1}\sum_{\omega\in\Sigma^{*}}\exp\left(\left(\delta\left(\mathbb{F}_{d},\Phi\right)/2\right)S_{\Theta\left(\omega\right)}\varphi\right)\label{eq:delta-half-bound-1}\\
& \ge & \card\left(F\right)^{-1} \exp\left({\delta\left(\mathbb{F}_{d},\Phi\right)C}\right)\sum_{\omega\in\Sigma^{*}}\exp\left(\delta\left(\mathbb{F}_{d},\Phi\right)S_{\omega}\varphi\right).\nonumber
\end{eqnarray}
Finally, the existence of the Gibbs measure $\mu=\mu_{\delta\left(\mathbb{F}_{d},\Phi\right)\varphi}$ implies that there exists a constant $C_{\mu}>0$ such that
\[
\sum_{\omega\in\Sigma^{*}}\exp\left(\delta\left(\mathbb{F}_{d},\Phi\right)S_{\omega}\varphi\right)\ge C_{\mu}\sum_{\omega\in\Sigma^{*}}\mu\left(\left[\omega\right]\right)=C_{\mu}\sum_{n\in\mathbb{N}}\sum_{\omega\in\Sigma^{n}}\mu\left(\left[\omega\right]\right)=C_{\mu} \sum_{n\in\mathbb{N}}1=\infty.
\]
Combining the latter estimate with (\ref{eq:delta-half-bound-1}), the proof is complete.
\end{proof}
\begin{proof}
[Proof of Theorem \ref{thm:lineargdms-lowerhalfbound}] By Theorem
\ref{thm:lineargdms-amenability-dichotomy}, the assertion is clearly
true if $\mathbb{F}_{d}/N$ is amenable. We address the remaining case that $\mathbb{F}_{d}/N$
is non-amenable. Suppose for a contradiction that the claim is wrong.
By Lemma \ref{lem:delta-half-divergencetype}, we obtain that
\begin{equation}
\delta\left(N,\Phi\right)=\delta\left(\mathbb{F}_{d},\Phi\right)/2.\label{eq:deltaN-is-deltahalf}
\end{equation}
For notational convenience, we set $G:=\mathbb{F}_{d}/N$ throughout this proof.
Consider the non-negative matrix $P\in\mathbb{R}^{\left(I\times G\right)\times\left(I\times G\right)}$,
given by
\[
p\left(\left(v_{1},g_{1}\right),\left(v_{2},g_{2}\right)\right)=\begin{cases}
c_{\Phi}\left(v_{1}\right)^{\delta\left(N,\Phi\right)}, & \mbox{if }v_{1}\neq v_{2}^{-1}\mbox{ and }g_{2}=g_{1}\Psi_{N}\left(v_{1}\right)\\
0 & \mbox{else.}
\end{cases}
\]
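Before we proceed, we record a formula for the $n$-step entries of $P$, which one readily checks from the definition of $P$, using that $c_{\Phi}\left(\omega\right)=\prod_{i=1}^{\left|\omega\right|}c_{\Phi}\left(\omega_{i}\right)$ for each $\omega\in\Sigma^{*}$: for all $n\in\mathbb{N}$ and $\left(v,g\right),\left(w,h\right)\in I\times G$,
\[
p^{\left(n\right)}\left(\left(v,g\right),\left(w,h\right)\right)=\sum_{\omega\in\Sigma^{n}:\,\omega_{1}=v,\,\omega w\in\Sigma^{n+1},\, g\Psi_{N}\left(\omega\right)=h}\left(c_{\Phi}\left(\omega\right)\right)^{\delta\left(N,\Phi\right)}.
\]
For $v=w$ and $h=g$ the sum thus extends over words representing elements of $N$, which is how Lemma \ref{lem:delta-half-divergencetype} and (\ref{eq:deltaN-is-deltahalf}) enter the recurrence properties of $P$ below.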
By the assertions in (\ref{eq:proof-psi-onto}) and (\ref{eq:proof-mixingproperty}),
we have that $P$ is irreducible in the sense that, for all $x,y\in I\times G$
there exists $n\in\mathbb{N}$ such that $p^{\left(n\right)}\left(x,y\right)>0$.
Using the irreducibility of $P$ and that $\card\left(I\right)=2d<\infty$,
we deduce from (\ref{eq:deltaN-is-deltahalf}) and Lemma \ref{lem:delta-half-divergencetype}
that $P$ is $R$-recurrent with $R=1$ in the sense of Vere-Jones
(\cite{MR0141160}, see also Seneta \cite[Definition 6.4]{MR2209438}). That is, $P$ satisfies the following properties.
\begin{equation}
\limsup_{n\rightarrow\infty}\left(p^{\left(n\right)}\left(x,y\right)\right)^{1/n}=1\mbox{ and }\sum_{n\in\mathbb{N}}p^{\left(n\right)}\left(x,y\right)=\infty,\mbox{ for all }x,y\in I\times G.\label{eq:recurrent-matrix}
\end{equation}
Thus, by \cite[Theorem 6.2]{MR2209438}, it follows that there exists
a positive row vector $h\in\mathbb{R}^{I\times G}$ such that
\begin{equation}
hP=h.\label{eq:strict-half-bound-1a}
\end{equation}
It also follows from \cite[Theorem 6.2]{MR2209438} that the vector $h$ in (\ref{eq:strict-half-bound-1a}) is unique up to a constant multiple.
Next, we define the non-negative matrix $P_{h}\in\mathbb{R}^{\left(I\times G\right)\times\left(I\times G\right)}$, which is for all $x,y \in I\times G$ given by
\[
p_{h}\left(x,y\right)=p\left(y,x\right)h\left(y\right)/h\left(x\right).
\]
It follows from (\ref{eq:strict-half-bound-1a}) that $P_{h}$ is
a transition matrix on $I\times G$. Further, we deduce from (\ref{eq:recurrent-matrix}) that $P_{h}$ is $1$-recurrent.
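To spell out these two assertions: by (\ref{eq:strict-half-bound-1a}), for each $x\in I\times G$ we have
\[
\sum_{y\in I\times G}p_{h}\left(x,y\right)=\frac{1}{h\left(x\right)}\sum_{y\in I\times G}h\left(y\right)p\left(y,x\right)=\frac{\left(hP\right)\left(x\right)}{h\left(x\right)}=1,
\]
and a straightforward induction gives $p_{h}^{\left(n\right)}\left(x,y\right)=p^{\left(n\right)}\left(y,x\right)h\left(y\right)/h\left(x\right)$ for all $n\in\mathbb{N}$, so that the $1$-recurrence of $P_{h}$ is immediate from (\ref{eq:recurrent-matrix}).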
In order to derive a contradiction, we consider $P_{h}$ as a random
walk on the graph $X_{1}$ associated to the group-extended Markov
system $\left(\Sigma\times G,\sigma\rtimes\Psi_{N}\right)$ (see Definition
\ref{def:k-cylinder-graphs}), and we investigate the automorphisms
of $X_{1}$. Let $\Aut\left(X_{1}\right)$ denote the group of self-isometries
of $\left(X_{1},d_{X_{1}}\right)$, where $d_{X_{1}}$ denotes the
graph metric on $X_{1}$. Note that each element $g\in G$ gives rise
to an automorphism $\gamma_{g}\in\Aut\left(X_{1}\right)$, which is
given by $\gamma_{g}\left(i,\tau\right):=\left(i,g\tau\right)$, for each $\left(i,\tau\right)\in I\times G$.
The next step is to verify that also $\gamma_{g}\in\Aut\left(X_{1},P_{h}\right)$,
where we have set
\[
\Aut\left(X_{1},P_{h}\right):=\left\{ \gamma\in\Aut\left(X_{1}\right):P_{h}\left(x,y\right)=P_{h}\left(\gamma x,\gamma y\right),\mbox{ for all }x,y\in I\times G\right\} .
\]
Since $P$ has the property that $p\left(x,y\right)=p\left(\gamma_{g}\left(x\right),\gamma_{g}\left(y\right)\right)$,
for all $x,y\in I\times G$ and $g\in G$, it follows that the vector $h_{g}\in\mathbb{R}^{I\times G}$,
given by $h_{g}\left(i,\tau\right):=h\left(i,g\tau \right)$, $\left(i,\tau\right)\in I\times G$,
satisfies $h_{g}P=h_{g}$ as well. Since the function $h$ in (\ref{eq:strict-half-bound-1a})
is unique up to a constant multiple, we conclude that there exists
a homomorphism $r:G\rightarrow\mathbb{R}^{+}$ such that $h_{g}=r\left(g\right)h$,
for each $g\in G$. Consequently, we have $p_{h}\left(x,y\right)=p_{h}\left(\gamma_{g}\left(x\right),\gamma_{g}\left(y\right)\right)$
for all $x,y\in I\times G$ and $g\in G$.
Hence, $\gamma_{g}\in\Aut\left(X_{1},P_{h}\right)$
for each $g\in G$. Since $\card(I)<\infty$, we deduce that $\Aut\left(X_{1},P_{h}\right)$ acts with finitely many orbits on $X_{1}$.
In the terminology of \cite{MR1743100} this is to say that $\left(X_{1},P_{h}\right)$
is a quasi-transitive recurrent random walk. By \cite[Theorem 5.13]{MR1743100}
we then have that $X_{1}$ is a generalized lattice of dimension one
or two. In particular, we have
that $X_{1}$ has polynomial growth with degree one or two (\cite[Proposition 3.9]{MR1743100}). Since
$X_{1}$ is roughly isometric to the Cayley graph of $G$ by Lemma
\ref{lem:roughisometric-withcayleygraph}, we conclude that also $G$ has
polynomial growth (see e.g. \cite[Lemma 3.13]{MR1743100}).
This contradicts the well-known fact that each non-amenable group
has exponential growth. The proof is complete. \end{proof}
\begin{rem*}
The construction of the matrix $P_{h}$ and the verification of its
invariance properties is analogous to the discussion of the $h$-process
in \cite[Proof of Theorem 7.8]{MR1743100} and goes back to the work
of Guivarc'h (\cite[page 85]{MR588157}) on random walks on groups.
However, note that in our case $P$ is in general not stochastic.
\end{rem*}
\begin{proof}
[Proof of Proposition \ref{pro:lineargdms-brooks}]In order to investigate the radial limit sets of $N$, we introduce an induced GDMS $\tilde{\Phi}$, whose edge set consists of first return loops in the Cayley graph of $\mathbb{F}_d / N$. We define $\tilde{\Phi}:=\left(V,\left(X_{v}\right)_{v\in V},\tilde{E},\tilde{i},\tilde{t},\left(\tilde{\phi}_{\omega}\right)_{\omega\in\tilde{E}},\tilde{A}\right)$
as follows. The edge set $\tilde{E}$ and $\tilde{i},\tilde{t}:\tilde{E}\rightarrow V$
are given by
\[
\tilde{E}:=\left\{ \omega=\left(v_{i},w_{i}\right)\in\Sigma_{\Phi}^{*}:\,\, v_{1}\cdot\dots\cdot v_{\left|\omega\right|}\in N,\,\, v_{1}\cdot\dots\cdot v_{k}\notin N\mbox{ for all }1\le k<\left|\omega\right|\right\} ,
\]
\[
\tilde{i}\left(\omega\right):=i\left(\omega_{1}\right),\,\,\tilde{t}\left(\omega\right):=t\left(\omega_{\left|\omega\right|}\right),\,\,\omega\in\tilde{E},
\]
the matrix $\tilde{A}=\left(\tilde{a}\left(\omega,\omega'\right)\right)\in\left\{ 0,1\right\} ^{\tilde{E}\times\tilde{E}}$
satisfies $\tilde{a}\left(\omega,\omega'\right)=1$ if and only if
$a\left(\omega_{\left|\omega\right|},\omega'_{1}\right)=1$, and the
family $\left(\tilde{\phi}_{\omega}\right)_{\omega\in\tilde{E}}$
is given by $\tilde{\phi}_{\omega}:=\phi_{\omega},\,\,\omega\in\tilde{E}$.
One immediately verifies that $\tilde{\Phi}$ is a conformal GDMS. Note that there are canonical embeddings from $\Sigma_{\tilde{\Phi}}$ into $\Sigma_{\Phi}$ and from $\Sigma_{\tilde{\Phi}}^{*}$ into $\Sigma_{\Phi}^{*}$, which we will both indicate by omitting the tilde, that is $\tilde{\omega}\mapsto\omega$. For the coding maps $\pi_{\tilde{\Phi}}:\Sigma_{\tilde{\Phi}}\rightarrow J\left(\tilde{\Phi}\right)$
and $\pi_{\Phi}:\Sigma_{\Phi}\rightarrow J\left(\Phi\right)$ we have
$\pi_{\tilde{\Phi}}\left(\tilde{\omega}\right)=\pi_{\Phi}\left(\omega\right)$, for each $\tilde{\omega}\in \Sigma_{\tilde{\Phi}}$.
The following relations between the limit set of $\tilde{\Phi}$ and
the radial limit sets of $N$ are straightforward to prove. We have
that
\[
J^{*}\left(\tilde{\Phi}\right)\subset \Lur(N,\Phi)\subset \Lr(N,\Phi) \subset J\left(\tilde{\Phi}\right)\cup\bigcup_{\eta\in\Sigma_{\Phi}^{*},\tilde{\omega}\in\Sigma_{\tilde{\Phi}}:\eta\omega\in\Sigma_{\Phi}}\phi_{\eta}\left(\pi_{\tilde{\Phi}}\left(\tilde{\omega}\right)\right).
\]
Note that the right-hand side in the latter chain of inclusions can be written as a countable
union of images of $J\left(\tilde{\Phi}\right)$ under Lipschitz continuous
maps. Since Lipschitz continuous maps do not increase Hausdorff dimension
and since Hausdorff dimension is stable under countable unions, we
obtain
\begin{equation}
\dim_{H}\left(J^{*}\left(\tilde{\Phi}\right)\right)\le\dim_{H}\left(\Lur(N,\Phi)\right)\le\dim_{H}\left(\Lr(N,\Phi)\right)\le\dim_{H}\left(J\left(\tilde{\Phi}\right)\right).\label{eq:hausdorffdimension-inequalities}
\end{equation}
Since the incidence matrix of $\tilde{\Phi}$ is finitely irreducible
by property (\ref{eq:proof-mixingproperty}), the generalised Bowen's
formula (Theorem \ref{thm:cgdms-bowen-formula}) implies that $\dim_{H}\left(J^{*}\left(\tilde{\Phi}\right)\right)=\dim_{H}\left(J\left(\tilde{\Phi}\right)\right)$,
so equality holds in (\ref{eq:hausdorffdimension-inequalities}).
The final step is to show that $\dim_{H}\left(J\left(\tilde{\Phi}\right)\right)=\delta\left(N,\Phi\right)$.
By Theorem \ref{thm:cgdms-bowen-formula} and Fact \ref{fac:criticalexponents-via-pressure},
we have
\[
\dim_{H}\left(J\left(\tilde{\Phi}\right)\right)=\mathcal{P}_{-\zeta_{\tilde{\Phi}}}\left(0,\Sigma_{\tilde{\Phi}}^{*}\right)=\inf\left\{ s\in\mathbb{R}:\sum_{\tilde{\omega}\in\Sigma_{\tilde{\Phi}}^{*}}\mathrm{e}^{sS_{\tilde{\omega}}\zeta_{\tilde{\Phi}}}<\infty\right\} .
\]
Since the elements $\tilde{\omega}\in\Sigma_{\tilde{\Phi}}^{*}$ are
in one-to-one correspondence with $\omega\in\mathcal{C}_{N}$, where $\mathcal{C}_{N}$ is given by
\[
\mathcal{C}_{N}:=\left\{ \omega=\left(v_{i},w_{i}\right)\in\Sigma_{\Phi}^{*}:\,\, v_{1}\cdot\dots\cdot v_{\left|\omega\right|}\in N\right\} ,
\]
and using that $S_{\tilde{\omega}}\zeta_{\tilde{\Phi}}=S_{\omega}\zeta_{\Phi}$
for all $\tilde{\omega}\in\Sigma_{\tilde{\Phi}}^{*}$, we conclude
that
\[
\dim_{H}\left(J\left(\tilde{\Phi}\right)\right)=\inf\left\{ s\in\mathbb{R}:\sum_{\omega\in\mathcal{C}_{N}}\mathrm{e}^{sS_{\omega}\zeta_{\Phi}}<\infty\right\} .
\]
Finally, since the map from $\mathcal{C}_{N}$ onto $N$, given by
$\omega=\left(\left(v_{1},w_{1}\right),\left(v_{2},w_{2}\right),\dots,\left(v_{n},w_{n}\right)\right)\mapsto v_{1}v_{2}\cdots v_{n}$, for $n\in\mathbb{N}$,
is $\left(2d-1\right)$-to-one, and since $S_{\omega}\zeta_{\Phi}=\log c_{\Phi}\left(v_{1}\dots v_{n}\right)$, for all $\omega\in\mathcal{C}_{N}$,
it follows that
\[
\dim_{H}\left(J\left(\tilde{\Phi}\right)\right)=\inf\left\{ s\in\mathbb{R}:\sum_{g\in N}\left(c_{\Phi}\left(g\right)\right)^{s}<\infty\right\} =\delta\left(N,\Phi\right),
\]
which completes the proof.
\end{proof}
\section{Kleinian Groups\label{sec:Kleinian-groups}}
In this section we give a more detailed discussion of Kleinian groups and how these relate to the concept of
a GDMS. In particular, in Proposition \ref{pro:canonicalgdms-gives-radiallimitset} we will give the motivation for our definition of the radial limit set in the context of a GDMS associated to the free group (see Definition \ref{def:gdms-associated-to-freegroup-and-radiallimitsets}).
In the following we let $G\subset\mathrm{Con}\left(m\right)$ denote
a non-elementary, torsion-free Kleinian group acting properly discontinuously
on the $\left(m+1\right)$-dimensional hyperbolic space $\mathbb{D}^{m+1}$, where $\mathrm{Con}\left(m\right)$
denotes the set of orientation preserving conformal automorphisms
of $\mathbb{D}^{m+1}$. The \emph{limit set} $L\left(G\right)$ of $G$
is the set of accumulation points with respect to the Euclidean topology on $\mathbb{R}^{m+1}$ of the $G$-orbit of some arbitrary point in $\mathbb{D}^{m+1}$,
that is, for each $z\in\mathbb{D}^{m+1}$ we have that
\[
L\left(G\right)=\overline{G\left(z\right)}\setminus G\left(z\right),
\]
where the closure is taken with respect to the Euclidean topology on $\mathbb{R}^{m+1}$. Clearly, $L\left(G\right)$ is a subset of $\mathbb{S}^{m}$.
For more details on Kleinian groups and their limit sets, we refer
to \cite{Beardon,MR959135,MR1041575,MR1638795,MR2191250}.
Let us recall the definition of the following important subsets of $L(G)$, namely the \emph{radial} and the \emph{uniformly
radial limit set} of $G$. Here, $s_{\xi}\subset\mathbb{D}^{m+1}$
denotes the hyperbolic ray from $0$ to $\xi$ and $B\left(x,r\right):=\left\{ z\in\mathbb{D}^{m+1}:\,\, d\left(z,x\right)<r\right\} \subset\mathbb{D}^{m+1}$
denotes the open hyperbolic ball of radius $r$ centred at $x$, where $d$ denotes the hyperbolic metric on $\mathbb{D}^{m+1}$.
\begin{defn}
\label{def:radiallimitsets-fuchsian}For a Kleinian
group $G$ the \emph{radial} and the \emph{uniformly radial limit
set} of $G$ are given by
\begin{align*}
L_{\mathrm{r}}\left(G\right) & :=\left\{ \xi\in L\left(G\right):\exists c>0\mbox{ such that }s_{\xi}\cap B\left(g\left(0\right),c\right)\neq\emptyset\mbox{ for infinitely many }g\in G\right\} ,\quad\mbox{and}\\
L_{\mathrm{ur}}\left(G\right) & :=\left\{ \xi\in L\left(G\right):\exists c>0\mbox{ such that }s_{\xi}\subset\bigcup_{g\in G}B\left(g\left(0\right),c\right)\right\} .
\end{align*}
\end{defn}
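Let us note in passing the elementary inclusions
\[
L_{\mathrm{ur}}\left(G\right)\subset L_{\mathrm{r}}\left(G\right)\subset L\left(G\right).
\]
The first inclusion holds since $s_{\xi}$ is unbounded in the hyperbolic metric while each ball $B\left(g\left(0\right),c\right)$ is bounded, so a covering of $s_{\xi}$ as in the definition of $L_{\mathrm{ur}}\left(G\right)$ necessarily involves infinitely many elements $g\in G$.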
A Kleinian group $G$ is said to be \emph{geometrically finite}
if the action of $G$ on $\mathbb{D}^{m+1}$ admits a fundamental polyhedron
with finitely many sides. We denote by $E_{G}$ the set of points
in $\mathbb{D}^{m+1}$ that lie on a geodesic connecting two limit
points in $L\left(G\right)$. The \emph{convex hull} of $E_{G}$, which we will denote
by $C_{G}$, is the minimal hyperbolically convex subset of $\mathbb{D}^{m+1}$
containing $E_{G}$. The group $G$ is called \emph{convex cocompact} (\cite[page 7]{MR1041575})
if the action of $G$ on $C_{G}$ has a compact fundamental domain
in $\mathbb{D}^{m+1}$.
The following class of Kleinian groups gives the main motivation for
our definition of a GDMS associated to the free group (see also \cite[X.H]{MR959135}).
\begin{defn}
\label{def:kleinian-of-schottkytype}
Let $d\ge2$ and let $\mathcal{D}:=\left\{ D_{n}^{j}:n\in\left\{ 1,\dots,d\right\} ,\, j\in\left\{ -1,1\right\} \right\} $ be a family of
pairwise disjoint compact Euclidean balls $D_{n}^{j}\subset\mathbb{R}^{m+1}$ which intersect $\mathbb{S}^{m}$ orthogonally and such that $\diam\left(D_{n}^{1}\right)=\diam\left(D_{n}^{-1}\right)$. For each $n\in\left\{ 1,\dots,d\right\} $, let $g_{n}\in\mathrm{Con}\left(m\right)$
be the unique hyperbolic element such that $g_{n}\left(\mathbb{D}^{m+1}\cap\partial D_{n}^{-1}\right)=\mathbb{D}^{m+1}\cap\partial D_{n}^{1}$,
where $\partial D_{n}^j$ denotes the boundary of $D_{n}^j$ with respect
to the Euclidean metric on $\mathbb{R}^{m+1}$. Then $G:=\left\langle g_{1},\dots,g_{d}\right\rangle $
is referred to as the \emph{Kleinian group of Schottky type} generated
by $\mathcal{D}$.
\end{defn}
Note that a Kleinian group of Schottky type $G=\left\langle g_{1},\dots,g_{d}\right\rangle $ is algebraically a free group. The following construction of a particular GDMS associated to the free group $\langle g_{1},\dots,g_{d}\rangle$ is canonical.
\begin{defn}
\label{def:canonical-model-kleinianschottky}Let $G=\langle g_{1},\dots,g_{d}\rangle$ be
a Kleinian group of Schottky type
generated by $\mathcal{D}$. The \emph{canonical GDMS $\Phi_{G}$ associated
to $G$} is the GDMS associated to the free group $\langle g_{1},\dots,g_{d}\rangle$
which satisfies $X_{g_{n}^j}:=\left(\mathbb{D}^{m+1}\cup\mathbb{S}^m\right)\cap D_{n}^j$,
for each $n\in \{1,\dots,d\}$ and $j\in \{-1, 1\}$, and for which the contractions $\phi_{\left(v,w\right)}:X_{w}\rightarrow X_{v}$ are given by $\phi_{\left(v,w\right)}:=v\big|_{X_{w}}$, for each $(v,w)\in E$. \end{defn}
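Although it is not needed in the sequel, the reader may find it helpful to keep in mind the ping-pong configuration behind Definition \ref{def:kleinian-of-schottkytype}: with a suitable orientation of the pairing, one has, for each $n\in\left\{ 1,\dots,d\right\} $,
\[
g_{n}\left(\mathbb{D}^{m+1}\setminus D_{n}^{-1}\right)=\mathbb{D}^{m+1}\cap\mathrm{int}\left(D_{n}^{1}\right).
\]
This is the classical reason why $G$ is free, and it also shows that each $\phi_{\left(v,w\right)}=v\big|_{X_{w}}$ maps $X_{w}$ into $X_{v}$, since, for $\left(v,w\right)\in E$, the set $X_{w}$ is disjoint from the ball associated to $v^{-1}$.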
For the following fact we refer to \cite[Theorem 5.1.6]{MR2003772}.
\begin{fact} \label{fac:coding-kleinian-of-schottkytype}
For a Kleinian group of Schottky type $G$ we have that $L\left(G\right)=J\left(\Phi_{G}\right)$.
\end{fact}
\begin{rem} We remark that, without the assumption in Definition \ref{def:kleinian-of-schottkytype} that $\diam\left(D_{n}^{1}\right)=\diam\left(D_{n}^{-1}\right)$ for each $n\in \{1,\dots,d\}$, the generators of the associated GDMS $\Phi_G$ may fail to be contractions. However, in that case, by taking sufficiently high iterates of the generators, we can pass to a finite index subgroup of $G$ for which there exists a set $\mathcal{D}$ as in Definition \ref{def:kleinian-of-schottkytype}.
\end{rem}
The following brief discussion of the geometry of a Kleinian group
of Schottky type $G$ contains nothing that is not well known; however, the reader might like
to recall a few of its details. Let $\Phi_{G}$ denote the canonical GDMS associated to $G$. Recall that for the half-spaces $$H_{v}:=\left\{ z\in\mathbb{D}^{m+1}:\,\, d\left(z,0\right)<d\left(z,v\left(0\right)\right)\right\}, \text{ for each }v\in V, $$
the set
\[
F:=\bigcap_{v\in V}H_{v}
\]
is referred to as a \emph{Dirichlet fundamental domain for $G$}. That $F$ is a fundamental domain for $G$ means that $F$ is an open set which satisfies the conditions
\[
\bigcup_{g\in G}g\left(\overline{F}\cap\mathbb{D}^{m+1}\right)=\mathbb{D}^{m+1}\mbox{ and }g\left(F\right)\cap h\left(F\right)=\emptyset\mbox{ for all }g,h\in G\mbox{ with }g\neq h.
\]
For $\omega=\left(v_{k},w_{k}\right)_{k\in\mathbb{N}}\in\Sigma_{\Phi_G}$ and
$\pi_{\Phi_G}\left(\omega\right)=\xi$, we have that the ray $s_{\xi}$
successively passes through the fundamental domains
$F,v_{1}\left(F\right),v_{1}v_{2}\left(F\right),\dots$.
We also make use of the fact that a Kleinian group of Schottky type $G$ is convex cocompact. This follows from a theorem due to Beardon and Maskit (\cite{MR0333164}, \cite[Theorem 2]{MR2191250}),
since $G$ is geometrically finite and $L\left(G\right)$ contains no parabolic
fixed points (cf. \cite[Theorem 12.27]{MR2249478}). Clearly, if $G$
is convex cocompact, then there exists $R_{G}>0$ such
that
\begin{equation}
C_{G}\cap g\overline{F}\subset B\left(g\left(0\right),R_{G}\right),\mbox{ for all }g\in G.\label{eq:schottky-ex-2}
\end{equation}
In particular, we have that $L_{\mathrm{ur}}\left(G\right)=L_{\mathrm{r}}\left(G\right)=L\left(G\right)$.
Using the fact that $G$ acts properly discontinuously on $\mathbb{D}^{m+1}$ and that $G$ is convex cocompact, one
easily verifies that for each $r>0$ there exists
a finite set $\Gamma\subset G$ such that
\begin{equation}
B\left(0,r\right)\cap C_{G}\subset\bigcup_{\gamma\in \Gamma}\gamma \overline{F}.\label{eq:Lur-coding-2}
\end{equation}
The next proposition provides the main motivation for our definition
of the (uniformly) radial limit set of a normal subgroup $N$ of $\mathbb{F}_{d}$
with respect to a GDMS associated to $\mathbb{F}_{d}$.
\begin{prop}
\label{pro:canonicalgdms-gives-radiallimitset}Let $G$ be a Kleinian
group of Schottky type and let $\Phi_{G}$ denote the canonical GDMS
associated to $G$. For every non-trivial normal subgroup $N$ of
$G$, we have that
\[
L_{\mathrm{r}}\left(N\right)=\Lr\left(N,\Phi_{G}\right)\mbox{ and }L_{\mathrm{ur}}\left(N\right)=\Lur\left(N,\Phi_{G}\right).
\]
\end{prop}
\begin{proof}
Let us begin by proving that $\Lur\left(N,\Phi_G\right)\subset L_{\mathrm{ur}}\left(N\right)$. To start, let $\xi\in \Lur\left(N,\Phi_G\right)$ be given. By the definition of $\Lur\left(N,\Phi_G\right)$,
there exists $\omega=\left(v_{k},w_{k}\right)_{k\in\mathbb{N}}\in\Sigma_{\Phi_G}$ and a finite set $\Gamma\subset G$
such that $\pi_{\Phi_G}\left(\omega\right)=\xi$
and $v_{1}v_{2}\cdots v_{k}\in N\Gamma$, for all $k\in\mathbb{N}$.
Hence, using (\ref{eq:schottky-ex-2}), it follows that
\[
s_{\xi}\subset\bigcup_{h\in N}\bigcup_{\gamma\in\Gamma}B\left(h\gamma\left(0\right),R_{G}\right).
\]
Note that for each $h\in N$, $\gamma\in\Gamma$ and $x\in B\left(h\gamma\left(0\right),R_{G}\right)$
we have
\[
d\left(h\left(0\right),x\right)\le d\left(h\left(0\right),h\gamma\left(0\right)\right)+d\left(h\gamma\left(0\right),x\right)<\max\left\{ d\left(0,\gamma\left(0\right)\right):\gamma\in\Gamma\right\} +R_{G},
\]
which implies that
\[
\bigcup_{h\in N}\bigcup_{\gamma\in\Gamma}B\left(h\gamma\left(0\right),R_{G}\right)\subset\bigcup_{h\in N}B\left(h\left(0\right),R_{G}+\max\left\{ d\left(0,\gamma\left(0\right)\right):\gamma\in\Gamma\right\} \right).
\]
Thus, $\xi\in L_{\mathrm{ur}}\left(N\right)$.
For the converse inclusion, let $\xi\in L_{\mathrm{ur}}\left(N\right)$ be given. Then, by the definition of $L_{\mathrm{ur}}\left(N\right)$,
there exists a constant $c:=c\left(\xi\right)>0$ such that
\[
s_{\xi}\subset\bigcup_{h\in N}B\left(h\left(0\right),c\right).
\]
Hence, by (\ref{eq:Lur-coding-2}), there exists a finite set $\Gamma\subset G$
such that $s_{\xi}\subset\bigcup_{h\in N}\bigcup_{\gamma\in\Gamma}h\gamma\overline{F}$. We conclude that for $\omega=\left(v_{k},w_{k}\right)_{k\in\mathbb{N}}\in\Sigma_{\Phi_G}$
with $\pi_{\Phi_G}\left(\omega\right)=\xi$ we have that $\left\{ v_{1}v_{2}\cdots v_{k}:\,\, k\in\mathbb{N}\right\} \subset N\Gamma$
and hence, $\xi\in \Lur\left(N,\Phi_G\right)$.
Let us now address the inclusion $\Lr\left(N,\Phi_G\right)\subset L_{\mathrm{r}}\left(N\right)$. For this, let $\xi\in \Lr\left(N,\Phi_G\right)$ be given. By the definition of $\Lr\left(N,\Phi_G\right)$,
there exists $\omega=\left(v_{k},w_{k}\right)_{k\in\mathbb{N}}\in\Sigma_{\Phi_G}$, an element
$\gamma\in G$, a sequence $\left(h_{k}\right)_{k\in\mathbb{N}}$ of pairwise distinct elements
in $N$ and a sequence $\left(n_{k}\right)_{k\in\mathbb{N}}$ tending to infinity
such that $\pi_{\Phi_G}\left(\omega\right)=\xi$ and $v_{1}v_{2}\cdots v_{n_{k}}=h_{k}\gamma$,
for all $k\in\mathbb{N}$. Using (\ref{eq:schottky-ex-2}) it follows that $s_{\xi}\cap B\left(h_{k}\gamma\left(0\right),R_{G}\right)\neq\emptyset$, for all $k\in \mathbb{N}$.
Since $B\left(h_{k}\gamma\left(0\right),R_{G}\right)\subset B\left(h_{k}\left(0\right),R_{G}+d\left(0,\gamma\left(0\right)\right)\right)$
for all $k\in\mathbb{N}$, we obtain that also $s_{\xi}\cap B\left(h_{k}\left(0\right),R_{G}+d\left(0,\gamma\left(0\right)\right)\right)\neq\emptyset$. We have thus shown that $\xi\in L_{\mathrm{r}}\left(N\right)$.
Finally, let us demonstrate that $L_{\mathrm{r}}\left(N\right)\subset \Lr\left(N,\Phi_G\right)$. To that end, pick an arbitrary $\xi\in L_{\mathrm{r}}\left(N\right)$ and let $\omega=\left(v_{k},w_{k}\right)_{k\in\mathbb{N}}\in\Sigma_{\Phi_G}$
with $\pi_{\Phi_G}\left(\omega\right)=\xi$ be given. Then, by definition of $L_{\mathrm{r}}\left(N\right)$,
there exists $c>0$ and a sequence $\left(h_{k}\right)_{k\in\mathbb{N}}$
of pairwise distinct elements in $N$ such that $s_{\xi}\cap B\left(h_{k}\left(0\right),c\right)\neq\emptyset$,
for all $k\in\mathbb{N}$. Using (\ref{eq:Lur-coding-2}) we deduce that there exists a finite
set $\Gamma\subset G$ such that for all $k\in\mathbb{N}$ we have
\[
s_{\xi}\cap B\left(h_{k}\left(0\right),c\right)\cap\bigcup_{\gamma\in\Gamma}h_{k}\gamma \overline{F}\neq\emptyset.
\]
Since $\Gamma$ is finite, there exist $\gamma_0\in\Gamma$ and
sequences $\left(n_{k}\right)_{k\in \mathbb{N}}$ and $\left(l_{k}\right)_{k\in \mathbb{N}}$ tending to infinity such
that $s_{\xi}\cap B\left(h_{n_{k}}\left(0\right),c\right)\cap h_{n_{k}}\gamma_0 \overline{F}\neq\emptyset$ and $v_{1}v_{2}\cdots v_{l_{k}}=h_{n_{k}}\gamma_0$, for all $k\in \mathbb{N}$. Hence, $\xi\in \Lr\left(N,\Phi_G\right)$.
\end{proof}
In the years since the ARGUS and CLEO collaborations first observed baryonic $B$ decays \cite{Albrecht88,Crawford92},
many three-body baryonic $B$ decays ($B\rightarrow\mathrm{\mathbf{B\bar{B}'M}}$)
have been found \cite{Olive14,Abe02, Aubert06,J.H.Chen08,Wei08}, where $\mathrm{\mathbf{B\bar{B}'}}$
denotes a baryon-antibaryon system and $\mathrm{\mathbf M}$ stands for a meson. Although the general pattern of these decays can be understood as the interplay between the short-distance weak interaction and the long-distance strong interaction \cite{Suzuki07}, theories still have difficulties accounting for various details such as the angular correlation between the energetic outgoing meson and
one specific baryon (\textbf{B}) in the di-baryon system \cite{Wang05,Wang07,Wei08,Sanchez12}.
A popular theoretical approach used to investigate the three-body baryonic decays is generalized factorization.
This method smears the correlation between the weak decay and the fragmentation and allows $B\rightarrow\mathrm{\mathbf{B\bar{B}'M_{c}}}$ decays (with $\mathbf M_{c}$ denoting a charmed meson) to be categorized into three types: current type, where the $\mathbf{B\bar{B}'}$ pair is formed by an external $W$ with other quarks; transition type, where the $W$ is internal and forms $\mathbf{BM_{c}}$; and hybrid (current+transition) type~\cite{C.Chen08}. The
$B^{0} \rightarrow p\bar{\Lambda} D^{(*)-}$~\footnote{Hereafter, the inclusion of the charge-conjugate mode is implied.} decay belongs to the first type whereas its corresponding charged mode, $B^{+} \rightarrow p\bar{\Lambda} \bar{D}^{(*)0}$,
is of the last type. Using this approach, Ref.~\cite{C.Chen08} predicts the branching fractions
\begin{equation}
\begin{split}
\mathcal{B}(B^{0} \rightarrow p \bar{\Lambda} D^{-})&=(3.4\pm0.2)\times10^{-6},\\
\mathcal{B}(B^{0} \rightarrow p \bar{\Lambda} D^{*-})&=(11.9\pm0.5)\times10^{-6},\\
\mathcal{B}(B^{+} \rightarrow p\bar{\Lambda} \bar{D}^{0})&=(11.4\pm2.6)\times10^{-6},\\
\mathcal{B}(B^{+} \rightarrow p\bar{\Lambda} \bar{D}^{*0})&=(32.3\pm3.2)\times10^{-6}.
\end{split}
\end{equation}
There are two salient features of the predicted results. First, the ratios of the branching fractions of the decays into $D^{*}$ to the analogous decays into $D$ are $\approx 3 : 1$.
Secondly, the branching fraction of the hybrid-type decay is also $\approx 3$ times larger than the corresponding current-type decay.
The measured branching fraction for $B^{+} \rightarrow p\bar{\Lambda} \bar{D}^{0}$ is consistent with the theoretical calculation based on the factorization approach \cite{C.Chen08,Chen11}.
In most $B\rightarrow\mathrm{\mathbf{B\bar{B}'M}}$ decay studies, the final-state di-baryon system is observed to favor a mass near threshold~\cite{Olive14,Bai03,Ablikim05,Alexander10}. While this ``threshold enhancement effect'' is intuitively understood in terms of the factorization approach, such enhancements are not seen in $B^+ \to p \bar{\Lambda} J/\psi$ nor $B^+ \to \Lambda_c^+\Lambda_c^- K^+$ \cite{Xie05, Abe05}. More intriguingly, the factorization approach fails to provide a satisfactory explanation for the $\mathrm{\mathbf M}$--$p$ angular correlations
in $B^{-} \rightarrow p\bar{p}K^{-}$, $B^{0} \rightarrow p{\bar\Lambda}\pi^-$,
and $B^{-} \rightarrow p\bar{p}D^{-}$ \cite{Wang05,Wang07,Wei08,Sanchez12}. A striking difference between the non-zero angular asymmetries of $B^{-} \rightarrow p\bar{p}D^{*-}$ and $B^{-} \rightarrow p\bar{p}D^{-}$ was also reported in Refs.\ \cite{Aubert06,C.Chen08}, for which a theoretical explanation was attempted in Ref.\ \cite{Geng06}. A study of pure current-type decays like $B^{0} \rightarrow p \bar{\Lambda}D^{(*)-}$ is useful to shed more light on the aforementioned phenomena. In this paper, we report the first observation of $B^{0} \rightarrow p \bar{\Lambda}D^{(*)-}$ decays using data from the Belle experiment.
The data sample used in this study corresponds to an integrated luminosity of 711 fb$^{-1}$ or $772\times10^{6}$ $B\bar{B}$ pairs produced at the $\Upsilon(4S)$ resonance. The Belle detector is located at the interaction point (IP) of the KEKB asymmetric-energy $e^{+}$ (3.5 GeV) $e^{-}$ (8 GeV) collider \cite{Kurokawa03,Abe13}. It is a large-solid-angle spectrometer comprising six specialized sub-detectors: the Silicon Vertex Detector (SVD), the 50-layer Central Drift Chamber (CDC), the Aerogel Cherenkov Counter (ACC), the Time-Of-Flight scintillation counter (TOF), the electromagnetic calorimeter, and the $K_L$ and muon detector (KLM). A superconducting solenoid surrounding all but the KLM produces a $1.5$ T magnetic field \cite{Abashian02,Brodzicka12}.
The final-state charged particles, $\pi^{\pm},\ K^{\pm}$ and
\rlap{$\,p$}{${\mathstrut}^{\scriptscriptstyle(-)}$},
are selected using the likelihood information from the combined tracking (SVD, CDC) and charged-hadron identification (CDC, ACC, TOF) systems \cite{Nakano02}. The $B^{0} \rightarrow p\bar{\Lambda} D^{(*)-}$ signals are reconstructed through the sub-decays $D^{-}\rightarrow K^{+}\pi^{-}\pi^{-}$, $D^{*-}\rightarrow \bar{D}^{0}\pi^{-}$, $\bar{D}^{0}\rightarrow K^{+}\pi^{-}$, and $\bar{\Lambda}\rightarrow\bar{p}\pi^{+}$. The distance
of closest approach to the IP by each charged track is required to be less than 3.0 cm along the positron beam ($z$ axis) and 0.3 cm
in the transverse plane.
The pion and kaon identification efficiencies are in the range of 85--95\% while the probability of misidentifying
one as the other is 10--20\%, both depending on the momentum. The proton identification efficiency is 90--95\% for the typical momenta in this study,
and the probability of misidentifying a proton as a pion (kaon) is less than 5\% (10\%). The candidate $\bar{\Lambda}$ is required to have
a displaced vertex that is consistent with a long-lived particle originating from the IP and an invariant mass between 1.102 and 1.130 $\mathrm{GeV}/{c^{2}}$.
The particle-identification criterion is omitted for the daughter pion in the $\bar{\Lambda}$ reconstruction due to the low background rate.
For a $\bar{D}^{0}$, we require the reconstructed invariant mass to lie between 1.72 and 2.02 $\mathrm{GeV}/{c^{2}}$. For $D^-$ and $D^{*-}$, we require $|M_{D^-} - 1870\ \mathrm{MeV}/c^2| < 10$ $\mathrm{MeV}/{c^{2}}$, $|M_{D^{*-}} - 2010\ \mathrm{MeV}/c^2| < 150$ $\mathrm{MeV}/{c^{2}}$, and $ |M_{D^{*-}}-M_{\bar{D}^{0}} -145\ \mathrm{MeV}/c^2| < 9$ $\mathrm{MeV}/{c^{2}}$, where $M_{D^{(*)-}}$ and $M_{\bar{D}^{0}}$ are the reconstructed masses of $D^{(*)-}$ and $\bar{D}^{0}$, respectively.
We identify the signals using two kinematic variables: the
energy difference ($\Delta E$) and the beam-energy-constrained mass ($M_{\rm bc}$),
\begin{equation}
\begin{split}
&\Delta {E}={E}_{B}-{E}_{\rm beam}\\
&{M}_{\rm bc}=\sqrt{{E}_{\rm beam}^{2}-p_{B}^2 c^2}/c^2,
\end{split}
\end{equation}
where $E_{B}$ and $p_{B}$ are the energy and momentum of the $B$ meson and $E_{\mathrm{beam}}$ is the beam energy, all measured in the $\Upsilon(4S)$ center-of-mass (CM) frame.
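For illustration, the following minimal Python sketch evaluates the two variables for a purely hypothetical candidate (all numerical values are placeholders, in GeV, with $c=1$); a correctly reconstructed $B$ clusters at $\Delta E \approx 0$ and $M_{\rm bc}$ near the $B$ mass of about $5.28\ \mathrm{GeV}/c^{2}$:
\begin{verbatim}
import numpy as np

# Hypothetical candidate in the CM frame (illustrative values only).
E_beam = 5.289           # half of the 10.58 GeV CM energy
E_B, p_B = 5.272, 0.310  # reconstructed candidate energy and momentum

delta_E = E_B - E_beam
M_bc = np.sqrt(E_beam**2 - p_B**2)
print(f"dE = {delta_E:+.3f} GeV, Mbc = {M_bc:.3f} GeV/c^2")
\end{verbatim}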
We optimize all selection criteria using Monte Carlo (MC) event samples before examining the data. These samples, both for signal and background, are generated using EvtGen \cite{EvtGen} and later processed with a GEANT3-based detector simulation program
that provides the detector-level information \cite{Geant}.
Using the generated MC samples, the fit region is defined as $ -0.1\ \mathrm{GeV} < \Delta E < 0.3\ \mathrm{GeV}$ and $5.22$ GeV/$c^2 <M_{\rm bc} < 5.30$ GeV/$c^2$ while the signal region is defined as $|\Delta E| < 0.05\ \mathrm{GeV}$ and $5.27$ GeV/$c^2 <M_{\rm bc} < 5.29$ GeV/$c^2$.
Two major sources contribute as background: $e^{+}e^{-} \rightarrow q\bar{q}\ (q=u,\ d,\ s,\ c)$ production, also known as the continuum background, and other $b\rightarrow c$ dominated $B$ meson decays, labeled generically as $B$ decays in this paper.
To suppress the continuum background, we use the difference between its jet-like topology and the spherical $B$-decay topology. We calculate the distributions of 23 modified Fox-Wolfram moments from the final-state particle momenta given by the signal and background MC \cite{FoxWolfarmMoment,ksfw}. A Fisher discriminant that enhances the signal and background separation with a weighted linear combination of the moments is then calculated \cite{fisher}. We augment the obtained probability density functions (PDFs) of the Fisher discriminant for the signal and background with two more variables to form the signal (background) likelihood ${\mathcal L}_{\mathrm{S(B)}}$: the axial distance ($\Delta z$)
between the vertices of the candidate $B$ and the remaining final-state particles --- presumably from the other $B$ --- and the cosine of the polar angle of the $B$ momentum (${\mathrm{cos}}\theta_{B}$) in the CM frame. The PDFs used for the modified Fox-Wolfram moments, $\Delta z$, and $\mathrm{cos}\theta_{B}$ are bifurcated Gaussian functions, the sums of three Gaussian functions, and second-order polynomials, respectively.
To suppress the background, we optimize the selection criteria
for $[{\mathcal L}_{\mathrm{S}}/({\mathcal L}_{\mathrm{S}}+{\mathcal L}_{\mathrm{B}})]_{D(D^{*})}<\alpha_{D(D^{*})}$, $|{M}_{D^{-}} - 1870\ \mathrm{MeV}/c^{2}|<\beta_{D}$ $\mathrm{MeV}/{c^{2}}$,
and $|{M}_{D^{*-}}-{M}_{\bar{D}^{0}} - 145\ \mathrm{MeV}/c^{2}|<\beta_{D^{*}}$ $\mathrm{MeV}/{c^{2}}$~simultaneously and obtain $\alpha_{D}=0.53$, $\alpha_{D^{*}}=0.40$, $\beta_{D}=10$, and $\beta_{D^{*}}=9$. The $\beta$ selections correspond to $\pm2.4\sigma$ and $\pm12.4\sigma$ windows around the nominal $M_{D^{-}}$ and $M_{D^{*-}}-M_{\bar{D}^0}$, respectively. This procedure maximizes the figure of merit, $N_{\mathrm{S}}/\sqrt{N_{\mathrm{S}}+N_{\mathrm{B}}}$,
where $N_{\mathrm{S}}$ and $N_{\mathrm{B}}$ are the expected yields of signal and background, respectively, in the signal region. We use the theoretical expectations in Eq.~(1) to obtain $N_{\mathrm{S}}$ and normalize the $q\bar{q}$
and generic $B$ MC samples to the integrated luminosity to obtain $N_{\mathrm{B}}$. After applying all the selection criteria, the fractions of events with multiple signal candidates are found to be 3.5\% and 5.6\% in the $D$ and $D^*$ modes, respectively. To ensure that no event has multiple entries in the fit region, we retain the $B$ candidate with the smallest vertex-fit $\chi^2$ in each event, where the vertex-fit is performed using all charged tracks from the $B$ candidate except those from $\bar{\Lambda}$.
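The scan underlying this optimization can be sketched as follows; the yield parameterizations are invented placeholders standing in for the MC-derived signal and background yields, and the actual optimization runs simultaneously over $(\alpha_{D},\alpha_{D^{*}},\beta_{D},\beta_{D^{*}})$ rather than over a single cut:
\begin{verbatim}
import numpy as np

def n_signal(alpha):      # placeholder: a looser cut keeps more signal
    return 120.0 * alpha
def n_background(alpha):  # placeholder: background grows faster
    return 400.0 * alpha**3

alphas = np.linspace(0.05, 1.0, 200)
fom = n_signal(alphas) / np.sqrt(n_signal(alphas) + n_background(alphas))
print(f"optimal cut: alpha = {alphas[np.argmax(fom)]:.2f}")
\end{verbatim}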
We model the signal $\Delta E$ distribution with the sum of three Gaussian functions and the ${M}_{\rm bc}$ distribution with the sum of two Gaussian functions; the background $\Delta E$ shape is modeled with a second-order polynomial and the ${M}_{\rm bc}$ shape with an ARGUS function \cite{Albrecht90}. We determine the PDF shapes with MC samples and calibrate the means and widths of the signal PDFs using a large control sample of $B^{0} \rightarrow \pi^{+} K^{0}_{\mathrm{S}} D^{(*)-}$ decays from the data. The signal yields are extracted separately
from eight di-baryon ($p\bar{\Lambda}$) invariant mass bins, in the ranges of 2.05--3.41 $\mathrm{GeV}/{c^{2}}$~for the $D$ mode and 2.05--3.30 $\mathrm{GeV}/{c^{2}}$~for the $D^*$ mode. We obtain the signal yield in each bin using a two-dimensional extended unbinned maximum likelihood fit in $\Delta E$ and ${M}_{\rm bc}$.
Figure~\ref{binned_fit_example} illustrates the fit results of
the lowest and highest $p{\bar\Lambda}$ mass bins for
the $D$ and $D^*$ modes. We observe clear signal peaks with very low background in the lowest $M_{p\bar\Lambda}$ bin, indicating an enhancement near threshold. As the efficiency is dependent on $M_{p\bar\Lambda}$, Table~\ref{binned_yield_table} lists
the efficiencies and fitted yields in all mass bins for the two modes. Note that the efficiencies shown do not include the sub-decay branching fractions.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.6\linewidth]{mLambp_bin1_yield_Dmode.jpg}\\
\includegraphics[width=0.6\linewidth]{mLambp_bin8_yield_Dmode.jpg}\\
\includegraphics[width=0.6\linewidth]{mLambp_bin1_yield_Dstarmode.jpg}\\
\includegraphics[width=0.6\linewidth]{mLambp_bin6_yield_Dstarmode.jpg}
\caption{Projections of typical $\Delta E$-$M_{\rm bc}$ fits to data for events in the signal region of the orthogonal variable. The peaking and flat red dotted lines represent the signal and background components; the blue solid lines with the dotted areas represent the combined PDFs with their $1\sigma$ uncertainty bands. The top (bottom) two panels, from top to bottom, show the fits in the lowest and highest $M_{p\bar{\Lambda}}$ bins of the $D$ ($D^*$) mode.}
\label{binned_fit_example}
\end{figure}
\begin{table}[b!]
\centering
\caption{The fitted signal yield and efficiency in each $M_{p\bar{\Lambda}}$ bin. To obtain a stable fit, we combine the last three bins in the $D^*$ mode into the sixth bin.}
\label{binned_yield_table}
\begin{tabular}{c|cc|c|cc}
\hline
\hline
$M_{p\bar{\Lambda}}$ & \multicolumn{2}{c|}{$D$ mode} & $M_{p\bar{\Lambda}}$ & \multicolumn{2}{c}{$D^{*}$ mode} \\
\cline{2-3}\cline{5-6}
($\mathrm{GeV}/{c^{2}}$) & Yield & Eff.(\%) & ($\mathrm{GeV}/{c^{2}}$) & Yield & Eff.(\%)\\
\hline
2.05--2.22 & $57\pm8$ & $12.2\pm0.0$ & 2.05--2.21 & $19\pm5$ & $12.2\pm0.0$\\
2.22--2.39 & $24\pm5$ & $10.5\pm0.0$ & 2.21--2.36 & $9\pm3$ & $10.2\pm0.0$\\
2.39--2.56 & $14\pm4$ & $9.5\pm0.1$ & 2.36--2.52 & $5\pm3$ & $8.7\pm0.0$\\
2.56--2.73 & $8\pm3$ & $9.8\pm0.1$ & 2.52--2.68 & $2\pm1$ & $8.4\pm0.1$\\
2.73--2.90 & $3\pm2$ & $10.4\pm0.1$ & 2.68--2.83 & $3\pm2$ & $7.6\pm0.1$\\
2.90--3.07 & $7\pm3$ & $10.9\pm0.2$ & 2.83--3.30 & $1\pm1$& $6.3\pm0.1$\\
3.07--3.24 & $1\pm2$ & $10.8\pm0.3$ & & &\\
3.24--3.41 & $2\pm2$ & $11.4\pm0.7$ & & & \\
\hline
Total & $117\pm12$ & & & $39\pm7$ &\\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[htb!]
\includegraphics[width=4.2cm]{diff_eff_mLambp_Dmode.jpg}
\includegraphics[width=4.5cm]{diff_eff_mLambp_Dstarmode.jpg}
\caption{Differential branching fractions of the $D$ (left) and $D^*$ (right) modes in $M_{p\bar{\Lambda}}$. Fit curves are
based on an empirical threshold function (see text).}
\label{differential_efficiency_mLambp}
\end{figure}
Assuming that the branching fractions of $\Upsilon(4S)$ decaying to the charged and neutral $B\bar{B}$ pairs are equal, we use the efficiency and fitted yield in each mass bin to calculate the differential branching fraction and integrate over the entire mass range
to obtain the branching fraction $\mathcal{B}=(\sum_{i}N_{i}/ \epsilon_{i})/(\prod\mathcal{B}_{\mathrm{subdecay}} \times N_{B\bar{B}} \times C_{\mathrm{PID}})$,
where $i$ is the mass bin number, $N_{i}$ and $\epsilon_{i}$ are the bin-dependent fitted yield and selection efficiency, respectively, $\mathcal{B}_{\mathrm{subdecay}}$ and $N_{B\bar{B}}$ are the sub-decay branching fraction and the number of $B\bar{B}$ pairs,
respectively, and $C_\mathrm{PID}$ is the charged-particle identification efficiency correction between MC and data ($0.92$ for the $D$ mode and $0.85$ for the $D^*$ mode).
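As a rough numerical cross-check of this formula for the $D$ mode, one may combine the fitted yields and efficiencies of Table~\ref{binned_yield_table} with sub-decay branching fractions of $\mathcal{B}(D^{-}\rightarrow K^{+}\pi^{-}\pi^{-})\approx 9.1\%$ and $\mathcal{B}(\bar{\Lambda}\rightarrow\bar{p}\pi^{+})\approx 63.9\%$; these two values are approximate world averages inserted here only for illustration (the analysis itself uses the values of Ref.\ \cite{Olive14}):
\begin{verbatim}
import numpy as np

# yields and efficiencies from the table above
yields = np.array([57, 24, 14, 8, 3, 7, 1, 2], float)
effs = np.array([.122, .105, .095, .098, .104, .109, .108, .114])
B_sub = 0.091 * 0.639        # approximate sub-decay BFs (see text)
N_BB, C_PID = 772e6, 0.92

B = np.sum(yields / effs) / (B_sub * N_BB * C_PID)
print(f"B ~ {B*1e6:.1f} x 10^-6")  # close to the quoted 25.1 x 10^-6
\end{verbatim}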
Figure\ \ref{differential_efficiency_mLambp} shows the results, where both modes have visible peaks near threshold.
The data are fitted with an empirical threshold function, $m^{a}\, e^{(bm+cm^{2}+dm^{3})}$, of the mass excess $m=M_{p\bar{\Lambda}}-M_{\bar{\Lambda}}-M_{p}$, with $a$, $b$, $c$, and $d$ as free parameters.
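A minimal sketch of such a fit (using synthetic points generated from the model itself, in arbitrary units, rather than the measured differential branching fractions) could read:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def threshold(m, a, b, c, d):
    return m**a * np.exp(b*m + c*m**2 + d*m**3)

rng = np.random.default_rng(1)
m = np.linspace(0.05, 1.3, 8)              # mass excess (GeV/c^2)
y = threshold(m, 0.5, -4.0, 1.0, 0.0) * rng.normal(1.0, 0.05, m.size)

popt, _ = curve_fit(threshold, m, y, p0=(0.5, -3.0, 0.0, 0.0),
                    maxfev=10000)
print("fitted (a, b, c, d):", np.round(popt, 2))
\end{verbatim}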
The obtained branching fractions are:
\begin{equation}
\begin{split}
&\mathcal{B}(B^{0}\rightarrow p \bar{\Lambda} D^{-})=(25.1\pm2.6\pm3.5)\times10^{-6},\ 19.8\sigma,\\
&\mathcal{B}(B^{0}\rightarrow p \bar{\Lambda} D^{*-})=(33.6\pm6.3\pm4.4)\times10^{-6},\ 10.8\sigma,\\
\end{split}
\end{equation}
where the quoted uncertainties are statistical and systematic (described later), respectively,
and the significance is estimated by the Z-score of the p-value for $\chi^2=2\sum_i\ln (L_{\mathrm{max},i}/L_{\mathrm{0},i})$ with 8 or 6 degrees of freedom representing the number of bins. $L_\mathrm{max}$ and $L_\mathrm{0}$ are the likelihood values with and without the signal component in the fit, respectively, and $i$ is again the mass bin index.
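The conversion from the likelihood-ratio statistic to a significance can be sketched as follows; the $\chi^{2}$ value below is an invented placeholder, not the one obtained from our fits:
\begin{verbatim}
from scipy.stats import chi2, norm

chi2_val, ndof = 75.0, 8            # placeholder value, D-mode binning
p_value = chi2.sf(chi2_val, ndof)   # p-value of the likelihood ratio
z_score = norm.isf(p_value)         # one-sided Z-score
print(f"p = {p_value:.1e} -> {z_score:.1f} sigma")
\end{verbatim}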
The measured branching fractions are clearly incompatible with the theoretical predictions for both the $D$ and $D^*$ modes \cite{C.Chen08}. This indicates that the model parameters used in the calculation need to be revised and, perhaps, some modification of the theoretical framework is required.
To extract the decay angular distributions, we divide $\mathrm{cos}\theta_{pD^{(*)}}$ into eight bins, where $\theta_{pD^{(*)}}$ is defined as the angle between the proton and meson directions in the $p\bar\Lambda$ rest frame. We follow the same procedure to determine the differential branching fractions in $\mathrm{cos}\theta_{pD^{(*)}}$ as in determining those in $M_{p\bar{\Lambda}}$.
Table~\ref{binned_theta_yield} lists the fitted signal yields and efficiencies in the $\mathrm{cos}\theta_{pD^{(*)}}$
bins; Fig.\ \ref{differential_efficiency_thetaPD} shows the differential branching fractions. The efficiency is determined with the MC sample, including the threshold enhancement effect as observed in the data.
\begin{table}[b!]
\centering
\caption{The fitted signal yield and efficiency in each $\mathrm{cos}\theta_{pD^{(*)}}$ bin.}
\label{binned_theta_yield}
\begin{tabular}{c|cc|cc}
\hline
\hline
\multirow{2}{*}{$\mathrm{cos}\theta_{pD^{(*)}}$} & \multicolumn{2}{c|}{$D$ mode} & \multicolumn{2}{c}{$D^*$ mode}\\
\cline{2-5}
& Yield & Eff.(\%) & Yield & Eff.(\%)\\
\hline
$-1.00\mbox{\ --\ }-0.75$ & $10\pm4$ & 9.0 & $3\pm2$ & 8.6\\
$-0.75\mbox{\ --\ }-0.50$ & $17\pm5$ & 10.5 & $1\pm1$ & 10.2\\
$-0.50\mbox{\ --\ }-0.25$ & $16\pm4$ & 11.5 & $1\pm1$ & 11.3\\
$-0.25\mbox{\ --\ }-0.00$ & $15\pm4$ & 12.2 & $2\pm2$ & 12.2\\
$+0.00\mbox{\ --\ }+0.25$ & $19\pm5$ & 12.8 & $7\pm3$ & 12.7\\
$+0.25\mbox{\ --\ }+0.50$ & $15\pm4$ & 13.0 & $7\pm3$ & 13.0\\
$+0.50\mbox{\ --\ }+0.75$ & $16\pm5$ & 12.6 & $9\pm3$ & 12.8\\
$+0.75\mbox{\ --\ }+1.00$ & $7\pm3$ & 11.5 & $8\pm3$& 11.5\\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[htb!]
\includegraphics[width=4.3cm,height=4.3cm]{diff_eff_thetaPD_Dmode.jpg}
\includegraphics[width=4.2cm]{diff_eff_thetaPD_Dstarmode.jpg}
\caption{Differential branching fractions of the $D$ (left) and $D^*$ (right) modes in $\mathrm{cos}\theta_{pD^{(*)}}$. The fit curves are
second-order polynomials, as suggested by Ref. \cite{Geng06}.}
\label{differential_efficiency_thetaPD}
\end{figure}
We define the angular asymmetry $A_{\theta}=\frac{\mathcal{B}_{+}-\mathcal{B}_{-}}{\mathcal{B}_{+}+\mathcal{B}_{-}}$, where $\mathcal{B}_{+(-)}$ represents the branching fraction of positive (negative) cosine value. The results are
\begin{equation}
\begin{split}
&A_{\theta}(B^{0}\rightarrow p \bar{\Lambda} D^{-})=-0.08\pm0.10,\\
&A_{\theta}(B^{0}\rightarrow p \bar{\Lambda} D^{*-})=+0.55\pm0.17,\\
\end{split}
\end{equation}
where the uncertainty is purely statistical since the correlated systematic uncertainties cancel in the $A_{\theta}$ calculation. The angular distributions of the $D$ and $D^*$ modes appear to have distinct trends, even though they are both categorized as current-type decays. More data are needed to make the result conclusive.
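For completeness, the statistical uncertainty on $A_{\theta}$ follows from standard Gaussian error propagation of the ratio; the sketch below uses placeholder inputs of roughly the size encountered in this analysis:
\begin{verbatim}
import numpy as np

Bp, sp = 12.5, 1.8     # placeholder B(cos > 0) and its error
Bm, sm = 14.6, 1.9     # placeholder B(cos < 0) and its error

A = (Bp - Bm) / (Bp + Bm)
# dA/dBp = +2*Bm/(Bp+Bm)^2 and dA/dBm = -2*Bp/(Bp+Bm)^2
sA = 2.0 * np.hypot(Bm * sp, Bp * sm) / (Bp + Bm)**2
print(f"A_theta = {A:+.2f} +/- {sA:.2f}")
\end{verbatim}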
Three major categories of systematic uncertainties are considered: in the signal yield determination, in the efficiency estimation, and in translating the signal yields and efficiencies into the branching fractions. Table \ref{tab_sys_error} lists all the systematic uncertainties.
We observe a mild peaking background in the $M_{\rm bc}$ fit region due to $B^{+}\rightarrow p \bar{\Lambda} \bar{D}^{*0}$, plausibly by
the replacement of the low-momentum $\pi^0$ in $\bar{D}^{*0}\rightarrow \bar{D}^{0}\pi^0$ with an unaffiliated $\pi^-$ or $K^-$ to reconstruct a ${D}^{*-}$. To study its contribution to the uncertainty in the $D^{*}$ mode, a dedicated MC sample of this background mode is generated. Based on its current branching fraction upper limit \cite{Chen11}, we subtract 0.5 events from the extracted signal yield and assign $\pm0.5$ events as the systematic uncertainty. We have verified that our signal extraction method is robust and see negligible systematic bias in the signal yield when assuming 0.1 to 10 times the theoretical branching fractions (about 1.6 to 160 events) in an MC ensemble test.
\begin{table}[b!]
\centering
\caption{The systematic uncertainties in the $D$ and $D^*$ modes. The $\approx$ signs indicate the $M_{p\bar{\Lambda}}$ dependence of the uncertainty.}
\label{tab_sys_error}
\begin{tabular}{c|cc}
\hline
\hline
\multirow{2}{*}{Item} & \multicolumn{2}{c}{Systematic\ uncertainty (\%)}\\
\cline{2-3}
& $D$ mode & $D^{*}$ mode\\
\hline
Yield bias & negligible & $1.3$ ($0.5$ evt.)\\
Modeling & $\approx3$ & $\approx2$\\
Charged track & $2.1$ & $4.3$\\
Charged hadron identification & $1.3$ & $1.8$\\
$\bar{\Lambda}$ identification & $4.0$ & $4.4$\\
$M_{D^{-}}$, $M_{D^{*-}}-M_{\bar{D}^{0}}$ window & $2.0$ & negligible\\
${\mathcal L}_{\mathrm{S}}/({\mathcal L}_{\mathrm{S}}+{\mathcal L}_{\mathrm{B}})$ requirement & $11.5$ & $11.0$\\
PDF shape & negligible & negligible\\
N$_{B\bar{B}}$ & $1.4$ & $1.4$\\
Sub-decay $\mathcal{B}$ & $2.2$ & $1.7$\\
\hline
Overall & $13.9$ & $13.1$\\
\hline
\hline
\end{tabular}
\end{table}
For the reconstruction efficiency, we consider the following systematic uncertainties: the signal MC modeling for the threshold enhancement effect using the bound state assumption, charged track reconstruction, charged hadron identification, $\bar{\Lambda}$ reconstruction, background discrimination selections, and the PDF shapes. The modeling uncertainty is estimated by comparing the efficiency calculation based on two different MC samples, one generated assuming $p$-$\bar{\Lambda}$ bound states and the other with three-body phase-space decays, in each $M_{p\bar{\Lambda}}$ bin. As the result is highly threshold-enhanced, we use the efficiency given by the bound-state model to calculate the branching fractions and take the differences as the systematic uncertainties between the two models. The uncertainty is about 3\ (2)\% in the $D\ (D^*)$ mode, depending on the bins. For each charged track except the low-momentum pion in $D^{*-}\rightarrow \bar{D}^{0}\pi^{-}$, a 0.35\% uncertainty is assigned to take into account the data-MC difference in the charged track reconstruction. For the low-momentum pion, a 2.5\% uncertainty is assigned. We use the $\Lambda \rightarrow p \pi^{-}$ and $D^{*+} \rightarrow D^{0}\pi^{+}$, $D^{0} \rightarrow K^{-}\pi^{+}$
samples to calibrate the MC \rlap{$\,p$}{${\mathstrut}^{\scriptscriptstyle(-)}$}, $K^{\pm}$, $\pi^{\pm}$ identification efficiencies and assign uncertainties. For the $\bar{\Lambda}$ reconstruction, we estimate the uncertainty by considering the data--MC difference of tracks displaced from the IP, the $\bar{\Lambda}$ proper time, and $\bar{\Lambda}$ mass distributions. The uncertainties due to the $\alpha_{D^{(*)}}$ selections are estimated separately with the control sample mode, $B^{0}\rightarrow \pi^{+}K_{S}^{0}D^{(*)-}$.
We compare the data--MC efficiency differences with or without
the $\alpha$ selections, where the non-negligible statistical uncertainties are also included. In both cases, the obtained $\mathcal{B}(B^{0}\rightarrow \pi^{+}K_{S}^{0}D^{(*)-})$
is found to be consistent with the world average, indicating overall reliability of our methodology. For the $\beta_{D}$ and $\beta_{D^*}$ selections,
we compare the widths of the peaking components in $M_{D^{-}}$ and $M_{D^{*-}}- M_{\bar{D}^{0}}$ between MC and data and quote the differences as the uncertainties. We also relax the shape parameters of the signal PDF when fitting the control sample and compare the result to the MC-determined PDF. The resulting difference in the calculated $\mathcal{B}(B^{0}\rightarrow \pi^{+}K_{S}^{0}D^{(*)-})$ is negligible.
In the translation from signal yields to branching fractions, we consider the uncertainties of $\mathcal{B}_{\mathrm{subdecay}}$ and $N_{B\bar{B}}$. The uncertainties of $\mathcal{B}_{\mathrm{subdecay}}$ are obtained from Ref.\ \cite{Olive14}. For $N_{B\bar{B}}$, the on- and off-resonance di-lepton events, the difference between the $e^{+}e^{-}\rightarrow q\bar{q}$ MC and data, the primary-vertex sideband data, and the statistical uncertainty are combined to estimate the uncertainty.
In this paper, we have reported the first observation of the $B^{0}\rightarrow p \bar{\Lambda} D^{-}$ and $B^{0}\rightarrow p \bar{\Lambda} D^{*-}$ decays with branching fractions $(25.1\pm2.6\pm3.5)\times10^{-6}\ (19.8\sigma)$
and $(33.6\pm6.3\pm4.4)\times10^{-6}\ (10.8\sigma)$, respectively. The threshold enhancement observed in $M_{p\bar{\Lambda}}$ is consistent with that seen in many other three-body baryonic $B$ decays.
The obtained branching fractions disagree with predictions based on the factorization approach, as do the measured
ratios of branching fractions, both for the $D$ and $D^*$ modes and for the charged and neutral $B$ modes.
We also find a potential angular asymmetry in the $D^*$ mode but not in the $D$ mode. Theoretical explanations, as well as
confirmation from experiments with sizable data sets, such as LHCb and Belle II, will be needed in the future.
We thank the KEKB group for the excellent operation of the
accelerator; the KEK cryogenics group for the efficient
operation of the solenoid; and the KEK computer group,
the National Institute of Informatics, and the
PNNL/EMSL computing group for valuable computing
and SINET4 network support. We acknowledge support from
the Ministry of Education, Culture, Sports, Science, and
Technology (MEXT) of Japan, the Japan Society for the
Promotion of Science (JSPS), and the Tau-Lepton Physics
Research Center of Nagoya University;
the Australian Research Council and the Australian
Department of Industry, Innovation, Science and Research;
Austrian Science Fund under Grant No.~P 22742-N16 and P 26794-N20;
the National Natural Science Foundation of China under Contracts
No.~10575109, No.~10775142, No.~10875115, No.~11175187, and No.~11475187;
the Ministry of Education, Youth and Sports of the Czech
Republic under Contract No.~LG14034;
the Carl Zeiss Foundation, the Deutsche Forschungsgemeinschaft
and the VolkswagenStiftung;
the Department of Science and Technology of India;
the Istituto Nazionale di Fisica Nucleare of Italy;
National Research Foundation (NRF) of Korea Grants
No.~2011-0029457, No.~2012-0008143, No.~2012R1A1A2008330,
No.~2013R1A1A3007772, No.~2014R1A2A2A01005286, No.~2014R1A2A2A01002734,
No.~2014R1A1A2006456;
the Basic Research Lab program under NRF Grant No.~KRF-2011-0020333,
No.~KRF-2011-0021196, Center for Korean J-PARC Users, No.~NRF-2013K1A3A7A06056592;
the Brain Korea 21-Plus program and the Global Science Experimental Data
Hub Center of the Korea Institute of Science and Technology Information;
the Polish Ministry of Science and Higher Education and
the National Science Center;
the Ministry of Education and Science of the Russian Federation and
the Russian Foundation for Basic Research;
the Slovenian Research Agency;
the Basque Foundation for Science (IKERBASQUE) and
the Euskal Herriko Unibertsitatea (UPV/EHU) under program UFI 11/55 (Spain);
the Swiss National Science Foundation; the Ministry of Science and Technology
and the Ministry of Education of Taiwan; and the U.S.\
Department of Energy and the National Science Foundation.
This work is supported by a Grant-in-Aid from MEXT for
Science Research in a Priority Area (``New Development of
Flavor Physics'') and from JSPS for Creative Scientific
Research (``Evolution of Tau-lepton Physics'').
\section{Introduction}
A better understanding of the behavior of novel materials with unusual mechanical properties is important in many applications. As is well known, the optimization of the topology and geometry of a structure greatly impacts its performance. Topology optimization, in particular, has found many uses in the aerospace and automotive industries and in acoustic devices, to name a few. As one of the most demanding undertakings in structural design, topology optimization has undergone tremendous growth over the last thirty years. Generally speaking, topology optimization of continuum structures has branched out in two directions. One is structural optimization of macroscopic designs, where methods like the Solid Isotropic Material with Penalization (SIMP) method \cite{BS04} and the homogenization method \cite{AllHom}, \cite{ABFJ97} were first introduced. The other branch deals with the optimization of micro-structures in order to elicit a certain macroscopic response or behavior of the resulting composite structure \cite{BK88}, \cite{GM14}, \cite{Sig94}, \cite{WMW04}. The latter will be the focal point of the current work.
In the context of linear elastic material and small deformation kinematics there is quite a body of work in the design of mechanical meta-materials using inverse homogenization. One of the first works in the aforementioned subject was carried out by \cite{Sig94}. The author used a modified optimality criteria method that was proposed in \cite{RZ93} to optimize a periodic micro-structure so that the homogenized coefficients attained certain target values.
In the same vein, the authors in \cite{WMW04} used inverse homogenization and a level set method coupled with the Hadamard boundary variation technique \cite{AllCon}, \cite{AJT} to construct elastic and thermo-elastic periodic micro-structures that exhibited certain prescribed macroscopic behavior for a single material and void. More recent work was also done by \cite{GM14}, where again inverse homogenization and a level set method coupled with the Hadamard shape derivative were used to extend the class of optimized micro-structures in the context of the smoothed interface approach \cite{ADDM}, \cite{GM14}. Namely, for mathematical or physical reasons, a smooth, thin transitional layer of size $2\epsilon$, where $\epsilon$ is small, replaces the sharp interface between material and void or between two different materials. The theory that \cite{ADDM}, \cite{GM14} develop in obtaining the shape derivative is based on the differentiability properties of the signed distance function \cite{DZ11} and is mathematically rigorous.
Topology optimization under finite deformation has not undergone the same rapid development as in the case of small strains elasticity, for obvious reasons. One of the first works of topology optimization in non-linear elasticity appeared as part of the work of \cite{AJT} where they considered a non-linear hyper-elastic material of St. Venant-Kirchhoff type in designing a cantilever using a level set method. More recent work was carried out by the authors of \cite{WSJ14}, where they utilized the SIMP method to design non-linear periodic micro-structures using a modified St. Venant-Kirchhoff model.
The rapid advances of 3D printers have made it possible to print many of these micro-structures, which are characterized by complicated geometries, and this in turn has opened the way to testing and evaluating the mechanical properties of such structures. For instance, the authors of \cite{Clauetal15} 3D printed and tested a variety of the non-linear micro-structures from the work of \cite{WSJ14} and showed that structures similar in form to the one in {\sc figure} \ref{fig:Clauu} exhibited an apparent Poisson ratio between $-0.8$ and $0$ for strains up to $20\%$. Preliminary experiments by P. Rousseau \cite{Rou16} on the printed structure of {\sc figure} \ref{fig:Clauu} showed that opposite branches of the structure came into contact with one another at a strain of roughly $25\%$, which matches the values reported in \cite{Clauetal15}.
\begin{figure}[h]
\label{fig:Clauu}
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{Clauetal15_a_.png}}
{(a)}
&
\subf{\includegraphics[width=56mm]{Clauetal15_b_.png}}
{(b)}
\end{tabular}
\caption{A 3D printed material with all four branches on the same plane achieving an apparent Poisson ratio of $-0.8$ with over $20\%$ strain. On sub-figure (a) is the uncompressed image and on sub-figure (b) is the image under compression. Used with permission from \cite{Rou16}.}
\end{figure}
To go beyond the $25\%$ strain mark, the author of \cite{Rou16} designed a material where the branches were distributed over different parallel planes (see {\sc figure} \ref{fig:Rou}). The distribution of the branches on different planes eliminated contact of opposite branches up to a strain of $50\%$. A question remains whether or not the shape of the unit cell in {\sc figure} \ref{fig:Rou} is optimal. We suspect that it is not; the novelty of the problem, however, lies in its multi-layer character within the optimization framework of a unit cell with respect to two desired apparent elastic tensors.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{Rou16_a_.png}}
{(a)}
&
\subf{\includegraphics[width=35mm]{Rou16_b_.png}}
{(b)}
\end{tabular}
\caption{A 3D printed material with two of the branches on a different plane achieving an apparent Poisson ratio of approximately $-1.0$ with over $40\%$ strain. Sub-figure (a) is the uncompressed image and sub-figure (b) is the image under compression. Used with permission from \cite{Rou16}.}
\label{fig:Rou}
\end{figure}
Our goal in this work is to design a multi-layer periodic composite with desired elastic properties. In other words, we need to specify the micro-structure of the material in terms of both the distribution and the topology of its phases. In Section 2 we specify the problem setting, define the objective function to be optimized, and recall the notion of a Hadamard shape derivative. In Section 3 we introduce the level set that implicitly characterizes our domain and give a brief description of the smoothed interface approach; moreover, we compute the shape derivatives and describe the steps of the numerical algorithm. In Section 4 we compute several examples of multi-layer auxetic materials that exhibit a negative apparent Poisson ratio in 2D. For full 3D systems the steps are exactly the same, albeit with a bigger computational cost.
\noindent {\bf Notation}. Throughout the paper we will be employing the Einstein summation notation for repeated indices. As is the case in linear elasticity, $\vc{\varepsilon}(\vc{u})$ will indicate the strain defined by: $\vc{\varepsilon}(\vc{u}) = \frac{1}{2} \left ( \nabla \vc{u} + \nabla \vc{u}^\top \right)$, the inner product between matrices is denoted by $\vc{A}$:$\vc{B}$ = $tr(\vc{A}^\top \vc{B}) = A_{ij} \, B_{ji}$. Lastly, the mean value of a quantity is defined as $\mathcal{M}_Y(\gamma) = \frac{1}{|Y|}\int_Y \gamma(\vc{y}) \, d\vc{y}$.
\section{Problem setting}
We begin with a brief outline of some key results from the theory of homogenization \cite{AllHom}, \cite{BP89}, \cite{CD00}, \cite{MV10}, \cite{SP80} that will be needed to set up the optimization problem. Consider a linear, elastic, periodic body occupying a bounded domain $\Omega$ of $ {\mathbb R} ^N, N = 2, 3$ with period $\epsilon$ that is assumed to be small in comparison to the size of the domain. Moreover, denote by $Y=\left(-\dfrac{1}{2},\dfrac{1}{2}\right)^N$ the rescaled periodic unit cell. The material properties in $\Omega$ are represented by a periodic fourth order tensor $\mathbb{A}(\vc{y})$ with $\vc{y}=\vc{x}/\epsilon \in Y$ and $\vc{x} \in \Omega$, which carries the usual symmetries and is positive definite:
\[
A_{ijkl}=A_{jikl}=A_{ijlk}=A_{klij} \text{ for } i,j,k,l \in \{1, \ldots, N \}
\]
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1.0]
\draw [step=0.5,thin,gray!40] (-2.6,-1.7) grid (2.6,1.7);
\draw [semithick,black] (0,0) ellipse (2.1 and 1.2);
\draw [semithick,black] (2.0,1.0) node [left] {$\Omega$};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,0.3) (0.3,0.2) (0.2,0.1) (0.2,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,0.3) (0.8,0.2) (0.7,0.1) (0.7,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(1.25,0.3) (1.3,0.2) (1.2,0.1) (1.2,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,0.8) (0.3,0.7) (0.2,0.6) (0.2,0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,0.8) (0.8,0.7) (0.7,0.6) (0.7,0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,0.3) (-0.2,0.2) (-0.3,0.1) (-0.3,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,0.3) (-0.7,0.2) (-0.8,0.1) (-0.8,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-1.25,0.3) (-1.2,0.2) (-1.3,0.1) (-1.3,0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,0.8) (-0.2,0.7) (-0.3,0.6) (-0.3,0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,0.8) (-0.7,0.7) (-0.8,0.6) (-0.8,0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,-0.3) (-0.2,-0.2) (-0.3,-0.1) (-0.3,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,-0.3) (-0.7,-0.2) (-0.8,-0.1) (-0.8,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-1.25,-0.3) (-1.2,-0.2) (-1.3,-0.1) (-1.3,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,-0.8) (-0.2,-0.7) (-0.3,-0.6) (-0.3,-0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,-0.8) (-0.7,-0.7) (-0.8,-0.6) (-0.8,-0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,-0.3) (0.3,-0.2) (0.2,-0.1) (0.2,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,-0.3) (0.8,-0.2) (0.7,-0.1) (0.7,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(1.25,-0.3) (1.3,-0.2) (1.2,-0.1) (1.2,-0.3)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,-0.8) (0.3,-0.7) (0.2,-0.6) (0.2,-0.8)};
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,-0.8) (0.8,-0.7) (0.7,-0.6) (0.7,-0.8)};
\draw [->] (1.5,0) -- (5,-1);
\draw [->] (1.5,0.5) -- (5,2);
\draw [semithick,lightgray] (5,-1) -- (8,-1) -- (8,2) -- (5,2) -- (5,-1);
\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(6.2,-0.3) (6.8,-0.1) (7,0.8) (6.3,1.2)};
\draw [semithick,black] (8.2,2.3) node [left] {$\epsilon \, Y$};
\draw [<->,semithick,lightgray] (8.2,-1) -- (8.2,2);
\draw [semithick,black] (8.2,0.5) node [right] {$\epsilon$};
\draw [<->,semithick,lightgray] (5,-1.2) -- (8,-1.2);
\draw [semithick,black] (6.5,-1.2) node [below] {$\epsilon$};
\end{tikzpicture}
\end{center}
\caption{Schematic of the elastic composite material that is governed by eq. \eqref{elas}. }
\label{fig:hom_schem}
\end{figure}
Denoting by $\vc{f}$ the body force and enforcing a homogeneous Dirichlet boundary condition the description of the problem is,
\begin{align}\label{elas}
- \dv{ \vc{\sigma}^\epsilon } &= \vc{f} & \text{in } &\Omega,\nonumber \\
\vc{\sigma}^\epsilon &= \mathbb{A}(\vc{x}/\epsilon) \, \vc{\varepsilon}(\vc{u}^\epsilon) & \text{in } &\Omega, \\
\vc{u}^\epsilon &= \vc{0} & \text{on } &\partial \Omega. \nonumber
\end{align}
We perform an asymptotic analysis of $\eqref{elas}$ as the period $\epsilon$ approaches $0$ by searching for a displacement $\vc{u}^{\epsilon}$ of the form
\[
\vc{u}^{\epsilon}(\vc{x}) = \sum_{i=0}^{+\infty} \epsilon^i \, \vc{u}^i(\vc{x},\vc{x}/\epsilon)
\]
One can show that $\vc{u}^0$ depends only on $\vc{x}$ and, at order $\epsilon^{-1}$, we can obtain a family of auxiliary periodic boundary value problems posed on the reference cell $Y.$ To begin with, for any $m,\ell \in \{1, \ldots, N\}$ we define $\vc{E}^{m\ell}=\frac{1}{2}(\vc{e}_m \otimes \vc{e}_\ell + \vc{e}_{\ell} \otimes \vc{e}_m),$ where $(\vc{e}_k)_{1 \le k \le N}$ is the canonical basis of $ {\mathbb R} ^N.$ For each $\vc{E}^{m\ell}$ we have
\begin{align} \label{local}
-&\dv_y \left( { \mathbb{A}(\vc{y})(\vc{E}^{m\ell} + \vc{\varepsilon}_y(\vc{\chi}^{m\ell})) } \right) = \vc{0} & \text{in } Y,\nonumber \\
&\vc{y} \mapsto \vc{\chi}^{m\ell}(\vc{y}) &Y-\text{ periodic}, \nonumber \\
&\mathcal{M}_Y (\vc{\chi}^{m\ell}) = \vc{0}. \nonumber
\end{align}
where $\vc{\chi}^{m\ell}$ is the displacement created by the mean deformation equal to $\vc{E}^{m\ell}.$ In its weak form the above equation looks as follows:
\begin{equation} \label{local:sol}
\text{Find } \vc{\chi}^{m\ell} \in V \text{ such that } \int_Y \mathbb{A}(\vc{y}) \, \left( \vc{E}^{m\ell} + \vc{\varepsilon}(\vc{\chi}^{m\ell}) \right) : \vc{\varepsilon}(\vc{w}) \, d\vc{y} = 0 \text{ for all } \vc{w} \in V,
\end{equation}
where $V=\{ \vc{w} \in W^{1,2}_{per}(Y; {\mathbb R} ^N) \mid \mathcal{M}_Y(\vc{w}) = 0 \}.$ Furthermore, matching asymptotic terms at order $\epsilon^0$ we can obtain the homogenized equations for $\vc{u}^0,$
\begin{align}
- \dv_x{ \vc{\sigma}^0 } &= \vc{f} & \text{in } &\Omega,\nonumber \\
\vc{\sigma}^0 &= \mathbb{A}^H \, \vc{\varepsilon}(\vc{u}^0) & \text{in } &\Omega, \\
\vc{u}^0 &= \vc{0} & \text{on } &\partial \Omega. \nonumber
\end{align}
where $\mathbb{A}^H$ are the homogenized coefficients and in their symmetric form look as follows,
\begin{equation}\label{hom:coef}
A^H_{ijm\ell} = \int_{Y} \mathbb{A}(\vc{y})(\vc{E}^{ij} + \vc{\varepsilon}_y(\vc{\chi}^{ij})):(\vc{E}^{m\ell} + \vc{\varepsilon}_y(\vc{\chi}^{m\ell})) \, d\vc{y}.
\end{equation}
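As a simple sanity check of \eqref{hom:coef}: if the material is homogeneous, $\mathbb{A}(\vc{y}) \equiv \mathbb{A}$, then $\vc{\chi}^{m\ell}=\vc{0}$ solves \eqref{local:sol} and, since $|Y|=1$,
\[
A^H_{ijm\ell} = \int_Y \mathbb{A}\, \vc{E}^{ij} : \vc{E}^{m\ell} \, d\vc{y} = A_{ijm\ell},
\]
so that homogenization returns the original material, as expected.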
\subsection{The optimization problem}
Assume that $Y$ is a working domain and consider $d$ sub-domains labeled $S_1,\ldots,S_d \subset Y$ that are smooth, open, bounded subsets. Define the objective function,
\begin{equation} \label{objective}
J(\mathbf{S}) = \frac{1}{2} \norm{\mathbb{A}^H - \mathbb{A}^t}^2_{\eta} \text{ with } \mathbf{S}=(S_1,\ldots,S_d).
\end{equation}
where $\norm{\cdot}_{\eta}$ is a weighted Euclidean norm, $\mathbb{A}^t$ denotes the specified target elastic tensor, $\mathbb{A}^H$ its homogenized counterpart, and $\eta$ the weight coefficients, which carry the same type of symmetry as the homogenized elastic tensor. We define the set of admissible shapes contained in the working domain $Y$ that have a fixed volume by
\[
\mathcal{U}_{ad} = \left \{ S_i \subset Y \text{ open, bounded, and smooth, such that } |S_i| = V^t_i,\ i=1,\ldots,d \right \}.
\]
Thus, we can formulate the optimization problem as follows,
\begin{gather} \label{opti:prob}
\begin{aligned}
& \inf_{\mathbf{S} \subset \mathcal{U}_{ad}} J(\mathbf{S}) \\
& \vc{\chi}^{m\ell} \text{ satisfies } \eqref{local:sol}
\end{aligned}
\end{gather}
\subsection{Shape propagation analysis}
In order to apply a gradient descent method to \eqref{opti:prob} we recall the notion of shape derivative. As has become standard in the shape and topology optimization literature we follow Hadamard's variation method for computing the deformation of a shape. The classical shape sensitivity framework of Hadamard provides us with a descent direction. The approach here is due to \cite{MS76} (see also \cite{AllCon}). Assume that $\Omega_0$ is a smooth, open, subset of a design domain $D.$ In the classical theory one defines the perturbation of the domain $\Omega_0$ in the direction $\vc{\theta}$ as
\[
(Id + \vc{\theta})(\Omega_0) := \{ \vc{x} + \vc{\theta}(\vc{x}) \mid \vc{x} \in \Omega_0 \}
\]
where $\vc{\theta} \in W^{1,\infty}( {\mathbb R} ^N; {\mathbb R} ^N)$ and it is tangential on the boundary of $D.$ For small enough $\vc{\theta}$, $(Id + \vc{\theta})$ is a diffeomorphism in $ {\mathbb R} ^N$. Otherwise said, every admissible shape is represented by the vector field $\vc{\theta}$. This framework allows us to define the derivative of a functional of a shape as a Fr\'echet derivative.
\begin{deff}
The shape derivative of $J(\Omega_0)$ at $\Omega_0$ is defined as the Fr\'echet derivative in $W^{1,\infty}( {\mathbb R} ^N; {\mathbb R} ^N)$ at $\vc{0}$ of the mapping $\vc{\theta} \to J((Id + \vc{\theta})(\Omega_0))$:
\[
J((Id + \vc{\theta})(\Omega_0)) = J(\Omega_0) + J'(\Omega_0)(\vc{\theta}) + o(\vc{\theta})
\]
with $\lim_{\vc{\theta} \to \vc{0}} \frac{|o(\vc{\theta})|}{\norm{\vc{\theta}}_{W^{1,\infty}}} = 0,$ and $J'(\Omega_0)(\vc{\theta})$ a continuous linear form on $W^{1,\infty}( {\mathbb R} ^N; {\mathbb R} ^N).$
\end{deff}
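A classical example is the volume functional $J(\Omega_0)=\int_{\Omega_0} d\vc{x}$: changing variables, $J((Id+\vc{\theta})(\Omega_0)) = \int_{\Omega_0} \det \left( I + \nabla \vc{\theta} \right) d\vc{x} = J(\Omega_0) + \int_{\Omega_0} \dv{\vc{\theta}} \, d\vc{x} + o(\vc{\theta})$, so that, by the divergence theorem,
\[
J'(\Omega_0)(\vc{\theta}) = \int_{\Omega_0} \dv{\vc{\theta}} \, d\vc{x} = \int_{\partial \Omega_0} \vc{\theta} \cdot \vc{n} \, ds.
\]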
\begin{figure}[h]
\centering
\includegraphics[width=2.5in]{ShapeD.png}
\caption{Perturbation of a domain in the direction $\vc{\theta}.$}
\end{figure}
\begin{remark}
The above definition is not a constructive computation for $J'(\Omega_0)(\vc{\theta}).$ There is more than one way to compute the shape derivative of $J(\Omega_0)$ (see \cite{AllCon} for a detailed presentation). In the following section we compute the shape derivative associated with \eqref{opti:prob} using the formal Lagrangian method of J. C\'ea \cite{Cea86}.
\end{remark}
\section{Level set representation of the shape in the unit cell}
Following the ideas of \cite{ADDM}, \cite{WMW04}, the $d$ sub-domains in the cell $Y$ labeled $S_i$, $i \in \{1, \ldots, d\}$ can treat up to $2^d$ distinct phases by considering a partition of the working domain $Y$ denoted by $F_j$, $j \in \{1, \ldots, 2^d \}$ and defined the following way,
\begin{align*}
F_1 =& S_1 \cap S_2 \cap \ldots \cap S_d \\
F_2 =& \overline{S_1^c} \cap S_2 \cap \ldots \cap S_d \\
&\vdots\\
F_{2^d} =& \overline{S_1^c} \cap \overline{S_2^c} \cap \ldots \cap \overline{S_d^c}
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[width=2in]{cell}
\caption[Representation of different phases in the unit cell for $d=2$.]{Representation of different material in the unit cell for $d=2$.}
\end{figure}
Define for $i \in \{ 1, \ldots, d \}$ the level sets $\phi_i$,
\[
\phi_i(\vc{y})
\begin{cases}
= 0 & \text{ if } \vc{y} \in \partial S_i \\
> 0 & \text{ if } \vc{y} \in S_i^c \\
< 0 & \text{ if } \vc{y} \in S_i
\end{cases}
\]
Moreover, denote by $\Gamma_{km} = \Gamma_{mk} = \overline{F}_m \cap \overline{F}_k$, where $k \ne m$, the interface boundary between the $m^{th}$ and the $k^{th}$ partition and let $\Gamma = \bigcup_{\substack{i,j=1 \\ i \ne j}}^{2^d} \Gamma_{ij}$ denote the collective interface to be displaced. The properties of the material that occupies each phase $F_j$ are characterized by an isotropic fourth order tensor
\[
\mathbb{A}^j = 2 \, \mu_j \, I_4 + \left( \kappa_j - \frac{2\,\mu_j}{N} \right) \, I_2 \otimes I_2, \quad j \in \{ 1, \ldots, 2^d\}
\]
where $\kappa_j$ and $\mu_j$ are the bulk and shear moduli of phase $F_j$, $I_2$ is a second order identity matrix, and $I_4$ is the identity fourth order tensor acting on symmetric matrices.
\begin{remark}
The expression of the layer $F_k$, $1 \leq k \leq 2^d$, in terms of the sub-domains $S_i$, $1 \leq i \leq d$, is simply given by the representation of the number $k-1$ in base 2: this representation is a sequence of $d$ digits, $0$ or $1$, and replacing, in position $i$, the digit $0$ with $S_i$ and the digit $1$ with $\overline{S_i^c}$ maps the base-2 expression to the expression of the layer $F_k$. In a similar way, one can express the subsequent formulas compactly. However, for the sake of simplicity, we shall restrict the expressions in the remainder of the paper to $d=2$, so that $1 \leq j \leq 4$.
\end{remark}
\begin{remark}
At the interface boundary between the $F_j$'s there exists a jump on the coefficients that characterize each phase. In the sub-section that follows we will change this sharp interface assumption and allow for a smooth passage from one material to the other as in \cite{ADDM}, \cite{GM14}.
\end{remark}
\subsection{The smoothed interface approach}
We model the interface as a smooth, thin transition layer of width $2 \, \epsilon > 0$ (see \cite{ADDM}, \cite{GM14}) rather than a sharp interface. This regularization is carried out in two steps: first each level set $\phi_i$ is re-initialized to become a signed distance function $d_{S_i}$ to the interface boundary, and then an interpolation with a Heaviside type function, $h_\epsilon(t)$, is used to pass from one material to the next,
\[
\phi_i \rightarrow d_{S_i} \rightarrow h_\epsilon(d_{S_i}).
\]
The Heaviside function $h_\epsilon(t)$ is defined as,
\begin{equation}\label{heavy}
h_{\epsilon}(t) =
\begin{cases}
0 & \text{if } t < -\epsilon, \\
\frac{1}{2}\left(1+\frac{t}{\epsilon} + \frac{1}{\pi} \, \sin\left( \frac{\pi \, t}{\epsilon} \right) \right) & \text{if } |t| \le \epsilon,\\
1 & \text{if } t > \epsilon.
\end{cases}
\end{equation}
\begin{remark}
The choice of the regularizing function above is not unique; other types of regularizing functions may be used (see \cite{WW04}).
\end{remark}
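A minimal numerical sketch of $h_{\epsilon}$ (in Python, for illustration) reads:
\begin{verbatim}
import numpy as np

def h_eps(t, eps):
    # regularized Heaviside; eps is the half-width of the layer
    s = np.clip(t, -eps, eps)
    return np.clip(0.5*(1 + t/eps + np.sin(np.pi*s/eps)/np.pi), 0.0, 1.0)

print(h_eps(np.linspace(-0.1, 0.1, 5), eps=0.04))  # smooth 0 -> 1 step
\end{verbatim}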
The signed distance function to the domain $S_i, i=1,2$, denoted by $d_{S_i}$ is obtained as the stationary solution of the following problem \cite{OS88},
\begin{gather} \label{reinit}
\begin{aligned}
\frac{\partial d_{S_i}}{dt} + sign(\phi_i) (|\nabla d_{S_i}| - 1) &= 0 \text{ in } {\mathbb R} ^+ \times Y, \\
d_{S_i}(0,\vc{y}) &= \phi_i (\vc{y}) \text{ in } Y,
\end{aligned}
\end{gather}
where $\phi_i$ is the initial level set for the subset $S_i.$ Hence, the properties of the material occupying the unit cell $Y$ are then defined as a smooth interpolation between the tensors $\mathbb{A}^j$'s $j \in \{1,\ldots,2^d \}$,
\begin{align} \label{smoothing}
\mathbb{A}^{\epsilon}(d_{\mathbf{S}}) &= (1-h_\epsilon(d_{S_1})) \, (1-h_\epsilon(d_{S_2})) \, \mathbb{A}^1 + h_\epsilon(d_{S_1}) \, (1-h_\epsilon(d_{S_2})) \, \mathbb{A}^2 \nonumber \\
&+ (1-h_\epsilon(d_{S_1})) \, h_\epsilon(d_{S_2}) \, \mathbb{A}^3 + h_\epsilon(d_{S_1}) \, h_\epsilon(d_{S_2}) \, \mathbb{A}^4.
\end{align}
where $d_{\mathbf{S}}=(d_{S_1},d_{S_2})$. Lastly, we remark that the volume of each phase is written as
\[
\int_Y \iota_k \, d\vc{y} = V_k
\]
where $\iota_k$ is defined as follows,
\begin{equation}\label{vol:const}
\begin{cases}
\iota_1 &= (1-h_\epsilon(d_{S_1})) \, (1-h_\epsilon(d_{S_2})), \\
\iota_2 &= h_\epsilon(d_{S_1}) \, (1-h_\epsilon(d_{S_2})), \\
\iota_3 &= (1-h_\epsilon(d_{S_1})) \, h_\epsilon(d_{S_2}), \\
\iota_4 &= h_\epsilon(d_{S_1}) \, h_\epsilon(d_{S_2}).
\end{cases}
\end{equation}
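These quantities are straightforward to evaluate on a grid. The sketch below obtains the signed distances with two Euclidean distance transforms, a common discrete shortcut to the steady state of \eqref{reinit}, and then assembles the weights $\iota_k$ of \eqref{vol:const}; the two initial shapes are arbitrary choices:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def signed_distance(phi, h):
    inside = phi < 0
    return (ndimage.distance_transform_edt(~inside)
            - ndimage.distance_transform_edt(inside)) * h

def h_eps(t, eps):                     # as in the previous sketch
    s = np.clip(t, -eps, eps)
    return np.clip(0.5*(1 + t/eps + np.sin(np.pi*s/eps)/np.pi), 0, 1)

n = 100; h = 1.0/n; eps = 2*h
x, y = np.meshgrid(np.linspace(-.5, .5, n), np.linspace(-.5, .5, n))
phi1 = np.sqrt(x**2 + y**2) - 0.3             # S_1: a disc
phi2 = np.maximum(abs(x), abs(y)) - 0.25      # S_2: a square
h1 = h_eps(signed_distance(phi1, h), eps)
h2 = h_eps(signed_distance(phi2, h), eps)
iota = [(1-h1)*(1-h2), h1*(1-h2), (1-h1)*h2, h1*h2]
print([round(float(w.mean()), 3) for w in iota])  # approximate volumes
\end{verbatim}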
\begin{remark}
Once we have re-initialized the level sets into signed distance functions we can obtain the shape derivatives of the objective functional with respect to each sub-domain $S_i.$ In order to do this we require certain differentiability properties of the signed distance function. Detailed results pertaining to the aforementioned properties can be found in \cite{ADDM}, \cite{GM14}. We encourage the reader to consult their work for the details. For our purposes, we will make heavy use of Propositions $2.5$ and $2.9$ in \cite{ADDM} as well as certain results therein.
\end{remark}
\begin{thm} \label{Shape:Thm}
Assume that $S_1, S_2$ are smooth, bounded, open subsets of the working domain $Y$ and $\vc{\theta^1}, \vc{\theta^2} \in W^{1,\infty}( {\mathbb R} ^N; {\mathbb R} ^N).$ The shape derivatives of \eqref{opti:prob} in the directions $\vc{\theta^1}, \vc{\theta^2}$ respectively are,
\begin{align*}
\frac{\partial J}{\partial S_1}(\vc{\theta}^1) =
&-\int_{\Gamma} \vc{\theta^1} \cdot \vc{n}^1 \Big (\eta_{ijk\ell} \, \left( A^H_{ijk\ell} - A^t_{ijk\ell} \right) A^{\epsilon*}_{mqrs}(d_{S_2}) (E^{k\ell}_{mq} + \varepsilon_{mq}(\vc{\chi^{k\ell}})) (E^{ij}_{rs} + \varepsilon_{rs}(\vc{\chi}^{ij})) \\
&- h^{*}_{\epsilon}(d_{S_2}) \Big ) d\vc{y}
\end{align*}
\begin{align*}
\frac{\partial J}{\partial S_2}(\vc{\theta}^2) =
&-\int_{\Gamma} \vc{\theta^2} \cdot \vc{n}^2 \Big (\eta_{ijk\ell} \, \left( A^H_{ijk\ell} - A^t_{ijk\ell} \right) \, A^{\epsilon*}_{mqrs}(d_{S_1}) \, (E^{k\ell}_{mq} + \varepsilon_{mq}(\vc{\chi^{k\ell}})) \, (E^{ij}_{rs} + \varepsilon_{rs}(\vc{\chi}^{ij})) \\
&- h^{*}_{\epsilon}(d_{S_1}) \Big) d\vc{y}
\end{align*}
where, for $i=1,2$, $\mathbb{A}^{\epsilon*}(d_{S_i})$, written component wise above, denotes,
\begin{equation} \label{A:star}
\mathbb{A}^{\epsilon*}(d_{S_i}) = \mathbb{A}^{2} - \mathbb{A}^{1} + h_{\epsilon}(d_{S_i}) \, \left( \mathbb{A}^{1} - \mathbb{A}^{2} - \mathbb{A}^{3} + \mathbb{A}^{4} \right),
\end{equation}
\begin{equation} \label{h:star}
h^{*}_{\epsilon}(d_{S_i}) = (\ell_2 - \ell_1+ h_{\epsilon}(d_{S_i})(\ell_1 - \ell_2 - \ell_3 + \ell_4) )
\end{equation}
and $\ell_j$, $j \in \{1, \ldots, 4 \}$, are the Lagrange multipliers for the volume of each phase.
\end{thm}
\begin{proof}
For each $k,\ell$ we introduce the following Lagrangian for $(\vc{u}^{k\ell},\vc{v},\vc{\mu}) \in V \times V \times {\mathbb R} ^{2^d}$ associated to problem \eqref{opti:prob},
\begin{gather}\label{Lagrangian}
\begin{aligned}
\mathcal{L}(\vc{S}, \vc{u}^{k\ell}, \vc{v}, \vc{\mu})
= J(\vc{S})
& + \int_Y \mathbb{A}^{\epsilon}(d_{\vc{S}}) \, \left( \vc{E}^{k\ell} + \vc{\varepsilon}(\vc{u}^{k\ell}) \right): \vc{\varepsilon}(\vc{v}) \, d\vc{y} + \vc{\mu} \cdot \left( \int_Y \vc{\iota} \, d\vc{y} - \vc{V}^t \right),
\end{aligned}
\end{gather}
where $\vc{\mu}=(\mu_1, \ldots, \mu_4)$ is a vector of Lagrange multipliers for the volume constraint, $\vc{\iota}=(\iota_1, \ldots, \iota_4)$, and $\vc{V}^t=(V_1^t,\ldots,V_4^t)$.
\begin{remark}
Each variable of the Lagrangian is independent of one another and independent of the sub-domains $S_1$ and $S_2$.
\end{remark}
\subsubsection*{Direct problem}
Differentiating $\mathcal{L}$ with respect to $\vc{v}$ in the direction of some test function $\vc{w} \in V$ we obtain,
\[
\dpair{\frac{ \partial \mathcal{L} }{ \partial \vc{v} }}{ \vc{w} } = \int_Y A^{\epsilon}_{ijrs}(d_{\vc{S}}) \, (E^{k\ell}_{ij} + \varepsilon_{ij}(\vc{u^{k\ell}})) \, \varepsilon_{rs}(\vc{w}) \, d\vc{y},
\]
upon setting this equal to zero we obtain the variational formulation in \eqref{local:sol}.
\subsubsection*{Adjoint problem}
Differentiating $\mathcal{L}$ with respect to $\vc{u}^{k\ell}$ in the direction $\vc{w} \in V$ we obtain,
\begin{align*}
\dpair{\frac{ \partial \mathcal{L} }{ \partial \vc{u}^{k\ell} }}{ \vc{w} }
& = \eta_{ijk\ell} \, \left( A^H_{ijk\ell} - A^t_{ijk\ell} \right) \, \int_Y A^{\epsilon}_{mqrs}(d_{\vc{S}}) \, (E^{k\ell}_{mq} + \varepsilon_{mq}(\vc{u^{k\ell}})) \, \varepsilon_{rs}(\vc{w}) \, d\vc{y} \\
&+\int_Y A^{\epsilon}_{mqrs}(d_{\vc{S}}) \, \varepsilon_{mq}(\vc{w}) \, \varepsilon_{rs}(\vc{v}) \, d\vc{y}.
\end{align*}
We immediately observe that the integral over $Y$ on the first line is equal to $0$ since it is the variational formulation \eqref{local:sol}. Moreover, if we choose $\vc{w} = \vc{v}$ then, by the positive definiteness assumption on the tensor $\mathbb{A}$ as well as the periodicity of $\vc{v}$, we obtain that the adjoint solution is identically zero, $\vc{v} \equiv 0.$
\subsubsection*{Shape derivative}
Lastly, we need to compute the shape derivative in directions $\vc{\theta}^1$ and $\vc{\theta}^2$ for each sub-domain $S_1$, $S_2$ respectively. Here we will carry out computations for the shape derivative with respect to the sub-domain $S_1$ with calculations for the sub-domain $S_2$ carried out in a similar fashion. We know (see \cite{AllCon}) that
\begin{equation} \label{SD}
\dpair{\frac{\partial J}{\partial S_i}(\vc{S})}{\vc{\theta}^i} = \dpair{\frac{\partial \mathcal{L}}{\partial S_i}(\vc{S},\vc{\chi}^{k\ell},\vc{0},\vc{\lambda})}{\vc{\theta}^i} \text{ for } i=1,2.
\end{equation}
Hence,
\begin{align*}
\frac{ \partial \mathcal{L} }{ \partial S_1 }( \vc{\theta}^1 )
& = \eta_{ijk\ell} \left( A^H_{ijk\ell} - A^t_{ijk\ell} \right) \int_Y d'_{S_1}(\vc{\theta}^1) \frac{\partial A^{\epsilon}_{mqrs}}{\partial d_{S_1}} (d_{\vc{S}}) (E^{k\ell}_{mq} + \varepsilon_{mq}(\vc{u^{k\ell}})) \, (E^{ij}_{rs} + \varepsilon_{rs}(\vc{u}^{ij})) d\vc{y} \\
&+\int_Y d'_{S_1}(\vc{\theta}^1) \frac{\partial A^{\epsilon}_{ijrs}}{\partial d_{S_1}} (d_{\vc{S}}) (E^{k\ell}_{ij} + \varepsilon_{ij}(\vc{u^{k\ell}})) \varepsilon_{rs}(\vc{v}) d\vc{y} \\
&+ \ell_1 \int_Y - \, d'_{S_1}(\vc{\theta}^1) \frac{\partial h_{\epsilon}(d_{S_1})}{\partial d_{S_1}} (1 - h_{\epsilon}(d_{S_2})) d\vc{y}
+ \ell_2 \, \int_Y d'_{S_1}(\vc{\theta}^1) \, \frac{\partial h_{\epsilon}(d_{S_1})}{\partial d_{S_1}} \, (1 - h_{\epsilon}(d_{S_2})) \, d\vc{y} \\
&+ \ell_3 \, \int_Y - \, d'_{S_1}(\vc{\theta}^1) \, \frac{\partial h_{\epsilon}(d_{S_1})}{\partial d_{S_1}} \, h_{\epsilon}(d_{S_2}) \, d\vc{y}
+ \ell_4 \, \int_Y d'_{S_1}(\vc{\theta}^1) \, \frac{\partial h_{\epsilon}(d_{S_1})}{\partial d_{S_1}} \, h_{\epsilon}(d_{S_2}) \, d\vc{y}.
\end{align*}
The term on the second line is zero due to the fact that the adjoint solution is identically zero. Moreover, applying Proposition $2.5$ and then Proposition $2.9$ from \cite{ADDM} as well as using the fact that we are dealing with thin interfaces we obtain,
\begin{align*}
\frac{ \partial \mathcal{L} }{ \partial S_1 }( \vc{\theta}^1 )
& = -\eta_{ijk\ell} \, \left( A^H_{ijk\ell} - A^t_{ijk\ell} \right) \, \int_{\Gamma} \vc{\theta}^1 \cdot \vc{n}^1 \, A^{\epsilon*}_{mqrs}(d_{S_2}) \, (E^{k\ell}_{mq} + \varepsilon_{mq}(\vc{u^{k\ell}})) \, (E^{ij}_{rs} + \varepsilon_{rs}(\vc{u}^{ij})) \, d\vc{y} \\
&+ \ell_1 \, \int_{\Gamma} \vc{\theta}^1 \cdot \vc{n}^1 \, (1 - h_{\epsilon}(d_{S_2})) \, d\vc{y}
- \ell_2 \, \int_{\Gamma} \vc{\theta}^1 \cdot \vc{n}^1 \, (1 - h_{\epsilon}(d_{S_2})) \, d\vc{y} \\
&+ \ell_3 \, \int_{\Gamma} \vc{\theta}^1 \cdot \vc{n}^1 \, h_{\epsilon}(d_{S_2}) \, d\vc{y}
- \ell_4 \, \int_{\Gamma} \vc{\theta}^1 \cdot \vc{n}^1 \, h_{\epsilon}(d_{S_2}) \, d\vc{y}
\end{align*}
where $\vc{n}^1$ denotes the outer unit normal to $S_1.$ Thus, if we let $\vc{u}^{k\ell} = \vc{\chi}^{k\ell}$, the solution to the unit cell \eqref{local:sol} and collect terms the result follows.
\end{proof}
\begin{remark}
The tensor $\mathbb{A}^{\epsilon *}$ in \eqref{A:star} as well as $h^{*}_{\epsilon}$ in \eqref{h:star} of the shape derivatives in {\bf Theorem \ref{Shape:Thm}} each depend on the signed distance function of the other sub-domain, which provides an insight into the coupled nature of the problem. We further remark that, in the smoothed interface context, the collective boundary $\Gamma$ to be displaced in {\bf Theorem \ref{Shape:Thm}} is not an actual boundary but rather a tubular neighborhood.
\end{remark}
\subsection{The numerical algorithm}
The result of {\bf Theorem \ref{Shape:Thm}} provides us with the shape derivatives in the directions $\vc{\theta}^1$, $\vc{\theta}^2$ respectively. If we denote by
\[
v^1 = \frac{\partial J}{\partial S_1}(\vc{S}), \quad v^2 = \frac{\partial J}{\partial S_2}(\vc{S}),
\]
a descent direction is then found by selecting the vector fields $\vc{\theta}^1=v^1\vc{n}^1$, $\vc{\theta}^2=v^2\vc{n}^2.$ The shapes $S_1, S_2$ are moved in the directions $v^1, v^2$ by transporting each level set $\phi_i$, $i=1,2$, independently, i.e.\ by solving the Hamilton-Jacobi-type equation
\begin{equation} \label{HJ:phi}
\frac{\partial \phi^i}{\partial t} + v^i \, |\nabla \phi^i| = 0, \quad i=1,2.
\end{equation}
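In practice \eqref{HJ:phi} is solved on the discretized unit cell; as an illustration, a first-order Godunov upwind step can be written in a few lines of Python. The sketch below is schematic (it is not the {\tt FreeFem++} routine used for the computations; the arrays {\tt phi}, {\tt v} and the parameters {\tt dx}, {\tt dt} are placeholders):
\begin{verbatim}
import numpy as np

def hj_step(phi, v, dx, dt):
    # One explicit step of phi_t + v*|grad(phi)| = 0 on a periodic 2-D grid.
    dxm = (phi - np.roll(phi,  1, axis=0)) / dx   # backward difference in x
    dxp = (np.roll(phi, -1, axis=0) - phi) / dx   # forward  difference in x
    dym = (phi - np.roll(phi,  1, axis=1)) / dx   # backward difference in y
    dyp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward  difference in y
    # Godunov gradient magnitudes for positive and negative velocity
    gp = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                 + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
    gm = np.sqrt(np.minimum(dxm, 0)**2 + np.maximum(dxp, 0)**2
                 + np.minimum(dym, 0)**2 + np.maximum(dyp, 0)**2)
    return phi - dt * (np.maximum(v, 0) * gp + np.minimum(v, 0) * gm)
\end{verbatim}
The time step is subject to the usual CFL restriction $\Delta t \, \max_i |v^i| \lesssim \Delta x$.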
Moreover, we extend and regularize the scalar velocities $v^i, \, i=1,2$, to the entire domain $Y$ as in \cite{AJT}, \cite{ADDM}. The extension is done by solving the following problem for $i=1,2$:
\begin{align*}
- \alpha^2 \, \Delta \vc{\theta}^i + \vc{\theta}^i & = 0 \text{ in } Y, \nonumber \\
\nabla \vc{\theta}^i \, \vc{n}^i & = v^i\vc{n}^i \text{ on } \Gamma, \nonumber \\
\vc{\theta}^i & \text{ Y--periodic}, \nonumber
\end{align*}
where $\alpha > 0$ is a small regularization parameter. Hence, using the same algorithm as in \cite{AJT}, for $i=1,2$ we have:
\subsubsection{Algorithm} We initialize $S_i^0 \subset U_{ad}$ through the level sets $\phi^i_0$ defined as the signed distance function of the chosen initial topology, then
{\small \it
\begin{itemize}
\item[1.] iterate until convergence for $k \ge 0$:
\begin{itemize}
\item[a.] Calculate the local solutions $\vc{\chi}^{m\ell}_k$ for $m,\ell=1,2$ by solving the linear
elasticity problem \eqref{local:sol} on $\mathcal{O}^k := S_1^k \cup S_2^k$.
\item[b.] Deform the domain $\mathcal{O}^k$ by solving the Hamilton-Jacobi equations \eqref{HJ:phi} for $i=1,2$. The new shape $\mathcal{O}^{k+1}$ is characterized by the level sets $\phi_i^{k+1}$ solutions of \eqref{HJ:phi} after a time step $\Delta t_k$ starting from the initial condition $\phi_i^k$ with velocity $v^i_k$ computed in terms of the local problems $\vc{\chi^{m\ell}_k}$ for $i=1,2$. The time step $\Delta t_k$ is chosen so that $J(\vc{S}^{k+1}) \le J(\vc{S}^k)$.
\end{itemize}
\item[2.] From time to time, for stability reasons, we re-initialize the level set functions $\phi_i^k$ by solving \eqref{reinit} for $i=1,2$.
\end{itemize}}
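For orientation, the loop structure of the above algorithm can be summarized in the following Python-style sketch. All names ({\tt solve\_cell\_problems}, {\tt shape\_derivative}, {\tt reinitialize}, {\tt J}, {\tt DX}) are placeholders standing in for the corresponding {\tt FreeFem++} routines; the halving of the time step implements the descent condition $J(\vc{S}^{k+1}) \le J(\vc{S}^k)$:
\begin{verbatim}
def optimize(phi, n_iter, dt0, reinit_every=5):
    # phi is the pair of level sets (phi1, phi2); all callees are placeholders.
    for k in range(n_iter):
        chi = solve_cell_problems(phi)            # step 1a: chi^{ml} on O^k
        v = shape_derivative(phi, chi)            # scalar velocities v^1, v^2
        dt, J_old = dt0, J(phi)
        while True:                               # step 1b with backtracking
            phi_new = [hj_step(p, vi, DX, dt) for p, vi in zip(phi, v)]
            if J(phi_new) <= J_old or dt < 1e-8:
                break
            dt *= 0.5                             # shrink step until descent
        phi = phi_new
        if (k + 1) % reinit_every == 0:           # step 2: re-initialization
            phi = [reinitialize(p) for p in phi]
    return phi
\end{verbatim}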
\section{Numerical examples}
For all the examples that follow we have used a symmetric $100 \times 100$ mesh of $P1$ elements. We imposed volume equality constraints for each phase. In the smooth interpolation of the material properties in formula \eqref{smoothing}, we set $\epsilon$ equal to $2\Delta x$, where $\Delta x$ is the grid size. The parameter $\epsilon$ is held fixed throughout (see \cite{ADDM} and \cite{GM14}). The Lagrange multipliers were updated at each iteration in the following way, $\ell^{n+1}_j = \ell^{n}_j - \beta \, \left( \int_Y \iota^n_j \, d\vc{y} -V^t_j \right)$, where $\beta$ is a small parameter.
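As a minimal illustration of this update (assuming the volume integrals $\int_Y \iota^n_j \, d\vc{y}$ have already been computed; the value of $\beta$ is illustrative):
\begin{verbatim}
import numpy as np

def update_multipliers(ell, vol, vol_target, beta=0.05):
    # ell_j^{n+1} = ell_j^n - beta * (integral of iota_j over Y - V_j^t)
    return np.asarray(ell) - beta * (np.asarray(vol) - np.asarray(vol_target))
\end{verbatim}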
Due to the fact that this type of problem suffers from many local minima that may not result in a meaningful shape, instead of putting a stopping criterion in the algorithm we fix, a priori, the number of iterations. Furthermore, since we have no knowledge of what volume constraints make sense for a particular shape, we chose not to strictly enforce the volume constraints for the first two examples. However, for examples $3$ and $4$ we use an augmented Lagrangian to actually enforce the volume constraints,
\[
L(\vc{S} , \vc{\mu}, \vc{\beta}) = J(\vc{S}) - \sum_{i=1}^4 \mu_i C_i(\vc{S}) + \sum_{i=1}^4 \frac{1}{2} \, \beta_i C_i^2(\vc{S}),
\]
where $C_i(\vc{S})$ are the volume constraints and the $\beta_i$ are penalty parameters. The Lagrange multipliers are updated as before; however, this time we also update the penalty parameters $\beta_i$ every $5$ iterations. All the calculations were carried out using the software {\tt FreeFem++} \cite{FH12}.
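A minimal sketch of the corresponding bookkeeping, assuming the objective value and the four constraint values are computed elsewhere, could read:
\begin{verbatim}
import numpy as np

def augmented_lagrangian(J_val, C, mu, beta):
    # L = J - sum_i mu_i*C_i + 0.5*sum_i beta_i*C_i^2
    C = np.asarray(C)
    return J_val - np.dot(mu, C) + 0.5 * np.dot(beta, C**2)

def update_al(mu, beta, C, k, step=0.05, growth=1.5, every=5):
    # multipliers updated every iteration; penalties grown every 5 iterations
    mu = np.asarray(mu) - step * np.asarray(C)
    if (k + 1) % every == 0:
        beta = growth * np.asarray(beta)
    return mu, beta
\end{verbatim}
The step size, growth factor and update period are illustrative choices, not the values used in the computations.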
\begin{remark}
We remark that for the augmented Lagrangian we need to compute the new shape derivative that results. The calculations are similar to those of Theorem \ref{Shape:Thm} and, therefore, we do not detail them here for the sake of brevity.
\end{remark}
\subsection{Example 1}
The first structure to be optimized is a multilevel material that attains an apparent Poisson ratio of $-1$. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. Here phases $2$ and $4$ represent void, while phase $3$ represents a material that is twice as stiff as the material in phase $1$. The Poisson ratio of each phase is set to $\nu=0.3$ and the volume constraints were set to $V^t_1=30\%$ and $V^t_3=4\%.$
\begin{table}[h]
\center
\begin{tabular}{c|ccc}
$ijkl$ & $1111$ & $1122$ & $2222$ \\
\hline
$\eta_{ijkl}$ & $1$ & $30$ & $1$ \\
$A^H_{ijkl}$ & $0.12$ & $-0.09$ & $0.12$ \\
$A^t_{ijkl}$ & $0.1$ & $-0.1$ & $0.1$
\end{tabular}
\caption{Values of weights, final homogenized coefficients and target coefficients}
\end{table}
\vspace{-0.5cm}
From {\sc figure} \ref{Im:aux3} we observe that the volume of the stiffer material does not adhere to the target volume. In this case the algorithm used roughly $16\%$ of the material with Young modulus $1.82$, while the volume of the weaker material more or less adhered to the target constraint.
\newpage
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=60mm]{M3_0_2_}}
{Initial shape}
&
\subf{\includegraphics[width=60mm]{M3_5_2_}}
{iteration $5$}
\\
\subf{\includegraphics[width=60mm]{M3_10_2_}}
{iteration $10$}
&
\subf{\includegraphics[width=60mm]{M3_50_2_}}
{iteration $50$}
\\
\subf{\includegraphics[width=60mm]{M3_100_2_}}
{iteration $100$}
&
\subf{\includegraphics[width=60mm]{M3_200_2_}}
{iteration $200$}
\end{tabular}
\caption{The design process of the material at different iteration steps. \break \protect \newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \protect \newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \protect \newboxsymbol{yellow}{yellow} void.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{M3_200_2_}}
{}
&
\subf{\includegraphics[width=55mm]{img_macro3_2_}}
{}
\end{tabular}
\caption{On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-1$.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{Aux3_sqerror}}
{Evolution of the values of the objective }
&
\subf{\includegraphics[width=55mm]{Aux3_vol}}
{Evolution of the volume constraints}
\end{tabular}
\caption{Convergence history of objective function and the volume constraints.}
\label{Im:aux3}
\end{figure}
\subsection{Example 2}
The second structure to be optimized is a multilevel material that also attains an apparent Poisson ratio of $-1$. Every assumption remains the same as in the first example. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. The Poisson ratio of each material is set to $\nu=0.3$; however, this time we require that the volume constraints be set to $V^t_1=33\%$ and $V^t_3=1\%.$
\begin{table}[h]
\center
\begin{tabular}{c|ccc}
$ijkl$ & $1111$ & $1122$ & $2222$ \\
\hline
$\eta_{ijkl}$ & $1$ & $30$ & $1$ \\
$A^H_{ijkl}$ & $0.11$ & $-0.09$ & $0.12$ \\
$A^t_{ijkl}$ & $0.1$ & $-0.1$ & $0.1$
\end{tabular}
\caption{Values of weights, final homogenized coefficients and target coefficients}
\end{table}
Again, from {\sc figure} \ref{Im:aux4} we observe that the volume of the stiffer material does not adhere to the target volume. In this case the algorithm used roughly $15\%$ of the material with Young modulus $1.82$, while the volume of the weaker material more or less adhered to the target constraint.
\newpage
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=60mm]{M4_0_2_}}
{Initial shape}
&
\subf{\includegraphics[width=60mm]{M4_5_2_}}
{iteration $5$}
\\
\subf{\includegraphics[width=60mm]{M4_10_2_}}
{iteration $10$}
&
\subf{\includegraphics[width=60mm]{M4_50_2_}}
{iteration $50$}
\\
\subf{\includegraphics[width=60mm]{M4_100_2_}}
{iteration $100$}
&
\subf{\includegraphics[width=60mm]{M4_200_2_}}
{iteration $200$}
\end{tabular}
\caption{The design process of the material at different iteration steps. \break \protect \newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \protect \newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \protect \newboxsymbol{yellow}{yellow} void.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{M4_200_2_}}
{}
&
\subf{\includegraphics[width=55mm]{img_macro4_2_}}
{}
\end{tabular}
\caption{On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-1$.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{Aux4_sqerror}}
{Evolution of the values of the objective }
&
\subf{\includegraphics[width=55mm]{Aux4_vol}}
{Evolution of the volume constraints}
\end{tabular}
\caption{Convergence history of objective function and the volume constraints.}
\label{Im:aux4}
\end{figure}
\subsection{Example 3}
The third structure to be optimized is a multi-layer material with a target apparent Poisson ratio of $-0.5$. For this example we used an augmented Lagrangian to enforce the volume constraints. The Lagrange multiplier was updated the same way as before; however, the penalty parameter $\beta$ was updated every five iterations. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$ and the volume target constraints were set to $V^t_1=38.5\%$ and $V^t_3=9.65\%.$
\begin{table}[h]
\center
\begin{tabular}{c|ccc}
$ijkl$ & $1111$ & $1122$ & $2222$ \\
\hline
$\eta_{ijkl}$ & $1$ & $10$ & $1$ \\
$A^H_{ijkl}$ & $0.18$ & $-0.08$ & $0.18$ \\
$A^t_{ijkl}$ & $0.2$ & $-0.1$ & $0.2$
\end{tabular}
\caption{Values of weights, final homogenized coefficients and target coefficients}
\end{table}
\vspace{-0.5cm}
Again, just as in the previous two examples, we observe that the volume of the stiffer material does not adhere to the target volume, even though for this example an augmented Lagrangian was used. In this case the algorithm used roughly $20\%$ of the material with Young modulus $1.82$, while the volume of the weaker material more or less adhered to the target constraint.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=60mm]{M2_0}}
{Initial shape}
&
\subf{\includegraphics[width=60mm]{M2_5}}
{iteration $5$}
\\
\subf{\includegraphics[width=60mm]{M2_10}}
{iteration $10$}
&
\subf{\includegraphics[width=60mm]{M2_50}}
{iteration $50$}
\\
\subf{\includegraphics[width=60mm]{M2_100}}
{iteration $100$}
&
\subf{\includegraphics[width=60mm]{M2_200}}
{iteration $200$}
\end{tabular}
\caption{The design process of the material at different iteration steps. \break \protect \newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \protect \newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \protect \newboxsymbol{yellow}{yellow} void.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{M2_200}}
{}
&
\subf{\includegraphics[width=55mm]{img_macro2}}
{}
\end{tabular}
\caption{On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-0.5$.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{ConvHist2.png}}
{Evolution of the values of the objective }
&
\subf{\includegraphics[width=55mm]{vol2.png}}
{Evolution of the volume constraints}
\end{tabular}
\caption{Convergence history of the objective function and the volume constraints.}
\end{figure}
\subsection{Example 4}
The fourth structure to be optimized is a multilevel material that attains an apparent Poisson ratio of $-0.5$. An augmented Lagrangian was used to enforce the volume constraints for this example as well. The Lagrange multiplier was updated the same way as before, as was the penalty parameter $\beta$. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. The Poisson ratio of each material is set to $\nu=0.3$; however, this time we require that the volume constraints be set to $V^t_1=53\%$ and $V^t_3=7\%.$
\begin{table}[h]
\center
\begin{tabular}{c|ccc}
$ijkl$ & $1111$ & $1122$ & $2222$ \\
\hline
$\eta_{ijkl}$ & $1$ & $10$ & $1$ \\
$A^H_{ijkl}$ & $0.18$ & $-0.08$ & $0.18$ \\
$A^t_{ijkl}$ & $0.2$ & $-0.1$ & $0.2$
\end{tabular}
\caption{Values of weights, final homogenized coefficients and target coefficients}
\end{table}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=60.3mm]{M1_0}}
{Initial shape}
&
\subf{\includegraphics[width=60.3mm]{M1_5}}
{iteration $5$}
\\
\subf{\includegraphics[width=60.3mm]{M1_10}}
{iteration $10$}
&
\subf{\includegraphics[width=60.3mm]{M1_50}}
{iteration $50$}
\\
\subf{\includegraphics[width=60.3mm]{M1_100}}
{iteration $100$}
&
\subf{\includegraphics[width=60.3mm]{M1_200}}
{iteration $200$}
\end{tabular}
\caption{The design process of the material at different iteration steps. \break \protect \newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \protect \newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \protect \newboxsymbol{yellow}{yellow} void.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{M1_200}}
{}
&
\subf{\includegraphics[width=55mm]{img_macro1}}
{}
\end{tabular}
\caption{On the left we have the unit cell and on the right we have the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-0.5$.}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subf{\includegraphics[width=55mm]{ConvHist1.png}}
{Evolution of the values of the objective }
&
\subf{\includegraphics[width=55mm]{vol1.png}}
{Evolution of the volume constraints}
\end{tabular}
\caption{Convergence history of objective function and the volume constraints.}
\end{figure}
\section{Conclusions and Discussion}
The problem of an optimal multi-layer micro-structure is considered. We use inverse homogenization, the Hadamard shape derivative and a level set method to track boundary changes, within the context of the smooth interface, in the periodic unit cell. We produce several examples of auxetic micro-structures with different volume constraints as well as different ways of enforcing the aforementioned constraints. The multi-layer interpretation suggests a particular way to approach the 3D printing of the micro-structures. The magenta material is essentially the cyan material layered twice, producing a small extrusion, with the process repeated several times. This multi-layer approach has the added benefit that some of the contact among the material parts is eliminated, thus allowing the structure to be compressed further than if the material were in the same plane.
The algorithm used does not allow ``nucleations'' (see \cite{AJT}, \cite{WMW04}). Moreover, due to the non-uniqueness of the design, the numerical results depend on the initial guess. Furthermore, the volume constraints also play a role in the final form of the design.
The results in this work are in the process of being physically realized and tested both for polymer and metal structures. The additive manufacturing itself introduces further constraints into the design process which need to be accounted for in the algorithm if one wishes to produce composite structures.
\section*{Acknowledgments}
This research was initiated during the sabbatical stay of A.C. in the group of Prof. Chiara Daraio at ETH, under the mobility grant DGA-ERE (2015 60 0009). Funding for this research was provided by the grant \textit{``MechNanoTruss''}, Agence National pour la Recherche, France (ANR-15-CE29-0024-01). The authors would like to thank the group of Prof. Chiara Daraio for the fruitful discussions. The authors are indebted to Gr\'egoire Allaire and Georgios Michailidis for their help and fruitful discussions as well as to Pierre Rousseau who printed and tested the material in {\sc figure} \ref{fig:Clauu} \& {\sc figure \ref{fig:Rou}}.
\bigskip
\bibliographystyle{amsplain}
\section{Introduction}
Since the 1960-1970s, the understanding of dynamic
critical phenomena and physical properties
of itinerant spin-fluctuation systems
has been one of the main topics in the field of magnetism
in condensed matter physics
\cite{Doniach_PRL_1966, Berk_PRL_1966, Beal-Monod_PRL_1969,
Moriya_PRL_1970, Moriya_JPSJ_1973, Moriya_JPSJ_1995}.
This is because these questions lead to an understanding
not only of weak itinerant magnetism in $d$- and $f$-electron systems
but also of recently observed anomalous non-Fermi-liquid (NFL)
behaviors
and magnetically mediated Cooper instabilities
\cite{SpinFluctuation_Moriya, Moriya_RepProgPhys_2003}
caused by spin fluctuations near quantum-phase transitions (QPTs).
So far, it has been widely believed that both itinerant ferromagnetic (FM) and antiferromagnetic (AF) compounds usually have quantum-critical points (QCPs),
where a 2nd-order phase transition occurs at $T = 0$ upon tuning some physical parameter, such as pressure or atomic substitution.
The self-consistent-renormalized (SCR) theory by Moriya and his coworkers gives a theoretical basis for describing NFL behaviors of itinerant FM and AF metallic systems near QCPs
\cite{ Moriya_JPSJ_1995, SpinFluctuation_Moriya, Moriya_RepProgPhys_2003}.
Furthermore, critical phenomena around magnetic QCPs were investigated theoretically using the renormalization-group method by Hertz
\cite{Hertz_PRB_1976}, and Millis \cite{Millis_PRB_1993}.
Actually, some itinerant AF compounds obey the Moriya-Hertz-Millis theory for critical behaviors near QCPs \cite{Lohneysen_RevModPhys_2007}.
However, for FM quantum criticality, the situation is different.
Surprisingly,
it has been reported that an almost FM helimagnet, MnSi
\cite{Pfleiderer_PRB_1997, Pfleiderer_Nature_2001}, and several
ferromagnets, such as UGe$_{2}$
\cite{Pfleiderer_PRL_2002,
Taufour_PRL_2010}, and ZrZn$_{2}$
\cite{Uhlarz_PRL_2004, Kabeya_JPSJ_2012}, do not show the QCP at zero field but show a 1st-order phase transition
when $T_{\mathrm{C} }$ is suppressed by applying pressure.
To explain these behaviors,
specific attention has recently been given to new quantum treatments of the FM QPT;
for example,
Belitz and Kirkpatrick considered particle-hole excitations from a Fermi surface with a low frequency and a long wavelength (soft mode),
which couples to the order parameter
\cite{Belitz_PRL_1999, Belitz_PRL_2005}.
They showed that
a 2nd-order phase transition at high temperature changes to a 1st-order transition
below a \textit{tri-critical point} (TCP) with
1st-order \textit{wing} planes, which terminate
at zero temperature in a finite magnetic field, i.e. at a quantum-critical-end point (QCEP)
\cite{Belitz_PRL_1999, Belitz_PRL_2005}.
Previously,
it was also discussed that the TCP can emerge due solely to
thermal spin fluctuations
\cite{Yamada_PRB_1993, Yamada_PhysicaB_2007}
and to magneto-elastic coupling
\cite{Mineev_JPhysConfSer_2012, Gehring_EurophysLett_2008}.
So far, the quantum criticality around the QCEP with the metamagnetic
transition has been classified into the same criticality as the QCP for an Ising-type transition
\cite{Millis_PRL_2002}.
However,
there is no symmetry change around a QCEP, whereas
the symmetry of the ordered phase is clearly distinguished from
the paramagnetic (PM) phase for a QCP.
It has recently been
pointed out theoretically that the quantum criticality of the metamagnetic transition accompanied by a Fermi-surface change (Lifshitz transition) belongs to another universality class, which differs from that of other symmetry-breaking phase transitions
\cite{Yamaji_JPSJ_2007, Imada_JPhysC_2010}.
Also,
as unconventional superconductivity associated with FM order has been discovered only in uranium materials (UGe$_{2}$
\cite{Saxana_Nature_2000},
URhGe \cite{Aoki_Nature_2001},
and UCoGe \cite{Huy_PRL_2007}),
it is intriguing to study
the quantum criticality and the spin-fluctuation effects around the FM QPT for itinerant uranium compounds.
\begin{centering}
\begin{figure}[!htb]
\includegraphics[width=7.7cm]{Fig1a_URhAl_PSEPS.eps}
\includegraphics[width=7.7cm]{Fig1b_URhAl_PSEPS.eps}
\caption{ (Color online)
(a) Temperature dependence of resistance (vertically shifted) of URhAl (sample $\#1$) between 6 and 30 K, measured at zero field and high pressures, 4.15, 4.3, 4.5, 4.8, 4.9, 5.0, and 5.2 GPa.
The arrows indicate the Curie temperatures ($T_{\mathrm{C}}$) at each pressure. The dashed lines are guides to the eyes.
(b) Temperature dependence of resistivity of URhAl (sample $\#1$) below 4 K,
measured at zero field and high pressures, 3.75, 4.51, 5.23, 5.53, 6.03, 6.63, and 7.34 GPa.
}
\end{figure}
\end{centering}
Recently, a FM wing structure and a QCEP
have been reported for UCoAl,
which shows a 1st-order metamagnetic transition
at $\sim$ 0.7 T with a FM moment of $\sim$ 0.3 $\mu_{\mathrm{B} }/$U at low temperature
\cite{Andreev_SovPhys_1985, Matsuda_JPSJ_1999, Aoki_JPSJ_2011}.
This compound has a hexagonal ZrNiAl-type structure with space group $P\bar{6}2m$,
in which there is no inversion symmetry.
The uranium atoms form a quasi-Kagom\'{e} lattice; thus
magnetic frustration effects may be expected.
From high-pressure studies,
it is considered that in UCoAl a TCP exists at a negative pressure of $-$0.2 GPa
\cite{Mushnikov_PRB_1999}, and
the metamagnetic transition can be explained by the FM wings
\cite{Aoki_JPSJ_2011}.
Since the TCP in UCoAl is estimated to exist at a negative pressure,
it is not observable from hydrostatic-pressure measurements.
In order to understand the critical phenomena near the itinerant FM QPTs,
further experimental examples are necessary.
In this paper,
we report pressure-induced quantum criticality of a $5f$ itinerant
Ising-type FM compound, URhAl, which has the same crystal structure as that of UCoAl.
URhAl shows a FM transition at 25-27 K at ambient pressure
\cite{Veenhuizen_J_de_Physique_1988, Javorsky_PRB_2004, TristanCombier_Dr_Thesis_2014}, and
the FM moment ($\sim$ 0.9 $\mu_{\mathrm{B} }$/U) is strongly Ising-type with the magnetization-easy axis along $c$, similar to the Ising-type metamagnetism in UCoAl.
The atomic radius of Rh is larger than that of Co, so the
$5f$-electronic state in URhAl may correspond to that of UCoAl under negative pressure
\cite{Andreev_JAlloysComp_1999}.
Therefore, the high-pressure study of critical behaviors for URhAl can help
to understand the metamagnetism in UCoAl as well as the general problem of FM quantum criticality.
\section{Experimental Procedures}
Single crystals of URhAl were grown by the Czochralski pulling method in a tetra-arc furnace.
Resistivity measurements under high pressure were performed by using diamond-anvil cells
with an \textit{in situ} pressure-tuning device
\cite{Salce_RevSciInstrum_2000, Demuer_JLTP_2000}.
We measured the resistivity of samples
($\#1$ and $\#2$), which were smaller than $\sim$ 200 $\times$ 100 $\times$ 30 $\mu$m$^{3}$.
The sample geometry did not allow a precise
determination of the form factor of resistivity.
Therefore,
we extrapolated $A(P)$ linearly to 0 GPa, and
obtained absolute values of $\rho(T,H)$ and $A$
by normalizing the extrapolated value [$A(P = 0)$] to
the zero-pressure value ($A$ = 0.27 $\mu \Omega $ cm/K$^{2}$ for $J \perp c$),
since the pressure variation of the $A$-coefficient is almost linear for $P < $4.8 GPa.
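Concretely, this normalization amounts to a linear fit of the low-pressure $A(P)$ data followed by a global rescaling. A minimal sketch (the arrays {\tt P} and {\tt A\_raw} stand for the measured data in arbitrary units) is:
\begin{verbatim}
import numpy as np

def normalize_A(P, A_raw, A0_known=0.27, P_max=4.8):
    # fit A(P) linearly below P_max and rescale so that A(P=0) = A0_known
    low = P < P_max
    slope, intercept = np.polyfit(P[low], A_raw[low], 1)
    return A_raw * (A0_known / intercept)
\end{verbatim}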
The low-$T$ measurements were carried out for sample $\#1$
using a $^{3}$He-$^{4}$He dilution refrigerator
in temperatures down to 40 mK and in fields up to 7 T under high pressures up to 7.5 GPa.
Here, the magnetic field was applied almost along the
$c$-axis (easy-magnetization axis) and the current was applied perpendicular to the field direction.
The high-$T$ measurements under high pressure were performed at zero magnetic field using a $^{4}$He cryostat
for sample $\#1$ as well as $\#2$ to check the reproducibility.
As a pressure transmitting medium,
liquid argon was used,
and pressure was determined with the ruby fluorescence technique.
For high-$T$ measurements, the volume increase of the helium gas in the bellows of the pressure-tuning
device above liquid-helium temperature
may cause a slight change of the force applied to the pressure cell.
Consequently,
the determination of pressure is more precise for the low-$T$ measurements below $\sim$ 4 K than for the high-$T$ measurements
above $\sim$ 5 K.
\section{Results and Discussion}
In order to examine the pressure dependence of the Curie temperature of URhAl,
we first show the temperature dependence of the resistance at various pressures
(shifted vertically)
between 6 and 30 K in Fig. 1(a).
One can see the clear kink anomaly in the resistivity curves due to
the FM transition at the Curie temperature ($T_{\mathrm{C} }$),
as indicated by the arrows [Fig. 1(a)].
$T_{\mathrm{C} }$ shifts
to lower temperature with increasing pressure,
and the kink anomaly becomes
too broad to determine $T_{\mathrm{C} }$ for $P >$ 5.0 GPa.
Figure 1(b) shows results of resistivity measurements below 4 K
at high pressures from 3.75 to 7.34 GPa.
At 3.75 and 4.51 GPa, $T_{\mathrm{C}}$ is 19 and 17 K, respectively,
and URhAl is FM in the temperature range of Fig. 1(b) at these pressures.
The variation of resistivity $\rho(T)$ is small at low temperature in the FM state.
On the other hand,
from 5.2 to 7.3 GPa,
the variation of resistivity is
very large compared to that at 3.75 and 4.51 GPa.
Since we did not observe the kink anomaly in the resistivity due to the FM transition above 5.2 GPa,
URhAl appears to be PM in the high-pressure region above 5.2 GPa.
\begin{centering}
\begin{figure}[!htb]
\includegraphics[width=8.6cm]{Fig2_URhAl_R2015_PSEPS.eps}
\caption{ (Color online) $T$-$P$ phase diagram of URhAl at zero field for the samples $\#1$ and $\#2$.
The dashed lines are guides to the eyes.
The $A$-coefficient and the residual resistivity of the sample $\#1$ are also plotted.
The solid curve indicates $T^{*}(P) \propto (P -P_{\mathrm{c} })^{3/2}$, which is predicted by the spin-fluctuation theory for a 2nd-order FM QCP
\cite{Millis_PRB_1993, Flouquet_ArXiv_2005}.
}
\end{figure}
\end{centering}
Figure 2 shows the obtained $T$-$P$ phase diagram at zero field.
There is no significant sample dependence of $T_{\mathrm{C} }(P)$.
$T_{\mathrm{C} }(P)$ suddenly
disappears above 5.0 GPa.
Our results suggest that
the FM critical pressure exists at $P_{\mathrm{c}} \sim $ 5.2 GPa.
In Fig. 2, we also plot the $A$-coefficient and the residual resistivity $\rho_{0}$
as a function of pressure, where $\rho(T) = \rho_{0} + AT^2$.
The pressure variation of
the
$A$-coefficient is very small ($A \sim$ 0.4-0.5 $\mu \Omega $cm/K$^{2}$) for $P < $ 4.8 GPa,
whereas it shows a drastic increase above 5.0 GPa.
The $A$-coefficient reaches a maximum ($A \sim$ 9 $\mu \Omega$cm/K$^{2}$) at around 5.2-5.5 GPa,
suggesting a large enhancement of the density of states at Fermi energy [$D(\epsilon_{\mathrm{F} })$] of
itinerant electrons above $P_{\mathrm{c} }$.
Interestingly, the still-large value
of the $A$-coefficient ($A \sim $4 $\mu \Omega $cm/K$^{2}$) beyond $P_{\mathrm{c} }$
indicates that the large enhancement of $D(\epsilon_{\mathrm{F} })$
persists up to $\sim$ 7.5 GPa (Fig. 2).
The behavior of the residual resistivity, $\rho_{0}(P)$,
follows that of the $A$-coefficient, $A(P)$.
Below 4.8 GPa, $\rho_{0}(P)$ increases with increasing pressure almost linearly,
then suddenly decreases with increasing pressure above 4.9 GPa.
At 0 T
$\rho_{0}(P)$ shows a step-like behavior at $\sim$ 4.8 GPa slightly below $P_{\mathrm{c} } \sim $ 5.2 GPa.
We also plot $T^{*}$, the maximum temperature for the $T^{2}$-regime, in the 3rd panel of Fig. 2.
$T^{*}$ is about $\sim$ 2-3 K in the low-pressure region below $P_{\mathrm{c} }$,
but it suddenly decreases at $P_{\mathrm{c}}$, where
$T^{*}(P)$ shows a minimum ($T^{*} \sim $ 0.4 K), and then gradually
increases with increasing pressure.
In the FM spin-fluctuation framework with a 2nd-order QCP,
$T^{*}(P)$ and $A(P)$ are predicted to vary as $T^{*}(P) \propto (P - P_{\mathrm{c} })^{3/2}$ and $A(P) \propto (P - P_{\mathrm{c} })^{-1}$,
respectively, leading to $A \propto (1/T^{*})^{2/3}$;
in other words,
$A \times (T^{*})^{2/3}$ is constant
\cite{Millis_PRB_1993, Flouquet_ArXiv_2005}.
For URhAl,
we obtain $A \sim 8$ $\mu \Omega$cm/K$^{2}$ and $T^{*} \sim 0.4 $ K at $P_{\mathrm{c} }$, leading to $A \times (T^{*})^{2/3} \sim$ 4.3, and
$A \sim 3.5$ $\mu \Omega$cm/K$^{2}$ and $T^{*} \sim 1.5 $ K at 7.5 GPa, leading to $A \times (T^{*})^{2/3} \sim$ 4.6.
This rough estimation
suggests that the observed large $A$-coefficient emerges due to the FM spin-fluctuation effects.
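The quoted products are easily verified numerically (e.g.\ in Python):
\begin{verbatim}
for A, Tstar in [(8.0, 0.4), (3.5, 1.5)]:
    print(round(A * Tstar**(2.0/3.0), 1))   # -> 4.3 and 4.6
\end{verbatim}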
However,
we would like to point out the peculiarity of critical behavior of the FM QPT in URhAl;
as shown in the 3rd panel of Fig. 2,
$T^{*}(P) $ does not vary as $T^{*}(P) \propto (P - P_{\mathrm{c} })^{3/2}$ (the solid curve) and
does not go to zero as $P \rightarrow P_{\mathrm{c} }$.
Also, the fact that $T^{*}$ is finite at $P_{\mathrm{c} }$ conflicts with the presence of a 2nd-order QCP in URhAl.
\begin{table}
\caption{The $A$-coefficient of resistivity, $A$ [$\mu \Omega$cm/K$^2$], the electronic specific-heat
coefficient $\gamma$ [mJ/K$^{2}$mol], and the values of $A/\gamma^{2}$ [$\mu \Omega$ cm$ (\mathrm{mol K})^2/(\mathrm{mJ} )^2$]
for URhAl, UCoAl\cite{Aoki_JPSJ_2011}, and UGe$_{2}$\cite{Tateiwa_JPhysC_2001}.
Here, $A(0)$ is the value of $A$-coefficient at ambient pressure at zero field.
}
\begin{ruledtabular}
\def\arraystretch{1.3}
\begin{tabular}{lcrrrr}
& $P$ [GPa] & $A$ & $\gamma$ & $A/\gamma^{2}$ & $A/A(0)$ \\
\hline
URhAl & 0 (FM) & 0.27 & 75 & 4.8$\times$10$^{-5}$ & $-$ \\
& $P_{\mathrm{c} } \sim $5.2 & 8 & & & $30$ \\
\hline
UCoAl & 0 & 0.28 & 75 & 5.0$\times$10$^{-5}$ & $-$ \\
& 0.54 & 0.2 & & & 0.7 \\
& $P_{\mathrm{QCEP}}\sim$1.5, 7 T & 0.4 & & & 1.4 \\
\hline
UGe$_{2}$ & 0 & 0.007 & 30 & 7.8$\times$10$^{-6}$ & $-$ \\
& 1.3 & 0.1 & 110 & 8.3$\times$10$^{-6}$ & 14.3 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{centering}
\begin{figure}[!htb]
\begin{minipage}{4.2cm}
\includegraphics[width=4.2cm]{Fig3_0T_URhAl_PSEPS.eps}
\end{minipage}\hspace{0cm
\begin{minipage}{4.2cm}
\includegraphics[width=4.2cm]{Fig3_4T_URhAl_PSEPS.eps}
\end{minipage}
\begin{minipage}{4.2cm}
\includegraphics[width=4.2cm]{Fig3_2T_URhAl_PSEPS.eps}
\end{minipage}\hspace{0cm
\begin{minipage}{4.2cm}
\includegraphics[width=4.2cm]{Fig3_7T_URhAl_PSEPS.eps}
\end{minipage}
\includegraphics[width=8cm]{Fig3e_0T_URhAl_Enlarged_PSEPS.eps}
\caption{ (Color online) $\rho(T)$ vs $T^{2}$ plot of URhAl (sample $\#1$) at high pressures from 3.75 to 7.34 GPa in
(a) 0 T, (b) 2 T, (c) 4 T, and (d) 7 T.
(e) The enlarged figure of $\rho(T)$ curves for
0 T measured at 5.23, 6.03, and 7.34 GPa as a function of $T^2$ below 1 K.
The arrows indicate $T^{*}$ (Fig. 2), below which
the $T^2$-regime holds.
Here, $T^{*}$ at 7.34 GPa is about $\sim$ 1.1 K.
The solid lines are the results of fitting by $\rho(T) = \rho_{0} + AT^2$.
}
\end{figure}
\end{centering}
The maximum value of $A$ $\sim$ 9 $\mu \Omega$cm/K$^{2}$ in URhAl near $P_{\mathrm{c}}$
is quite large for uranium intermetallic compounds.
While the heavy-electron superconductor UBe$_{13}$ shows an
exceptionally large $A$-coefficient ($\sim$ 90-100 $\mu \Omega$cm/K$^{2}$)
\cite{Remenyi_JPhysique_1986, KodowakiWoods_SolidStateCom_58_1986},
many uranium compounds
show an $A$-coefficient of less than $\sim$ 1 $\mu \Omega$ cm/K$^{2}$
as summarized in the Kadowaki-Woods plot
\cite{KodowakiWoods_SolidStateCom_58_1986}.
Table I shows the $A$-coefficient, the electronic specific-heat coefficient ($\gamma$), and
the ratio $A/\gamma^{2}$ for URhAl \cite{TristanCombier_Dr_Thesis_2014},
UCoAl \cite{Aoki_JPSJ_2011}, and UGe$_{2}$ \cite{Tateiwa_JPhysC_2001}.
Here, $A/A(0)$ denotes the ratio of $A$ to the $A$-coefficient at 0 GPa and zero field, i.e. $A(0)$.
As for UCoAl, the $A$-coefficient is
$A \sim 0.28$ $\mu \Omega$ cm/K$^{2}$, and the electronic specific-heat coefficient is $\gamma \sim$ 75
mJ/K$^2$mol
\cite{Aoki_JPSJ_2011}.
The $A$-coefficient of UCoAl increases near the QCEP ($\sim$ 1.5 GPa, 7 T)
\cite{Aoki_JPSJ_2011},
but the enhancement of $A$ is not so large compared to
the pressure-induced large $A$-coefficient in URhAl.
Also, the $A$-coefficient of UGe$_{2}$ increases $\sim$14-fold under high pressure,
but the maximum value of $A$-coefficient is not so large ($\sim$ 0.1 $\mu \Omega$ cm/K$^{2}$)
at $\sim$1.3 GPa
\cite{Tateiwa_JPhysC_2001}.
On the other hand,
a large $A$-coefficient ($\sim$ 5 $\mu \Omega$cm/K$^{2}$) near the critical pressure
has been reported in an itinerant heavy-electron FM compound U$_{4}$Ru$_{7}$Ge$_{6}$
\cite{Hidaka_JPSJ_2011}.
The observed large $A$-coefficient in URhAl near $P_{\mathrm{c} }$
is comparable with the value observed in cerium heavy-electron compounds
such as CeCu$_{2}$Si$_{2}$
\cite{KodowakiWoods_SolidStateCom_58_1986, Holmes_PRB_2004}.
From the comparison with other heavy-electron materials using the Kadowaki-Woods relation,
the quantum critical region in URhAl
may be roughly described by strongly correlated heavy quasiparticles
with a large $D(\epsilon_{\mathrm{F}})$ caused by spin fluctuations.
However,
we should be careful about the
above discussion, since the value of $A/\gamma^2$ is not universal
but depends on the correlations of the system
\cite{EndoNote_KadowakiWoods, Miyake_SolidStateCom_1989, Morales_Thesis_2014}.
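The ratios $A/\gamma^{2}$ quoted in Table I follow directly from the listed values of $A$ and $\gamma$:
\begin{verbatim}
for name, A, gamma in [("URhAl", 0.27, 75), ("UCoAl", 0.28, 75),
                       ("UGe2", 0.007, 30)]:
    print(name, A / gamma**2)   # ~4.8e-5, ~5.0e-5, ~7.8e-6
\end{verbatim}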
\begin{centering}
\begin{figure}[!htb]
\includegraphics[width=7.8cm]{Fig4_URhAl_A_and_Rho0_PSEPS.eps}
\caption{(Color online) Pressure dependence of the $A$-coefficient and the residual resistivity $\rho_{0}$ of URhAl (sample $\#1$) in zero and magnetic fields, obtained from
the expression, $\rho(T) = \rho_0 + AT^2$.
The dashed lines are guides to the eyes.
}
\end{figure}
\end{centering}
Next, we shall see low-$T$ $\rho(T)$ curves in zero field and magnetic fields.
Figures 3(a), (b), (c), and (d) show
the resistivity $\rho(T)$ vs $T^{2}$ under high pressures from
3.75 to 7.34 GPa
for 0, 2, 4, and 7 T, respectively.
For zero field,
at lower pressures than 4.8 GPa,
we find $\rho(T) = \rho_{0} + AT^{2}$ behavior, as predicted for an
itinerant FM state at low temperature ($T$ $\ll$ $T_{\mathrm{C} }$)
\cite{Ueda_JPSJ_1975}.
On the other hand,
the resistivity shows a remarkable
variation with
a large increase of the slope ($A$-coefficient) at 0 T
between 4.82 and 4.93 GPa.
Around 5-6 GPa,
the temperature region where the resistivity can obey the expression
$\rho(T) = \rho_{0} + A T^2$ is much smaller than at
3.75 and 4.82 GPa.
In Fig. 3(e), we show the enlarged figure of $\rho(T)$ curves for 0 T
measured at 5.23, 6.03, and 7.34 GPa as a function of $T^2$. The arrows indicate the temperature $T^{*}$ (Fig. 2), below which the $T^2$-regime holds.
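The fits shown as solid lines in Fig. 3(e) amount to a two-parameter least-squares problem restricted to $T < T^{*}$; a minimal sketch using {\tt scipy} (the arrays {\tt T\_data}, {\tt rho\_data} and the cutoff {\tt T\_star} are placeholders for the measured data) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fermi_liquid(T, rho0, A):
    return rho0 + A * T**2

mask = T_data < T_star                      # restrict to the T^2 regime
(rho0, A), _ = curve_fit(fermi_liquid, T_data[mask], rho_data[mask])
\end{verbatim}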
For an applied field of 2 T, the large $A$-coefficient
is suppressed at 4.93 and 5.23 GPa, and $\rho(T)$ shows the $T^{2}$-temperature dependence, similar to that at 3.75 and 4.82 GPa.
On the other hand, the slope of $\rho(T)$ becomes large at around 6-6.6 GPa at 2 T.
For 4 T, and 7 T,
the variation of $\rho(T)$ at high-pressure region above $\sim$ 6.6 GPa becomes larger than for pressures below $\sim$ 6.0 GPa.
In Fig. 4, we summarize the pressure dependence of the $A$-coefficient and the residual resistivity $\rho_{0}$ of URhAl (sample $\#1$), obtained from
the expression, $\rho(T) = \rho_0 + AT^2$,
in zero and magnetic fields.
With increasing magnetic field,
the divergence of the $A$-coefficient is suppressed, and the step-like behavior
of $\rho_{0}$ becomes broad (Fig. 4).
Since the behaviors of the $A$-coefficient and $\rho_{0}$ above 5.0 GPa
differ evidently from those below $\sim$ 5.0 GPa in the FM state,
it is considered that URhAl
is \textit{not} in the FM state any more above 5.0 GPa.
This is consistent with the fact
that the anomaly due to the FM transition disappears above 5.0 GPa
(Fig. 2).
The FM transition at $T_{\mathrm{C}}(P)$ possibly becomes 1st-order,
and then $T_{\mathrm{C} }(P)$ suddenly collapses above 5.0 GPa.
\begin{figure}[!htb]
\includegraphics[width=7.8cm]{Fig5_URhAl_A_and_Rho0vsH_PSEPS.eps}
\caption{ (Color online) Magnetic-field dependence of the $A$-coefficient and the residual resistivity $\rho_{0}$ of URhAl (sample $\#1$), obtained from
the expression, $\rho(T) = \rho_0 + AT^2$.
}
\includegraphics[width=7.5cm]{Fig6_RvsH_avec_A_and_Rho0_URhAl_PSEPS.eps}
\caption{(Color online) Magnetic field dependence of resistivity of URhAl for the sample $\#1$, measured at 5.53 GPa.
The dashed lines are guides to the eyes.
}
\end{figure}
The experimentally observed large enhancement of the $A$-coefficient
suggests a large mass enhancement due to spin-fluctuation effects
and/or a variation of Fermi surface.
Generally, large fluctuation effects occur
for a 2nd-order phase transition with a divergence of the correlation length of the magnetic order.
In contrast, such an effect would not be expected for a 1st-order phase transition.
Nevertheless, if the transition near $P_{\mathrm{c} }$ is only weakly 1st-order and the drop of the FM moment at the
FM-PM phase transition is very small, the critical behavior becomes similar to that of a QCP,
and
then a large maximum in the $A$-coefficient may emerge due to the increase of
correlation length as $T \rightarrow 0$.
Figure 5 shows the $A$-coefficient and the residual resistivity as a function of magnetic field.
At 4.5 GPa, the $A(H)$ value is very small, and $A(H)$ monotonically decreases with increasing field.
At 5.0 GPa, the $A$-coefficient begins to increase in zero and low fields,
and $A(H)$ is suddenly suppressed by a magnetic field of $\sim$1 T.
At around 5.2-5.5 GPa, the $A$-coefficient is very large in zero field and
remains large up to 1-1.5 T, then rapidly decreases at high fields (1.5-2 T).
At 6.0 GPa, the decrease of $A(H)$ occurs at higher field near 3 T.
At 6.9 and 7.3 GPa,
the value of $A$-coefficient at 0 T becomes about half of the $A$ value at 5.5 GPa,
and after showing a slight maximum at around 2 T, it monotonically decreases with increasing field.
At 6.9 and 7.3 GPa, $\rho_{0}(H)$ increases with increasing field,
and shows a smooth maximum at around 3 T.
To search for the FM wing structure,
we look at the magnetic field dependence of the resistivity [$\rho(H)$] under high pressure.
Figure 6 shows $\rho(H)$ under 5.53 GPa at 2.5, 1.75, 1, 0.75, and 0.1 K
with $A(H)$ and $\rho_{0}(H)$ obtained from the temperature dependence of $\rho(T) = \rho_{0} + AT^2$.
The $\rho(H)$ curve bends at around 2.5-3 T for each temperature.
We define the anomaly at $H_{\mathrm{m} }$ as $T \rightarrow 0$ from $A(H)$ and $\rho_{0}(H)$, as indicated by the arrows in Fig. 6.
In the low-field region below $H_{\mathrm{m}}$,
the $A$-coefficient is very large compared to the high-field region above $H_{\mathrm{m} }$.
On the other hand,
the high-field region above $H_{\mathrm{m} }$
corresponds to the FM side,
where the resistivity obeys $\rho(T) = \rho_{0} + AT^2$ with the small $A$-coefficient.
\begin{figure}[!htb]
\includegraphics[width=8.6cm]{Fig7a_PH_PhaseDiagram_URhAl_PSEPS.eps}
\includegraphics[width=8.6cm]{Fig7b_ContourAcoef_URhAl_PSEPS.eps}
\caption{(Color online) (a) Plot of the observed anomalies in $A(H)$ and $\rho(H)$ of URhAl on the
$P$-$H$ phase diagram for the sample $\#1$.
The dashed line indicates the result of linear fitting.
(b)
Contour plot of the $A$-coefficient of resistivity of URhAl on the $P$-$H$ phase diagram, obtained from the expression $\rho(T) = \rho_0 + AT^2$
for the sample $\#1$.
}
\end{figure}
The anomaly in $\rho(H)$ curves supports the presence of FM wing structure in URhAl.
In Fig. 7(a), we plot the $P$-$H$ phase diagram for $H_{\mathrm{m} }$
obtained from the magnetic field dependences of $A$-coefficient and $\rho_{0}$.
We obtain the relation $\mu_{0} dH_{\mathrm{m}}/dP \sim$ 3.5 $\pm$0.1 T/GPa.
Then
we estimate the TCP at $P_{\mathrm{TCP} } \sim$ 4.8-4.9 GPa, where $H_{\mathrm{m}} \rightarrow 0$,
using the value of $\mu_{0} dH_{\mathrm{m}}/dP $.
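The estimate of $P_{\mathrm{TCP}}$ is simply the zero crossing of a linear fit of $H_{\mathrm{m}}(P)$; schematically (with placeholder data arrays {\tt P\_data}, {\tt Hm\_data}):
\begin{verbatim}
import numpy as np

slope, intercept = np.polyfit(P_data, Hm_data, 1)  # slope ~ 3.5 T/GPa
P_TCP = -intercept / slope                         # pressure where H_m -> 0
\end{verbatim}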
In
UCoAl, which has the same crystal structure as URhAl,
a clear 1st-order metamagnetic transition is seen at $H_{\mathrm{m}}$ due to
the FM wing.
According to high pressure studies
\cite{Aoki_JPSJ_2011},
the metamagnetic transition field of UCoAl varies as
$\mu_{0} dH_{\mathrm{m}}/dP \sim$ 3.5 T/GPa
\cite{Aoki_JPSJ_2011},
which is very similar to that of URhAl.
We summarize the values of $P_{\mathrm{TCP}} $ and $\mu_{0} dH_{\mathrm{m}}/dP $ in Table II.
\begin{figure}[!htb]
\includegraphics[width=8cm]{Fig8_URhAl_FMwing_PSEPS.eps}
\caption{(Color online) Schematic $T$-$P$-$H$ phase diagram of the FM wings in URhAl [See also the top panel of Fig. 2 and Fig. 7(a)].
}
\end{figure}
\begin{table}[!htb]
\caption{The values of $P_{\mathrm{TCP}}$ and
the slope of the FM wing, i.e. $\mu_{0} dH_{\mathrm{m}}/dP$
for URhAl and UCoAl \cite{Aoki_JPSJ_2011}.
}
\begin{ruledtabular}
\begin{tabular}{lcr}
& $P_{\mathrm{TCP}} $ [GPa] & $\mu_{0} dH_{\mathrm{m} }/dP$ [T/GPa] \\
\hline
URhAl & 4.8-4.9 & 3.5 $\pm$0.1 \\
UCoAl & $-$0.2 & 3.5 \\
\end{tabular}
\end{ruledtabular}
\end{table}
When we cross the FM wing, we expect a 1st-order PM-FM phase
transition at $H_{\mathrm{m} }$.
At the 1st-order transition in UCoAl,
the $A$-coefficient shows a step-like behavior as a function of magnetic field
\cite{Aoki_JPSJ_2011}.
On the other hand,
the step-like behavior in the $A$-coefficient of URhAl
near $H_{\mathrm{m}}$ is rather broad.
This may indicate that the transition at $H_{\mathrm{m} }$ is weakly 1st-order in URhAl.
However, the sample quality can be the origin of the broadness of the transition.
We shall compare $A(H)$ for URhAl with that for UCoAl.
For UCoAl, a step-like behavior in $A(H)$ at $H_{\mathrm{m}}$
is seen
in the low-pressure region below 0.54 GPa.
For 0.54 GPa,
the difference of pressure from the TCP ($\sim -$0.2 GPa ) is
estimated to be $\delta P \equiv P - P_{\mathrm{TCP} } \sim $ 0.74 GPa.
Since we estimate $P_{\mathrm{TCP} } \sim$ 4.8 GPa for URhAl in the present work,
the pressure of 0.54 GPa in UCoAl
may correspond to a pressure of
$4.8 + \delta P \sim$ 5.54 GPa in URhAl.
At $\sim$ 5.5 GPa, we obtain $H_{\mathrm{m}} \sim 2.5$ T for URhAl from Fig. 7(a),
which is close to the value of $H_{\mathrm{m} }$ for UCoAl at 0.54 GPa.
In contrast,
the enhancement of the $A$-coefficient in URhAl
is much larger than the value in UCoAl (see Table I),
suggesting that the change in the density of states (mass enhancement and/or a change of the Fermi surface)
near the QPT is more drastic in URhAl than in UCoAl.
In URhAl, the $A$-coefficient below $H_{\mathrm{m} }$ at $\sim$ 5.0 GPa
is about
20 times larger than the $A$-coefficient above $H_{\mathrm{m} }$ (Fig. 5).
In UCoAl, on the other hand,
the $A$-coefficient in the PM state below $H_{\mathrm{m} }$
is only about 2 times larger than
the $A$-coefficient above $H_{\mathrm{m} }$ at 0 and 0.54 GPa
\cite{Aoki_JPSJ_2011}.
The difference of the $A$-coefficient between URhAl and UCoAl may be related to that of the magnetic ordered moments;
the FM ordered moment in URhAl
is 3 times larger ($\sim$ 0.9 $\mu_{\mathrm{B} }$/U )
than the magnetic-field-induced FM moment ($\sim$ 0.3 $\mu_{\mathrm{B} }$/U) in UCoAl.
As shown theoretically by Yamada \cite{Yamada_PRB_1993},
thermally fluctuating magnetic moments enhance the scattering of quasiparticles in the PM state,
and may cause such a large $A$-coefficient.
At present, the ordered moment near $P_{\mathrm{c}}$ has not yet been studied for URhAl,
so further studies are necessary to clarify this point.
The behavior of the $A$-coefficient changes
in association with the wing structure.
To see the relationship between the enhancement of $A$-coefficient and the FM wing,
it is intriguing to see the contour plot of $A$-coefficient on the $P$-$H$ phase diagram.
Figure 7(b) shows the contour plot of the $A$-coefficient on the $P$-$H$
phase diagram, obtained from $\rho(T) = \rho_{0} + AT^2$.
The red-colored region in this plot shows the enhancement of the $A$-coefficient,
whereas the purple-colored region shows the small $A$-coefficient.
The $A$-coefficient is largest at around 5.2-5.5 GPa in zero field.
With increasing pressure and magnetic field,
the $A$-coefficient is suppressed.
The large enhancement of $A$-coefficient occurs outside of the FM wing (red region).
In Fig. 8, we plot the schematic $T$-$P$-$H$ phase diagram of the FM wings in URhAl
[See also the top panel of Fig. 2 and Fig. 7(a)].
The theoretically suggested 1st-order wing planes terminate at the QCEP
at zero temperature in a finite field.
In UCoAl,
the magnetic-field dependence of the $A(H)$ coefficient shows a sharp maximum at the PM-FM transition
near the QCEP ($P \sim$ 1.5 GPa in $H \sim$7 T)
\cite{Aoki_JPSJ_2011, TristanCombier_Dr_Thesis_2014}.
Such a field enhancement
in $A(H)$ was not observed in the present work,
suggesting that the
QCEP of URhAl may exist above $\sim$7 T.
Alternatively, the interplay between
spin fluctuation and Fermi-surface instability can
lead to complex phenomena as discussed later.
\begin{figure}[!htb]
\includegraphics[width=8.6cm]{Fig9_n_vs_P_GPa_URhAl_PSEPS.eps}
\caption{ (Color online) Pressure dependence
of the exponent ($n$) of the resistivity [$\rho(T) = \rho'_0 + A'T^n$] for URhAl (sample $\#1$) at 0, 2, and 7 T.
The dashed lines are guides to the eyes.
}
\end{figure}
Since, close to the critical pressure, the temperature range of the Fermi-liquid regime [$\rho(T) = \rho_0 + AT^2$] is found
to be very small ($T^{*} \sim $ 0.4 K),
an alternative description is
to focus on the
NFL behavior.
We analyzed the resistivity data with the expression $\rho(T) = \rho'_0 + A'T^n$.
Here, the maximum temperature for the NFL regime $\rho(T) = \rho'_0 + A'T^n$
is $T^{**} \sim 2.2$ K at $\sim$ $P_{\mathrm{c} }$.
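Extracting the exponent amounts to a three-parameter least-squares fit below $T^{**}$; schematically (with placeholder arrays restricted to $T < T^{**}$):
\begin{verbatim}
from scipy.optimize import curve_fit

def power_law(T, rho0_p, A_p, n):
    return rho0_p + A_p * T**n

(rho0_p, A_p, n), _ = curve_fit(power_law, T_data, rho_data,
                                p0=(rho_data.min(), 1.0, 2.0))
\end{verbatim}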
Figure 9 shows
the
pressure dependence
of the exponent of the resistivity ($n$).
As seen in
Fig. 9, at 0 T
the exponent $n$ is about 2 below 4.8 GPa,
whereas $n(P)$ decreases with a step-like variation to a minimum at around $\sim $5.0-6.0 GPa.
At around $\sim$5.0-6.0 GPa, the values of $n$ are about 1.6-1.8.
At 2 T and 7 T,
the step-like behavior and the minimum of the exponent $n(P)$ shift to higher pressure regions.
At 7 T, the dip of $n(P)$ at around $\sim$ 7.0 GPa becomes shallow and broad,
and the value of $n(P)$ becomes slightly larger than at 0 T and 2 T.
The exponent $n$ (Fig. 9)
varies as a function of pressure in correspondence with
the behavior of the $A$-coefficient (Fig. 4).
Above $P_{\mathrm{c} }$, the exponent $n$ is nearly 1.6-1.7, which is close to the value ($n = $ 5/3)
suggested by the SCR theory for three-dimensional FM
spin fluctuations near a QCP.
However, this NFL behavior seems to conflict with the presence of the FM 1st-order wing structure
in URhAl.
A weakly 1st-order nature at the FM wing may
explain the NFL behavior in the resistivity.
Here, we note that
the critical behavior around $P_{\mathrm{c}}$ in URhAl is
different from the theoretical suggestion for a 2nd-order FM QCP.
It is suggested that
the ratio of $T^{*}$ for the Fermi-liquid regime to
$T^{**}$ for the NFL regime
is enhanced as $T^{**}/T^{*} \propto (P - P_{\mathrm{c} })^{-3/4}$
on approaching $P_{\mathrm{c}}$ for a 2nd-order FM QCP
\cite{Millis_PRB_1993, Flouquet_ArXiv_2005}.
For URhAl, $T^{**}$ does not change clearly, i.e., $ T^{**} \sim 2.1 \pm 0.2$ K.
In addition,
$T^{*}(P)$ is almost linear (Fig. 2).
These experimental
results for URhAl again suggest
that the spin-fluctuation
effects cannot be explained simply by
a 2nd-order FM QCP.
In URhAl, the NFL properties (see Fig. 9) are
observed far above $P_{\mathrm{c}}$,
and the pressure domain of the enhancement of the $A$-coefficient
appears quite asymmetric around $P_{\mathrm{c} }$.
Furthermore, the enhancement of the $A$-coefficient extends over a large $P$ window (5.5-7 GPa).
Then the key question is whether
the switch from the FM state to the PM state simply occurs
at $P_{\mathrm{c}}$ or whether there is a signature of a new pressure-induced phase intermediate between the FM and the PM phases.
Recently
it has been shown theoretically that
a new phase may be stabilized
near the FM-PM QPT
\cite{ Maslov_PRB_2009, Chubukov_PRL_2009, Conduit_PRL_2009,
Karahasanovic_PRB_2012, Thomson_PRB_2013, Pedder_PRB_2013};
if there are quantum fluctuations in terms of fermionic
particle-hole excitations on the Fermi surface,
some deformations of the Fermi surface enhance the phase space available for
the quantum fluctuations, and
such a Fermi-surface instability induces
another type of ordering, which
lowers the free energy of the system
\cite{Conduit_PRL_2009,
Karahasanovic_PRB_2012, Thomson_PRB_2013, Pedder_PRB_2013}.
It has been shown that
two cases of new ordering are possible near the FM-PM QPT:
(i) spiral magnetic phase, and (ii) spin-nematic phase
\cite{Karahasanovic_PRB_2012}.
The energy scale of the spin-nematic phase transition is
almost 10 times smaller than that of the spiral magnetic phase \cite{Karahasanovic_PRB_2012};
therefore, a spiral magnetic phase
might be more likely to occur.
A spiral magnetic phase emerges \textit{below} the TCP
as an \textit{ intermediate state}
between the uniform FM and the PM states \cite{Karahasanovic_PRB_2012, Thomson_PRB_2013}.
The transition between the uniform FM and the spiral magnetic states
occurs as a Lifshitz transition,
whereas the transition between
the spiral magnetic state and the PM state
is of 1st-order
\cite{Karahasanovic_PRB_2012}.
Interestingly, an anisotropic dispersion of the electron band changes
the nature of the spiral-to-PM phase transition,
and the transition possibly becomes 2nd-order \cite{Karahasanovic_PRB_2012}.
The possible presence of
the intermediate new phase might explain why
we cannot see a clear 1st-order transition between the FM and the PM states
above $P_{\mathrm{c} }$,
in contrast to the case of UCoAl.
In order to explore such a new phase around the QPT,
further experimental studies are required.
In particular,
measurements of thermodynamic quantities and
observation of Fermi-surface change through $P_{\mathrm{c} }$ for URhAl,
though experimentally challenging,
would deepen the understanding
of the Fermi-surface instability and the nature of the FM QPT.
In URhAl, no superconductivity was observed
under high pressure up to 7.5 GPa at temperatures down to $\sim$ 0.1 K.
At present we cannot rule out
the possibility that
the sample quality affects the emergence of superconductivity and the superconducting transition temperature
is too low to be detected.
However, even if superconductivity does not occur,
the FM system would resolve the instability due to quantum fluctuations at $T$ $\rightarrow$ 0
through the occurrence of a new phase associated with the Fermi-surface instability,
as mentioned just above.
It is interesting to consider why the intermediate phase possibly appears in URhAl.
One may consider that
the lack of a local inversion center and/or the quasi-Kagom\'{e} lattice in the ZrNiAl-type structure can induce the intermediate phase.
However,
the ZrNiAl-type hexagonal symmetry structure ($P\bar{6}2m$: $D_{3h}^{3}$) does not lead
to a Dzyaloshinskii-Moriya interaction
\cite{Kataoka_JPSJ_1981}, which could induce a helimagnetic order \cite{Dzyaloshinskii_JETP_1964}.
Such an intermediate phase has not yet been observed in UCoAl, which has the same ZrNiAl-type structure, around the PM-FM phase transition induced by uniaxial stress along the $c$-axis
\cite{Ishii_PhysicaB_2003, Karube_JPSJ_2014, YShimizu_JPSJ_2015}.
The relationship between the crystal structure and the occurrence of the intermediate phase remains an open question. The authors of Ref. \cite{Karahasanovic_PRB_2012}
suggest that the intermediate phase may generally occur even for a simple spherical Fermi surface due to the Fermi-surface instability accompanying the quantum fluctuations (particle-hole excitations on the Fermi surface).
As seen in Fig. 9,
the NFL behavior of the resistivity is remarkable in URhAl far above $P_{\mathrm{c}}$.
Such strong quantum-fluctuation effects near the FM-PM QPT may induce the intermediate phase in this material.
\section{Conclusion}
The quantum criticality of
the three-dimensional-Ising-type itinerant FM compound URhAl was studied
by low-temperature resistivity measurements under high pressure up to 7.5 GPa.
The Curie temperature is suppressed with increasing pressure, and suddenly disappears above 5.0 GPa.
Our resistivity results suggest a FM critical pressure of $\sim$ 5.2 GPa.
Above 5.2 GPa, the ground state is not FM, and
the $A$-coefficient is largely enhanced
at around 5.2-5.5 GPa in the zero- and low-field region.
The characteristics of the temperature and the magnetic-field dependences of the
resistivity may be consistent with the presence of a FM wing structure
with an estimated TCP at 4.8-4.9 GPa.
At least with the present quality of the crystal, the 1st-order phase transition appears weak.
The resistivity shows the NFL behavior above 5.0 GPa up to 7.5 GPa.
URhAl may be a material in which
the switch from the FM state to the PM state occurs through an intermediate phase
around the QPT.
\section*{ACKNOWLEDGMENT}
We would like to thank S. Kambe, G. Knebel,
K. Ishida, Y. Tada, K. Hattori,
S. Hoshino, and Y. Ikeda for valuable
discussions and helpful comments.
This work was supported by ERC starting grant New Heavy Fermion,
KAKENHI, REIMEI, ICC-IMR, and ANR project PRINCESS.
\section{Probing QCD Matter with Heavy-Ion Collisions}
\label{sec:intro}
Heavy-ion collision experiments at relativistic energies create extreme states of strongly interacting matter and enable their investigation in the laboratory.
Figure~\ref{fig:phasediagram} illustrates the conjectured phases of strongly interacting matter and their boundaries in a diagram of temperature versus baryon chemical potential~\cite{fukushima11}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/phasediagram}
\caption{Sketch of the phase diagram for strongly interacting matter (taken from~\cite{fukushima11}).}
\label{fig:phasediagram}
\end{center}
\end{figure}
Experiments at LHC and top RHIC energies explore the QCD phase diagram in the transition region between Quark-Gluon-Plasma (QGP) and hadron gas at small baryon chemical potentials, where matter is produced with almost equal numbers of particles and antiparticles.
This region resembles the situation in the early universe.
While cooling, the system hadronizes, and finally freezes out chemically at a temperature around 160 MeV~\cite{becattini13,stachel14}.
This temperature coincides with the transition temperature predicted by first-principle Lattice QCD calculations~\cite{borsanyi10,basasov12}, which find a smooth crossover from partonic to hadronic matter~\cite{aoki06}.
Lattice QCD calculations for finite baryon chemical potential are still suffering from the so-called sign problem, which makes the standard Monte-Carlo methods no longer applicable, and are not yet able to make firm predictions on possible phase transitions at large baryon chemical potentials.
On the other hand, effective-model calculations predict structures in the QCD phase diagram at large baryon chemical potentials, like a critical endpoint followed by a first-order phase transition~\cite{kashiwa08,luecker13,tawfik15}.
The development of a mixed phase of hadrons and quarks is e.g.\ predicted by a non-local \mbox{3-flavor} Nambu--Jona-Lasinio model calculation of a neutron star for densities around $5 \rho_0$, with a transition to pure quark matter above $8 \rho_0$.
This calculation is able to reproduce a two-solar mass neutron star~\cite{orsaria14}.
Moreover, a quarkyonic phase is predicted which has properties of both high density baryonic matter and deconfined and chirally symmetric quark matter~\cite{mclerran07,mclerran09}.
Other scenarios discussed for matter at extreme densities include colour-flavour locking~\cite{alford99} and skyrmion matter~\cite{lee03}.
The experimental discovery of landmarks like
a first-order phase transition or a critical point
in the QCD phase diagram would be a major breakthrough in our understanding of the strong interaction in the non-perturbative regime, with fundamental consequences for our knowledge on the structure of neutron star cores, chiral symmetry restoration, and the origin of hadron masses.
Heavy-ion collisions at moderate beam energies are well suited to provide high net-baryon densities.
This is illustrated in Fig.~\ref{fig:trajectories}, where the excitation energy density in the center of the collision zone is shown as a function of the net-baryon density for central Au+Au collisions at beam energies of 5$A$ and 10$A$~GeV as predicted by several transport models and a hydrodynamic calculation~\cite{arsene07,friman11}.
The excitation energy is defined as
\mbox{$ \epsilon^*(t) = \epsilon(t) - m_N \rho(t)$}
with $\epsilon(t)$ the energy density and $m_N \rho(t)$ the mass density.
The solid lines correspond to the time evolution of the system; they turn in a clockwise sense, and the dots on the curves labelled UrQMD and QGSM correspond to steps of 1 fm/$c$ in collision time.
The dashed lines enclose the expected region of phase coexistence~\cite{toneev03}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/trajectory05}
\includegraphics[width=1.0\linewidth]{figures/trajectory10}
\caption{
Time evolution of the excitation energy density $\epsilon^*$ versus the net-baryon density $\rho$ in the center of the fireball
for central Au+Au collisions at beam energies of 5$A$~GeV (upper panel) and 10$A$~GeV (lower panel) calculated by various transport codes and a hydrodynamic model~\cite{arsene07,friman11}.
The excitation energy density is defined as $ \epsilon^* = \epsilon - m_N \rho$ (see text).
The full symbols on the curves for UrQMD and QGSM indicate time steps of 1~fm/$c$.
The dashed lines enclose the regions of phase coexistence~\cite{toneev03}.
The yellow zone denotes post-freezeout streaming.
}
\label{fig:trajectories}
\end{center}
\end{figure}
According to these model calculations, the density in the center of the fireball exceeds 6 times saturation density $\rho_0$ at a beam energy of 5$A$~GeV, and at 10$A$~GeV even a density above $8 \rho_0$ is reached.
At such densities, the nucleons are expected to fuse and form large quark bags.
The calculations predict that the dense fireball spends a comparatively long time within the phase coexistence region at energies around 5$A$~GeV and goes beyond this region with increasing beam energy.
High-density matter as produced in nuclear collisions at FAIR energies also opens the possibility to search for multi-strange hypernuclei.
Experimental data on such objects are very scarce; detailed studies of their production will give information on the hyperon-hyperon interaction which is essential for the understanding of cores of neutron stars.
Models predict the FAIR energy range to be particularly well suited for such studies.
This also holds for the search for exotic composite objects carrying multiple units of strangeness like kaonic clusters or multi-strange di-baryons, the existence of which is still an open issue in high-energy physics.
In conclusion, the systematic and comprehensive exploration of the QCD phase diagram in the region of high net-baryon densities using heavy-ion collisions at SIS100 beam energies (up to 11$A$~GeV for Au ions), measuring diagnostic probes never observed before in this energy regime, will have a large discovery potential.
In particular, the CBM experiment operated at intermediate beam energies will be able to address the following fundamental questions:
\begin{itemize}
\item{
What is the equation of state of QCD matter at high net-baryon densities, and what are the relevant degrees of freedom at these densities?
Is there a phase transition from hadronic to quark-gluon matter, or a region of phase coexistence?
Do exotic QCD phases exist?
}
\item{
To what extent are the properties of hadrons modified in dense baryonic matter?
Are we able to find indications of chiral symmetry restoration?
}
\item{
How far can we extend the chart of nuclei towards the third (strange) dimension by producing single and double strange hypernuclei?
Does strange matter exist in the form of heavy multi-strange objects?
}
\end{itemize}
The focus of the CBM experiment at FAIR is to study observables related to the physics cases mentioned above.
The equation-of-state can be studied by measuring (i) the collective flow of identified particles, which is generated by the density gradient of the early fireball,
and (ii) the yields of multi-strange hyperons, which are preferentially produced in the dense phase of the fireball via sequential collisions.
A phase transition from hadronic to partonic matter is expected to cause the following effects:
(i) multi-strange hyperons are driven into equilibrium at the phase boundary;
(ii) in case of a first-order phase transition, the excitation function of the fireball temperature -- measured by the invariant-mass spectra of lepton pairs -- should reflect a caloric curve.
A possible critical point should produce event-by-event fluctuations of conserved quantities such as strangeness, charge, and baryon number.
Modifications of hadron properties in dense baryonic matter and the onset of chiral symmetry restoration affect the invariant-mass spectra of di-leptons.
The measurement of (double-$\Lambda$) hyper-nuclei will provide information on the hyperon-nucleon and hyperon-hyperon interaction which will shed light on the hyperon puzzle in neutron stars.
A more detailed discussion of the relation between the various physics cases and observables is presented in section~\ref{sec:probes} of this article, together with a review of the current data situation and the discovery potential of the CBM experiment.
Before, we give a brief overview of the experimental landscape in section~\ref{sec:experiments} and of the CBM detector in section~\ref{sec:cbm}.
A general introduction into theoretical concepts and experimental programmes devoted to the exploration of the QCD phase diagram at high net-baryon densities can be found in the CBM Physics Book~\cite{friman11}.
\section{Experiments exploring high net-baryon densities}
\label{sec:experiments}
Most of the experimental observables which are sensitive to the properties of dense nuclear matter, like the flow of identified (anti-) particles, higher moments of event-by-event multiplicity distributions of conserved quantities, multi-strange (anti-) hyperons, di-leptons, and particles containing charm quarks are extremely statistics-demanding.
Therefore, the key feature of successful experiments will be rate capability in order to measure these observables with high precision.
The experimental challenge is to combine a large-acceptance fast detector and a high-speed data read-out system with high-luminosity beams.
The QCD phase diagram at large baryon chemical potentials has been explored by pioneering heavy-ion experiments performed at AGS in Brookhaven and at low CERN-SPS beam energies.
Because of the detector technologies available at the time, these measurements were restricted to abundantly produced hadrons and to di-electron spectra with strongly limited statistics.
At the CERN-SPS, the NA61/SHINE experiment continues to search for the first-order phase transition by measuring hadrons using light and medium heavy ion beams~\cite{laszlo07}.
This detector setup is limited to reaction rates of about 80~Hz.
The existing HADES detector at SIS18 measures hadrons and electron pairs in heavy-ion collision systems with reaction rates up to 20~kHz.
The STAR collaboration at RHIC has performed a beam energy scan from top energies down to $\sqrt{s_{NN}} = 7.7$~GeV, and plans to improve the statistical significance of the data in a second beam energy scan~\cite{starbes}.
At beam energies above $\sqrt{s_{NN}} = 20$~GeV, the reaction rates of STAR are limited to about 800~Hz by the TPC read-out, and drop down to a few Hz at beam energies below $\sqrt{s_{NN}} = 8$~GeV because of the decreasing beam luminosity provided by the RHIC accelerator.
At the Joint Institute for Nuclear Research (JINR) in Dubna, the fixed-target experiment BM@N is being developed at the Nuclotron to study heavy-ion collisions at gold beam energies up to about 4$A$~GeV.
Moreover, at JINR the Nuclotron-based Ion Collider fAcility NICA with the Multi-Purpose Detector (MPD) is under construction~\cite{nica}.
The NICA collider is designed to run at a maximum luminosity of $L = 10^{27} \mathrm{cm^{-2}s^{-1}}$ at collision energies between $\sqrt{s_{NN}} = 8$ and 11 GeV corresponding to a reaction rate of 6~kHz for minimum bias Au+Au collisions.
At $\sqrt{s_{NN}} = 5$~GeV, the interaction rate at NICA decreases to about 100~Hz because of the lower luminosity.
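For orientation, such rates follow directly from the luminosity and the total interaction cross section; assuming $\sigma_{\mathrm{int}} \approx 6$~b for Au+Au (a value we insert here for illustration only), one obtains
\[
R = L \, \sigma_{\mathrm{int}} \approx 10^{27}\,\mathrm{cm^{-2}s^{-1}} \times 6 \times 10^{-24}\,\mathrm{cm^{2}} = 6~\mathrm{kHz} .
\]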
The Facility for Antiproton and Ion Research (FAIR), currently under construction in Darmstadt, will offer the opportunity to study nuclear collisions at extreme interaction rates.
The FAIR Modularized Start Version (MSV) comprises the SIS100 ring which provides energies for gold beams up to 11$A$~GeV ($\sqrt{s_{NN}} = 4.9$~GeV), for Z=N nuclei up to 15$A$~GeV, and for protons up to 30~GeV.
In order to reach higher energies, a booster ring is needed.
The space for this second accelerator is already foreseen in the ring tunnel building.
The rate capabilities of existing and planned heavy-ion experiments are presented in Fig.~\ref{fig:experiments} as a function of center-of-mass energy.
The research program on dense QCD matter at FAIR will be performed by the experiments CBM and HADES.
The HADES detector, with its large polar angle acceptance ranging from 18 to 85 degrees~\cite{agakishiev09}, is well suited for reference measurements with proton beams and heavy ion collision systems with moderate particle multiplicities, i.e.\ Ni+Ni or Ag+Ag collisions at the lowest SIS100 energies.
Electron pairs and hadrons including multi-strange hyperons can be reconstructed with HADES.
The CBM detector~\cite{friman11} is a fixed target experiment designed to run at extremely high interaction rates up to
10~MHz for selected observables such as J/$\psi$, at 1-5~MHz for multi-strange hyperons and dileptons, and at 100~kHz without any online event selection.
The CBM detector system will accept polar emission angles between 2.5 and 25 degrees in order to cover mid-rapidity and the forward rapidity hemisphere for symmetric collision systems over the FAIR energy range.
The combination of high-intensity beams with a high-rate detector system and sufficient beam time provides worldwide unique conditions for a comprehensive study of QCD matter at the highest net-baryon densities achievable in the laboratory.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/experiments}
\caption{Interaction rates achieved by existing and planned heavy-ion experiments as a function of center-of-mass energy~\cite{nica,montag14,michel11,odyniec13}. ``STAR FXT'' denotes the fixed-target operation of STAR.
High-rate experiments are also proposed at JPARC~\cite{sako15} and at SPS~\cite{dainese16}, but these are still in a conceptual stage.}
\label{fig:experiments}
\end{center}
\end{figure}
\section{The CBM experiment at FAIR}
\label{sec:cbm}
As discussed above, the SIS100 energy range is well suited to produce and to investigate strongly interacting matter at densities like those expected to exist in the cores of neutron stars.
This opens the perspective to study the fundamental questions raised above with a dedicated experiment which is ideally suited to measure rare diagnostic probes of dense matter with high accuracy.
In the following we discuss the detector requirements and highlights of the physics program.
The CBM detector has been designed as a multi-purpose device which will be capable to measure hadrons, electrons and muons in elementary nucleon and heavy-ion collisions over the full FAIR beam energy range.
Therefore, no major adjustments have to be made to optimize the experiment for SIS100 beams.
A staging scenario is, however, foreseen for some detector systems and for the DAQ system.
In order to perform high-precision multi-differential measurements of rare probes the experiment should run at event rates of 100~kHz up to 10~MHz for several months per year.
For weakly decaying particles like hyperons or D mesons, no simple trigger signal can be generated.
Instead, the full events have to be reconstructed, and the decay topology has to be identified online by fast algorithms running on a high-performance computing farm hosted by the GSI GreenIT cube.
To utilize maximum rates, the data acquisition is based on self-triggered front-end electronics.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/cbmsetup}
\caption{
The CBM experimental setup together with the HADES detector (left).
Each setup has its own target.
During HADES operation, the beam will be stopped by a beam dump in front of CBM.
The CBM components are described in the text.
For muon measurements, the RICH will be replaced by the MuCh, which is shown in a parking position to the right of the beam axis.
}
\label{fig:cbmsetup}
\end{center}
\end{figure*}
The CBM experimental setup is depicted in Fig.~\ref{fig:cbmsetup} and comprises the following components:
\begin{itemize}
\item a superconducting dipole magnet,
\item a Micro Vertex Detector (MVD) consisting of four layers of silicon monolithic active pixel sensors,
\item a Silicon Tracking System (STS) based on double-sided silicon micro-strip sensors arranged in eight stations inside
the dipole magnet,
\item a Time-of-Flight wall (TOF) based on Multi-Gap Resistive Plate Chambers (MRPC) with low-resistivity glass,
\item a Ring Imaging Cherenkov (RICH) detector comprising a $\mathrm{CO_2}$ radiator and a UV photon detector realized with multi-anode photomultipliers for electron identification,
\item a Transition Radiation Detector (TRD) for pion suppression, particle tracking, and identification using specific energy loss,
\item a Muon Chamber (MuCh) system for muon identification consisting of a set of gaseous micro-pattern chambers sandwiched between hadron absorber plates made of graphite and iron,
\item an Electromagnetic Calorimeter (ECAL) for the measurement of photons,
\item a Projectile Spectator Detector (PSD) for event characterization,
\item a First-Level-Event-Selection (FLES) system for online event reconstruction and selection.
\end{itemize}
The preparation of the experiment is well advanced. The Technical Design Reports (TDRs) of the Dipole Magnet, the STS, the TOF wall, the RICH, the MuCh and the PSD have been approved~\cite{magnetTdr,stsTdr,tofTdr,richTdr,muchTdr,psdTdr}, and the TDRs of the MVD, the TRD and the FLES are in progress. According to the schedule, the CBM experiment will be ready to take the first beams from SIS100 in 2024.
\section{Probes of high-density QCD matter}
\label{sec:probes}
The theoretical understanding of the properties of strongly interacting matter at large net-baryon densities is still poor.
The scientific progress in this field is mainly driven by new experimental results.
Owing to the complexity of the final state of heavy-ion reactions, the extraction of significant information requires systematic measurements like excitation functions, system size dependencies and multi-differential phase-space distributions of identified particles, including flow, event-by-event fluctuations, and other types of correlations.
This task is even more challenging for high-statistics measurements of rare and penetrating probes.
In the following we discuss the most promising observables in some detail.
\subsection{Collectivity}
The collective flow of hadrons is driven by the pressure gradient created in the early fireball and provides information on the dense phase of the collision
(for an overview, see~\cite{herrmann99,oeschler10} and references therein).
Flow effects can be characterized by the azimuthal distribution of the emitted particles
$dN/d\phi = C \left( 1 + 2 v_1 \cos(\phi) + 2 v_2 \cos(2 \phi) + \ldots \right)$,
where $\phi$ is the azimuthal angle relative to the reaction plane, and the coefficients $v_1$ and $v_2$ represent the strengths of the directed (in-plane) and the elliptic flow, respectively.
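With this normalization (note the conventional factor of two in front of each harmonic), the coefficients are event averages of the particle angles relative to the reaction plane,
\[
v_n = \left\langle \cos\left[ n \left( \phi - \Psi_{\mathrm{RP}} \right) \right] \right\rangle ,
\]
where $\Psi_{\mathrm{RP}}$ denotes the reaction-plane angle; since $\Psi_{\mathrm{RP}}$ is not directly observable, it has to be estimated from the data, e.g.\ with event-plane or cumulant methods.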
At SIS100 energies, the proton flow has been measured between 2$A$ and 10.7$A$~GeV in Au+Au collisions~\cite{pinkenburg99}.
These data have been compared to the results of transport model calculations in order to extract information on the nuclear matter equation of state (EOS)~\cite{danielewicz02}.
Moreover, a large flow of kaons has been observed in Au+Au collisions at 6$A$~GeV~\cite{chung00}.
At SIS18 (1$A$ - 2$A$~GeV), exploratory measurements of kaon flow have been performed by the FOPI and KaoS experiments~\cite{shin98,zinhuk14}.
Recently, the STAR collaboration has measured the directed flow for protons and antiprotons~\cite{adamczyk14} and the elliptic flow for particles and antiparticles~\cite{starbes} in Au+Au collisions at energies from
$\sqrt{s_{NN}} = 62.4$~GeV down to $\sqrt{s_{NN}} = 7.7$~GeV.
Figure~\ref{fig:starv1} shows the measured slope of the directed flow of antiprotons, protons and net-protons together with the results of an UrQMD calculation.
The directed flow is sensitive to the details of the phase transition and to the softening of the QCD matter EOS, and is therefore an important observable for clarifying the role of partonic degrees of freedom~\cite{steinheimer14}.
Transport models such as UrQMD are challenged to reproduce details of the energy dependence and magnitude of the $v_1$ slope measured by STAR.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/starv1}
\caption{Directed flow slope $\mathrm{d}v_1/\mathrm{d}y$ near mid-rapidity versus beam energy for intermediate-centrality Au+Au. Panels (a), (b) and (c) depict measured antiprotons, protons, and net protons, respectively, along with UrQMD
calculations (grey bands)~\cite{adamczyk14}.}
\label{fig:starv1}
\end{center}
\end{figure}
Figure~\ref{fig:starv2} depicts the measured difference in elliptic flow $v_2$ for particles and antiparticles as a function of center-of-mass energy.
The difference increases with increasing particle mass towards lower collision energies.
This $v_2$ splitting was attributed to effects of the mean-field potential in both the partonic and the hadronic phase~\cite{xu14}.
On the other hand, it was argued that the baryon chemical potential is the determining factor for the observed particle type dependent splitting in $v_2$~\cite{hatta16}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/starv2}
\caption{
The difference in elliptic flow between particles and their corresponding antiparticles (see legend) as a function of
$\sqrt{s_{NN}}$ for 10\% - 40\% central Au+Au collisions as measured by the STAR collaboration at RHIC~\cite{starbes}.
The systematic errors are indicated by the brackets.
The dashed lines in the plot are fits with a power-law.
}
\label{fig:starv2}
\end{center}
\end{figure}
At the lowest collision energy of the RHIC beam-energy scan, which is close to the SIS100 energy range,
$v_2$ measurements are only available for pions, protons, antiprotons, charged kaons, and (with poor precision)
$\Lambda$/$\overline{\Lambda}$.
The CBM experiment will therefore dramatically improve the data situation by measuring the flow of identified particles in the FAIR energy range, including multi-strange hyperons and di-leptons.
Of particular interest is the flow of particles not significantly suffering from rescattering like $\Omega$ hyperons or $\phi$ mesons, for which no experimental data exist.
These measurements will significantly contribute to our understanding of the QCD matter equation-of-state at neutron star core densities.
\subsection{Event-by-event fluctuations}
Event-by-event fluctuations of conserved quantities such as baryon number, strangeness and electrical charge can be related to the thermodynamical susceptibilities and hence provide insight into the properties of matter created in high-energy nuclear collisions.
Lattice QCD calculations suggest that higher moments of these distributions are more sensitive to the phase structure of the hot and dense matter created in such collisions.
Non-Gaussian moments (cumulants) of these fluctuations are expected to be sensitive to the proximity of the critical point since they are proportional to powers of the correlation length, with increasing sensitivity for higher-order moments.
Measurements of event-by-event fluctuations have been performed by the NA49, PHENIX and STAR collaborations in order to search for the QCD critical point~\cite{alt09,anticic15-1,anticic15-2,mitchell15,adare16,adamczyk15}.
Recent results from STAR are shown in Fig.~\ref{fig:fluctuations}, which depicts the volume-independent cumulant ratio $\kappa \sigma^2$ (excess kurtosis times squared standard deviation) of the net-proton multiplicity distribution as a function of the collision energy, measured in Au+Au collisions~\cite{luo14,thaeder16}.
In the absence of a critical point, this quantity is found to be constant as a function of collision energy in various model calculations~\cite{karsch11,skokov13,garg13,luo10}.
The presence of a critical point is expected to lead to a non-monotonic behaviour of the $\kappa \sigma^2$ observable~\cite{stephanov11,chen15}.
For the most central collisions, the STAR-BES data exhibit a deviation from unity at the lowest measured energy, as expected for critical behaviour.
These results clearly call for a high-precision measurement of higher-order fluctuations at lower beam energies in order to search for the peak in $\kappa \sigma^2$.
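For reference, the cumulants entering this observable are defined, for the event-by-event net-proton number $N$ with $\delta N = N - \langle N \rangle$, as
\[
C_2 = \langle (\delta N)^2 \rangle = \sigma^2 , \qquad
C_4 = \langle (\delta N)^4 \rangle - 3 \langle (\delta N)^2 \rangle^2 ,
\]
so that the volume dependence largely cancels in the ratio $\kappa \sigma^2 = C_4 / C_2$, which equals unity for Poissonian (more precisely, Skellam) fluctuations.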
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/fluct1}
\end{center}
\begin{center}
\caption{
Energy dependence of the product $\kappa \sigma^2$ (excess kurtosis times variance) of the net-proton multiplicity distribution (yellow circles) for top 0-5\% central Au+Au collisions.
The Poisson expectation is denoted as dotted line at \mbox{$\kappa \sigma^2$ = 1}~\cite{luo14,thaeder16}.
}
\label{fig:fluctuations}
\end{center}
\end{figure}
To date, no higher-order event-by-event fluctuations have been measured at SIS100 energies.
The CBM experiment will, for the first time, perform a high-precision study of higher-order fluctuations at various beam energies in order to search for the elusive QCD critical point in the high net-baryon density region:
\mbox{ $\sqrt{s_{NN}} = 2.7$ -- 4.9~GeV }
corresponding to \mbox{$\mu_B \simeq 800$ -- 500~MeV}.
As recently pointed out, large clusters of nucleons might be important for the critical behaviour in the high net-baryon density region~\cite{bzdak16}. In addition, the density fluctuations arising from criticality can also be accessed via measurements of the yields of light nuclei such as deuterons, assuming coalescence to be the production mechanism.
Precise measurements of the energy dependence of light-nuclei production will further aid and complement the critical-point searches in the high baryon-density region at FAIR.
\subsection{Strangeness}
Particles containing strange quarks are important probes of the excited medium created in heavy-ion collisions~\cite{koch86,gazdzicki99,tomasik16}.
At top SPS energy strange hadrons, including $\Omega$ and $\overline{\Omega}$, appear to be produced in chemical
equilibrium~\cite{andronic10}.
The equilibration of, in particular, $\Omega$ baryons could not be understood in terms of hadronic two-body relaxation processes within the limited lifetime of the fireball.
It was thus taken as strong indication that the system had undergone a transition from a partonic phase to the hadronic final state, with the equilibration being driven by multi-body collisions in the high particle density regime near the phase boundary~\cite{pbm04}.
Agreement of the $\Omega$ baryon yield with thermal model calculations was found also at 40$A$~GeV in Pb+Pb collisions at the SPS~\cite{andronic09}, although the data statistics is rather poor.
In the AGS (SIS100) energy range, only about 300 $\Xi^-$ hyperons have been measured in Au+Au collisions at 6$A$~GeV~\cite{chung03}.
Figure~\ref{fig:hadesyields} depicts the yield of hadrons measured in Ar + KCl collisions at an energy of 1.76$A$~GeV together with the result of a statistical model calculation~\cite{agakishiev09b}.
The measured yield of $\Xi^-$ hyperons exceeds the model prediction by about a factor of 20, indicating that the $\Xi^-$ is far from chemical equilibrium.
High-precision measurements of excitation functions of multi-strange hyperons in A+A collisions with different mass numbers A at SIS100 energies will make it possible to study the degree of equilibration of the fireball, and, hence, open the possibility to find a signal for the onset of deconfinement in QCD matter at high net-baryon densities.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/hadesyields}
\caption{
Yield of hadrons measured in Ar + KCl collisions at an energy of 1.76$A$~GeV by the HADES
collaboration (full symbols).
The horizontal bars represent a fit by a thermo-statistical model~\cite{agakishiev09b}.
}
\label{fig:hadesyields}
\end{center}
\end{figure}
According to hadronic transport models, which do not feature a partonic phase, multi-strange (anti-)hyperons are produced in sequential collisions involving kaons and lambdas, and, therefore, are sensitive to the density in the fireball.
This sensitivity is largest at lower beam energies close to or even below the production threshold in elementary collisions, and is expected to shed light on the compressibility of nuclear matter.
The CBM experiment will open a new era of multi-differential precision measurements of strange hadrons including multi-strange (anti-) hyperons.
The expected particle yields are sufficient to study with excellent statistical significance the production and propagation of heavy strange and anti-strange baryons up to $\Omega^+$ in dense nuclear matter.
Also excited hyperon states can be identified.
Moreover, it will be possible to study hyperon-nucleon and hyperon-hyperon correlations in order to explore the role of hyperons in neutron stars, which is of utmost importance with respect to the difficulty to reconcile the measured masses of neutron stars with the presence of hyperons in their interiors, the so-called hyperon puzzle~\cite{demorest10}.
\subsection{Lepton pairs}
Di-leptons emitted in collisions of heavy ions offer the unique opportunity to investigate the microscopic properties of
strongly interacting matter~\cite{mclerran85,weldon90}.
Virtual photons are radiated off during the whole time evolution of a heavy-ion collision.
Once produced, they decouple from the collision zone and materialize as muon or electron pairs.
Hence, leptonic decay channels offer the possibility to look into the fireball and to probe the hadronic currents of strongly interacting systems in a state of high temperature and density.
For example, the low-mass continuum in the invariant mass spectrum of lepton pairs
($M < 1 \, \mathrm{GeV}/c^2$) probes the in-medium $\rho$ spectral function as this meson saturates, according to vector meson dominance, the hadronic current in a hadron resonance gas~\cite{gale91}.
Moreover, the excess yield of lepton pairs in this mass range is sensitive to both the temperature of the created matter and its lifetime (or, more precisely, its space-time extension).
This observable is expected to be a measure of the fireball lifetime and to be sensitive to chiral symmetry restoration~\cite{hohler14}.
The slope of the dilepton invariant mass distribution between 1 and 2.5~GeV/$c^2$ directly reflects the average temperature of the fireball~\cite{rapp16}.
This measurement would also provide indications for the onset of deconfinement and the location of the critical endpoint.
The flow of lepton pairs as a function of their invariant mass would make it possible to disentangle radiation from the early partonic phase and from the late
hadronic phase~\cite{chatterjee07,deng11,mohanty12,gale15}.
No di-lepton data have been measured in heavy-ion collisions at beam energies between 2$A$ and 40$A$~GeV.
The CBM experiment will perform pioneering multi-differential measurements of lepton pairs over the whole range of invariant masses emitted from a hot and dense fireball.
According to model calculations, various processes will contribute to the measured yield as shown in Fig.~\ref{fig:dielectrons}~\cite{rapp99}.
The thermal radiation includes a broadened in-medium $\rho$ meson~\cite{friman93,chanfray93,leupold98,chanfray96,oset02},
radiation from the QGP~\cite{ding16}, and dileptons from multi-pion
annihilation~\cite{hohler14,rapp16}.
The latter reflects $\rho$-$a_1$ chiral mixing and therefore provides a direct link to chiral symmetry restoration.
The experimental challenges are the very low signal cross sections, decay probabilities of the order of $10^{-4}$, and the high combinatorial background.
According to simulations, it will be possible to identify di-leptons in the relevant invariant mass regions with a signal-to-background ratio of at least S/B = 1/100.
In this case one needs about 10000 signal pairs in order to determine the yield with a statistical accuracy of 10\%.
The expected signal yield in 10 weeks of running the CBM experiment is higher by a factor of 100 -- 1000, depending on beam energy.
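The quoted signal requirement follows from simple counting statistics: assuming that the combinatorial background $B$ itself is determined with negligible uncertainty (e.g.\ from event mixing), the statistical error of the extracted signal $S = (S+B) - B$ is $\sqrt{S+B}$, and for $S/B = 1/100$ a relative accuracy of 10\% requires
\[
\frac{\delta S}{S} = \frac{\sqrt{S+B}}{S} \approx \frac{\sqrt{101\,S}}{S} = 0.1
\quad \Rightarrow \quad S \approx 10^4 .
\]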
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/dielectrons.png}
\caption{
Invariant-mass spectrum of $e^+e^-$ pairs radiated from a central Au+Au collision at 20$A$ GeV.
The solid red curve shows the contribution of the thermal radiation which includes in-medium $\rho$, $\omega$, 4-$\pi$ spectral functions and QGP spectrum calculated using the many-body approach of~\cite{rapp99}.
The freeze-out hadron cocktail (solid grey curve) is calculated using the Pluto event generator~\cite{froehlich07} and includes two-body and Dalitz decays of $\pi^0$, $\eta$, $\omega$, and $\phi$.
Contributions of Drell-Yan (green solid curve) and correlated open charm (solid violet curve) have been simulated based on~\cite{bhaduri14}.
}
\label{fig:dielectrons}
\end{center}
\end{figure}
A very important part of the CBM research program will be the high-precision measurement of the di-lepton invariant mass distribution between 1 and 2.5~GeV/$c^2$ for different beam energies.
With respect to top SPS, RHIC and LHC energies, the contribution of di-leptons from Drell-Yan processes or correlated charm decays, which also populate this mass region, are dramatically reduced at a beam energy of 20$A$ GeV as demonstrated in Fig.~\ref{fig:dielectrons} (note that at SIS100 energies these processes will contribute even less).
This gives direct access to the fireball temperature and to the contribution from $\rho$-$a_1$ chiral mixing.
The precise measurement of the energy dependence of the spectral slope opens the unique possibility to measure the caloric curve, which would be the first direct experimental signature for phase coexistence in high-density nuclear matter.
The excitation function of the fireball temperature $T$ extracted from the intermediate dilepton mass range, as calculated within the coarse-graining approach~\cite{seck15}, is shown in Fig.~\ref{fig:temperature} (red dotted curve).
The dashed violet curve in Fig.~\ref{fig:temperature} shows a speculated shape of $T$ as function of collision energy, where the temperature saturates over a broad energy range.
The flattening (plateau) of the caloric curve would clearly indicate a first-order phase transition, similar to the one presented as evidence for the liquid-gas phase transition in nuclear matter~\cite{agostino05}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/temperature}
\caption{
Excitation function of the fireball temperature $T$ extracted from intermediate dilepton mass distributions as calculated with a coarse-graining approach (dotted red curve)~\cite{seck15}.
The dashed violet curve corresponds to a speculated shape with phase transition occurring in the SIS100 energy range.
The black triangle corresponds to the temperature as measured by the NA60 collaboration at SPS~\cite{specht10}.
}
\label{fig:temperature}
\end{center}
\end{figure}
In order to extract the continuum di-lepton signals, the physical and combinatorial background of lepton pairs has to be precisely determined, which is notoriously difficult.
Since the background sources of electrons and muons are fundamentally different, independent measurements in both the di-electron and in the di-muon channel are decisive for the understanding of the systematic errors.
\subsection{Charm}
Particles containing charm quarks are expected to be created in the very first stage of the reaction, and, therefore, offer the possibility to probe the degrees of freedom of the medium over the entire collision history~\cite{averbeck13}.
Depending on their interaction with the medium, the charm and anti-charm quarks hadronize into D mesons, charmed baryons, or charmonium.
The suppression of charmonium due to colour screening of the heavy quark potential in the deconfined phase has been the first predicted signature for quark-gluon plasma formation~\cite{matsui86}.
Charmonium suppression was first observed in central Pb+Pb collisions at 158$A$~GeV~\cite{abreu97}, and then also found in experiments at RHIC~\cite{adare07} and LHC~\cite{abelev12}.
No data on open and hidden charm production in heavy-ion collisions are available at beam energies below 158$A$~GeV. Moreover, the interpretation of existing data is complicated by the lack of knowledge of the interactions between charmed particles and the cold hadronic medium~\cite{kharzeev95}.
With CBM at SIS100, charm production will be studied for the first time at beam energies close to production threshold.
At these energies, the formation time of charmonium is small compared to the lifetime of the reaction system.
CBM is thus uniquely suited to study the interactions between fully formed J/$\psi$ and the dense medium with appropriate counting statistics and systematics.
Systematic measurements of charmonium in p+A collisions with varying target mass number A at proton energies up to 30~GeV will shed light on the charmonium interaction with cold nuclear matter and constitute an important baseline for measurements in nuclear collisions.
Moreover, the simultaneous measurement of open charm will give access to the basically unknown charm production cross section at or near the kinematic threshold.
Based on simulations with the HSD event generator~\cite{cassing01}, the yield of D mesons and charmonium expected in p+A collisions at SIS100 energies after a run of 10 weeks varies between $10^4$ and $10^6$, depending on proton energy, and is sufficient to perform a multi-differential analysis.
CBM will also extend the measurement of the J/$\psi$ as a probe of the hot medium to lower energies.
At SIS100, charmonium will be measured in collisions of symmetric nuclei up to 15$A$~GeV and, even more challenging, below threshold in Au+Au collisions at 10$A$~GeV.
Model predictions of the J/$\psi$ multiplicity in this energy range vary widely.
Taking the prediction of the HSD model~\cite{cassing01}, the yield obtained in one week of running at an interaction rate of 10~MHz would be about 300 J/$\psi$ for central Au+Au collisions at 10$A$~GeV, and about 600 J/$\psi$ for central Ni+Ni collisions at 15$A$~GeV.
In the latter case, also open charm production can be studied.
However, because of the rate limitations of the MVD which is needed to select the D meson decay vertex, the measurement will be performed at a rate of 300~kHz.
As a result, the expected yield in central Ni+Ni collisions at 15$A$~GeV will be about 30 D mesons per week.
This would be sufficient for an analysis of charmonium propagation and absorption in dense baryonic matter based on the ratio of hidden to open charm.
\subsection{Hypernuclei and strange objects}
Thermal model calculations predict the production of single and double hypernuclei in heavy-ion collisions~\cite{andronic11}.
The results of these calculations are shown in Fig.~\ref{fig:hypernuclei} demonstrating that the excitation function of hypernucleus production exhibits its maximum in the SIS100 energy range.
This is due to the superposition of two effects: the increase of light nuclei production with decreasing beam energy, and the increase of hyperon production with increasing beam energy.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.0\linewidth]{figures/hypernuclei}
\caption{
Energy dependence of hypernuclei yields at midrapidity for $10^6$ central collisions as calculated with a thermal model.
The predicted yields of $^3$He and $^4$He nuclei are included for comparison~\cite{andronic11}.
}
\label{fig:hypernuclei}
\end{center}
\end{figure}
The CBM experiment at SIS100 will measure hydrogen and helium hypernuclei in large quantities.
Moreover, the experiment has a substantial discovery potential for light double-$\Lambda$ hypernuclei.
According to Fig.~\ref{fig:hypernuclei}, in 1 million central Au+Au collisions the hypernuclei $\prescript{5}{\Lambda\Lambda}{\mathrm{H}}$ and $\prescript{6}{\Lambda\Lambda}{\mathrm{He}}$ will be produced at a beam energy around 10$A$~GeV with a yield of about 5 and 0.1, respectively.
Assuming a reaction rate of $10^6$ central events/s, a branching ratio of 10\% for two sequential weak decays, and an efficiency of 1\%, one would expect to measure within one week about 3000 $\prescript{5}{\Lambda\Lambda}{\mathrm{H}}$ and 60 $\prescript{6}{\Lambda\Lambda}{\mathrm{He}}$, respectively.
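The first of these numbers is obtained as
\[
N_{\mathrm{week}} \approx 5 \times 10^{-6} \times 10^{6}~\mathrm{s^{-1}} \times 0.1 \times 0.01 \times 6 \times 10^{5}~\mathrm{s} \approx 3 \times 10^{3} ,
\]
with the factors being, in order, the yield per event, the central event rate, the branching ratio, the efficiency, and the number of seconds in one week; the analogous estimate for $\prescript{6}{\Lambda\Lambda}{\mathrm{He}}$ uses a yield of $10^{-7}$ per event.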
Such measurements would represent a breakthrough in hypernucleus physics, as up to now only very few double-$\Lambda$ hypernuclei events have been found~\cite{ahn13}.
The discovery of (double-) $\Lambda$ hypernuclei and the determination of their lifetimes will provide information on the hyperon-nucleon and hyperon-hyperon interactions, which are essential ingredients for the understanding of the nuclear matter equation-of-state at high densities, and, hence, of the structure of neutron stars~\cite{botvina14}.
According to a coupled transport-hydro-dynamics model, the high baryon densities created in heavy-ion collisions at FAIR energies favour the distillation of strangeness~\cite{stoecker09}.
The model predicts the production of hypernuclei, strange di-baryons, and multi-strange short-lived objects.
These predictions open the exciting perspective to explore the formation of composite objects with multiple strangeness in heavy-ion collisions at SIS100 energies.
\section{Experiments with CBM detectors in ``FAIR Phase 0''}
\label{sec:phase0}
The start version of the CBM experiment will be ready to take the first beam from SIS100 in the year 2024.
However, several detector and software components will be ready earlier.
Therefore, it is planned to install some of these components in existing heavy-ion experiments at other laboratories.
The benefits of these efforts are manifold: The additional detectors will improve the performance of the experiments, the CBM detectors and their readout chain will be commissioned and calibrated,
the reconstruction and analysis software will be tested and advanced on real data,
and members of the CBM collaboration will participate in data taking and analysis, thereby maintaining experience in performing physics measurements and educating the next generation of young physicists.
The projects are briefly sketched in the following.
The photon detector of the RICH detector in HADES will be replaced by modern Multi-Anode Photo-Multipliers (MAPM) which have been ordered for the CBM RICH detector.
The CBM RICH detector comprises 1100 MAPMs out of which 428 will be installed in HADES.
The new detector will substantially improve in particular the di-lepton pair efficiency for small opening angles, and, hence, the electron identification performance of the HADES experiment for the intermediate research program at GSI between 2018 and the start of FAIR.
After commissioning of CBM, a part of the MAPMs will be shared by both experiments applying an alternating running scenario.
About 10\% of the TOF-MRPC modules will be installed at the STAR detector at RHIC.
They will serve as an endcap TOF-wall in order to increase the particle identification capability at forward rapidities.
According to current planning, 36 CBM TOF modules housing 108 MRPC detectors will cover an active area of about 10~m$^2$ and comprise 10,000 read-out channels.
Installation of a prototype module is already planned for fall 2016; it is necessary in order to develop the interface between the different data acquisition systems of CBM and STAR.
The installation of the full set of MRPC detector modules is planned to start in spring 2018 allowing the participation in the Beam-Energy Scan II at RHIC in 2019/20, where STAR will run both in the collider and the fixed target mode.
Although the interaction rates will be relatively low, the CBM system will be exposed to particle multiplicities close to the ones expected for running at SIS100.
For STAR the extension of acceptance is vital to improve the particle identification coverage for a number of interesting bulk observables like the rapidity density of antiprotons, the directed and elliptic flow of protons and pions, and the measurement of event-by-event fluctuations of net-baryons.
Also for the strangeness program, targeting among other things the $v_2$ measurement of $\phi$ mesons, visible contributions of the CBM-TOF subsystem are anticipated.
CBM members will gain operational experience in running the subsystem and will be involved in the data analysis and physics publications.
The CBM track finding algorithm based on the Cellular Automaton will be used in data production as well as in the High-Level Trigger (HLT) of the STAR experiment.
Similarly, the KF Particle package for the analysis of short-lived particles will be used for online event selection and offline physics analysis in STAR.
These new algorithms will improve the track reconstruction efficiency and the data processing speed,
and will enable online physics analysis on the STAR HLT computer farm equipped with many-core CPUs and accelerator cards.
The reconstruction algorithms will be used by the \mbox{ALICE} experiment at CERN as well.
CBM members will thus take part in data taking, physics analysis and in publications of these experiments.
Four prototype stations of the CBM Silicon Tracking System (STS) are considered to be installed in the future fixed-target experiment BM@N at the Nuclotron at JINR in Dubna.
The construction of the stations is planned as a joint venture of the CBM-STS group and the BM@N collaboration.
The silicon stations will significantly increase the track reconstruction efficiency in particular for low particle momenta, and, therefore, improve the performance for the identification of multi-strange hyperons, which are the most important observables of the physics program of BM@N.
The participation in this experiment will be very valuable in commissioning the CBM Silicon detector itself and for the development of tracking and physics analysis strategies under experimental conditions.
Data taking with Au beams at energies up to 4.5$A$~GeV and moderate rates is planned for 2018--2021.
A number of tests of the CBM Projectile Spectator Detector (PSD) components are planned and/or currently ongoing at the NA61/SHINE facility.
The readout electronics, the response of the hadron calorimeter modules, and the PSD performance for collision-geometry determination (centrality and event plane) are under investigation with the similarly designed PSD of NA61/SHINE.
At GSI/SIS18 we plan to install a setup consisting of full-size CBM detector modules including the data readout chain up to the GreenIT cube to perform system tests with high-rate nucleus-nucleus collisions.
These measurements will allow us to optimize the performance of the detectors under experiment conditions and to test the free-streaming data transport, including the online event selection on a high-performance computing cluster.
The goal is to reduce the time for CBM commissioning at SIS100.
The setup will be installed in 2017/2018.
\section{Conclusions}
\label{sec:conclusions}
In heavy-ion collisions at beam energies available at SIS100, model calculations predict the creation of strongly interacting QCD matter at extreme values of density, similar to neutron star core densities.
This offers the opportunity to explore the QCD phase diagram in the region of high net-baryon densities, to study the equation of state, to search for phase transitions, chiral symmetry restoration, and exotic forms of (strange) QCD matter with a dedicated experiment.
The CBM detector is designed to measure the collective behaviour of hadrons, together with rare diagnostic probes such as multi-strange hyperons, charmed particles and vector mesons decaying into lepton pairs with unprecedented precision and statistics.
Most of these particles will be studied for the first time in the FAIR energy range.
In order to achieve the required precision, the measurements will be performed at reaction rates up to 10~MHz.
This requires very fast and radiation hard detectors, a novel data read-out and analysis concept including free streaming front-end electronics, and a high performance computing cluster for online event selection.
Several of the CBM detector systems, the data read-out chain, and the event reconstruction software will be commissioned and already used in experiments during FAIR Phase~0.
The unique combination of an accelerator which delivers a high-intensity heavy-ion beam with a modern high-rate experiment based on innovative detector and computer technology offers optimal conditions for a research program with substantial discovery potential for fundamental properties of QCD matter.
\section{Introduction}
\label{sec:introduction}
Structured, sparse communication patterns appear frequently in
parallel numerical applications, notably stencil-patterns in two and
higher dimensions~\cite{Sourcebook03,Epperson07,CuiOlsen10}. With MPI
3.0 and later versions of the \emph{Message-Passing
Interface}~\cite{MPI-3.1}, sparse communication patterns can be
expressed as so-called \emph{neighborhood collective operations}. The
specific mechanism of MPI relies on virtual process topologies to
define communication neighborhoods for the ensuing neighborhood
collective operations. In many respects this is undesirable. The
neighborhood that is implicit with Cartesian communicators is the set
of immediate distance one neighbors along the dimensions, thus
collective communication in a standard, 2-dimensional, 9-point (and
3-dimensional, 27-point, etc.) stencil pattern cannot be expressed
with Cartesian communicators. The general, distributed graph topology
interface allows specification of arbitrary, directed communication
graphs, and can thus express any desired stencil communication
pattern. However, information about the global, highly regular
structure of the communication graph is not conveyed to the MPI
library, which makes many types of beneficial optimizations difficult
and/or computationally hard.
We address these problems, and examine a restricted form of MPI-like,
sparse, collective communication which we term \emph{isomorphic,
sparse collective communication}~\cite{Traff15:isosparse}.
Isomorphic, sparse collective communication means that all processes
communicate in structurally similar patterns and that this property is
asserted to the processes. Concretely, the MPI processes are assumed
to be placed in some regular (virtual) topology, like for instance a
$d$-dimensional torus. A sparse process neighborhood is described by a
list of relative, $d$-dimensional vector offsets. In this situation,
process neighborhoods are isomorphic if the offset lists are identical
(same vector offsets in the same order) over all processes. The
proposed interfaces are persistent both in the sense that the same
sparse, isomorphic neighborhood can be used in different communication
operations, and that operations with the same buffer and datatype
parameters can be performed several times. The persistent interfaces
provide handles to precompute communication schedules such that the
costs of the schedule computation can be amortized over several,
actual collective communication operations. For the isomorphic
collective operations discussed here, the schedule computation is
actually very fast, but the setting up of the MPI derived datatypes
(that are used to make data blocks move between intermediate and final
result buffers) consumes enough time to make persistence worthwhile.
The main contribution of this paper is to show that efficient,
deadlock-free, message-combining communication schedules for
\emph{isomorphic all-to-all\xspace} and \emph{allgather\xspace} can be easily
computed, given the isomorphic assertion that all processes use the
exact same, relative neighborhood. The resulting message-combining
schedules correspond to communication optimizations typically made for
$9$- and $27$-point stencils in two and three dimensions,
respectively, where messages to corner processes piggyback on messages
sent along the principal dimensions.
However, our algorithms are general, and work for any isomorphic
neighborhood, such that also asymmetric patterns can be catered
to. For sparse neighborhoods consisting of $s$ neighbors,
message-combining reduces the number of communication rounds from $s$
send and receive rounds of a straightforward, linear algorithm to
$Nd$, where the constant $N$ depends on the structure of the
neighborhood and on assumptions about the underlying communication
system. For instance, the number of rounds in a $27$-point stencil
pattern in a $3$-dimensional mesh or torus is reduced from $26$ to
only $6$, under the assumption of a one-ported communication
system. This is achieved by combining messages to different neighbors
and sending larger, combined messages along the torus dimensions
only. Since some messages are thus sent via several, intermediate
processes, there is often a tradeoff between number of rounds and
total communication volume, as is the case for dense all-to-all\xspace
communication~\cite{Bruck97}. Message-combining is implemented using
the MPI derived datatype mechanism to specify for each communication
round which messages have to be sent and received and from which
communication buffers. Allowing space for an intermediate
communication buffer, a per-message double-buffering scheme can be
implemented in this manner, thereby completely eliminating explicit
message copying or packing/unpacking and leading to our resulting
\emph{zero-copy implementations}. We have used similar techniques
previously in~\cite{Traff14:bruck}.
Our first all-to-all\xspace and allgather\xspace algorithms assume a one-ported
torus communication network, and are round- and volume-optimal under
this assumption. We have implemented these algorithms, both in
regular and irregular versions, and present an extensive benchmark
evaluation with comparisons to both the current MPI 3.1 neighborhood
collective implementations and the straightforward, $s$-communication
round implementations of the isomorphic interfaces. For small message
sizes up to a few kilobytes, the experimental results show the
expected reduction in communication time. Furthermore, for larger
neighborhoods in three and higher dimensions, we observe very
substantial improvements.
For our second set of algorithms we relax the restriction to only
immediate torus-neighbor communication, and allow direct communication
along the torus dimensions. For neighborhoods with long-distance
neighbors, this can lead to significant reductions in the number of
communication rounds, which now depends only on the number of
different coordinate values in each dimension, and not on the
magnitude of the coordinates. A second set of experiments illustrates
the effects of the fewer communication rounds. Further relaxing the
network assumptions leads to interesting optimization problems for
minimizing the number of communication rounds or maximizing the number
of ports that can be used per communication round. We discuss some of
these problems.
There is a large amount of work on optimizations for stencil
computations,
see~\cite{BasuHallWilliamsStraalenOlikerColella15,Dursun09,Dursun12,StengelTreibigHagerWellein15,TangChowdhuryKuzmaulLukLeiserson11}
for some that has influenced this work, many of which also discuss
communication
optimizations~\cite{BordawekarChoudharyRamanujam96:automatic}.
Stencil computations have been used to analyze (implications of) new
MPI one-sided communication support by
Zhu~\textit{et~al.\@}\xspace~\cite{ZhuZhangYoshiiLiZhangBalaji15}. General optimization
techniques for the MPI neighborhood collectives were proposed by
Hoefler and Schneider~\cite{HoeflerSchneider12}, who do not exploit
external assertions about the overall structure of neighborhoods to
simplify, e.g., scheduling by coloring. More general, dynamic
neighborhood communication on top of MPI is discussed by
Ovcharenko~\textit{et~al.\@}\xspace~\cite{Ovcharenko12}. Souravlas and
Roumeliotis~\cite{SouravlasRoumeliotis08:torus} also considered
message-combining optimizations but in a more limited context than done
here.
\section{Isomorphic, Sparse Collective Communication}
\label{sec:definitions}
We now describe more formally what is meant by isomorphic, sparse
collective communication. The notation introduced here will be used
for the remainder of the paper. We show the concrete interfaces as
implemented in our library.
An isomorphic, sparse collective communication pattern is defined
relative to some given, structured organization of the processes. Let
$p$ be the number of processes, and assume that they are organized in
a $d$-dimensional torus with dimension sizes $p_0, p_1,\ldots,p_{d-1}$
and $\Pi_{i=0}^{d-1}p_i=p$. Each ranked process~$R, 0\leq R<p$ is
identified by a coordinate $(r_0,r_1,\ldots r_{d-1})$ with $0\leq
r_i<p_i$ for $i=0,\ldots, d-1$.
A (sparse) \emph{$s$-neighborhood} of a process is a collection of
$s$ processes to which the process shall \emph{send} data. The
collection is given as a sequence of $s$ \emph{relative-coordinate
vectors} $\allowbreak C^0, \allowbreak C^1,\ldots \allowbreak
C^{s-1}$. Each $C^i$ has the form $(c^i_0, \allowbreak c^i_1, \ldots,
\allowbreak c^i_{d-1})$ for arbitrary integer offsets $c^i_j$
(positive or negative). A set of identical $s$-neighborhoods for a set
of processes is said to be \emph{isomorphic}. An \emph{isomorphic,
sparse collective operation} is a collective operation over $p$
processes with isomorphic neighborhoods. Note that an $s$-neighborhood
is allowed to have repetitions of relative coordinates, and that a
process can be a neighbor of itself, for instance if relative coordinate
$(0,0,\ldots,0)$ is in the $s$-neighborhood. Also note that different
coordinates may denote the same neighbor, which can easily happen if
$p$ is small.
We define torus vector addition $\oplus$ for vectors $R$ and
$C$ in the given torus by $R\oplus C = ((r_0+c_0)\bmod p_0,
(r_1+c_1)\bmod p_1, \ldots, (r_{d-1}+c_{d-1})\bmod p_{d-1})$. Each
process $R=(r_0, r_1,\ldots, r_{d-1})$ with $s$-neighborhood
$\allowbreak C^0, \allowbreak C^1, \ldots, \allowbreak C^{s-1}$ shall
send data to the $s$ \emph{target processes} $R\oplus C^i$ for
$i=0,\ldots, s-1$. Since neighborhoods are isomorphic, it follows
that the process will need to receive data from $s$ \emph{source
processes} $R\ominus C^i$.
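As an illustration (this sketch is not part of the library interface;
the helper names and the row-major rank order, as used by MPI Cartesian
communicators, are our assumptions), target and source ranks can be
computed from the relative coordinates as follows:
\begin{lstlisting}
/* rank of coordinate vector v in a d-dimensional torus
   with dimension sizes p[0..d-1], row-major rank order */
static int coord2rank(int d, const int p[], const int v[])
{
  int rank = 0;
  for (int i=0; i<d; i++) rank = rank*p[i]+v[i];
  return rank;
}

/* out = R (+) C for sign=+1 (targets), R (-) C for
   sign=-1 (sources); adding p[i] makes the C modulus
   non-negative */
static void torus_add(int d, const int p[], const int r[],
                      const int c[], int sign, int out[])
{
  for (int i=0; i<d; i++)
    out[i] = ((r[i]+sign*c[i]) % p[i] + p[i]) % p[i];
}
\end{lstlisting}
The $i$th target of process $R$ is then
\texttt{coord2rank(d,p,t)} with \texttt{t} computed by
\texttt{torus\_add(d,p,r,c,+1,t)} for $C=C^i$.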
The concrete, isomorphic, sparse collective operations we consider
here are of the all-to-all\xspace and the allgather\xspace type. In an
\emph{isomorphic all-to-all\xspace communication}, each process sends an
individual, possibly different \emph{block} of data to each of its
target neighbors, and receives a block of data from each of its source
neighbors. In an \emph{isomorphic allgather\xspace communication}, each
process sends the \emph{same block} of data to each of its target
neighbors, and receives a block of data from each of its corresponding
sources.
\begin{lstlisting}[float,caption={
The collective, isomorphic neighborhood set-up\xspace function. Calling
processes must supply the same list of relative coordinates.
},
label=lst:isocreate,
floatplacement=ht!
]
Iso_neighborhood_create(MPI_Comm cartcomm,
int s, int relative_coordinates[],
MPI_Comm *isocomm)
\end{lstlisting}
\begin{lstlisting}[float,caption={
The interfaces for regular, persistent, isomorphic all-to-all\xspace
and allgather\xspace communication. Collective communication is initiated
and completed by the start call, which uses the buffer and datatype
parameters given in the corresponding init call. The neighborhood is
defined by the isomorphic communicator created by a previous set-up\xspace
call, and send and receive buffers must be large enough to store the
data blocks sent to and received from the neighbors.
},
label=lst:isoall,
floatplacement=ht!
]
Iso_neighbor_alltoall_init(void *sendbuf,
int sendcount, MPI_Datatype sendtype,
void *recvbuf,
int recvcount, MPI_Datatype recvtype,
MPI_Comm isocomm, Iso_request *request)
Iso_neighbor_allgather_init(void *sendbuf,
int sendcount, MPI_Datatype sendtype,
void *recvbuf,
int recvcount, MPI_Datatype recvtype,
MPI_Comm isocomm, Iso_request *request)
Iso_start(Iso_request *request);
Iso_request_free(Iso_request *request);
\end{lstlisting}
For a library on top of MPI, the corresponding interface functions are
as follows. First, the MPI processes need to be organized in a
$d$-dimensional Cartesian mesh or torus with a suitable
$d$-dimensional Cartesian communicator
(\texttt{cartcomm})~\cite[Chapter 7]{MPI-3.1}. The isomorphic
neighborhood set-up\xspace function is called on this communicator, and takes
a list of neighbor coordinates given as a one-dimensional, flattened
array of relative coordinates, and attaches this to a new communicator
\texttt{isocomm}. The set-up\xspace operation is collective, and a strict
requirement is that the calling processes all give the \emph{exact
same} list of relative neighbor coordinates. The function prototype
is shown in Listing\xspace~\ref{lst:isocreate}. As an example, assume we want to
perform isomorphic all-to-all\xspace to the processes in the positive octant
of a three-dimensional torus. The relative coordinates are
$(1,0,0),\allowbreak (0,1,0),\allowbreak (0,0,1),\allowbreak (1,1,0),
\allowbreak (1,0,1),\allowbreak (0,1,1),\allowbreak (1,1,1)$ (and
$(0,0,0)$ if the process has a message to itself). The corresponding
call would be
\begin{lstlisting}
int octant[] =
{1,0,0,0,1,0,0,0,1,1,1,0,1,0,1,0,1,1,1,1,1};
Iso_neighborhood_create(cartcomm,
7,octant,&isocomm);
\end{lstlisting}
Any permutation of the 7 neighbors would specify the same neighborhood
(provided that all calling processes give the neighbors in the same
order; if not, the outcome of the call and of ensuing communication
operations is undefined, and either may deadlock), but the order is
important: it determines the order of the message blocks in the send
and receive buffers of the isomorphic communication operations.
\begin{lstlisting}[float,caption={
The interfaces for irregular, persistent, isomorphic all-to-all\xspace
and allgather\xspace communication.
},
label=lst:isoirreg,
floatplacement=ht!
]
Iso_neighbor_alltoallv_init(void *sendbuf,
int sendcounts[], MPI_Aint senddispls[],
MPI_Datatype sendtype,
void *recvbuf,
int recvcounts[], MPI_Aint recvdispls[],
MPI_Datatype recvtype,
MPI_Comm isocomm, Iso_request *request)
Iso_neighbor_allgatherv_init(void *sendbuf,
int sendcount, MPI_Datatype sendtype,
void *recvbuf,
int recvcounts[], MPI_Aint recvdispls[],
MPI_Datatype recvtype,
MPI_Comm isocomm, Iso_request *request)
Iso_neighbor_alltoallw_init(void *sendbuf,
int sendcounts[], MPI_Aint senddispls[],
MPI_Datatype sendtypes[],
void *recvbuf,
int recvcounts[], MPI_Aint recvdispls[],
MPI_Datatype recvtypes[],
MPI_Comm isocomm, Iso_request *request)
Iso_neighbor_allgatherw_init(void *sendbuf,
int sendcount, MPI_Datatype sendtype,
void *recvbuf,
int recvcounts[], MPI_Aint recvdispls[],
MPI_Datatype recvtypes[],
MPI_Comm isocomm, Iso_request *request)
\end{lstlisting}
The collective interface consists of two parts, namely an init call
where a communication schedule can be precomputed, and an ensuing
communication start call. This separation allows the reuse of a
communication schedule computed in the init call over a number of
collective communication operations with the same buffer and datatype
parameters. The idea is similar to the persistent point-to-point
communication operations of MPI~\cite[Section
3.9]{MPI-3.1}\footnote{There is so far no persistent collectives
counterpart in MPI. This is being considered by the MPI Forum.}. The
interface functions that we have implemented are shown in
Listing\xspace~\ref{lst:isoall}, and have the usual MPI flavor. The data blocks
for the target neighbors are stored consecutively at the
\texttt{sendbuf} address in the order determined by the order of the
neighbors; similarly, blocks from the source neighbors will be stored
at the \texttt{recvbuf} address in the same order. In the regular
variants of the isomorphic collectives, all blocks have the same size
and structure as determined by the count and MPI datatype
arguments. Irregular all-to-all\xspace and allgather\xspace versions
(\texttt{Iso\_\-neighbor\_\-alltoallv\_\-init}\xspace and \texttt{Iso\_\-neighbor\_\-alltoallw\_\-init}\xspace and their
allgather\xspace counterparts) are defined analogously, and are shown in
Listing\xspace~\ref{lst:isoirreg}. The requirement for these irregular
versions is that all processes specify exactly the same block sizes
via their count and datatype arguments, and that send and receive
block sizes match pairwise. Note that the isomorphism requirement
does not, in either the regular or the irregular case, mean that
processes have to use the same datatype arguments; also, the
datatypes for the receive and the send buffers may differ. The
regular variants of the collectives require that all blocks have the
same size, whereas the irregular variants only require block sizes to
match pairwise.
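As a usage illustration (with hypothetical buffer names and iteration
count), the schedule computed by the init call is reused over a
sequence of collective invocations as follows:
\begin{lstlisting}
// Sketch: persistent, regular isomorphic
// all-to-all; the schedule computed in the init
// call is reused in every iteration
Iso_request request;
Iso_neighbor_alltoall_init(sendbuf, count, MPI_BYTE,
    recvbuf, count, MPI_BYTE,
    isocomm, &request);
for (iter = 0; iter < maxiter; iter++) {
  // fill sendbuf with one block per target neighbor
  Iso_start(&request); // performs the collective
  // recvbuf now holds one block per source neighbor
}
Iso_request_free(&request);
\end{lstlisting}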
\section{Message-Combining Algorithms}
\begin{lstlisting}[float,caption={
Straightforward, isomorphic all-to-all\xspace communication in $s$ communication
rounds. The implementation is deadlock free, since all processes have
specified the neighborhood by identical lists of relative
coordinates.
},
label=lst:s-round,
floatplacement=t
]
// R: process rank as d-dimensional vector
// C[i]: i-th offset vector from isocomm
// rank(C): linear MPI rank of vector C
for (i=0; i<s; i++)
MPI_Sendrecv(sendbuf[i],...,rank(R+C[i]),
recvbuf[i],...,rank(R-C[i]),
isocomm);
\end{lstlisting}
We now show how the isomorphic neighborhood assertion makes it easy to
precompute good, message-combining communication schedules. First
note that the simple scheme in Listing\xspace~\ref{lst:s-round} is correct
(deadlock free). Each process looks up its rank as a $d$-dimensional
vector $R$ in the underlying torus, and uses the coordinate offsets to
compute source and target ranks as explained in the previous section.
In the $i$th of $s$ communication rounds, it sends and receives blocks
directly to and from the $i$th source and target processes. Although
the algorithm is trivial, it is worth pointing out that deadlock
freedom follows from the assumption that neighborhoods are isomorphic. In
round $i$ when process $R$ is sending block $i$ to target neighbor
$R\oplus C^i$, this neighbor expects to receive a block from its $i$th
source process, which is indeed $(R\oplus C^i)\ominus C^i=R$. For
neighborhoods defined by unrestricted communication graphs, as is
the case with MPI distributed graph communicators, or if the processes
had given their list of neighbors in different orders, this would not
be the case, and the scheme can deadlock.
The $s$-round algorithm assumes that messages can be sent directly
from a process to its target neighbors, and performs one send and
receive operation per communication round. It can trivially be
extended to exploit $k$-ported communication systems for $k>1$ by
instead sending and receiving $k$ blocks per round. Our first goal is
to provide message-combining schemes with fewer communication rounds,
and to precompute schedules that for each process tell which
(combined) message blocks to send and receive in each communication
round. Our schedules will have the property that all processes follow
the same steps, and can be computed locally for each process from its
list of neighbors.
For the algorithm design, we first assume that the underlying
communication network is a bidirectional (send-receive), one-ported,
$d$-dimensional torus, such that communication is allowed only along
the $d$ dimensions, and only between immediate neighbors. Only one
dimension can be actively communicating at any one instant, but a
process can simultaneously send and receive a message in the given
dimension. We stress that the torus assumption is made to help the
algorithm design, and is not necessarily an assumption about the
underlying hardware. The dimensions are processed in some order, and
in each iteration all blocks that have to go along one dimension are
sent together as one message. This reduces the number of communication
operations (and start-up latencies) from $s$ to $O(d)$. The schedules
for all-to-all\xspace and allgather\xspace communication operations are explained
and analyzed in more detail below. The key observation is that
schedules can be developed from the processes' point of view by
analyzing the $s$-neighborhood of relative coordinates. As in
Listing\xspace~\ref{lst:s-round}, processes will follow the same schedule from
which deadlock freedom and correctness follow. In each communication
round, all processes will have the same (relative) blocks to forward
to other processes. Blocks are always routed along shortest paths in
the torus network, but may pass through processes that are not in the
neighborhood.
\subsection{All-to-all\xspace Schedule}
Define the norm of vector $C=(c_0, \allowbreak c_1, \ldots,
\allowbreak c_{d-1})$ by $\|C\|=\sum_{j=0}^{d-1}|c_j|$. This norm
counts how many communication steps are needed in the torus to route a
block from (any) process $R$ to its target neighbor $R\oplus C$. The
block can be (minimally) routed from $R$ to $R\oplus C$ by sending it
successively $c_j$ (positive or negative) hops along dimension $j$ for
$j=0,\ldots, d-1$. All $s$ blocks from process $R$ to its relative
neighbors $C^i$ are routed as follows in $d$ rounds. In round $j$
each process will be handling blocks to be passed along dimension
$j$. To route all blocks along dimension $j$,
$\max_{i=0}^{s-1}\max(c^i_j,0)+\max_{i=0}^{s-1}\max(-c^i_j,0)$ communication steps
are necessary. In step $h$, for each coordinate $|c^i_j|>h$, an old
block is sent and a new one received, with all such blocks combined
into a single message. By the end of a communication round, all blocks
of a process will have been routed the corresponding $c^i_j$ hops
ahead, and after all $d$ rounds, all blocks have been received by
their target processes.
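For example, with $d=2$ and $C^i=(2,-1)$, the block is forwarded two
hops in the positive direction of dimension \num{0} during round
\num{0}, and then one hop in the negative direction of dimension
\num{1} during round \num{1}, reaching its target after $\|C^i\|=3$
steps.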
Since a process can, in each communication step, only send and receive
along one dimension, it follows that in total $D =
\sum_{j=0}^{d-1}\bigl(\max_{i=0}^{s-1}\max(c^i_j,0)+\max_{i=0}^{s-1}\max(-c^i_j,0)\bigr)$
communication steps are required, which is exactly the number of steps
performed by the algorithm. Since communication is only done between
direct torus neighbors, the shortest path for each block is $\|C^i\|$
hops, such that the total number of blocks sent per process
(communication volume) is $V=\sum_{i=0}^{s-1}\|C^i\|$. Also this is
achieved by the algorithm. For a given (isomorphic) $s$-neighborhood,
both $D$ and $V$ can be easily computed and used to estimate the cost
of the all-to-all\xspace communication. In a simple, linear cost model with
latency $\alpha$ and cost per unit $\beta$, this would be $D\alpha +
\beta Vm$ for blocks of $m$ units. In this cost model, the
message-combining schedule can be faster than the direct schedule for
fully connected, bidirectional networks of Listing\xspace~\ref{lst:s-round}, if
$D\alpha + \beta Vm < s(\alpha+\beta m)$, that is
$m<\frac{\alpha}{\beta}\frac{s-D}{V-s}$ for $s<V$ and $D<s$.
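For instance, for the $d=3$ dimensional neighborhood consisting of
all $s=26$ non-zero offset vectors with coordinates in $\{-1,0,1\}$,
we get $D=2\cdot 3=6$ and $V=6\cdot 1+12\cdot 2+8\cdot 3=54$, so that
in this model message-combining pays off for blocks of fewer than
$\frac{\alpha}{\beta}\cdot\frac{26-6}{54-26}\approx 0.7\,\alpha/\beta$
units.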
We have argued for the following statement.
\begin{proposition}
\label{prop:alltoall}
In $d$-dimensional, 1-ported, bidirectional tori, isomorphic all-to-all\xspace
communication in $s$-neighborhoods with blocks of size $m$ can be
performed round- and volume-optimally in $D$ communication rounds and
total communication volume $Vm$. A corresponding schedule can be
computed in $O(sD)$ operations.
\end{proposition}
The schedule computation is described in detail in
Section\xspace~\ref{sec:zerocopy} from which the stated bound follows. If
the coordinates of all $s$ neighbors are bounded (say each
$c^i_j\in\{0,1,\ldots,k\}$ for some small constant $k$), then $D\leq kd$,
and the number of communication rounds will be small.
\subsection{Allgather\xspace Schedule}
\begin{figure}
\centering
\includestandalone[width=\linewidth]{figs/fig1}
\caption{A prefix trie for some $s$-neighborhood. Weights on edges
from dimension level $j$ to dimension level $j+1$ correspond to the
$j$th coordinate of some neighbor in the $s$-neighborhood. Each node
at level $j$ represents the neighbors that share a common prefix of
$j-1$ coordinates.}
\label{fig:prefixtrie}
\end{figure}
We also use dimension-wise routing for the isomorphic allgather\xspace
operation. The observation here is that for all relative neighbors
$C^i$ that share a common prefix, i.e., have the same first $j$
coordinates for some $j<d$, the block has to be routed only once to
that prefix (recall that the allgather\xspace operation communicates the
same block to all target neighbors). We construct a prefix-trie as
illustrated in Figure\xspace~\ref{fig:prefixtrie}. In order to ensure that
prefixes get as long as possible, we assume that the order in which
coordinates are visited is in decreasing number of neighbors having
the same coordinate value. Starting from dimension \num{0} there is an
outgoing edge to dimension \num{1} for each different coordinate at
index \num{0} in the $s$-neighborhood. From a node at dimension $j$
with prefix $P_j$ (corresponding to the path from the root of the
trie), representing a block that has been received at that point from
process $R\ominus P_j$, there are outgoing edges to dimension $j+1$
for each different coordinate at index $j$ of the relative neighbors
$C^i$ sharing prefix $P_j$. The leaf nodes of the trie represent the
blocks that will have been received after the $d$ rounds. The prefixes
corresponding to the nodes in the trie can be found by sorting the $s$
neighbor vectors lexicographically. When routing in round $j$, the
number of nodes at level $j$ is the number of different blocks that
have to be sent in that round, and the edge weights determine the
number of hops over which each of these blocks has to be sent. As in
the all-to-all\xspace schedule, in each round~$j$,
$\max_{i=0}^{s-1}\max(c^i_j,0)+\max_{i=0}^{s-1}\max(-c^i_j,0)$ communication steps
are necessary, but the communication volume is smaller. The number of
blocks $W$ received per process throughout the algorithm is the total
weight of the trie, that is, the sum over all trie edges of the
absolute values of their weights: each coordinate value associated
with a trie edge determines the number of hops a certain block is
sent in some round, and each shared prefix edge contributes only
once. Note that $W\leq V$, so for each fixed $s$-neighborhood,
allgather\xspace is potentially less costly than all-to-all\xspace. For instance,
for the positive-octant neighborhood used as an example above,
$V=3\cdot 1+3\cdot 2+3=12$, whereas prefix sharing in the trie gives
$W=7$. Lexicographic sorting can be
done by bucket sort in $O(sd)$ operations. We have argued informally
for the following statement.
\begin{proposition}
In $d$-dimensional, 1-ported, bidirectional tori, isomorphic allgather\xspace
communication in $s$-neighborhoods with blocks of size $m$ can be
performed round- and volume-optimally in $D$ communication rounds and a total
communication volume $Wm$. A schedule can be computed in $O(sD)$ operations.
\end{proposition}
By the same argument as for all-to-all\xspace, $D$ communication rounds are
necessary: for each dimension $j$ there is some neighbor whose $j$th
coordinate attains $\max_{i}\max(c^i_j,0)$ and some whose $j$th
coordinate attains $-\max_{i}\max(-c^i_j,0)$, and since communication
is one-ported, this many steps are required in round~$j$.
\subsection{Zero-Copy Implementations}
\label{sec:zerocopy}
\begin{algorithm}[h!]
\begin{algorithmic}
\STATE $\forall 0\leq i<s: \mathrm{hops}[i] \leftarrow \|C^i\|$
\STATE $D\leftarrow 0; V\leftarrow 0$
\FOR{$j\leftarrow 0,\ldots,d-1$}
\FOR{$h\leftarrow 0,\ldots,(\max_{i=0}^{s-1}c_j^i)-1$}
\STATE $k\leftarrow 0; D\leftarrow D+1$ // Positive coordinates
\FOR{$i\leftarrow 0,\ldots,s-1$}
\IF{$h<c^i_j$}
\IF{$\textrm{firstround}[i]\equiv j$}
\IF{$\mathrm{even}(\mathrm{hops}[i])$}
\STATE $\mathrm{RECV}(\mathrm{part}\ k)\leftarrow \mathtt{interbuf}[i]$
\STATE $\mathrm{SEND}(\mathrm{part}\ k)\leftarrow \mathtt{sendbuf}[i]$
\ELSE
\STATE $\mathrm{RECV}(\mathrm{part}\ k)\leftarrow \mathtt{recvbuf}[i]$
\STATE $\mathrm{SEND}(\mathrm{part}\ k)\leftarrow \mathtt{sendbuf}[i]$
\ENDIF
\ELSE
\IF{$\mathrm{even}(\mathrm{hops}[i])$}
\STATE $\mathrm{RECV}(\mathrm{part}\ k)\leftarrow \mathtt{interbuf}[i]$
\STATE $\mathrm{SEND}(\mathrm{part}\ k)\leftarrow \mathtt{recvbuf}[i]$
\ELSE
\STATE $\mathrm{RECV}(\mathrm{part}\ k)\leftarrow \mathtt{recvbuf}[i]$
\STATE $\mathrm{SEND}(\mathrm{part}\ k)\leftarrow \mathtt{interbuf}[i]$
\ENDIF
\ENDIF
\STATE{$V\leftarrow V+1;\mathrm{hops}[i] \leftarrow \textrm{hops}[i]-1; k\leftarrow k+1$}
\ENDIF
\ENDFOR
\ENDFOR
\FOR{$h\leftarrow 0,\ldots,(\max_{i=0}^{s-1}-c_j^i)-1$}
\STATE // Negative coordinates (analogous)
\ENDFOR
\ENDFOR
\end{algorithmic}
\caption{Computing the alternating, zero-copy all-to-all\xspace schedule for
$s$ blocks and neighbors in $O(sD)$ operations. User buffers
\texttt{recvbuf} and \texttt{sendbuf} are supplied by the
\texttt{Iso\_\-neighbor\_\-alltoall}\xspace call, while \texttt{interbuf} is an
intermediate buffer of the same size. The $\mathrm{firstround}[i]$
for neighbor $i$ is the first dimension $j$ with $c_j^i\neq 0$. The
RECV and SEND output consisting of $k$ parts represents the
schedule, and describes what happens for neighbor block $i$ in
communication step $h$. RECV and SEND is used to set up the MPI derived
datatypes for immediate neighbor communication in each step. Only
the computations for positive coordinates are shown.}
\label{alg:zero-alltoall}
\end{algorithm}
So far we have not described how the blocks to send and receive are
combined in the steps of the communication rounds. We now present the
full schedule computation for the all-to-all\xspace operation. In each of the
$D$ communication steps (see Proposition~\ref{prop:alltoall}), at
least one new block is received and one block is sent. The initial
blocks are present in the send buffer given in the
\texttt{Iso\_\-neighbor\_\-alltoall}\xspace call (which must not be changed), and eventually
all source blocks have to be received into the given receive
buffer. Over the communication rounds, the block to the $i$th neighbor
$R\oplus C^i$ will traverse $\|C^i\|$ hops. We will let the block
alternate between intermediate and receive buffers of the processes
that it traverses, such that it ends up in the $i$th position of the
receive buffer at process $R\oplus C^i$ in the last round. In each
communication step, some blocks are sent from the intermediate buffer
and received into the receive buffer, and other blocks are sent from
the receive buffer and received into the intermediate buffer. A block
will end up in the receive buffer if we receive it into the receive
buffer when there are an odd number of hops is remaining. In each
step of the schedule, all blocks to be sent in that step are combined
into one message; likewise for the blocks received. Instead of doing
this explicitly by copying into yet another intermediate buffer, two
MPI derived datatypes are constructed, one describing the blocks to be
received (whether into receive or intermediate buffer) and one
describing the blocks to be sent. These MPI datatypes consist of $k$
parts corresponding to the $k$ blocks sent and received in that step.
Since blocks are located in one of three different buffers (send,
receive and intermediate), an MPI structured type is needed and
constructed with \texttt{MPI\_\-Type\_\-create\_\-struct}\xspace. With these derived datatypes, the MPI
send and receive operations directly access the blocks from the
corresponding buffers without any need for explicit packing and
unpacking from contiguous communication buffers. The same kind of
block-by-block double buffering with derived datatypes was used by
Tr\"aff~\textit{et~al.\@}\xspace~\cite{Traff14:bruck}. This is our final, so-called
\emph{zero-copy implementation}: all data movement operations between
buffers are done implicitly by MPI communication operations using the
MPI derived datatypes constructed by the schedule without any
process-local, explicit copying of blocks to be sent or received. The
construction of the alternating buffer schedule is shown as
Algorithm~\ref{alg:zero-alltoall}. The schedule is represented by the
$D$ send and receive datatypes for the $D$ communication steps. The
schedule is precomputed at the \texttt{Iso\_\-neighbor\_\-alltoall\_\-init}\xspace call, so that
data type creation can be amortized over the ensuing \texttt{Iso\_\-Start}\xspace
calls.
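To make the datatype construction concrete, the following sketch
(with hypothetical names; the $k$ block pointers for the step are
taken from the computed schedule) shows how the receive datatype for
one communication step can be assembled from absolute addresses and
then used with \texttt{MPI\_BOTTOM} as receive buffer:
\begin{lstlisting}
// Sketch: build the receive datatype for one
// communication step from the k block locations
// (in recvbuf or interbuf) given by the schedule
int blocklens[MAXBLOCKS];
MPI_Aint displs[MAXBLOCKS];
MPI_Datatype types[MAXBLOCKS], steptype;
for (int p = 0; p < k; p++) {
  blocklens[p] = 1;
  types[p] = blocktype; // datatype of one block
  MPI_Get_address(blockptr[p], &displs[p]);
}
MPI_Type_create_struct(k, blocklens, displs,
    types, &steptype);
MPI_Type_commit(&steptype);
// receive with count 1 of steptype at MPI_BOTTOM
\end{lstlisting}
The send datatype for the step is constructed analogously from the
blocks to be sent in that step.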
\section{Experimental Evaluation, Part One}
\label{sec:experiment1}
In order to assess the potential gains of zero-copy message-combining,
we compare our isomorphic collective implementations to the MPI neighborhood
collectives that express the same communication patterns, namely
\texttt{MPI\_\-Neighbor\_\-alltoall}\xspace, \texttt{MPI\_\-Neighbor\_\-allgather}\xspace, and
\texttt{MPI\_\-Neighbor\_\-alltoallw}\xspace.
For our basic comparisons, we use generalizations of the
application-relevant, two-dimensional $9$-point stencil
pattern, so-called \emph{Moore\xspace neighborhoods}\footnote{See
\url{http://mathworld.wolfram.com/MooreNeighborhood.html}, last
visited on April 8, 2016.}~\cite{ToffoliMargolus87}. A
$d$-dimensional, radius $r$ Moore neighborhood consists of all
neighbors $C^i$ whose largest absolute value coordinate $|c^i_j|$ is
at most $r$.
Moore\xspace neighborhoods have large numbers of neighbors, namely
$s=(2r+1)^{d}-1$ (excluding the process itself), which can reduce the
number of communication rounds from $s$ down to $D=2rd$ for the
torus-based message-combining algorithms.
\begin{table}[t]
\centering
\caption{Parallel machines used in experiments.}
\label{tab:machines}
\begin{footnotesize}
\begin{tabular}{l@{\hskip .05in}l@{\hskip .05in}l}
\toprule
name & hardware & MPI libraries \\
\midrule
Jupiter\xspace & \num{36} Dual Opteron 6134\,@\,\SI{2.3}{\giga\hertz} & NEC\,MPI~1.3.1\xspace \\
& InfiniBand\xspace QDR MT4036 & \\
\mbox{VSC-3}\xspace & \num{2000} Dual Xeon E5-2650V2\,@\,\SI{2.6}{\giga\hertz} & MVAPICH~2-2.2b \\
& InfiniBand\xspace QDR-80 & \\
ARCHER\xspace & \num{4920} Dual Xeon E5-2697V2\,@\,\SI{2.7}{\giga\hertz} & Cray\,MPICH~7.2.6\xspace \\
& Cray Dragonfly & \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{table}
Initial experiments were conducted on a small \num{36}~node cluster.
We expect the performance to depend mostly on the neighborhood, and
less on the number of processes. To corroborate this, we repeated the
experiments on \num{70}~and \num{500}~nodes of two larger systems using
different MPI libraries. The system configurations are summarized in
Table\xspace~\ref{tab:machines}.
In each experiment we measure the run-time\xspace of either all-to-all\xspace or
allgather\xspace implementations over different, small block sizes. We
perform \num{100}~repetitions of each measurement and synchronize MPI
processes before each measurement. We compute the run-time\xspace by taking
the maximum local run-time\xspace across all processes in the collective
operation. Each experiment is repeated \num{10}~times to account for
run-time\xspace variations across individual \texttt{mpirun}'s\xspace. Processes are always
pinned to specific cores, and the CPU frequency is set as high as
possible.
We remove outliers with Tukey's method (using a bound of three times
the inter-quartile range), and we compute the median run-time\xspace of the
remaining measurements. Results are shown as bar plots of the median
of the previously obtained medians over the \num{10}~\texttt{mpirun}'s\xspace, along
with their minimum and maximum values to visualize possible run-time\xspace
variations.
\begin{table}
\caption{Set-up\xspace times of neighborhoods and schedule computations
(median of \num{400}~measurements, Moore\xspace neighborhood,
\num{30x16} processes, NEC\,MPI~1.3.1\xspace, Jupiter\xspace).}
\label{tab:setuptimes}
\input{figs/input1}
\vspace{-5pt}
\end{table}
\begin{figure*}[t]
\centering
\begin{subfigure}[$d=2$ dimensions, radius $r=1$ (\num{8}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig2}}
\label{exp:dim2rad1-moore}%
}%
\end{subfigure}%
\hfill%
\begin{subfigure}[$d=3$ dimensions, radius $r=1$ (\num{26}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig3}}
\label{exp:dim3rad1-moore}%
}
\end{subfigure}
\begin{subfigure}[$d=4$ dimensions, radius $r=1$ (\num{80}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig4}}
\label{exp:dim4rad1-moore}%
}
\end{subfigure}
\hfill%
\begin{subfigure}[$d=5$ dimensions, radius $r=1$ (\num{242}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig5}}
\label{exp:dim5rad1-moore}%
}
\end{subfigure}
\begin{subfigure}[$d=3$ dimensions, radius $r=3$ (\num{342}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig6}}
\label{exp:dim3rad3-moore}%
}
\end{subfigure}
\hfill%
\begin{subfigure}[Asymmetric neighborhood (positive coordinates), $d=3$ dimensions, radius $r=3$ (\num{63}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig7}}
\label{exp:dim3rad3-asymmoore}%
}
\end{subfigure}
\caption{\label{exp:alltoall} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and
\texttt{MPI\_\-Neighbor\_\-alltoall}\xspace, Moore\xspace neighborhood, row order of neighbors, \num{30x16}~processes,
NEC\,MPI~1.3.1\xspace, machine: Jupiter\xspace.}
\end{figure*}
Our first set of experiments compares our message-combining all-to-all\xspace
algorithms to the \texttt{MPI\_\-Neighbor\_\-alltoall}\xspace collective on a series of
Moore\xspace neighborhoods. This is a regular exchange operation, and all
blocks have the same size. The measured run-times\xspace are shown for
different block sizes. Neighborhoods for the MPI collectives have to
be set up using one of the two distributed graph constructors
\texttt{MPI\_\-Dist\_\-graph\_\-create}\xspace or \texttt{MPI\_\-Dist\_\-graph\_\-create\_\-adjacent}\xspace, which can both be
rather costly. In Table~\ref{tab:setuptimes} we compare the set-up\xspace
times for the full Moore\xspace neighborhoods used in the experiments for
dimension $d=2, 3, 4, 5$ and radius $r=1, 2, 3$. As expected, the
\texttt{MPI\_\-Dist\_\-graph\_\-create}\xspace constructor is significantly more expensive than
the more specific \texttt{MPI\_\-Dist\_\-graph\_\-create\_\-adjacent}\xspace, with an unexplained
drop in the MPI set-up\xspace times when going from \num{3} to \num{4}
dimensions. Our \texttt{Iso\_\-neighborhood\_\-create}\xspace is faster than or at least in
the same ballpark as \texttt{MPI\_\-Dist\_\-graph\_\-create\_\-adjacent}\xspace. We also report the
time for \texttt{Iso\_\-neighbor\_\-alltoall\_\-init}\xspace, in which the schedule computation
of Algorithm\xspace~\ref{alg:zero-alltoall} is performed, including the creation
of the MPI derived datatypes. With our interface, set-up\xspace and
initialization times can be amortized over several
\texttt{Iso\_\-neighbor\_\-alltoall}\xspace calls; still, it is important that these times be as
low as possible.
For the underlying Cartesian communicator of the isomorphic
neighborhoods, we use \texttt{MPI\_\-Dims\_\-create}\xspace (despite its potential
problems~\cite{Traff15:dimscreate}) and enable reordering, such that
the virtual torus may be aligned with the underlying communication
system.
For higher dimensions of the tested neighborhoods, the number of
relative neighbors is larger than the number of processes, such that
the same process is a neighbor for many different blocks. Our
implementations work regardless, and all such blocks are combined into
the same message.
Our communication experiments use small block sizes from \SI{1}{\byte}
to \SI{2}{\kilo\byte}. Selected results for Moore\xspace neighborhoods in
dimension $d=2,3,4,5$ with radius $r=1,3$ are shown in
Figure\xspace~\ref{exp:dim2rad1-moore} to Figure\xspace~\ref{exp:dim3rad3-moore}. For
small block sizes, we observe considerable improvements over the MPI
neighborhood collectives, close to the ratio of number of neighbors to
$2d$. It is interesting to note that the performance of the MPI
neighborhood collectives sometimes depends on whether the neighborhood
was set up with \texttt{MPI\_\-Dist\_\-graph\_\-create}\xspace or \texttt{MPI\_\-Dist\_\-graph\_\-create\_\-adjacent}\xspace. As
block sizes grow, the advantage of message-combining diminishes.
Finally, the experiment in Figure\xspace~\ref{exp:dim3rad3-asymmoore} considers
isomorphic all-to-all\xspace communication with asymmetric neighborhoods, and
shows the benefits of zero-copy message-combining in this situation.
We used an incomplete Moore\xspace neighborhood in $d=3$ dimensions and
radius $r=3$ consisting only of the positive coordinate neighbors, as
in Section~\ref{sec:definitions}.
\begin{figure*}[t]
\centering
\begin{subfigure}[$d=3$ dimensions, radius $r=1$ (\num{26}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig8}}
\label{exp:alltoallw-irregular-d3r1}%
}%
\end{subfigure}%
\hfill%
\begin{subfigure}[$d=4$ dimensions, radius $r=1$ (\num{80}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig9}}
\label{exp:alltoallw-irregular-d4r1}%
}
\end{subfigure}
\caption{\label{exp:alltoallw-irregular} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-alltoallw}\xspace and \texttt{MPI\_\-Neighbor\_\-alltoallw}\xspace, Moore\xspace neighborhood with irregular data distribution to neighbors,
row order of neighbors, \num{30x16}~processes, NEC\,MPI~1.3.1\xspace, machine: Jupiter\xspace.}
\end{figure*}
Our implementation of the irregular \texttt{Iso\_\-neighbor\_\-alltoallw}\xspace operation,
which uses the same schedules as in the regular case, is benchmarked
in Figure\xspace~\ref{exp:alltoallw-irregular}. The plots show the results of
the experiment with an irregular data distribution. Here, the block
sizes sent to each neighbor depend on the distance of that neighbor
$\|C^i\|$, such that the block sent to neighbor $i$ is of size
$\hat{m}^{d-\|C^i\|}$. This emulates the behavior of many stencil
computations, where the messages exchanged with corners are smaller
than with edges and hyperplanes. We tested the algorithm with three-
and four-dimensional Moore\xspace neighborhoods with radius $r=1$, having
\num{26}~and \num{80}~neighbors, respectively. The base block size
$\hat{m}$ is varied between the different experiments and is shown on
the x-axis, together with the total size of the send buffer per
process. For example, in Figure\xspace~\ref{exp:alltoallw-irregular-d3r1}, for
$\hat{m}=\SI{512}{\byte}$, each process sends messages with one of the
following sizes to the \num{26} neighbors: \SI{1}{\byte},
\SI[exponent-base = 512]{e1}{\byte}, and \SI[exponent-base =
512]{e2}{\byte}, amounting to a total size of \SI{1.5}{\mega\byte}
for the entire send buffer. In the experiment
(see Figure\xspace~\ref{exp:alltoallw-irregular}), our all-to-allw\xspace implementation
outperforms the standard MPI collective in most of the cases.
\begin{figure*}[t]
\centering
\begin{subfigure}[Moore\xspace neighborhood in $d=3$ dimensions, radius $r=3$ (\num{342}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig10}}
\label{exp:dim3rad3-direct}%
}%
\end{subfigure}%
\hfill%
\begin{subfigure}[``Shales'' corresponding to radius $r_1=3,r_2=7$ in a Moore\xspace neighborhood in $d=3$ dimensions (\num{1396}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig11}}
\label{exp:dim3shales-direct}%
}
\end{subfigure}
\caption{\label{exp:alltoall-pers-vs-direct} Median run-times\xspace of the straightforward neighbor
all-to-all\xspace implementation, \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and \texttt{Iso\_\-neighbor\_\-alltoall\_\-direct}\xspace, row order of neighbors,
\num{30x16} processes, NEC\,MPI~1.3.1\xspace, machine: Jupiter\xspace.}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}[$d=3$ dimensions, radius $r=3$ (\num{342}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig12}}
\label{exp:dim3rad3-allgathermoore}%
}
\end{subfigure}
\hfill%
\begin{subfigure}[Asymmetric (positive coordinates) neighborhood in $d=3$ dimensions, radius $r=3$ (63 neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig13}}
\label{exp:dim3rad3-allgatherasym}%
}%
\end{subfigure}%
\caption{\label{exp:allgather} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-allgather}\xspace, \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and \texttt{MPI\_\-Neighbor\_\-allgather}\xspace,
Moore\xspace neighborhood, row order of neighbors, \num{30x16}~processes, NEC\,MPI~1.3.1\xspace, machine: Jupiter\xspace.}
\end{figure*}
\section{Better Algorithms}
\label{sec:optimizations}
The assumption of a one-ported torus network was useful in that it led
to easily computable, optimal message-combining schedules. However,
most real systems (e.g., as in Table\xspace~\ref{tab:machines}) have
different, more powerful communication systems. If we relax the torus
assumption, better algorithms for more powerful communication systems
may be possible, and interesting optimization problems and tradeoffs
between the number of communication rounds and volume
arise~\cite{Bruck97}.
Assume that we have---at the other extreme---a fully connected,
bidirectional, $k$-ported communication system. In this case, we could
ask: What is the minimal number of communication rounds for a given
$s$-neighborhood? What is the optimal load balance in number of blocks
sent per communication round? What is the optimal schedule for an
irregular $s$-neighborhood where blocks to be sent to different
neighbors may have different sizes?
To minimize the number of communication rounds in a one-ported,
fully-connected system, the following optimization problem has to be
solved. Given a set of $s$ vectors $\cal C$, find a smallest
\emph{additive basis} $\cal B$ such that each $C\in\cal C$ can be
written as a sum of distinct $B_i\in \cal B$. Note that it is
explicitly not required that ${\cal B}\subseteq {\cal C}$. Our torus
algorithms use the additive basis vectors
$(1,0,0,\ldots),(0,1,0,\ldots),(0,0,1,\ldots)$, but in general need
repetitions (several hops) of the basis vectors. The algorithm that
will be sketched below uses distinct basis vectors. Given an additive
basis, we claim that a schedule can be computed easily and similarly
to the torus schedules, and both all-to-all\xspace and allgather\xspace operations
will require $|\cal B|$ rounds. How hard is the problem of finding
smallest additive bases for arbitrary $s$-neighborhoods? Some $d=1$
dimensional examples are illustrative. For ${\cal C}=\{1,2,3\}$, a
minimal additive basis is $\{1,2\}$. For ${\cal
C}=\{1,2,3,4,5,6,7\}$, a minimal additive basis is $\{1,2,4\}$,
which is the scheme used by logarithmic doubling all-to-all\xspace and
allgather\xspace algorithms~\cite{Bruck97}. For ${\cal
C}=\{1,2,3,4,5,6,7,8\}$, minimal additive bases are
$\{1,2,3,6\}$ or $\{1,2,4,8\}$.
Let us assume instead a $d$-dimensional torus communication system
with direct communication along the dimensions, such that it is
possible to send a message directly to a neighbor with relative
coordinate $c_j$ in any of the dimensions. We can perform the
communication operations using an additive (but not necessarily
minimal) basis consisting of all projected vectors
$(0,\ldots,c_j,\ldots,0)$ for the different $c_j$ in each of the $d$
dimensions. We can easily modify our schedules to use this basis,
namely to send directly to relative neighbor $(0,\ldots,c_j,\ldots,0)$
instead of via $c_j$ hops. All blocks going to the same relative
neighbor in round $j$ can be combined. In order to achieve this, in
communication round $j$ the relative neighbors need to be (bucket)
sorted for the $j$th dimension. For each neighbor, the number of hops
to traverse is reduced from $\|C^i\|$ to the number of non-zero
coordinates in $C^i$, and summing this over all $s$ neighbors gives
the total number of messages sent. The number of rounds needed per
dimension is the number of different, non-zero coordinates, and
summing over all dimensions gives the total number of rounds. Since
the number of rounds is no longer dependent on the magnitude of the
coordinates, schedules can now be computed in $O(sd)$ operations.
We have implemented both \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and
\texttt{Iso\_\-neighbor\_\-allgather}\xspace along these lines; we call these the \emph{torus
direct} algorithms. For non-torus systems, e.g., those in
Table~\ref{tab:machines}, we expect that direct communication can be
exploited so that the smaller number of communication rounds
will indeed pay off.
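As an illustration of the schedule computation (hypothetical variable
names; coordinates assumed bounded by some radius $r$), the neighbors
can be grouped by their coordinate in dimension $j$ with a simple
bucket count, after which one combined message per distinct non-zero
coordinate is sent directly:
\begin{lstlisting}
// Sketch: bucket the s neighbors by their
// coordinate in dimension j; coordinates are
// assumed to lie in [-r,r]
for (int v = 0; v < 2*r+1; v++) count[v] = 0;
for (int i = 0; i < s; i++)
  count[c[i*d+j]+r]++;
for (int v = -r; v <= r; v++)
  if (v != 0 && count[v+r] > 0) {
    // combine the count[v+r] blocks into one
    // message for relative neighbor
    // (0,...,v,...,0); receive correspondingly
    // from relative neighbor (0,...,-v,...,0)
  }
\end{lstlisting}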
\section{Experimental Evaluation, Part Two}
\label{sec:experiment2}
\begin{figure*}[t]
\centering
\begin{subfigure}[\num{70x1}~processes.]%
{\includestandalone[width=0.48\linewidth]{{figs/fig14}}
\label{exp:vsc3-nnp1}%
}%
\end{subfigure}%
\hfill%
\begin{subfigure}[\num{70x16}~processes.]%
{\includestandalone[width=0.48\linewidth]{{figs/fig15}}
\label{exp:vsc3-nnp16}%
}
\end{subfigure}
\caption{\label{exp:vsc-alltoall-pers-vs-direct} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and
\texttt{Iso\_\-neighbor\_\-alltoall\_\-direct}\xspace (neighborhood set up using \texttt{Iso\_\-neighborhood\_\-create}\xspace),
Moore\xspace neighborhood in $d=3$ dimensions, radius $r=3$ (\num{342}~neighbors),
row order of neighbors, MVAPICH~2-2.2b, machine: \mbox{VSC-3}\xspace.}
\end{figure*}
\begin{figure}[t]
\centering
\includestandalone[width=\linewidth]{{figs/fig16}}
\caption{\label{exp:vsc3-comparison} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-alltoall}\xspace and
\texttt{Iso\_\-neighbor\_\-alltoall\_\-direct}\xspace, Moore\xspace neighborhood, row order of neighbors,
\num{70x16}~processes, MVAPICH~2-2.2b, machine: \mbox{VSC-3}\xspace.}
\end{figure}
\begin{figure*}[t]
\centering
\begin{subfigure}[$d=4$ dimensions, radius $r=1$ (\num{80}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig17}}
\label{exp:archer-alltoall-pers-vs-trivial-d4}%
}%
\end{subfigure}%
\hfill%
\begin{subfigure}[$d=5$ dimensions, radius $r=1$ (\num{242}~neighbors).]%
{\includestandalone[width=0.48\linewidth]{{figs/fig18}}
\label{exp:archer-alltoall-pers-vs-trivial-d5}%
}
\end{subfigure}
\caption{\label{exp:archer-alltoall-pers-vs-trivial} Median run-times\xspace of \texttt{Iso\_\-neighbor\_\-alltoall}\xspace,
\texttt{MPI\_\-Neighbor\_\-alltoall}\xspace and the straightforward implementation of \texttt{Iso\_\-neighbor\_\-alltoall}\xspace, Moore\xspace neighborhood, row order of neighbors,
\num{500x24}~processes, Cray\,MPICH~7.2.6\xspace, machine: ARCHER\xspace.}
\end{figure*}
We have benchmarked the torus direct implementations using the
same systems and Moore\xspace neighborhoods as in
Section\xspace~\ref{sec:experiment1}, but the emphasis is on comparing our three
implementations, namely the straightforward
implementation shown in Listing\xspace~\ref{lst:s-round}, the optimal torus
implementations, and the torus direct algorithms. Selected
results for \texttt{Iso\_\-neighbor\_\-alltoall}\xspace are shown in
Figure\xspace~\ref{exp:dim3rad3-direct}, while in
Figure\xspace~\ref{exp:dim3shales-direct} we have used a neighborhood
consisting of ``shales'' of neighbors at the Chebyshev distances $r_1=3$
and $r_2=7$. As message sizes grow, the smaller number of
communication rounds and the smaller total communication volume of the
torus direct algorithm make it perform gradually better than the
optimal torus algorithm. For the shales neighborhood in
Figure\xspace~\ref{exp:dim3shales-direct}, the number of communication rounds
for the torus algorithm is about $2 r_2 d =42$ compared to only $(2+2)
d=12$ for the direct algorithm, and, more significantly, in the former
the number of times blocks are sent further on is proportional to the
number of rounds. The torus algorithm becomes slower than the
straightforward algorithm already for message sizes of
\SI{500}{\byte}; in contrast, the direct algorithm stays on
par with the straightforward one in the message range shown. The
experiments show that exploiting direct communication can lead to
better performing message-combining implementations; it is therefore
relevant to pursue the optimization problems posed in
Section~\ref{sec:optimizations}.
The \texttt{Iso\_\-neighbor\_\-allgather}\xspace collective is investigated in
Figure\xspace~\ref{exp:dim3rad3-allgathermoore} with a complete
three-dimensional Moore\xspace neighborhood and in
Figure\xspace~\ref{exp:dim3rad3-allgatherasym} with an asymmetric Moore\xspace
neighborhood. The run-times\xspace of the \texttt{MPI\_\-Neighbor\_\-allgather}\xspace operation are
similar to those of the \texttt{MPI\_\-Neighbor\_\-alltoall}\xspace for the same
neighborhood, as can be seen for small message sizes by comparing to
Figure\xspace~\ref{exp:dim3rad3-moore} and Figure\xspace~\ref{exp:dim3rad3-asymmoore}.
Thus, Figure\xspace~\ref{exp:allgather} suggests that the MPI library we used
implements the allgather\xspace and all-to-all\xspace operations in exactly the same
way: each block of data is sent directly to the corresponding
neighbor. In contrast, the \texttt{Iso\_\-neighbor\_\-allgather}\xspace operation achieves an
\SI{80}{\percent} run-time\xspace reduction over \texttt{MPI\_\-Neighbor\_\-allgather}\xspace for
the tested message sizes, as well as a substantially improved
performance over its all-to-all\xspace counterpart. This behavior can be
explained by the design of the allgather\xspace schedule, which reduces the
volume of data sent, compared to the all-to-all\xspace one. To further
highlight the efficiency of the allgather\xspace schedule, we compare
\texttt{Iso\_\-neighbor\_\-allgather}\xspace with \texttt{Iso\_\-neighbor\_\-alltoall}\xspace for an asymmetric
Moore neighborhood in Figure\xspace~\ref{exp:dim3rad3-allgatherasym}. Here, we
can again see that using \texttt{Iso\_\-neighbor\_\-allgather}\xspace pays off as the message
size increases, as the operation completes three times faster than
all-to-all\xspace for message sizes of up to \SI{40}{\kilo\byte}.
Finally, we have evaluated the proposed torus implementations on the
\mbox{VSC-3}\xspace machine, using the MVAPICH~2-2.2b~library, and the ARCHER\xspace
machine with the Cray\,MPICH~7.2.6\xspace~library. As in this scenario we do not
have dedicated access to the entire machine, we have conducted
\num{300}~measurements for each collective operation to compensate for
the possible variations and we have repeated each experiment
\num{10}~times. Figure\xspace~\ref{exp:vsc-alltoall-pers-vs-direct} compares
the run-times\xspace of the optimal torus all-to-all\xspace and the torus direct
algorithm with the MPI neighborhood all-to-all\xspace implementation and the
straightforward algorithm shown in
Listing~\ref{lst:s-round}. Similarly to our previous experiments, this
scenario emphasizes the advantage of the direct strategy in the case
of a fully-connected hardware topology. While \texttt{Iso\_\-neighbor\_\-alltoall}\xspace
outperforms the MPI implementation only for smaller message sizes, the
direct algorithm achieves the best run-time\xspace performance up to
\SI{1}{\kilo\byte}. For message sizes under \SI{512}{\byte}, both
implementations outperform the straightforward algorithm in the
\num{70x1}~processes scenario. When the total data size exchanged
increases in Figure\xspace~\ref{exp:vsc3-nnp16}, our implementations show less
improvement due to the larger number of processes per node.
Figure\xspace~\ref{exp:vsc3-comparison} compares the torus all-to-all\xspace
implementations with the straightforward algorithm. Even though the
neighborhood size is comparable to the first scenario, the overhead of
the optimal torus all-to-all\xspace algorithm relative to the direct algorithm
is smaller, showing the impact of the size of the neighborhood radius
(and therefore of the number of hops along each dimension) on the
operation run-time\xspace. Nevertheless, for small message sizes both
implementations provide better results than the straightforward
all-to-all\xspace algorithm.
Figure\xspace~\ref{exp:archer-alltoall-pers-vs-trivial} shows our results on
ARCHER\xspace. The MPI collectives perform much better here than was the
case for the other machines, such that our message-combining algorithms
for the small $r=1$ case show only a small advantage. The MPI
neighborhood collectives can apparently use the pipelining and
multi-ported capabilities of the ARCHER\xspace network better than our
send and receive based implementations. We have therefore compared our
message combining algorithm with the straightforward algorithm of
Listing~\ref{lst:s-round}, over which we can improve by large factors
(as for the other machines). Again, this shows that finding additive
bases that allow for many simultaneous communication operations is an
important optimization problem (Section\xspace~\ref{sec:optimizations}).
\section{Summary}
\label{sec:summary}
We proposed a specification for isomorphic (sparse) collective
communication, and used it to derive simple, message-combining
algorithms for all-to-all\xspace and allgather\xspace types of sparse collective
communication operations. We outlined two types of algorithms, one assuming a torus
communication network that is optimal in both the number of
communication rounds and the total number of messages sent, and one
assuming a more liberal torus allowing direct communication along the
torus dimensions that reduces both the number of rounds and the
communication volume. The latter algorithm lies in between the torus
algorithm and an algorithm using direct communication between
neighbors. Both types of algorithms were implemented and compared to
typical implementations of the corresponding MPI neighborhood
collective communication operations, against which our implementations
perform significantly better for smaller message sizes. In our
experiments we used (also asymmetric) variations of the Moore\xspace
neighborhoods. The experiments show that there is large room for
improvements of current implementations of the MPI neighborhood
collectives. Our algorithms could potentially be used to obtain such
improvements, but only if it is externally asserted (or can easily be
detected) that neighborhoods are indeed isomorphic.
Our isomorphic neighborhoods are embedded in $d$-dimensional tori, but
our schedules can easily be extended to tori that are non-periodic in
some or all dimensions, as can be defined with MPI Cartesian
topologies. Furthermore, it would be
possible to extend the idea of isomorphic neighborhoods also to other
regular underlying virtual topologies. Our experiments were performed
on non-torus systems, for which the virtual torus topology used to
describe relative neighborhoods is only a convenience. It would be
interesting to perform experiments on actual torus systems (Blue Gene
or K Computer), where the virtual topology has actually been mapped
efficiently onto the hardware topology.
For stencil-type computations, non-blocking communication is natural
to potentially overlap parts of the stencil update with neighborhood
communication. The proposed, persistent interface has a blocking
\texttt{Iso\_\-Start}\xspace operation. Similarly to what is currently being discussed in
the MPI community, it could be declared non-blocking by adding the
following call
\begin{lstlisting}
Iso_wait(Iso_request *request);
\end{lstlisting}
\noindent
at which local completion can be enforced. We think that this is a
valuable extension, for which algorithms and implementations should be
developed.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction and motivation}
\subsection{Nonuniform networks}
The field of natural-network study became popular when Watts and
Strogatz \cite{WaSt98} published their observations on the short
average path length and the high clustering coefficient of many
natural graphs, followed by the observations of scale-free
distributions \cite{BaAl99,FFFa99} in the degrees and other structural
properties of such networks. As a consequence of the resulting wide
interest in the properties of natural networks, there now exist
numerous models to meet the observations made on natural networks
\cite{DoMe03,Newm03,Virt03}.
\subsection{Graph clustering}
One of the properties of interest in the field of natural graphs is
the presence of \emph{clusters} or \emph{communities} \cite{NeGi03},
that is, the existence of dense induced subgraphs that have relatively
few connections outside compared to the internal density
\cite{Klei01}.
\emph{Graph clustering} is the task of grouping the vertices of the
graph into clusters taking into consideration the edge structure of
the graph in such a way that there should be many edges \emph{within}
each cluster and relatively few \emph{between} the clusters. For an
artificial example, see Figure \ref{fig:caveman}, which illustrates a
small graph with a clear six-cluster structure. Another classic
example is a small real-world social network studied by Zachary
\cite{Zach77} and often referred to in graph clustering papers
\cite{WuHu04,OrSc05,Newm03}. It is a social network of a small karate
club that was just about to split into two (see Figure
\ref{fig:karate}), making it an ideal case for two-classification
algorithms. For a survey on graph-clustering algorithms, see
\cite{Scha07}.
\begin{figure}
\centerline{\includegraphics[width=50mm]{fig_1_caveman_graph.eps}}
\caption{A \emph{caveman graph} \cite{Watt99} composed of six
near-cliques of five vertices each that have been connected into a
circulant graph by ``opening'' one edge from each clique (the
removed edge is shown with a dotted line).}
\label{fig:caveman}
\end{figure}
\begin{figure}
\centerline{\includegraphics{fig_2_karate_club.eps}}
\caption{The karate club social network studied by Zachary
\cite{Zach77}. The two groups into which the club split are
indicated by the shape with which the vertices are drawn: the
squares later formed their own club, and the circles formed another
club.}
\label{fig:karate}
\end{figure}
\subsection{Local clustering}
In \emph{local clustering}, the goal is to find the cluster of a given
\emph{seed vertex} $s \in V$. Hence, essentially, it is the task of
finding a \emph{bipartition} of the graph $G$ into two vertex sets $S$
and $V \setminus S$ such that $s \in S$ and $S$ makes a good cluster
in some predefined sense. Common cluster quality criteria include {\em
cut capacity} and related measures such as \emph{conductance}
\cite{SiSc06} or density-based measures \cite{Scha05}. Methods
motivated by electric networks have also been proposed for global and
local clustering alike \cite{WuHu04,OrSc05,NeGi04}.
\subsection{Spectra of graphs}
Let $G = (V, E)$ be an unweighted undirected connected graph with at
least two vertices. For simplicity, we focus on unweighted graphs,
although much of what follows can easily be generalised to the
weighted case. Denote the order of $G$, i.e.\ its number of vertices,
by $n$ and identify each vertex $v$ with a label in $\{1, 2, \ldots,
n\}$. Denote the seed vertex by $s$. The \emph{adjacency matrix} of
$G$ is the binary matrix $\matr{A}$, where $a_{ij} = 1$ if edge $\{i,j\}$ is
in $E$, and otherwise $a_{ij} = 0$.
For a weighted graph, one would consider instead the analogous
\emph{edge-weight matrix}. Note also that for multigraphs, edge
multiplicities can in the present context be considered simply as
integer weights. For an undirected graph, the adjacency (resp.\ edge
weight) matrix is symmetric, whereas directed graphs pose further
complications in the algebraic manipulation --- we refer the reader to
the textbook and other works of Chung
\cite{Chun97,dirlocal,AnCh07,heatkernel} for properties and local
clustering of directed graphs.
The \emph{degree} $\deg{v}$ of a vertex $v$ is the number (resp.\ total
weight) of its incident edges; thus the components of the \emph{degree
vector} $\d$ of $G$ are the row sums of $\matr{A}$. Denote by $\matr{D}$ the
diagonal $n\times n$ matrix formed by setting the diagonal elements
to $d_{ii} = \deg{i}$ and all other elements to zero.
Let $\matr{I}$ be the $n\times n$ unit matrix. The \emph{Laplacian} matrix
of $G$ is $\L = \matr{D} - \matr{A}$ and the \emph{normalised Laplacian} matrix of
$G$ is $\cal{L} = \invsqrt{\matr{D}} \L \invsqrt{\matr{D}} = \matr{I} - \invsqrt{\matr{D}} \matr{A}
\invsqrt{\matr{D}}$. Since both $\L$ and $\cal{L}$ are symmetric, all their
eigenvalues are real. It turns out that $\cal{L}$ is in some respects a
more natural object of study than $\L$, and we shall mostly focus on
that. It is easy to see that zero is an eigenvalue of both $\L$ and
$\cal{L}$, and for $\cal{L}$ it can be shown that all the other $n-1$
eigenvalues (counting multiplicities) lie in the interval $[0,2]$.
Denote these in increasing order as $0 = \mu_0 \leq \mu_1 \leq
\dots \leq \mu_{n-1} \leq 2$, and let $\u_i$ be some right
eigenvector associated to $\mu_i$. We may assume that the distinct
eigenvectors $\u_i$ are orthogonal to each other. For more
information on the spectral and algebraic properties of graphs, see
e.g.\ the excellent monographs of Biggs \cite{Bigg94} and Chung
\cite{Chun97}.
\subsection{Random walks}
\label{sec:random walks}
The \emph{simple random walk} on a graph $G$ is a Markov chain where
each vertex $v \in V$ corresponds to a state and the transition
probability from state $i$ to state $j$ is $p_{ij} = \inv{\deg{i}}$ if
$\{i,j\} \in E$ and zero otherwise. For a weighted graph, $p_{ij}$ is
the ratio of the weight of edge $\{i,j\}$ to the total weight of edges
incident to $i$.
Denote the transition probability matrix of this Markov chain by $\P =
\matr{D}^{-1} \matr{A}$. Note that even for undirected graphs, $\P$ is not
in general symmetric. However, it is similar to the matrix
\begin{equation}
{\cal P} = \sqrt{\matr{D}} \P \invsqrt{\matr{D}} = \invsqrt{\matr{D}} \matr{A} \invsqrt{\matr{D}}
\label{similar}
\end{equation}
which \emph{is} symmetric because $\matr{A}$ is the adjacency matrix of an
undirected graph. Thus, $\P$ and ${\cal P}$ have the same spectrum of
eigenvalues, which are all real. Moreover,
\begin{equation}
\begin{array}{rcl}
\displaystyle
\cal{L}
& = & \invsqrt{\matr{D}} \L \invsqrt{\matr{D}}
= \invsqrt{\matr{D}} (\matr{D} - \matr{A}) \invsqrt{\matr{D}} \\
\displaystyle
& = & \invsqrt{\matr{D}} (\matr{D} - \matr{D}\P) \invsqrt{\matr{D}}
= \matr{I} - \sqrt{\matr{D}}\P\invsqrt{\matr{D}} \\
\displaystyle
& = & \matr{I} - {\cal P}.
\end{array}
\label{eq:lapprob}
\end{equation}
Consequently, $\lambda$ is an eigenvalue of the normalised transition matrix ${\cal P}$
if and only if $\mu = 1 - \lambda$ is an eigenvalue of the normalised
Laplacian matrix $\cal{L}$. Thus, $\P$, ${\cal P}$ and $\cal{L}$ have the following
correspondence: $\vect{v}$ is a right eigenvector associated to eigenvalue $\lambda$ in
$\P$ if and only if $\u = \sqrt{\matr{D}}\vect{v}$ is a right eigenvector associated
to the same eigenvalue in ${\cal P}$, and to eigenvalue $1 - \lambda$ in
$\cal{L}$.
Since in the case of Markov chains, \emph{left} eigenvectors are also
of interest, let us note in passing that the analogous correspondence
holds between each left eigenvector $\pi$ of $\P$ and left eigenvector
$\rho = \pi\invsqrt{\matr{D}}$ of ${\cal P}$ or $\cal{L}$.
Denote the eigenvalues of $\P$ in decreasing order as $\eigval{\P}{0}
\geq \eigval{\P}{1} \geq \dots \geq \eigval{\P}{n-1}$. Since $\P$ is
a stochastic matrix, it always has eigenvalue $\eigval{\P}{0} = 1$,
corresponding to the smallest Laplacian eigenvalue $\leigval{\cal{L}}{0} =
0$. All the other eigenvalues of $\P$ satisfy $|\eigval{\P}{i}| \leq
1$. If moreover $G$ is connected and not bipartite, the Markov chain
determined by $\P$ is ergodic, in which case $|\eigval{\P}{i}| < 1$
for all $i \geq 1$. Without much loss of generality, we shall
assume this condition, and moreover that all the eigenvalues $\eigval{\P}{i}$
are nonnegative. Both of these conditions can be enforced by
considering, if necessary, instead of $\P$ the ``lazy random walk''
with transition matrix
\begin{equation}
\P' = \frac{1}{2}(\matr{I} + \P).
\end{equation}
For a connected graph $G$ this chain is ergodic, and has
nonnegative eigenvalues
\begin{equation}
\eigval{\P'}{i} = \frac{1}{2}(1+\eigval{\P}{i}),
\end{equation}
with the same eigenvectors as $\P$.
Let us then consider a transition matrix $\hat{\P}$ obtained from $\P$
by making a given state, or vertex $s$ \emph{absorbing}.
Thus, $\hat{\P}$ is otherwise equal to $\P$, but all
$\hat{p}_{si} = 0$ except for $\hat{p}_{ss} = 1$.
We shall henceforth assume, for simplicity of notation, that
$s = n$, so that in particular $\hat{\P}$ has the block structure:
\begin{equation}
\hat{\P} =
\left(
\begin{array}{c|c}
 & p_{1n} \\
\matr{Q} & \vdots \\
 & p_{n-1,n} \\
\hline
0 \cdots 0 & 1
\end{array}
\right)
\label{eq:HPblocks}
\end{equation}
The \emph{absorption time} $m_i$ from vertex $i \neq s$ to the seed
vertex $s$ is the expected number of steps that a walk initiated at
$i$ will take before hitting $s$. Intuitively, as the absorption time
measures in a certain sense the proximity of vertex $i$ to vertex $s$,
vertices belonging to a good cluster $S$ for $s$, if such a cluster
exists, should have characteristically smaller absorption times to $s$
than vertices in $V \setminus S$. Note that not all graphs exhibit a
clustered structure, in which case no clustering method will be able
to pinpoint a high-quality cluster \cite{Scha07}.
It is well known that the absorption times to vertex $s = n$
can be calculated
as row sums
\begin{equation}
m_i = m_{i,1} + m_{i,2} + \ldots + m_{i, n-1}
\label{eq:rowsums}
\end{equation}
of the {\em fundamental matrix}
\begin{equation}
\matr{M} = \matr{I} + \matr{Q} + \matr{Q}^2 + \matr{Q}^3 + \ldots =
\inv{(\matr{I} - \matr{Q})},
\label{eq:fundamental}
\end{equation}
where $\matr{Q}$ is the matrix obtained from $\hat{\P}$ (or equivalently from $\P$)
by eliminating the row and column corresponding to vertex $s = n$
(as shown above in Equation~(\ref{eq:HPblocks})).
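As a concrete rendering of this computation, the following hypothetical
Python/NumPy helper implements Equations~(\ref{eq:rowsums})
and~(\ref{eq:fundamental}) directly; it is a minimal sketch, not the
code used to produce the figures below.
\begin{verbatim}
# Make vertex s absorbing, form Q by deleting row and column s from P,
# and read off the absorption times as the row sums of the fundamental
# matrix M = (I - Q)^{-1}.
import numpy as np

def absorption_times(A, s):
    """Expected number of steps to hit vertex s, per start vertex."""
    P = A / A.sum(axis=1, keepdims=True)
    keep = [i for i in range(len(A)) if i != s]
    Q = P[np.ix_(keep, keep)]                  # transient part
    M = np.linalg.inv(np.eye(len(keep)) - Q)   # fundamental matrix
    return M.sum(axis=1)                       # m_i = sum_j M_{i,j}
\end{verbatim}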
\begin{figure}
\centerline{\includegraphics[width=60mm]{fig_3_abstime_matrix.eps}}
\caption{The absorption time matrix composed of 30 absorption-time
vectors using each vertex of the caveman graph of Figure
\ref{fig:caveman} in turn as a seed vertex, with white corresponding
to the maximum $m_{i,j}$ thus obtained and black corresponding to
the minimum $m_{i,j}$ \emph{and} the diagonal zeroes.}
\label{fig:caveman_abstime}
\end{figure}
In Figure \ref{fig:caveman_abstime}, we illustrate the absorption
times in the caveman graph of Figure~\ref{fig:caveman}: we computed
with Matlab the absorption times from all vertices to a given seed vertex
$j$, repeated the computation for each $j \in V$, and formed a matrix
where each column represents the absorption-time vector for the
corresponding vertex $j$.
The columns are ordered so that all absorption-time vectors associated
to a given cave are grouped together, before those of the next cave,
and so forth. The matrix is visualised as a gray-scale colour map by
placing a tiny \emph{black} square where either $m_{i,j} = 0$ (that is, along
the diagonal) or $m_{i,j} = 10.6$ (the minimal off-diagonal absorption
time observed), a \emph{white} square where $m_{i,j} = 319.6$
(the maximum observed), and discretising the
intermediate values to 254 gray-scale colours correspondingly. The
caves can be distinguished as dark five-by-five blocks along the diagonal,
although the matrix is somewhat too noisy to be trivially clustered.
Now consider the eigenvalue spectra of matrices $\hat{\P}$ and $\matr{Q}$.
Matrix $\hat{\P}$ is still stochastic, so it has largest eigenvalue
$\eigval{\hat{\P}}{0} = 1$, and since the chain is absorbing, all the other
eigenvalues satisfy $|\eigval{\hat{\P}}{i}| < 1$, $i = 1,\dots,n-1$.
Denote $\cal{Q} = \sqrt{\matr{D}}\matr{Q}\invsqrt{\matr{D}}$, where $\matr{D} =
\textrm{diag}(d_1,\dots,d_{n-1})$. As $\cal{Q}$ is symmetric (it is obtained by
eliminating the last row and column from the symmetric matrix ${\cal P}$)
and $\matr{Q}$ is similar to $\cal{Q}$, both have a spectrum of
real eigenvalues $\spec{\matr{Q}} = \{\eigval{\matr{Q}}{1} \geq \dots \geq
\eigval{\matr{Q}}{n-1}\}$. This spectrum is properly contained in the
interval $[-1, 1]$: for any vertex $i \neq n$ adjacent to $n$ we have
$p_{in} > 0$, so the $i$\th\ row sum of $\matr{Q}$ is less than $1$; since
$G$ is connected, this substochasticity implies that the spectral radius
of $\matr{Q}$ is strictly less than $1$.
We claim that in fact
\begin{equation}
\spec{\matr{Q}} = \spec{\hat{\P}} \setminus \{1\}.
\end{equation}
To prove this claim, let namely $\lambda \neq 1$ be any non-principal
eigenvalue of $\hat{\P}$ and $\vect{v}$ a corresponding eigenvector, so that
$\hat{\P} \vect{v} = \lambda \vect{v}$. Since the $n$\th\ row of $\hat{\P}$ is zero
except for $\hat{p}_{nn} = 1$, it follows that $\lambda v_n = (\hat{\P}
\vect{v})_n = v_n$, and since $\lambda \neq 1$, necessarily $v_n =
0$. Then for the $(n-1)$-dimensional vector $\vect{v}' =
(v_1,\dots,v_{n-1})$ and for any $i = 1,\dots,n-1$ it holds that:
\begin{equation}
\begin{array}{rcl}
\displaystyle
(\matr{Q} \vect{v}')_i
& = & \displaystyle
\sum_{j=1}^{n-1} p_{ij}v'_j
= \sum_{j=1}^{n-1} p_{ij}v_j
= \sum_{j=1}^n p_{ij}v_j - p_{in}v_n \\
& = & (\hat{\P} \vect{v})_i - v_n p_{in}
= (\hat{\P} \vect{v})_i \\
& = & \lambda v_i
= \lambda v'_i.
\end{array}
\label{eq:Qeigvect}
\end{equation}
Consequently, $\vect{v}'$ is an eigenvector associated to
eigenvalue $\lambda$ of $\matr{Q}$. Since $\lambda$ was chosen
arbitrarily from $\spec{\hat{\P}}\setminus\{1\}$, this
establishes that $\spec{\hat{\P}}\setminus\{1\} \subseteq \spec{\matr{Q}}$.
For the converse direction, a similar argument shows that if
$\vect{v}' = (v_1,\dots,v_{n-1})$ is an eigenvector associated
to an eigenvalue $\lambda$ of $\matr{Q}$, then the vector
$\vect{v} = (v_1,\dots,v_{n-1},0)$ is an
eigenvector associated to eigenvalue $\lambda$ of $\hat{\P}$.
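The claim is also easy to confirm numerically. Reusing the matrix
\texttt{P} from the earlier sketch, the following purely illustrative
check makes the last vertex absorbing and compares the two spectra.
\begin{verbatim}
# Numerical check that spec(P_hat) \ {1} = spec(Q).
import numpy as np

P_hat = P.copy()
P_hat[-1, :] = 0.0
P_hat[-1, -1] = 1.0                       # last vertex absorbing
Q = P_hat[:-1, :-1]
ev_hat = np.sort(np.linalg.eigvals(P_hat).real)
ev_Q = np.sort(np.linalg.eigvals(Q).real)
assert np.allclose(ev_hat, np.append(ev_Q, 1.0))
\end{verbatim}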
\section{Spectral methods for bipartitioning}
\subsection{Fiedler vectors}
Spectral clustering of points in space, often modelled as (complete)
weighted graphs, is a widely studied topic \cite{HKK07, KVV04}. In the
context of graphs, the technique is usually applied so that some right
eigenvector associated to the smallest nonzero eigenvalue
$\leigval{\L}{1}$ of $\L$ is used to produce a bipartitioning of the
graph such that those vertices that have negative values in the
eigenvector form one side of the bipartition $S$ and the vertices with
positive values are the other side $V \setminus S$. These
eigenvectors are called \emph{Fiedler vectors} following
\cite{Fied73,Fied75}, where the technique was first proposed. The
corresponding eigenvectors based on $\cal{L}$ are called \emph{normalised}
Fiedler vectors. The works on Fiedler-vector based spectral clustering
are numerous and go back for decades \cite{SpTe96,QiHa06,HoSu99}.
For our example graph illustrated in Figure \ref{fig:caveman}, such a
bipartition based on $\L$ puts three of the caves in $S$ such that it
assigns negative values to every other cave along the cycle of six
caves. Using the eigenvector of $\cal{L}$, however, assigns only negative
values in the vector and does not yield an intuitive division that
preserves the caves. The two vectors are visualised in Figure
\ref{fig:laplace}.
\begin{figure}
\centerline{\includegraphics[width=128mm]{fig_4_laplace_caveman.eps}}
\caption{The components of the Fiedler vector (left) and the
normalised Fiedler vector (right) for the caveman graph of Figure
\ref{fig:caveman}. For the human eye, the six-cluster structure is
evident in the Fiedler vectors, whereas in the normalised Fiedler vector
the vertices are grouped into four clusters (two of them consisting
of two caves).}
\label{fig:laplace}
\end{figure}
If there are only two natural clusters in the graph, such bipartition
works nicely. An example is the Zachary karate club network of Figure
\ref{fig:karate}: the corresponding Fiedler vectors are shown in
Figure \ref{fig:lapkarate}. Also, recursively performing bipartitions
on the subgraphs induced by $S$ and $V \setminus S$ will help cluster
the input graph $G$ in more than two clusters, but a \emph{stopping
condition} needs to be imposed to determine when to stop
bipartitioning the resulting subgraphs further.
\begin{figure}
\centerline{\includegraphics[width=128mm]{fig_5_laplace_karate.eps}}
\caption{The components of the Fiedler vector (left) and the
normalised Fiedler vector (right) for the karate club graph of Figure
\ref{fig:karate}. The vertices can be classified in two groups:
those with positive values in the Fiedler vector and those with
negative values.}
\label{fig:lapkarate}
\end{figure}
\subsection{Spectral partitioning as integer program relaxation}
The use of Fiedler vectors for graph bipartitioning can be motivated
as follows (see for example \cite{HKK07}). Denote a \emph{cut}
(bipartition) of a graph $G = (V,E)$ into vertex sets $S$ and $\bar{S}
= V \setminus S$ as $(S,\Sbar)$. The {\em capacity} of a cut $(S,\Sbar)$ is
defined as
\begin{equation}
C(S,\Sbar) = \left|\{\{i,j\} \in E : i \in S, j \in \bar{S}\}\right|.
\end{equation}
A cut $(S,\Sbar)$ can be conveniently represented by an indicator vector
$\vect{v} \in \{+1,-1\}^n$, where $v_i = +1$ if $i \in S$, and $v_i = -1$
if $i \in \bar{S}$.
Then
\begin{equation}
C(S,\Sbar) = \frac{1}{4} \sum_{i \sim j} (v_i - v_j)^2,
\end{equation}
where the sum is over all the (undirected) edges $\{i,j\} \in E$.
For simplicity, assume now that $|V| = n$ is even, and
consider the task of finding an {\em optimal bisection} of $G$,
i.e.\ a cut $(S,\Sbar)$ that satisfies $|S| = |\bar{S}| = n/2$
and minimises $C(S,\Sbar)$ subject to this condition.
This is equivalent to finding an indicator vector
$\vect{v} \in \{+1,-1\}^n$ that satisfies $\sum_i v_i = 0$ and
minimises the quadratic form $\sum_{i \sim j} (v_i - v_j)^2$,
or equivalently (since $n$ is fixed) minimises the ratio:
\begin{align*}
\frac{\frac{1}{4}\sum_{i \sim j} (v_i - v_j)^2}{n/4}
& = \frac{\sum_{i \sim j} (v_i - v_j)^2}{n}\\
& = \frac{\sum_{i \sim j} (v_i - v_j)^2}{\sum_i v_i^2}.
\end{align*}
Since the all-ones vector $\mathbf{1}$ is associated to the
eigenvalue $\leigval{\L}{0} = 0$, we have by the
Courant-Fischer characterisation of the smallest
nonzero eigenvalue $\leigval{\L}{1}$:
\begin{equation}
\leigval{\L}{1}
\quad = \quad
\min_{\vect{v} \bot \mathbf{1}}\frac{\transpose{\vect{v}} \L \vect{v}}{\transpose{\vect{v}} \vect{v}}
\quad = \quad
\min_{\sum_i v_i = 0}
\frac{\sum_{i \sim j} (v_i - v_j)^2}
{\sum_i v_i^2},
\end{equation}
where the minimum is taken over all vectors $\vect{v} \neq 0$
satisfying the given condition. Since we can without
loss of generality also constrain the minimisation to,
say, the vectors of norm $\|\vect{v}\|^2 = n$, we see that
the task of finding a Fiedler vector of $G$ is in fact
a fractional relaxation of the combinatorial problem of
determining an optimal bisection of $G$.
This correspondence motivates the previously indicated
spectral approach to bisectioning a connected graph $G$
\cite{DoHo73,Fied73}:
\begin{enumerate}
\item Compute Fiedler vector $\vect{v} \in \mathbb{R}^n$ of $G$.
\item Determine cut $(S,\Sbar)$ by rule:
\begin{equation}\left\{
\begin{array}{lcl}
v_i > \theta & \Rightarrow & i \in S, \\
v_i < \theta & \Rightarrow & i \in \bar{S},
\end{array}
\right.
\end{equation}
\end{enumerate}
where $\theta$ is the median value of the $v_i$'s.
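The procedure can be sketched in a few lines of Python/NumPy; the
function below is an illustration added here, and it assumes the input
graph is connected, so that the second column returned by the symmetric
eigensolver corresponds to the smallest nonzero eigenvalue.
\begin{verbatim}
# Spectral bisection: Fiedler vector of L = D - A, median split.
import numpy as np

def spectral_bisection(A):
    d = A.sum(axis=1)
    L = np.diag(d) - A
    w, V = np.linalg.eigh(L)        # ascending eigenvalues
    fiedler = V[:, 1]               # smallest nonzero eigenvalue
    theta = np.median(fiedler)
    S = np.where(fiedler > theta)[0]
    S_bar = np.where(fiedler <= theta)[0]   # median ties go to S_bar
    return S, S_bar
\end{verbatim}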
The use of \emph{normalised} Fiedler vectors for graph
bipartitioning was explored in \cite{ShMa00},
where it was shown that Fiedler vectors of $\cal{L}$
yield fractionally optimal graph bipartitions
according to the {\em normalised cut capacity} measure:
\begin{equation}\hat{C}(S,\Sbar) = \frac{C(S,\Sbar)}{\mbox{Vol}(S)} +
\frac{C(S,\Sbar)}{\mbox{Vol}(\bar{S})},
\end{equation}
where $\mbox{Vol}(S) = \sum_{i \in S} d_i$.
\ignore{
Note that this is quite close to the notion of {\em conductance}
of cut $\bar{S}$:
\begin{equation}
\Phi(S,\Sbar) = \max\{\frac{C(S,\Sbar)}{\mbox{Vol}(S)},
\frac{C(S,\Sbar)}{\mbox{Vol}(\bar{S})}\}.
\end{equation}
}
Since $\u$ is an eigenvector of $\cal{L}$ with
eigenvalue $\lambda$ if and only if $\vect{v} = \invsqrt{\matr{D}} \u$
is an eigenvector of $\matr{D}^{-1}\L$ with eigenvalue $\lambda$,
the eigenvalue $\leigval{\cal{L}}{1}$ can be characterised
in terms of a ``degree-adjusted'' Rayleigh quotient:
\begin{equation}
\leigval{\cal{L}}{1}
= \min_{\u \bot \sqrt{\matr{D}}\mathbf{1}}
\frac{\transpose{\u} \cal{L} \u}{\transpose{\u} \u}
= \min_{\vect{v} \bot \matr{D}\mathbf{1}}
\frac{\sum_{i \sim j} (v_i - v_j)^2}{\sum_i d_i v_i^2}.
\end{equation}
\ignore{
Note that
\begin{equation}\vect{v} \bot D\mathbf{1} \iff \sum_i v_i d_i = 0.\end{equation}
}
A natural extension of the spectral clustering idea to the local
clustering context is to consider the Laplacian $\L$ or $\cal{L}$ together
with the \emph{Dirichlet boundary condition} that only clustering
vectors $\vect{v}$ with the seed vertex $v_s$ fixed to some particular
value are acceptable solutions.
We follow \cite{Chun97,ChEl02} in using the normalised Laplacian $\cal{L}$
and choosing $v_s = 0$, or equivalently $u_s = (\sqrt{\matr{D}}\vect{v})_s = 0$
as the boundary condition. We thus aim to cluster according to the
``Dirichlet-Fiedler vector'' minimising the constrained Rayleigh
quotient:
\begin{equation}
\min_{\u: u_s = 0} \frac{\transpose{\u} \cal{L} \u}{\transpose{\u} \u}
= \min_{\vect{v}: v_s = 0}
\frac{\sum_{i \sim j} (v_i - v_j)^2}{\sum_i d_i v_i^2}.
\label{eq:rayleigh}
\end{equation}
For notational simplicity, assume again that $s = n$, and observe
that for every vector $\u = (u_1,\dots,u_{n-1},0)$, the value of
the Rayleigh quotient in equation~(\ref{eq:rayleigh}) is the same
as the value of the $(n-1)$-dimensional quotient with respect to vector
$\u' = (u_1,\dots,u_{n-1})$ and Laplacian $\cal{L}'$ which equals
$\cal{L}$ with its $n$\th\ row and column removed. Thus, our clustering
vector $\vect{v}$ is, except for the final zero, the one minimising:
\begin{equation}
\min_{\u'} \frac{\transpose{(\u')} \cal{L}' \u'}{\transpose{(\u')} \u'}
= \min_{\vect{v}'}
\frac{\sum_{i \sim j} (v'_i - v'_j)^2}{\sum_i d_i (v'_i)^2},
\label{eq:rayleighp}
\end{equation}
i.e.\ $\vect{v}' = \invsqrt{\matr{D}}\u'$ for the principal eigenvector
$\u'$ of the Laplacian $\cal{L}'$. Let us denote $\vect{v} = \vv^f$ and call
this the \emph{local Fiedler vector} associated to graph $G$ and
seed vertex $s = n$.
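Numerically, this construction amounts to a single symmetric eigensolve
on the reduced matrix. The following sketch (illustrative only, not
taken from the original experiments) returns the local Fiedler vector
for a given seed vertex $s$.
\begin{verbatim}
# Remove row/column s from the normalised Laplacian, take the
# eigenvector u' of its smallest eigenvalue, map back v' = D^{-1/2} u'.
import numpy as np

def local_fiedler(A, s):
    d = A.sum(axis=1)
    Dm12 = np.diag(1.0 / np.sqrt(d))
    cL = np.eye(len(A)) - Dm12 @ A @ Dm12      # normalised Laplacian
    keep = [i for i in range(len(A)) if i != s]
    cLp = cL[np.ix_(keep, keep)]               # Dirichlet boundary at s
    w, U = np.linalg.eigh(cLp)
    return U[:, 0] / np.sqrt(d[keep])          # v' = D^{-1/2} u'
\end{verbatim}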
\section{Local Fiedler vectors and absorption times of random walks}
We shall now show that the components of the local Fiedler vector $\vv^f
= (v_1, \dots, v_{n-1})$ are in fact approximately proportional to the
absorption times $m_i$ discussed in Section~\ref{sec:random walks}.
This connection provides a natural
interpretation of the notion of the local Fiedler vector, and yields
further support to the idea of local clustering by constrained
spectral techniques. Previously random walks and spectral clustering
have been jointly addressed by Meila and Shi \cite{MiSh01} and local
clustering by PageRank by Andersen, Chung, and Lang \cite{dirlocal}.
Important papers linking structural properties of graphs to convergence
rates of random walks via spectral techniques are \cite{Alon86,SiJe89}.
Observe first, from equation~(\ref{eq:lapprob}), that:
\begin{equation}
\cal{L}' = \matr{I} - \sqrt{\matr{D}}\matr{Q}\invsqrt{\matr{D}} = \matr{I} - \cal{Q},
\end{equation}
where $\matr{Q}$ is as in Equation~(\ref{eq:HPblocks})
and $\matr{D} = \textrm{diag}(d_1,\dots,d_{n-1})$.
Since $\cal{Q}$ is similar to $\matr{Q}$, its spectrum satisfies:
\begin{equation}
\spec{\cal{Q}} = \spec{\matr{Q}} = \spec{\hat{\P}} \setminus \{1\}.
\end{equation}
Thus, $\mu \neq 0$ is an eigenvalue of $\cal{L}'$
if and only if $\lambda = 1-\mu \neq 1$
is an eigenvalue of both $\cal{Q}$ and $\matr{Q}$.
Moreover, if $\u$ is an eigenvector associated to eigenvalue
$\lambda$ in $\cal{Q}$, then $\vect{v} = \invsqrt{\matr{D}} \u$ is an
eigenvector associated to the same eigenvalue in $\matr{Q}$.
Let then the eigenvalues of $\cal{Q}$ (or equivalently $\matr{Q}$) be $1 >
\lambda_1 \geq \dots \geq \lambda_{n-1} \geq 0$. Since $\cal{Q}$ is symmetric, it
has a corresponding orthonormal system of eigenvectors
$\u_1,\dots,\u_{n-1}$ and a representation:
\begin{equation}
\cal{Q} = \displaystyle\sum_{i = 1}^{n - 1} \lambda_i \u_i \transpose{\u_i}.
\label{eq:cqsumrep}
\end{equation}
Denoting the component matrices
$\matr{U}_i = \u_i \transpose{\u_i}$, we observe that by
orthogonality of the eigenvectors we have
$\matr{U}_i \matr{U}_j = 0$ for $i \neq j$, and by normalisation $\matr{U}_i^2 = \matr{U}_i$.
From these two observations it follows that:
\begin{equation}
\cal{Q}^t = \displaystyle\sum_{i = 1}^{n-1} \lambda_i^t \matr{U}_i,
\qquad \text{for } t = 0,1,\dots
\end{equation}
Since $\matr{Q} = \invsqrt{\matr{D}} \cal{Q} \sqrt{\matr{D}}$,
we obtain from this for $\matr{Q}^t$ the representation:
\begin{equation}
\matr{Q}^t =\invsqrt{\matr{D}} \cal{Q}^t \sqrt{\matr{D}}
= \displaystyle
\sum_{i = 1}^{n - 1} \lambda_i^t
(\invsqrt{\matr{D}} \u_i)(\transpose{\u_i} \sqrt{\matr{D}})
= \displaystyle
\sum_{i = 1}^{n-1} \lambda_i^t \vect{v}_i \transpose{\vect{v}_i} \matr{D},
\end{equation}
where $\vect{v}_i = \invsqrt{\matr{D}}\u_i$ is an eigenvector associated
to eigenvalue $\lambda_i$ in $\matr{Q}$.
Substituting this to Equation~(\ref{eq:fundamental}) and denoting
the $(n-1)$-dimensional all-ones vector by $\mathbf{1}$, we thus obtain
an expression for the vector $\vect{m}$ of absorption times $m_i$
in terms of the eigenvalues and eigenvectors of $\matr{Q}$, or
equivalently $\cal{Q}$:
\begin{equation}
\begin{array}{rcl}
\vect{m} &=& \displaystyle
\sum_{t = 0}^\infty \matr{Q}^t \mathbf{1} \\
&=& \displaystyle
\sum_{t = 0}^\infty
\sum_{i = 1}^{n-1} \lambda_i^t \vect{v}_i \transpose{\vect{v}_i} \matr{D} \mathbf{1} \\
&=& \displaystyle
\sum_{t = 0}^\infty
\left(\sum_{i = 1}^{n-1} \lambda_i^t \vect{v}_i \transpose{\vect{v}_i} \right) \d ,
\end{array}
\label{eq:abssum}
\end{equation}
where $\d = \transpose{(d_1,\dots,d_{n-1})}$.
Now if the principal eigenvalue $\lambda_1$ is well-separated
from the others, i.e.\ if the ratio $|\lambda_i / \lambda_1|$ is small
for $i > 1$, this yields a good approximation for $\vect{m}$:
\begin{equation}
\begin{array}{rcl}
\vect{m} &=& \displaystyle
\mathbf{1} +
\sum_{t = 1}^\infty
\lambda_1^t \left(\vect{v}_1 \transpose{\vect{v}_1} \d +
\underbrace{
\sum_{i = 2}^{n-1}
\left(\frac{\lambda_i}{\lambda_1}\right)^t \vect{v}_i \transpose{\vect{v}_i} \d
}_{\text{small-norm ``noise''}}
\right) \\
&\approx& \displaystyle
\mathbf{1} + \sum_{t = 1}^\infty \lambda_1^t \vect{v}_1 \transpose{\vect{v}_1} \d \\
&=& \displaystyle
\mathbf{1} + \frac{\lambda_1}{1 - \lambda_1} \vect{v}_1 \transpose{\vect{v}_1} \d.
\end{array}
\label{eq:approxabsvect}
\end{equation}
Even in cases where there is no evident gap in the spectrum and hence
near-equality cannot be assumed, we have found in our experiments that
the approximations obtained are near-perfectly correlated with the exact
absorption times for a variety of graphs.
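Such a comparison is straightforward to reproduce. The hypothetical
snippet below, reusing the test matrix \texttt{A} and the function
\texttt{absorption\_times} from the earlier sketches, evaluates the
rank-one approximation of Equation~(\ref{eq:approxabsvect}) and prints
its Pearson correlation with the exact times.
\begin{verbatim}
# Rank-one approximation m ~ 1 + lambda_1/(1 - lambda_1) v_1 (v_1^T d).
import numpy as np

def approx_absorption_times(A, s):
    d = A.sum(axis=1)
    keep = [i for i in range(len(A)) if i != s]
    Q = (A / d[:, None])[np.ix_(keep, keep)]
    Dh = np.sqrt(d[keep])
    cQ = Q * Dh[:, None] / Dh[None, :]     # symmetric D^{1/2} Q D^{-1/2}
    w, U = np.linalg.eigh(cQ)
    lam1, u1 = w[-1], U[:, -1]             # principal eigenpair
    v1 = u1 / Dh                           # eigenvector of Q itself
    return 1.0 + lam1 / (1.0 - lam1) * v1 * (v1 @ d[keep])

s = len(A) - 1
print(np.corrcoef(absorption_times(A, s),
                  approx_absorption_times(A, s))[0, 1])
\end{verbatim}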
We study three example graphs to point out the strengths and
weaknesses of the proposed approximation. The first example graph is
the clustered but highly symmetric caveman graph of Figure
\ref{fig:caveman}, where the symmetries present cause problems for the
proposed approximation. Our second example is the karate club network
shown in Figure \ref{fig:karate}. The third example graph is a uniform
random graph $\mathcal{G}_{n, p}$, with $n = 100$ and $p = 0.1$
\cite{Gilb59}, which by definition has no clear cluster structure, and
hence the absorption times cannot be expected to have interesting patterns.
In Figure~\ref{fig:examples}, we show comparisons of some approximate
and exact spectral computations for three example graphs.
In each case, the highest-numbered vertex
of the graph has been chosen as the unique seed vertex.
It can be noted, from the top row of plots in Figure~\ref{fig:examples},
that the spectra of the graphs' $\hat{\P}$ matrices do not exhibit
large gaps between their second and third largest eigenvalues.
Thus, it can not be expected \emph{a priori} that the Fiedler-vector
based approximations to the absorption times, from
Equation~(\ref{eq:approxabsvect}), would be even
of the same magnitude as the exact ones, as calculated from
Equations~(\ref{eq:rowsums}) and~(\ref{eq:fundamental}).
(Observe also how the structure of the caveman graph is reflected
in the corresponding $\hat{\P}$ spectrum: a notable eigenvalue gap
occurs after the six largest eigenvalues, each representing the
dominant convergence behaviour of one of the clusters.)
Correlations between the approximate and exact absorption times
are apparent in the quantile-quantile plots presented in the
second row of Figure~\ref{fig:examples}: here the values
group diagonally when a linear dependency exists.
The correlation is very high in all
cases: $0.99863$ for the caveman graph, $0.99636$ for the karate club
network, and $0.99999$ for the uniform random graph.
The two lowest rows in Figure~\ref{fig:examples} present the
actual values of the exact and approximate absorption-time vectors,
indexed by vertex number. These plots illustrate the usefulness
of these quantities for performing a two-classification of
the vertices into the local cluster of the seed vertex
(low values) versus the other vertices (high values).
In fact, for the caveman graph, the full
six-cluster structure is visible. In the karate club network it can be
seen that two groups are present: one with high values and another one
with low values. (Cf.\ Figure \ref{fig:karateclass}, which indicates
the ``ground truth'' clustering of the vertices in this graph.)
As expected, the uniform random graph
reveals no significant cluster structure, but the vertices near the
seed vertex can be identified by their lower values, whereas most of
the graph has another, higher value.
\begin{figure}
\centerline{\includegraphics[width=128mm]{fig_6_examples.eps}}
\caption{Comparisons of approximate and exact spectral computations
for three
example graphs: the small graphs of Figures \ref{fig:caveman} and
\ref{fig:karate}, and a uniform random graph $\mathcal{G}_{n, p}$,
using a random vertex as the seed vertex.
The top row presents the sorted spectra of the
$\hat{\P}$ matrices of the graphs,
the second row plots the approximate and exact absorption-time
values for the given seed vertex against each other,
and the lowest two rows indicate the exact and approximate
absorption-time values as ordered by vertex number.
The bottom rows can be seen as illustrating the quantities'
capability of distinguishing the cluster of the seed vertex
(low values) from the other vertices (high values).}
\label{fig:examples}
\end{figure}
In practice, it is not always interesting to compute the absorption
times for all vertices, especially in local computation, in which case
we may only have approximated some of the components of the Fiedler
vector. For these situations, we may write the $k$\th\ component of
the result vector as
\begin{equation}
\begin{array}{rcl}
(\matr{Q}^t \mathbf{1})_k &=& \displaystyle
(\sum_{i = 1}^{n-1} \lambda_i^t \vect{v}_i \transpose{\vect{v}_i} \matr{D} \mathbf{1})_k \\
&=& \displaystyle
(\sum_{i = 1}^{n-1} \lambda_i^t (\transpose{\vect{v}_i} \d) \vect{v}_i)_k \\
&=& \displaystyle
\sum_{i = 1}^{n-1} \lambda_i^t (\vect{v}_i)_k
\underbrace{
\sum_{\ell = 1}^{n-1} (\vect{v}_i)_\ell (\d)_{\ell}
}_{c_i}.
\end{array}
\end{equation}
From this we obtain for the absorption time from vertex $k$
to vertex $s$ the expression
\begin{equation}
\begin{array}{rcl}
m_k &=& \displaystyle
\sum_{t = 0}^\infty (\matr{Q}^t \mathbf{1})_k \\
&=& \displaystyle
1 + \sum_{t = 1}^\infty \lambda_1^t
\left(c_1 \cdot (\vect{v}_1)_k
+ \sum_{i = 2}^{n-1} \left(\frac{\lambda_i}{\lambda_1}\right)^t
c_i \cdot (\vect{v}_i)_k
\right) \\
&\approx& \displaystyle
1 + \sum_{t = 1}^\infty \lambda_1^t \cdot c_1 \cdot (\vect{v}_1)_k \\
& = & \displaystyle
1 + \underbrace{
\frac{\lambda_1}{1 - \lambda_1} \cdot c_1
}_{c'} \cdot (\vect{v}_1)_k.
\end{array}
\label{eq:fiedlerabsapprox}
\end{equation}
Now for a given graph $G$, $c'$ is a constant and so we obtain
the very simple approximate correspondence $\vect{m} \approx \mathbf{1} +
c' \vv^f$ between the absorption time vector $\vect{m}$ and the local
Fiedler vector $\vv^f = \vect{v}_1$.
In order to compare the quality of the approximation as well as to
illustrate the computational load in approximating by summing term by
term the series of Equation~(\ref{eq:abssum}), we calculated for each
cutoff length the sum of squares of the differences between the
partial sums and the exact absorption times, divided by the order of
each of the three example graphs: the graph of Figure
\ref{fig:caveman}, the Zachary karate club graph of Figure
\ref{fig:karate}, and the uniform random graph $\mathcal{G}_{n,
p}$. The resulting values over the set of vertices are shown in
Figure \ref{fig:absconv} (on the left) together with the Pearson
correlations (on the right) achieved at each iteration. In both
plots, mean and standard deviation are shown.
\begin{figure}
\centerline{\includegraphics[width=128mm]{fig_7_correlations.eps}}
\caption{The sum of squares of the difference from the exact
absorption-time (on the left) of estimate vectors with different
cutoff values for approximating through Equation~(\ref{eq:abssum}) and
Pearson correlation between the exact and the estimate vectors (on
the right) for the three example graphs: the small graphs of Figures
\ref{fig:caveman} and \ref{fig:karate}, and the $\mathcal{G}_{n,
p}$. The values shown are averaged over the vertex sets of the two
small graphs and over a set of 30 vertices selected uniformly at
random for the $\mathcal{G}_{n, p}$ graph. The smallest standard
deviation corresponds to the caveman graph and the largest to the
uniform random graph. The horizontal lines (all three overlap
between $0.980$ and $0.997$) correspond to the average correlation
coefficients between the exact and the approximate absorption times
of Equation~(\ref{eq:approxabsvect}).}
\label{fig:absconv}
\end{figure}
\section{Local approximation of Fiedler vectors}
We take as a starting point the Rayleigh quotient of
Equation~(\ref{eq:rayleighp}). Since we are free to normalise our
eventual Fiedler vector $\vv^f$ to any length we wish, we can constrain
the minimisation to vectors $\vect{v}$ that satisfy, say, $\|\vect{v}\|_2^2 = n
= |V|$. Thus, the task becomes one of finding a vector $\vect{v}$ that
satisfies for a given $s \in V$:
\begin{equation}
\vv^f \quad = \quad
\textrm{argmin} \bigg \{\sum_{j \sim k} (v_j - v_k)^2 :
v_s = 0,\; \|\vect{v}\|_2^2 = n \bigg\}.
\label{eq:fiedler}
\end{equation}
We can solve this task approximately by reformulating the requirement
that $\|\vect{v}\|_2^2 = n$ as a ``soft constraint'' with weight $c > 0$,
and minimising the objective function
\begin{equation}
f(\vect{v}) \quad = \quad \frac{1}{2} \sum_{j \sim k}
\bigg(v_j - v_k\bigg)^2 +
\frac{c}{2} \cdot
\bigg(n - \sum_j v_j^2\bigg)
\label{eq:soft_fiedler}
\end{equation}
by gradient descent. Since the partial derivatives of $f$ have
the simple form
\begin{equation} \label{eq:soft_partials}
\frac{\partial f}{\partial v_j} \quad = \quad
- \sum_{k \sim j} v_k+ (\deg{j} - c) \cdot v_j,
\end{equation}
the descent step can be computed locally at each vertex at time $t +
1$, based on information about the values of the vector $\vect{v}$ at time
$t$, denoted by $\tilde{\vect{v}}(t)$, for the vertex itself and its
neighbours:
\begin{equation} \label{eq:grad_desc} \tilde{v}_j(t+1) \quad = \quad
\tilde{v}_j(t) + \delta \cdot \left(\sum_{k \sim j} \tilde{v}_k(t) -
(\deg{j} - c) \cdot \tilde{v}_j(t)\right),
\end{equation}
where $\delta > 0$ is a parameter determining the speed of the descent.
Assuming that the natural cluster of vertex $s$ is small compared to
the order of the graph $n$, the normalisation $\|\vect{v}\|_2^2 = n$ entails
that most vertices $j$ in the network will have $v_j \approx 1$. Thus
the descent iterations~(\ref{eq:grad_desc}) can be started from an
initial vector $\tilde{\vect{v}}(0)$ that has $\tilde{v}_s(0) = 0$ for the
seed vertex $s \in V$ and $\tilde{v}_k(0) = 1$ for all $k \neq s$.
The estimates need then to be updated at time $t > 0$ only for those
vertices $j$ that have at least one neighbour $k$ such that
$\tilde{v}_k(t-1) < 1$.
Balancing the constraint weight $c$ against the speed of gradient
descent $\delta$ naturally requires some care. We have obtained
reasonably stable results with the following heuristic: given an
estimate $\bar{k}$ for the average degree of the vertices in the
network, set $c = 1/\bar{k}$ and $\delta = c/10$. The gradient
iterations (\ref{eq:grad_desc}) are then continued until all the
changes in the $v$-estimates are below $\varepsilon = \delta/10$. We
leave the calibration of these parameters to future work.
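A compact rendering of this iteration with the heuristic parameter
choices just described might look as follows. The code is an
illustrative sketch, not the implementation used in \cite{OrSc05}; the
graph is assumed to be given as a dictionary mapping each vertex to its
list of neighbours.
\begin{verbatim}
# Local gradient descent with c = 1/k_bar, delta = c/10, eps = delta/10.
def local_fiedler_descent(adj, s, k_bar):
    c = 1.0 / k_bar
    delta = c / 10.0
    eps = delta / 10.0
    v = {j: 1.0 for j in adj}            # far-away vertices sit at 1
    v[s] = 0.0                           # boundary condition at the seed
    active = set(adj[s])                 # expand outwards from the seed
    for _ in range(10000):               # sweep cap, for safety
        if not active:
            break
        nxt = set()
        for j in active:
            if j == s:
                continue
            step = delta * (sum(v[k] for k in adj[j])
                            - (len(adj[j]) - c) * v[j])
            if abs(step) > eps:
                v[j] += step
                nxt.update(adj[j])       # neighbours may move next sweep
        active = nxt
    return v
\end{verbatim}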
The (approximate) Fiedler values thus obtained represent
proximity-values of the vertices in $V$ to the cluster of vertex $s$.
Determining a bisection into $S$ and $V \setminus S$ is now a
one-dimensional two-classification task that can in principle be
solved using any of the standard pattern classifiers, such as
variations of the basic $k$-means algorithm~\cite{HaWo79}.
We illustrate the applicability of approximate absorption times for
clustering the karate club network (Figure \ref{fig:karate}). The
approximate absorption times shown in Figure \ref{fig:karateclass} are
computed directly with Equation~(\ref{eq:fiedlerabsapprox}): the group
structure is seen to be strong when the seed vertex is one of the
central members of the group, whereas the classification task is
harder for the ``border'' vertices, as can be expected. For more
extensive examples of clustering with the locally computed
approximates, we refer the reader to previous work \cite{OrSc05}.
\begin{figure}
\centerline{\includegraphics[width=128mm]{fig_8_groups.eps}}
\caption{Four examples of two-classifying vertices of the Zachary
karate club graph. The examples on the left have the seed vertex
among the ``rectangles'' of Figure \ref{fig:karate} and the examples
of the right have the seed vertex among the ``circles''. The
vertices are ordered by their label in Figure \ref{fig:karate} and a
zero has been inserted to represent the absorption time to the seed
vertex itself. The group in which the seed belongs is drawn in black
and the other group in white.}
\label{fig:karateclass}
\end{figure}
\section{Conclusions and further work}
In this work we have derived an expression for the absorption times to
a single absorbing vertex $s$ in a simple random walk in an
undirected, unweighted graph in terms of the spectrum of the
normalised Laplacian matrix of the graph. We have shown that
knowing only the local Fiedler vector associated with the boundary condition at $s$,
together with the corresponding eigenvalue, yields an approximation of the
absorption times whenever the spectrum of the graph exhibits a gap after the
first eigenvalue. Experimentally we have confirmed that the values given
by the approximation are nearly perfectly correlated with the exact
absorption times even in the absence of such a gap.
Our motivation is to use the absorption times to a seed vertex $s$
as a measure of proximity in two-classifying the graph into two
partitions: vertices that are ``relevant'' to the seed vertex and
other vertices. Hence, knowing not the exact values but rather another
vector of near-perfectly correlated values is sufficient for separating
the vertices with higher values from those with lower values
(which is the classical two-classification task).
Such a two-partition of a graph is known as local clustering. In order
for the proposed values to be locally computable, we have also presented
a gradient-descent method to approximate the Fiedler vector using only
local information in the graph. The method iteratively processes the
neighbourhoods of vertices starting from the seed vertex and expanding
outwards within the group of potentially ``relevant'' vertices,
without any need to process other parts of the graph. We have
illustrated the potential of these vectors in two-classification for local
clustering on a classical example graph representing a social network.
In further work, we seek to study further the effects of the presence
or absence of a spectral gap in the input graph on the proposed
approximation. We also want to calibrate the parameters of the locally
computable approximation in such a way that no a priori knowledge of
the input graph would be needed, but that the method would rather
adapt to the structure of the graph at runtime by dynamic parameter
adjustment. Of additional interest are extensions of this work to
weighted and directed graphs as well as case studies of applications
of local clustering. We also contemplate possible uses for approximate
absorption times in resolving other problems of interest that involve
complex systems represented as graphs.
\section*{Acknowledgements}
The work of Orponen and Schaeffer was supported by the Academy of
Finland under grant 206235 (ANNE, 2004--2006). Schaeffer and Avalos
received support from the UANL under grant CA1475-07 and from PROMEP
under grant 103,5/07/2523. Avalos also thanks CONACYT for support.
A preliminary report on parts of this work was presented as ``Local
clustering of large graphs by approximate Fiedler vectors'' by
P. Orponen and S. E. Schaeffer, at the Fourth International Workshop
on Efficient and Experimental Algorithms in Santorini, Greece, May
2005. The current work was presented at The Fifteenth Conference of the
International Linear Algebra Society (ILAS) in Canc\'{u}n, Quintana
Roo, Mexico, in June 2008.
\section{Introduction}
Entanglement is perhaps the most puzzling feature of quantum
mechanics and in the last two decades it became the key resource
in quantum information processing \cite{NielsenBook}.
Entangled qubits prepared in pure, maximally entangled states are
required by many quantum-information processes. However, in a
mundane world, a pure maximally entangled state is an idealization
as, e.g., a plane wave in classical optics. In fact, interaction
of qubits with the environment leads to decoherence that may cause
a pure entangled state to become less pure (mixed) and less
entangled.
Thus, any \emph{realistic} quantum-communication/computation
protocol must cope with entangled mixed states and it is desirable
to attain the maximum amount of entanglement for a given degree of
mixedness. States that fulfill this condition
are called maximally entangled mixed states
(MEMS) and, recently, they have been the subject of several papers
(see, e.g., \cite{Peters,Barbieri} and references therein).
In this Article we propose a new method to create MEMS from a pair
of photons initially prepared in the singlet polarization state.
Kwiat and coworkers \cite{Peters} were the first to achieve MEMS
using photon pairs from spontaneous parametric down
conversion (SPDC). They induced decoherence in SPDC pairs
initially prepared in a pure entangled state by coupling
polarization and time degrees of freedom of the photons. At the
same time, a somewhat different scheme was used by De Martini and
coworkers \cite{Barbieri} who instead used the spatial degrees of
freedom of SPDC photons to induce decoherence. However, both the
Kwiat and the De Martini method require operations on \emph{both}
photons of the SPDC pair. On the contrary, our technique has the
advantage to require only \emph{local} operations upon one of the
two photons.
This Article is structured as follows: In the first part of Sec.
II we show the relation existing between a one-qubit quantum map
and a classical-optics setup on the laboratory bench. In the
second part of Sec. II, we exploit this knowledge to design a
simple linear-optical set-up to generate MEMS from a pair of
photons via local operations and postselection. Then, in Sec.
III we provide an experimental demonstration of our method, using
entangled photons from parametric down-conversion. Finally, we
draw our conclusions in Sec. IV.
\section{Theory}
We begin by giving a brief description of the connection between
classical polarization optics and quantum mechanics of qubits, as
recently put forward by several authors
\cite{Brunner03,Aiello061}.
Most textbooks on classical optics introduce the concept of
polarized and unpolarized light with the help of the Jones and
Stokes-Mueller calculi, respectively \cite{Damask}. In these
calculi, the description of classical polarization of light is
formally identical to the quantum description of pure and mixed
states of two-level systems, respectively \cite{Iso}.
Mathematically speaking, there is an isomorphism between the
quantum density matrix $\rho$ describing a qubit and the classical
\emph{coherency matrix} $J$ \cite{BornWolf} describing
polarization of a beam of light: $\rho \sim J/\mathrm{Tr} J$.
$J$ is an Hermitean, positive semidefinite $2 \times 2$ matrix, as
is $\rho$.
A classical linear optical process (as, e.g., the passage of a
beam of light through an optical device), can be described by a $4
\times 4$ complex-valued matrix $\mathcal{M}$ such that
$(J_\mathrm{out})_{ij} = \mathcal{M}_{ij,kl}(J_\mathrm{in})_{kl}$,
where, from now on, we adopt the
convention that summation over repeated Latin indices is
understood. Moreover, we assume that all Latin indices
$i,j,k,l,m,n, \ldots$ take the values $0$ and $1$, while Greek
indices $\alpha, \beta, \dots $ take the values $0,1,2,3$.
In polarization optics one usually deals with the real-valued
Mueller matrix $M$ which is connected to $\mathcal{M}$ via a
unitary transformation $\Lambda: M = \Lambda^\dagger \mathcal{M}
\Lambda$ \cite{AielloMath}.
The matrix $M$ is often written as \cite{Lu96}
\begin{equation}\label{eq30}
M = \left(%
\begin{array}{cc}
m_{00} & \mathbf{d}^T \\
\mathbf{p} & W \\
\end{array}%
\right),
\end{equation}
where $(\mathbf{p}, \mathbf{d})\in \mathbb{R}^3$, are known as the
\emph{polarizance vector} and the \emph{diattenuation vector}
(superscript $T$ indicates transposition), respectively. Note that
$\mathbf{d}$ is nonzero only for dichroic media, namely media that
induce polarization-dependent losses (PDL) \cite{Damask}.
$W$ is a $3 \times 3$ real-valued matrix.
It should be noticed that if we choose $m_{00}=1$ (this can
always be done since it amounts to a trivial
\emph{polarization-independent} renormalization), the Mueller
matrix of a non-dichroic optical element ($\mathbf{d} =
\mathbf{0}$), is formally identical to a non-unital,
trace-preserving, one-qubit quantum map (also called channel)
\cite{Ruskai}. If also $\mathbf{p}=\mathbf{0}$ (pure depolarizers
and pure retarders \cite{Damask}), then $M$ becomes identical to
a unital, one-qubit channel \cite{NielsenBook}.
It is not difficult to show that any linear optical device that
can be represented by $\mathcal{M}$ (or $M$), can also be
described by a set of at most four distinct optical elements in
parallel as $\mathcal{M} = \sum_{\alpha } \lambda_\alpha T_\alpha
\otimes T_\alpha^*$, where the four $2 \times 2$ \emph{Jones}
matrices $T_\alpha$, represent four different
non-depolarizing optical elements and $\lambda_\alpha \geq 0$
\cite{Anderson94,AielloMath}.
From the results above it readily follows that the most general
operation that a linear optical element can perform upon a beam of
light can be written as $J_\mathrm{in} \rightarrow J_\mathrm{out} = \sum_{\alpha}
\lambda_\alpha T_\alpha J_\mathrm{in} T_\alpha^\dagger$.
Since $\lambda_\alpha \geq 0$, the previous equation is formally
identical to the Kraus form \cite{NielsenBook} of a completely
positive one-qubit quantum map $\mathcal{E}$. Therefore, if a
single photon encoding a polarization qubit passes through an
optical device classically described by the Mueller matrix
$\mathcal{M} = \sum_{\alpha } \lambda_\alpha T_\alpha \otimes
T_\alpha^*$, its initial state $\rho_\mathrm{in}$ will be transformed
according to $\rho_\mathrm{in} \rightarrow \rho_\mathrm{out} \propto \sum_\alpha
\lambda_\alpha T_\alpha \rho_\mathrm{in} T_\alpha^\dagger$.
Now that we have learned how to associate a quantum map to a set
of at most four optical elements, we can apply this knowledge to
design a simple optical scheme suitable for MEMS production.
Suppose we have
two qubits (encoded in the polarization degrees of
freedom of two SPDC photons, say $A$ and $B$), initially prepared
in the state $\rho: \rho = \rho_{ij,kl} |ij\rangle \langle kl|
\doteq \rho_{ik,jl}^R |i\rangle \langle k| \otimes |j\rangle
\langle l| $. Superscript $R$ indicates \emph{reshuffling}
\cite{Zico} of the indices: $\rho_{ik,jl}^R \equiv \rho_{ij,kl}$.
Following Ziman and Bu\v{z}ek \cite{Ziman1} we assume that $\rho$
is transformed under the action of the most general \emph{local}
(that is, acting upon a single qubit) linear map $\mathcal{E}
\otimes \mathcal{I}$ into the state
\begin{equation}\label{eq50}
\rho_\mathcal{E} = \mathcal{E} \otimes \mathcal{I}[\rho] \propto
\sum_{\alpha=0}^3 \lambda_\alpha T_\alpha \otimes I \, \rho \,
T_\alpha^\dagger \otimes I .
\end{equation}
By writing explicitly Eq. (\ref{eq50}) in the two-qubit basis $\{|
ij\rangle \equiv |i \rangle \otimes |j \rangle \}$, it is
straightforward to obtain $(\rho_\mathcal{E})_{ij,kl} \propto
\sum_\alpha \lambda_\alpha \rho^R_{mn,jl} (T_\alpha)_{im}
(T^*_\alpha)_{kn}$. Then, from the definition of $\mathcal{M}$ it
easily follows that $ (\rho_\mathcal{E})_{ij,kl} \propto (
\mathcal{M} \rho^R) _{ik,jl}$. By reshuffling $\rho_\mathcal{E}$,
this last result can be written in matrix form as
$\rho_\mathcal{E}^R \propto \mathcal{M} \rho^R$ which displays
the very simple relation existing between the \emph{classical}
Mueller matrix $\mathcal{M}$ and the \emph{quantum} state
$\rho_\mathcal{E}$. Via a direct calculation, it is possible to
show that if $\rho$ represents two qubits in the singlet state
$\rho_s = \frac{1}{4}(I \otimes I - \sigma_x \otimes \sigma_x -
\sigma_y \otimes \sigma_y - \sigma_z\otimes \sigma_z)$
\cite{Pauli}, then the proportionality symbol in the last equation
above can be substituted with the equality symbol:
$\rho_\mathcal{E}^R = \mathcal{M} \rho^R_s$.
Note that this
pleasant property is true only for the singlet state. However, if
the initial state $\rho$ is different from the singlet one, then
$\mathcal{M}$ must be simply renormalized by imposing $\mathrm{Tr}
(\mathcal{M} \rho^R)=1$.
Now, suppose that we have an experimental setup producing pairs of
SPDC photons in the singlet state $\rho_s$, and we want to
transform $\rho_s$ into the target state $\rho_\mathcal{T}$ via a
local map $\mathcal{T} \otimes \mathcal{I}: \, \rho_s \rightarrow
\rho_\mathcal{T} = (\mathcal{M}_\mathcal{T} \rho_s^R)^R$. All we
have to do is first to invert the latter equation to obtain
\begin{equation}\label{eq70}
\mathcal{M}_\mathcal{T} = \rho_\mathcal{T}^R (\rho_s^R)^{-1},
\end{equation}
and then to decompose $\mathcal{M}_\mathcal{T}$ as
$\mathcal{M}_\mathcal{T} = \sum_{\alpha } \lambda_\alpha T_\alpha
\otimes T_\alpha^*$. Thus, we get the (at most four) Jones
matrices $ T_\alpha$ representing the optical elements necessary
to implement the desired transformation.
This is the main theoretical result of this Article.
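For concreteness, the inversion and reshuffling in Eq.~(\ref{eq70}) are
easily carried out numerically. The following Python/NumPy sketch,
added here purely for illustration, assumes the computational basis
ordering $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ and returns
the Mueller matrix associated to an arbitrary two-qubit target state.
\begin{verbatim}
# Reshuffling (rho^R)_{ik,jl} = rho_{ij,kl} and Eq. (70).
import numpy as np

def reshuffle(rho):
    return rho.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)

psi_s = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet
rho_s = np.outer(psi_s, psi_s)

def mueller_for_target(rho_T):
    return reshuffle(rho_T) @ np.linalg.inv(reshuffle(rho_s))
\end{verbatim}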
Our technique is very straightforward and we shall demonstrate its
feasibility later, by applying it to design an
optical setup devoted to MEMS generation. However, at this moment,
some caveats are in order. To make $\mathcal{M}_\mathcal{T}$ a
physically realizable Mueller matrix, its associated matrix
$H_\mathcal{T}$ should be positive semidefinite \cite{NotaH}.
If
this is not the case, then the transformation $\rho \rightarrow
\rho_\mathcal{T}$ cannot be implemented via local operations. For
example, it is easy to see that if the initial state is a Werner
state $\rho_W = p\rho_s + \frac{1-p}{4}I, \; (0\leq p \leq 1)$ and
the target state is the singlet $\rho_\mathcal{T}=\rho_s$, then
such operation (known as \emph{concentration} \cite{Thew01R})
cannot be physically implemented by a local setup since
$H_\mathcal{T}$ has three degenerate negative eigenvalues. Another
caveat comes from the no-signalling constraint. Since
$\mathcal{M}_\mathcal{T}$ describes a local device operating only
upon photon $A$, a second observer watching at photon $B$ cannot
distinguish the initial state $\rho_s$ from the transformed state
$\rho_\mathcal{T}$, that is: $\rho^B = \mathrm{Tr}_A (\rho_s) = \mathrm{Tr}_A
(\rho_\mathcal{T})$. This condition requires the one-qubit map
$\mathcal{T}$ to be trace-preserving: $\sum_\alpha \lambda_\alpha
T_\alpha^\dagger T_\alpha = I$. From Eq. (\ref{eq30}), a
straightforward calculation shows that such condition cannot be
fulfilled if $\mathbf{d} \neq \mathbf{0}$, that is if the device
implementing $\mathcal{T}$ contains dichroic (or PDL) elements.
PDL is important in many commonly used optical devices as
polarizers, circulators, isolators, etc., \cite{Damask}. Within
the framework of quantum information theory, all these
\emph{physical} devices may be represented by
``\emph{unphysical}'' one-qubit maps $\mathcal{T}$ that violate
the no-signalling condition. This apparent paradox disappears if
one allows causal classical communications between observers who
actually measure and reconstruct the target state
$\rho_\mathcal{T}$ generated by the ``unphysical'' local map
$\mathcal{T} \otimes \mathcal{I}$ \cite{Aiello062}.
In fact, in coincidence measurements (required to reconstruct
$\rho_\mathcal{T}$), classical (as opposed to quantum) signalling
between the two observers is necessary to allow them to compare
their own experimental results and select from the raw data the
coincidence counts.
In other words, a coincidence measurement post-selects only those
photons that have not been absorbed by the PDL element
\cite{Brunner03}.
\begin{figure}[!htr]
\includegraphics[angle=0,width=8truecm]{Layout.eps}
\caption{\label{fig:A} (color online) Layout of the experimental
setup. The two-path optical device acts only on photon $A$.
Detectors $\mathrm{\textsf{D}}_\mathrm{\textsf{A}}$ and
$\mathrm{\textsf{D}}_\mathrm{\textsf{B}}$ perform coincidence
measurements.}
\end{figure}
With these caveats in mind, we come to the experimental
validation of our method. We choose to generate MEMS I states
\cite{NoteMEMS}, represented by the density matrix
$\rho_\mathrm{I} = p | \phi_+ \rangle \langle \phi_+ | + (1-p)|01
\rangle \langle 01|$, where $| \phi_+ \rangle = (|00 \rangle +
|11 \rangle)/\sqrt{2}$ and $(2/3 \leq p \leq 1)$.
By varying the parameter $p$, the entanglement and mixedness of
the state $\rho_\mathrm{I}$ change. Here, we use the linear
entropy $S_L$ \cite{Bose01R} and the tangle $T$, namely, the
concurrence squared \cite{Wootters98}, to quantify the degree of
mixedness and of entanglement, respectively. They are defined as
$S_L(\rho) = \frac{4}{3}[1 - \mathrm{Tr} (\rho^2)]$, and $T(\rho) =
[\max\{0 , \sqrt{\lambda_0} - \sqrt{\lambda_1} -\sqrt{\lambda_2}
-\sqrt{\lambda_3}\}]^2$, where $\lambda_0 \geq
\lambda_1\geq\lambda_2\geq \lambda_3 \geq 0$ are the eigenvalues
of $\rho (\sigma_y \otimes \sigma_y) \rho^* (\sigma_y \otimes
\sigma_y)$.
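Both measures are straightforward to evaluate numerically; the helper
functions below (an illustrative sketch added here) implement the
definitions verbatim and reproduce, e.g., $T(\rho_\mathrm{I}) = p^2$.
\begin{verbatim}
# Linear entropy S_L and tangle T of a two-qubit density matrix.
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy).real                 # sigma_y x sigma_y is real

def linear_entropy(rho):
    return 4.0 / 3.0 * (1.0 - np.trace(rho @ rho).real)

def tangle(rho):
    R = rho @ YY @ rho.conj() @ YY
    lam = np.clip(np.sort(np.linalg.eigvals(R).real)[::-1], 0.0, None)
    c = np.sqrt(lam[0]) - np.sqrt(lam[1:]).sum()
    return max(0.0, c) ** 2
\end{verbatim}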
After applying Eq. (\ref{eq70}) with $\rho_\mathcal{T} =
\rho_\mathrm{I}$, a straightforward calculation shows that there
are only two non-zero terms in the decomposition of
$\mathcal{M}_\mathcal{T}$,
namely $\{\lambda_0 = 2(1-p) , \lambda_1 = p \}$, $ \{ T_0 = \left(%
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}%
\right), T_1 = \left(%
\begin{array}{cc}
0 & -1 \\
1 & 0 \\
\end{array}%
\right)\}$. In physical terms, $T_0$ is a polarizer and $T_1$ is a
$90^\circ$ polarization rotator. The two eigenvalues $\{\lambda_0,
\lambda_1 \}$ give the relative intensity in the two arms of the
device and are physically realized by intensity attenuators.
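Reusing the helpers from the previous sketches, one can verify
numerically (again purely as an illustration) both this two-term
decomposition of $\mathcal{M}_\mathcal{T}$ and the fact that the
corresponding Kraus map of Eq.~(\ref{eq50}) indeed turns the singlet
into $\rho_\mathrm{I}$.
\begin{verbatim}
# Two-term decomposition and Kraus-map check for the MEMS I target.
import numpy as np

p = 0.8                                   # any value in [2/3, 1]
phi_p = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
e01 = np.array([0.0, 1.0, 0.0, 0.0])
rho_I = p * np.outer(phi_p, phi_p) + (1 - p) * np.outer(e01, e01)

T0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # horizontal polarizer
T1 = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotator
M_T = mueller_for_target(rho_I)
assert np.allclose(M_T, 2*(1 - p)*np.kron(T0, T0) + p*np.kron(T1, T1))

I2 = np.eye(2)
rho_out = sum(lam * np.kron(T, I2) @ rho_s @ np.kron(T, I2).T
              for lam, T in zip([2*(1 - p), p], [T0, T1]))
assert np.allclose(rho_out / np.trace(rho_out), rho_I)
\end{verbatim}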
\section{Experiment}
Our experimental set-up is shown in Fig.1. Its first part
(\textsf{Singlet state preparation}) comprises a Krypton-ion laser
at 413.1~nm that pumps a 1-mm thick $\beta-\mathrm{Ba}
\mathrm{B}_2 \mathrm{O}_4$ (\textsf{BBO}) crystal, where
polarization-entangled photon pairs at wavelength 826.2~nm are
created by SPDC in degenerate type II phase-matching configuration
\cite{Kwiat95}. Single-mode fibers (\textsf{SMF}) are used as
spatial filters to assure that the initial two-photon state is in
a single transverse mode. Spurious birefringence along the fibers
is compensated by suitably oriented polarization controllers
(\textsf{PC}) \cite{Puentes061}. In addition, total retardation
introduced by the fibers and walk-off effects at the \textsf{BBO}
crystal are compensated by compensating crystals (\textsf{CC}:
0.5-mm thick \textsf{BBO} crystals) and half-wave plates
($\lambda/2$) in both photonic paths. In this way the initial
two-photon state is prepared in the polarization singlet state
$|\psi_s \rangle=(|HV\rangle-|VH\rangle)/\sqrt{2}$, where $H(=0)$
and $V(=1)$ are labels for horizontal and vertical polarizations
of the two photons, respectively.
In the second part of the experimental set-up (\textsf{MEMS
preparation}) the \emph{two-term} decomposition of
$\mathcal{M}_\mathcal{T}$ is physically realized by a
\emph{two-path} optical device. A photon enters such a device
through a $50/50$ beam splitter (\textsf{BS}) and can be either
transmitted to path $1$ or reflected to path $2$. The two paths
defines two independent \emph{spatial} modes of the field.
In path $1$ a neutral-density filter ($\mathrm{\textsf{A}}_1$) is
followed by a linear polarizer (\textsf{P}) oriented horizontally
(with respect to the \textsf{BBO} crystal basis). When the photon
goes in this path, the initial singlet is reduced to $| HV
\rangle$ with probability proportional to the attenuation ratio
$a_1$ of $\mathrm{\textsf{A}}_1$ ($a
=P_\mathrm{out}/P_\mathrm{in}$). In path $2$ a second
neutral-density filter ($\mathrm{\textsf{A}}_2$) is followed by
two half wave-plates ($\lambda/{2}$) in cascade relatively
oriented at $45^{\circ}$: they work as a $90^\circ$ polarization
rotator. When the photon
goes in path $2$, the singlet undergoes a local rotation with
probability proportional to the attenuation ratio $a_2$ of
$\mathrm{\textsf{A}}_2$.
The third and last part of the experimental set-up
(\textsf{Tomographic analysis}), consists of two tomographic
analyzers (one per photon), each made of a quarter-wave plate
($\lambda/4$) followed by a linear polarizer (\textsf{P}). Such
analyzers permit a tomographically complete reconstruction, via a
maximum-likelihood technique \cite{James01}, of the two-photon
state. Additionally, interference filters (\textsf{IF}) in front
of each detector ($\Delta \lambda = 5$ nm) provide for bandwidth
selection.
It should be noticed that detector
$\mathrm{\textsf{D}}_\mathrm{\textsf{A}}$ does not distinguish
which path (either $1$ or $2$) a photon comes from, thus photon
$A$ is detected in a \emph{mode-insensitive} way: This is the
simple mechanism we use to induce decoherence. In the actual
setup, a lens (not shown in Fig. 1) placed in front of detector
$\mathrm{\textsf{D}}_\mathrm{\textsf{A}}$ focusses both paths $1$
and $2$ upon the sensitive area of the detector which becomes thus
unable to distinguish between photons coming from either path $1$
or $2$ (``mode-insensitive detection'').
\begin{figure}[!hbr]
\includegraphics[angle=0,width=7truecm]{Data.eps}
\caption{\label{fig:B} Experimental data and theoretical
prediction (continuous line) in the linear entropy-tangle plane.
The gray region represents unphysical states and it is bounded
from below by MEMS (dashed curve). The lower dotted-dashed curve
represents Werner states. The horizontal (dotted) line at $T =
4/9$ separates MEMS I (above), from MEMS II (below). Stars
denote MEMS I states $\rho_\star$ that have the same linear
entropy as the measured states $\rho_\mathrm{I}^\mathrm{exp}$
(i.e., the experimental points above the line $T = 4/9$).
All measured data follow very well the theoretical curve.}
\end{figure}
Experimental results are shown in Fig. 2 together with
theoretical predictions in the linear entropy-tangle plane. The
agreement between theoretical predictions and measured data is
very good.
The experimentally prepared initial singlet state
$\rho_s^\mathrm{exp}$ has a fidelity \cite{Jozsa} $F(\rho_s,
\rho_s^\mathrm{exp}) = \left|\mathrm{Tr}( \sqrt{\sqrt{\rho_s}
\rho_s^\mathrm{exp} \sqrt{\rho_s}})\right|^2 \sim 97 \% $ with the
theoretical singlet state $\rho_s$.
The continuous curve is calculated from the matrix $\rho_c:
\rho_c = \mathcal{M}_\mathcal{T} \rho_s^\mathrm{exp}$, and
varying $p$. It represents our theoretical prediction for the
given initially prepared state $\rho_s^\mathrm{exp}$.
If it were possible to achieve exactly $\rho_s^\mathrm{exp} =
\rho_s$, then such curve would coincide with the MEMS curve above
the horizontal (dotted) line $T = 4/9 $.
Experimental points with $T \geq 4/9$
($\rho_\mathrm{I}^\mathrm{exp}$) are obtained by varying the
neutral-density filters
$\mathrm{\textsf{A}}_1,\mathrm{\textsf{A}}_2$ in such a way that $
a_2 \geq a_1$; while
points with $T < 4/9$ are achieved for $a_2 <a_1 $. Note that the latter points
do not represent MEMSs, but different mixed entangled states whose
density matrix is still given by $\rho_\mathrm{I}$ but with the
parameter $p$ now varying as $0 \leq p \leq 2/3$.
The average fidelity between the measured states
$\rho_\mathrm{I}^\mathrm{exp}$ and the ``target'' states
$\rho_\star$ is given by $\overline{F(\rho_\star,
\rho_\mathrm{I}^\mathrm{exp})} \sim 80 \% $. The main reason for
its deviation from $100 \%$ is spurious,
uncontrolled birefringence in the \textsf{BS} and the prism
composing the set-up. To verify this, first we calculated the
fidelity between the states $\rho_c(p)$ (obtained by applying the
theoretically determined map $\mathcal{T}\otimes \mathcal{I}$ to
the experimentally prepared initial singlet state
$\rho_s^\mathrm{exp}$), with the theoretical MEMS
$\rho_\mathrm{I}(p)$. We have found $F[\rho_\mathrm{I}(p),
\rho_c(p)] \geq 97 \% $ for all $2/3 \leq p \leq 1$; thus the
value of $\overline{F} \sim 80 \% $ cannot be ascribed to the
imperfect initial singlet preparation. Second, we explicitly
measured the Mueller matrices for both the \textsf{BS} and the
prism (matrices that would be equal to the identity for ideal
non-birefringent elements) and we actually found spurious
birefringence. From such measured matrices it was possible to
determine the unwanted local unitary operation induced by these
optical elements \cite{Aiello07}. It is important to notice that
such operation does not change the position of our experimental
points in the linear entropy-tangle plane. Now, if one applies
this unitary operation to our raw data and calculates once
again the average fidelity, the result would be $\overline{F} \sim
91 \% $. However, since this ``compensation'' of the spurious
birefringence is performed upon the measured data and not directly
on the physical setup, we felt that it was more fair to present
the uncorrected data.
\section{Discussion and conclusions}
In conclusion, we have theoretically proposed and experimentally
tested a new, simple method to create MEMS I states of photons.
This method can be easily generalized to generate MEMS II states,
as well. However, this task would require a slightly different
experimental setup with a \emph{three}-path linear optical device
acting only upon photon $A$ \cite{Aiello07}. In particular, we
have shown that it is possible to create a MEMS
from a SPDC photon pair, by acting on just a \emph{single}
photon of the pair. This task could appear, at first sight,
impossible since it was recently demonstrated \cite{Ziman1} that
even the most general local operation cannot generate MEMS because
this would violate relativistic causality. However, as we
discussed in the text, our results do not contradict Ref.
\cite{Ziman1} since we obtained them via postselection operated by
coincidence measurements. The latter are possible only when causal
classical communication between detectors is permitted.
Still, the connection between relativistic causality, dichroic
(or, PDL) devices and postselection, is far from being trivial.
For example, suppose that a two-photon state is produced by an
optical setup containing \emph{local} PDL elements, and that we
tomographically reconstruct it after coincidence measurements.
Such a reconstructed state will correctly describe the result of
any other measurement involving coincidence measurements (as,
e.g., Bell measurements), but it will \emph{fail} when describing
the result of any single-photon measurement.
We stress that this limitation is \emph{not} inherent to our
scheme, but it is shared by all optical set-ups containing PDL
elements.
\begin{acknowledgments}
We acknowledge Vladimir Bu\v{z}ek for useful correspondence. We
also thank Fabio Antonio Bovino for helpful suggestions. This
project is supported by FOM.
\end{acknowledgments}
\section{Introduction}
In 1965 Roger Penrose published his seminal paper \cite{Pen} which established
the first of the modern singularity theorems. In this paper Penrose
introduced the notion of a trapped surface $\cT $,
which he defined as ``a closed spacelike, two-surface with
the property that the two systems of null geodesics which meet $\cT $
orthogonally converge locally in future directions at $\cT $''. He then
showed that if the spacetime $M$ possesses both a closed trapped
surface and a non-compact Cauchy surface then provided the local
energy density is always positive (so that via Einstein's equations
the Ricci tensor satisfies the null convergence condition) the
spacetime cannot be future null complete. The Penrose paper established
for the first time that the gravitational singularity found in the
Schwarzschild solution was not an artefact of the high degree of
symmetry: provided the gravitational collapse qualitatively
resembles the spherically symmetric case, then (subject to the above
conditions) deviations from spherical symmetry cannot
prevent the formation of a gravitational singularity.
Penrose's paper was not only the first to define the notion of a
trapped surface but it also introduced the idea of using geodesic
incompleteness to give a mathematical characterisation of a singular
spacetime. The 1965 paper had immediate impact and inspired a series
of papers by Hawking, Penrose, Ellis, Geroch and others which led to
the development of modern singularity theorems (see the recent review
paper \cite{SenGar} for details). Despite the great power of these
theorems they follow Penrose in defining singularities in terms of
geodesic incompleteness and as a result say little about the nature of
the singularity. In particular there is nothing in the original
theorems to say
that the gravitational forces become unbounded at the singularity\footnote{See,
however, results on the extendability of incomplete spacetimes under suitable
curvature conditions, e.g.\ \cite{Cl82,Clarke,Racz,Thorpe}, which indicate that
such spacetimes cannot be maximal unless the curvature blows up.}.
Furthermore the statement and proofs of the various singularity
theorems assume that the metric is at least $C^2$ and Senovilla in
\cite[Sec.\ 6.1]{Seno1} highlights the places where this assumption is
explicitly used. Thus the singularities predicted by the singularity
theorems could in principle be physically innocuous and simply be a
result of the differentiability of the metric dropping below $C^2$. As
emphasised by a number of authors (see e.g.\ \cite{Clarke,MS,Seno1})
the requirement of $C^2$-differentiability is significantly stronger
than one would want since it fails to hold in a number of physically
reasonable situations. In particular it fails across an interface
(such as the surface of a star) where there is a jump in the energy
density which, via the field equations, corresponds to the metric
being of regularity $C^{1,1}$ (also denoted by $C^{2-}$, the first
derivatives of the metric being locally Lipschitz continuous). For more
details see e.g.\ \cite[Sec.\ 6.1]{Seno1}.
Furthermore from the point of view of the singularity
theorems themselves the natural differentiability class for the
metric again is $C^{1,1}$ as this is the minimal condition which
ensures existence and uniqueness of geodesics.
Since the connection of a $C^{1,1}$-metric is locally Lipschitz,
Rademacher's theorem implies that it is differentiable almost
everywhere so that the (Ricci) curvature exists almost everywhere and
is locally bounded. Any further lowering of the differentiability
would result in a loss of uniqueness of causal geodesics\footnote{
In fact, uniqueness is lost for metrics
of local H\"older regularity class $C^{1,\alpha}$ ($\alpha<1$), see \cite{HW}.} (and hence of
the worldlines of observers) and generically in unbounded curvature\footnote{
While the curvature can be stably defined as a distribution even for metrics
of local Sobolev regularity $W^{1,2}\cap L^\infty$ (\cite{GT}) the curvature will in general
not be in $L^\infty$ unless the metric is $C^{1,1}=W^{2,\infty}$.},
both of which correspond more closely to our physical expectations of
a gravitational singularity than in the $C^2$-case.
The singularity theorems involve an interplay between results in
differential geometry and causality theory and it is only recently
that the key elements of $C^{1,1}$-causality have been established. In
particular it was only in \cite[Th.\ 1.11]{M} and in \cite[Th.\ 2.1]{KSS}
that the exponential map was shown to be a bi-Lipschitz homeomorphism, a key
result needed to derive many standard results in causality theory.
Building on the regularisation results of
\cite{CG,KSSV} and combining them with recent advances in
causality theory \cite{Chrusciel_causality, CG, M, KSSV} the present
authors in \cite{hawkingc11} gave a detailed proof of the Hawking singularity theorem for
$C^{1,1}$-metrics by following the basic strategy outlined in
\cite[Sec.\ 8.4]{HE}. In the present paper we establish the Penrose
singularity theorem for a $C^{1,1}$-metric. To be precise we prove
the following result:
\begin{Theorem}\label{penrose} Let $(M,g)$ be an $n$-dimensional $C^{1,1}$-spacetime. Assume
\begin{itemize}
\item[(i)] For any Lipschitz-continuous local null vector field $X$,
$\Ric(X,X)\ge 0$.
\item[(ii)] $M$ possesses a non-compact Cauchy-hypersurface $S$.
\item[(iii)] There exists a compact achronal spacelike submanifold $\cT $
in $M$ of codimension $2$ with past-pointing timelike mean curvature vector field $H$.
\end{itemize}
Then $M$ is not future null complete.
\end{Theorem}
For the definition of a $C^{1,1}$-spacetime, see below.
\begin{remark}\label{rem1.2}\
\begin{itemize}
\item[(a)]
As explained above the Ricci-tensor, $\Ric$, of a $C^{1,1}$-metric is an (almost everywhere defined)
$L^\infty_{\mbox{\scriptsize loc}}$-tensor field. Condition (i) in Theorem \ref{penrose} is adapted
to this situation and reduces to the usual pointwise condition for metrics
of regularity $C^2$. In fact, any null vector can be extended (by parallel transport)
to a local null vector field that is $C^1$ if the metric is $C^2$ and
locally Lipschitz if $g$ is $C^{1,1}$ (cf.\ also the proof of Lemma \ref{approxlemma} below).
The assumption in (i) then means that the $L^\infty_{\mbox{\scriptsize loc}}$-function
$\Ric(X,X)$ is non-negative almost everywhere.
Since being a null vector field is not an `open' condition
(unlike the case of timelike vector fields as in Hawking's singularity theorem,
see \cite[Rem.\ 1.2]{hawkingc11}),
it will in general not be possible to extend a given null vector to a {\em smooth} local null
vector field.
\item[(b)] Concerning condition (iii), our conventions are as follows
(cf.\ \cite{ON83}): we define the mean curvature field as
$H_p=\frac{1}{n-2}\sum_{i=1}^{n-2}\text{II}(e_i,e_i)$ where
$\{e_i\}$ is any orthonormal basis of $T_p\cT $ and
the second fundamental form tensor is given by
$\text{II}(V,W)=\text{nor}\nabla_V W$ where $\text{nor}$ denotes
the projection orthogonal to $T_p\cT$.
Also the condition on $H$ in (iii) is equivalent to the
convergence $\conv(v):=g(H,v)$ being strictly positive for all
future-pointing null vectors normal to $\cT $, and with our conventions is therefore
equivalent to the Penrose trapped surface
condition.
\end{itemize}
\end{remark}
The key idea behind Penrose's proof of the $C^2$-theorem is to look at
the properties of the boundary of the future of the trapped surface
$\cT $. The boundary $\partial J^+(\cT )$
is generated by null geodesics, but Raychaudhuri's
equation and the initial trapped surface condition together with the null
convergence condition result in there being a focal point along every
geodesic. This fact together with the assumption of null geodesic
completeness may be used to show that $\partial J^+(\cT )$ is compact. On
the other hand one may use the existence of the Cauchy surface $S$
together with some basic causality theory to construct a homeomorphism
between $\partial J^+(\cT )$ and $S$. This is not possible if $S$ is not
compact so that there must be a contradiction between the four
assumptions.
In our proof of the theorem for the $C^{1,1}$-case we need to further
extend the methods of \cite{CG, KSS, KSSV,hawkingc11} and approximate $g$ by a
smooth family of Lorentzian metrics $\hat g_\eps$ which have strictly wider
lightcones than $g$ and which are themselves globally hyperbolic. We then show
that by choosing $\eps$ sufficiently small the associated Ricci
tensor, $\Ric_\eps$, violates the null convergence condition by an
arbitrarily small amount, which allows us to establish the compactness
of $\partial J_\eps^+(\cT )=E_\eps^+(\cT )$ under the assumption of
null geodesic completeness. We then use the global
hyperbolicity of the $\hat g_\eps$ together with the fact that $S$ is
a Cauchy surface for $g$ to show that $E_\eps^+(\cT )$ is homeomorphic to
$S$, which leads to a contradiction with the non-compactness of $S$.
Finally, in Theorem \ref{penrose_alt} we show that if $M$ is
future null complete and the assumption that $S$ be non-compact is dropped
in (ii) then $E^+(\cT )$ is a compact Cauchy-hypersurface in
$M$. A main difficulty in these proofs, as compared to the case of
Hawking's singularity theorem in \cite{hawkingc11}, lies in the
fact that curvature conditions on null vectors are less suitable
for approximation arguments (cf.\ Lemma \ref{approxlemma} below)
than conditions on timelike vectors (`timelike' being an `open' condition,
as opposed to `null').
\medskip
In the remainder of this section we fix key notions to be used throughout this
paper, cf.\ also \cite{hawkingc11}. We assume all
manifolds to be of class $C^\infty$ and connected (as well as Hausdorff and second countable), and
only lower the regularity of the metric. By a $C^{1,1}$-
(resp.\ $C^k$-, $k\in \N_0$) spacetime $(M,g)$, we mean a smooth manifold $M$
of dimension $n$ endowed with a Lorentzian metric $g$ of
signature $(-+\dots+)$ possessing locally Lipschitz continuous first
derivatives (resp.\ of class $C^k$) and with a time orientation given by a continuous timelike
vector field.
If $K$ is a compact set in $M$ we write $K\comp M$.
Following \cite{ON83}, we define the curvature tensor by
$R(X,Y)Z=\nabla_{[X,Y]}Z - [\nabla_X,\nabla_Y]Z$ and the Ricci
tensor by $R_{ab}=R^c{}_{abc}$. Since both of these conventions differ by a sign from
those of \cite{HE}, the respective definitions of Ricci curvature agree.
Note also that our definition of the convergence
$\conv$ follows \cite{ON83} and differs by a sign from that used by some other authors.
Our notation for causal structures will basically follow \cite{ON83},
although as in \cite{Chrusciel_causality,KSSV} we base all
causality notions on locally Lipschitz curves. Any
locally Lipschitz curve $c$ is differentiable almost everywhere with locally bounded
velocity. We call $c$ timelike, causal, spacelike or null, if $c'(t)$ has the
corresponding property almost everywhere. Based on these notions we
define the relative chronological future $I^+(A,U)$ and causal future
$J^+(A,U)$ of a set $A\subseteq M$ relative to $U\subseteq M$ literally as
in the smooth case (see \cite[Def.\ 3.1]{KSSV}, \cite[2.4]{Chrusciel_causality}).
The future horismos of $A$ is defined as $E^+(A,U)=J^+(A,U)\setminus I^+(A,U)$.
As was shown in \cite[Th.\ 7]{M}, \cite[Cor.\ 3.1]{KSSV},
our definitions coincide with the ones based on smooth curves.
A Cauchy hypersurface is a
subset $S$ of $M$ which every inextendible timelike curve intersects
exactly once, see \cite[Def.\ 14.28]{ON83}. In the smooth case,
for spacelike hypersurfaces this definition of a Cauchy hypersurface
is equivalent to the one in \cite{HE}, and this remains true in the $C^{1,1}$-case \cite[Prop.\ A.31]{hawkingc11}.
A $C^{1,1}$-spacetime $(M,g)$ is called globally hyperbolic if it is strongly causal
and any causal diamond $J(p,q) = J^+(p)\cap J^-(q)$ is compact.
It follows from \cite[Lem.\ A.20, Th.\ A.22]{hawkingc11} that $M$ is globally hyperbolic if it
possesses a Cauchy-hypersurface.
We will write
$\exp_p$ for the exponential map of the metric $g$ at $p$, and $\exp_p^{g_\eps}$ for
the one corresponding to the metric $g_\eps$.
For a semi-Riemannian submanifold $S$ of $M$ we denote by $(N(S), \pi)$ its normal
bundle. By \cite[Th.\ 13]{M}, $N(S)$ is a Lipschitz bundle.
\section{Approximation results}
In this section we extend the approximation results of
\cite{hawkingc11} to deal with the fact that we need to be able to
approximate a globally hyperbolic $C^{1,1}$-metric by a smooth family
of globally hyperbolic metrics. In addition we require a more delicate
estimate for the Ricci curvature than that given in \cite[Lemma
3.2]{hawkingc11} due to the fact that the Penrose singularity theorem
makes use of the null convergence condition for the Ricci tensor
rather than the timelike convergence condition used in the Hawking
theorem.
We start by recalling from \cite[Sec.\ 3.8.2]{ladder}, \cite[Sec.\ 1.2]{CG}
that for two Lorentzian metrics $g_1$,
$g_2$, we say that $g_2$ has \emph{strictly wider light cones} than $g_1$, denoted by
\begin{equation}
g_1\prec g_2, \text{ if for any tangent vector } X\not=0,\ g_1(X,X)\le 0 \text{ implies that } g_2(X,X)<0.
\end{equation}
Thus any $g_1$-causal vector is $g_2$-timelike.
The key result now is \cite[Prop.\ 1.2]{CG}, which we give here in the slightly refined
version of \cite[Prop.\ 2.5]{KSSV}. Note that the smoothness of the approximating net with
respect to $\eps$ and $p$ is vital in Proposition \ref{CGrefined} below.
\begin{Proposition}\label{CGapprox} Let $(M,g)$ be a $C^0$-spacetime
and let $h$ be some smooth
background Riemannian metric on $M$. Then for any $\eps>0$, there exist smooth
Lorentzian metrics $\check g_\eps$ and $\hat g_\eps$ on $M$ such that $\check g_\eps
\prec g \prec \hat g_\eps$ and $d_h(\check g_\eps,g) + d_h(\hat g_\eps,g)<\eps$,
where
\begin{equation}\label{CGdh}
d_h(g_1,g_2) := \sup_{p\in M,0\not=X,Y\in T_pM} \frac{|g_1(X,Y)-g_2(X,Y)|}{\|X\|_h
\|Y\|_h}.
\end{equation}
Moreover, $\hat g_\eps(p)$ and $\check g_\eps(p)$ depend smoothly on $(\eps,p)\in \R^+\times M$, and if
$g\in C^{1,1}$ then letting $g_\eps$ be either $\check g_\eps$ or $\hat g_\eps$,
we additionally have
\begin{itemize}
\item[(i)] $g_\eps$ converges to $g$ in the $C^1$-topology as $\eps\to 0$, and
\item[(ii)] the second derivatives of $g_\eps$ are bounded, uniformly in $\eps$, on compact sets.
\end{itemize}
\end{Proposition}
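Although Proposition \ref{CGapprox} is purely analytic, the distance \eqref{CGdh}
is straightforward to evaluate pointwise: at a fixed point the supremum over
nonzero $X,Y$ equals the spectral norm of $h^{-1/2}(g_1-g_2)h^{-1/2}$. A minimal
numerical sketch (an illustration only, assuming the metrics and the background
metric are given as symmetric matrices at one point; the global distance is then
the supremum of this quantity over $M$):
\begin{verbatim}
import numpy as np

def d_h_pointwise(g1, g2, h):
    # sup over h-unit vectors X, Y of |g1(X,Y) - g2(X,Y)|, computed as
    # the spectral norm of h^{-1/2} (g1 - g2) h^{-1/2} (h pos. definite)
    w, V = np.linalg.eigh(h)                  # h = V diag(w) V^T, w > 0
    h_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return np.linalg.norm(h_inv_sqrt @ (g1 - g2) @ h_inv_sqrt, 2)
\end{verbatim}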
\begin{remark}\label{ghstab}
In several places below we will need approximations as in
Proposition \ref{CGapprox}, but with additional properties. In particular, we will
require that for globally hyperbolic metrics there exist approximations with
strictly wider lightcones that are themselves globally hyperbolic.
Extending methods of \cite{Ger70}, it was shown in \cite{BM11} that global hyperbolicity is stable
in the interval topology. Consequently, if $g$ is a smooth, globally hyperbolic Lorentzian metric
then there exists some smooth globally hyperbolic metric $g'\succ g$. In \cite[Th.\ 1.2]{FS11}, the
stability of global hyperbolicity was established for continuous cone structures. It has to be
noted, however, that the definition of global hyperbolicity in \cite{FS11} requires stable causality
(in addition to the compactness of the causal diamonds),
which is stronger than the usual assumption of strong causality, so this result is not directly
applicable in our setting. In \cite{S14} it is proved directly that if $g$ is a continuous
metric that is non-totally imprisoning and has the property that all causal diamonds are compact
(as is the case for any globally hyperbolic $C^{1,1}$-metric by the proof of \cite[Lemma 14.13]{ON83})
then there exists a smooth metric $g'\succ g$ that has the same properties, hence in particular is
causal with compact causal diamonds and thereby globally hyperbolic by
\cite{BS07}.
\end{remark}
\begin{Proposition}\label{CGrefined} Let $(M,g)$ be a $C^0$-spacetime
with a smooth background Riemannian
metric $h$.
\begin{itemize}
\item[(i)] Let $\gec$, $\hat g_\eps$ be as in Proposition \ref{CGapprox}. Then
for any compact subset $K\comp M$ there exists a sequence $\eps_j\searrow 0$ such that
$\hat g_{\eps_{j+1}}\prec \hat g_{\eps_{j}}$ on $K$
(resp.\ $\check g_{\eps_{j}}\prec \check g_{\eps_{j+1}}$ on $K$)
for all $j\in \N_0$.
\item[(ii)] If $g'$ is a continuous Lorentzian metric with $g\prec g'$ (resp.\ $g'\prec g$)
then $\hat g_\eps$ (resp.\ $\gec$) as in Proposition \ref{CGapprox} can be chosen such that
$g\prec \hat g_\eps \prec g'$ (resp.\ $g'\prec \gec \prec g$) for all $\eps$.
\item[(iii)] There exist sequences of smooth Lorentzian metrics $\check g_j\prec g \prec \hat g_{j}$
($j\in \N$)
such that $d_h(\check g_j,g) + d_h(\hat g_j,g)<1/j$ and $\check g_j \prec \check g_{j+1}$ as well
as $\hat g_{j+1}\prec \hat g_{j}$ for all $j\in \N$.
\item[(iv)] If $g$ is $C^{1,1}$ and globally hyperbolic then the $\hat g_\eps$
from Proposition \ref{CGapprox}, as well as the
$\hat g_j$ from (iii) can be chosen globally hyperbolic as well.
\item[(v)] If $g$ is $C^{1,1}$ then the regularizations constructed in (i)--(iv)
can in addition be chosen such that they converge to $g$ in the $C^1$-topology and
such that their second
derivatives are bounded, uniformly in $\eps$ (resp.\ $j$) on compact sets.
\end{itemize}
\end{Proposition}
\begin{proof} (i) We follow the argument of \cite[Lemma 1.5]{S14}: Pick any $\eps_0>0$. Since $g\prec \hat g_{\eps_0}$,
there exists some $\delta>0$ such that $\{X\in TM|_K\mid \|X\|_h=1,\ g(X,X)<\delta\}$ is contained in
$\{X\in TM\mid \hat g_{\eps_0}(X,X)< 0\}$. In fact, otherwise there would exist a convergent sequence
$X_k\to X$ in $TM|_K$ with $\|X_k\|_h=1$, $g(X_k,X_k)<1/k$, and $\hat g_{\eps_0}(X_k,X_k)\ge 0$. But then
$g(X,X)\le 0$ and $\hat g_{\eps_0}(X,X)\ge 0$, contradicting $g\prec \hat g_{\eps_0}$. Next, we
choose $\eps_1<\min(\eps_0,\delta)$, so $d_h(g,\hat g_{\eps_1})<\delta$. Then if $X\in TM|_K$, $\|X\|_h=1$ and
$\hat g_{\eps_1}(X,X)\le 0$, we obtain $g(X,X)< \hat g_{\eps_1}(X,X)+\delta \le \delta$, so $\hat g_{\eps_0}(X,X)<0$,
i.e., $\hat g_{\eps_1} \prec \hat g_{\eps_0}$ on $K$. The claim therefore follows by induction. Analogously one can construct
the sequence $\check g_{\eps_j}$.
\noindent(ii) The proof of (i) shows that for any $K\comp M$ there exists some $\eps_K$ such that for all
$\eps<\eps_K$ we have $g\prec \hat g_\eps \prec g'$ on $K$, and $d_h(g|_K,\hat g_\eps|_K)<\eps$.
Clearly all these properties are stable under shrinking $K$ or $\eps_K$. Therefore, \cite[Lemma 2.4]{KSSV}
shows that there exists a smooth map $(\eps,p)\mapsto \tilde g_\eps(p)$ such that for each fixed $\eps$,
$\tilde g_\eps$ is a Lorentzian metric on $M$ with $g\prec \tilde g_\eps \prec g'$ and such that
$d_h(g,\tilde g_\eps)<\eps$ on $M$. Again the proof for $\gec$ is analogous.
\noindent(iii) This follows from (ii) by induction.
\noindent(iv) By Remark \ref{ghstab} there exists a smooth globally hyperbolic metric $g'\succ g$.
Constructing $\hat g_\eps$ resp.\ $\hat g_j$ as in (ii) resp.\ (iii) then automatically gives
globally hyperbolic metrics (cf.\ \cite[Sec.\ II]{BM11}).
\noindent(v) By \cite[Lemma 2.4]{KSSV}, in the construction given in (ii) above, for any $K\comp M$,
$\tilde g_\eps$ coincides with the original $\hat g_\eps$ on $K$ for $\eps$ sufficiently small.
Thus by (i) and (ii) from Proposition \ref{CGapprox} the $\tilde g_\eps$ (i.e., the new $\hat g_\eps$)
have the desired properties, and analogously for the new $\check g_\eps$.
Concerning (iii), fix any atlas $\mathcal A$ of $M$ and an exhaustive sequence $K_n$ of compact
sets in $M$ with $K_n\sse K_{n+1}^\circ$ for all $n$. Then in the inductive construction
of the $\hat g_j$ we may additionally require that the $C^1$-distance of $g$ and $\hat g_j$
on $K_j$ (as measured with respect to the $C^1$-seminorms induced by the charts in $\mathcal A$)
be less than $1/j$.
Moreover, for any $K_j$ there is some constant $C_j$ bounding
the second derivatives of the $\hat g_\eps$ from (ii) (again w.r.t.\ the charts in $\mathcal A$)
for $\eps$ smaller than some $\eps_j$. It is therefore also possible to have the
second derivatives of $\hat g_k$ bounded by $C_j$ on $K_j$ for all $k\ge j$.
Altogether, this gives the claimed properties for the sequence $(\hat g_j)$, and analogously for $(\check g_j)$.
\end{proof}
\begin{Lemma}\label{approxlemma} Let $(M,g)$ be a $C^{1,1}$-spacetime and
let $h$, $\tilde h$ be Riemannian metrics on $M$ and $TM$, respectively.
Suppose that $\Ric(Y,Y)\ge 0$ for every Lipschitz-continuous $g$-null local vector field $Y$.
Let $K\comp M$ and let $C$, $\delta > 0$. Then there exist $\eta>0$ and $\eps_0>0$
such that for all $\eps<\eps_0$ we have: If $p\in K$ and $X\in T_pM$ is such that $\|X\|_h \le C$
and there exists a $g$-null vector $Y_0\in TM|_K$ with $d_{\tilde h}(X,Y_0) \le \eta$ and $\|Y_0\|_h\le C$ then
$\Ric_\eps(X,X) > -\delta$.
Here $\Ric_\eps$ is the Ricci-tensor corresponding to a metric $\hat g_\eps$ as in Proposition \ref{CGapprox}.
\end{Lemma}
\begin{proof}
We first note that as in the proof of \cite[Lemma 3.2]{hawkingc11} it follows that we may assume
that $M=\R^n$, $\|\,.\,\|_h = \|\,.\,\|$ is the Euclidean norm and we may replace
$\hat g_\eps$ by $g_\eps:=g*\rho_\eps$
(component-wise convolution), and prove the claim for $\Ric_\eps$ calculated from $g_\eps$.
For the distance on $TM\cong \R^{2n}$ we may then simply use
$d(X_p,Y_q) := \|p-q\|+\|X-Y\|$ (which is equivalent to the distance function induced by the
natural product metric on $T\R^n$).
Denote by $E$ the map $v\mapsto (\pi(v),\exp(v))$, defined on an open neighbourhood of the zero
section in $T\R^n$. Let $L$ be a compact neighbourhood of $K$.
Then $E$ is a homeomorphism from some open
neighbourhood $\mathcal U$ of $L\times \{0\}$ in $T\R^n$ onto an open neighbourhood
$\mathcal V$ of $\{(q,q)\mid q\in L\}$
in $\R^n\times \R^n$ and there exists some $r>0$ such that for any $q\in L$
the set $U_r(q):=\exp_q(B_r(0))$ is a totally normal neighbourhood of $q$ and
$\bigcup_{q\in L} (U_r(q)\times U_r(q))\sse {\mathcal V}$
(cf.\ the proof of \cite[Th.\ 4.1]{KSS}). We may assume that $\mathcal U$ is of the form
$\{(q,v)\mid q\in L', \|v\|< a\}$ for some open $L'\supseteq L$ and some $a>0$ and
that $\overline {\mathcal U}$ is contained in the domain of $E$.
It follows from standard ODE theory
(cf.\ \cite[Sec.\ 2]{KSS}) that
\begin{equation}\label{geocon1}
\frac{d}{dt}(\exp^{g_\eps}_q(tv)) \to \frac{d}{dt}(\exp_q(tv)) \quad (\eps\to 0),
\end{equation}
uniformly in $v\in \R^n$ with $\|v\|\le 1$, $t\in [0,a]$, and $q\in L$. Hence for $\eps$ small
and such $v$, $t$ and $q$ we have
\begin{equation}\label{geocon2}
\left\|\frac{d}{dt}(\exp_q(tv))\right\| \le \left\|\frac{d}{dt}(\exp^{g_\eps}_q(tv))\right\| +1.
\end{equation}
Furthermore, for $\eps$ small the operator norms of $T_v\exp_q^{g_\eps}$ are bounded,
uniformly in $\eps$, $v\in \R^n$ with $\|v\|\le a$ and $q\in L$ by some
constant $\tilde C_1$: this follows from (7) in \cite{KSS}, noting that we may assume that
$a$ as above is so small that this estimate is satisfied uniformly in $\eps$,
$\|v\|\le a$, and $q\in L$.
Consequently, for $\eps$ small, $q\in L$, $t\in [0,a]$ and $\|v\|\le 1$ we have
\begin{equation}\label{geocon3}
\left\|\frac{d}{dt}(\exp^{g_\eps}_q(tv))\right\| = \left\|T_{tv}\exp^{g_\eps}_q(v)\right\| \le \tilde C_1.
\end{equation}
It follows from \eqref{geocon2}, \eqref{geocon3} that there exists some $\eps'>0$ such that for any $\eps\in (0,\eps')$,
any $q\in L$, any $v\in \R^n$ with $\|v\|\le a$ and any $t\in [0,1]$ we have
\begin{equation}\label{geocon4}
\left\|\frac{d}{dt}(\exp_q(tv))\right\| = \left\|\left.\frac{d}{ds}\right|_{s=t\|v\|}
\left(\exp_q\left(s\frac{v}{\|v\|}\right)\right)\right\|
\|v\| \le (\tilde C_1 +1)\|v\|.
\end{equation}
Set
\begin{equation}\label{c12def}
C_1 := (\tilde C_1 +1)\sup_{p\in L}\|\Gamma(p)\|,\qquad
C_2 :=\sup_{p\in L}\|\Ric(p)\|.
\end{equation}
Given any $C>0$ and $\delta>0$, pick $\eta_1\in (0,1)$ so small that $6C_2C \eta_1<\delta/2$ and let
\begin{equation}\label{rtildef}
\tilde r := \sup\{\|E^{-1}(p,p')\| \mid p,p' \in U_r(q),\, q\in L\}.
\end{equation}
Then $\tilde r <a$ and by compactness we may suppose that $r$ from above is so small that
$e^{C_1 \tilde r}<2$, $2C_1C\tilde r < \eta_1$, and $U_r(q)\sse L$ for all $q\in K$.
We may then cover $K$ by finitely many such sets $U_{r}(q_1),\dots,U_{r}(q_N)$.
Then $K=\bigcup_{j=1}^N K_j$ with $K_j\comp U_j:=U_{r}(q_j)$ for each $j$.
Set $s:=\min_{1\le j\le N}\text{dist}(K_j,\partial U_j)$
and let $0<\eta<\min(\eta_1,s/2)$.
Next, let $\rho\in {\mathcal D}(\R^n)$ be a standard mollifier, i.e., $\rho\ge 0$,
$\text{supp}(\rho)\sse B_1(0)$ and $\int \rho(x)\,dx=1$. From (3) in \cite{hawkingc11} we know that
\begin{equation}
R_{\eps ik} - R_{ik}*\rho_\eps \to 0 \ \text{ uniformly on compact sets}.
\end{equation}
Hence there exists some $\eps'' \in (0,\eps')$ such that for all $0<\eps<\eps''$ we have
\begin{equation}\label{rest}
\sup_{x\in K} |R_{\eps ik}(x) - R_{ik}*\rho_\eps(x)| < \frac{\delta}{2C^2}.
\end{equation}
To conclude our preparations, we set $\eps_0:=\min(\eps'',s/2)$ and consider any $\eps<\eps_0$.
Now let $p\in K$ and $X\in \R^n$ such that $\|X\| \le C$
and suppose there exists some $g(q)$-null vector
$Y_0\in \R^n$ with $q\in K$,
\begin{equation}
d(X_p,(Y_0)_q) = \|p-q\| + \|X-Y_0\| \le \eta,
\end{equation}
and $\|Y_0\|\le C$.
Then for some $j\in \{1,\dots,N\}$ we have $p\in K_j$, and since $\eta<s/2$ we also have
$q\in U_j$.
Since $g(q)(Y_0,Y_0)=0$,
we may extend $Y_0$ to a Lipschitz-continuous null vector field, denoted by $Y$, on all of $U_j$ by parallel
transporting it radially outward from $q$.
Let $p'\in U_j$ be any point different from $q$ and let $v:=\overrightarrow{qp'}
=E^{-1}(q,p')$. Then $Y(p')=Z(1)$, where
$Z(t) = Y(\exp_q(tv))$ for all $t\in [0,1]$ and $Z$ satisfies the linear ODE
\begin{equation}\label{ode}
\frac{dZ^k}{dt} = -\Gamma_{ij}^k(\exp_q(tv))\frac{d}{dt}(\exp_q^i(tv))Z^j(t)
\end{equation}
with initial condition $Z(0)=Y(q)=Y_0$. By Gronwall's inequality it follows that
\begin{equation}\label{zt}
\|Z(t)\| \le \|Y_0\| e^{t \|\Gamma\|_{L^\infty(U_j)}\sup_{t\in [0,1]}\|\frac{d}{dt}(\exp_q(tv))\| } \quad (t\in [0,1]).
\end{equation}
Therefore, \eqref{geocon4}, \eqref{c12def}, and \eqref{rtildef} give
\begin{equation}\label{yp}
\|Y(p')\|\le \|Y_0\|e^{C_1\tilde r} < 2 \|Y_0\|
\end{equation}
for all $p'\in U_j$. Moreover, for all $t\in [0,1]$ we have
\begin{equation}
\|Z(t)-Y_0\|\le t\cdot \sup_{s\in [0,1]}\left\|\frac{dZ}{ds}(s)\right\|,
\end{equation}
which, due to $\|Y_0\|\le C$, by \eqref{ode}, \eqref{zt}, and \eqref{yp} leads to
\begin{equation}
\|Y(p')-Y_0\|\le \sup_{s\in [0,1]} \left\|\frac{dZ}{ds}(s)\right\|\le C_1 C \tilde r e^{C_1\tilde r}
< 2 C_1 C \tilde r < \eta_1.
\end{equation}
We also extend $X$ to a constant vector field on $U_j$, again denoted by $X$.
Then $\|Y\| < 2C$ by \eqref{yp}, and
\begin{equation}
\|X-Y\|\le \|X-Y_0\| + \|Y_0-Y\| < 2\eta_1
\end{equation}
on $U_j$.
It follows that, on $U_j$, we have the following inequality
\begin{equation}
\begin{split}
|\Ric(X,X)-\Ric(Y,Y)| & = |\Ric(X-Y, X)+\Ric(X-Y,Y)|\\
&\le C_2\|X-Y\|\|X\| + C_2\|X-Y\|\|Y\| \le 6C_2C\eta_1 <\delta/2.
\end{split}
\end{equation}
Since $\Ric(Y,Y)\ge 0$, we conclude that $\Ric(X,X)>-\delta/2$ on $U_j$.
Set
\begin{equation}
\tilde R_{ik}(x) := \left\{
\begin{array}{rl}
R_{ik}(x) & \text{ for } x\in B_{s/2}(p)\\
0 & \text{otherwise}.
\end{array}\right.
\end{equation}
By our assumption and the fact that $\rho\ge 0$ we then have $(\tilde R_{ik}X^iX^k)*\rho_\eps\ge -\delta/2$ on $\R^n$.
Furthermore, since $\eps<s/2$ it follows that $(R_{ik}*\rho_\eps)(p) =
(\tilde R_{ik}*\rho_\eps)(p)$, so \eqref{rest} gives:
\begin{equation}
\begin{aligned}
|R_{\eps ik}(p)X^iX^k - ((\tilde R_{ik}X^iX^k)*\rho_\eps)(p)| &= |(R_{\eps ik}(p) - (R_{ik}*\rho_\eps)(p))X^iX^k| \\
&\le C^2 \sup_{x\in K} |R_{\eps ik}(x) - R_{ik}*\rho_\eps(x)|<\delta/2.\end{aligned}
\end{equation}
It follows that $R_{\eps ik}(p)X^iX^k>-\delta$, as claimed.
\end{proof}
\section{Proof of the main result}\label{mainproof}
Based on the approximation results of the previous section we are now ready to
prove Theorem \ref{penrose}. As a final preliminary result we need:
\begin{Proposition} \label{eepscomp}
Let $(M,g)$ be a $C^{1,1}$-spacetime that is future null complete and suppose
that assumptions (i) and (iii) of Theorem \ref{penrose} are satisfied.
Moreover, suppose that $\hat g_\eps$ ($\eps>0$) is a net of smooth
Lorentzian metrics on $M$
as in Proposition \ref{CGapprox}.
Then there exists some $\eps_0>0$ such that for all $\eps<\eps_0$ the future horismos
$E_\eps^+(\cT )$ of $\cT $ with respect to the metric $\hat g_\eps$ is relatively compact.
\end{Proposition}
\begin{proof} Let $h$ be a smooth background Riemannian metric and define
$$
\tilde T := \{v\in N(\cT )\mid v \text{ future-directed } g\text{-null and } h(v,v)=1\},
$$
where $N(\cT )$ is the $g$-normal bundle of $\cT $ and analogously
$$
\tilde T_\eps := \{v\in N_\eps(\cT )\mid v \text{ future-directed } \hat g_\eps\text{-null and } h(v,v)=1\},
$$
where $N_\eps(\cT )$ is the $\hat g_\eps$-normal bundle of $\cT $.
Moreover, we set (cf.\ Remark \ref{rem1.2}(b))
\begin{equation*}
m:=(n-2)\min_{v\in \tilde T}\conv(v) = (n-2)\min_{v\in \tilde T}g(\pi(v))(H,v) >0
\end{equation*}
and pick $b>0$ such that $(n-2)/b<m$.
Denote by $H_\eps$ the mean curvature vector field of $\cT $ with respect to $\hat g_\eps$, and
similarly for $\conv_\eps$. Then $H_\eps\to H$ uniformly on $\cT $ and we claim that
for $\eps$ sufficiently small and all $v\in \tilde T_\eps$ we have $\conv_\eps(v)>1/b$.
To see this, suppose to the contrary that there exist a sequence $\eps_k\searrow 0$ and
vectors $v_k\in \tilde T_{\eps_k}$ such that $\hat g_{\eps_k}(\pi(v_k))(H_{\eps_k},v_k)\le 1/b$
for all $k$. By compactness we may suppose without loss of generality that $v_k\to v$
as $k\to \infty$. Then $v\in \tilde T$ but $\conv(v)\le 1/b$, a contradiction.
Now we show that there exists some $\eps_0>0$ such that for all $\eps<\eps_0$
we have
\begin{equation}\label{relcomp}
E_\eps^+(\cT ) \sse \exp^{\hat g_{\eps}}(\{sv\mid s\in [0,b],\, v\in \tilde T_{\eps}\}) \comp M.
\end{equation}
Again arguing by contradiction, suppose that there exists a sequence $\eps_j\searrow 0$ and
points $q_j\in E_{\eps_j}^+(\cT )\setminus \exp^{\hat g_{\eps_j}}(\{sv\mid s\in [0,b],\, v\in \tilde T_{\eps_j}\})$.
By \cite[Th.\ 10.51, Cor.\ 14.5]{ON83}, for each $j\in \N$ there exists a
$\hat g_{\eps_j}$-null-geodesic $\gamma_j$ from $\cT $ to $q_j$ which is $\hat g_{\eps_j}$-normal to $\cT $ and
has no focal point before $q_j$. Let
$\gamma_j(t)=\exp^{\hat g_{\eps_j}}(t\tilde v_j)$
with $\tilde v_j\in \tilde T_{\eps_j}$.
Let $t_j$ be such that $\gamma_j(t_j)=q_j$. Then by our indirect assumption, $t_j>b$ for all $j$.
In particular, each $\gamma_j$ is defined at least on $[0,b]$.
By compactness, we may assume that $\tilde v_j\to \tilde v$ as $j\to \infty$. Then $\tilde v\in \tilde T$, and
we set $\gamma(t):=\exp^g(t\tilde v)$. As $(M,g)$ is future-null complete,
$\gamma$ is defined on $[0,\infty)$. It now follows from standard ODE-results
(cf.\ \cite[Sec.\ 2]{KSS})
that $\gamma_j\to \gamma$ in the $C^1$-topology on $[0,b]$.
In particular, $\gamma_j'(t)\to \gamma'(t)$ uniformly on $[0,b]$. Pick $C>0$
and a compact set $K\comp M$ such that $\|\gamma_j'(t)\|_h\le C$
and $\gamma_j(t)\in K$ for all $t\in [0,b]$ and all $j\in \N$.
Then by Lemma \ref{approxlemma}, for any $\delta>0$ there exists some $j_0\in \N$ such that
$\Ric_{\eps_j}(\gamma_j'(t),\gamma_j'(t))>-\delta$ for all $j\ge j_0$ and all $t\in [0,b]$.
Denoting by $\theta_j$ the expansion of $\gamma_j$ we have by the Raychaudhuri equation
\begin{equation}\label{deltaest}
\frac{d(\theta_j^{-1})}{dt}\geq\frac{1}{n-2}+\frac{1}{\theta_j^2}
\Ric_{\hat g_{\eps_j}}({\gamma}'_j,{\gamma}'_j) > \frac{1}{n-2}-\frac{\delta}{\theta_j^2}.
\end{equation}
At this point we fix $\delta>0$ so small that
\begin{equation}\label{bc}
a:=\frac{n-2}{m} < \frac{n-2}{\alpha m} <b,
\end{equation}
where $\alpha:= 1 - (n-2)m^{-2}\delta$ and choose $j_0$ as above for this $\delta$.
For $j\ge j_0$ let $m_j:=(n-2)\min_{v\in \tilde T_{\eps_j}}\conv_{\varepsilon_j}(v)$, then $m_j\to m$ ($j\to \infty$)
and $\alpha_j:= 1 - (n-2)m_j^{-2}\delta\to \alpha$ ($j\to \infty$), so for $j$ large, \eqref{bc} implies
\begin{equation}\label{9}
a<\frac{n-2}{\alpha_j m_j} < b.
\end{equation}
Consequently, choosing $j$ so large that $\alpha_j>0$, the right hand side of \eqref{deltaest} is
strictly positive at $t=0$.
Thus $\theta_j^{-1}$ is initially strictly increasing and $\theta_j(0)=-(n-2)\conv_{\eps_j}(\gamma_j'(0))\le -m_j<0$, so
from \eqref{deltaest} we conclude that $\theta_j^{-1}(t)\in [-m_j^{-1},0)$
on its entire domain of definition. Hence $\theta_j$ has no zero on
$[0,b]$, whereby $\theta_j^{-1}$ exists on all of $[0,b]$.
From this, using \eqref{deltaest}, it follows that $\theta_j^{-1}(t)
\ge f_j(t) := -m_j^{-1} + t \frac{\alpha_j}{n-2}$
on $[0,b]$. In particular this means that $\theta_j^{-1}$ must go to zero at or before the zero of $f_j$,
i.e., there exists some $\tau\in (0,\frac{n-2}{\alpha_j m_j})$ such that $\theta_j^{-1}(t)\to 0$ as $t\to \tau$.
But for $j$ sufficiently large \eqref{9} implies that $\theta_j^{-1}\to 0$
within $[0,b]$. However, since
$\gamma_j$ does not incur a focal point between $t=0$ and $t=t_j>b$,
$\theta_j$ is smooth, hence bounded, on $[0,b]$, a contradiction.
\end{proof}
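To illustrate the focusing mechanism at the heart of this proof, one may
integrate the worst case of \eqref{deltaest}, i.e.\
$\theta'=-\theta^2/(n-2)-\delta$, numerically. A minimal sketch (with
hypothetical parameter values; this is an illustration, not part of the
proof) confirming that $\theta$ diverges at or before $t=(n-2)/(\alpha m)$:
\begin{verbatim}
n, m, delta = 4, 1.0, 0.05           # hypothetical values
alpha = 1.0 - (n - 2) * delta / m**2
t_star = (n - 2) / (alpha * m)       # latest possible blow-up time

theta, t, dt = -m, 0.0, 1e-5         # theta(0) <= -m for a trapped surface
while theta > -1e8:                  # worst-case Raychaudhuri equation
    theta += dt * (-theta**2 / (n - 2) - delta)
    t += dt
print(f"blow-up near t = {t:.3f} <= bound {t_star:.3f}")
\end{verbatim}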
\begin{remark}\label{minass} As an inspection of the proofs of Lemma \ref{approxlemma} and Proposition
\ref{eepscomp} shows, both results remain valid for any approximating net $g_\eps$
(or sequence $g_j$) of metrics that satisfy properties (i) and (ii) from Proposition \ref{CGapprox}.
In particular, this applies to the approximations $\check g_\eps$ from the inside.
For the proof of the main result, however, it will be essential to use approximations
from the outside that themselves are globally hyperbolic.
\end{remark}
\noindent{\bf Proof of Theorem \ref{penrose}:}
Suppose, to the contrary, that $M$ is future null complete.
Proposition \ref{eepscomp} applies, in particular, to a net $\hat g_\eps$ as in
Proposition \ref{CGrefined} (iv), approximating $g$ from the outside and such that each $\hat g_\eps$
is itself globally hyperbolic.
Fix any $\eps<\eps_0$, such that by Proposition \ref{eepscomp} $E^+_\eps(\cT )$
is relatively compact. Then since $\hat g_\eps$ is
globally hyperbolic, smooth causality theory (cf.\ the proof of \cite[Th.\ 14.61]{ON83})
implies that $E_{\eps}^+(\cT ) = \partial J^+_{\hat g_{\eps}}(\cT )$
is a topological hypersurface that is $\hat g_{\eps}$-achronal.
We obtain that $E_{\eps}^+(\cT )$ is compact and since $g\prec \hat g_{\eps}$, it
is also $g$-achronal.
As in the proof of \cite[Th.\ 14.61]{ON83} let now $X$ be a smooth $g$-timelike vector field
on $M$ and denote by $\rho: E_\eps^+(\cT )\to S$ the map that assigns to each $p\in E_\eps^+(\cT )$
the intersection of the maximal integral curve of $X$ through $p$ with $S$. Then due to the
achronality of $E_\eps^+(\cT )$, $\rho$ is injective, so by invariance of domain it
is a homeomorphism of $E_\eps^+(\cT )$ onto an open subset of $S$. By compactness this set is
also closed in $S$. But also in the $C^{1,1}$-case, any Cauchy hypersurface is connected
(the proof of \cite[Prop.\ 14.31]{ON83} also works in this regularity).
Thus $\rho(E_\eps^+(\cT ))=S$, contradicting the fact that $S$ is non-compact.
This concludes the proof of Theorem \ref{penrose}. \hspace*{\fill}$\Box$\medskip
We also have the following analogue of \cite[Th.\ 14.61]{ON83}:
\begin{Theorem}\label{penrose_alt} Let $(M,g)$ be an $n$-dimensional $C^{1,1}$-spacetime.
Assume that
\begin{itemize}
\item[(i)] For any Lipschitz-continuous local null vector field $X$,
$\Ric(X,X)\ge 0$.
\item[(ii)] $M$ possesses a Cauchy-hypersurface $S$.
\item[(iii)] There exists a compact spacelike achronal submanifold $\cT $
in $M$ of codimension $2$ with past-pointing timelike mean curvature vector field $H$.
\item[(iv)] $M$ is future null complete.
\end{itemize}
Then the future horismos of $\cT $, $E^+(\cT )$, is a compact Cauchy-hypersurface in $M$.
\end{Theorem}
\begin{proof} Since $(M,g)$ is globally hyperbolic,
\cite[Prop.\ A.28]{hawkingc11} implies that the causality relation $\le$ on $M$
is closed.
Thus since $\cT $ is compact it follows that $J^+(\cT )$ is closed. Also, by \cite[Cor.\ 3.16]{KSSV},
$J^+(\cT )^\circ=I^+(\cT )$, so $E^+(\cT )=\partial J^+(\cT )$. It is thereby the topological boundary of a
future set and the proof of \cite[Cor.\ 14.27]{ON83} carries over to the $C^{1,1}$-setting
(using \cite[Th.\ A.1, Prop.\ A.18]{hawkingc11}) to show
that $E^+(\cT )$ is a closed achronal topological hypersurface. It
remains to show that any inextendible
timelike curve intersects it.
Suppose to the contrary that there exists some inextendible
timelike (locally Lipschitz) curve $\tilde \alpha$ that is disjoint from
$E^+(\cT )$. Then as in (the proof of) \cite[Lemma A.10]{hawkingc11} we may also
construct an inextendible timelike $C^2$-curve $\alpha$
that does not meet $E^+(\cT )$ (round off the breakpoints of the piecewise
geodesic obtained in \cite[Lemma A.10]{hawkingc11} in a timelike way).
By \cite[Ex.\ 14.11]{ON83}, since $(M,g)$ is strongly causal, $\alpha$ is an
integral curve of a timelike $C^1$-vector field $X$ on $M$.
Next, let $\hat g_j$ be an approximating net as in
Proposition \ref{CGrefined} (iv),(v) (to which thereby all arguments from the proof
of Theorem \ref{penrose} apply, cf.\ Remark \ref{minass}). Denote by $I^+_j(\cT )$, $J^+_j(\cT )$,
$E^+_j(\cT )$ the chronological and causal future, and the future horismos, respectively, of $\cT $
with respect to $\hat g_j$.
Set $K:=\{sv\mid s\in [0,b],\, v\in TM|_\cT ,\, \|v\|_h=1\}\comp TM$,
where $h$ is some complete smooth Riemannian background metric on $M$. It then follows
from the locally uniform convergence of $\exp^{\hat g_j}$ to $\exp^g$, together with
\eqref{relcomp} that there exists some $j_0\in \N$ such that for $j\ge j_0$ we have
\begin{equation}
\partial J_j^+(\cT ) = E_j^+(\cT )\sse \exp^{\hat g_j}(K)\sse
\overline{\{p\in M\mid \text{dist}_h(p,\exp^g(K))\le 1\}}=:L\comp M.
\end{equation}
Let the map $\rho$ from the proof of Theorem \ref{penrose} be constructed from the
vector field $X$ from above. Then by the proof of Theorem \ref{penrose}
we may additionally suppose that $j_0$ is such that, for
each $j\ge j_0$, $E_j^+(\cT )$ is a compact achronal topological hypersurface
in $(M,g)$ that is homeomorphic
via $\rho$ to $S$. Therefore $\alpha$ (which is timelike for all $\hat
g_j$) intersects every $E^+_j(\cT )$ ($j\ge j_0$) precisely
once. Let $q_j$ be the intersection
point of $\alpha$ with $\partial J_{j}^+(\cT )=E^+_{j}(\cT )$.
We now pick $t_j$ such that $q_j=\alpha(t_j)$ for all $j\ge j_0$. Each
$q_j$ is contained in $L$, so since $(M,g)$ is globally hyperbolic, hence
non-partially-imprisoning (as already noted in Rem.\ \ref{ghstab}, the proof of \cite[Lemma 14.13]{ON83}
carries over verbatim to the $C^{1,1}$-case),
it follows that $(t_j)$ is a bounded sequence in $\R$ and without loss of
generality we may suppose that
in fact $t_j\to \tau$ for some $\tau \in \R$. Then also $q_j=\alpha(t_j)\to
q=\alpha(\tau)\in L$.
As $q_j\in \partial J_{j}^+(\cT )$ there exist $p_j\in \cT $ and $\hat g_{j}$-causal curves
$\beta_j$ from $p_j$ to $q_j$ (in fact, the $\beta_j$ are $\hat g_j$-normal
$\hat g_j$-null geodesics). Again
without loss of generality we may assume that $p_j\to p\in \cT $.
By \cite[Th.\ 3.1]{Minguzzicurves} (or \cite[Prop.\ 2.8.1]{Chrusciel_causality}) there exists an
accumulation curve $\beta$ of the sequence $\beta_j$ such that $\beta$ goes from $p$ to $q$.
Moreover, since $\hat g_{j+1}\prec \hat g_j$ for all $j$, each
$\beta_k$ is $\hat g_{j}$-causal for all $k\ge j$. Therefore, $\beta$
is $\hat g_{j}$-causal for each $j$. Thus by (the proof of) \cite[Prop.\ 1.5]{CG},
$\beta$ is $g$-causal and we conclude that $q=\alpha(\tau)\in J^+(\cT )$. If we had $q\in I^+(\cT )$
then for some $j_1$ we would also have $q_j\in I^+(\cT )\sse I^+_{j}(\cT )$ for all $j\ge j_1$
(using \cite[Cor.\ 3.12]{KSSV}). But this
is impossible since $q_j\in \partial J^+_{j}(\cT )=E^+_{j}(\cT )$.
Thus
\begin{equation}
q=\alpha(\tau)\in E^+(\cT ),
\end{equation}
a contradiction to our initial assumption. We conclude that $E^+(\cT )$ is indeed a Cauchy-hypersurface in $M$.
Finally, as in the proof of Theorem \ref{penrose}, the map $\rho$ is a homeomorphism
from $E_j^+(\cT )$ onto $E^+(\cT )$ (for $j\ge j_0$), so $E^+(\cT )$ is compact.
\end{proof}
In particular, as in \cite[Cor.\ B of Th.\ 14.61]{ON83} it follows that if (i), (ii)
and (iii) from Theorem \ref{penrose_alt} hold and there exists some inextendible
causal curve that does not meet $E^+(\cT )$ then $(M,g)$ is future null incomplete.
Indeed by \cite[Lemma A.20]{hawkingc11} the existence of such a curve shows that
$E^+(\cT )$ cannot be a Cauchy-hypersurface.
\medskip\noindent
{\bf Acknowledgements.} We would like to thank Clemens S\"amann for helpful discussions.
This work was supported by FWF-projects P23714 and P25326.
|
1,477,468,751,148 | arxiv | \section{Introduction}
A few years ago Jacobs et al.\ \cite{Jacobs2015}, inspired by the advances in creating nanoparticles that interact highly specifically by leveraging the extreme selectivity of base-pairing interactions in DNA, introduced the notion of self-assembling systems with `addressable complexity', i.e.\ the creation of regular structures in which one has full control over the spatial arrangement of different particle types. Stylized prototypes of such systems are multicomponent lattice gases with isotropic interactions in which one is free to choose the strength, sign, selectivity and range(s) of the interparticle interactions.
Arguably the simplest system of this type is the equal mole fraction binary lattice gas, which can be mapped onto the field-free (= equal chemical potential) Ising model. If only nearest neighbour (nn) interactions with coupling constant $J_1$ are taken into account, the results depend strongly on the underlying lattice structure. On the triangular lattice, when $J_1 >0$ we obtain a homogeneous ferromagnetic low-temperature phase (F), corresponding to a complete demixing of the particles, while for $J_1 <0$ no long-range order develops and the system is caught in a finite-entropy ground state \cite{Wannier1950Antiferromagnetism.Net}. The square lattice, however, is bipartite and hence not frustrated by a $J_1 <0$ coupling, and exhibits a regular anti-ferromagnetic (AF) checkerboard phase at low temperatures.
Thus, if one wishes to observe more complex ordering patterns on the square lattice, longer-ranged interactions are required, and specifically those that introduce frustration, effectively preempting the period-2 repeat of the AF state. Hence, starting in the 1970s, a long line of authors has studied the so-called frustrated Ising model obtained by introducing anti-ferromagnetic ($J_2 <0$) next-nearest-neighbour (nnn) interactions on the square lattice \cite{Nightingale1977Non-universalitySystems,Swendsen1979Monte2,Oitmaa1981TheInteractions,Binder1980PhaseInteractions,Landau1980PhaseInteractions,Landau1985PhaseCouplings,Moran-Lopez1993First-orderInteractions}, with more recent work appearing in the past decade or so \cite{dosAnjos2008PhaseLattice,Kalz2008PhaseInteractions,Kalz2011AnalysisLattice}. As this type of interaction penalizes equal spins across the diagonal of each square unit cell, it frustrates the nn-interactions independently of their sign.
However, increasing the range of interactions even further allows the degree of frustration also to be increased. Indeed, very general arguments suggest that in order to obtain the maximum complexity periodic patterns on a given lattice structure all symmetries implied by the point-group of the lattice must be suppressed by the interactions \cite{Tindemans2010b}. For the square lattice, this implies that also next-next-nearest-neighbor couplings (nnnn) need to be taken into account, as shown in Fig.\ \ref{fig:local}. Clearly, an anti-ferromagnetic nnnn interaction ($J_3 <0$) adds yet another level of frustration as it potentially frustrates \emph{both} the nn- and nnn- bonds independently of the sign of their interaction. In fact this latter extension was already studied actively a couple of decades back purely for its theoretical interest \cite{Kanamori1983ExactLattice,Brandt1983Ground-stateInteractions,Landau1985PhaseCouplings}. Strikingly, interest in this nnnn-model, also known as the $J_1-J_2-J_3$ model, was revived in the past decade with a few theoretical studies appearing \cite{Kassan-Ogly2015IsingInteractions,Liu2016RoleFrustration}, as well as a significant paper showing that a model with up to third-neighbor coupling is actually relevant to understanding the magnetic origin of high-$T_{c}$ superconductivity in a class of iron chalcogenides \cite{Glasbrenner2015EffectChalcogenides}.
Reviewing these works, however, reveals that we are far from having a complete picture of the phase behavior of these systems. Most of the effort was devoted to understanding the structure of the ground states, using either the method of inequalities introduced by Kanamori \cite{Kanamori1966MagnetizationSystem} or direct enumeration. These analyses are, however, all limited by implicit or explicit assumptions on the size of the repeating patterns considered. Characteristically, Landau and Binder \cite{Landau1985PhaseCouplings} remark \textquotedblleft\textit{Since the phase diagram is expected to be very complicated ("devil's staircase" of phases), no attempt to include these phases has been made}\textquotedblright. Where the behavior at finite temperature is concerned, the main tool has been Monte Carlo simulations, but again the attention was mostly devoted to the nature of the transitions towards certain specific states, or to the behavior in response to external fields.
Driven by the question to what extent one can `design' specific magnetisation patterns on the square lattice, our aim here is to provide a fresh perspective on the phase behavior of the field-free nnnn-model in a way that systematically allows the consideration of phases of increasing complexity. We do this in the framework of mean-field theory, which allows us to exactly formulate the criteria if and when the high-temperature disordered phase becomes unstable to magnetization modes belonging to periodicities with increasing unit cell size $N$. This analysis reveals that the region in phase space where the disordered phase is stable is a convex polytope whose complexity increases as we increase $N$. Each of the faces of this polytope defines the values of the coupling constants for which a specific equivalence class of magnetization modes is spontaneously excited. We probe the structure of this polytope as a function of the unit cell size of the periodicities included, which provides a fingerprint of the complexity of the predicted phase space. On the basis of this analysis, we are able to analytically pass to the limit $N\rightarrow\infty$ to give a closed form description of the order-disorder surface in the thermodynamic limit. This shows that in the strongly-frustrated region of phase space $J_3 <0$, the mean-field theory predicts a `devil's surface'-like structure for the modes developing from the disordered phase, in which in an arbitrarily small neighborhood of any set of coupling parameters one can find phases of arbitrary spatial complexity becoming stable.
While the mean field results are quantitatively at best a severe approximation to the true phase boundaries, its predictions regarding the possible symmetry breaking patterns, however, are potentially more robust. We explore this latter premise by performing MC simulations with the appropriate finite periodic boundary conditions along rays in phase space, corresponding to decreasing temperature at fixed coupling constants, that pass through the centers of the predicted mode instability faces. These show that the mean-field analysis consistently correctly predicts the dominant mode first appearing in the ordered region in the cases considered.
The structure of the paper is as follows: In Section \ref{sec:model} we set up the model. The mean field treatment is discussed in Section \ref{sec:mft}. The bifurcation analysis is presented in Section \ref{sec:bifurcation}, which introduces our main object of interest, the disorder polytope. In Section \ref{sec:geometry} we first discuss the phenomenology of the disorder polytope (Section \ref{sec:phenomenology}), then discuss some of its specific features (Section \ref{sec:specific-features}), and finally take the limit $N\rightarrow\infty$ (Section \ref{sec:limit}) leading to our major result, the prediction of the full order-disorder surface. Finally, in Section \ref{sec:simulations} we show using Monte Carlo simulations that for finite $N$, implemented through periodic boundary conditions, the mean-field analysis correctly predicts the bifurcating modes.
\section{Model}
\label{sec:model}
We consider the 2-dimensional square lattice $\mathrm{L}=\left\{
\mathrm{z}=\left( z^{1},z^{2}\right) |z^{1},z^{2}\in\mathbb{Z}\right\} .$
Throughout, we will use lower case roman letters to denote sites of the lattice,
and capital roman letters to denote sets of sites. We also make use of the
fact that the square lattice forms a group under vector addition, which is
generated by the basis vectors $\mathrm{e}_{1}=\left( 1,0\right) $ and
$\mathrm{e}_{2}=\left( 0,1\right) $ and can be equipped with an inner
product $\left\langle \mathrm{z},\mathrm{z}^{\prime}\right\rangle
=z^{1}z^{1\prime}+z^{2}z^{2\prime}.$ The sites of the lattice are occupied by
Ising spins $\sigma_{\mathrm{z}}\in\left\{ -1,1\right\} .$ To denote a spin
configuration on a set of sites $\mathrm{C,}$ we use the notation
$\sigma_{\mathrm{C}}.$ We define the \emph{range} $r\left( \mathrm{z,z}%
^{\prime}\right) $ between two distinct sites as the index of the Euclidean
distance $\left\vert \mathrm{z-z}^{\prime}\right\vert $ in the ordered list of
distances between sites of the lattice, with $r=1$ denoting nearest neighbours
($\left\vert \mathrm{z-z}^{\prime}\right\vert =1$), $r=2$ next nearest
neighbours ($\left\vert \mathrm{z-z}^{\prime}\right\vert =\sqrt{2}$), $r=3$
next next nearest neighbours ($\left\vert \mathrm{z-z}^{\prime}\right\vert
=2$) and so on. We focus on the field-free range 3 Ising model, defined by the
Hamiltonian%
\begin{equation}
\mathcal{H}\left( \sigma_{\mathrm{L}}\right) =-J_{1}\sum_{r\left(
\mathrm{z,z}^{\prime}\right) =1}\sigma_{\mathrm{z}}\sigma_{\mathrm{z}%
^{\prime}}-J_{2}\sum_{r\left( \mathrm{z,z}^{\prime}\right) =2}%
\sigma_{\mathrm{z}}\sigma_{\mathrm{z}^{\prime}}-J_{3}\sum_{r\left(
\mathrm{z,z}^{\prime}\right) =3}\sigma_{\mathrm{z}}\sigma_{\mathrm{z}%
^{\prime}}, \label{eq:H}%
\end{equation}
where the minus sign in front of the \emph{coupling constants} $J_{1},J_{2}$
and $J_{3}$ is conventional. Further on, we will make regular use of the
range $r$ neighborhoods of the origin
\begin{align}
\mathrm{N}_{1} & =\left\{ \mathrm{e}_{1},-\mathrm{e}_{1},\mathrm{e}%
_{2},-\mathrm{e}_{2}\right\}, \\
\mathrm{N}_{2} & =\left\{ \mathrm{e}_{1}+\mathrm{e}_{2},-\mathrm{e}%
_{1}-\mathrm{e}_{2},\mathrm{e}_{1}-\mathrm{e}_{2},-\mathrm{e}_{1}%
+\mathrm{e}_{2}\right\}, \\
\mathrm{N}_{3} & =\left\{ 2\mathrm{e}_{1},-2\mathrm{e}_{1},2\mathrm{e}%
_{2},-2\mathrm{e}_{2}\right\},
\end{align}
which we show in Figure \ref{fig:local}.
\begin{figure}[ptb]
\centering
\includegraphics{images/neighborhoods.pdf} \caption{The interaction
neighborhoods of the origin site ($0$: grey site) in the range-3 Ising model
on the square lattice. $\mathrm{N}_{1}$: red sites, $\mathrm{N}_{2}$: green
sites, $\mathrm{N}_{3}$: blue sites. An anti-ferromagnetic nnnn-bond (blue line) between two sites, frustrates the spin arrangement both along the shortest nn-paths (red lines) and nnn-paths (green lines) that connect them.}%
\label{fig:local}%
\end{figure}
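For concreteness, the Hamiltonian (\ref{eq:H}) is readily evaluated on a
finite cell with periodic boundary conditions. The following minimal sketch
(an illustration only; \texttt{sigma} is a hypothetical two-dimensional
array of $\pm1$ spins) counts each bond exactly once by pairing every site
with half of its neighbours:
\begin{verbatim}
import numpy as np

def energy(sigma, J1, J2, J3):
    # nn, nnn and nnnn bond sums on a periodic square lattice
    nn   = sigma * np.roll(sigma, 1, 0) + sigma * np.roll(sigma, 1, 1)
    nnn  = (sigma * np.roll(np.roll(sigma, 1, 0),  1, 1)
          + sigma * np.roll(np.roll(sigma, 1, 0), -1, 1))
    nnnn = sigma * np.roll(sigma, 2, 0) + sigma * np.roll(sigma, 2, 1)
    return -J1 * nn.sum() - J2 * nnn.sum() - J3 * nnnn.sum()
\end{verbatim}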
\section{Mean field theory}
\label{sec:mft}
Our approach to understanding the phase behaviour of the model
(\ref{eq:H}) is through mean-field theory (MFT). Although MFT is a drastic
approximation, and a fortiori so in lower dimensions, it nevertheless
generically is a good guide into the possible phases a system can display, as these are to a large extent determined by universal symmetry relations (see e.g.\ \cite{Boccara1976SymetriesDordre,Toledano1987TheTransitions}). MFT is typically formulated as a set of self-consistent equations for the single
site spin probabilities%
\begin{equation}
P_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) =\frac{e^{-\beta
V_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) }}{\sum_{\sigma_{\mathrm{z}%
}}e^{-\beta V_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) }},
\label{eq:MFT}%
\end{equation}
where $\beta=1/k_{B}T$ is the inverse temperature and the effective mean field
$V_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) $ itself depends on the
spin probabilities on each site with which the spin $\sigma_{\mathrm{z}}$ interacts%
\begin{multline}
V_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) =-\sigma_{\mathrm{z}%
}\left\{ J_{1}\sum_{\mathrm{n}_{1}\mathrm{\in N}_{1}}\sum_{\sigma
_{\mathrm{z+n}_{1}}}\sigma_{\mathrm{z+n}_{1}}P_{\mathrm{z+n}_{1}}\left(
\sigma_{\mathrm{z+n}_{1}}\right) +\right. \\
\left. J_{2}\sum_{\mathrm{n}_{2}\mathrm{\in N}_{2}}\sum_{\sigma
_{\mathrm{z+n}_{2}}}\sigma_{\mathrm{z+n}_{2}}P_{_{\mathrm{z+n}_{2}}}\left(
\sigma_{\mathrm{z+n}_{2}}\right) +J_{3}\sum_{\mathrm{n}_{3}\mathrm{\in N}%
_{3}}\sum_{\sigma_{\mathrm{z+n}_{3}}}\sigma_{\mathrm{z+n}_{3}}%
P_{_{\mathrm{z+n}_{3}}}\left( \sigma_{\mathrm{z+n}_{3}}\right) \right\}.
\end{multline}
The averages over the spin values in this expression can all be succinctly
summarized using the definition of the site magnetisation%
\begin{equation}
m\left( \mathrm{z}\right) =\sum_{\sigma_{\mathrm{z}}}\sigma_{\mathrm{z}%
}P_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right), \label{eq:mdef}%
\end{equation}
which allows us to reformulate (\ref{eq:MFT}) as%
\begin{equation}
m\left( \mathrm{z}\right) =\frac{\sum_{\sigma_{\mathrm{z}}}\sigma
_{\mathrm{z}}e^{-W_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) }}%
{\sum_{\sigma_{\mathrm{z}}}e^{-W_{\mathrm{z}}\left( \sigma_{\mathrm{z}%
}\right) }}, \label{eq:MFTm}%
\end{equation}
with%
\begin{equation}\label{eq:W_z}
W_{\mathrm{z}}\left( \sigma_{\mathrm{z}}\right) =-\sigma_{\mathrm{z}%
}\left\{ K_{1}\sum_{\mathrm{n}_{1}\mathrm{\in N}_{1}}m\left( \mathrm{z+n}%
_{1}\right) +K_{2}\sum_{\mathrm{n}_{2}\mathrm{\in N}_{2}}m\left(
\mathrm{z+n}_{2}\right) +K_{3}\sum_{\mathrm{n}_{3}\mathrm{\in N}_{3}}m\left(
\mathrm{z+n}_{3}\right) \right\}, %
\end{equation}
where we have absorbed the common positive prefactor $\beta$ into the now
dimensionless coupling constants $K_{r}=\beta J_{r}$.
In anticipation of the further developments below, it will turn out to be convenient to consider the triplets of possible values of the coupling constants $K_{1},K_{2}$ and $K_{3}$ as a linear vector space, whose elements we will denote by bold symbols, viz.
$\mathbf{K}=\left( K_{1},K_{2},K_{3}\right)$. To further compactify notation, we also introduce summed neighborhood magnetizations
\begin{equation}
M_{r}\left( \mathrm{z}\right) =\sum_{\mathrm{n}_{r}\mathrm{\in N}_{r}%
}m\left( \mathrm{z+n}_{r}\right)
\end{equation}
and define $\mathbf{M}\left( \mathrm{z}\right) =\left( M_{1}\left( \mathrm{z}%
\right) ,M_{2}\left( \mathrm{z}\right) ,M_{3}\left( \mathrm{z}\right)
\right)$, so that $W_{\mathrm{z}}(\sigma_{\mathrm{z}})=-\sigma_{\mathrm{z}} \mathbf{K}\cdot \mathbf{M}(\mathrm{z})$, where $\cdot$ is the Euclidean inner product.
Using these definitions, we can simplify Eq.~(\ref{eq:MFTm}) to take on the familiar form%
\begin{equation}
m\left(\mathrm{z}\right) =\tanh{\left(\mathbf{K}\cdot \mathbf{M}(\mathrm{z})\right)} \label{eq:MFTSCm},%
\end{equation}
which constitutes an (infinite) set of coupled nonlinear
self-consistency equations for the magnetizations $\left\{m\left(
\mathrm{z}\right) \right\} _{\mathrm{z}\in \mathrm{L}}$.
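We note in passing that, on a finite $L\times L$ cell with periodic boundary
conditions, Eqs.~(\ref{eq:MFTSCm}) are amenable to damped fixed-point
iteration. A minimal sketch (an illustration only; the cell size, damping
parameter and seed amplitude are arbitrary choices, not prescribed by the
theory):
\begin{verbatim}
import numpy as np

def mft_iterate(K1, K2, K3, L=12, n_iter=2000, mix=0.5, seed=0):
    # damped fixed-point iteration started from a small random profile
    rng = np.random.default_rng(seed)
    m = 0.01 * rng.standard_normal((L, L))
    for _ in range(n_iter):
        M1 = sum(np.roll(m, s, ax) for s in (1, -1) for ax in (0, 1))
        M2 = sum(np.roll(np.roll(m, s1, 0), s2, 1)
                 for s1 in (1, -1) for s2 in (1, -1))
        M3 = sum(np.roll(m, 2 * s, ax) for s in (1, -1) for ax in (0, 1))
        m = (1 - mix) * m + mix * np.tanh(K1 * M1 + K2 * M2 + K3 * M3)
    return m
\end{verbatim}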
\section{Bifurcation analysis}
\label{sec:bifurcation}
We do not attempt to solve Eqs.~(\ref{eq:MFTSCm}) in all generality, but focus
on understanding the phases that develop from the high-temperature disordered
phase upon a temperature quench. First note that infinite temperature $\left(
\beta=0\right) $ corresponds to the origin $\mathbf{K}=0$ of the
3-dimensional phase space of the model. It is easy to see that in this point
all spins are decoupled as the effective field vanishes, and we have $m\left(
\mathrm{z}\right) =0.$ Moreover, by the same token, the disordered state with
$m\left( \mathrm{z}\right)=0$ for which $\mathbf{M}(\mathrm{z})=0$ is in fact a solution for any value
of $\mathbf{K}$. We now inquire at which values of
$\mathbf{K}$ Eq.\ (\ref{eq:MFTSCm}) can support a non-zero
solution. To that end we expand Eq. (\ref{eq:MFTSCm}) to first order in the
magnetisations, yielding%
\begin{equation}\label{eq:bif}%
m\left( \mathrm{z}\right) = \mathbf{K}\cdot \mathbf{M}(\mathrm{z}).
\end{equation}
The values of the coupling constants $\mathbf{K}$ for which this set of equations
admits a non-zero solution define the set of \emph{order-disorder points}, at which an ordered solution to the self-consistency equation branches off from the disordered solution.
Since $\mathbf{M}(\mathrm{z})$ (cf.\ Eq.~(\ref{eq:W_z})) involves the magnetisation of all sites in the interaction neighborhood of $\mathrm{z}$, the magnetisations of all sites remain coupled even in the linear approximation defining the bifurcation equation. To proceed we therefore take the Fourier transform of (\ref{eq:bif}) with respect to lattice-compatible wave vectors, which generically are of the form
\begin{equation}
\mathrm{q}=2\pi\left( \frac{j_{1}}{n_{1}},\frac{j_{2}}{n_{2}}\right)
,\;j_{i}\in\mathbb{Z},n_{i}\in\mathbb{N}^{+},
\end{equation}
to obtain%
\begin{equation}
\hat{m}\left(\mathrm{q}\right) = \mathbf{K\cdot F}\left(
\mathrm{q}\right)\,\hat{m}\left(\mathrm{q}\right), \label{eq:bifq}%
\end{equation}
where $\mathbf{F}\left( \mathrm{q}\right) \equiv\left( F_{1}\left(
\mathrm{q}\right) ,F_{2}\left( \mathrm{q}\right) ,F_{3}\left(
\mathrm{q}\right) \right)$ is the set of Fourier transforms of the
indicator functions of the neighborhood clusters defined through%
\begin{equation}
F_{r}\left( \mathrm{q}\right) =\sum_{\mathrm{n}_{r}\mathrm{\in N}_{r}%
}e^{-i\left\langle \mathrm{n}_{r},\mathrm{q}\right\rangle }.
\end{equation}
For the range 3 model on the square lattice, the relevant lattice neighborhood
transforms are%
\begin{align}
F_{1}\left( \mathrm{q}\right) & =2\cos q_{1}+2\cos q_{2}, \label{eq:F1}\\
F_{2}\left( \mathrm{q}\right) & =2\cos\left( q_{1}-q_{2}\right)
+2\cos\left( q_{1}+q_{2}\right), \label{eq:F2}\\
F_{3}\left( \mathrm{q}\right) & =2\cos2q_{1}+2\cos2q_{2}. \label{eq:F3}%
\end{align}
An important property of these functions is that they are invariant with
respect to the point symmetry group of the lattice -- here the dihedral group
$\mathfrak{D}_{4}$, the symmetry group of a square. Let $\mathrm{G}$ be the
real orthogonal 2D matrix representation of $\mathfrak{D}_{4}$; then for any
element $\mathrm{g}\in\mathrm{G}$%
\begin{equation}
F_{r}\left( \mathrm{gq}\right) =\sum_{\mathrm{n}_{r}\mathrm{\in N}_{r}%
}e^{-i\left\langle \mathrm{n}_{r},\mathrm{gq}\right\rangle }=\sum
_{g\mathrm{n}_{r}\mathrm{\in N}_{r}}e^{-i\left\langle \mathrm{gn}%
_{r},\mathrm{gq}\right\rangle }=\sum_{\mathrm{n}_{r}\mathrm{\in N}_{r}%
}e^{-i\left\langle \mathrm{n}_{r},\mathrm{q}\right\rangle }=F_{r}\left(
\mathrm{q}\right), \label{eq:Finv}%
\end{equation}
where we have used the fact that $\mathrm{g}$ simply permutes the sites of the lattice
neighborhoods $\mathrm{N}_{r}.$ This implies that instead of individual modes,
it suffices to consider the equivalence classes of modes defined by the orbits
$\mathrm{Gq=}\left\{ \mathrm{gq}|\mathrm{g}\in\mathrm{G}\right\} $. In
passing, we also note that (\ref{eq:bifq}) is in fact readily generalised to
other lattices and models with longer-ranged pair interactions, as the lattice
structure enters only through the functions $\mathbf{F}\left( \mathrm{q}%
\right) ,$ and increasing the range of the pair interactions simply requires
increasing the dimensionality of the phase space spanned by the
coupling-constant vectors $\mathbf{K}$.
As Eq.\ (\ref{eq:bifq}) shows, close to a bifurcation, all magnetization modes
are decoupled. Also, it is clear that the loci in phase space at which the
state with zero magnetisation becomes unstable to the mode $\mathrm{q}$ lie on
the plane $L_{\mathrm{q}}=\left\{ \mathbf{K|K\cdot F}\left( \mathrm{q}%
\right) =1\right\} $. Since at infinite temperature, where $\mathbf{K=0}$,
the system is surely disordered, we infer that the disordered phase is stable
against this mode in the half-space containing the origin bounded by
$L_{\mathrm{q}},$ i.e.
\begin{equation}
H_{\mathrm{q}}=\left\{ \mathbf{K|K\cdot F}\left( \mathrm{q}\right)
<1\right\} . \label{eq:half}%
\end{equation}
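For later reference, the neighborhood transforms (\ref{eq:F1})--(\ref{eq:F3}) and the stability criterion (\ref{eq:half}) translate directly into a few lines of Python. This is an illustrative helper of our own (all names are ours), which we reuse in subsequent sketches:
\begin{verbatim}
# Illustrative helpers: the neighborhood transforms F_r(q) and
# the half-space stability test K . F(q) < 1.
import numpy as np

def F(q):
    q1, q2 = q
    return np.array([
        2*np.cos(q1) + 2*np.cos(q2),            # F_1(q)
        2*np.cos(q1 - q2) + 2*np.cos(q1 + q2),  # F_2(q)
        2*np.cos(2*q1) + 2*np.cos(2*q2),        # F_3(q)
    ])

def disordered_is_stable(K, modes):
    """True if K lies in all half-spaces H_q of the given modes."""
    return all(np.dot(K, F(q)) < 1 for q in modes)

N = 4
modes = [(2*np.pi*l1/N, 2*np.pi*l2/N)
         for l1 in range(N) for l2 in range(N)]
print(disordered_is_stable((0.0, 0.0, 0.2), modes))  # True
\end{verbatim}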
The problem we face, however, is that there are in principle an infinite number of
modes to consider. In order to tackle this problem, we choose to
systematically enumerate the potential modes, ordering them by a natural
measure of the \textquotedblleft size\textquotedblright\ of the periodicity
they represent. Each periodically repeating pattern on the lattice
$\mathrm{L}$ is characterized by two basis vectors $\mathrm{p}_{1}=\left(
p_{1}^{1},p_{1}^{2}\right)$, $\mathrm{p}_{2}=\left( p_{2}^{1},p_{2}%
^{2}\right) \in\mathbb{Z}^{2}$, conveniently presented in matrix form%
\begin{equation}
\mathrm{P}=\left(
\begin{array}
[c]{cc}%
p_{1}^{1} & p_{1}^{2}\\
p_{2}^{1} & p_{2}^{2}%
\end{array}
\right) ,
\end{equation}
where we choose the order of $\mathrm{p}_{1}$ and $\mathrm{p}_{2}$ such that
$\det\mathrm{P}=N>0$. It is easy to see that $N$ is just the number of sites
in the unit cell $\mathcal{U}_{\mathrm{P}}$ of the periodic pattern. We call
it the \emph{index} of the periodicity, following the mathematical
nomenclature that associates it with the size of the quotient
group $\mathrm{L/P}$ when $\mathrm{P}$ is interpreted as a subgroup of
$\mathrm{L}$ \cite{Dummit2004AbstractAlgebra}. In Appendix \ref{app:periodic}
we review the construction of periodic patterns on $\mathrm{L}$, their corresponding
discrete Brillouin zones $\widehat{\mathcal{U}}_{_{\mathrm{P}}}$, and their
enumeration. An important result is that the structure of the set%
\begin{equation}
\widehat{\mathcal{U}}_{N}=%
{\displaystyle\bigcup\limits_{\left\{ \mathrm{P|}\left\vert
\widehat{\mathcal{U}}_{_{\mathrm{P}}}\right\vert =N\right\} }}
\widehat{\mathcal{U}}_{_{\mathrm{P}}}=\left\{ \mathrm{q=}\frac{2\pi}%
{N}\left( l_{1},l_{2}\right) |0\leq l_{1},l_{2}<N\right\} ,
\label{eq:UhatN}%
\end{equation}
which includes the wave vectors of all patterns of index $N$, is simply a
square array, and equal to the Brillouin zone of the square $N\times N$ periodicity $\mathrm{P}_{\square N}=\diag(N,N)$. For any lattice mode $\mathrm{q}$ we can define its \emph{complexity} as the smallest index of a periodicity with which it is compatible \footnote{Note that any mode $\mathrm{q}$ compatible with periodicity $\mathrm{P}$ is trivially also compatible with periodicity $k\mathrm{P},\,k \ge 2$.}. If $\mathrm{q}=(2\pi n_1/d_1,2\pi n_2/d_2)$ with $n_i$ and $d_i$ relatively prime, then the complexity is simply given by $C(\mathrm{q})=\lcm(d_1,d_2)$.
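As a small illustration (assuming Python~3.9+ for \texttt{math.lcm}), the complexity of a rational mode can be computed with exact fractions:
\begin{verbatim}
# Illustrative sketch: complexity C(q) = lcm(d1, d2) of a mode
# q = 2*pi*(n1/d1, n2/d2), after reducing the fractions.
from fractions import Fraction
from math import lcm

def complexity(n1, d1, n2, d2):
    f1, f2 = Fraction(n1, d1), Fraction(n2, d2)
    return lcm(f1.denominator, f2.denominator)

print(complexity(1, 2, 1, 4))  # q = 2*pi*(1/2, 1/4) -> N = 4
print(complexity(1, 6, 1, 4))  # q = 2*pi*(1/6, 1/4) -> N = 12
\end{verbatim}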
In view of the invariance (\ref{eq:Finv}) of the
neighbourhood transforms $F_{r}\left( \mathrm{q}\right)$, however, the proper
degrees of freedom for the mode analysis are the elements of the orbit space%
\begin{equation}
\widehat{\mathfrak{U}}_{N}\equiv\widehat{\mathcal{U}}_{N}/G=\left\{
\mathrm{Gq}|\mathrm{q}\in\widehat{\mathcal{U}}_{N}\right\} ,
\end{equation}
which we will denote by $\mathfrak{q}$, throughout using a Gothic font to
indicate quantities related to orbits with respect to the point group
$G=\mathfrak{D}_{4}$. The set $\widehat{\mathfrak{U}}_{N}$ is commonly called the Irreducible Brillouin Zone, henceforth abbreviated as IBZ. Note that different $\mathrm{q}\in\widehat{\mathcal{U}}_{N}$ behave differently under the action of the point-symmetry group $G$, depending on their location within $\widehat{\mathcal{U}}_{N}$. Specifically, to each mode in $\mathfrak{q}\in \widehat{\mathfrak{U}}_{N}$ we can associate a \emph{multiplicity} $M\left(\mathfrak{q}\right)=|G\mathrm{q}|$, i.e. the length of the orbit under the action of $G$ to which is belongs, which will play an important role in the further analysis. Details of the construction of the IBZ and the number of modes of complexity $N$ contained in it are discussed in Appendix \ref{app:Uhat}.
We now define our main object of interest, the region $D_{N}$ around the origin in phase space in which the
disordered solution is stable against all modes in $\widehat{\mathfrak{U}}_{N}$, which is formed by the intersection of all the pertinent half-spaces of the type (\ref{eq:half})
\begin{equation}
D_{N}=%
{\displaystyle\bigcap\limits_{\mathfrak{q}\in\widehat{\mathfrak{U}}_{N}}}
H_{\mathfrak{q}}. \label{eq:Ddis_N}%
\end{equation}
The intersection of a finite number of half-spaces is a convex polyhedron; when it is bounded, as turns out to be the case here for $N\geq 4$, it is called a convex \emph{polytope} \cite{Grunbaum2003ConvexPolytopes}. Our main goal here is to understand the
structure of these \emph{disorder polytopes} and their behavior as a function of $N$. The surface of the disorder polytopes is the locus in phase space where the disordered high-temperature solution becomes unstable, which we will call the \emph{order-disorder surface}. Note that not all modes in $\widehat{\mathfrak{U}}_{N}$ necessarily contribute a face to $D_{N}$: These \textquotedblleft faceless\textquotedblright\ modes are preempted by other modes whose instability surface lies closer to the origin. The problem of determining the structure of a polytope from the set of defining half-spaces is known as the \emph{vertex enumeration problem}. Intriguingly, the computational complexity of the vertex enumeration problem in its most general form is as yet undecided \cite{Reimers2014PolynomialBranch-width}. However, several well-developed
algorithms exist that are both polynomial in time and memory when the polytopes are known to be bounded \cite{Avis2015ComparativeCodes}.
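The computations reported below use the dedicated package \texttt{lrs}; purely as an illustration, the same intersection can be sketched with SciPy's \texttt{HalfspaceIntersection} (an assumed, simpler stand-in, reusing the helper \texttt{F(q)} introduced in Section \ref{sec:bifurcation}):
\begin{verbatim}
# Illustrative alternative to lrs: compute the vertices of D_N with
# scipy. Half-spaces K . F(q) <= 1 are encoded as rows [F, -1] with
# A x + b <= 0, and K = 0 is a strictly interior point.
import numpy as np
from scipy.spatial import HalfspaceIntersection

N = 4
modes = [(2*np.pi*l1/N, 2*np.pi*l2/N)
         for l1 in range(N) for l2 in range(N)]
halfspaces = np.array([[*F(q), -1.0] for q in modes])
D4 = HalfspaceIntersection(halfspaces, np.zeros(3))
print(np.round(D4.intersections, 3))  # vertices of D_4, including
                                      # the top vertex (0, 0, 0.25)
\end{verbatim}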
In Figure \ref{fig:overview} we illustrate the relationship between the faces of order-disorder surface, the boundary of the polytope $D_4$, the modes $\mathfrak{q}\in\widehat{\mathfrak{U}}_{N}$ in the IBZ which become unstable at these faces, and the periodic magnetisation patterns that these modes represent.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{images/polytopes/D4_patterns.pdf}
\caption{The order-disorder surface of the nnnn-Ising model for modes with periodic unit cell of size $N=4$ in the space of dimensionless coupling constants $(K_1,K_2,K_3)$. Each of the faces of this polytope is labelled by the wavevector $\mathfrak{q}=2\pi\left(\frac{i}{4},\frac{j}{4}\right)$ of the mode in the Irreducible Brillouin Zone that becomes unstable at this face, and the corresponding periodic pattern of magnetisations is visualized.}
\label{fig:overview}
\end{figure}
Ultimately, we are of course interested in the limit $N\rightarrow\infty$, where all restrictions on the periodicity of the bifurcation modes are lifted, to obtain the full domain of stability of the disordered phase, i.e.\
\begin{equation}
D_{\infty}=\lim_{N\rightarrow\infty}D_{N}. \label{eq:D}%
\end{equation}
We will show how $D_{\infty}$ can be constructed and how the finite $N$ ``approximations'' approach this limit from below.
\section{The geometry of the disordered region}
\label{sec:geometry}
\subsection{Phenomenology}
\label{sec:phenomenology}
We first present an overview of the results on the disorder polytopes for finite $N$. These results were obtained using the vertex enumeration package \texttt{lrs} based on the algorithm developed by Avis and Fukuda \cite{Avis1992APolyhedra,Avis2018Mplrs:Code}, with bespoke post-processing to remove rationalization artifacts (for details see Appendix \ref{app:lrs}), and rendered with Mathematica. As we go along, we point out a number of features that are dealt with in more detail in Section \ref{sec:specific-features} below.
We start off by noting that $D_1$, $D_2$ and $D_3$ are unbounded convex polyhedra, as they lack the requisite number of constraints to create a bounded domain, and we therefore do not display them. In Figure \ref{fig:D4-D9} we show the disorder polytopes $D_4$ through $D_9$. Throughout, we will use a color code to indicate the multiplicity of the mode corresponding to each face of the polytope: $M=1$: citrus, $M=2$: tawny, $M=4$: purple, $M=8$: blue. Two features immediately stand out. First, the polytopes with even $N$ appear symmetric upon changing the sign of $K_1$, whereas those with odd $N$ are clearly asymmetric in this respect. We discuss this symmetry in Section \ref{sec:odd-even}. Secondly, the top of the polytope in the half-space $K_3 >0$ is bounded by just three faces, which moreover appear to be the same ones for all even $N$. The geometry of the top of the disorder polytope and the associated modes are examined more closely in Section \ref{sec:major-modes}.
\begin{figure}[htbp]
\centering
\subfloat[$D_4$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D4ypbyd.jpg}}
\hfill
\subfloat[$D_5$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D5ypbyd.jpg}}
\\
\subfloat[$D_6$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D6ypbyd.jpg}}
\hfill
\subfloat[$D_7$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D7ypbyd.jpg}}
\\
\subfloat[$D_8$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D8ypbyd.jpg}}
\hfill
\subfloat[$D_9$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D9ypbyd.jpg}}
\caption{The disorder polytopes $D_4$ through $D_9$. Faces are color coded for the multiplicity $M$ of the associated unstable mode: $M=1$: citrus, $M=2$: tawny, $M=4$: purple, $M=8$: blue.}
\label{fig:D4-D9}
\end{figure}
We also notice that as $N$ increases the difference between the successive even and odd polytopes appears to decrease. As we will show explicitly later on in Section \ref{sec:limit} this difference indeed disappears in the limit $N\rightarrow\infty$.
Next, in Figure \ref{fig:D10-D16} we show the even polytopes from $N=10$ to $N=16$. Again a number of features stand out. As $N$ increases, the complexity of the bottom of the polytope in the half-space $K_3 <0$, where, as we argued, the system is strongly frustrated, increases. Moreover, we see a marked clustering of the faces corresponding to modes with multiplicity $M=4$ into \emph{fan}-like structures, while those belonging to modes with multiplicity $M=8$ seem to string out along a curve, which we will call the \emph{ridge}. These structures are brought into focus in Figure \ref{fig:bottom}, where we show a view of $D_{16}$ and $D_{32}$ `from below', with a viewpoint on the negative $K_3$-axis. In Sections \ref{sec:fan-modes} and \ref{sec:ridge} we address the fans and the ridge in more detail.
\begin{figure}[htbp]
\centering
\subfloat[$D_{10}$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D10ypbyd.jpg}}
\hfill
\subfloat[$D_{12}$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D12ypbyd.jpg}}
\\
\subfloat[$D_{14}$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D14ypbyd.jpg}}
\hfill
\subfloat[$D_{16}$]{\includegraphics[width=0.4\textwidth]{images/polytopes_trial/D16ypbyd.jpg}}
\caption{The even disorder polytopes $D_{10}$ through $D_{16}$. Faces are color coded for the multiplicity $M$ of the associated unstable mode: $M=1$: citrus, $M=2$: tawny, $M=4$: purple, $M=8$: blue.}
\label{fig:D10-D16}
\end{figure}
\begin{figure}[htbp]
\centering
\subfloat[$D_{16}$]{\includegraphics[width=0.5\textwidth]{images/polytopes_trial/F16ypbydd.jpg}}
\hfill
\subfloat[$D_{32}$]{\includegraphics[width=0.5\textwidth]{images/polytopes_trial/F32ypbydd.jpg}}
\caption{``Bottom'' view of disorder polytopes $D_{16}$ and $D_{32}$, showing the fans of striped, modulated-stripe and diagonal stripe $M=4$ modes (purple) emanating from the vertices $\mathbf{K}^S$, $\mathbf{K}^{MS}$ and $\mathbf{K}^{DS}$ respectively, as well as the $M=8$ modes (blue) that cluster around the so-called ridge. Note the decrease in area of the wedge-like $M=8$ modes that interdigitate the diagonal stripe fan as $N$ increases.}
\label{fig:bottom}
\end{figure}
Finally, in Table \ref{tab:faceless} we list the number of faces of the disorder surface as a function of $N$, compared to the maximal number of modes available, which indicates that for even $N$ a number of modes do not contribute a face to $D_N$. In Section \ref{sec:faceless} we characterise these \emph{`faceless'} modes.
\begin{table}[htbp]
\centering
\begin{tabular}{l|cccccccccc}
N & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 12 & 14 & 16 \\
\hline
\#faces & 6 & 6 & 10 & 10 & 14 & 15& 20 & 26 & 34 & 42 \\
$|\widehat{\mathfrak{U}}_N|$ & 6 & 6 & 10 & 10 & 15 & 15 & 21 & 28 & 36 & 45
\end{tabular}
\caption{Number of faces of the disorder polytopes as a function of $N$, compared to $|\widehat{\mathfrak{U}}_N|$.}
\label{tab:faceless}
\end{table}
\subsection{Specific features}
\label{sec:specific-features}
\subsubsection{Odd-even effects}
\label{sec:odd-even}
On the square lattice we can define a unique parity of each
site by defining $\left\Vert \mathrm{z}\right\Vert =\left(z_{1}+z_{2}\right) \mod 2$. Considering Figure \ref{fig:local}, we see that the standard neighbourhood set $\mathrm{N}_{1}$ consists of sites with parity $1$, while both $\mathrm{N}_{2}$ and $\mathrm{N}_{3}$ only contain sites with parity $0$. This implies that for every solution $m_{\mathrm{z}}$ of the bifurcation equation Eq.~(\ref{eq:bif}) with coupling constants $\mathbf{K}=\left(K_{1},K_{2},K_{3}\right)$ there is a solution $\bar{m}_{\mathrm{z}}=\left(-1\right)^{\left\Vert \mathrm{z}\right\Vert }m_{\mathrm{z}}$ with coupling constants $\mathbf{\bar{K}}=\left(-K_{1},K_{2},K_{3}\right)$. Fourier transforming $\bar{m}_{\mathrm{z}}$, we find that $\bar{\mathrm{q}}=\mathrm{q}-(\pi,\pi)$. We also find that $F_{1}\left(\bar{\mathrm{q}}\right)=-F_{1}\left(\mathrm{q}\right)$, while $F_{2}\left(\bar{\mathrm{q}}\right)=F_{2}\left(\mathrm{q}\right)$ and $F_{3}\left(\bar{\mathrm{q}}\right)=F_{3}\left(\mathrm{q}\right)$, so that if $\mathbf{K\cdot F}\left( \mathrm{q}\right) =1$ then $\mathbf{\bar{K}\cdot F}\left( \bar{\mathrm{q}}\right)=1$, i.e.\ the pair $(\mathbf{\bar{K}},\bar{\mathrm{q}})$ also solves Eq.~(\ref{eq:bifq}). Referring to Figure \ref{fig:Uhat_infty}, we see that the mapping $\mathrm{q}\rightarrow\mathrm{q}-(\pi,\pi)$ corresponds to the reflection $r$ with respect to what we call the \emph{anti-diagonal}, the perpendicular bisector of the hypotenuse of the symmetry-reduced Brillouin zone $\widehat{\mathfrak{U}}_{\infty}$. We now ask under what conditions $\mathfrak{q}\in\widehat{\mathfrak{U}}_{N}\Rightarrow r\mathfrak{q}\in\widehat{\mathfrak{U}}_{N}$. As $\mathfrak{q}=\left( 2\pi\frac{i}{N},2\pi\frac{j}{N}\right) ,0\leq j\leq i\leq\left\lfloor \frac{N}{2}\right\rfloor$, we have $r\mathfrak{q}=\left( \frac{N-2j}{N}\pi,\frac{N-2i}{N}\pi\right)$, so that $r\mathfrak{q}\in\widehat{\mathfrak{U}}_{N}$ if and only if $N$ is even, as is also illustrated in Figure \ref{fig:Uhat_odd_even}. Thus for even $N$ any facet of $D_{N}$ associated with mode $\mathfrak{q}$ and normal vector $\mathbf{F}\left( \mathfrak{q}\right)$ is paired with a facet with mode $\bar{\mathfrak{q}}$ and normal vector $\mathbf{F}\left(\bar{\mathfrak{q}}\right)=\mathbf{F}\left(r\mathfrak{q}\right)$, and the whole polytope is mirror-symmetric with respect to the plane $K_{1}=0$.
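These parity relations are elementary trigonometric identities; for completeness, a short symbolic check (illustrative, using SymPy) reads:
\begin{verbatim}
# Illustrative symbolic check: under q -> q - (pi, pi), F1 flips sign
# while F2 and F3 are unchanged.
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
F = [2*sp.cos(q1) + 2*sp.cos(q2),
     2*sp.cos(q1 - q2) + 2*sp.cos(q1 + q2),
     2*sp.cos(2*q1) + 2*sp.cos(2*q2)]
Fbar = [f.subs({q1: q1 - sp.pi, q2: q2 - sp.pi}, simultaneous=True)
        for f in F]
print(sp.simplify(Fbar[0] + F[0]),   # 0:  F1(qbar) = -F1(q)
      sp.simplify(Fbar[1] - F[1]),   # 0:  F2(qbar) =  F2(q)
      sp.simplify(Fbar[2] - F[2]))   # 0:  F3(qbar) =  F3(q)
\end{verbatim}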
\subsubsection{The major modes for $K_3 >0$}
\label{sec:major-modes}
The three faces that bound the polytope in the half-space $K_3 >0$ are associated with the modes that are located at the extreme points of the IBZ $\widehat{\mathfrak{U}}_{N}$. Defining
\begin{equation}
q^{m}_{N}=\left\lfloor \frac{N}{2}\right\rfloor \frac{2\pi}{N},%
\end{equation}
these are the modes $\mathfrak{q}_{0}=\left(0,0\right)$, $\mathfrak{q}_{1}=\left(
q^{m}_{N},0\right)$ and $\mathfrak{q}_{2}=\left( q^{m}_{N},q^{m}_{N}\right)$. As $q^{m}_{2k}=\pi$, these facets are the same for all even $N$. In that case it is easy to see that they represent the \emph{ferromagnetic} ($\mathfrak{q}^{F}$), \emph{alternating-stripe} ($\mathfrak{q}^{AS}$) and \emph{anti-ferromagnetic} ($\mathfrak{q}^{AF}$) ordering patterns, respectively. A visualization of these modes can be found in Appendix \ref{app:visualization}. Also, as $q^{m}_{2k+1}=\pi\frac{2k}{2k+1}$, we see that $\lim_{k\rightarrow \infty}q^{m}_{2k+1}=\pi$, so that as $N$ increases the odd top facets converge to the even ones. A direct computation of the location of the
top vertex $\mathbf{K}^{T}$ of the polytope, obtained by solving the conditions $\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{F})=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{AS})=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{AF})=1$, then yields for even $N$ the vertex $\mathbf{K}^{T} = \left( 0,0,\frac{1}{4}\right)$, while for odd $N=2 k+1$ we have $\mathbf{K}^{T}_{odd}=\left(\frac{1}{4}\left(1+1/(2\cos\left( \pi\frac{2k}{2k+1}\right)+1)\right),0,-1/(8\cos\left(\pi\frac{2k}{2k+1}\right)+4)\right)$. The latter, as expected, converges to $\mathbf{K}^{T}$ as $k\rightarrow\infty$.
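This computation amounts to solving a $3\times3$ linear system; for even $N$ an illustrative numerical check, reusing the helper \texttt{F(q)} introduced in Section \ref{sec:bifurcation}, confirms the result:
\begin{verbatim}
# Illustrative check of the top vertex for even N: solve
# K . F(q) = 1 for the three major modes simultaneously.
import numpy as np

A = np.array([F((0.0, 0.0)),        # ferromagnetic mode q^F
              F((np.pi, 0.0)),      # alternating-stripe mode q^AS
              F((np.pi, np.pi))])   # antiferromagnetic mode q^AF
print(np.linalg.solve(A, np.ones(3)))  # [0. 0. 0.25] = K^T
\end{verbatim}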
\subsubsection{The fan modes}
\label{sec:fan-modes}
The three fans of faces shown most clearly in Figure \ref{fig:bottom} are associated with the multiplicity $M=4$ modes on the edges of the IBZ. We distinguish the modes of the form $\mathfrak{q}^{S}(i)=(2\pi i/N,0),\,i=1,\ldots,l(N)$ on the horizontal leg, which are associated with \emph{striped} ordering patterns, modes of the form $\mathfrak{q}^{MS}(i)=(\pi,2\pi i/N),\,i=1,\ldots,l(N)$ on the vertical leg, which we associate with \emph{modulated-stripe} ordering patterns, and the modes on the hypotenuse of the form $\mathfrak{q}^{DS}(i)=(2\pi i/N,2 \pi i/N),\,i=1,\ldots,l(N)$, which we associate with \emph{diagonal stripe} ordering patterns, where $l(2k)=k-1$ and $l(2k+1)=k$. These modes are visualized in Appendix \ref{app:visualization}.
We can show by explicit construction that the facets corresponding to any three successive fan modes share a common vertex, which moreover is independent of which triplet is considered. For the striped modes we find, on solving $\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{S}(i-1))=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{S}(i))=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{S}(i+1))=1$, the vertex $\mathbf{K}^{S}=(1/2,-1/4,0)$ for all $N$. The analogous calculation for the modulated-stripe modes yields for even $N$ the vertex $\mathbf{K}^{MS}=(-1/2,-1/4,0)$, consistent with the symmetry of $D_{2k}$ discussed above, while for odd $N=2 k+1$ we find $\mathbf{K}^{MS}_{odd}=\left(\frac{1}{2} \sec \left(\frac{2 \pi k}{2 k+1}\right),-\frac{1}{4} \sec ^2\left(\frac{2 \pi k}{2 k+1}\right),0\right)$, which converges to $\mathbf{K}^{MS}$ for $k\rightarrow\infty$. Finally, for the diagonal stripe modes we find $\mathbf{K}^{DS}=(0,1/2,-1/4)$ for all $N$.
Details on how these fans meet in the middle area of the bottom of the polytopes will be addressed in the following section.
\subsubsection{The $M=8$ modes and the ridge}
\label{sec:ridge}
The modes with multiplicity $M=8$ have fewer remaining symmetries. A few examples are shown in Appendix \ref{app:visualization}. As Figure \ref{fig:bottom} suggests, the faces corresponding to these modes are directly connected to the striped and modulated-stripe fans and are clustered around an increasingly narrow quasi one-dimensional structure which we call the ridge. This structure can be characterised by considering the common vertex belonging to the faces corresponding to two successive modes along either of the legs of the IBZ and one of the interior $M=8$ modes nearest to this pair. Considering e.g.\ the pair of striped modes $\left(\mathfrak{q}^{S}(i),\mathfrak{q}^{S}(i+1)\right)$ on the horizontal leg, the nearest interior mode is $\mathfrak{q}^{int}(i)=(2\pi i/N,2\pi/N)$, and we solve $\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{S}(i))=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{S}(i+1))=\mathbf{K}\cdot\mathbf{F}(\mathfrak{q}^{int}(i))=1$. For finite $N$, the resulting analytical expressions for the solution $\mathbf{K}^{\text{ridge}}(i)$ are rather unwieldy and we refrain from presenting them. However, by parameterizing $i=a N,\,a\in [0,1/2]$ we can take the limit $N\rightarrow\infty$, yielding
\begin{equation}
\label{eq:ridge}
\mathbf{K}^{\text{Ridge}}(a)=\frac{1}{4 \cos (2 \pi a)+\cos (4 \pi a)+5}\left(4 \cos ^2(\pi a),-1,-\frac{1}{2}\right).
\end{equation}
A similar analysis for the modulated-stripe modes on the vertical leg, now parameterized by $i = (1/2-a)N,\,a\in [0,1/2]$, yields, as expected by the reflection symmetry in the anti-diagonal of the IBZ, the same result mirrored in the plane $K_1=0$. We also note that the ridge is a planar curve embedded in the plane $K_2=2 K_3$. For future reference we name the two end points of the ridge $\mathbf{K}^{R}_{\pm}=(\pm 2/5,-1/10,-1/20)$ and the lowest point on the curve $\mathbf{K}^{B}\equiv\mathbf{K}^{\text{Ridge}}(1/2)=(0,-1/2,-1/4)$.
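A quick numerical sanity check (illustrative) confirms the end points and the planarity of the ridge:
\begin{verbatim}
# Illustrative check of Eq. (ridge): end points and the plane K2 = 2 K3.
import numpy as np

def K_ridge(a):
    d = 4*np.cos(2*np.pi*a) + np.cos(4*np.pi*a) + 5
    return np.array([4*np.cos(np.pi*a)**2, -1.0, -0.5]) / d

print(K_ridge(0.0))   # [ 0.4  -0.1  -0.05] = K^R_+
print(K_ridge(0.5))   # [ 0.   -0.5  -0.25] = K^B
print(np.allclose([K_ridge(a)[1] - 2*K_ridge(a)[2]
                   for a in np.linspace(0.0, 0.5, 11)], 0.0))  # True
\end{verbatim}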
One also notices that the faces belonging to the diagonal stripe fan are ``split'' by wedge-shaped faces belonging to $M=8$ modes. The vertices at which this happens can be found by considering the common vertex between two subsequent diagonal stripe modes $\left(\mathfrak{q}^{DS}(i),\mathfrak{q}^{DS}(i+1)\right)$ with their common nearest interior mode $(2\pi (i+1)/N,2\pi i/N)$. Using a similar parameterization as above, i.e.\ $i=b N,\,b\in [0,1/2]$, and passing to the limit $N\rightarrow\infty$ we obtain the curve
\begin{equation}
\mathbf{K}^{\text{Split}}(b)= \left(\frac{\cos (2 \pi b)}{\cos (4 \pi b)+2},0,-\frac{1}{4 (\cos (4 \pi b)+2)}\right).
\end{equation}
However, by considering the angle between the pair of edges defined by the two pairs of modes $\left(\mathfrak{q}^{DS}(i),(2\pi (i+1)/N,2\pi i/N)\right)$ and $\left((2\pi (i+1)/N,2\pi i/N),\mathfrak{q}^{DS}(i+1)\right)$, one can show that the surface area of these wedge-like $M=8$ faces vanishes in the limit $N\rightarrow\infty$.
\subsubsection{The faceless modes}
\label{sec:faceless}
The so-called faceless modes for even $N$ are all located on the \emph{anti-diagonal} that runs from the vertex $\mathfrak{q}^{AS}=(\pi,0)$ to the midpoint of the hypotenuse of the IBZ. These modes can generically be parameterized as $\mathfrak{q}^{AD}(\alpha) = (\pi-\alpha,\alpha),\,\alpha\in[0,\pi/2]$. It follows that $\mathbf{F}(\mathfrak{q}^{AD}(\alpha))=\left(0,-2(1+\cos{2\alpha}),4\cos{2\alpha}\right)$. Considering the family of planes defined through $\mathbf{K}\cdot \mathbf{F}(\mathfrak{q}^{AD}(\alpha))=1$, we see that these share a common line of intersection given by $(K_1,-1/2,-1/4)$. Hence only the planes defined by the relevant endpoints, $\mathfrak{q}^{AS}=(\pi,0)$ and $\mathfrak{q}^{AD}=\left(\pi/2,\pi/2\right)$ for $N =4k$ or $\mathfrak{q}^{AD}=\left(2\pi (k+1)/(4k+2),2\pi k/(4k+2)\right)$ for $N= 4k+2$ (see Figure \ref{fig:Uhat_odd_even}), can contribute a face to $D_N$, and all the modes between these endpoints do not, which exactly explains the pattern observed in Table \ref{tab:faceless}. We note, however, that these modes will of course play a role for $\mathbf{K}$-values located on the common edge they share.
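The common line of intersection is easily verified symbolically; an illustrative SymPy check reads:
\begin{verbatim}
# Illustrative check: every plane K . F(q^AD(alpha)) = 1 contains the
# line (K1, -1/2, -1/4) with K1 free.
import sympy as sp

alpha, K1 = sp.symbols('alpha K1', real=True)
F_AD = sp.Matrix([0, -2*(1 + sp.cos(2*alpha)), 4*sp.cos(2*alpha)])
K = sp.Matrix([K1, sp.Rational(-1, 2), sp.Rational(-1, 4)])
print(sp.simplify(K.dot(F_AD)))   # 1, independent of alpha and K1
\end{verbatim}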
\subsection{The limit $N\rightarrow\infty$}
\label{sec:limit}
\subsubsection{The natural coordinate frame}
\label{sec:natural}
As $\mathbf{F}$ is a vector-valued mapping from the two-dimensional domain $\widehat{\mathfrak{U}}_{\infty}$ to $\mathbb{R}^3$, it is clear that there must be a dependency between the components of $\mathbf{F}\left(\mathfrak{q}\right)$. Indeed, we find that%
\begin{equation}
\begin{split}\label{eq:Fdepend}
F_{1}\left(\mathfrak{q}\right)^{2}=&\left( 2\cos q_{1}+2\cos q_{2}\right)
^{2}\\=&2\left( 2\cos\left( q_{1}-q_{2}\right) +2\cos\left( q_{1}%
+q_{2}\right) \right) +2\cos2q_{1}+2\cos2q_{2}+4\\=&2F_{2}\left(
\mathfrak{q}\right) +F_{3}\left( \mathfrak{q}\right) +4.
\end{split}
\end{equation}
This allows us to define a new coordinate frame with orthonormal basis vectors $\mathbf{\hat{n}}_{1}=\left( 1,0,0\right)
$, $\mathbf{\hat{n}}_{2}=\left( 0,1/\sqrt{5},-2/\sqrt{5}\right)$ and
$\mathbf{\hat{n}}_{3}=\left( 0,2/\sqrt{5},1/\sqrt{5}\right)$, which represents a clockwise rotation of the original frame by an angle $\chi = \arctan{2}$ around the $K_1$-axis. Defining the coordinates with respect to this frame through $\varphi_{j}=\mathbf{F}\left(\mathfrak{q}\right)\cdot\mathbf{\hat{n}}_{j}$ we find that $\varphi_{3}=\frac{1}{\sqrt{5}}\left(\varphi_{1}^{2}-4\right)$, so that we are left with the simple representation
\begin{equation}
\mathbf{F}\left( \varphi_{1},\varphi_{2}\right) =\varphi
_{1}\mathbf{\hat{n}}_{1}+\varphi_{2}\mathbf{\hat{n}}_{2}+\frac{1}{\sqrt{5}}\left(
\varphi_{1}^{2}-4\right) \mathbf{\hat{n}}_{3}.\label{eq:F_phi}%
\end{equation}
The details of this transformation, as well as the shape of the IBZ in the new coordinates are presented in Appendix \ref{app:natural}.
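Both the dependency (\ref{eq:Fdepend}) and the reduced representation (\ref{eq:F_phi}) are elementary to verify symbolically; an illustrative SymPy check reads:
\begin{verbatim}
# Illustrative symbolic verification of F1^2 = 2 F2 + F3 + 4 and of
# phi3 = (phi1^2 - 4)/sqrt(5) in the rotated frame.
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
F1 = 2*sp.cos(q1) + 2*sp.cos(q2)
F2 = 2*sp.cos(q1 - q2) + 2*sp.cos(q1 + q2)
F3 = 2*sp.cos(2*q1) + 2*sp.cos(2*q2)
print(sp.simplify(sp.expand_trig(F1**2 - (2*F2 + F3 + 4))))  # 0
phi1, phi3 = F1, (2*F2 + F3) / sp.sqrt(5)    # phi_j = F . n_j
print(sp.simplify(sp.expand_trig(phi3 - (phi1**2 - 4)/sp.sqrt(5))))  # 0
\end{verbatim}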
\subsubsection{Surface reconstruction}
We now ask, given the relatively simple parametrization Eq.~(\ref{eq:F_phi}),
whether it is possible to reconstruct $D_{\infty}$ from the definition
$\mathbf{F}\left( \varphi_{1},\varphi_{2}\right) \cdot
\mathbf{K}\left( \varphi_{1},\varphi_{2}\right) =1$, the relation that characterizes the boundary points, cf.\ Eq.~(\ref{eq:bifq}). To that end, we introduce $\mathbf{\hat{u}}\left( \varphi_{1},\varphi
_{2}\right) =\mathbf{F}\left( \varphi_{1},\varphi_{2}\right)
/\left\vert \mathbf{F}\left( \varphi_{1},\varphi_{2}\right)
\right\vert $ and note that this is the unit normal to the surface $\mathbf{K}%
\left( \varphi_{1},\varphi_{2}\right)$. The defining equation
then reads%
\begin{equation}
\mathbf{\hat{u}}\left( \varphi_{1},\varphi_{2}\right)
\cdot\mathbf{K}\left( \varphi_{1},\varphi_{2}\right) =\frac
{1}{\left\vert \mathbf{F}\left( \varphi_{1},\varphi_{2}\right)
\right\vert }\equiv h\left( \varphi_{1},\varphi_{2}\right) ,
\end{equation}
which introduces the so-called \emph{support function} $h$. It is a standard result of convexity theory (see e.g.\ \cite{Schneider2013ConvexBrunn-MinkowskiTheory}) that a convex body is fully determined by its support function. As the domain of our parameterization of the body is a compact set with only piecewise smooth boundary, we will need to perform the necessary inversion separately for the interior, the smooth boundary components, and the extreme points.
\paragraph{Interior: the ridge}
\label{sec:ridge-infinity}
For notational brevity we omit the explicit dependence of all dependent variables on the coordinates $\varphi_j$, and denote the partial derivatives $\partial/\partial \varphi_j$ simply by $\partial_j$. The vectors $\partial_i\mathbf{K}$ are by definition tangent to the surface, so we have that%
\begin{equation}
\partial_{i}\left(\mathbf{\hat{u}}\cdot\mathbf{K}\right) =\partial_{i}\mathbf{\hat{u}}\cdot\mathbf{K} +\mathbf{\hat{u}} \cdot\partial_{i}\mathbf{K}
=\left(\partial_{i}\mathbf{\hat{u}}\right) \cdot\mathbf{K} =\partial_{i}h.
\end{equation}
Also, as $\mathbf{\hat{u}}\cdot\mathbf{\hat{u}}=1$, we have $ \partial_{i}\mathbf{\hat{u}} \cdot\mathbf{\hat{u}}=0,$ so that
$\partial_{i}\mathbf{\hat{u}}$ are also vectors in the tangent plane. This implies that
\begin{equation}
\mathbf{K} =h \mathbf{\hat{u}}+\gamma_{1}\partial_{1}\mathbf{\hat{u}}
+\gamma_{2} \partial_{2}\mathbf{\hat{u}}.
\end{equation}
To obtain the unknown coefficient functions $\gamma_{i}$ we consider%
\begin{align}
\left( \partial_{1}\mathbf{\hat{u}}\right) \cdot\mathbf{K} & =\gamma
_{1}\partial_{1}\mathbf{\hat{u}}\cdot\partial_{1}\mathbf{\hat{u}}+\gamma
_{2}\partial_{1}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}}=\partial
_{1}h,\\
\left( \partial_{2}\mathbf{\hat{u}}\right) \cdot\mathbf{K} & =\gamma
_{1}\partial_{2}\mathbf{\hat{u}}\cdot\partial_{1}\mathbf{\hat{u}}+\gamma
_{2}\partial_{2}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}}=\partial
_{2}h,
\end{align}
which is readily solved by%
\begin{equation}
\left(
\begin{array}
[c]{c}%
\gamma_{1}\\
\gamma_{2}%
\end{array}
\right) =\frac{1}{\Delta\left( \mathbf{\hat{u}}\right) }\left(
\begin{array}
[c]{cc}%
\partial_{2}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}} & -\partial
_{1}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}}\\
-\partial_{2}\mathbf{\hat{u}}\cdot\partial_{1}\mathbf{\hat{u}} & \partial
_{1}\mathbf{\hat{u}}\cdot\partial_{1}\mathbf{\hat{u}}%
\end{array}
\right) \left(
\begin{array}
[c]{c}%
\partial_{1}h\\
\partial_{2}h
\end{array}
\right) ,
\end{equation}
where the determinant is given by $\Delta\left( \mathbf{\hat{u}}\right)
=\left( \partial_{1}\mathbf{\hat{u}}\cdot\partial_{1}\mathbf{\hat{u}}\right)
\left( \partial_{2}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}}\right)
-\left( \partial_{1}\mathbf{\hat{u}}\cdot\partial_{2}\mathbf{\hat{u}}\right)
^{2}.$
The explicit calculation is performed using Mathematica and yields the curve
\begin{equation}\label{eq:Kridge}
\mathbf{K}^{R}(\varphi_1) = \frac{2 \varphi_1}{4+\varphi_1^2} \hat{\mathbf{n}}_1-\frac{\sqrt{5}}{4+\varphi_1^2}\hat{\mathbf{n}}_3.
\end{equation}
This result implies that for fixed $\varphi_1$ the mode instability surfaces with different values of $\varphi_2$ are all tangent to a single ridge-like structure. Substituting $\varphi_1=\pm 4\cos^{2}{(a\pi)}$ and transforming back to the original frame then shows that this is in fact the ridge Eq.~(\ref{eq:ridge}) as introduced in Section \ref{sec:ridge}. This proves the perhaps surprising fact that, as we already hypothesized on the basis of the finite $N$ results, all the $M=8$ modes that make up the interior of the IBZ become unstable on a set of measure zero in phase space.
\paragraph{The boundary: the fans}
\label{sec:fans-infinity}
Referring to Figure \ref{fig:phi_domain} and Eqs.~(\ref{eq:phi2max}) and (\ref{eq:phi2min}), we see that for each $\varphi_1$ there are two limiting tangent planes whose orientations are determined by $\mathbf{F}\left(\varphi_1,\varphi^{max}_2(\varphi_1)\right)$ and $\mathbf{F}\left(\varphi_1,\varphi^{min}_2(\varphi_1)\right)$ respectively. The former corresponds to a diagonal stripe mode, whereas the latter corresponds to striped ($\varphi_1>0$) and modulated-stripe ($\varphi_1<0$) modes. Thus from each location on the ridge there are two straight lines with given orientation that end at the already identified apices of the fans, the points $\mathbf{K}^{S}$, $\mathbf{K}^{MS}$ and $\mathbf{K}^{DS}$. Hence, in this limit the fans become sectors of a generalized cone with (a segment of) the ridge as base. These cone sectors are ruled surfaces, which we can conveniently parametrize as
\begin{equation}
\mathbf{K}^{X}(\varphi_1,l) = \mathbf{K}^{R}(\varphi_1)+l \left( \mathbf{K}^{X}-\mathbf{K}^{R}(\varphi_1)\right),\,l\in[0,1],
\end{equation}
where $X$ labels the specific apical vertex of the cone sector.
\paragraph{The extreme points: the major modes} The three extreme points of the IBZ simply yield the major modes already discussed in Section \ref{sec:major-modes} that dominate the phase diagram for $K_3>0$.
\subsubsection{The geometry of $D_{\infty}$}\label{sec:dinfty}
It is now straightforward to verify how the fans connect up with the major modes. With all these components in place we can now give the full description of $D_\infty$, by enumerating the components of its boundary $\partial D_{\infty}$.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c|cc|c|lc|}
\hline
Type & Symbol & Mode(s) & & M & Specification &\\
\hline
& F & $(0,0)$ & & 1 & $\conv\left( \mathbf{K}^{T}, \mathbf{K}^{S},\mathbf{K}^{DS}\right)$ & \\
Major modes & AF & $(\pi,\pi)$ & & 1 & $\conv\left(\mathbf{K}^{T}, \mathbf{K}^{MS}, \mathbf{K}^{DS}\right)$ & \\
& AS & $(\pi,0)$ & & 2 & $\conv\left( \mathbf{K}^{T}, \mathbf{K}^{S},\mathbf{K}^{MS},\mathbf{K}^{B}\right)$ & \\
\hline
& S & $(a\pi,0)$ & & 4 & $\mathbf{K}^{R}(\varphi_1)+l \left( \mathbf{K}^{S}-\mathbf{K}^{R}(\varphi_1)\right)$ & \\
Fans & MS& $(\pi,a\pi)$ & $a\in[0,1] $ & 4 & $\mathbf{K}^{R}(\varphi_1)+l \left( \mathbf{K}^{MS}-\mathbf{K}^{R}(\varphi_1)\right)$ & $l\in[0,1]$ \\
& DS& $(a\pi,a\pi)$ & &4 & $\mathbf{K}^{R}(\varphi_1)+l \left( \mathbf{K}^{DS}-\mathbf{K}^{R}(\varphi_1)\right) $ & \\
\hline
Ridge & R & all others & & 8 & $\mathbf{K}^{R}(\varphi_1) \quad \varphi_1\in[-4,4]$ & \\
\hline
\end{tabular}
\caption{The components of the order-disorder surface $\partial D_\infty$. Here $\conv(\mathbf{K}_1,\mathbf{K}_2,\ldots)$ denotes the convex hull of the set of points in the argument list.}
\label{tab:D_infinity}
\end{table}
We visualize $D_\infty$ in Figure \ref{fig:D_infinity}. We now note that, due to the fact that both the fans and the ridge are sets with non-zero curvature, the structure of the bifurcation modes in these regimes of phase space is inevitably of a `devil's surface' nature. Any variation of $\mathbf{K}$ in these regimes leads to a smooth non-constant variation of the critical modes $\mathfrak{q}$ that satisfy the bifurcation condition $\mathbf{K}\cdot \mathbf{F}(\mathfrak{q})=1$. As $2\pi\mathbb{Q}^2\cap\widehat{\mathfrak{U}}_{\infty}$ is dense in $\widehat{\mathfrak{U}}_{\infty}$, there are bifurcating modes of arbitrary complexity in the neighbourhood of any mode $\mathfrak{q}$ in this regime.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{images/polytopes_trial/Dinf_ypbyd.jpg}
\caption{The disordered region $D_\infty$. Shading highlights the curved generalized cones with apices at the vertices $\mathbf{K}^{S}$, $\mathbf{K}^{MS}$ and $\mathbf{K}^{DS}$, which are the locus of the $M=4$ unstable modes. The ridge $\mathbf{K}^{R}(\varphi_1)$, Eq.\ (\ref{eq:Kridge}), is the common boundary of these cones, and the locus of the $M=8$ unstable modes.}
\label{fig:D_infinity}
\end{figure}
\section{Comparison with simulations}
\label{sec:simulations}
It is clearly infeasible to test the predicted devil's-surface-like complexity of the mode structure of the nascent phases at the order-disorder boundary by numerical means. However, our analysis of finite periodicities with fixed index $N$, which led to the definition of the disorder polytopes $D_N$, showed that these are all realized on the common $N\times N$ square periodicity. The latter condition is readily realized by imposing periodic boundary conditions in a standard single-spin-flip Metropolis simulation. To be able to limit ourselves to a finite number of simulations we make the following choice. For fixed $N$ we consider the set of bifurcating modes $\{\mathfrak{q}_f\}$, where $f$ indexes the set of faces of $D_N$. For each mode $\mathfrak{q}_f$ we determine a representative coupling vector $\mathbf{K}^{*}_f$ as the centroid of the face it belongs to. We then perform a series of simulations along the ray in phase space $\beta\mathbf{K}^{*}_f,\,\beta\in[0,\infty)$. The scaled inverse temperature $\beta$ is thus chosen so that the predicted transition occurs at $\beta=1$, which allows for easy comparison with the simulations independent of the details of each face.
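For concreteness, a minimal single-spin-flip Metropolis kernel for this setup could look as follows (an illustrative sketch only; the parameters shown are not the production settings used for the results below):
\begin{verbatim}
# Illustrative single-spin-flip Metropolis sketch for the nnnn Ising
# model on an L x L lattice with periodic boundary conditions.
import numpy as np

N1 = [(1, 0), (-1, 0), (0, 1), (0, -1)]
N2 = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
N3 = [(2, 0), (-2, 0), (0, 2), (0, -2)]

def local_field(s, x, y, K):
    """Sum_r K_r * (sum of spins in neighborhood N_r of site (x, y))."""
    L = s.shape[0]
    return sum(k * sum(s[(x+dx) % L, (y+dy) % L] for dx, dy in nb)
               for k, nb in zip(K, (N1, N2, N3)))

def metropolis(K, L=6, sweeps=5000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):
        x, y = rng.integers(L, size=2)
        dE = 2 * s[x, y] * local_field(s, x, y, K)  # K = beta*J is
        if dE <= 0 or rng.random() < np.exp(-dE):   # dimensionless
            s[x, y] = -s[x, y]
        # order-parameter measurements would be accumulated here
    return s
\end{verbatim}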
In order to analyze the results of the simulation we need a suitable order parameter to signal the presence (or absence) of certain modes. As we will perform multiple replicates of the simulations at each inverse temperature, this order parameter has to be insensitive to any of the possible global symmetries that link different replicates. Defining the Fourier transform of the site magnetisation pattern by
\begin{equation}\label{eq:antif}
\hat{m}_{\mathrm{q}}=\frac{1}{N}\sum_{\mathrm{z}\in \mathcal{U}_{\mathrm{P}}} m_{\mathrm{z}}e^{-i\langle\mathrm{q},\mathrm{z}\rangle},
\end{equation}
we can define
\begin{equation}\label{eq:op}
\mu_{\mathrm{q}} \equiv\frac
{1}{\left\vert \mathfrak{D}_4\right\vert }\sum_{g\in\mathfrak{D}_4}\hat{m}_{g\mathrm{q}}^{\ast}\hat{m}_{g\mathrm{q}}.%
\end{equation}
By virtue of being quadratic in the magnetisations, this expression divides out the up-down symmetry of the Hamiltonian. By multiplying complex conjugates, the translation symmetries, which only generate phase factors, are divided out. Finally, the explicit ``averaging'' over the point group symmetries divides out the remaining symmetries.
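In practice this order parameter is conveniently evaluated with a fast Fourier transform. An illustrative sketch for a single spin snapshot (all names are ours; averaging over the deduplicated orbit is equivalent to the group average in Eq.~(\ref{eq:op}), since each orbit element occurs equally often under $\mathfrak{D}_4$):
\begin{verbatim}
# Illustrative sketch: the D4-averaged order parameter mu_q of a spin
# snapshot on an N x N periodic lattice, via the FFT.
import numpy as np

def mu(spins, l1, l2):
    """mu_q for q = 2*pi*(l1, l2)/N, averaged over the D4 orbit of q."""
    N = spins.shape[0]
    mhat = np.fft.fft2(spins) / N
    orbit = {((s1*a) % N, (s2*b) % N)
             for a, b in [(l1, l2), (l2, l1)]
             for s1 in (1, -1) for s2 in (1, -1)}
    return np.mean([abs(mhat[i, j])**2 for i, j in orbit])

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(6, 6))  # placeholder configuration
print(mu(spins, 3, 0))                    # amplitude of the (pi, 0) mode
\end{verbatim}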
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{images/sixmosaic/simulations.pdf}
\\
\caption{Order parameter $\mu_{\mathfrak{q}}$ as a function of the reduced inverse temperature $\beta$ for all possible modes on a $6\times6$ periodic lattice, for coupling constants corresponding to the centroid of each of the $10$ possible faces of $D_6$, as visualized in the upper left corner. In all cases the predicted bifurcating mode is the sole dominant one.}
\label{fig:sixmosaic}
\end{figure}
We performed simulations for $N=4,6,8,12$. As a proof of principle we show the order parameter values for the emergent modes beyond each of the $10$ faces of $D_6$ in Figure \ref{fig:sixmosaic}. The results for the other $N$ values were similar (data not shown).
For ease of reference these plots are organized to mimic the geometry of the salient IBZ, $\widehat{\mathfrak{U}}_6$. In all cases the observed dominant mode is the one predicted by our mean-field analysis. Moreover, in all cases the other modes, which have fairly significant amplitudes due to inevitable finite-size effects in the disordered phase, appear to be suppressed in the ordered regime. Strikingly, the shape of the order parameter curves also obeys the predicted symmetry in the anti-diagonal of the IBZ (see Section \ref{sec:odd-even}). Finally, and as expected, the mean-field analysis appears to underestimate the value of the inverse temperature at which the ordering transition occurs. In Appendix \ref{app:simulations} we provide a few more technical details about the simulations.
\section{Conclusions}
Our analysis of the order-disorder transitions of the field-free nnnn Ising model on the square lattice shows that the observation by Landau and Binder in their seminal paper on this topic almost four decades ago \cite{Landau1985PhaseCouplings} that ``Using mean-field theory, we also find indications of interesting behavior for $T>0$'' was prescient. Our results indicate that in this approximation the strong frustration induced by an antiferromagnetic nnnn interaction produces fully developed complexity already at the level of the high-temperature order-disorder transition. Indeed, a large part of the order-disorder surface in the half-space $K_3 <0$ represents a `devil's surface', where bifurcating modes of arbitrary complexity are densely interspersed.
Our results also bring to the fore a hitherto perhaps less appreciated role for the lattice symmetry group and its action on the space of lattice modes, by showing that the multiplicity $M$ of these modes under the point group is a strong determinant of whether and where in phase space these modes become unstable. Strikingly, the $K_3>0$ part of the order-disorder surface is entirely determined by the three major modes at the extreme points of the IBZ, while the three one-parameter families of $M=4$ modes associated with the edges of the IBZ, all located in the half-space $K_3 <0$, make up the remaining surface area. Thus a set of measure zero in the IBZ accounts for all of the order-disorder surface except for a set of measure zero in phase space, the ridge, onto which all the $M=8$ modes, which represent the full measure of the IBZ, are compressed. It is our intuition that these results can possibly be interpreted within the setting of the so-called Equivariant Branching Lemma \cite{Golubitsky1988SingularitiesTheory,Golubitsky2003TheSpace}, a cornerstone of the theory of bifurcations with symmetry, which identifies a special role for solutions with `maximal' residual symmetry with respect to the symmetry group being broken.
Taken together, these results provide a somewhat paradoxical answer to our original question of the designability of complex patterns in binary lattice gases. On the one hand, the antiferromagnetic nnnn interactions enable a vast array of patterns to become accessible directly from the disordered phase. On the other hand, the ultra-sensitive dependence on the precise values of the coupling constants, implied by the devil's-surface structure of the $M=4$ modes and the collapse of the $M=8$ modes onto a set of zero measure, effectively precludes the requisite degree of control in selecting specific patterns. It is an interesting question whether it is possible to circumvent the latter defect, perhaps through multi-spin interactions, and construct a system with a more robust yet sufficiently rich phase behaviour.
Obviously, the mean-field approach is a severe approximation, and one may well ask whether any of these features survive the inclusion of the inevitably strong correlations in a low-dimensional system such as the 2D square lattice. Here, we were able to provide limited evidence using Monte Carlo simulations that at least some of the predictions remain valid when we include these correlations up to the cutoff imposed by periodic boundary conditions. Specifically, we correctly predict the dominant mode developing from the disordered phase along rays in phase space that pass through the centroids of the faces of the calculated disorder polytopes $D_N$. The `optimistic' view suggests that results on the nature of symmetry-breaking events, which are to a large extent constrained by purely group-theoretical properties, may be more universal, and hence transcend the specific approximation chosen.
There are several directions of further research suggested by our results. First, it would be interesting to study this system beyond the mean-field approximation, perhaps using a variant of the Cluster Variation Method \cite{Pelizzola2005ClusterModels}. Obvious questions are: (i) does the order-disorder surface remain a convex polytope and (ii) if so, which of its features remain invariant. Next, one could explore the immediate generalisations of the bifurcation conditions Eq.~(\ref{eq:bif}) to different lattices and/or longer-ranged interactions. The analysis framework we set up here can readily be extended in these directions, albeit that as we increase the interaction range we also increase the dimensionality of the disorder polytopes, with a concomitant increase of geometrical complexity. So far, we have also limited our analysis to the order-disorder surface. What happens beyond it is an open question. We have indications that, at least for finite $N$, the dimensionality of the solution spaces associated with the bifurcating modes is significantly smaller than $N$, which would possibly make it tractable to at least numerically track these solutions to possible lower-temperature transitions. We certainly expect that secondary transitions are likely to occur, as most of the bifurcating modes only partially break the symmetry of the underlying lattice. Although we did not dwell on this here, our simulations also point to the occurrence of such transitions.
It would also be interesting to see what, if anything, the present analysis reveals about the ground-state phase diagram. Here, the recently developed method of mapping the ground-state problem of arbitrary spin models into a Maximum Satisfiability problem \cite{Huang2016FindingMAX-SAT}, or tensor network approaches for frustrated systems \cite{Vanhecke2021SolvingNetworks} may prove useful.
Finally, on a much more abstract level, there recently has been a series of papers that focus on the universality and complexity of classical spin models from the perspective of the theory of computation \cite{DeLasCuevas2016SimplePhysics,Kohler2019TranslationallyHamiltonians, Drexel2020DescribingMeasure}. It would be fascinating to explore what these insights could contribute to understanding the present system and frustrated systems in general.
\begin{acknowledgments}
The authors would like to thank David Avis and Charles `Skip' Jordan for their kind assistance in using \texttt{lrs}. This work is part of the research programme of the Dutch Research Council (NWO) and was performed at the research institute AMOLF.
\end{acknowledgments}
\section{Introduction}
\subsection{Properties of partially ionized warm clouds in the local
interstellar medium}
The Sun is surrounded by a patchy network of warm (5,000--10,000~K)
partially ionized gas
clouds extending out to a distance of about 15~pc in the local interstellar
medium (LISM). We have learned about the properties
of these gas clouds from high-resolution spectra of absorption lines
produced by interstellar gas along lines of sight to nearby stars
and from satellite measurements
of interstellar gas flowing through the heliosphere.
\cite{Crutcher1982} first reported that the radial velocities of
interstellar absorption lines in
the spectra of nearby stars are consistent with interstellar gas
flowing toward the Sun from the direction of the Scorpio-Centaurus
Association. Later
investigations \citep[e.g.,][]{Lallement1992,Frisch2002} found
that the local interstellar gas flow has a number of velocity
components with slightly different flow directions and speeds.
\cite{Redfield2008} subsequently identified 15 different velocity
components of warm interstellar gas located within 15~pc of the Sun
by analyzing interstellar absorption lines in {\em HST} spectra of 157
nearby stars. The measured radial velocities along the lines of sight to
stars distributed within large solid angles allowed
\cite{Redfield2008} to determine velocity vectors for each of the 15
clouds. Using these velocity vectors, \cite{Malamut2014} predicted accurate
radial velocities for the interstellar gas along the lines of sight to
all 34 stars observed in their new data set.
This success in predicting accurate radial velocities for these new sight
lines demonstrated that these cloud vectors have accurate predictive
power. Figure~\ref{allclouds} shows the angular extent
in Galactic coordinates of the four closest
interstellar gas clouds named LIC, G, Blue, and Aql.
The Local Interstellar Cloud (LIC) is so-named
because its angular extent covers nearly half of the sky, implying
that the Sun is located just inside of the LIC or possibly immediately outside.
The decision as to which option is more likely valid requires a
second data type ---
measurements of interstellar gas flowing through the heliosphere.
Table~\ref{tab:inflow} compares the properties of the interstellar gas flowing
through the heliosphere with the parameters of gas located in the four nearest
interstellar clouds. The properties listed are the flow speeds
($v_{\rm LISM}$) relative to the Sun, temperatures ($T$) inferred
from the interstellar line widths, and the ecliptic longitude
($\lambda$) and latitude ($\beta$) of the flow. In the table, we
list parameters for neutral and ionized helium
gas flowing through the heliosphere from the LISM as measured by four
spacecraft: the {\em Extreme Ultraviolet Explorer (EUVE)},
{\em Interstellar Boundary Explorer (IBEX)}, {\em Ulysses}, and the
{\em Solar TErrestrial Relations Observatory (STEREO)}. {\em EUVE}
measured the resonant scattering of solar EUV photons by
inflowing neutral helium atoms.
{\em IBEX} measured the direction of neutral helium
atoms that flow through the heliosphere without direction changing collisions.
The resulting parameters obtained from four analyses of {\em IBEX}
data listed in the table are in excellent agreement, indicating that the
inflow speed of LISM gas relative to the Sun's motion through the LISM
is about $v_{\rm LISM}$=26~km~s$^{-1}$ and the inflow direction is
given by ecliptic longitude $\lambda=75.5^{\circ}$ and ecliptic latitude
$\beta=-5.2^{\circ}$. The most recent {\em IBEX} measurement of inflowing
helium \citep{Swaczyna2018} refers to the primary component, which is
separated from the ``warm breeze'' secondary component.
The inflow
direction of neutral helium observed by the {\em Ulysses} spacecraft and He$^+$
pick-up ions (PUI) measured with {\em STEREO} spacecraft data also agree with
$\lambda=75.5^{\circ}$. Helium pick-up ions are previously neutral
helium atoms that were ionized by EUV photons or charge-exchange processes
in the heliosphere, then
picked up by the magnetic solar wind and gravitationally focused into
a cone in the downwind direction. \cite{Taut2018} have investigated
possible systematic errors in the analysis of the {\em STEREO}
He$^+$ pick-up ion data, but they found no significant change in $\lambda$
compared to the earlier results \citep{Mobius2015b}, except for a more
realistic assessment of the errors.
Since the analysis of {\em IBEX} data of neutral
helium results in a
tight coupling between $v_{\rm LISM}$, $\lambda$, and $\beta$
\citep{McComas2015,Mobius2015a,Mobius2015b}, independent measurements
of $\lambda$
from {\em Ulysses} and {\em STEREO} are essential in pinning down
accurate values for $v_{\rm LISM}$ and $\beta$. There have been a
number of studies concerning whether the inflow vector obtained from
{\em IBEX} observations of neutral helium is affected by
interstellar magnetic fields or by confusion with a second component
of the inflowing
helium, called the ``Warm Breeze'' \citep{Kubiak2014},
but these possible effects appear to be very small \citep{Kubiak2016}.
Included in Table~\ref{tab:inflow} are the properties of the neutral
hydrogen gas located in the nearby partially ionized LISM clouds.
\cite{Redfield2008} obtained these properties from their analysis of
interstellar absorption lines in {\em HST} spectra of nearby stars.
Also included in the table is a reanalysis
of the flow vector through the LIC including 25\% more sightlines that
were not available at the time of the 2008 paper. The addition of
these new sightlines produced only a slight change in the LIC flow
parameters. The LIC cloud provides the closest match to the inflow
parameters provided by {\em IBEX}, {\em Ulysses}, and {\em STEREO}, but
the match is not perfect. We will discuss this agreement or
disagreement in Section 6.
\cite{Slavin2008} computed a
model for the LIC with neutral hydrogen number density
$n_{\rm HI}=$ 0.19--0.20~cm$^{-3}$, electron density
$n_e=0.07\pm0.01$~cm$^{-3}$ and temperature $T=6300$~K. \cite{Redfield2008}
found that the temperature of gas in the LIC is $7500\pm 1300$~K,
that the temperatures of the other clouds lie in the
range 5300--9900~K, and that their neutral hydrogen column densities lie in
the range $\log N_{\rm HI}=$ 17.2--18.8. The values of $n_{\rm HI}$ and $n_e$
in the other clouds are unknown, although the clouds are likely partially
ionized like the LIC.
The absence of interstellar absorption at the predicted LIC velocity
in the direction of the Sun's motion implies that the Sun will leave
the LIC in less than 3000 years \citep{Redfield2008}.
\cite{Frisch2013} and \cite{Frisch2015}
proposed that the inflow direction has changed over the last 40 years,
suggesting that the heliosphere's environment is changing in our lifetime.
\cite{Lallement2014}, however, argued against changes in the neutral helium
inflow direction based on a reanalysis of the {\em IBEX} data including
dead-time counting effects and the ecliptic longitude directions of
pick-up ions measured by {\em STEREO}. Their conclusion was supported by the
absence of any measurable change over a 20-year time span in the
interstellar flow vector of neutral hydrogen measured by
the Solar Wind ANisotropies (SWAN) experiment on {\em SOHO}
\citep{Koutroumpa2017}.
\subsection{What are the properties of the intercloud gas?}
The theoretical models of the interstellar medium proposed by \cite{Field1969},
\cite{McKee1977}, and \cite{Wolfire1995} describe the interstellar gas as
consisting of three components: cold ($T\leq 50$~K) neutral and molecular gas,
warm neutral or partially ionized gas, and million-degree
low-density fully ionized plasma. These classical models assume that the three
components are each in thermal equilibrium and coexist in pressure
equilibrium, but steady
state equilibrium is highly unlikely in the low density dynamic interstellar
medium where the time scales for ionization and recombination are
on the order of $10^7$ years \citep{Chassefiere1986}.
The warm partially ionized gas clouds within a few parsecs of the Sun
have properties roughly consistent with the warm component
predicted by the classical models, and dense cold molecular clouds are
observed typically by CO and H~I 21-cm emission.
The nearest cold gas with a temperature of 15--30~K is the
Leo cloud located
at a distance between 11.3 and 24.3~pc from the Sun \citep{Peek2011}.
However, numerical simulations by \cite{Berghofer2002}, which include supernova
explosions and realistic thermal and dynamic processes, predict a very
wide range of densities and temperatures in the ISM but no pressure equilibrium
and no identifiable thermal phases.
The Sun is located in a low-density region called the Local Cavity
that extends more than 80~pc in all directions \citep{Frisch2011}.
Inside of the Local Cavity are at least 15 partially ionized warm clouds
\citep{Redfield2008} and intercloud gas
that was originally assumed to be hot (roughly $10^6$~K), fully
ionized, and low density (roughly 0.005~cm$^{-3}$).
This Local Hot Bubble model was supported by the predictions of
the classical models and observations of diffuse soft X-ray emission
detected by rocket experiments and the {\em ROSAT} satellite. However,
the presence of hot gas in the Local Cavity is now challenged on the basis
of the following observational problems presented by \cite{Welsh2009}:
\begin{description}
\item[Solar wind charge exchange (SWCX) emission] The unexpected
detection of X-ray emission from Comet Hyakutake \citep{Lisse1996}
led to the recognition that charge exchange reactions between solar
wind ions and neutral gas in the heliosphere can
produce X-ray emission \citep{Cravens1997} that is
similar to the emission produced by a million degree
plasma. This result led to two different scenarios: (1) that roughly half
of the observed diffuse X-ray emission in the Galactic plane is produced by
SWCX reactions inside the heliosphere with the other half produced
by hot intercloud plasma
\citep{Robertson2003,Galeazzi2014}, or (2) that essentially all of the 0.75 keV
emission in the Galactic plane is SWCX emission and there is no need
for emission from a hot plasma except near the Galactic poles
\citep{Snowden1994,Cox1998,Koutroumpa2009,Koutroumpa2012}.
\item[O~VI absorption] If hot gas were present in the Local Cavity,
then interstellar absorption in the far-ultraviolet lines of O~VI
would indicate that intermediate temperature ($T\approx 300,000$~K) gas
is present where the hot gas comes in contact with cooler gas at the
edges of the partially ionized warm gas clouds. O~VI
absorption lines are detected in the circumstellar environment of hot
stars and at high Galactic latitudes where there is hot gas in
contact with the Galactic halo, but O~VI absorption is not detected in
lines of sight towards stars within 58~pc of the
Sun \citep{Barstow2010}. The intercloud gas must, therefore, be cooler than
300,000~K yet still be mostly ionized so as to not show neutral
hydrogen absorption.
\item[Pressure imbalance] If the diffuse X-ray emission were produced
by hot plasma, then the inferred emission measure predicts a gas
pressure $P/k$ = 10,000--15,000~cm$^{-3}$K \citep{Snowden2014b}
that is much larger than the gas pressure in the warm partially
ionized clouds like the LIC where
$P/k \approx 2500$~cm$^{-3}$K \citep{Redfield2008}.
While additional pressure terms
(e.g., magnetic fields, cosmic rays, and ram pressure) may be
important, the very large pressure difference argues against the presence
of hot plasma at least in the Galactic plane.
\item[Upper limits on EUV line emission] Upper limits for diffuse
high-temperature emission obtained by the {\em EURD}
(Espectr\'ografo Ultra-violeta extremo para la Radiaci\'on Difusa)
satellite \citep{Edelstein2001} exclude significant emission
from both $10^6$~K and intermediate temperature ($10^5$~K) gas in the
Local Cavity. Upper limits obtained with the {\em Cosmic Hot
Interstellar Plasma Spectrometer (CHIPS)} satellite by \cite{Hurwitz2005}
for diffuse emission of Fe lines, in
particular the \ion{Fe}{9} 171.1~\AA\ line, are also inconsistent with the
predicted emission from putative $10^6$~K thermal plasma in the Local Cavity.
\end{description}
Given these strong arguments against the presence of hot gas in the Local
Cavity except towards the Galactic poles, the gas located between the warm
partially ionized clouds
(intercloud gas) and elsewhere within 80~pc of the Sun must be ionized
but not necessarily hot so as not to be detected as neutral gas.
Upper limits on the non-SWCX
X-ray emission require that the intercloud gas be
much cooler than $10^6$~K or have a very low emission measure
as indicated by X-ray shadowing experiments \citep[e.g.,][]{Peek2011}
and by extreme ultraviolet spectroscopy \citep{Hurwitz2005}.
Various authors have proposed different
solutions to the intercloud gas problem by identifying different
sources of past and present ionization. \cite{Lyu1996} and
\cite{Breitschwerdt1999} proposed that
the intercloud gas is a recombining remnant of a
past ionization event such as a supernova shock wave. In this
non-equilibrium plasma, the
degree of ionization can be far higher than the electron temperature of the
gas. This model is supported by the presence of young massive stars in
the nearby Scorpius-Centaurus OB Association and the likely presence of
a previous group of massive stars that produced many supernova
explosions with the last supernova perhaps as recent as 0.5 Myr.
\cite{Welsh2009} proposed a ``Hot-Top model'' in which there is no
hot gas except near the Galactic poles, but elsewhere the intercloud
gas is highly ionized with an electron temperature of about 20,000~K
in rough pressure equilibrium with the partially ionized warm clouds.
An important source of ionization is the EUV radiation from
$\epsilon$~CMa, the brightest EUV source detected by the
{\em Extreme Ultraviolet Explorer (EUVE)} satellite, together with other hot stars
and white dwarfs \citep{Vallerga1995, Vallerga1998}.
Among the nearby sources of EUV emission is Sirius~B, located only
2.6~pc from the Sun.
\cite{Stromgren1939} showed that the EUV emission of hot stars
photoionizes the surrounding gas, producing an H~II region extending out to a
distance that defines a classical Str\"omgren sphere. Our model for
the intercloud gas near the Sun is a Str\"omgren sphere-like H~II
region photoionized primarily by $\epsilon$~CMa rather than a
recombining plasma, because the ionization state of the gas seen towards
$\epsilon$~CMa is modest (mostly singly-ionized atoms) and the EUV
radiation field is very strong.
\subsection{Outline of this paper}
In this paper, we describe the properties of the partially
ionized warm gas clouds in the immediate neighborhood (within 4 pc)
of the Sun and the intercloud gas present between these clouds. In a
subsequent paper, we will extend this analysis further into the LISM.
In Sections 2 and 3, we measure the size and shape of the LIC from 62 column
densities of \ion{D}{1}, \ion{Fe}{2}, and \ion{Mg}{2}. Section 4
describes the properties of Str\"omgren spheres of ionized gas
surrounding nearby hot stars and white dwarfs. In Section 5,
we identify the hydrogen hole in the LIC, which is
photoionized by EUV radiation primarily from $\epsilon$~CMa, and
propose that the Blue cloud in the direction of the hydrogen hole is
a Str\"omgren shell. In Section 6 we consider whether the flow vector
measured by {\em IBEX} and other satellites is inconsistent with the
LIC flow vector measured from interstellar absorption lines,
and, if so, what information can be gleaned from this inconsistency.
In Section 7, we
identify the gas clouds and intercloud components in the sightlines to
nearby stars. In Section 8, we propose that the recent measurement
in Antarctic snow of
interstellar grains containing $^{60}$Fe from a supernova could be
explained by the inflow of dust grains from warm clouds in contact
with the heliosphere either at a continuous low level rate or during
an unusual event. Finally, we
list our conclusions and needed future work.
\section{DATA ANALYSIS}
\subsection{Estimating distances to the edge of the LIC}
The procedure for estimating distances to the edge of the LIC is conceptually
straightforward. We assume that the LIC surrounds the heliosphere in
most directions and has
constant neutral hydrogen density $n$(H~I). Therefore, the neutral
hydrogen column density $N$(H~I) divided by $n$(H~I)
gives the path length of the absorbing medium, which in this case
is simply the distance to the edge of the LIC along this line of sight
(LOS). A fairly robust measurement of $n$(H~I)
exists for the immediate vicinity of the Sun, derived from (a)
Lyman-$\alpha$ backscatter estimates \citep{Quemerais1994}, (b)
{\it in situ} neutral helium measurements \citep{Gloeckler2004}, and (c)
\ion{He}{1}/\ion{H}{1} measurements from extreme-UV observations of local
white dwarfs \citep{Dupuis1995}. These diverse measurements indicate that
$n($\ion{H}{1}$) \approx 0.2$~cm$^{-3}$ in the immediate
vicinity of the Sun. This density is higher than the lower limits on
$n($\ion{H}{1}$)$ derived by dividing $N$(H~I) by the
distances to the nearest observed stars, which reach a maximum of
$\langle n($\ion{H}{1}$)\rangle \sim 0.1$~cm$^{-3}$. This difference
between $n($\ion{H}{1}) and $\langle n($\ion{H}{1}$)\rangle$
indicates that either the filling factor of warm gas inside the LIC and
presumably other clouds is less than unity or that
the portion of the LIC very close to the Sun has a higher density
than elsewhere in the LIC \citep{Redfield2008}. In addition,
several sight lines to nearby stars (e.g., 61 Cyg,
$\alpha$ CMi, and $\alpha$ Aql) limit the average value of
$\langle n($\ion{H}{1}$)\rangle$ to $\leq 0.2$~cm$^{-3}$.
Hydrogen column densities, however, are difficult to measure towards nearby
stars. The strongest transition (e.g., Lyman-$\alpha$) is saturated,
contaminated by airglow emission, and complicated by heliospheric and
astrospheric absorption \citep{Wood2005}. We, therefore, estimate $N$(H~I)
from other available atoms and ions in the following priority:
\ion{D}{1}, \ion{Fe}{2}, and \ion{Mg}{2}. Deuterium is an excellent tracer
of hydrogen due to its tight abundance ratio within the Local Bubble,
D/H $= (15.6 \pm 0.4) \times 10^{-6}$ \citep{Linsky2006}. Both \ion{Fe}{2} and
\ion{Mg}{2} are the dominant ionization stages for these elements, and despite
significant depletion onto dust grains, they have relatively tight
correlations with hydrogen. Also, these three ions are commonly observed.
Of the 79 sight lines assigned to the LIC on the basis of their radial
velocities being consistent with the LIC velocity vector
\citep{Redfield2008}, 64 (81\%) have observations in one or more of these
ions. The remaining LIC sight lines were detected in \ion{Ca}{2} alone. Most
measurements were taken from compilations of observations of these ions from
\citet{Redfield2004a} for \ion{D}{1} and \citet{Redfield2002} and
\citet{Malamut2014} for \ion{Fe}{2} and \ion{Mg}{2}.
We use LIC sight lines with measurements of \ion{D}{1},
\ion{Fe}{2}, and \ion{Mg}{2} to empirically determine the appropriate
conversion to hydrogen column densities.
\citet{Redfield2008} calculated the depletion of
\ion{Fe}{2} and \ion{Mg}{2} inside the LIC based on 12 sight lines with both
\ion{D}{1} and \ion{Fe}{2}, and 21 sight lines that have both \ion{D}{1} and
\ion{Mg}{2} observations. The weighted mean value of \ion{Fe}{2}/\ion{H}{1},
where the hydrogen column density is calculated from \ion{D}{1} and the
well-determined D/H ratio given above, is
$\langle$\ion{Fe}{2}/\ion{H}{1}$\rangle = 2.14^{+0.61}_{-0.48} \times 10^{-6}$,
and for magnesium,
$\langle$\ion{Mg}{2}/\ion{H}{1}$\rangle = 3.6^{+2.8}_{-1.6} \times 10^{-6}$.
The errors include the error in the D/H ratio, as well as the standard
deviation in the distribution of measurements, which would include any
depletion or ionization variations within the LIC.
Figure~\ref{fig:columnest} shows a comparison of the estimated \ion{H}{1}
column densities based on observations of the three ions (\ion{D}{1},
\ion{Mg}{2}, and \ion{Fe}{2}). Typical errors in the log \ion{H}{1}
column density
estimate based on \ion{D}{1} are only $\sim$0.09, while for \ion{Mg}{2} and
\ion{Fe}{2} they are 0.35 and 0.27, respectively. Note the tight
correlation and small errors associated with \ion{D}{1}. For all three
comparison plots, $\sim$77\% of the data pairs predict consistent
values of $N$(H~I) within 1$\sigma$. While there is a large
dispersion in the comparison of $N$(H~I) for \ion{Mg}{2} and \ion{Fe}{2},
they still seem to be relatively
good proxies for the \ion{H}{1} column density, provided there is a sensible
estimate of the elemental depletion.
Other ions were investigated, but they suffer more seriously from
ionization and depletion effects. For example, \ion{Ca}{2} is not the
dominant ionization stage of calcium in the LISM, but makes up only 1.6\% of
the gas phase calcium, whereas 98.4\% is expected to be in \ion{Ca}{3}
\citep{Slavin2008}. For this reason, the \ion{Ca}{2}/\ion{H}{1} ratio varies
much more significantly than the corresponding ratios for \ion{Mg}{2} and
\ion{Fe}{2}, and does not provide a useful means of estimating
the hydrogen column density.
The magnitudes of the errors in the abundance ratio to hydrogen (X/H) clearly
show that deuterium measurements are most desirable, typically followed by
\ion{Fe}{2}, and then by \ion{Mg}{2}. Both the \ion{Fe}{2} and \ion{Mg}{2}
measurements are given for all LIC sight lines in Table~\ref{tab:licmem}.
Along six lines of sight, both \ion{Fe}{2} and \ion{Mg}{2} were measured and are
listed in the table. In all
six cases, the two ions lead to $N$(H~I) estimates that
agree to within 3$\sigma$. The estimates of $N$(H~I) used in the
subsequent analysis are typically the ones with higher accuracy, which
for these sight lines without D~I data were obtained from
\ion{Fe}{2} in all cases. Of the 67 sight lines with observations of the LIC
in one of these three ions, 34 (51\%) have \ion{D}{1} observations, 19 (28\%)
have \ion{Fe}{2} but no \ion{D}{1}, and 14 (21\%) were observed in \ion{Mg}{2}
alone. Table~\ref{tab:licmem} identifies each sight line, and which ion was
used in the \ion{H}{1} calculation. We assume
$n($\ion{H}{1}$) = 0.2$~cm$^{-3}$ to calculate the distance from the Sun to
the edge of the LIC.
Sight lines in which the relative error in the distance to the LIC edge
[$\sigma(d_{\rm edge})/d_{\rm edge}$] is of order unity (i.e., $>$0.9) were
removed from the sample, as they do not provide any significant constraint on
the distance to the LIC edge and result typically from large errors
in the measured ion column density because of saturation or the difficulty in
establishing the velocity component structure of blended profiles. Only 3
sight lines were removed ($\sim$4\% of the sample) for this reason.
In addition, sight
lines for which the distance to the LIC edge is estimated at more than 5
standard deviations from the median value were also removed. Again, it is
possible that these are due to erroneously large column density measurements
(in all cases they were larger than the median value) due to saturation or
blending. Five measurements were thus removed ($\sim$7\% of the sample),
with three of them observed in \ion{Fe}{2} and two in \ion{Mg}{2} (which had
complementary \ion{Fe}{2} observations that could be used). No targets
observed in \ion{D}{1} were removed for these reasons.
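In code, the two rejection criteria amount to a simple mask. The sketch
below is purely illustrative: the arrays hold hypothetical edge distances
and errors, not values from Table~\ref{tab:licmem}.
\begin{verbatim}
import numpy as np

# Hypothetical edge distances (pc) and 1-sigma errors; not real data.
d       = np.array([1.2, 0.8, 2.0, 9.5, 1.1, 0.6])
sigma_d = np.array([0.3, 0.2, 1.9, 1.0, 0.4, 0.1])

# Criterion 1: drop sight lines whose relative error is of order
# unity (> 0.9), since they provide no useful distance constraint.
keep = sigma_d / d <= 0.9

# Criterion 2: drop sight lines more than 5 standard deviations from
# the median of the surviving sample (saturation/blending outliers).
med, std = np.median(d[keep]), np.std(d[keep])
keep &= np.abs(d - med) <= 5.0 * std

print(d[keep])
\end{verbatim}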
We note that three of these targets are roughly in the same part of the sky
(l=47$^{\circ}$--57$^{\circ}$, b=27$^{\circ}$--32$^{\circ}$):
72 Her, 99 Her, and HD157466. They all have nearly saturated absorption
lines, leading to large column densities and long inferred path lengths
with large errors. This part of the sky is near the upwind direction and is
complicated by the presence of several LISM clouds
\citep[see Fig. 19 in][]{Redfield2008}.
Also, other LIC absorbers in the vicinity (e.g., $\gamma$ Ser, $\alpha$ Oph,
$\delta$ Cyg, $\iota$ Oph) show relatively high column densities. Three of
these sight lines were observed only in \ion{Ca}{2} ($\alpha$ Oph,
$\delta$ Cyg, $\iota$ Oph), but have among the highest values of $N$(H~I)
detected in the LISM \citep[see Fig. 7 in][]{Redfield2002}.
While $\gamma$ Ser and $\alpha$ Lyr are both within 5$\sigma$ of the
mean LIC path length, both have observed values
that are 2.7$\sigma$ from the predicted value based on our morphological
model. Clearly, something complex and interesting is occurring in this
region of the sky.
A significant number of the \ion{D}{1} measurements were made without full
knowledge of the velocity component structure. If more than one absorbing
cloud is present along the line of sight, this can lead to overestimated
column densities if the profile is modeled as a single component. Due to the
strong thermal broadening of \ion{D}{1} \citep{Redfield2004b}, multiple velocity
components are not immediately obvious and thus require observations at
high resolution of heavier ions such as \ion{Fe}{2} or \ion{Mg}{2} to
infer a more accurate value of $N$(H~I). Because
the contributions of such possible systematic errors are not included in the
\ion{D}{1} column density estimate, we treat the estimates derived from these
sight lines with caution. Of the 34 \ion{D}{1} measurements, this affects 11
(32\%) of them ($\tau$ Cet, $\delta$ Eri, EP Eri, HR1925, HR8, DX Leo, PW And,
SAO136111, V471 Tau, HR1608, and Feige 24). Future short observations
of \ion{Fe}{2} or \ion{Mg}{2} could easily resolve the component
structure and greatly improve
the accuracy of the \ion{D}{1} analysis \citep[e.g.,][]{Malamut2014}.
\section{Morphology of the Local Interstellar Cloud (LIC)}
Motivated by a significant increase in the number of observations of
the LISM, we present a revised analysis of the three-dimensional morphology
of the interstellar material that directly surrounds our solar system,
the Local Interstellar Cloud (LIC). We follow the same procedure outlined
in \citet{Redfield2000}, which involves fitting a series of spherical
harmonics to the estimated distances to the edge of the LIC given
in Table~\ref{tab:licmem}. As in \citet{Redfield2000}, we fit the data
to 9 basis functions (i.e., $l = 0, 1, 2$). We assume a homogeneous and
constant density cloud in order to estimate the distance to the edge of the
LIC from our column density measurements. In Section 6 we will
test the validity of these assumptions. With a large enough sample and
corresponding high orders of spherical harmonics, any arbitrary closed
surface can be characterized with this technique.
The best fitting amplitude for each
spherical harmonic basis function is determined using a least-squares
minimization routine.
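A minimal version of this fit, assuming real-valued spherical harmonics
and a simple weighted linear least-squares solution (the actual routine
used here may differ in detail), is sketched below; the input arrays are
placeholders rather than the Table~\ref{tab:licmem} data.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, l, azimuth, colatitude)

def real_Y(m, l, lon, colat):
    """Real spherical harmonic built from scipy's complex Y_l^m."""
    Y = sph_harm(abs(m), l, lon, colat)
    if m > 0:
        return np.sqrt(2.0) * (-1)**m * Y.real
    if m < 0:
        return np.sqrt(2.0) * (-1)**m * Y.imag
    return Y.real

def fit_lic_surface(gal_l, gal_b, d, sigma_d, lmax=2):
    """Weighted least-squares amplitudes for all (l, m) with l <= lmax."""
    lon, colat = np.radians(gal_l), np.radians(90.0 - gal_b)
    basis = [(l, m) for l in range(lmax + 1) for m in range(-l, l + 1)]
    A = np.column_stack([real_Y(m, l, lon, colat) for l, m in basis])
    w = 1.0 / sigma_d
    amps, *_ = np.linalg.lstsq(A * w[:, None], d * w, rcond=None)
    return basis, amps

# Placeholder data: a 1 pc sphere plus noise recovers a dominant
# l = 0 amplitude, analogous to the quasi-spherical LIC result.
rng = np.random.default_rng(1)
gl = rng.uniform(0.0, 360.0, 62)
gb = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 62)))
d = 1.0 + rng.normal(0.0, 0.05, 62)
basis, amps = fit_lic_surface(gl, gb, d, np.full(62, 0.05))
print(dict(zip(basis, np.round(amps, 2))))
\end{verbatim}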
There are two significant changes in our analysis compared to
\citet{Redfield2000}. First is the quantity and quality of the input
data. In order to have a sufficient number of data points,
\citet{Redfield2000} used hydrogen column density measurements derived
from observations using {\it HST}, {\it EUVE}, and ground-based
\ion{Ca}{2}. This resulted in a sample comprising 32 measurements. In
the current analysis, we limit ourselves to a homogeneous sample of only
the highest quality column density measurements, derived from
{\it HST} spectra. Given the significant increase in LISM measurements
\citep[e.g.,][]{Redfield2002, Redfield2004a, Malamut2014}, our current
sample includes 62 measurements.
The second significant difference from the \citet{Redfield2000} analysis
is our improved knowledge of the hydrogen volume density in
the LISM. \citet{Redfield2000} used a value of $n_{\rm HI} =0.1$~cm$^{-3}$,
based on limits toward the nearest stars. However, we now estimate the
neutral hydrogen density to be closer to $n_{\rm HI} = 0.2$~cm$^{-3}$
\citep{Slavin2008}. Therefore, our estimates for the size of the LIC will be
roughly a factor of two smaller than in \citet{Redfield2000} as
distances are inversely proportional to the assumed
neutral hydrogen number density.
The resulting parameters of the best-fit spherical harmonics model are
given in Table~\ref{tab:licfit}. Like the original model in
\citet{Redfield2000}, the dominant harmonic is the spherical $l=0$
component indicating that to first-order the LIC can be characterized
as a quasi-spherical closed volume. However, the contributions of the
additional spherical harmonic orders lead to significant departures
from a pure sphere. Contours of the best fit three-dimensional model
are shown in Figures~\ref{fig:licconngp}--\ref{fig:liccongc}. The
shape is much better constrained than in the \citet{Redfield2000} model,
although the general geometry is not all that different. In particular,
the observations continue to support the interesting conclusion that
the Sun is located very near to the edge of the LIC, indicating that
in as little as $\approx$3000 years, the Sun will pass out of the LIC
and into a different interstellar environment.
The geometry of the LIC can also be visualized in Figure~\ref{LICfromCenter},
where the shading indicates the distance to the edge of the LIC from
the geometric center of the LIC (as given
in Table~\ref{tab:licfit}).
The reduced $\chi^2$ is
relatively high, indicating that there are significant departures
from this relatively simple model. In particular, out of 62
total data points, 26 (42\%) are within 1$\sigma$, 38 (61\%) within 2$\sigma$,
and 54 (87\%) within 3$\sigma$. The most discrepant sight lines
are toward $\eta$ Ari (for which the model predicts a larger distance than
the observations imply) and $\tau^6$ Eri (for which the model predicts
a much closer
distance than the observations imply). Some of the discrepancies could
be explained by misidentifications or blends of multiple components as
singular LIC absorption features. However, it is likely that our simple
assumptions of homogeneity and constant density are, unsurprisingly,
not completely realistic. While, by and large, the structure is well
characterized in this way, there are likely to be regions where there
are significant departures in homogeneity or density that lead to
discrepancies between the model and observations.
\section{What is the Ionization Source for the Intercloud Gas near the Sun?}
The brightest nonsolar source of extreme-UV radiation detected by the
{\em EUVE} satellite was the B2~II star $\epsilon$~CMa ($d$=124~pc) with
an intrinsic ionizing flux of about $2.7\times10^{46}$~photons~s$^{-1}$
\citep{Vallerga1995}. This flux estimate includes a correction for
absorption by a hydrogen column density, $N$(H~I)=$9\times 10^{17}$ cm$^{-2}$,
along the line of sight to the star.
If the \cite{Gry1995} estimate of $N$(H~I)$<5\times 10^{17}$ cm$^{-2}$
is more realistic, then less of the star's EUV radiation is absorbed
along this sight line and the ionizing flux reaching the local gas will be
larger. The next brightest EUV source is
$\beta$~CMa (B1 II-III; $d$=151~pc), followed by many hot white dwarfs located
inside of the Local Bubble \citep{Vallerga1998}.
The total ionization rate of 33 hot white
dwarfs measured by {\em EUVE} is $\sim 1.6\times 10^{45}$ photons~s$^{-1}$
\citep{Welsh2013}, which is more than a factor of 10 smaller than the
ionizing flux from $\epsilon$~CMa.
In a classic paper, \cite{Stromgren1939} showed that the EUV radiation
($\lambda < 912$~\AA) from a hot star completely ionizes hydrogen in its
surrounding volume (called a Str\"omgren sphere) out to a distance now called
the Str\"omgren radius where
the build up of neutral hydrogen opacity absorbs the photoionizing radiation,
producing a narrow partially ionized shell surrounded by neutral hydrogen gas.
In this paper, Str\"omgren developed a simple model assuming that
the hot star is located in a constant density environment in which
flows are ignored and
photoionization of hydrogen is balanced by recombination in a steady state.
In this case, the radius of the classical Str\"omgren sphere is
\begin{equation}
R^3=\frac{3\,(dN_i/dt)}{4\pi\alpha\, n_i n_e},
\end{equation}
where $dN_i/dt$ is the number of ionizing photons per second and $n_i$ and
$n_e$ are the number densities of ions and electrons inside of the
Str\"omgren sphere, and $\alpha$ is the recombination factor
\citep{Harwit1988}. For relatively soft stellar radiation such as from
$\epsilon$~CMa where most of the ionizing radiation is in the
504--912~\AA\ band, hydrogen inside of the Str\"omgren sphere
will be fully ionized and helium mostly
neutral. When the radiation field
is harder with significant radiation at wavelengths shortward of
the 504~\AA\ photoionization edge of He$^0$ or the 228~\AA\ photoionization
edge of He$^+$, as is the case for very hot white dwarfs such as G191-B2B
and HZ~43, then helium will be either singly or doubly ionized.
\cite{Tat1999} estimated the sizes of Str\"omgren spheres around
hot white dwarfs in the Local Cavity using the classical
Str\"omgren sphere model. This model has been extended to include
dust opacity, clumpiness, diffuse radiative transfer, and dynamics
\citep[e.g.,][]{Yorke1986}. \cite{McCullough2000} computed modified
Str\"omgren sphere models for the case of a hot star embedded in
a larger ionized cavity. Depending on the location of the hot
star in the cavity, the H~II region around the star is no longer a sphere.
Rather, the H~II
region produced by the hot star is larger than for the classic case
because the surrounding gas is not neutral and the two H~II regions
can merge. Since Sirius~B resides inside of
the H~II region produced by $\epsilon$~CMa, we refer to the H~II
region near Sirius~B as an ``extended H~II region''.
The electron density in the line of sight to $\epsilon$~CMa is unknown,
but if it is about 0.01 cm$^{-3}$, then the radius of the star's
Str\"omgren sphere equals the distance to the star (130 pc). This is
consistent with the conclusion by \cite{Welsh2013} that
$\epsilon$~CMa is the primary source responsible for the ionization of the
local ISM. They found that the volumetric
filling factor of the classical Str\"omgren spheres of all 33 of the hottest
white dwarfs (excluding Sirius~B) in the Local Cavity is less than 6\%
and that none of these hot white dwarfs are close enough to the Sun
to influence the local ionization.
We next consider whether Sirius~B could be an important
local ionization source given its short 2.6~pc distance from the heliosphere.
Fitting the {\em HST} spectrum of Sirius~B with a non-LTE model
atmosphere, \cite{Barstow2005} obtained
the stellar parameters $T_{\rm eff}=25,193$~K, $\log g=8.556$, and radius
0.0081 solar. Martin Barstow (private communication) kindly computed the flux
shortward of 912~\AA\ for this model as $9.4\times 10^{39}$~photons~s$^{-1}$.
The radius of a classical Str\"omgren sphere for this photon flux is
0.25~pc for an assumed $n_e=0.1$ cm$^{-3}$ \citep{Redfield2008b}
or 1.14~pc for an assumed
$n_e=0.01$~cm$^{-3}$. These calculations are for an isolated Str\"omgren
sphere surrounded by neutral hydrogen, but Sirius~B is embedded in
the large H~II region ionized
by $\epsilon$~CMa, and the physical conditions of interstellar
gas near Sirius and the Sun are controlled by stellar radiation
in the 504--912~\AA\ region and by hot white dwarfs
at shorter wavelengths.
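As an arithmetic check on the Sirius~B numbers just quoted, the sketch
below evaluates Equation (1). The recombination coefficient is our
assumption (a total rate of order $4.5\times10^{-13}$~cm$^3$~s$^{-1}$,
appropriate for $T\approx8000$~K gas), not a value taken from the text:
\begin{verbatim}
import numpy as np

# Classical Stromgren radius, Eq. (1): R = [3Q/(4 pi alpha n^2)]^(1/3),
# for a pure-hydrogen region with n_i = n_e = n.
PC_CM = 3.086e18      # cm per parsec
Q     = 9.4e39        # Sirius B ionizing photons/s (text value)
alpha = 4.5e-13       # cm^3/s; assumed recombination coefficient

def stromgren_radius_pc(Q, n):
    R_cm = (3.0 * Q / (4.0 * np.pi * alpha * n * n)) ** (1.0 / 3.0)
    return R_cm / PC_CM

print(stromgren_radius_pc(Q, 0.1))    # ~0.26 pc (text: 0.25 pc)
print(stromgren_radius_pc(Q, 0.01))   # ~1.2 pc  (text: 1.14 pc)
\end{verbatim}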
Table~\ref{tab:composition} lists the stars within 16~pc of the Sun for
which neutral hydrogen column densities through the nearby clouds have
been measured
by \cite{Redfield2008} and by \cite{Malamut2014}. We list the distances
through neutral hydrogen gas, $\Delta d$(neutral), along the
lines of sight (LOS) through the identified clouds, assuming that the
neutral hydrogen density is the same as that measured for the LIC
($n_{HI}=0.2$ cm$^{-3}$) by \cite{Slavin2008}. We presume that
the remaining path length $\Delta d$(ionized) is filled by
ionized gas, but we revisit this assumption when we discuss the G cloud
sight line in Section~7.
We note that the thickness of the partially ionized outer shell
of a Str\"omgren sphere is $\delta=(n_{HI}\sigma)^{-1}$, where
$\sigma\approx 10^{-17}$~cm$^2$ \citep{Harwit1988} is the
hydrogen-ionization cross section for EUV photons near 912~\AA.
Thus for $n$(H~I)$\approx 0.2$~cm$^{-3}$, the
Str\"omgren shell thickness is $\delta\approx 0.2$~pc. Filamentary warm
clouds like the Mic Cloud could have roughly this thickness.
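The quoted shell thickness is just the mean free path of a 912~\AA\ photon;
a two-line check with the values cited above:
\begin{verbatim}
PC_CM = 3.086e18
n_HI, sigma = 0.2, 1.0e-17              # cm^-3 and cm^2, text values
print(1.0 / (n_HI * sigma) / PC_CM)     # ~0.16 pc, i.e. roughly 0.2 pc
\end{verbatim}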
The incident EUV radiation from $\epsilon$~CMa should produce higher
ionization inside H~II regions and Str\"omgren shells.
\cite{Gry1995} found interstellar absorption by Si~III and C~IV
at the predicted LIC velocity in the sight line to $\epsilon$~CMa,
although the C~IV absorption could be in the stellar wind.
\cite{Dupin1998} also found C~IV absorption at the LIC cloud
velocity in the sight line to $\beta$~CMa. Since both sight lines
pass through Str\"omgren shells, there is strong evidence for higher
ionization in these shells.
In Table~\ref{shells}, we compare the thicknesses of the LIC and Blue clouds
seen in the directions of $\epsilon$~CMa and Sirius. The cloud thicknesses
in these directions are consistent with their being Str\"omgren
shells. It is likely that the outer edges of all clouds facing the EUV
radiation from $\epsilon$~CMa are Str\"omgren shells.
\section{The Hydrogen Hole and Blue Cloud have a common origin}
Figure~\ref{LICfromCenter} shows the distance to the edge of the LIC
from its geometric center computed from the measured values of
$N$(H~I) along the lines of sight to
nearby stars. The region within Galactic longitude
$225^{\circ} \leq l \leq 290^{\circ}$ and Galactic latitude
$-60^{\circ} \leq b \leq +10^{\circ}$ shows
very low H~I column densities, corresponding to distances of
$<0.5$~pc from the geometric center of the LIC to its edge.
We call this region the ``hydrogen hole''. The
Galactic coordinates of $\epsilon$~CMa, $\beta$~CMa, and Sirius as seen
from the center of the LIC all lie within this region. The location of the
hydrogen hole is consistent with the coordinates of these three strong
sources of ionizing radiation that apparently shape the morphology of
the LIC in the $\epsilon$~CMa direction.
\cite{Welsh1999} refer to the low hydrogen
column density along the lines of sight to $\epsilon$~CMa and
$\beta$~CMa as an interstellar tunnel or local chimney that extends
beyond these stars to the Galactic halo.
Figure~\ref{fig:dliceuve} shows the distance in pc from the geometric center of
the LIC to its edge along the sightlines to the 10 brightest EUV sources
observed by the {\em EUVE} spacecraft \citep{Vallerga1995}. The lines
of sight to the brightest EUV
source $\epsilon$~CMa and four of the five next brightest sources all have
the shortest distances from the center of the LIC to its edge,
which is consistent with EUV
radiation sculpting the LIC by photoionizing neutral hydrogen.
We now consider what type of gas lies immediately outside of the heliosphere in
the direction of the hydrogen hole. Three possibilities are
(i) a very thin layer of partially ionized LIC cloud gas, (ii) partially
ionized gas of another cloud, or (iii) fully ionized
hydrogen gas (H~II region). A test for the
first two possibilities would be the detection of Lyman-$\alpha$
absorption in the hydrogen wall near the Sun, where
inflowing neutral hydrogen atoms from a partially ionized cloud (LIC
or another cloud) charge-exchange with the outflowing protons of the
solar wind, leading
to a pile-up of heated, red-shifted neutral hydrogen atoms. The result
would be
Lyman-$\alpha$ heliospheric absorption that is red shifted relative to the
inflowing neutral hydrogen. Table~\ref{tab:outside} summarizes the data for
those stars
located inside of or near the region of minimum $N$(H~I) that have
detected or nondetected solar hydrogen wall absorption. The data fall
into three groups detailed in Sections 5.1 to 5.3.
\subsection{Stars well inside of the hydrogen hole}
The nine stars with lines of sight that traverse the hydrogen hole
all show an absorption component at the predicted Blue
cloud radial velocity and, except for $\epsilon$~CMa,
no absorption component at the predicted
LIC radial velocity. Three nearby stars, $\zeta$~Dor,
HR~2225, and HR~2882, also do not show solar hydrogen wall
absorption, indicating the absence of neutral hydrogen in the
interstellar gas immediately outside of the heliosphere. For the other six
stars there are no high-resolution Lyman-$\alpha$ spectra needed
to determine whether or not they show solar hydrogen absorption.
However, all nine stars
are located in the downwind direction of the LIC flow where it is
difficult to detect solar hydrogen wall absorption \citep{Wood2005}.
Therefore, immediately outside of the heliosphere in the hydrogen hole
direction there could be either Blue cloud or H~II gas. We propose that
Blue cloud gas is in contact
with the outer heliosphere along these lines of sight,
and that ionized hydrogen gas is located outside of the Blue cloud
where it receives unattenuated EUV radiation from $\epsilon$~CMa and Sirius.
In their study of the $\epsilon$~CMa line of sight with the Ech B and
G160M gratings on {\em HST}/GHRS, \cite{Gry1995} found absorption at the
predicted radial velocities of the LIC and Blue clouds (their
components 1 and 2). The low value for the hydrogen column density
for component 1, log $N$(H~I)$\approx 17.34$,
implies that the line of sight passes through the edge of the LIC
(see Figure~\ref{fig:lism3}), and the gas temperature $T=7450$~K is
similar to that found by \cite{Redfield2008} for the LIC.
For the Blue cloud, the
inferred neutral hydrogen column density is also very small,
log $N$(H~I)=16.88, and the
cloud gas temperature is low, $T=3600\pm 1500$~K, similar to that
found by \cite{Redfield2008}. The electron density in the Blue cloud,
$n_e=0.46^{+0.40}_{-0.30}$~cm$^{-3}$, found by \cite{Gry1995} could be
much larger than what they found for the LIC,
$n_e=0.09^{+0.23}_{-0.07}$~cm$^{-3}$.
The high electron density of the Blue cloud, {\bf if real}, could
be explained either by a higher total density or less shielding from the EUV
radiation of $\epsilon$~CMa compared to the LIC or both effects.
In addition to interstellar absorption in low excitation lines,
\cite{Gry1995} also found absorption features in the C~IV lines at the predicted
radial velocity of the LIC cloud. They could not rule out the
possibility that the C~IV absorption is stellar, perhaps from the
star's wind, rather than interstellar. Further study is needed to
determine whether the absorption indicates the presence of
highly ionized gas surrounding the cooler material in the LIC
and Blue clouds.
Interstellar absorption along the line of sight to $\beta$~CMa has
been studied by \cite{Dupin1998} using UV spectra from {\em HST}/GHRS
and by \cite{Jenkins2000} using UV spectra from the {\em IMAPS} instrument.
\cite{Jenkins2000} identified a velocity component at the
predicted radial velocity of the Blue cloud. There is also absorption
at the predicted radial velocity of the LIC, but \cite{Jenkins2000}
argued that this absorption is not likely from the LIC on the basis of the
unrealistically high ionization required to fit the observed absorption
in many ions.
\subsection{Stars near the edge of the hydrogen hole}
Five stars are located near the edge of the hydrogen hole either just
inside or outside. Four show LIC absorption and one (Sirius) also shows
solar hydrogen wall absorption. At the outer edge of the hydrogen
hole, therefore, LIC gas is in contact with the outer heliosphere.
For the lines of sight that also
show Blue cloud absorption, we place the Blue cloud just outside of the LIC.
\subsection{Stars outside of but near the hydrogen hole}
Outside of the hydrogen hole
at Galactic longitudes greater than l=280$^{\circ}$, five of the six
stars have solar hydrogen wall detections and all six have no detected
absorption at radial velocities predicted by the LIC velocity
vector. In these directions there must be neutral hydrogen gas flowing
into the heliosphere to create the solar hydrogen wall, but this
neutral gas has the velocity vector of the G or Aql cloud
rather than the LIC.
\subsection{Comparison of the Blue cloud with the hydrogen hole}
Figure~\ref{fig:lism3} shows the outer contours of the hydrogen hole
and the locations of $\epsilon$~CMa, $\beta$~CMa and some of the stars
in Table~\ref{tab:outside} plotted in Galactic coordinates.
Superimposed on the hydrogen hole is the boundary of the Blue cloud
based on Figure 3 in \cite{Redfield2008} and the stars listed in
Table~\ref{tab:outside}. The similar
morphologies of the hydrogen hole and the Blue cloud
strongly imply their physical connection.
The radial velocity of the Blue cloud in the direction of
$\epsilon$~CMa is 12.97~km~s$^{-1}$, which is blue shifted about 6
km~s$^{-1}$ relative to the predicted LIC velocity for this line of
sight. We interpret the Blue cloud as a
Str\"omgren shell that is driven towards the
heliosphere by excess pressure in the external H~II region. The
Blue cloud's lower temperature and higher gas density than
the LIC could result from compression of the Blue cloud gas leading to
increased radiative cooling compared to the LIC.
We conclude from this analysis that in the hydrogen hole direction,
lines of sight from the Sun first pass through the Blue cloud
and then through ionized gas. Outside of the hydrogen hole
at larger Galactic longitudes, the lines of sight from the Sun
pass through the G or other clouds rather than the LIC.
The stars lying at the edge of the hydrogen hole and
$\epsilon$~CMa have
lines of sight that first encounter a small column density of LIC gas.
The absence of LIC gas inside most of the hydrogen hole confirms that
the heliosphere lies at the edge of or just beyond the LIC.
\section{Is the LIC flow vector consistent with measurements of
interstellar gas flowing through the heliosphere?}
In Section 1, we called attention to the flow vector for
neutral and singly ionized helium near the heliosphere
inferred from observations with the {\em EUVE},
{\em IBEX}, {\em Ulysses}, and {\em STEREO} spacecraft. This vector,
which we call the ``inflow vector'', refers to gas just
before entering the heliosphere where interactions with the Sun's
gravity, radiation pressure, and solar wind particles
can alter the flow direction. The excellent agreement in speed, flow
direction and temperature among these
different measurement techniques by different instruments provides
the benchmark for the flow vector of
interstellar gas just outside of the heliosphere. The flow vector of
neutral hydrogen in the LIC cloud, which we call the ``LIC vector'',
refers to gas located at one or a few parsecs from the
heliosphere where possible influences from the Sun and the solar wind
are negligible. Here we consider whether the inflow and LIC vectors
are in agreement, or whether they differ due
to some effect not yet taken into consideration. There are several
possibilities:
\begin{description}
\item[The inflow and LIC vectors agree within measurement errors]
The data in Table~\ref{tab:inflow} show that the gas temperatures of
the two vectors are in
agreement, but the inflow speed of the LIC gas is 2--3~km~s$^{-1}$ too
slow, the ecliptic longitude of the LIC flow is about 3$^{\circ}$ too high,
and the ecliptic latitude about 2$^{\circ}$ too low. However, the difference
in speed is only about $2.2\sigma$ and the difference in flow
directions only $1\sigma$. Given these small differences, one
could argue that the inflow and LIC flow vectors agree,
but new studies are needed to reduce measurement
errors and provide insight into the magnitude of potential systematic errors.
\item[The LIC flow may be inhomogeneous]
The difference in flow speeds of the two vectors, if real, could
provide a test of the kinematics of the LIC gas.
There is no physical reason why the low density gas in the
LIC should have homogeneous flow properties. In fact, the mean nonthermal
broadening of interstellar absorption lines in the LIC,
$\xi=1.62\pm 0.75$~km~s$^{-1}$ \citep{Redfield2008}, is nearly
as large as the
difference in speed between the inflow and LIC vectors. Other nearby
interstellar clouds have values of $\xi$ as large as 3.6~km~s$^{-1}$,
about half of the cloud's sound speed. \cite{Gry2014} have proposed that the
multicloud scenario can be replaced by a single interstellar cloud with velocity
gradients, but \cite{Redfield2015} have argued that the data are
more accurately fit with multiple clouds each with its own velocity vector.
As a test for an inhomogeneous flow pattern in the LIC gas, we have selected
LIC velocity components that meet the following criteria: (a) the
uncertainty in the measured radial velocity is no more than three
times the precision of the instrumental velocity scale
(typically 1.5 km~s$^{-1}$), and (b) the distance from
the Sun is no more than 4~pc. The latter criterion removes velocity
components with large uncertainties in $N$(H~I). Of the 62 LIC
velocity components, 45 meet both criteria. Figure~\ref{fig:lic2}
shows the deviations in the measured radial velocities from those
predicted by the LIC velocity vector.
The velocity deviations appear to be random, except for an excess of
blue points (positive radial velocities) near $l=180^{\circ}$ and
$b=-25^{\circ}$. It is interesting that this location is close to the
tail of the LIC flow, $l=187.0^{\circ}\pm3.4^{\circ}$ and
$b=-13.5^{\circ}\pm 3.3^{\circ}$ \citep{Redfield2008}. The reality of
this region of velocity deviation and a physical explanation for the
alignment with the tail of
the LIC flow require additional data and investigation.
\item[The LIC flow at its outer edge may differ from its mean value]
Outside of the hydrogen hole, a
thin region of LIC gas could be in contact with the ionized gas produced by
radiation from $\epsilon$~CMa and Sirius. Contact with this ionized gas
can alter the
flow of the adjacent LIC gas by a pressure difference
if the ionized gas has a higher or lower pressure than the LIC.
\item[Could hydrogen and helium in the LIC have different flow vectors?]
Since the inflow vector is measured from neutral and ionized helium
and the LIC vector is measured from neutral hydrogen, we consider
whether hydrogen and helium could have different inflow vectors.
\cite{Lallement2005} compared the inflow direction of neutral helium
with that of neutral hydrogen observed from the glow of backscattered
solar Lyman-$\alpha$ photons that were observed by the Solar Wind Anisotropies
(SWAN) instrument on the {\em Solar and Heliospheric Observatory (SOHO)}.
The inflow directions of neutral helium and hydrogen differ by about
4$^{\circ}$ (see Table~\ref{tab:inflow}), which they explain as due to the
inflowing hydrogen being a mixture
of pristine interstellar hydrogen and hydrogen atoms resulting from
charge exchange between solar wind electrons and interstellar protons.
The properties of this hydrogen with mixed origins are very different
from the pristine interstellar helium observed by {\em IBEX, Ulysses,}
and {\em STEREO} and from the hydrogen observed at parsec distances by UV
spectroscopic measurements. Thus the question of whether there is a
difference between the flow
vectors of hydrogen and helium in the LIC prior to interactions in
the heliosphere remains open.
\item[Interstellar magnetic fields may be important]
Using {\em IBEX}
observations of the circular-shaped ribbon of intense emission by
energetic neutral atoms, \cite{Zirnstein2016} derived an
interstellar magnetic field strength of $2.93\pm 0.08$~$\mu$G outside
of the heliosphere. The magnetic pressure of this field is
comparable to the gas pressure at the heliopause, leading to
magnetic field draping that alters the shape of the heliopause region. Also,
the MHD calculations by \cite{Zank2013} show that for an interstellar
magnetic field of this strength, the boundary between the outer
heliopause and the
interstellar medium will be a bow wave rather than a shock wave.
At the center of the ribbon, the
direction of this magnetic field is
$\lambda=227.28^{\circ}\pm 0.69^{\circ}$, $\beta=34.62^{\circ}\pm0.45^{\circ}$ in
ecliptic coordinates or $l=25.98^{\circ}\pm0.70^{\circ}$, $b=50.09^{\circ}\pm
0.57^{\circ}$ in Galactic coordinates. The 41$^{\circ}$ angle between the
interstellar magnetic field and the inflow direction (checked in the
sketch following this list)
means that the magnetic field can change the apparent inflow direction
of ions relative to the undeflected neutrals.
The inflow direction of neutral hydrogen and helium atoms
is not influenced by the magnetic field unless these neutrals had
previously been ions before charge exchange. The consistent values
of $\lambda$ and $\beta$ for He$^0$ measured by {\em IBEX} and {\em Ulysses} and
for He$^+$ measured by {\em STEREO} suggest that magnetic fields may
not play an important role in changing the inflow vector based on
ionized and neutral helium, but this is an open question.
\end{description}
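The $\sim$41$^{\circ}$ field--flow angle quoted above can be verified with
the standard angular-separation formula. In the sketch below, the
ribbon-center field direction is taken from the text, while the helium
inflow direction ($\lambda\approx255.7^{\circ}$, $\beta\approx5.1^{\circ}$)
is an assumed literature value, since the numerical inflow coordinates
appear only in Table~\ref{tab:inflow}:
\begin{verbatim}
import numpy as np

def ang_sep_deg(lam1, beta1, lam2, beta2):
    """Great-circle separation (deg) of two ecliptic directions."""
    l1, b1, l2, b2 = np.radians([lam1, beta1, lam2, beta2])
    cosd = (np.sin(b1) * np.sin(b2)
            + np.cos(b1) * np.cos(b2) * np.cos(l1 - l2))
    return np.degrees(np.arccos(cosd))

# Ribbon-center B direction (text) vs. assumed He inflow direction.
print(ang_sep_deg(227.28, 34.62, 255.7, 5.1))   # ~40 deg
\end{verbatim}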
We conclude that the inflow and LIC vectors are different and that this
difference indicates
that the heliosphere is now passing through a region different from the
main body of the LIC. The heliosphere may be inside the outer edge of
the LIC with properties modified by EUV radiation or contact with
ionized gas. Additional observations are needed to address this question.
\section{Sightlines to Nearby Stars}
We now describe the placement of warm interstellar clouds and H~II
intercloud gas along the lines of sight (LOS) to nearby stars.
Figure~\ref{fig:InnerLISMfig} shows the gas components along the
sightlines to four nearby stars. The distance scales assume that
$n$(H~I)=0.2~cm$^{-3}$ \citep{Slavin2008} for all of the
partially ionized clouds. We
require that $\alpha$~Cen, $\epsilon$~Eri, and other stars with
detected astrospheric absorption be surrounded by
partially ionized hydrogen clouds. With these constraints, we propose
that the sightlines to nearby stars have the following structures:
\begin{itemize}
\item {\bf The LOS to Sirius:} LIC and Blue are the two detected
partially ionized clouds in this LOS. We propose that the
LIC extends from the outer heliosphere to 0.25~pc in the direction of
Sirius, followed by the Blue cloud for a distance of 0.25~pc, and then
H~II gas photoionized by Sirius~B
and $\epsilon$~CMa filling the remaining 2.14~pc to Sirius.
\item {\bf The LOS to $\alpha$~Cen:} Only the G cloud absorption is detected
extending over a distance of 0.70~pc if we assume that
$n$(H~I) in the G cloud is the same
as in the LIC. The remaining 0.62~pc would be filled with
H~II gas from Sirius~B and $\epsilon$~CMa.
Since $\alpha$~Cen has an astrosphere \citep{Wood2002}, it must be
surrounded by gas containing neutral hydrogen, which we assume is the
G cloud. However, heliospheric hydrogen wall absorption is detected
in the direction of $\alpha$~Cen \citep{Wood2005},
implying that the outer heliosphere must be in contact with neutral
hydrogen gas in this direction.
One explanation is to assume that a very
thin layer of the LIC provides the neutral
hydrogen needed to create the hydrogen wall in this direction but with
a hydrogen column density too small to be detected.
Another explanation is to extend the G
cloud all of the way to the outer heliosphere. This
can be accomplished by reducing the assumed $n$(H~I) in the G cloud to
0.11~cm$^{-3}$, half of the value in the LIC, but consistent with
the variability and
precision of $n_e$ measurements in the LISM \citep{Redfield2008b} and
with the measured $\log N$(H~I) = 17.6 to $\alpha$~Cen.
Figure~\ref{fig:interstellar_clouds_5} shows this model.
\item {\bf The LOS to $\epsilon$ Eri:} Spectra of $\epsilon$ Eri show
absorption only at the LIC velocity, but the star has an astrosphere
\citep{Wood2002}
and, therefore, must be located in a presently unknown cloud
containing neutral hydrogen. The LIC fills
1.10~pc along this LOS leaving
2.12~pc to be filled with ionized gas. Since Sirius is located 4.88~pc from
$\epsilon$~Eri
and 3.40~pc from the midpoint of the LOS from the Sun to $\epsilon$~Eri,
we do not expect that the H~II gas produced by Sirius~B can fill the
missing 2.12~pc along the Sun-$\epsilon$~Eri LOS. Another hot white dwarf,
40~Eri~B, is located
6.11~pc from $\epsilon$~Eri, but it is unlikely that its H~II region gas
comes close to the Sun-$\epsilon$~Eri LOS. Instead, $\epsilon$ CMa
is the likely source for this ionized gas.
The gas pressure of the
LIC is $(n_{HI}+n_e+n_p+n_{He})T=2710$~K~cm$^{-3}$, where
$n_{HI}$=0.2 cm$^{-3}$,
$n_e=n_p=0.07\pm 0.002$ cm$^{-3}$ \citep{Slavin2008},
$n_{He}/n_{H}=0.1$, and $T=7500$~K
\citep{Redfield2008} (see the pressure check following this list).
If there is gas pressure balance between the LIC and the H~II gas,
then the temperature of the ionized gas
along this LOS would be about 40,000~K.
\item {\bf The LOS to Procyon:} Absorption by the LIC and, beyond it, the
Aur cloud leaves 1.25~pc of
path length to be filled with ionized gas. The morphology of the Aur
and other nearby clouds not in contact with the heliosphere will be
the subject of a forthcoming paper. Ionizing radiation from Sirius~B
is the likely source of this ionized gas, since the separation of Sirius and
Procyon is only 1.61~pc. This would be consistent with the gas
temperature of about 7,500~K if there is gas-pressure balance between
the extended H~II region and the LIC.
\item {\bf The LOS to $\pi^1$~UMa, V368~Cep, MN~UMa, $\delta$~Dra, 47~Cas, and
$\iota$~Cep:} Spectra of these stars centered near l=130$^{\circ}$,
b=+30$^{\circ}$ with distances 14.4--35.4~pc all show interstellar
absorption only at the LIC velocity with no evidence for any other
neutral gas in the LOS. Since the LIC lies in the immediate vicinity of the
Sun, the remainder of these lines of sight must be ionized gas. In
addition to $\epsilon$~CMa,
GJ3753 (14.1~pc) and especially the hot white dwarf G191-B2B (59.9~pc) may be
responsible for much of the ionizing radiation from this general direction.
\item {\bf The LOS to 61~Vir, $\beta$~Com, $\tau$~Boo, and
$\chi$~Her:} Spectra of
these stars, all located at high Galactic latitudes (b$>44^{\circ}$),
show interstellar absorption only by the NGP cloud with
no evidence for absorption by the LIC or any other cloud. Since the closest
star, 61 Vir (d=8.53~pc), likely has a detected astrosphere \citep{Wood2005},
the NGP cloud must be located near to and in front of this star and perhaps
the other stars. This leaves about 6.8~pc of the LOS to
61 Vir and similar path lengths toward the other stars to be filled
with ionized gas. The high Galactic latitude white dwarfs especially HZ~43
but also GJ3753, GJ433.1, and UZ~Sex could provide part of this
ionizing radiation in addition to that provided by
$\epsilon$~CMa and $\beta$~CMa.
\end{itemize}
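As noted in the $\epsilon$~Eri item above, the quoted LIC gas pressure
follows from simple arithmetic. In the sketch below we assume that
$n_{He}/n_H=0.1$ refers to the total (neutral plus ionized) hydrogen
density; scaling $n_{He}$ to the neutral hydrogen alone gives
2700~K~cm$^{-3}$ instead:
\begin{verbatim}
# LIC thermal pressure P/k = (n_HI + n_e + n_p + n_He) * T, using the
# densities and temperature quoted in the epsilon Eri item above.
n_HI, n_e, n_p, T = 0.2, 0.07, 0.07, 7500.0   # cm^-3 and K
n_He = 0.1 * (n_HI + n_p)      # assumes n_He scales with total H
print((n_HI + n_e + n_p + n_He) * T)   # ~2750 K cm^-3 (text: 2710)
\end{verbatim}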
Figure~\ref{fig:interstellar_clouds_5} is a schematic representation
of the four partially ionized clouds that are in contact with the
outer heliosphere as seen from the North Galactic pole.
The figure shows the sight lines
to 5 stars projected onto a plane parallel to the Galactic equator,
and the length of
each cloud located along each line of sight. Red shading indicates the
Str\"omgren shells produced by EUV radiation from $\epsilon$~CMa.
The direction of inflowing interstellar gas as seen from the Sun is
at $b\approx 20^{\circ}$ where the LIC, Aql, and G clouds may be in contact.
\section{Could the LISM clouds in contact with the heliosphere be the
source of $^{60}$Fe accretion?}
With a half-life of 2.6 Myr, the radioisotope $^{60}$Fe is
produced during the very late evolution of massive stars and then ejected by
supernovae into the interstellar medium. The presence of this isotope in
deep ocean samples such as ferro-manganese crusts, nodules, and sediments
\citep{Knie1999,Wallner2016} and in the lunar regolith
\citep{Fimiani2016} indicates that supernovae have occurred within the
last few Myr in the solar vicinity.
\cite{Wallner2016} found evidence for enhanced
$^{60}$Fe accretion during the time intervals 1.5--3.2 Myr and 6.5--8.7 Myr
ago but no evidence for $^{60}$Fe accretion above background
either more recently
or outside of these two time intervals. They concluded that the $^{60}$Fe
detected during these two time intervals was produced by one or more supernovae
occurring at these times.
The most likely location for these supernovae would be in the closest
association of massive stars, the Scorpius-Centaurus Association
at a distance of 100--150 pc. Within the Sco-Cen Association,
the youngest
star forming region where the recent supernova likely occurred is the
Upper Scorpius region centered at $l=352^{\circ}$ and
$b=-15^{\circ}$. \cite{Fry2015} concluded that the most likely
explanation for the event timed at 2.2 Myr ago was the ejection
of $^{60}$Fe in the debris of an electron-capture supernova
at a distance of about 100~pc
and the subsequent condensation of the $^{60}$Fe onto large
($>0.2$~$\mu$m) interstellar grains. This could explain the
amount of $^{60}$Fe that
arrived at Earth after traversing the interstellar medium and heliosphere.
Very recently, \cite{Koll2019} identified $^{60}$Fe in dust grains embedded
in Antarctic snow. After careful analysis, they concluded that the
$^{60}$Fe could not be explained by terrestrial nuclear explosions or by
cosmogenic sources, but instead must have a supernova origin. Unlike
the ocean core samples that were built up a long time ago,
the Antarctic snow sample is a very recent accumulation over the last 20 years.
The time scale for supernova debris to travel a distance of 100~pc
through the interstellar medium would be
about 200,000 yr \citep{Fry2015}, which is much shorter than both the
half-life of $^{60}$Fe and the time
since the most recent supernovae; direct transport thus cannot explain
the very recent accretion of $^{60}$Fe.
Since iron is singly ionized in interstellar gas, iron ions flow
around rather than penetrating the heliosphere.
To reach the inner solar system, $^{60}$Fe must, therefore, be included
in interstellar grains. The large depletion of iron from the gas phase
of LISM clouds requires that most of the iron is resident in dust
grains that are likely olivine silicates \citep{Redfield2008,Frisch2011}.
{\em In situ} measurements of interstellar dust by
experiments on {\em Ulysses} and other spacecraft sample dust
grains with sizes larger than about 0.3 $\mu$m, because solar radiation
pressure and heliospheric magnetic fields filter out most of the
smaller grains \citep{Mann2010,Kruger2019}.
Larger grains are expected to reach the inner
solar system without significant changes in direction or
speed. In their analysis of data from the {\em Ulysses} impact
ionization dust detectors, \cite{Strub2015} found that the
speed ($\approx 26$~km~s$^{-1}$) of large dust grains (sizes greater than 0.5
$\mu$m) and their flow
toward ecliptic longitude $\lambda=75^{\circ}\pm
30^{\circ}$ and latitude $\beta=-13^{\circ}\pm 4^{\circ}$
are similar to those of neutral helium gas in the LIC and other nearby clouds.
The large uncertainty in $\lambda$ precludes identification of the dust
flow with the helium gas flow of a specific cloud, but the data are
consistent with the dust flowing with the gas in the LIC, G, or other
nearby clouds. Interstellar grains with sizes less than 1$\mu$m
should be well coupled to the gas in warm clouds as the Larmor radius
for electrically charged grains is $<1$~pc for an interstellar magnetic
field of 5 $\mu$G \citep{Grun2000}.
We propose two possible explanations for the recent arrival of
interstellar dust grains containing $^{60}$Fe in
Antarctic snow. One is that the dust grains containing $^{60}$Fe are
resident in the warm LISM clouds and
enter the heliosphere from one or all of the four clouds that are
in contact with the outer heliosphere. The density of dust grains in
the ionized intercloud medium should be much less than in the warm
clouds because strong UV radiation and shocks can destroy
grains and the low gas density means slower grain formation.
Since the Sun is moving
at a speed of 26.3~pc per million years through the cluster of local
clouds, it entered the local cluster
about 200,000 years ago and will leave in about the same
time assuming that the warm clouds extend about 5~pc in all directions. This
is a rough estimate, but it gives a time scale for the input of dust
grains containing $^{60}$Fe from a recent supernova.
This scenario predicts continuous low-level accretion of
$^{60}$Fe-containing grains from warm clouds only
when they are in contact with
the heliosphere. This scenario can be tested by searching for $^{60}$Fe
deeper in snow and ice fields going back to more than 200,000 years.
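The $\sim$200,000~yr residence time is straightforward kinematics, as the
following sketch shows (the $\sim$5~pc cloud extent is the rough value
assumed above):
\begin{verbatim}
# Time for the Sun (26.3 pc/Myr, i.e. ~26 km/s) to cross the ~5 pc
# extent assumed for the local cluster of warm clouds.
v_pc_per_Myr = 26.3
extent_pc = 5.0
print(extent_pc / v_pc_per_Myr * 1.0e6)   # ~190,000 yr
\end{verbatim}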
An alternative explanation is that a large number of
dust grains entered the heliosphere during the year 2005 when the
measured flux of dust increased by a factor of 3 and the
inflow abruptly changed direction by $50^{\circ}\pm7^{\circ}$ \citep{Strub2015}.
This event might have been able to increase the terrestrial accretion
rate to a level just
above background and thus appear as a one-time occurrence. A possible
scenario would be a change in which cloud
is feeding dust grains into the heliosphere. The absence of detected
$^{60}$Fe dust grains above background deeper in snow and ice fields would be
consistent with the recent detection being due to a one-time
event.
New measurements are clearly needed to test between these two scenarios.
As suggested by \cite{Koll2019} and by \cite{Fry2015},
future measurements
of the $^{60}$Fe dust particles located deeper in Antarctic snow/ice fields
could provide a historical record of the Sun's motion through the LIC and other
clouds in the LISM.
\section{Conclusions and Further Work}
Observations and analysis of 62 sightlines with interstellar velocity
components consistent with the Local Interstellar Cloud
vector permitted us to compute a three-dimensional model of the
LIC. This model extends from the heliosphere about 2~pc toward
Galactic longitude $l=135^{\circ}$ and
latitude $b=+20^{\circ}$, but has essentially zero extent in the opposite direction
($l=315^{\circ}$, $b=-20^{\circ}$). This peculiar shape, which has
been identified
in previous studies, highlights the question of whether
the heliosphere is located inside or outside of the LIC. To better
understand this question, we analyzed spectroscopic data and obtained
the following results:
(1) As seen from the geometric center of the LIC, the distance to its edge
is less than 0.5~pc within a wide solid angle defined by
$225^{\circ}\leq l \leq 290^{\circ}$ and
$-60^{\circ}\leq b \leq+10^{\circ}$. We call this region of minimal neutral
hydrogen column density the ``hydrogen hole''. Inside of the hydrogen
hole are sight lines to the strongest source of EUV radiation
($\epsilon$~CMa), the second strongest source ($\beta$~CMa), and the
nearby hot white dwarf Sirius~B. Photoionization of neutral hydrogen
by the strong EUV radiation from these stars is
the most likely cause of the hydrogen hole.
(2) Inside of the hydrogen hole, sight lines to eight stars
show interstellar absorption by gas at the Blue cloud's radial
velocity but not at the predicted LIC velocity. The outline of the Blue cloud
overlaps that of the
hydrogen hole, indicating that the Blue cloud and the hydrogen hole are
probably physically associated. We propose that the outer
edge of the Blue cloud is a Str\"omgren
shell being pushed against the outer heliosphere by more highly ionized gas
behind it. The outer layers of other clouds facing
$\epsilon$~CMa are also Str\"omgren shells if they are not shielded by
other clouds. The presence of Si~III and likely C~IV absorption in
the sight lines to $\epsilon$~CMa and $\beta$~CMa supports the argument
that these sight lines pass through Str\"omgren shells.
The radial velocity and higher gas pressure of the
Blue cloud are consistent with compression. Since the flow vector of
interstellar gas inside of the hydrogen hole differs substantially
from that of the main body of the LIC, the heliosphere lies outside of the
LIC in this direction.
(3) The vector of interstellar gas flowing into the heliosphere as
measured by the {\em IBEX, Ulysses}, and {\em Stereo} spacecraft
differs from that inferred from interstellar absorption lines
representing the flow of LIC gas far from the heliosphere
at a distance of roughly 1~pc.
This difference of 2--3 km~s$^{-1}$ in speed, together with the slightly
different direction, should be tested by precise new measurements.
We conclude that the inflow and LIC vectors are different and
propose that this difference indicates
that the heliosphere is now passing through a region different from the
main body of the LIC. The heliosphere could be inside the outer edge of
the LIC where the flow is modified by EUV radiation.
Additional observations are needed to address this question.
(4) We propose a model for the LISM immediately outside of the heliosphere.
The presence of heliospheric hydrogen wall absorption
in all directions requires that the outer heliosphere be in contact
with and be surrounded by interstellar gas containing a significant
amount of neutral hydrogen. In the hydrogen hole region, the Blue
cloud is in direct contact with the outer heliosphere.
At the edge of the hydrogen hole, the
LIC is in contact with the outer heliosphere with the Blue cloud
lying immediately outside of the LIC.
Away from the hydrogen hole toward higher Galactic longitudes, the
Aql cloud is in direct contact with the outer
heliosphere. In the direction of $\alpha$~Cen, there must be
partially ionized gas in contact with both the heliosphere and the
astrosphere. We adopt a model in which the G cloud fills
this entire line of sight to the star with neutral hydrogen density
$n$(H~I)$\approx 0.11$~cm$^{-3}$, although a very thin layer of LIC
gas may be in contact with the heliosphere in this direction.
For $l=90^{\circ} - 235^{\circ}$,
the LIC is in contact with the outer heliosphere.
Our model with the heliosphere in direct
contact with four interstellar clouds
may result from the directionality of the EUV radiation from $\epsilon$~CMa.
The different kinematics of the partially ionized interstellar
gas clouds may result from whether a cloud receives direct ionizing
radiation or is
shielded by other clouds now and in the recent past. The complex
magnetic field surrounding the heliosphere may also play a role in
determining the shape and properties of these clouds.
(5) We describe the lines of sight to nearby stars in terms
of several partially ionized clouds and H~II gas which has been
photoionized by the EUV radiation from $\epsilon$~CMa and other stars.
The modest degree of ionization in the nearby intercloud gas and the strong EUV
radiation field suggest that the intercloud gas is
an irregularly shaped Str\"omgren sphere rather than a recombining
plasma following a supernova shock.
(6) Finally, we note that the heliosphere is leaving a region of
space where the LIC, G, Aql, and Blue clouds are located. We propose
that the very recent measurement in Antarctic snow of enhanced
$^{60}$Fe from the debris of a supernova could be explained by
the inflow of interstellar grains
containing $^{60}$Fe from the warm clouds in contact with the heliosphere
either continuously at a low level or during an unusual event.
Our models for the LIC morphology and the very local ISM are updates
of our previous studies \citep{Redfield2000,Redfield2008}.
In a subsequent paper,
we will present the results of a new three-dimensional LISM model
including data studied by \cite{Malamut2014} and more recently
observed sightlines. When
this is available, we will reexamine the extent to which stellar EUV
radiation may explain the properties of the intercloud gas between the
LISM clouds.
\acknowledgements
We acknowledge support through the NASA HST Grant GO-11568 from the
Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Inc. for NASA,
under contract NAS 5-26555.
Support for {\it HST} observing program \#11568 was provided by NASA
through a grant from the Space Telescope Science Institute. We thank
John Vallerga for a very thoughtful referee report,
Martin Barstow for computing the ionizing flux from Sirius~B, and
Steven Burrows for his graphics.
JLL thanks the Erwin Schr\"odinger International Institute for
Mathematics and Physics at the University of Vienna
for their hospitality and opportunity to learn
about nucleosynthetic isotope anomalies. Our
research has made use of NASA's Astrophysics Data System Bibliographic
Services and the SIMBAD database, operated at CDS, Strasbourg, France.
{\it Facilities:} {HST (GHRS, STIS), FUSE, EUVE, CHIPS, ULYSSES}
|
1,477,468,751,150 | arxiv |
\section{Introduction}
In the last few decades, the rapidly emerging field of quantum thermodynamics~\cite{binder2018,gemmer2004,deffner2019} has offered the prospect for unravelling fundamental laws of miniaturized quantum systems, including thermal devices such as quantum batteries~\cite{Alicki,PoliniPRL}, quantum thermal transistors~\cite{karl2016}, diodes~\cite{miranda2017} and quantum refrigerators~\cite{linden2010}. Recent progress in quantum technologies as well as in the ability of controlling quantum systems~\cite{raimond2001, bloch2008, haffner2008, pan2012} has fueled the experimental urge of constructing quantum thermal machines in order to see whether they outperform their classical counterparts. In this respect, it is also shown that quantum spin models, implementable in different physical substrates like cold atoms~\cite{duan2003, lewenstein2007}, trapped ions~\cite{mintert2001, porras2004, monroe2021}, and nuclear magnetic resonance systems~\cite{zhang2005, rao2013, rao2014}, can serve as important platforms to realize these quantum thermal machines~\cite{modi2017,modi2018,srijon2020,srijon2021, hewgill2020,konar2021}.
The main task of a quantum refrigerator consisting of a few $d$-dimensional quantum mechanical subsystems coupled with local thermal baths is to decrease the temperature of a chosen subsystem in the steady state~\cite{Allahverdyan2004,linden2010,elias2011,allahverdyan2011,arisoy2021}. This has so far been achieved using different combinations of qubits and qutrits~\cite{linden2010,naseem2020,hewgill2020, khlifi2020,bhandari2021,konar2021}. The available proposals to date include devices in which cooling is either performed with the help of one or more external energy source(s)~\cite{elias2011}, or in a self-contained fashion~\cite{linden2010}. To obtain the minimum possible steady-state temperature, cooling assisted by various means, such as solar energy~\cite{wang2015}, rapid measurement~\cite{erez2008}, repeated collision~\cite{dechiara2018}, periodically modulated interactions~\cite{vasco2021}, and paradigmatic quantum spin Hamiltonians~\cite{hewgill2020,konar2021}, has been reported, and even the achievement of Carnot efficiency in a two-qubit setup via a reverse-coupling mechanism has been shown~\cite{silva2015}. Moreover, recent experimental realizations of refrigerators using trapped ions~\cite{maslennikov2019} and several experimental proposals employing superconducting qubits~\cite{hofer2016}, quantum dots~\cite{davide2013}, trapped ions~\cite{mitchison2016}, and optomechanical systems~\cite{mari2012} have made the implementation of a spin model-based fridge in laboratories a possibility.
As of now, most of the proposed and implemented quantum technologies typically involve two-dimensional systems due to (a) the relative ease of handling a single or a multiqubit system compared to a system involving qudits, and (b) the fact that a quantum system moves towards the classical limit with an increase in the spin quantum numbers of the constituent spins, thereby eventually losing its quantum characteristics. However, higher dimensional quantum systems are revealed to be advantageous over their lower dimensional counterparts in several quantum gadgets, including quantum key distribution~\cite{durt2003}, quantum switch~\cite{wei2019}, and quantum batteries~\cite{santos2019,dou2020,ghosh2021}, to name a few. While there have been a few attempts at constructing quantum refrigerators using constituent quantum systems with a Hilbert space dimension higher than that of a qubit or a qutrit~\cite{correa2014,wang2015,silva2016,usui2021}, to the best of our knowledge, realization of quantum refrigerators using quantum spin models constituted of particles with arbitrary spin-quantum number remains an unexplored area, which we address in this work.
Our design of the quantum refrigerator bears two distinct features. (a) We employ interacting quantum spin systems with nearest-neighbor interactions, namely the quantum $XYZ$ model~\cite{qptbook1} and the bilinear-biquadratic (BB) model~\cite{Sutherland1975, Takhtajan1982, Babu82, Fath91, Fath93}, consisting of two or three spins having spin quantum number $j$. (b) In order to quantify the performance of the refrigerator, we introduce a definition of local temperature for a spin-$j$ system that uses the minimum distance between the time-evolved state of the system and a canonical thermal state. We prove that the introduced measure reduces to the measure of local temperature based on the population of the ground state in the case of spin-$1/2$ systems, which is already available in the literature~\cite{linden2010}. We further demonstrate that the proposed definition of local temperature is independent of the choice of the distance measure, by considering the trace distance, the relative entropy distance \cite{nielsenchuang} and Uhlmann's fidelity~\cite{uhlmann1976}.
We derive the explicit forms of the Lindblad operators corresponding to subsystems with spin quantum number $j$, when the quantum master equation is constructed following a local approach. We show, by solving the local quantum master equation, that a two- and a three-spin system of identical spins with spin-$j$, governed by the $XYZ$-type or the bilinear-biquadratic interactions and connected to local bosonic thermal baths, can serve as a refrigerator for a chosen spin in the system. The steady-state values of both figures of merit, namely the proposed distance-based local temperature and the normalized von Neumann entropy of the cold spin, decrease with an increase in $j$, demonstrating the dimensional advantage. We also show that a pair consisting of a spin-$\frac{1}{2}$ and a spin-$j$ particle, in which the spin-$\frac{1}{2}$ particle is cooled, shows a stronger dimensional benefit than a system of two identical spin-$j$ particles under the local master equation. The dimensional improvement is found to persist even when one considers the global approach of constructing the quantum master equation.
The paper is organised as follows. The setup of the refrigerator with spin models, local thermal baths, and their interactions, as well as the derivation of Lindblad operators for local quantum master equations are described in Sec. \ref{sec:model}. In Sec. \ref{sec:definition_temperature}, we introduce the concept of quantifying local temperature using distance measures, and prove that it coincides with the population-based definition of local temperature for spin-$1/2$ systems. We also discuss the use of von Neumann entropy as an indicator for local temperature. The performance of the refrigerators constituted of two spins using these figures of merit is reported in Sec. \ref{sec:twospinR}. The analysis on the refrigerator with three spin-$j$ particles is carried out in Sec. \ref{sec:threespinR}, while we conclude in Sec. \ref{sec:conclu}.
\section{Design for Quantum Refrigerator}
\label{sec:model}
In this section, we briefly discuss the system-environment setup, a part of which acts as a quantum refrigerator for the rest under specific conditions on the system as well as the system-environment interaction parameters. For the local master equation, we also derive the Lindblad operators applicable in higher dimensions.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig1.pdf}
\caption{(Color Online.) \textbf{Schematic representation of a quantum refrigerator consisting of three spin-$j$ particles.} Each of the spins is coupled to its own non-interacting thermal bath, at temperatures $T_{1}, T_{2}$ and $T_{3}$ respectively. $J$ is the interaction strength between the spins.}
\label{fig:schematic}
\end{figure}
\subsection{Small spin clusters as system}
We use small clusters of particles with specific types of two-body interactions to design the refrigerator, where the individual particles can take half-integer as well as integer spins. The total Hamiltonian $H_{sys} = H_{loc} + H_{int}$ of the system consists of two parts -- (a) the local Hamiltonian $H_{loc}$ given by
\begin{eqnarray}
H_{loc}=\sum_{r=1}^{N}h_{r}S^{z}_{r},
\label{eq:h_local}
\end{eqnarray}
and (b) the interaction Hamiltonian $H_{int}$ governed by the spin-spin interactions. We particularly focus on two types of spin-spin interactions, namely, the nearest neighbor $XYZ$ interaction giving rise to the Hamiltonian~\cite{qptbook1}
\begin{eqnarray}
H_{xyz}&=&J\sum_{r=1}^{N}\left[(1+\gamma)S^{x}_{r}S^{x}_{r+1}+(1-\gamma)S^{y}_{r}S^{y}_{r+1}\right]\nonumber \\
&&+J\Delta\sum_{r=1}^NS_r^zS_{r+1}^z,
\label{eq:hint_xy}
\end{eqnarray}
and the bilinear-biquadratic Hamiltonian~\cite{Sutherland1975, Takhtajan1982, Babu82, Fath91, Fath93}
\begin{eqnarray}
\label{eq:hint_bb}
H_{B}(\phi)&=&J\cos \phi\sum_{r=1}^{N}\vec{S}_r\vdot\vec{S}_{r+1}+J\sin \phi\sum_{r=1}^{N}(\vec{S}_r\vdot\vec{S}_{r+1})^2,\nonumber \\
\end{eqnarray}
where we assume periodic boundary conditions unless otherwise mentioned.
Here, $S^{\nu}_{r}$ ($\nu = x, y, z$) are the $(2j + 1)$-dimensional spin matrices (for a spin-$j$ particle, $j=\frac{1}{2},1,\frac{3}{2},\cdots$) acting on the site $r$, and $N$ is the total number of particles in the system. The $r$th spin in the system is subject to a magnetic field of strength $h_{r}$ in the $z$-direction. When the interaction is of the $XYZ$ type, $J$ is the strength of the spin-spin interaction between the nearest-neighbor spins, while $\gamma$ and $\Delta$ represent the $xy$- and the $z$-anisotropy parameters respectively. When \(\Delta = 0\) and \(\gamma=0\), the Hamiltonian represents the \(XX\) model, while \(\gamma =0\) and \( \Delta \ne 0 \) give the $XXZ$ model. On the other hand, $J\cos \phi$ and $J\sin \phi$ are the interaction strengths for the linear and the quadratic terms in the BB Hamiltonian respectively, where the parameter $\phi$ governs the phases of the system in the absence of the local magnetic field~\cite{Sutherland1975, Takhtajan1982, Babu82, Fath91, Fath93}. With the aim of designing small quantum thermal machines and investigating the effect of a change in
the Hilbert space dimension of the system on the performance of the machine, we typically restrict the values of $N$ to be $N=2$ and $N=3$.
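For readers who wish to reproduce the setup numerically, the following minimal sketch (our illustration, not code from this paper; it assumes the QuTiP Python library, and all parameter values are placeholders) constructs $H_{sys}=H_{loc}+H_{int}$ for $N=2$ spin-$j$ particles with a single $XYZ$ bond:
\begin{verbatim}
# Minimal sketch (assumes QuTiP): H_loc + H_xyz for N = 2 spin-j particles.
from qutip import jmat, qeye, tensor

def xyz_hamiltonian(j, h=(1.1, 1.3), J=0.05, gamma=0.0, Delta=0.0):
    # spin-j matrices and identity on the (2j+1)-dimensional local space
    Sx, Sy, Sz = jmat(j, 'x'), jmat(j, 'y'), jmat(j, 'z')
    I = qeye(int(2 * j + 1))
    H_loc = h[0] * tensor(Sz, I) + h[1] * tensor(I, Sz)
    H_int = J * ((1 + gamma) * tensor(Sx, Sx)
                 + (1 - gamma) * tensor(Sy, Sy)
                 + Delta * tensor(Sz, Sz))
    return H_loc + H_int
\end{verbatim}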
Note that in Eqs.~(\ref{eq:hint_xy}) and (\ref{eq:hint_bb}), we assume all the particles in the interacting quantum spin model to have identical spin value $j$. However, in a more general situation, one may consider different spin values for different particles at different lattice sites. An example of such cases will be discussed in Sec.~\ref{subsec:mixed_spin} for $N=2$.
\subsection{Interaction with local bosonic baths}
\label{subsec:osd}
Consider a situation where each spin $r$ in the system is interacting with a local bath $B_{r}$, $r = 1,2, ..., N$ (see Fig.~\ref{fig:schematic} for an illustration with $N=3$), which is a collection of harmonic oscillators, described by the bath Hamiltonian
\begin{equation}
H_{B_r} = \int_{0}^{\omega_{c}} d\omega\, \omega\, a_{\omega}^{\dagger} a_{\omega},
\label{bath_eqn.}
\end{equation}
so that the total bath Hamiltonian for $N$ baths is given by $H_B=\sum_{r=1}^N H_{B_r}$. Here, $a_{\omega}$ ($a_{\omega}^{\dagger}$) is the annihilation (creation) operator of the mode $\omega$, such that $[a_{\omega} , a_{\tilde{\omega}}^{\dagger}] = \delta(\omega - \tilde{\omega})$, and $\omega_{c}$ is the cut-off frequency of the bath. We consider the absolute temperature of the bath $B_r$ to be $T_r^0$, and the baths are local in the sense that the bath $B_r$ affects only the spin $r$ in the entire system-bath setup. The Hamiltonian defining the interaction between the system and the baths, denoted by $H_{SB}$, reads
\begin{equation}
H_{SB}=\sum_{r=1}^{N}\sum_\omega\left (S_r^{+}\otimes a_{\omega}+S_r^{-}\otimes a_{\omega}^{\dagger}\right)
\label{H_sb},
\end{equation}
where $S_r^+$ ($S_r^-$) is the spin raising (lowering) operators of $r$th spin, given by $S_r^\pm=S^x_r\pm\text{i}S^y_r$.
We consider a scenario where at $t=0$, $H_{sys}=H_{loc}$, such that each spin is in thermal equilibrium with its respective bath and has an initial temperature equal to the bath temperature $T_r^0$, so that the initial ($t=0$) state of the $r$th spin is represented by a diagonal density matrix $\rho_r^0$. In the eigenbasis of $S_r^{z}$ having eigenvalues $j,j-1,\ldots,-j$, it takes the form
\begin{eqnarray}
\rho_r^0&=&\tau_{r}^{2j}(0)\dyad{2j}{2j}+\tau_{r}^{2j-1}(0)\dyad{2j-1}{2j-1}\nonumber\\ &&+\cdots+\tau_{r}^0(0)\dyad{0}{0},
\end{eqnarray}
where
\begin{eqnarray}
\tau_{r}^\mu(0)=\frac{\exp(-\mu\beta_r^0 h_r)}{\sum_{\nu=0}^{2j}\exp(-\nu\beta_r^0 h_r)}
\end{eqnarray}
with $\sum_{\mu=0}^{2j}\tau_{r}^\mu(0)=1$, such that the initial state of the $N$-spin system is $\rho(0) = \bigotimes_{r = 1}^{N} \rho_{r}^0$, and $\beta_r^0=1/k_BT_r^0$, \(k_B\) being the Boltzmann constant, which is set to \(1\). After the interaction Hamiltonian $H_{int}$ is turned on, so that $H_{sys}=H_{loc}+H_{int}$ at $t>0$, the time-dynamics of the system is governed by the quantum master equation (QME)~\cite{Petruccione}, given by
\begin{equation}
\dot\rho=-\text{i}[H_{sys},\rho]+\mathcal{L}(\rho),
\label{eq:qme}
\end{equation}
where $\mathcal{L}(.)$ represents the dissipator, emerging out of the spin-bath interactions. The solution of Eq.~(\ref{eq:qme}) provides the state, $\rho(t)$, of the system as a function of $t$.
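As a concrete illustration of this initial condition (our sketch, assuming QuTiP; not part of the paper), the local Gibbs state of a single spin-$j$ particle follows directly from $H_{loc}$, and $\rho(0)$ is the tensor product of such states:
\begin{verbatim}
# Gibbs state exp(-h S^z / T) / Z of one spin-j particle (k_B = 1).
from qutip import jmat

def thermal_spin_state(j, h, T):
    w = (-(h / T) * jmat(j, 'z')).expm()   # unnormalized Boltzmann weights
    return w / w.tr()                      # normalize: Tr(rho) = 1
\end{verbatim}
The dynamics of Eq.~(\ref{eq:qme}) can then be integrated numerically, e.g.\ with \texttt{qutip.mesolve}, once the dissipator is specified (see below).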
\subsection{Dissipators: local vs. global}
There exist two competing approaches to determine the dissipator in the QME (Eq.~(\ref{eq:qme})) -- (a) a global approach, where transitions between the eigenstates of the entire system represented by the Hamiltonian $H_{sys}$ are considered, and (b) a local treatment of the quantum master equation, considering only the transitions between the eigenstates of the individual subsystems labelled by $r=1,2,\cdots,N$. In the former, the dissipator $\mathcal{L}(\rho)=\sum_{r=1}^NL_r(\rho)$, with
\begin{widetext}
\begin{eqnarray}
L_r(\rho) &=& \sum_{\omega>0}\left[\gamma_r(\omega)\left(A_r(\omega)\rho A_r^\dagger(\omega)-\frac{1}{2}\left\{A_r^\dagger(\omega)A_r(\omega),\rho\right\}\right)\right.\nonumber\\
&&\left.+\,\gamma_r(-\omega)\left(A_r^\dagger(\omega)\rho A_r(\omega)-\frac{1}{2}\left\{A_r(\omega)A_r^\dagger(\omega),\rho\right\}\right)\right],
\label{lindblad_global}
\end{eqnarray}
\end{widetext}
where the operators $A_r(\omega)$ are the Lindblad operators corresponding to the $r$th spin for a transition amounting energy $\omega$ among the energy levels of the system, defined by the equation
\begin{eqnarray}
\text{e}^{\text{i}H_{sys}t}\left(S_r^++S_r^-\right)\text{e}^{-\text{i}H_{sys}t}=2\sum_{\omega}A_r(\omega)\text{e}^{-\text{i}\omega t}.
\end{eqnarray}
Explicit forms of $A_r(\omega)$ can be derived by decomposing the spin-part of the system-bath interaction Hamiltonian in the eigenbasis of $H_{sys}$, and may not always be analytically derivable in cases of complex Hamiltonians with a large number of spins. The transition rate, represented by $\gamma_r(\omega)$, corresponds to the jump through an energy gap $\omega$ for the spin $r$, and depends on the spectral function and the cut-off frequency of the bath. For Ohmic baths with the Bose-Einstein occupation $\kappa_r(\omega)=\left[\exp(\beta_r\omega)-1\right]^{-1}$ and a cut-off frequency $\omega_c$ that is the same across baths,
\begin{eqnarray}
\gamma_r(\omega) &=& f_r(\omega)[1+\kappa_r(\omega)],\text{ for }\omega\geq 0,\nonumber \\
\gamma_r(\omega) &=& f_r(|\omega|)\kappa_r(|\omega|), \text{ for }\omega<0,
\end{eqnarray}
with $f_r(\omega)=\alpha_r\omega\exp(-\omega/\omega_c)$, and $\alpha_r$ being a constant for the $r$th bath, representing the spin-bath interaction strength. Under Markovian approximation, $\max\{\alpha_r\}\ll 1$.
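In numerical practice the rates above reduce to a one-line function (a sketch with illustrative default parameters; $\omega=0$ never occurs here, since the only gaps are $\pm h_r$):
\begin{verbatim}
# Ohmic transition rates: gamma(w) = f(w)[1 + kappa(w)] for w > 0,
# gamma(w) = f(|w|) kappa(|w|) for w < 0, with f(w) = alpha w exp(-w/w_c).
import numpy as np

def gamma_rate(w, beta, alpha=1e-3, w_c=1e3):
    f = lambda x: alpha * x * np.exp(-x / w_c)
    kappa = lambda x: 1.0 / np.expm1(beta * x)   # Bose occupation number
    return f(w) * (1.0 + kappa(w)) if w > 0 else f(-w) * kappa(-w)
\end{verbatim}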
The explicit form of the Lindblad operators can, however, be determined if one takes a local approach for deriving the QME. Note that in the limit where the spin-spin and the spin-bath interaction strengths are so small that $H_{sys}\approx H_{loc}$ $\left(\gamma_r(\omega),J\ll h_r\right)$, one may calculate the Lindblad operators using the eigenstates of $H_{loc}$ only, which we present in the following for systems consisting of spin-$j$ particles.
\noindent\textbf{Lindblad operators for spin-$j$ local QME.} Let us demonstrate the explicit form of the Lindblad operators considering a system of two spin-$j$ particles ($r=1,2$), subject to magnetic fields of strength $h_1$ and $h_2$ in the $z$-direction. Eigenvalues of the local Hamiltonian (Eq.~(\ref{eq:h_local})) for this system can be written as $(-j+a)h_1+(-j+b)h_2$, corresponding to eigenstates $\ket{a}\otimes \ket{b}$, where $a,b\in[0,2j]$. Writing the system-bath interaction Hamiltonian as
\begin{equation}
H_{SB}=\sum_{r=1}^2\sum_{\omega}\left[S_r^x\otimes(a_\omega+a_\omega^\dagger)+S^y_r\otimes \text{i}(a_\omega-a_\omega^\dagger)\right],
\end{equation}
the Lindblad operators corresponding to the $r$th bath can be determined as~\cite{Petruccione}
\begin{equation}
A_r(\omega)=\sum_{\epsilon_q-\epsilon_p=\omega}\dyad{\epsilon_p}{\epsilon_p}S_r^x\dyad{\epsilon_q}{\epsilon_q},
\end{equation}
where $\epsilon_p$ ($\epsilon_q$) is the $p$th ($q$th) eigenstates of $H_{loc}$. Performing algebra for spin-$j$ particles corresponding to the first spin leads to
\begin{eqnarray}
A_1(\omega)&=&\sum_{\epsilon_q-\epsilon_p=\omega}\frac{1}{2}[\sqrt{j(j+1)-(a-j)(a-j+1)}\ket{a^\prime}\bra{a}\delta_{a^\prime,a+1}\nonumber\\
&&+\sqrt{j(j+1)-(a-j)(a-j-1)}\ket{a^\prime}\bra{a}\delta_{a^\prime,a-1}]\otimes\ket{b}\bra{b},\nonumber \\
\end{eqnarray}
where $A_2(\omega)$ can be determined by a similar calculation. Considering (a) only transitions with positive $\omega$, implying $\omega=h_1$, and noticing that (b) non-zero matrix elements of $A_1(\omega)$ require transitions between consecutive energy levels only, we have $a\in[1,2j]$, $b\in[0,2j]$, and $a^\prime=a-1$. Therefore, the desired Lindblad operator can be represented as
\begin{eqnarray}
\nonumber A_1(\omega)&=&\frac{1}{2}\left[\sum_{a=1}^{2j}\sqrt{j(j+1)-(a-j)(a-j-1)}\dyad{a-1}{a}\right]\nonumber\\
&&\otimes\left[\sum_{b=0}^{2j}\dyad{b}{b}\right]\nonumber\\
&=&\frac{1}{2}(S^{-}_1\otimes\mathbb{I}),
\end{eqnarray}
where $\mathbb{I}$ is the identity operator in the Hilbert space of a spin-$j$ particle. This calculation can be extended to a system of $N$ spin-$j$ particles also, where the Lindblad operators for the $r$th bath are given by
\begin{eqnarray}
A_r(\omega)&=&\frac{1}{2} \mathbb{I} \otimes \cdots S_r^- \otimes \ldots \mathbb{I},\nonumber\\ A_r^\dagger(\omega)&=&\frac{1}{2}\mathbb{I} \otimes \cdots S_r^+ \otimes \ldots \mathbb{I}. \nonumber\\
\label{eq:local_lindblad}
\end{eqnarray}
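In a Lindblad solver, these operators enter as collapse operators weighted by the square roots of the corresponding rates $\gamma_r(\pm h_r)$; a sketch assuming QuTiP (our illustration):
\begin{verbatim}
# Collapse operators sqrt(rate) * (1/2) S_r^- (decay) and
# sqrt(rate) * (1/2) S_r^+ (absorption), identity on all other sites.
from qutip import jmat, qeye, tensor

def local_collapse_ops(j, N, rates_down, rates_up):
    Sm, Sp, I = jmat(j, '-'), jmat(j, '+'), qeye(int(2 * j + 1))
    c_ops = []
    for r in range(N):
        down = [I] * N; down[r] = Sm
        up = [I] * N;   up[r] = Sp
        c_ops.append((rates_down[r] ** 0.5) * 0.5 * tensor(down))
        c_ops.append((rates_up[r] ** 0.5) * 0.5 * tensor(up))
    return c_ops
\end{verbatim}
Here \texttt{rates\_down[r]} and \texttt{rates\_up[r]} would be $\gamma_r(h_r)$ and $\gamma_r(-h_r)$ from the expressions above.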
In this paper, we consider both the local and the global approaches for constructing the Lindblad operators. However, it is important to note that unlike the global QME, the local QME may not always be appropriate for determining the non-equilibrium properties of the system due to a potential violation of the second law of thermodynamics~\cite{Wichterich2007}, and therefore should always be used carefully. Note also that in the case of the local approach, the time-evolution of the state of a system of identical spins depends on the initial state of the system, as presented in the following proposition.
\noindent\textbf{Proposition I}. \emph{For a system of identical spins, if
\begin{eqnarray}
\frac{h_r}{T_r^0}=\text{constant} \quad \forall\; r=1,2,\cdots,N,
\label{eq:no_evolution}
\end{eqnarray}
the system does not evolve with time as long as the dynamics is governed by a local quantum master equation.}
\begin{proof}
To prove it, we first note that $\rho(t+\delta t)$, with \(\delta t\) being the small increment in time, can be expanded as
\begin{eqnarray}
\rho(t+\delta t) &=& \rho(t)+\eval{\pdv{\rho}{t}}_t\delta t+ \mathcal{O}(\delta t^2).
\end{eqnarray}
Performing the expansion about $t=0$, and neglecting higher order terms, we obtain
\begin{eqnarray}
\rho(\delta t)&\approx&\rho(0)+\eval{\pdv{\rho}{t}}_{t=0}\delta t \nonumber \\ &&=\rho(0)+\left(-\text{i}[H_{sys},\rho(0)]+\mathcal{L}(\rho(0))\right)\delta t.
\end{eqnarray}
Since the interaction between the spins is absent at $t=0$, for dissipators constructed of Lindblad operators of the form given in Eq.~(\ref{eq:local_lindblad}), $\mathcal{L}(\rho(0))=0$. Also, at $t=0$, the condition in Eq.~(\ref{eq:no_evolution}) suggests identical initial states for all spins in the system, implying $[H_{sys},\rho(0)]=0$, leading to $\eval{\pdv{\rho}{t}}_{t=0}=0$. This argument can be repeated for an arbitrarily small time increment $\delta t$ such that $t=n\delta t$, $n$ being an integer, giving $\rho(t)=\rho(n\delta t)=\rho(0)$. Therefore, the state of the system does not evolve in time. Hence the proof.
\end{proof}
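Proposition I is easy to verify numerically; the following check (our illustration, reusing the sketches above, with placeholder parameters satisfying $h_1/T_1^0=h_2/T_2^0$) should return a norm close to zero:
\begin{verbatim}
# Numerical check of Proposition I for two spin-1/2 particles (XX point).
import numpy as np
from qutip import mesolve, tensor

j, h, T0 = 0.5, (1.0, 2.0), (1.0, 2.0)        # h_r / T_r^0 = 1 for both
H = xyz_hamiltonian(j, h=h, J=0.05)           # gamma = Delta = 0
rho0 = tensor(thermal_spin_state(j, h[0], T0[0]),
              thermal_spin_state(j, h[1], T0[1]))
rd = [gamma_rate(h[r], 1.0 / T0[r]) for r in range(2)]
ru = [gamma_rate(-h[r], 1.0 / T0[r]) for r in range(2)]
out = mesolve(H, rho0, np.linspace(0.0, 50.0, 201),
              c_ops=local_collapse_ops(j, 2, rd, ru))
print((out.states[-1] - rho0).norm())         # ~ 0: the state is stationary
\end{verbatim}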
\section{Quantifying local temperature in higher dimension}
\label{sec:definition_temperature}
To assess the performance of the quantum refrigerator described in Sec.~\ref{sec:model}, we propose a definition of local temperature which remains valid for arbitrary spins. In this paper, we shall focus on the scenario where one aims to cool a chosen spin in the spin-bath setup during the dynamics. The $r$th spin in the system is said to achieve a \emph{local steady-state cooling} by virtue of the dynamics of the system if and only if $T_r^0>T_r^s$, where $T_r^s=T_r(t\rightarrow\infty)$ is the local steady-state temperature of the $r$th spin. Note that the chosen spin-bath interaction (see Sec.~\ref{subsec:osd}) ensures a diagonal reduced density matrix,
\begin{eqnarray}
\rho_r(t)=\text{Tr}_{\{r^\prime\,:\,r^\prime\neq r\}}\left[\rho(t)\right],
\end{eqnarray}
for the $r$th spin. In the case of a system having a spin-$\frac{1}{2}$ particle at the $r$th site, we know that $\rho_r(t)$ takes the form
\begin{eqnarray}
\rho_r(t)&=&\tau_{r}^0(t)\dyad{0}{0}+\tau_{r}^1(t)\dyad{1}{1},
\label{eq:local_state_spin-half}
\end{eqnarray}
where $\tau_r^0(t)$ $(\tau_r^1(t))$ can be identified as the time-dependent population of the state $\ket{0}$ ($\ket{1}$), and can be used to define a \emph{population-based local temperature} (PLT) for the $r$th spin as a function of time, given by
\begin{eqnarray}
T_r(t)=\frac{h_r}{\ln\left[{\tau_r^1(t)}^{-1}-1\right]}.
\end{eqnarray}
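Numerically, the PLT is a one-line function of the excited-level population (our sketch; the input $\tau_r^1(t)$ is read off from the diagonal of the reduced density matrix):
\begin{verbatim}
# Population-based local temperature T = h / ln(1/tau1 - 1) of a spin-1/2,
# with tau1 the population of the excited level |1>.
import numpy as np

def plt_spin_half(tau1, h):
    return h / np.log(1.0 / tau1 - 1.0)
\end{verbatim}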
While this protocol for defining the local temperature of a spin-$\frac{1}{2}$ particle is well-established~\cite{linden2010}, the definition of a local temperature remains a scarcely explored area with little literature~\cite{burke2021} (cf. \cite{silva2016, usui2021}), when the subsystems have Hilbert space dimension larger than $2$ (e.g., spin-$j$ particles). In the latter situation, the local density matrix of the $r$th spin, having the form
\begin{eqnarray}
\rho_r(t)&=&\tau_{r}^{2j}(t)\dyad{2j}{2j}+\tau_{r}^{{2j-1}}(t)\dyad{2j-1}{2j-1}\nonumber\\ &&+\cdots+\tau_{r}^{{0}}(t)\dyad{0}{0},
\label{eq:local_state_time_dependent}
\end{eqnarray}
depends on a total of $2j$ parameters, and hence defining a \emph{unique} local temperature for the $r$th spin-$j$ particle following the protocol for spin-$\frac{1}{2}$ particles~\cite{linden2010} is not possible. In this regard, we put forward a distance-based quantifier for local temperature in the subsequent subsections, and justify its importance in the context of investigating the performance of a quantum refrigerator constructed out of quantum spin models. In this way, a set of definitions for local temperature emerges depending on the choice of valid distance measures, although we show that they behave qualitatively in a similar fashion.
\subsection{Estimating local temperature using distance measures}
\label{subsec:local_cooling}
Let us consider an arbitrary canonical thermal state of the $r$th spin-$j$ particle in the system, having an absolute temperature $T_r^\prime$. The canonical thermal state is given by
\begin{equation}
\tilde{\rho}_r=\frac{\exp(-\beta^\prime_r h_r S^z_r)}{\text{Tr}\left[\exp(-\beta^{\prime}_r h_r S^z_r)\right]},
\label{trial}
\end{equation}
where $\beta^\prime_r=1/k_BT^\prime_r$ is the inverse temperature, and $h_{r}$ is the strength of the external magnetic field of the $r$th spin. We define the \emph{distance-based local temperature} (DLT), $T_r^D(t)$, for the $r$th spin described by the state $\rho_r(t)$ obtained via the dynamics (see Eq.~(\ref{eq:local_state_time_dependent})) as
\begin{eqnarray}
T_r^D(t)=\underset{T_r^{\prime}}{\operatorname{arg\,min}}\, D(\tilde{\rho}_r,\rho_r(t)),
\label{eq:loc_temp}
\end{eqnarray}
where $D(\tilde{\sigma},\sigma)$ is an appropriate distance measure between the density matrices $\tilde{\sigma}$ and $\sigma$. There exists a number of distance measures in the literature, including the trace distance~\cite{nielsenchuang}, the Hilbert-Schmidt distance~\cite{Ozawa2000}, Uhlmann fidelity~\cite{uhlmann1976, jozsa1994}, and the relative entropy distance~\cite{mendon2008} to name a few, which can be used to quantify the local temperature, and the use of a particular measure may depend on specific situations. In the following sections, we shall compare the performances of different distance measures in the context of faithfully quantifying the local temperature of a spin-$j$ system. In order to justify the importance of such a definition of local temperature, we present the following proposition.
\noindent\textbf{Proposition II.} \emph{For a spin-$\frac{1}{2}$ particle, the distance-based local temperature is equivalent to the population-based local temperature at all times, when the trace distance is chosen as the distance measure.}
\begin{proof}
Let us define
\begin{eqnarray}
y_r=\frac{\exp(-\beta^\prime_rh_r/2)}{\exp(-\beta^\prime_rh_r/2)+\exp(\beta^\prime_rh_r/2)},
\end{eqnarray}
and write $D(\tilde{\rho}_r,\rho_r(t))$, at an arbitrarily fixed time instant $t$, as a function of $y_r$ as
\begin{eqnarray}
D(y_r)&=& \frac{1}{2}\text{Tr}\sqrt{(\tilde{\rho}_r-\rho_r(t))^\dagger(\tilde{\rho}_r-\rho_r(t))} \nonumber \\
&=&\frac{1}{2}\left(\left|y_r-\tau_r^1(t)\right|+\left|(1-y_r)-\tau_r^0(t)\right|\right),
\end{eqnarray}
which, using $\tau_{r}^0(t)=1-\tau_r^1(t)$, becomes
\begin{eqnarray}
D(y_r)&=&\frac{1}{2}\left(\left|y_r-\tau_r^1(t)\right|+\left|\tau_r^1(t)-y_r\right|\right)=\left|y_r-\tau_r^1(t)\right|.
\end{eqnarray}
Since $D(y_r)\geq 0$ by virtue of being a distance measure, and $D(y_r)=0$ for $y_r=\tau_r^1(t)$, the DLT is obtained from the equation $y_r=\tau_r^1(t)$ by solving for $T_r^\prime$ as
\begin{eqnarray}
T_r^\prime=\frac{h_r}{\ln\left[{\tau_r^1(t)}^{-1}-1\right]}.
\label{eq:dlt-plt}
\end{eqnarray}
Since Eq.~(\ref{eq:dlt-plt}) holds for an arbitrary $t$, the DLT is equivalent to the PLT at all times. Hence the proof. \end{proof}
In the subsequent sections, we shall discuss the steady-state cooling of a spin in the system using DLT as a quantifier for cooling, where for brevity, we denote the steady-state DLT as $T_r^{s}=T_{r}^D(t\rightarrow\infty)$.
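In practice, the minimization in Eq.~(\ref{eq:loc_temp}) is a one-dimensional optimization over the trial temperature; a sketch assuming QuTiP and SciPy (our illustration; the search bounds are placeholders that should bracket the expected temperature):
\begin{verbatim}
# Distance-based local temperature: the T' minimizing the trace distance
# between the trial thermal state of Eq. (trial) and the reduced state.
from qutip import jmat, tracedist
from scipy.optimize import minimize_scalar

def dlt(rho_r, j, h, T_bounds=(1e-3, 10.0)):
    def dist(T):
        w = (-(h / T) * jmat(j, 'z')).expm()
        return tracedist(w / w.tr(), rho_r)
    return minimize_scalar(dist, bounds=T_bounds, method='bounded').x
\end{verbatim}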
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{(Color online.) \textbf{Variation of (a) heat current, (b) steady-state temperature and (c) normalized von Neumann entropy (vertical axis) as functions of the spin quantum number of the subsystems, \(j\) (horizontal axis).} The refrigerator consists of two identical spins which interact according to the $XX$ Hamiltonian given in Eq. (\ref{eq:hint_xy}) with \(\gamma =0\) and \(J \Delta =0\). We compute these quantities by solving the local QME. Circles, squares and triangles represent different interaction strengths, namely $J = 0.02$, $J = 0.05$ and $J = 0.09$ respectively. The local external magnetic fields of the first and the second spins are $h_1=1.1$ and $h_2=1.3$ respectively, while the corresponding initial temperatures are $T_1(0)=1$ and $T_2(0)=1.1$. Here the spin-bath interaction is chosen as $\Gamma=0.05$. The dimensional advantage according to the figures of merit for the refrigerator is clearly visible. All the axes are dimensionless.}
\label{fig:xyz_two_spin_local}
\end{figure*}
\subsection{Entropy-based estimation of local temperature}
In situations where a spin-$j$ subsystem of a quantum spin model in a system-bath setup described in Sec.~\ref{sec:model} attains a local steady-state cooling, the entropy of the subsystem in the steady state should be lower than the initial entropy of the subsystem at $t=0$, providing a signature of the cooling phenomenon. In order to carry out a quantitative investigation, we define an entropy-based estimated temperature, quantified by the normalized entropy for the steady state, as
\begin{equation}
S_N^s = \frac{S(\rho_{r}(t \rightarrow \infty))}{S(\rho_{r}(0))},
\label{normalized_entropy}
\end{equation}
where $S(\rho)=-\text{Tr}(\rho\log_2\rho)$ is the von Neumann entropy for the density matrix $\rho$. A local steady-state cooling of the $r$th spin is indicated by $S_N^s <1$, while $S_N^s >1$ implies heating. The qualitative variations of $S_N^s$ as functions of the relevant system parameters as well as with increasing dimension of the Hilbert spaces of the subsystems are similar to those for the DLT, as we shall demonstrate in the subsequent sections.
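For completeness, a minimal numerical counterpart (our sketch, assuming QuTiP):
\begin{verbatim}
# Normalized steady-state entropy of Eq. (normalized_entropy):
# values < 1 signal cooling of the chosen spin, values > 1 heating.
from qutip import entropy_vn

def normalized_entropy(rho_r_ss, rho_r_0):
    return entropy_vn(rho_r_ss, 2) / entropy_vn(rho_r_0, 2)
\end{verbatim}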
\subsection{Local heat current}
An important quantity, providing the indication as to whether an $N$-spin system is operating as a refrigerator for the $r$th spin, is the local heat current at the steady state, defined as~\cite{Petruccione}
\begin{eqnarray}
\dot{Q}_r=\text{Tr}\left[H_{sys}\mathcal{L}_r(\rho^s)\right],
\end{eqnarray}
where $\rho^s$ is the steady state $\rho(t\rightarrow\infty)$ of the entire system. A positive value of $\dot{Q}_r$ represents a situation where heat flows from the bath $B_r$ to the $r$th spin in the steady state, which is at a lower temperature $T_r^s<T_r^0$ if a steady-state cooling has been achieved. The value of $\dot{Q}_r$, therefore, is expected to be positive in accordance with a cooling indicated by $T_r^{s}$ and $S_N^s$. Note, however, that the definition of $H_{sys}$ may vary depending on the choice of a local or a global approach to define the QME, and an inappropriate choice of the QME may lead to anomalous values of $\dot{Q}_r$, although the steady-state cooling for the $r$th spin is indicated by the values of $T_r^{s}$ and $S_N^s$~\cite{ghoshal2021,konar2021}. We shall elaborate on this in the subsequent sections as we discuss specific constructions of small refrigerators on a case-by-case basis.
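Given the steady state and the collapse operators of bath $r$, the heat current can be evaluated directly from the Lindblad form of $\mathcal{L}_r$ (our sketch; the Hamiltonian argument is whichever the chosen local or global convention prescribes):
\begin{verbatim}
# Q_r = Tr[ H L_r(rho_ss) ], with L_r assembled from bath r's collapse ops.
def heat_current(H, c_ops_r, rho_ss):
    q = 0.0
    for c in c_ops_r:
        Lrho = (c * rho_ss * c.dag()
                - 0.5 * (c.dag() * c * rho_ss + rho_ss * c.dag() * c))
        q += (H * Lrho).tr().real
    return q
\end{verbatim}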
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig3.pdf}
\caption{(Color online) Trends of (a) heat current and (b) local temperature in the steady state (vertical axis) against the interaction strength $J$ for the BB Hamiltonian (horizontal axis).
Hollow squares with solid lines, solid squares with dashed lines and solid squares with solid lines represent \(j=1\), \(3/2\) and \(2\) respectively, while blue, orange and yellow denote the phases \(\phi =\pi/6\), \(\pi/3\) and \(2 \pi/3\) respectively.
All other specifications are the same as in Fig. \ref{fig:xyz_two_spin_local}. All the axes are dimensionless.
}
\label{fig:bbh_two_spin_local}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig4.pdf}
\caption{(Color online.) Scatter plot of the steady-state temperature, shown only when cooling occurs, in the $(J, \Delta h = h_2 - h_1)$-plane. Squares, circles and triangles represent $j=1/2$, $1$ and $3/2$ respectively.
Other specifications are the same as in Fig. \ref{fig:xyz_two_spin_local}. Among $7 \times 10^3$ choices of parameters of the refrigerator based on the $XX$ model, $2.7\%$, $14.3\%$ and $23.82\%$ of situations are found to exhibit cooling in the steady state with $j=1/2$, $j=1$ and $j=3/2$ respectively. Both the axes are dimensionless.}
\label{fig:scatter}
\end{figure}
\section{Two-spin quantum refrigerators}
\label{sec:twospinR}
We now discuss the performance of quantum refrigerators built with two spins, where one of the spins is cooled and the other spin, along with the baths, constitute the refrigerator. Unless otherwise mentioned, in the rest of the paper, we always choose the first spin, i.e., $r=1$ to be the target spin for cooling.
\subsection{System of two identical spins}
\label{subsec:like_spins}
Let us consider two identical interacting spin-$j$ particles constituting the system, and increase the value of $j$ simultaneously for both the spins to study how the refrigeration of one of the spins depends on $j$. Unless otherwise mentioned, in all our analysis, we use the trace distance to define the DLT.
For computing heat current, local temperature and entropy, we solve the local as well as global QME using the Runge-Kutta fourth order technique, and determine the reduced state of the spin-$j$ particle in the steady state that is used to compute the relevant quantities.
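Pulling the earlier sketches together, the whole pipeline for one data point of Fig.~\ref{fig:xyz_two_spin_local} looks as follows (our illustration with placeholder parameters; it replaces the explicit Runge-Kutta integration by a direct steady-state solve, which should agree at long times):
\begin{verbatim}
# One data point of the two-spin XX refrigerator (parameters of Fig. 2).
from qutip import steadystate

j, h, T0, J = 1.0, (1.1, 1.3), (1.0, 1.1), 0.05
H = xyz_hamiltonian(j, h=h, J=J)                  # gamma = Delta = 0: XX
rd = [gamma_rate(h[r], 1.0 / T0[r]) for r in range(2)]
ru = [gamma_rate(-h[r], 1.0 / T0[r]) for r in range(2)]
rho_ss = steadystate(H, local_collapse_ops(j, 2, rd, ru))
rho_1 = rho_ss.ptrace(0)                          # reduced state of spin 1
T1_s = dlt(rho_1, j, h[0])                        # steady-state DLT
\end{verbatim}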
\subsection*{Tuning refrigerator with system parameters}
\textbf{\(XYZ\) model as refrigerator.} We first consider the $XYZ$-type interaction between the spins, and solve for the time-dependent state of the system using the local QME to compute $\dot{Q}_1$, $T_1^{s}$, and $S_N^s$ corresponding to the first spin. Fig.~\ref{fig:xyz_two_spin_local} depicts the variations of these quantities as a function of $j$, clearly indicating a significant advantage in the cooling of the first spin when the dimension of the Hilbert spaces corresponding to the spins increases. For example, with \(J=0.09\), in the case of spin-$1/2$, the decrease in temperature at the steady state from the initial state is \(\approx 0.59\%\), while it is \(\approx 3.1\%\) for the refrigerator with spin-$4$ systems.
Also, our results indicate that a higher value of $J$ favours the cooling of the first spin, compared to a lower value. In our analysis, we have kept the value of $J$ to be in such a range that the local QME can be applied. However, the improvement in cooling for higher values of $J$ indicates the need for an investigation with the global QME, which we shall discuss in the subsequent subsections.
Note that the results presented in Fig.~\ref{fig:xyz_two_spin_local} are for the case of $\gamma=0$ and $\Delta=0$, representing the \(XX\) Hamiltonian. Our data suggest that even in the presence of the $xy$- and the $z$-anisotropy in the interaction, the dimensional advantage in cooling persists, although the variation of the relevant quantities is almost negligible with non-zero values of the anisotropies for a fixed value of $j$, especially with low $j$ values. For high $j$, a slight change in the local temperature occurs upon the introduction of \(\gamma\) and \(\Delta\), and the results suggest that the performance of the refrigerator based on the $XXZ$ model is the best among the class of $XYZ$ models. Therefore, the behaviors of the heat current, entropy and local temperature depicted in Fig.~\ref{fig:xyz_two_spin_local} faithfully capture all the relevant information regarding the effect of increasing spin dimension and spin-spin interaction strength on the performance of the refrigerator. Note that positive and negative coupling strengths \(J\) lead to the same local temperature in the steady state.
\textbf{Refrigerator with bilinear-biquadratic interactions.} Using the local approach, we also investigate the performance of the two-spin refrigerator when the spin-spin interactions are governed by the BB Hamiltonian (see Fig.~\ref{fig:bbh_two_spin_local}), and have found the results to be qualitatively similar to those reported in Fig.~\ref{fig:xyz_two_spin_local}. The dimensional advantage of cooling is present irrespective of the phase of the system from which the spin-spin interaction parameter is chosen. Specifically, the parameters chosen for demonstration reveal that the minimum temperature is obtained when the corresponding system at equilibrium belongs to the critical phase.
\noindent\emph{Note.} A comment on the choice of the system parameters for the demonstration of refrigeration is in order here. Although numerous points exist in the space of system parameters where a local steady-state cooling of the first spin is observed, the total volume of the parameter space that represents such refrigerators is small compared to the entire parameter space, although it increases with the increase of the spin dimension. In Fig.~\ref{fig:scatter}, we depict, for $j=\frac{1}{2},1,\frac{3}{2}$, the points in the parameter space of $J$ and $\Delta h=h_2-h_1$ for which a steady-state cooling of at least $T_1^0-T_1^s=10^{-3}$ is obtained. The fraction of points representing a refrigerator increases with an increase in $j$: $2.7\%$, $14.3\%$ and $23.82\%$ for \(j = 1/2, \, 1, \,\text{and}\, 3/2\) respectively, demonstrating again a dimensional advantage in the accessibility of the parameter space in building a quantum refrigerator. Note also the higher clustering of the accessible points in parameter space towards high values of $J$, indicating the need for performing a global QME-based analysis of the system. Interestingly, however, we notice that there exists a forbidden regime in the \((J, \Delta h)\)-plane where cooling with $XX$ interactions does not occur, and this regime shrinks with increasing dimension.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig5.pdf}
\caption{(Color online) Steady-state cooling factor \(\eta_j\) (vertical axis) vs. initial temperature $T_1^0$ (horizontal axis). Dashed, solid and dotted lines represent spin quantum numbers $j = 1/2$, $1$ and $3/2$ respectively. Here $J = 0.05$ and $T_2^0-T_1^0 =0.4$. Other specifications are the same as in Fig. \ref{fig:xyz_two_spin_local}. Both the axes are dimensionless.}
\label{fig:temperature_tuning}
\end{figure}
\subsection*{Tuning refrigerator with bath temperature}
Along with system parameters, it is also important to investigate how the performance of the refrigerator can be controlled when one has access to the tunable parameters of the thermal baths, such as the bath temperatures $T_r^0$. Towards this aim, we define a steady-state cooling factor relative to the initial temperature of the cold spin-$j$ particle in the system, as
\begin{eqnarray}
\eta_j=\frac{T_1^0-T_1^s}{T_1^0}.
\end{eqnarray}
In Fig.~\ref{fig:temperature_tuning}, we plot the variation of $\eta_j$ as a function of $T_1^0$, which exhibits a critical point $T_1^c$ on the $T_1^0$-axis corresponding to a zero-crossing of $\eta_j$. For $T_1^0<T_1^c$, a steady-state heating of the first spin takes place, represented by a negative value of $\eta_j$, while for $T_1^0>T_1^c$, a positive value of $\eta_j$ is obtained due to the occurrence of a steady-state cooling. Note that for the reported data, the critical point $T_1^c$ corresponds to the situation described in Proposition I, such that
\begin{eqnarray}
\frac{h_1}{T_1^c}=\frac{h_2}{T_2^0},
\end{eqnarray}
ensuring that no evolution of the system takes place. Note also that our numerical analysis clearly suggests that
\begin{eqnarray}
\eta_{j=\frac{1}{2}} < \eta_{j=1} < \eta_{j=\frac{3}{2}},
\end{eqnarray}
thereby exhibiting the importance of higher dimensional subsystems in enhancing the performance of the designed refrigerator.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig6.pdf}
\caption{(Color online) Behavior of the steady-state local temperature of the target spin obtained via the trace and relative entropy distances (ordinate) with respect to the interaction strength $J$ (abscissa). Solid and hollow symbols represent the trace and relative entropy distances respectively. Squares, circles and triangles are for refrigerators with two identical spins having \(j=1/2,\, 1, \, 3/2\) respectively. Other specifications are the same as in Fig. \ref{fig:xyz_two_spin_local}. All the axes are dimensionless.}
\label{fig:different_distance_measure}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig7.pdf}
\caption{(Color online) \textbf{Steady-state local temperature (ordinate) with the variation of the spin quantum number $j$ (abscissa).} The dashed line is obtained by solving the global QME for the $XXZ$ model with two identical spin-$j$ systems, and the solid line is for the refrigerator consisting of a spin-$1/2$ and a spin-$j$ particle.
Here $J=0.05$ (circles) and $J=0.09$ (squares) with $J\Delta=-1.0$, while the strengths of the magnetic fields and the spin-bath interactions are chosen as $h_1=1.1$, $h_2=1.3$, $\alpha_1,\alpha_2=10^{-3}$ and $\omega_c=10^3$ respectively, and the initial temperatures of the spins are the same as in Fig. \ref{fig:xyz_two_spin_local}.
}
\label{fig:global_two_spins}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig8.pdf}
\caption{(Color online) (a) \(\dot{Q}_1\) and (b) \(T_1^s\) (ordinate) against the spin quantum number \(j\) (abscissa). Here the refrigerator is built with a spin-$1/2$ and a spin-$j$ particle governed by the Hamiltonian \(H_{xx}\) in Eq. (\ref{spin1/2_spinj_hamiltonian}). All other specifications are the same as in Fig. \ref{fig:xyz_two_spin_local}. All the axes are dimensionless.}
\label{fig:twospinnonidentical}
\end{figure*}
\textbf{Local temperature with different distance measures.} At this point, it is natural to ask whether the reported results remain invariant under a change in the choice of the distance measure used to quantify the DLT. We answer this question affirmatively. Fig.~\ref{fig:different_distance_measure} depicts a comparison between the DLT values obtained by using the trace distance and the relative entropy distance, defined as
\( S(\sigma_{1} || \sigma_{2}) = \text{Tr}\left[\sigma_{1} \log_2 \sigma_{1} -\sigma_{1} \log_2 \sigma_{2}\right]\),
for two density matrices $\sigma_1$ and $\sigma_2$.
While the two measures provide identical results for qubit systems, the values of the DLTs differ by $\sim10^{-3}$ with increasing $j$. Nonetheless, the qualitative behavior remains similar in all these situations. It is also noteworthy that the difference is very small for low values of the spin-spin interaction strength, and increases very slowly with an increase in $J$. We also check the performance of the DLT using the Uhlmann fidelity~\cite{uhlmann1976} as the distance measure, which coincides with the DLT obtained using the relative entropy distance.
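Each of these measures can be swapped into the minimization of the \texttt{dlt} sketch above; the only subtlety is the argument ordering of the relative entropy, which we take here as $S(\rho_r\|\tilde{\rho}_r)$ so that it stays finite for full-rank thermal states (our illustration, assuming QuTiP):
\begin{verbatim}
# Candidate distance functions D(thermal, rho) for the DLT minimization.
from qutip import tracedist, fidelity, entropy_relative

measures = {
    'trace':            tracedist,
    'relative entropy': lambda thermal, rho: entropy_relative(rho, thermal),
    'fidelity-based':   lambda thermal, rho: 1.0 - fidelity(thermal, rho),
}
\end{verbatim}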
\subsection*{Refrigeration using the global QME}
A question that naturally arises is whether the results corresponding to a quantum refrigerator obtained using the local QME remain the same even in situations where a global QME is appropriate to describe the dynamics of the system. To answer this question, we find that in the case of a two-spin model described by the Hamiltonian $H_{xyz}$, cooling of the first spin takes place only with a non-zero value of $J\Delta$ (see Fig.~\ref{fig:global_two_spins} for a typical cooling phenomenon for the first spin). This is in stark contrast with the situations discussed so far involving the local QME, where the $zz$-interaction term in $H_{xyz}$ does not have any significant effect on cooling (cf.~\cite{konar2021}). However, even in the case of the global QME, features like the significant dimensional advantage remain unaltered, and the amount of refrigeration of the first spin is much higher in comparison to the case of the local QME.
For example, for spin-$1/2$ systems, the percentage of cooling of the first spin is approximately \(18.8\%\) with the global QME, while for the spin-$3/2$ quantum refrigerator it is \(53.5\%\), for the $XXZ$-model refrigerator with \(J=0.05\) and \(J \Delta =-1\).
\subsection{System of two different spins}
\label{subsec:mixed_spin}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig9.pdf}
\caption{(Color online) (a) Local temperature (vertical axis) as a function of the spin quantum number $j$ of the spins (horizontal axis) with $N = 3$ for the Heisenberg $XX$ Hamiltonian. Here we choose $h_1=1.5$, $h_2=2.5$, $h_3=3.5$ and $\Gamma=0.05$, and the initial temperatures of the spins are $T_1(0)=1$, $T_2(0)=1.1$ and $T_3(0)=1.5$.
(b) \(T_1^s\) (ordinate) against the interaction strength \(J\) (abscissa) with \(N=3\) for the refrigerator based on the BB Hamiltonian for different \(\phi\) values. Initial values of the magnetic fields, temperatures and bath-system interaction strengths are the same as in the $XX$ model.
All the axes are dimensionless.}
\label{fig:xx_bbh}
\end{figure*}
Let us design a refrigerator with two spins having different values of $j$, and focus specifically on the situation where $j=\frac{1}{2}$ for the first spin $(r=1)$, while for the second spin $(r=2)$, $j$ can take an arbitrary value. While it is known that a qubit can be cooled in a qubit-qutrit system with specific interaction between them~\cite{linden2010}, it is not yet clear whether increasing the Hilbert space dimension of the second party in a $2\times (2j+1)$ system provides any advantage to the refrigeration of the qubit system. To address this question, we consider the Hamiltonian modelling the interaction between the spin-$1/2$ and spin-$j$ particle to be
\begin{equation}
H_{xx}=J[\tilde{S}_1^xS_2^x+\tilde{S}_1^yS_2^y],
\label{spin1/2_spinj_hamiltonian}
\end{equation}
where $\tilde{S}$ ($S$) represents the spin operator corresponding to the spin-$\frac{1}{2}$ (spin-$j$) subsystem, and $J$ is the strength of the spin-spin interaction. In Figs. \ref{fig:twospinnonidentical}(a) and (b), we respectively observe the patterns of $\dot{Q}_1$ and $T_1^{s}$ of the spin-$\frac{1}{2}$ particle by varying $j$ for the second spin. With an increasing $j$ for the second spin, $\dot{Q}_1$ starts from a low positive value and then increases, while $T_1^{s}$ starts from a value $\approx T_1^0$ and then decreases, exhibiting again the dimensional advantage in cooling the first spin. Surprisingly, we observe that in this non-identical scenario, the minimum temperature corresponding to \(j=4\) for the second spin is much lower than that obtained in the scenario with identical spins: the decrease in temperature is \(\approx 4.82\%\), compared to \(\approx 3.1\%\) for identical spins (compare Figs. \ref{fig:xyz_two_spin_local} and \ref{fig:twospinnonidentical}).
We also perform the same analysis using the global QME to find a more pronounced dimensional advantage. Specifically, with the $XXZ$ model (\(J=0.05, J \Delta = -1 \)), we find that an $18.9\%$ cooling of the first spin occurs in the case of $j=\frac{1}{2}$ for the second spin, while it becomes $48.53\%$ when the spin quantum number of the second spin is increased to $j=3/2$.
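The mixed-spin Hamiltonian of Eq.~(\ref{spin1/2_spinj_hamiltonian}) is built analogously to the identical-spin case (our sketch, assuming QuTiP; field values are placeholders):
\begin{verbatim}
# H_loc + H_xx for a spin-1/2 (site 1) coupled to a spin-j (site 2).
from qutip import jmat, qeye, tensor

def mixed_xx_hamiltonian(j, h=(1.1, 1.3), J=0.05):
    s, I2, Ij = 0.5, qeye(2), qeye(int(2 * j + 1))
    H_loc = h[0] * tensor(jmat(s, 'z'), Ij) + h[1] * tensor(I2, jmat(j, 'z'))
    H_xx = J * (tensor(jmat(s, 'x'), jmat(j, 'x'))
                + tensor(jmat(s, 'y'), jmat(j, 'y')))
    return H_loc + H_xx
\end{verbatim}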
\section{Refrigeration in three-spin systems}
\label{sec:threespinR}
Let us now move to a refrigerator setup consisting of three identical spin-$j$ particles, each of which is connected to a local thermal bath as shown in Fig. \ref{fig:schematic}.
Starting from the product of the thermal states of the local Hamiltonian \(H_{loc}\), the system evolves according to the Hamiltonian \(H_{xyz}\) or \(H_{B}\) at \(t>0\). In the case of the $XYZ$ refrigerator, we consider an isotropic case ($\gamma=0,\Delta=0$) for the demonstration of the performance of the refrigerator in a local QME approach. Fig.~\ref{fig:xx_bbh} depicts the variations of $\dot{Q}_1$, $T_1^{s}$, and $S_N^s$ as functions of $j$ for different values of $J$, clearly demonstrating a dimensional advantage. Note that although the qualitative results on the refrigeration of the first spin using the three-spin system remain similar to its two-spin variant (see Sec.~\ref{sec:twospinR}), quantitatively the two-spin refrigerator performs better than the three-spin one, which can be seen by comparing Fig.~\ref{fig:xx_bbh} with Fig.~\ref{fig:xyz_two_spin_local}.
In the case of the refrigerator with three spins governed by the BB Hamiltonian, $T_1^{s}$ and $S_N^s$ again exhibit increasing cooling of the first spin with increasing $J$ as well as with the increase of the spin quantum number \(j\). As observed for the two-spin refrigerator, the phase dependence also remains unaltered. However, the heat current $\dot{Q}_1$ exhibits a non-monotonic variation with $J$ for $j=2$ when $\phi = \pi/3$ and $2\pi/3$, and becomes negative for moderate and high values of $J$. We point out here that we have defined the local heat current following the global approach (see Sec.~\ref{subsec:osd}), which may lead to such anomalous behavior in the heat current (cf. \cite{Wichterich2007, barra2015, strasberg2017,De_Chiara2018,ghoshal2021,konar2021}).
\section{Conclusion}
\label{sec:conclu}
Summarizing, we have designed a quantum refrigerator built of a few spins whose individual Hilbert space dimensions can go beyond those of qubits or qutrits. The spins are considered to be interacting among each other via the $XYZ$ and the bilinear-biquadratic interactions, while each of the spins interacts locally with a bosonic bath. So far, such machines have typically been built with spin-$1/2$ or spin-$1$ systems, and the quantifiers of the performances of these machines, such as definitions of local temperature for the constituent subsystems, are designed accordingly. To deal with higher dimensional systems, in this paper, we propose a new definition of local temperature based on the minimum distance between the dynamical state of a spin-$j$ particle in the steady state and a canonical thermal state of the same particle, which proves to be a faithful quantifier for the performance of the designed refrigerator. The definition is proved to be consistent with the existing definitions for qubit systems, and the behavior of the distance-based local temperature is found to be in agreement with the local heat current and the entropy of the subsystems. We observed that our setup leads to a cooling of one of the spins in the system, which is enhanced with the increase of the spin quantum number of the spins, and thereby with the increase of the Hilbert space dimension, hence establishing the dimensional advantage in the refrigerators. On our way to verifying these results by using both local and global quantum master equations, we have also analytically derived the form of the Lindblad operators corresponding to the individual spins while constructing the dissipator for the local quantum master equation.
Miniaturisation of devices is necessary to sustain the current way of living. Although most of these devices work according to the laws of classical physics, they have now started knocking at the door of the quantum world due to immense advancement in the design and control of machines at the microscopic scale. In recent years, it has been established that appliances based on quantum mechanics can remarkably enhance the efficiencies compared to those obtained from the existing ones, thereby revolutionizing the world of technologies. In this respect, our work explores and manifests building small quantum refrigerators using quantum spin systems with large spins. The scope for future exploration from our work is immense. For instance, note that starting from a microscopic quantum thermal machine, there exist two routes to macroscopicity -- (a) by increasing the dimension of the individual subsystems of a composite quantum system while keeping the number of subsystems small, and (b) by having a large number of small subsystems~\cite{mohammady2018, arisoy2021}. Our results explore the former, while the latter has also gained some interest in the recent past~\cite{george2011,deniz2017,lekscha2018, kloc2019,hong2020,revathy2020}. It will be interesting to find out the hierarchies, if any, among the macroscopic devices obtained following these two different routes.
\acknowledgements
TKK, SG and ASD acknowledge the support from the Interdisciplinary Cyber Physical Systems (ICPS) program of the Department of Science and Technology (DST), India, Grant No.: DST/ICPS/QuST/Theme- 1/2019/23. AKP acknowledges the Seed Grant from IIT Palakkad. We acknowledge the use of \href{https://github.com/titaschanda/QIClib}{QIClib} -- a modern C++ library for general purpose quantum information processing and quantum computing (\url{https://titaschanda.github.io/QIClib}), and the cluster computing facility at the Harish-Chandra Research Institute.
\bibliographystyle{apsrev4-1}
\makeatletter
\def\section{\def\@secnumfont{\mdseries}\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\normalfont\scshape\centering}}
\def\subsection{\def\@secnumfont{\bfseries}\@startsection{subsection}{2}%
{\parindent}{.5\linespacing\@plus.7\linespacing}{-.5em}%
{\normalfont\bfseries}}
\makeatother
\def\subl#1{\subsection{}\label{#1}}
\newcommand{\Hom}{\operatorname{Hom}}
\newcommand{\End}{\operatorname{End}} \newcommand{\wh}[1]{\widehat{#1}} \newcommand{\Ext}{\operatorname{Ext}} \newcommand{\ch}{\text{ch}} \newcommand{\ev}{\text{ev}}
\newcommand{\Ob}{\operatorname{Ob}} \newcommand{\soc}{\operatorname{soc}} \newcommand{\rad}{\operatorname{rad}} \newcommand{\head}{\operatorname{head}}
\def\operatorname{Im}{\operatorname{Im}}
\def\operatorname{gr}{\operatorname{gr}}
\def\operatorname{mult}{\operatorname{mult}}
\newcommand{\krsm}{KR^\sigma(m\omega_i)} \newcommand{\krsmzero}{KR^\sigma(m_0\omega_i)} \newcommand{\krsmone}{KR^\sigma(m_1\omega_i)}
\newcommand{\vsim}{v^\sigma_{i,m}}
\newcommand{\Cal}{\cal} \newcommand{\Xp}[1]{X^+(#1)} \newcommand{\Xm}[1]{X^-(#1)}
\newcommand{\on}{\operatorname} \newcommand{\Z}{{\bold Z}} \newcommand{\J}{{\cal J}} \newcommand{\C}{{\bold C}} \newcommand{\Q}{{\bold Q}}
\renewcommand{\P}{{\cal P}}
\newcommand{\N}{{\Bbb N}} \newcommand\boa{\bold a} \newcommand\bob{\bold b} \newcommand\boc{\bold c} \newcommand\bod{\bold d} \newcommand\boe{\bold e} \newcommand\bof{\bold f} \newcommand\bog{\bold g}
\newcommand\boh{\bold h} \newcommand\boi{\bold i} \newcommand\boj{\bold j} \newcommand\bok{\bold k} \newcommand\bol{\bold l} \newcommand\bom{\bold m} \newcommand\bon{\bold n} \newcommand\boo{\bold o}
\newcommand\bop{\bold p} \newcommand\boq{\bold q} \newcommand\bor{\bold r} \newcommand\bos{\bold s} \newcommand\bou{\bold u} \newcommand\bov{\bold v} \newcommand\bow{\bold w} \newcommand\boz{\bold z}
\newcommand\boy{\bold y} \newcommand\ba{\bold A} \newcommand\bb{\bold B} \newcommand\bc{\bold C} \newcommand\bd{\bold D} \newcommand\be{\bold E} \newcommand\bg{\bold G} \newcommand\bh{\bold H} \newcommand\bi{\bold I}
\newcommand\bj{\bold J} \newcommand\bk{\bold K} \newcommand\bl{\bold L} \newcommand\bm{\bold M} \newcommand\bn{\bold N} \newcommand\bo{\bold O} \newcommand\bp{\bold P} \newcommand\bq{\bold Q} \newcommand\br{\bold R}
\newcommand\bs{\bold S} \newcommand\bt{\bold T} \newcommand\bu{\bold U} \newcommand\bv{\bold V} \newcommand\bw{\bold W} \newcommand\bz{\bold Z} \newcommand\bx{\bold x} {\title[Highest weight
categories of representations]{Current algebras, highest weight categories and quivers}
\author{Vyjayanthi Chari and Jacob Greenstein}
\thanks{This work was partially supported by the NSF grant DMS-0500751}
\address{Department of Mathematics, University of
California, Riverside, CA 92521.} \email{[email protected]}
\email{[email protected]}\maketitle
\begin{abstract}
We study the category of graded finite-dimensional
representations of the polynomial current algebra associated
to a simple Lie algebra. We prove that the category has
enough injectives and compute the graded character of the
injective envelopes of the simple objects as well as
extensions between simple objects. The simple objects in
the category are parametrized by the affine weight lattice.
We show that, with respect to a suitable refinement of the
standard ordering on the affine weight lattice, the category
is highest weight. We compute the $\Ext$ quiver of the algebra
of endomorphisms of the injective cogenerator of the
subcategory associated to an interval closed finite subset of
the weight lattice. Finally, we prove that there is a large
number of interesting quivers of finite, affine and tame
type that arise from our study. We also prove that the path
algebras of star-shaped quivers are the $\Ext$-algebras of
suitable subcategories.
\end{abstract}
\section*{Introduction}
In this paper we study the category $\cal G$ of graded
finite-dimensional representations of the polynomial current
algebra~$\lie g[t]$ associated to a simple finite dimensional Lie algebra~$\lie
g$. There are numerous interesting and related families of examples
of such representations: the Demazure modules arising from the
positive level representations of the affine algebra, the fusion
products of finite-dimensional representations of $\lie g[t]$ defined
in \cite{FL}, the Kirillov-Reshetikhin modules studied in
\cite{CMkir1, CMkir2} and the Weyl modules introduced in~\cite{CP}
and studied in \cite{CL,FoL}. All these representations are in
general reducible but always indecomposable.
The isomorphism classes of simple objects in $\cal G$ are indexed by
the set $\Lambda= P^+\times \bz_+$ where $P^+$ is the set of
dominant integral weights of $\lie g$. The set $\Lambda$ can be
identified in a natural way with a subset of the lattice of integral
weights $\wh P$ of the untwisted affine Lie algebra associated to
$\lie g$. We define an interval finite partial order $\preccurlyeq$
on $\Lambda$ which is a refinement of the usual order on $\wh P$ and
show (\thmref{thmone}) that $\cal G$ is a highest weight category,
in the sense of~\cite{CPS}, with the poset of weights
$(\Lambda,\preccurlyeq)$. To do this, we study first the category
$\wh{\cal G}$ of graded $\lie g[t]$-modules with finite-dimensional
graded pieces. This category has enough projectives and the graded
character of the projective modules can be described explicitly.
Then, using a certain duality, we are able to show that the category
$\cal G$ has enough injectives and
we compute the graded character of the injective
envelope of any simple object. We then prove that $\cal G$ is a
directed highest weight category by computing the extensions
between simple objects.
In Section~\ref{TMPALG} we study algebraic structures associated
with Serre subcategories of~$\cal G$. For an interval closed subset
$\Gamma$ of $\Lambda$, let $\cal G[\Gamma]$ be the full subcategory
of $\cal G$ consisting of objects whose simple constituents are
parametrized by elements of $\Gamma$ and let $I(\Gamma)_\Gamma$ be
the injective cogenerator of $\cal G[\Gamma]$. It is well-known
that there is an equivalence of categories between $\cal G[\Gamma]$
and the category of finite-dimensional right $\mathfrak
A(\Gamma)=\End_{\cal G[\Gamma]}I(\Gamma)_\Gamma$-modules. Moreover $\mathfrak
A(\Gamma)$ is a quotient of the path algebra of its $\Ext$ quiver
$Q(\Gamma)$ and has a compatible grading. By using the character
formula for the injective envelopes, we show that $Q(\Gamma)$ can
be computed quite explicitly in terms of finite dimensional
representations of~$\lie g$.
In Sections~\ref{EXH} and~\ref{QR} we show that there are many
interesting quivers arising from our study. Thus, in
Section~\ref{EXH} we see that for all $\lie g$ (in some cases one
has to exclude $\lie{sl}_2$ or~$\lie g$ of type~$C_\ell$), there
exist interval closed finite subsets $\Gamma$ such that the
corresponding algebra $\mathfrak A(\Gamma)$ is hereditary and
$Q(\Gamma)$ is (a) a generalized Kronecker quiver; (b) a quiver of
type $\mathbb A_\ell$, $\mathbb D_\ell$; (c) an affine quiver of
type $\tilde{\mathbb D}_\ell$; (d) any star shaped quiver with three
branches. In Section~\ref{QR} we study an example which arises from
the theory of Kirillov-Reshetikhin modules for $\lie g[t]$ where
$\lie g$ is of type $D_n$. In this case the algebra $\mathfrak
A(\Gamma)$ is not hereditary, but is still of tame representation
type.
\subsection*{Acknowledgements}
The first author is very grateful to Steffen Koenig for his infinite
patience in answering many questions and for his generosity in
providing references and detailed explanations, this paper could not
have been written without those discussions. The second author
thanks Olivier Schiffmann and Wolfgang Soergel. Part of this work
was done while the first author was visiting the University of
Cologne and the second author was visiting the Weizmann Institute of
Science. It is a pleasure to thank Peter Littelmann and the algebra
group of the University of Cologne and Anthony Joseph for their
hospitality. We also thank Brian Parshall for pointers to references
in the literature. Finally, we are grateful to Claus Michael Ringel
for explaining to us the proof that the example in Section~\ref{QR}
is of tame type.
\section{The category~$\mathcal G$}\label{CAT}
\subsection{The simple Lie algebras and the associated current algebras}\label{CAT10}
Throughout the paper $\lie g$ denotes a finite-dimensional complex
simple Lie algebra and $\lie h$ a fixed Cartan subalgebra of $\lie
g$. Set~$I=\{1,\dots,\dim\lie h\}$ and let $\{\alpha_i: i\in
I\}\subset\lie h^*$ be a set of simple roots of $\lie g$ with
respect to $\lie h$. Let $R\subset\lie h^*$ (respectively, $R^+$,
$P^+$, $Q^+$) be the corresponding set of roots (respectively,
positive roots, dominant integral weights, the $\bz_+$-span of
$R^+$) and let $\theta\in R^+$ be the highest root.
Let $W\subset \operatorname{Aut}(\lie h^*)$ be the Weyl group of $\lie g$ and
$w_\circ$ be the longest element of $W$. For
$\alpha\in R$ denote by $\lie g_\alpha$ the corresponding root
space. The subspaces $\lie n^\pm=\bigoplus_{\alpha\in R^+}\lie
g_{\pm\alpha},$ are Lie subalgebras of $\lie g$. Fix a Chevalley
basis $x^\pm_\alpha$, $\alpha\in R^+$, $h_i$, $i\in I$ of $\lie g$
and for~$\alpha\in R^+$, set~$h_\alpha=[x^+_\alpha,x^-_\alpha]$. Note that $h_{\alpha_i}=h_i$, $i\in I$. For
$i\in I$, let $\omega_i\in P^+$ be defined by
$\omega_i(h_j)=\delta_{ij}$ for all $j\in I$.
Let $\cal F(\lie g)$ be the category of finite-dimensional $\lie g$-modules with the morphisms being maps of $\lie g$-modules.
In particular, we write $\Hom_{\lie g}$ for~$\Hom_{\cal
F(\lie g)}$. The set $P^+$ parametrizes the isomorphism classes of simple objects in~$\cal F(\lie g)$. For $\lambda\in P^+$, let $V(\lambda)$ be
the simple module in the corresponding isomorphism class which is generated by an element $v_\lambda\in V(\lambda)$ satisfying the defining
relations:
$$
\lie n^+ v_\lambda=0,\quad hv_\lambda=\lambda(h)v_\lambda,\quad (x^-_{\alpha_i})^{\lambda(h_i)+1}v_\lambda =0,
$$
for all~$h\in\lie h$, $i\in I$. The module~$V(-w_\circ\lambda)$ is the $\lie g$-dual of $V(\lambda)$.
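(For instance, if $\lie g=\lie{sl}_2$, then $w_\circ$ acts on $\lie h^*$ by $-1$, so $-w_\circ\lambda=\lambda$ and every $V(\lambda)$ is self-dual.)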
If $V\in\cal F(\lie g)$, write
$$V=\bigoplus_{\lambda\in\lie h^*} V_\lambda,$$ where $V_\lambda=\{v\in
V:hv=\lambda(h)v,\ \ \forall\ h\in\lie h\}$. Set $\operatorname{wt}(V)=\{\lambda\in\lie h^*:V_\lambda\ne 0\}$. Finally, recall also that the category~$\cal
F(\lie g)$ is semi-simple, i.e. any object in~$\cal F(\lie g)$
is isomorphic to a direct sum of the modules $V(\lambda)$, $\lambda\in P^+$. We shall use
the following standard results in the course of the paper (cf.~\cite{PRV} for \eqref{CAT10.iv}).
\begin{lem}
\begin{enumerate}[{\rm(i)}]
\item\label{CAT10.i} Let $\lambda\in P^+$. Then
$\operatorname{wt}(V(\lambda))\subset \lambda-Q^+$.
\item\label{CAT10.ii} Let $V\in\cal F(\lie g)$. Then $w\operatorname{wt}(V)\subset\operatorname{wt}(V)$ for all $w\in W$
and $\dim V_\lambda=\dim V_{w\lambda}$.
\item\label{CAT10.iii} Let $V\in\cal F(\lie g)$. Then $$\dim\Hom_{\lie
g}(V(\lambda), V)=\dim\{v\in V_\lambda: \lie n^+ v=0\}.$$
\item\label{CAT10.iv} Let $\lambda,\mu\in P^+$. Then the module $V(-w_\circ\lambda)\otimes V(\mu)$ is
generated as a $\bu(\lie g)$-module by the element $v=v_{-\lambda}\otimes v_\mu$ with defining relations:
$$
(x^+_{\alpha_i})^{\mu(h_i)+1}v=(x^-_{\alpha_i})^{\lambda(h_i)+1}v=0,\qquad
hv=(\mu-\lambda)(h)v,
$$
for all $i\in I$ and $h\in\lie h$.\qed
\end{enumerate}
\end{lem}
Given any Lie algebra $\lie a$ let $\lie a[t]=\lie a\otimes \bc[t]$ be the polynomial current algebra of~$\lie a$. Let~$\lie a[t]_+$ be the
Lie ideal~$\lie a\otimes t \bc[t]$. Both $\lie a[t]$ and $\lie a[t]_+$ are $\bz_+$-graded Lie algebras with the grading given by powers of $t$.
Let $\bu(\lie a)$ denote the universal enveloping algebra of $\lie a$. Then $\bu(\lie a[t])$ has a natural $\bz_+$-grading as an associative
algebra and we let $\bu(\lie a[t])[k]$ be the $k^{th}$ graded piece. The algebra $\bu(\lie a)$ is a Hopf algebra, the comultiplication being given
by extending the assignment $x\mapsto x\otimes 1+1\otimes x$ for $x\in\lie a$ to an algebra homomorphism of $\bu(\lie a)$. In the case of $\bu(\lie
a[t])$ the comultiplication is a map of graded algebras.
{\em In the course of the paper, we shall repeatedly use the fact that $\bu(\lie a[t])$ is generated as a graded algebra by $\lie a$ and $\lie
a\otimes t$ without further comment.}
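For instance, $\bu(\lie a[t])[0]=\bu(\lie a)$, while $\bu(\lie a[t])[1]=\bu(\lie a)(\lie a\otimes t)=(\lie a\otimes t)\bu(\lie a)$, the last equality being a consequence of $[\lie a,\lie a\otimes t]\subset\lie a\otimes t$.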
\subsection{The category $\widehat{\cal G}$ }\label{CAT30}
Let $\wh{\cal G}$ be the category whose objects are graded $\lie g[t]$-modules with finite-dimensional graded pieces and where the morphisms are
graded maps of $\lie g[t]$-modules. More precisely, if $V\in\Ob\wh{\cal G}$ then
$$
V=\bigoplus_{r\in\bz_+} V[r],
$$
where $V[r]$ is a finite-dimensional subspace of $V$ such that $ (xt^k)V[r]\subset V[r+k] $ for all~$x\in\lie g$ and~$r,k\in\bz_+$. In
particular, $V[r]\in\Ob\cal F(\lie g)$. Also, if $V,W\in\Ob\wh{\cal G}$, then
$$\Hom_{\wh{\cal G}}(V,W)=\{f\in\Hom_{\lie g[t]}(V,W):
f(V[r])\subset W[r],\ r\in\bz_+\}.$$ For $f\in\Hom_{\wh{\cal G}}(V,W)$ let $f[r]$ be the restriction of $f$ to $V[r]$. Clearly,
$f[r]\in\Hom_{\lie g}(V[r],W[r])$.
Define a covariant functor $\ev:\cal F(\lie g)\to \wh{\cal G}$ by the requirements:
$$\ev(V)[0]=V,\qquad \ev(V)[r]=0,\qquad r>0,$$
and with $\lie g[t]$-action given by
$$
(xt^k)v=\delta_{k,0} xv,\qquad \forall\,x\in\lie g,\, k\in\bz_+,\, v\in V
$$
and
\begin{equation}\label{CAT30.10}
\Hom_{\wh{\cal G}}(\ev(V),\ev(W))= \Hom_{\lie g}(V,W).
\end{equation}
For~$s\in\bz_+$ let~$\tau_s$ be the grading shift given by
$$
(\tau_s V)[k]=V[k-s],\qquad k\in\bz_+, \quad V\in\Ob{\wh{\cal G}}.
$$
Clearly $\tau_s(V)\in\Ob{\wh{\cal G}}$.
\subsection{Simple objects in $\wh{\cal G}$}\label{CAT40}
For $\lambda\in P^+$ and $r\in\bz_+$, set
\begin{equation}\label{CAT40.20}
V(\lambda,r)=\tau_r(\ev(V(\lambda))).
\end{equation}
\begin{prop}The isomorphism classes of simple objects in $\wh{\cal G}$
are parametrized by pairs $(\lambda,r)$ and we have
\begin{align*}
&\Hom_{\wh{\cal G}}(V(\lambda,r), V(\mu,s))=0,\qquad (\lambda,r)\ne (\mu,s),\\
&\Hom_{\wh{\cal G}}(V(\lambda,r),V(\lambda,r))\cong\bc.
\end{align*}
Moreover if
$V\in\Ob\wh{\cal G}$ is such that $V=V[n]$ for some $n\in\bz_+$, then $V$ is semi-simple.
\end{prop}
\begin{pf}The modules $V(\lambda,r)$, $(\lambda,r)\in\Lambda$ are obviously simple and
non-isomorphic. Moreover, if $V\in\Ob\wh{\cal G}$ is such that $V\ne V[r]$ for all $r\in\bz_+$, there exist $m, m'\in\bz_+$ with
$m'>m$ such that $V[j]\not=0$ for $j\in\{m,m'\}$.
Hence the subspace $\bigoplus_{k>m}V[k]$ is a nontrivial proper graded $\lie g[t]$-submodule of
$V$ and it follows that $V$ is not simple. Assume now that $V$ is simple so that $V=V[r]$ for some $r$. This implies that $V$ is
finite-dimensional and also that $\lie g[t]_+ V=0$. It follows that $V$ must be isomorphic to $V(\lambda)$ for some $\lambda\in P^+$ as a $\lie
g$-module and hence $V\cong V(\lambda,r)$ as $\lie g[t]$-modules. The other statements are now obvious.
\end{pf}
\subsection{Tensor structure of the category~$\wh{\cal G}$}\label{CAT50}
Let~$V,W\in\Ob\wh{\cal G}$. Then~$V\otimes W$ is a $\lie g[t]$-module with the action being given by the comultiplication.
Given~$k\in\bz_+$, set
$$
(V\otimes W)[k]=\bigoplus_{i\in\bz_+} V[i]\otimes W[k-i],
$$
with the usual convention that~$W[j]=0$ if~$j<0$. The following is trivially checked.
\begin{lem}
\begin{enumerate}[{\rm(i)}]
\item\label{CAT50.i} $V\otimes W=\bigoplus_{k\in\bz_+} (V\otimes
W)[k]$ and for all $r\in\bz_+$, we have
$$
(xt^r)((V\otimes W)[k])\subset (V\otimes W)[k+r].
$$
In particular, $\wh{\cal G}$ is a tensor category.
\item\label{CAT50.ii} For all~$r,s\in\bz_+$
\begin{equation} \label{CAT50.10}
\tau_s V\cong V\otimes V(0,s),\qquad \tau_{r+s}(V\otimes W)\cong(\tau_r V)\otimes (\tau_s W).
\end{equation}
\end{enumerate}\qedhere
\end{lem}
\subsection{The subcategories $\cal G$ and $\cal G_{\le s}$} \label{CAT70}
Let $\cal G_{\le s}$ be the full subcategory of $\wh{\cal G}$ whose objects~$V$ satisfy
$$V[r]=0,\qquad \forall\, r>s,
$$
and let $\cal G$ be the full subcategory of $\wh{\cal G}$ consisting of $V\in\Ob\wh{\cal G}$ such that $V\in\Ob\cal G_{\le s}$ for some $s\in\bz_+$.
It follows from the definition that $\cal G_{\le s}$ is a full subcategory of~$\cal G_{\le r}$ for all~$s< r\in\bz_+$.
Given $V\in\Ob\cal G$, let $\soc(V)\in\Ob\cal G$ be the maximal semi-simple subobject
of~$V$. Similarly, given $V\in\Ob\wh{\cal G}$, let~$\head(V)$ be the maximal semi-simple quotient of~$V$.
Given $s\in\bz_+$ and $V\in\Ob\wh{\cal G}$, define
$$
V_{>s}= \bigoplus_{r>s} V[r],\qquad V_{\le s}= V/V_{>s}.
$$
Then~$V_{\le s}\in\Ob\cal G_{\le s}$.
Furthermore, if $f\in\Hom_{\wh{\cal G}}(V,W)$, then $V_{>s}$ is contained in the kernel of the canonical
morphism $\bar f:V\to W_{\le s}$ and hence we have a natural morphism~$f_{\le s}\in\Hom_{\cal G_{\le s}}( V_{\le s}, W_{\le s})$.
\begin{lem}
\begin{enumerate}[{\rm(i)}]
\item\label{CAT70.i} For all $r,s\in\bz_+$, and $V\in\Ob\cal
G_{\le r}$, $W\in\Ob\cal G_{\le s}$ we have $$V\otimes W\in\Ob\cal G_{\le r+s}.$$ In particular $\cal G$ is a tensor subcategory of $\wh{\cal
G}$. \item\label{CAT70.ii} The assignments $V\mapsto V_{\le r}$ for all~$V\in\Ob\wh{\cal G}$ and~$f\mapsto f_{\le r}$ for
all~$f\in\Hom_{\wh{\cal G}}(V,W)$, $V,W\in\Ob\wh{\cal G}$ define a full, exact and essentially surjective functor from $\wh{\cal G}$ to $\cal G_{\le
r}$. \item\label{CAT70.iii} For any $V\in\Ob\wh{\cal G}$, $\lambda\in P^+$ and $r,s\in\bz_+$ with $s\ge r$, we have
$$(V\otimes V(\lambda,r))_{\le s}\cong V_{\le s-r}\otimes V(\lambda,r).$$
\end{enumerate}
\end{lem}
\begin{pf} Parts~\eqref{CAT70.i} and~\eqref{CAT70.ii} are obvious. For the last part, consider the natural map of graded $\lie g[t]$-modules
$V\otimes V(\lambda,r)\to V_{\le s-r}\otimes V(\lambda,r)$. The assertion follows by noting that $(V\otimes V(\lambda,r))[k]=V[k-r]\otimes
V(\lambda,r)$ for all $k\in\bz_+$.
\end{pf}
From now on, given~$V\in\Ob\cal G$ we denote by~$[V:V(\lambda,r)]$ the multiplicity of~$V(\lambda,r)$ in a composition series
for~$V$. Furthermore, given~$W\in\Ob\wh{\cal G}$, we set~$[W:V(\lambda,r)]:=[W_{\le r}:V(\lambda,r)]$. Observe that
$[V:V(\lambda,r)]$ equals the $\lie g$-module multiplicity of~$V(\lambda)$ in~$V[r]$.
For any~$V\in\Ob\wh{\cal G}$, define
$$\Lambda(V)=\{(\lambda,r)\in\Lambda: [V:V(\lambda,r)]\ne 0\}.$$
\subsection{}\label{HWCAT}
We recall the following definition (which motivated much of this paper) of a directed category following~\cite{CPS,PSW}, in the context of
interest to us. Thus let~$\cal C$ be an abelian category over~$\bc$ whose objects are complex vector spaces, have finite length and such that
$\Hom_{\cal C}(M,N)$ is finite-dimensional for all $M,N\in\Ob\cal C$.
\begin{defn} We say
that $\cal C$ is a directed category if
\begin{enumerate}[$1^\circ.$]
\item\label{HWCAT90.1} The simple objects in $\cal C$ are
parametrized by a poset $(\Pi,\le)$ ({\em the poset of weights}) such that for all~$\tau\in\Pi$, the set~$\{\xi\in\Pi\,:\,\xi<\tau\}$ is finite.
\item\label{HWCAT90.2} Given~$\sigma\in\Pi$, let~$S(\sigma)$ be a simple object in the corresponding isomorphism class. Then
$$
\Ext^1_{\cal C}(S(\sigma),S(\tau))\not=0 \implies \sigma<\tau.
$$
\end{enumerate}\end{defn}
It is immediate from~\cite{DR,Rin,Soe} that a directed category has enough injectives.
Given~$\Xi\subset\Pi$,
let~$\mathcal C[\Xi]$ be the full subcategory of~$\cal C$ whose objects satisfy
$$
M\in\Ob\cal C[\Xi],\quad [M:S(\tau)]\not=0\implies \tau\in\Xi.
$$ It is clear that if $\cal C$ is a directed category with poset of weights $\Pi$, then for any subset $\Xi\subset\Pi$,
the category $\cal C[\Xi]$ is also directed with poset of weights $\Xi$.
Given $\sigma,\tau\in\Xi $ with $\sigma<\tau$,
we denote by $[\sigma,\tau]$ the interval $\{\xi:\sigma\le \xi\le \tau\}$. A subset $\Xi$ of $\Pi$ is said to be {\em interval closed} if
$\sigma<\tau\in\Xi$ implies that $[\sigma,\tau]\subset \Xi$.
\subsection{}\label{CAT110} We can now state the main result of this section. Set $$\Lambda=P^+\times\bz_+,\qquad \Lambda_{\le
r}=\{(\nu,l)\in\Lambda\,:\, l\le r\},\qquad r\in\bz_+.
$$
Define a strict partial order on~$\Lambda$ in the following way.
Given~$(\lambda,r), (\mu,s)\in\Lambda$, say that $(\mu,s)$ covers
$(\lambda,r)$ if and only if $s=r+1$ and~$\mu-\lambda\in
R\sqcup\{0\}$. It follows immediately that for
any~$(\mu,s)\in\Lambda$ the set of~$(\lambda,r)\in\Lambda$ such
that~$(\mu,s)$ covers $(\lambda,r)$ is finite. Let $\preccurlyeq$ be
the unique partial order on $\Lambda$ generated by this cover
relation. Then~$\{(\mu,s): (\mu,s)\prec(\lambda,r)\}$ is finite for
all~$(\lambda,r)\in\Lambda$. Note that if $(\lambda,r)\prec(\mu,s)\in \Lambda_{\le k}$
then $(-w_\circ\mu,k-s)\prec (-w_\circ\lambda,k-r)$.
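For example, let $\lie g=\lie{sl}_2$ with simple root $\alpha$. Then $(\mu,r+1)$ covers $(\lambda,r)$ if and only if $\mu\in\{\lambda-\alpha,\lambda,\lambda+\alpha\}\cap P^+$, and so the elements of $\Lambda$ covered by $(\alpha,r+1)$ are precisely $(0,r)$, $(\alpha,r)$ and $(2\alpha,r)$.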
\begin{thm} \label{thmone} For all $\Gamma\subset\Lambda$, the category $\cal G[\Gamma]$ is a directed category with poset
of weights $(\Gamma,\preccurlyeq)$.
\end{thm}
\noindent
We prove this theorem in the next section (\propref{CAT170}).
\section{Injective and projective objects}
\subsection{Projectives in $\wh{\cal G}$ and $\cal G_{\le r}$}\label{CAT130}
Given $(\lambda,r)\in \Lambda$, set
$$
P(\lambda,r)=\bu(\lie g[t])\otimes_{\bu(\lie g)} V(\lambda,r) .
$$
Clearly, $P(\lambda,r)$ is an infinite dimensional graded $\lie g[t]$-module. Using the PBW theorem we have an isomorphism of graded vector
spaces
$$
\bu(\lie g[t])\cong \bu(\lie g[t]_+)\otimes \bu(\lie g),
$$
and hence we get
\begin{equation}\label{CAT130.10}
P(\lambda,r)[k]=\bu(\lie g[t]_+)[k-r]\otimes V(\lambda,r),
\end{equation}
where we understand that $\bu(\lie g[t]_+)[k-r]=0$ if $k<r$. This shows that $P(\lambda,r)\in\Ob\wh{\cal G}$ and also that
$$P(\lambda,r)[r] =1\otimes V(\lambda,r).$$ Set
$p_{\lambda,r}=1\otimes v_{\lambda,r}$, where $v_{\lambda,r}$ denotes the highest weight vector $v_\lambda$ regarded as an element of $V(\lambda,r)$.
\begin{prop}
Let $(\lambda,r)\in\Lambda$, $s\in\bz_+$ and $s\ge r$.
\begin{enumerate}[{\rm(i)}]
\item\label{CAT130.i} $P(\lambda,r)$ is generated as a $\lie
g[t]$-module by $p_{\lambda,r}$ with defining relations:
$$(\lie n^+)p_{\lambda,r}=0,\quad
hp_{\lambda,r}=\lambda(h)p_{\lambda,r},\quad (x^-_{\alpha_i})^{\lambda(h_i)+1}p_{\lambda,r}=0,$$ for all $h\in\lie h$, $i\in I$. Hence,
$P(\lambda,r)$ is the projective cover in the category~$\wh{\cal G}$ of its unique simple quotient $V(\lambda,r)$. Moreover the kernel
$K(\lambda,r )$ of the canonical projection $P(\lambda,r)\twoheadrightarrow V(\lambda,r)$ is generated as a $\lie g[t]$-module by $P(\lambda,r)[r+1]=\lie
g\otimes V(\lambda,r)$.
\item\label{CAT130.ii}
$P(\lambda,r)\cong P(0,0)\otimes V(\lambda,r)$ as objects in~$\wh{\cal G}$.
\item\label{CAT130.iii} The modules $P(\lambda,r)_{\le s}$ are
projective in $\cal G_{\le s}$ and
\begin{equation*}
P(\lambda,r)_{\le s}\cong P(0,0)_{\le s-r}\otimes V(\lambda,r)
\end{equation*}
\item\label{CAT130.iv} As $\lie g$-modules, we have
$$P(0,0)[k]\cong\bu(\lie g[t]_+)[k]\cong \bigoplus_{(r_1,\dots,r_k)\in\bz_+^k\,:\,
\sum_{j=1}^kjr_j=k}S^{r_1}(\lie g)\otimes\cdots\otimes S^{r_k}(\lie g),
$$ where
$S^p(\lie g)$ denotes the $p^{th}$ symmetric power
of $\lie g$.
\item\label{CAT130.v} Let~$(\mu,s)\in\Lambda$. Then~$[K(\lambda,r):V(\mu,s)]\not=0$ only if~$(\lambda,r)\prec(\mu,s)$.
\item\label{CAT130.vi} Let~$V\in\Ob\wh{\cal G}$. Then~$\dim\Hom_{\wh{\cal G}}(P(\lambda,r),V)=[V:V(\lambda,r)]$.
\end{enumerate}
\end{prop}
\begin{pf} The fact that $P(\lambda,r)$ is projective in the category~$\wh{\cal G}$ is standard in
relative homological algebra (cf.~\cite{Hoch}). The other statements in~\eqref{CAT130.i} are immediate from the discussion preceding the
proposition. For part~\eqref{CAT130.ii}, note that the element $p_{0,0}\otimes v_{\lambda,r}$ satisfies the defining relations of
$P(\lambda,r)$. Moreover it is easily seen that
$$
\bu(\lie g[t])(p_{0,0}\otimes v_{\lambda,r})=P(0,0)\otimes V(\lambda,r).
$$
Hence we have a surjective morphism $P(\lambda,r)\to P(0,0)\otimes V(\lambda,r)$ in~$\wh{\cal G}$. On the other hand, \eqref{CAT130.10} implies
that $P(\lambda,r)\cong P(0,0)\otimes V(\lambda,r)$ as vector spaces and~\eqref{CAT130.ii} is proved. Part~\eqref{CAT130.iii} is immediate from
\lemref{CAT70}(\ref{CAT70.ii},\ref{CAT70.iii}). The first isomorphism in \eqref{CAT130.iv} is obvious. To prove the second, we may
assume that $k>0$ since $\bu(\lie g[t]_+)[0]=\bc$. For $s\ge 0$, let $\bu(\lie g[t]_+)_{\le s}$ be the subspace
of~$\bu(\lie g[t]_+)$ spanned by the set
$$\{(y_1t^{r_1})\cdots (y_lt^{r_l}): y_jt^{r_j}\in\lie g[t]_+,\,1\le j \le l\le s\},$$ and set
$$
\bu(\lie g[t]_+)[k]_{\le s}=\bu(\lie g[t]_+)[k]\cap \bu(\lie g[t]_+)_{\le s}.
$$
Then $\bu(\lie g[t]_+)[k]=\bu(\lie g[t]_+)[k]_{\le k}$ and
for $0\le s\le k$ the subspaces $\bu(\lie g[t]_+)[k]_{\le s}$ define an increasing filtration on $\bu(\lie g[t]_+)[k]$. Moreover, regarding
$\lie g[t]_+$ as a $\lie g$-module via the adjoint action on~$\lie g[t]$, we see that this filtration is in fact $\lie g$-equivariant.
Since $\bu(\lie g[t]_+)[k]_{\le r}$
is finite dimensional we get an isomorphism of $\lie g$-modules,
\begin{align*}
\bu(\lie g[t]_+)[k]_{\le r}&\cong_{\lie g} \bu(\lie g[t]_+)[k]_{\le r-1}\oplus (\bu(\lie g[t]_+)[k]_{\le r}/\bu(\lie g[t]_+)[k]_{\le r-1})
\\&\cong_{\lie g} \bu(\lie g[t]_+)[k]_{\le r-1} \oplus S^r(\lie g[t]_+)[k],
\end{align*}
the second isomorphism being a consequence of the PBW theorem.
It follows by a downward induction on~$r$ that
$$\bu(\lie g[t]_+)[k]\cong_{\lie g}
\bigoplus_{r=1}^k S^r(\lie g[t]_+)[k]=S(\lie g[t]_+)[k].
$$
Given a partition $\bon= (n_s\ge n_{s-1}\ge \cdots \ge n_1>0)$ of $k$, let $V(\bon)$ be the subspace of~$S(\lie g[t]_+)[k]$ spanned by the
elements $$ \{(x_{j_1} t^{n_1})\cdots (x_{j_s} t^{n_s}): x_{j_1},\dots,x_{j_s}\in\lie g\}.
$$
Clearly $V(\bon)$ is a $\lie g$--submodule of $S(\lie g[t]_+)[k]$ and we have $$
S(\lie g[t]_+)[k]=\bigoplus_{\bon\vdash k}
V(\bon).
$$
The result follows since $$V(\bon)\cong_{\lie g} S^{r_1}(\lie g)\otimes \cdots\otimes S^{r_k}(\lie g),$$ where $r_j=|\{ 1\le i\le s\,:\,
n_i=j\}|$.
Part~\eqref{CAT130.v} is now obvious. To establish \eqref{CAT130.vi}, it is enough to observe that
by~\eqref{CAT130.i} the natural map
$$\Hom_{\wh{\cal G}}(P(\lambda,r),V)\to\Hom_{\cal G_{\le r}}(P(\lambda,r)_{\le r},V_{\le r})$$
is injective and hence is an isomorphism (\lemref{CAT70}\eqref{CAT70.ii}). The statement follows since $P(\lambda,r)_{\le r}\cong V(\lambda,r)$ is projective
in~$\cal G_{\le r}$ and every object in~$\cal G_{\le r}$ has finite length.
\end{pf}
In what follows, we shall write
$$S^{(k)}(\lie g)=\bigoplus_{(r_1,\dots,r_k)\in\bz_+^k\,:\,
\sum_{j=1}^kjr_j=k} S^{r_1}(\lie g)\otimes\cdots\otimes S^{r_k}(\lie
g).
$$
Observe that~$S^{(k)}(\lie g)$ is a $\lie g$-module quotient of~$\lie g^{\otimes k}$. Indeed, the map~$\lie g^{\otimes k}\to
\bu(\lie g[t]_+)[k]\cong_{\lie g} S^{(k)}(\lie g)$ given by extending~$x_1\otimes\cdots\otimes x_k\mapsto (x_1 t)\cdots(x_k t)$, $x_j\in\lie g$, $1\le j\le k$ is a
surjective $\lie g$-module homomorphism.
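For instance, $S^{(2)}(\lie g)\cong S^2(\lie g)\oplus\lie g$, the summands corresponding to $(r_1,r_2)=(2,0)$ and $(r_1,r_2)=(0,1)$; if $\lie g=\lie{sl}_2$, then $S^2(\lie g)\cong V(2\theta)\oplus V(0)$ and hence $S^{(2)}(\lie g)\cong V(2\theta)\oplus V(\theta)\oplus V(0)$.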
\subsection{Morphisms between projectives}\label{CAT150} The next
proposition is trivial, but we include it here explicitly since it is used repeatedly in the later sections where we construct examples of
interesting subcategories of $\cal G$.
Let~$s\le r\le \ell\in\bz_+$. It is immediate from the definitions and Proposition~\ref{CAT130} that
\begin{equation}\label{CAT150.10}
\begin{split}
\Hom_{\cal G}(P(\lambda,r)_{\le \ell},P(\mu,s)_{\le
\ell})&=\Hom_{\wh{\cal G}}(P(\lambda,r),P(\mu,s))\\
&\cong \Hom_{\lie g}(V(\lambda),\bu(\lie g[t]_+)[r-s]\otimes V(\mu)).
\end{split}
\end{equation}
In this section we make the last isomorphism explicit. Let $s\le r\in\bz_+$ and $\lambda,\mu\in P^+$. Given $f\in\Hom_{\lie
g}(V(\lambda),\bu(\lie g[t]_+)[r-s]\otimes V(\mu))$, define $\bof\in\Hom_{\bc}(P(\lambda,r),P(\mu,s))$ by
$$
\bof(u\otimes v)=\sum_p uu_p\otimes v_p,
$$
where~$u\in \bu(\lie g[t]_+)$, $v\in V(\lambda,r)$ and~$\sum_p u_p \otimes v_p=f(v)$. Let $m:\bu(\lie g[t])\otimes \bu(\lie g[t])\to\bu(\lie
g[t])$ be the multiplication map. The next proposition is a straightforward consequence of Proposition~\ref{CAT130} and we omit the details of
the calculations.
\begin{prop} Let $(\lambda,r), (\mu,s)\in\Lambda$.
\begin{enumerate}[{\rm(i)}]
\item\label{CAT150.i} The assignment~$\bof\mapsto \bof[r]$ defines
an isomorphism of vector spaces
$$
\phi^{(\lambda,r)}_{(\mu,s)}: \Hom_{\wh{\cal G}} (P(\lambda,r),P(\mu,s))\to \Hom_{\lie g}(V(\lambda),\bu(\lie g[t]_+)[r-s]\otimes V(\mu)).
$$
Moreover if $\bof\in\Hom_{\wh{\cal G}}(P(\lambda,r), P(\mu,s))$, $\bog\in\Hom_{\wh{\cal G}}(P(\mu,s), P(\nu,k))$, then
$$(\bog\circ\bof)[r]=(m\otimes 1)\circ(1\otimes \bog[s])\circ \bof[r].$$
\item\label{CAT150.ii} The assignment~$f\mapsto \bof$ is an
isomorphism
$$
\psi^{(\lambda,r)}_{(\mu,s)}: \Hom_{\lie g}(V(\lambda),\bu(\lie g[t]_+)[r-s]\otimes V(\mu))\to \Hom_{\wh{\cal G}}(P(\lambda,r),P(\mu,s))
$$
which is the inverse of~$\phi^{(\lambda,r)}_{(\mu,s)}$. Moreover, if~$f\in \Hom_{\lie g}(V(\lambda),\bu(\lie g[t]_+)[r-s]\otimes V(\mu))$,
$g\in\Hom_{\lie g}(V(\mu),\bu(\lie g[t]_+)[s-k]\otimes V(\nu))$ then
$$
\psi^{(\lambda,r)}_{(\nu,k)}((m\otimes 1)\circ(1\otimes g)\circ f)= \psi^{(\mu,s)}_{(\nu,k)}(g)\circ \psi^{(\lambda,r)}_{(\mu,s)}(f).
$$
\end{enumerate}
\end{prop}
\subsection{Duality in~$\cal G_{\le s}$}\label{CAT90}
Given~$V,W\in\Ob\cal G$, the vector space $\Hom_{\bc}(V,W)$ is a $\lie g[t]$-module with respect to the usual action
$$
((xt^r)\cdot f)(v)= (xt^r) f(v)-f( (xt^r) v),
$$
for all~$x\in\lie g$, $r\in\bz_+$, $f\in\Hom_{\bc}(V,W)$ and~$v\in V$. For $k\in\bz_+$, set
\begin{equation}\label{CAT90.10}
\Hom_{\bc}(V,W)[k]=\bigoplus_{i\in\bz_+} \Hom_{\bc}(V[i],W[i+k])
\end{equation}
and define
$$
\Hom_{\bc}^+(V,W)=\bigoplus_{k\in\bz_+} \Hom_{\bc}(V,W)[k].
$$
Since~$V[i]=0$ for all but finitely many~$i\in\bz_+$,
$\dim\Hom_{\bc}(V,W)[k]<\infty$. Notice that
$\Hom_{\bc}^+(V,W)=\Hom_{\bc}(V,W)$ provided that~$V\in\Ob\cal
G_{\le r}$ and~$W[i]=0$ for all~$i<r$. The proof of the following
proposition is quite standard and is omitted.
\begin{prop} Let $V,W\in\Ob\cal G$, $r,s\in\bz_+$.
\begin{enumerate}[{\rm(i)}]
\item\label{CAT90.0} For all~$x\in\lie g$, $i,r,k\in\bz_+$ and
$f\in\Hom_{\bc}(V[i],W[i+k])$,
\begin{equation*}
(xt^r)\cdot f\in\Hom_{\bc}(V[i],W[i+k+r])\oplus \Hom_{\bc}(V[i-r],W[i+k]).
\end{equation*}
In particular, $\Hom_\bc^+(V,W)\in\Ob\cal G$ and
$$
W\in\Ob\cal G_{\le s}\implies \Hom_{\bc}^+(V,W)\in\Ob\cal G_{\le s}.
$$
\item\label{CAT90.i} Let ${}^{\#_s}:\cal G_{\le s}\to \cal G_{\le
s}$ be the contravariant functor defined by $\Hom_{\bc}(-,V(0,s))$. Then
${}^{\#_s}$ is exact and for all $\lambda\in P^+$, $r\le s$,
$$
(V^{\#_r})^{\#_s}\cong \tau_{s-r} V,\qquad V(\lambda,r)^{\#_s}\cong V(-w_\circ\lambda,s-r).$$ In particular, ${}^{\#_s}$ defines an involutive
auto-duality on the category~$\cal G_{\le s}$.
\item\label{CAT90.ii} Suppose that~$V\in\Ob \cal G_{\le r}$,
$W\in\Ob\cal G_{\le s}$. As objects in $\cal G$,
$$
(V\otimes W)^{\#_{r+s}}\cong V^{\#_r}\otimes W^{\#_s},\qquad V\otimes W^{\#_s}\cong\Hom_{\bc}(W,V(0,s)\otimes V).
$$
\item\label{CAT90.iii} For all $V,W'\in\Ob\cal G$, $W\in\Ob\cal G_{\le s}$, we have an
isomorphism of vector spaces,
\begin{equation*}\tag*{\qedsymbol}\Hom_{\cal G}(V,W\otimes W')\cong
\Hom_{\cal G}(V\otimes W^{\#_s},V(0,s)\otimes W').
\end{equation*}
\end{enumerate}
\end{prop}
\subsection{Injective objects in $\cal G$ and ${\wh{\cal G}}$}\label{injG}
We begin with the following remark: any injective object of $\cal G$
is also injective in $\wh{\cal G}$. To prove this, let $I\in\Ob\cal G$
be injective and assume that $I[s]=0$ for all $s\ge r$. Suppose that
$\iota\in\Hom_{\wh{\cal G}}(V,W)$ is injective and
let $f\in\Hom_{\wh{\cal G}}(V,I)$. Since $f_{\le r}\in\Hom_{\cal
G}(V_{\le r}, I)$ and $\iota_{\le r}\in\Hom_{\wh{\cal G}}(V_{\le
r},W_{\le r})$ is injective there exists $\tilde f\in\Hom_{\cal G}(
W_{\le r},I)$ such that $\tilde f\circ\iota_{\le r}=f_{\le r}$. Let
$\bar f=\tilde f\circ p_r(W)$ where $p_r(W):W\to W_{\le r}$ is the
canonical projection. It is now easily checked that $\bar
f\circ\iota=f$.
For~$(\lambda,r)\in\Lambda$, set
$$ I(\lambda,r)= P(-w_\circ\lambda,0)_{\le r}{}^{\#_r}.
$$
It follows from~\propref{CAT130}\eqref{CAT130.iii} and~\propref{CAT90}\eqref{CAT90.ii} that $I(\lambda,r)\cong I(0,r)\otimes V(\lambda,0)$.
\begin{prop}
Let~$(\lambda,r)\in\Lambda$.
\begin{enumerate}[{\rm(i)}]
\item\label{injG.i} The object $I(\lambda,r)$ is the injective envelope of $V(\lambda,r)$ in $\cal
G$.
\item\label{injG.ii}
For $k\in\bz_+$ we have
$$
I(\lambda,r)[r-k]\cong_{\lie g} S^{(k)}(\lie g)\otimes V(\lambda).
$$
\item\label{injG.iii}
Let~$(\mu,s)\in\Lambda$. Then~$[I(\lambda,r)/V(\lambda,r):V(\mu,s)]\not=0$ only if~$(\mu,s)\prec(\lambda,r)$.
\end{enumerate}
\end{prop}
\begin{pf} It is immediate from \propref{CAT90} that
$I(\lambda,r)$ is injective in $\cal G_{\le r}$. To prove that
$I(\lambda,r)$ is injective in $\cal G$, it suffices now to show
that $\Ext_{\cal G}^1(V(\mu,s),I(\lambda,r))=0$ if $s>r$, in other words that
every short exact sequence of the form $$0\to I(\lambda,r)\to V\to
V(\mu, s)\to 0$$ splits if $s>r$. Writing
$V=\bigoplus_{k\in\bz_+}V[k]$, we see that $V[k]=0$ if $k>s$. Hence
$\lie g[t]V[s]\subset V[s]$. Moreover, since $\bigoplus_{k\le r}
V[k]\cong_{\cal G} I(\lambda,r)$ it follows now that we have a
decomposition of $\lie g[t]$-modules $$V\cong I(\lambda,r)\oplus
V(\mu,s)$$ and hence the short exact sequence splits. To prove that
it is the injective envelope of $V(\lambda,r)$ it suffices to use
\propref{CAT130} and \lemref{CAT10} to notice that $V(\lambda,r)$ is
the unique irreducible subobject of $I(\lambda,r)$.
The proofs of
\eqref{injG.ii} and \eqref{injG.iii} are immediate from
\lemref{CAT10} and \propref{CAT130}.
\end{pf}
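For instance, taking $\lambda=0$ and $r=1$ in part~\eqref{injG.ii} gives $I(0,1)[1]\cong\bc$ and $I(0,1)[0]\cong\lie g$; thus $I(0,1)$ has length two, with socle $V(0,1)$ and head $V(\theta,0)$, and realizes the non-split extension of $V(\theta,0)$ by $V(0,1)$ predicted by \propref{CAT170} below.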
\begin{cor}\label{injG.cor} Let $V,W\in\Ob\cal G$.
\begin{enumerate}[{\rm(i)}]
\item\label{injG.cor.i} For all $j\ge 0$, we have
$$\Ext^j_{\cal G}(V,W)\cong\Ext^j_{\wh{\cal G}}(V,W).$$
\item\label{injG.cor.ii} Let~$I$ be the injective envelope of~$V$. If $(\lambda,r)\in \Lambda(I)$ then
$(\lambda,r)\preccurlyeq (\mu,s)$ for some~$(\mu,s)\in\Lambda(\soc(V))$.
\end{enumerate}
\end{cor}
\subsection{Extensions between simple objects} \label{CAT170}
The following proposition proves~\thmref{thmone}.
\begin{prop}
For $(\lambda,r),(\mu,s)\in\Lambda$, we have
\begin{align*} &\Ext^1_{{\cal G}}(V(\lambda,r),V(\mu,s))=0,
\qquad s\ne r+1,\\
&\Ext^1_{{\cal G}}(V(\lambda,r),V(\mu,r+1))\cong\Hom_{\lie g}(
V(\lambda), \lie g\otimes V(\mu)).
\end{align*}
In other words, $\Ext^1_{{\cal G}}(V(\lambda,r),V(\mu,s))=0$
unless $(\mu,s)$ covers $(\lambda,r)$.
\end{prop}
\begin{pf}
Applying $\Hom_{\cal G}(V(\lambda,r),-)$ to
the short exact sequence $$0\to V(\mu,s)\to
I(\mu,s)\to J(\mu,s)\to 0$$
gives
$$\Hom_{{\cal G}}(V(\lambda, r),
J(\mu,s))\cong\Ext^1_{\cal G}(V(\lambda,r), V(\mu,s)).
$$
The proposition obviously follows if we prove that
$$
\Hom_{{\cal G}}(V(\lambda, r),J(\mu,s))\cong\begin{cases} \Hom_{\lie g}(\lie g\otimes V(\lambda),V(\mu)),&\text{if $s=r+1$},\\ 0,&
\text{otherwise}.\end{cases}
$$
Let $\psi: V(\lambda,r)\to
J(\mu,s)$ be a non-zero element of $\Hom_{{\cal G}}(V(\lambda, r),
J(\mu,s))$. It follows from Proposition \ref{injG} that
$(\lambda,r)\prec (\mu,s)$ and hence in particular that $r<s$.
Suppose that $r<s-1$. Since~$V(\mu,s)$ is essential in~$I(\mu,s)$, there exists $V\subset I(\mu,s)$ such that
$V/V(\mu,s)\cong \psi(V(\lambda,r))$. Then $V=V[s]\oplus V[r]$ and
since $r<s-1$, it follows that $\lie g[t]V[r]\subset V$. Hence
$V[r]$ is in $\soc(I(\mu,s))$ which is impossible. Thus, $s=r+1$.
The following isomorphisms which are consequences of \propref{injG}
establish the proposition.
\begin{gather*}\Hom_{\lie g}( V(\lambda),
\lie g\otimes V(\mu))\cong \Hom_{\cal G}(V(\lambda,r),
\tau_{r}\ev(\lie g\otimes V(\mu))),\\
\tau_r\ev(J(\mu,r+1)[r])\cong_{\cal G}
\tau_{r}\ev(\lie g\otimes V(\mu)),\\
\Hom_{{\cal G}}(V(\lambda,r),J(\mu,r+1)) \cong \Hom_{\cal
G}(V(\lambda,r),\tau_r\ev(J(\mu,r+1)[r])).\qedhere
\end{gather*}
\end{pf}
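To illustrate, let $\lie g=\lie{sl}_2$. Since $V(\lambda)$ occurs in $\lie g\otimes V(\lambda)$ with multiplicity one if $\lambda\ne 0$ and multiplicity zero if $\lambda=0$, we conclude that $\dim\Ext^1_{\cal G}(V(\lambda,r),V(\lambda,r+1))=1$ for all $\lambda\ne 0$ and $r\in\bz_+$, while $\Ext^1_{\cal G}(V(0,r),V(0,r+1))=0$.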
\subsection{}\label{CAT175}
Given $\Gamma\subset\Lambda$, set
\begin{gather*}{V_\Gamma}^+= \{v\in
V[s]_\mu: \lie n^+ v=0,\,\, (\mu,s)\in \Gamma\},\\
V_\Gamma=\bu(\lie g)V_\Gamma{}^+,\quad V^\Gamma=V/V_{\Lambda\setminus\Gamma}.
\end{gather*}
Furthermore, given $f\in\Hom_{\cal G}(V,W)$, let~$f_\Gamma:=f|_{V_\Gamma}$
and let $f^\Gamma$ be the induced map $V^\Gamma\to W^\Gamma$. It follows from the definitions that $f_\Gamma$ and
$f^\Gamma$ are morphisms of graded vector spaces and $\lie g$-modules.
\begin{prop} Let $V\in\Ob{\wh{\cal G}}$ and let $\Gamma$ be an interval closed subset of $\Lambda$.
\begin{enumerate}[{\rm(i)}]
\item\label{CAT175.i} Suppose that for each
$(\lambda,r)\in\Lambda(V)\setminus\Gamma$ there exists
$(\mu,s)\in\Gamma$ with $(\lambda,r)\prec(\mu,s)$. Then $
V_\Gamma\in\Ob\wh{\cal G}[\Gamma]$ and $V/V_\Gamma\in\Ob\wh{\cal
G}[\Lambda\setminus\Gamma]$.
\item\label{CAT175.ii}Suppose that for each
$(\lambda,r)\in\Lambda(V)\setminus\Gamma$ there exists $(\mu,s)\in
\Gamma$ with $(\mu,s)\prec(\lambda,r)$. Then $
V_{\Lambda\setminus\Gamma}\in\Ob\wh{\cal
G}[\Lambda\setminus\Gamma]$ and $ V^\Gamma\in\Ob\wh{\cal
G}[\Gamma]$.
\item\label{CAT175.iii} Let
$$
0\longrightarrow V\stackrel{f}{\longrightarrow} U\stackrel{g}\longrightarrow W\longrightarrow 0
$$
be a short exact sequence in~$\cal G$. Then~$U$ satisfies \eqref{CAT175.i} {\em(}respectively, \eqref{CAT175.ii}{\em)} if and only if~$V$
and $W$ satisfy \eqref{CAT175.i} {\em(}respectively, \eqref{CAT175.ii}{\em)} and
\begin{equation}\label{CAT175.10a}
0\longrightarrow V_\Gamma\stackrel{f_\Gamma}{\longrightarrow} U_\Gamma\stackrel{ g_\Gamma}{\longrightarrow}
W_\Gamma\longrightarrow 0
\end{equation}
{\em(}respectively, \begin{equation}\label{CAT175.10b} 0\longrightarrow V^\Gamma\stackrel{f^\Gamma}{\longrightarrow} U^\Gamma\stackrel{
g^\Gamma}{\longrightarrow} W^\Gamma\longrightarrow 0)
\end{equation} is an exact sequence of objects in $\cal G[\Gamma]$.
\end{enumerate}
\end{prop}
\begin{pf} Consider the map $\lie g\otimes V_\Gamma\to V$ given by
$x\otimes v\mapsto (xt)v$. This is clearly a map of $\lie g$-modules.
Let $U$ be a $\lie g$-module complement to $V_\Gamma$ in $V$.
Suppose that $V_\Gamma$ is not a subobject of $V$ in $\wh{\cal G}$, i.e
there exists $x\in\lie g$ and $v\in V_\Gamma$ such that
$(xt)v\notin V_\Gamma$. Since $(\lie g\otimes t)\bu(\lie
g)\subset\bu(\lie g)(\lie g\otimes t)$, we may
assume without loss of generality that $v\in V_\Gamma^+\cap
V[r]_\lambda$ for some $(\lambda,r)\in\Lambda$ and hence $\bu(\lie
g)v\cong_{\lie g} V(\lambda,r)$. In other words, the induced map of
$\lie g$-modules $\lie g\otimes V(\lambda,r) \to U$ is
non-zero and so there exists $(\nu,r+1)\notin\Gamma$ such that the
composite map $\lie g\otimes V(\lambda,r) \to U\to
V(\nu,r+1)$ is non-zero.
This implies immediately that
$(\lambda,r)\prec(\nu,r+1)$. Choose $(\mu,s)\in\Gamma$ such that
$(\nu,r+1)\prec (\mu,s)$. This gives~$(\nu,r+1)\in[(\lambda,r),(\mu,s)]$
which is impossible since $\Gamma$ is
interval closed. Hence
$$(xt)v\in V_\Gamma,\qquad\forall \, x\in\lie g,\, v\in V_\Gamma^+,
$$ and \eqref{CAT175.i} is proved.
In order to prove the second part, assume that~$V_{\Lambda\setminus\Gamma}$ is not a $\wh{\cal G}$-subobject of~$V$.
Then~$(xt)v\notin V_{\Lambda\setminus\Gamma}$ for some~$x\in\lie g$, $v\in V_{\Lambda\setminus\Gamma}$ and as before we may
assume, without loss of generality that~$v\in V_{\Lambda\setminus\Gamma}{}^+\cap V[r]_\lambda$ for some~$(\lambda,r)\in\Lambda\setminus
\Gamma$. Let~$U'$ be a $\lie g$-module complement of~$V_{\Lambda\setminus\Gamma}$. Then we have a non-zero $\lie g$-module
map $\lie g\otimes V(\lambda,r)\to U'$, given by extending $x\otimes v\mapsto (xt)v$, and hence a non-zero $\lie g$-module map
$\lie g\otimes V(\lambda,r)\to V(\nu,r+1)$ for some~$(\nu,r+1)\in \Gamma$. By assumption, there exists~$(\mu,s)\in\Gamma$ such
that $(\mu,s)\prec (\lambda,r)$. Thus, $(\lambda,r)\in [(\mu,s),(\nu,r+1)]\subset\Gamma$ since $\Gamma$ is interval closed,
which is a contradiction.
The first statement of~\eqref{CAT175.iii} is obvious since $\Lambda(U)=\Lambda(V)\cup\Lambda(W)$. For the second, note that we have
$$U\cong V\oplus W,$$ as graded $\lie g$-modules and hence as $\lie g$-module we have $$U_\Gamma\cong V_\Gamma\oplus W_\Gamma.$$ Since
$V_\Gamma$, $W_\Gamma$ and~$U_\Gamma$ are objects in $\cal G[\Gamma]$ the result follows.
\end{pf}
\begin{rem} Part \eqref{CAT175.i} of the preceding proposition holds
for all $V\in\Ob\cal G$ and $\Gamma\subset\Lambda$ such that
$\Lambda(\soc(V))\subset\Gamma$ while part \eqref{CAT175.ii} holds if
$\Lambda(\head(V))\subset\Gamma$. In fact, for any
$(\lambda,r)\in\Lambda(V)$, there exist
$(\mu,s)\in\Lambda(\soc(V))$ and $(\nu,p)\in\Lambda(\head(V))$ such
that
$$(\nu,p)\preccurlyeq(\lambda,r)\preccurlyeq(\mu,s).$$ This is an immediate
consequence of the fact that $V$ embeds in the injective envelope of
$\soc(V)$ and is a quotient of $P(\head(V))$ together with
Proposition~\ref{CAT130}\eqref{CAT130.iii} and Proposition \ref{injG}\eqref{injG.ii}.
Moreover, in this case if we let $\overline{V_\Gamma}$ be the
maximal subobject of $V$ that is in $\cal G[\Gamma]$, then the
Proposition implies that $$\overline{V_\Gamma}\cong V_\Gamma$$ if for
each $(\lambda,r)\in\Lambda(V)\setminus\Gamma$ there exists
$(\mu,s)\in\Gamma$ with $(\lambda,r)\preccurlyeq(\mu,s)$ and similarly
for $\overline V^\Gamma$.
\end{rem}
\subsection{}\label{projinjgamma} We isolate some consequences of the preceding Proposition
since we use them repeatedly in the following sections.
\begin{prop} Let $\Gamma$ be finite and interval closed and assume that
$(\lambda,r), (\mu,s)\in\Gamma$.
\begin{enumerate}[{\rm(i)}]
\item\label{projinjgamma.i} The object
$I(\lambda,r)_\Gamma$ is the injective envelope of $V(\lambda,r)$ in $\cal G[\Gamma]$ while $P(\lambda,r)^\Gamma$ is the projective cover
of~$V(\lambda,r)$ in~$\cal G[\Gamma]$. In particular, $\cal G[\Gamma]$ has enough projectives.
\item\label{projinjgamma.ii}
We have
$$
[P(\lambda,r)^\Gamma:V(\mu,s)]=[P(\lambda,r):V(\mu,s)]=[I(\mu,s):V(\lambda,r)]=[I(\mu,s)_\Gamma:V(\lambda,r)].
$$
\item\label{projinjgamma.iii} For all $j\ge 0$, we have $$\Ext^j_{\cal G}(V(\lambda,r),
V(\mu,s))\cong\Ext^j_{\cal G[\Gamma]}(V(\lambda,r), V(\mu,s)).$$
\item\label{projinjgamma.iv}
Let $\boldsymbol p^\Gamma(\lambda,r):P(\lambda,r)\twoheadrightarrow
P(\lambda,r)^\Gamma$ {\em(}respectively, $\boldsymbol
\iota_\Gamma(\lambda,r):I(\lambda,r)_\Gamma \hookrightarrow
I(\lambda,r)${\em )} be the canonical projection {\em(}respectively,
the canonical embedding{\em)}. There exists an isomorphism
$\Hom_{\wh{\cal G}}(P(\lambda,r), P(\mu,s))\to\Hom_{\cal
G[\Gamma]}(P(\lambda,r)^\Gamma, P(\mu,s)^\Gamma)$ given by $f\to
f^\Gamma$ such that
$$\boldsymbol p^\Gamma(\mu,s)\circ f=f^\Gamma\circ\boldsymbol p^\Gamma(\lambda,r),$$ and
similarly an isomorphism $\Hom_{\cal G}(I(\mu,s),
I(\lambda,r))\to\Hom_{\cal G[\Gamma]}(I(\mu,s)_\Gamma,
I(\lambda,r)_\Gamma)$ given by $g\to g_\Gamma$ such that
\begin{equation*}
g\circ\boldsymbol \iota_\Gamma(\mu,s)=\boldsymbol \iota_\Gamma(\lambda,r)\circ g_\Gamma.\tag*{\qed}
\end{equation*}
\end{enumerate}
\end{prop}
\begin{pf}
It follows from~\propref{injG}\eqref{injG.iii} that
$$
(\nu,k)\prec (\lambda,r)\qquad \forall\,(\nu,k)\in\Lambda(I(\lambda,r))\backslash \{(\lambda,r)\}.
$$
\propref{CAT175}\eqref{CAT175.i} now gives,
$$I(\lambda,r)_\Gamma\in\Ob\cal G[\Gamma],\qquad
\soc(I(\lambda,r)_\Gamma)=V(\lambda,r)$$
and
$$\Hom_{\cal G}(V(\mu,s),I(\lambda,r)/I(\lambda,r)_\Gamma)=0,\qquad \forall\,
(\mu,s)\in\Gamma.
$$
It follows immediately that~$\Ext^1_{\cal G[\Gamma]}(V(\mu,s),I(\lambda,r)_\Gamma)=0$ for all
$(\mu,s)\in\Gamma$ which implies the first statement
in~\eqref{projinjgamma.i}. The proof of the second statement is
similar.
The first and the last equality in~\eqref{projinjgamma.ii}
follow immediately from~\propref{CAT175} while the second equality
is an obvious consequence
of~\propref{CAT130}(\ref{CAT130.ii},\ref{CAT130.iv}) and
\propref{injG}.
Part~\eqref{projinjgamma.iii} is obvious if~$j=0$.
Set~$Q_{-1}(\mu,s)=V(\mu,s)$. For~$j\ge 0$ define inductively the
objects~$I_j(\mu,s)$ as the injective envelope of~$Q_{j-1}(\mu,s)$
in~$\cal G$ and~$Q_j(\mu,s)=\operatorname{coker}(Q_{j-1}(\mu,s)\hookrightarrow
I_j(\mu,s))$. Then
$$
0\to V(\mu,s)\to I_0(\mu,s)\to I_1(\mu,s)\to \cdots \to I_k(\mu,s)\to 0
$$
is an injective resolution for~$V(\mu,s)$ in~$\cal G$ and
\begin{equation}\label{projinjgamma.10}
\Ext^j_{\cal G}(V(\lambda,r),V(\mu,s))
\cong \Hom_{\cal G}(V(\lambda,r),I_j(\mu,s)),\qquad j>0.
\end{equation}
It follows from~\corref{injG.cor}\eqref{injG.cor.ii} by a straightforward induction on~$j$ that
$$(\nu,k)\in\Lambda(I_j(\mu,s))\cup \Lambda(Q_j(\mu,s)),\quad (\nu,k)\not=(\mu,s)\implies
(\nu,k)\prec (\mu,s).
$$
Hence by \propref{CAT175},
$$Q_j(\mu,s)_\Gamma, I_j(\mu,s)_\Gamma\in\Ob\cal G[\Gamma], \qquad
I_j(\mu,s)/I_j(\mu,s)_\Gamma\in\cal G[\Lambda\setminus\Gamma]$$
and the sequence
\begin{equation}\label{projinjgamma.15}
0\to V(\mu,s)\to I_0(\mu,s)_\Gamma\to I_1(\mu,s)_\Gamma\to \cdots\to I_k(\mu,s)_\Gamma\to 0
\end{equation}
is exact in~$\cal G[\Gamma]$. It follows from part~\eqref{projinjgamma.i} that~\eqref{projinjgamma.15} is
an injective resolution of~$V(\mu,s)$ in~$\cal G[\Gamma]$.
Furthermore, for all~$(\lambda,r)\in\Gamma$
\begin{equation}\label{projinjgamma.20}
\Hom_{\cal G}(V(\lambda,r),I_j(\mu,s))\cong \Hom_{\cal G}(V(\lambda,r),I_j(\mu,s)_\Gamma)
\end{equation}
and similarly
$$
\Hom_{\cal G}(V(\lambda,r),Q_j(\mu,s))\cong \Hom_{\cal G}(V(\lambda,r),Q_j(\mu,s)_\Gamma).
$$
In particular, this implies that
$$
\soc(I_j(\mu,s)_\Gamma)\cong\soc(Q_{j-1}(\mu,s)_\Gamma)
$$
and so~$I_j(\mu,s)_\Gamma$ is the injective envelope
of~$Q_{j-1}(\mu,s)_\Gamma$ in~$\cal G[\Gamma]$.
By~\propref{CAT175}\eqref{CAT175.iii},
$I_j(\mu,s)_\Gamma/Q_{j-1}(\mu,s)_\Gamma\cong Q_j(\mu,s)_\Gamma$.
Then
$$\Ext^{j}_{\cal G[\Gamma]}(V(\lambda,r),V(\mu,s))
\cong \Hom_{\cal G[\Gamma]}(V(\lambda,r),I_j(\mu,s)_\Gamma)
$$
and~\eqref{projinjgamma.iii} follows from~\eqref{projinjgamma.10}
and~\eqref{projinjgamma.20}.
To prove \eqref{projinjgamma.iv},
let $f\in\Hom_{\wh{\cal G}}(P(\lambda,r), P(\mu,s))$, $f\not=0$.
Then $f^\Gamma\ne 0$ since $f^\Gamma(1\otimes
V(\lambda,r))=f(1\otimes V(\lambda,r))\pmod{P(\mu,s)_{\Lambda\setminus\Gamma}}\ne 0$. Thus, we have an injective
map
$$
\Hom_{\wh{\cal G}}(P(\lambda,r),P(\mu,s))\to \Hom_{\cal G[\Gamma]} (P(\lambda,r)^\Gamma,P(\mu,s)^\Gamma).
$$
Since both spaces have the same dimension by~\eqref{projinjgamma.ii} and~\propref{CAT150}, the isomorphism follows.
To prove the statement for injectives, observe that the natural map
$$\Hom_{\cal G}(I(\mu,s),I(\lambda,r))\to
\Hom_{\cal G}(I(\mu,s)_\Gamma,I(\lambda,r))$$
is surjective, while~$\Hom_{\cal G}(I(\mu,s)_\Gamma,I(\lambda,r)_\Gamma)
\cong \Hom_{\cal G}(I(\mu,s)_\Gamma,I(\lambda,r))$. The assertion follows from~\eqref{projinjgamma.ii}.
\end{pf}
\section{Algebras associated with the category~$\cal G$}\label{TMPALG}
In this section, we let $\Gamma$ be a finite interval closed
subset of~$\Lambda$. Given a finite dimensional algebra $A$, let
$A-\operatorname{mod}_f$ (respectively, $\operatorname{mod}_f-A$)
be the category of finite dimensional left (respectively, right)
$A$-modules.
\subsection{The algebra $\mathfrak A(\Gamma)$ and an equivalence of categories}\label{TMPALG10}
Set
$$
I(\Gamma)=\bigoplus_{(\lambda,r)\in\Gamma} I(\lambda,r),\qquad
\mathfrak A(\Gamma)=\End_{\cal G} I(\Gamma).
$$
Then $\mathfrak A(\Gamma)$ is an associative algebra. Moreover,
it is immediate from~\propref{projinjgamma} that
\begin{equation}\label{agamma}\mathfrak A(\Gamma)\cong \mathfrak A_\Gamma(\Gamma):=\End_{\cal G} I(\Gamma)_\Gamma.
\end{equation}
In particular, $\mathfrak A(\Gamma)-\operatorname{mod}_f$ is equivalent to $\mathfrak A_\Gamma(\Gamma)-\operatorname{mod}_f$
and similarly for the categories of right modules. Since $I(\Gamma)_\Gamma$ is the
injective cogenerator of $\cal G[\Gamma]$, a standard argument now
shows that the contravariant functor $\Hom_{\cal G}(-,I(\Gamma)_\Gamma)$ from~$\cal
G[\Gamma]$ to the category $\mathfrak A_\Gamma(\Gamma)-\operatorname{mod}_f$ is exact and provides a duality of categories.
Similarly, the functor $\Hom_{\cal G}(-,I(\Gamma)_\Gamma)^*$ from $\cal G[\Gamma]$ to the
category $\operatorname{mod}_f-\mathfrak A_\Gamma(\Gamma)$ is exact and provides an equivalence of categories.
Thus, $\cal G[\Gamma]$ is equivalent to~$\operatorname{mod}_f-\mathfrak A(\Gamma)$ and is dual to
$\mathfrak A(\Gamma)-\operatorname{mod}_f$.
It is clear from the definition that the simple objects in~$\mathfrak
A(\Gamma)-\operatorname{mod}_f$ are one-dimensional, that is to say
$\mathfrak A(\Gamma)$ is basic, and their isomorphism classes are
parametrized by elements of~$\Gamma$. Given~$(\lambda,r)\in\Gamma$,
let~$S_{\lambda,r}$ be the corresponding simple left $\mathfrak
A(\Gamma)$-module.
\begin{prop}\label{extalg}Let $\Gamma\subset\Lambda$ be finite and interval closed.
For all~$(\lambda,r),(\mu,s)\in\Gamma$, we have
\begin{equation}\label{sim}
\dim \Ext^1_{\mathfrak
A(\Gamma)}(S_{\lambda,r},S_{\mu,s})=\delta_{r,s+1}\dim \Hom_{\lie
g}(V(\mu),\lie g\otimes V(\lambda)).
\end{equation}
In particular,
the algebra $\mathfrak A(\Gamma)$ is quasi-hereditary.
\end{prop}
\begin{pf} Since $\Gamma$ is interval closed, it follows from \propref{projinjgamma}
that if $(\mu,s), (\lambda,r)\in\Gamma$, then $$\Hom_{\cal G[\Gamma]}(V(\mu,s),
I(\lambda,r)_\Gamma/V(\lambda,r))\cong\Hom_{\cal G}(V(\mu,s),
I(\lambda,r)/V(\lambda,r)).$$ Hence, we have,
$$\Ext^1_{\mathfrak
A(\Gamma)}(S_{\lambda,r},S_{\mu,s})\cong\Ext^1_{\cal
G[\Gamma]}(V(\mu,s), V(\lambda,r))\cong\Ext^1_{\cal
G}(V(\mu,s),V(\lambda,r)).
$$
Equation~\eqref{sim} follows from \propref{CAT170} which also proves that $\mathfrak
A(\Gamma)-\operatorname{mod}_f$ is a directed highest weight
category with poset of weights $(\Gamma,\preccurlyeq^{op})$.
Now \cite[Theorem~3.6]{CPS} implies that the algebra
$\mathfrak A(\Gamma)$ is quasi-hereditary.
\end{pf}
\subsection{}\label{TMPALG20}
Let~$Q(\Gamma)$ be the $\Ext$-quiver of~$\mathfrak A(\Gamma)$,
that is, the quiver whose set of vertices is~$\Gamma$ and the number
of arrows from~$(\lambda,r)$ to~$(\mu,s)$ in $Q(\Gamma)$
is~$\dim\Ext^1_{\mathfrak A(\Gamma)}(S_{\lambda,r},S_{\mu,s})$. Note
that the number of paths from $(\lambda,r)$ to $(\mu,s)$ is
non-zero only if $(\mu,s)\prec(\lambda,r)$. In particular,
$Q(\Gamma)$ has no oriented loops. Let $\bc Q(\Gamma)$ be the path
algebra of $Q(\Gamma)$ and $\bc Q(\Gamma)[k]$ be the subspace
spanned by all paths of length $k$. Then
$$\bc Q(\Gamma)=\bigoplus_{k\in\bz_+}\bc Q(\Gamma)[k],$$
is a tightly graded associative algebra. Since~$\mathfrak A(\Gamma)$
is basic, a classical result of Gabriel's (cf. for
example~\cite[2.1(2)]{RinBook}) proves that $\mathfrak
A(\Gamma)$ is isomorphic to a quotient of the path algebra~$\bc
Q(\Gamma)$ of~$Q(\Gamma)$ by an ideal~$R(\Gamma)$ which is contained
in the ideal of paths of length at least two. In particular this
means that an arrow between $(\lambda,r)$ and $(\mu,r-1)$ maps to a
non-zero element of $\Hom_{\cal G}(I(\lambda,r), I(\mu,r-1))$.
Given~$(\lambda,r)\in\Gamma$, let~$1_{\lambda,r}$ be the
corresponding primitive idempotent in~$\bc Q(\Gamma)$.
Note that $1_{\lambda,r}$ maps to the element
$\operatorname{id}_{\lambda,r}\in\End_{\cal G} I(\Gamma)$ defined by
$\operatorname{id}_{\lambda,r}(I(\mu,s))=\delta_{(\lambda,r), (\mu,s)}\operatorname{id}$ for
$(\mu,s)\in\Gamma$. In particular, $\operatorname{id}_{\mu,s} \mathfrak A(\Gamma)
\operatorname{id}_{\lambda,r}\cong \Hom_{\cal G}(I(\lambda,r),I(\mu,s))$ as a
vector space.
\subsection{A grading on $\mathfrak A(\Gamma)$}\label{TMPALG30}
Given $k\le r\in\bz_+$ define
\begin{gather*}
\mathfrak A(\Gamma)[k]=\bigoplus_{(\lambda,r),(\mu,r-k)\in\Gamma} \Hom_{\cal
G}(I(\lambda,r),I(\mu,r-k)).
\end{gather*}
Since $[I(\lambda,r): V(\mu,s)]=0$ unless $(\mu,s)\preccurlyeq(\lambda,r)$ (cf.~\propref{injG}\eqref{injG.iii}), it follows immediately that
$$
\mathfrak A(\Gamma)=\bigoplus_{k\in\bz_+} \mathfrak A(\Gamma)[k],\qquad \mathfrak A(\Gamma)[j] \mathfrak A(\Gamma)[k]\subset \mathfrak
A(\Gamma)[j+k],\quad\forall\, j,k\in\bz_+.
$$
Thus, $\mathfrak A(\Gamma)$ is a graded associative algebra and
$\mathfrak A(\Gamma)[0]$ is a commutative semi-simple subalgebra
of~$\mathfrak A(\Gamma)$. It is trivial to observe that with this
grading the algebra $\mathfrak A(\Gamma)$ is in fact a graded
quotient of $\bc Q(\Gamma)$ and hence the ideal~$R(\Gamma)$ is
graded. In particular, $\mathfrak A(\Gamma)$ is tightly graded.
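One checks, using \eqref{CAT150.10} and the duality of \propref{CAT90}, that $\Hom_{\cal G}(I(\lambda,r),I(\mu,r))\cong\Hom_{\lie g}(V(\lambda),V(\mu))$ for $(\lambda,r),(\mu,r)\in\Gamma$; hence $\mathfrak A(\Gamma)[0]\cong\bc^{|\Gamma|}$, spanned by the idempotents $\operatorname{id}_{\lambda,r}$, $(\lambda,r)\in\Gamma$.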
\subsection{The dimension of $R(\Gamma)$}\label{TMPALG40}
\begin{prop} Let $\Gamma$ be interval closed and finite and $(\lambda,r),(\mu,s)\in\Gamma$. Then the number of paths from $(\lambda,r)$ to $(\mu,s)$ is
$\dim\Hom_{\lie g}(V(\mu),\lie g^{\otimes (r-s)}\otimes V(\lambda))$.\end{prop}
\begin{pf}
Let $N((\lambda,r),(\mu,r-s))$ be the number of paths in $\bc
Q(\Gamma)$ from $(\lambda,r)$ to $(\mu,r-s)$ and
set~$N'(\lambda,\mu,s)=\dim\Hom_{\lie g}(V(\mu),\lie g^{\otimes
s}\otimes V(\lambda))$. It is easy to see that
$$
N'(\lambda,\mu,0)=\delta_{\lambda,\mu}=N((\lambda,r),(\mu,r)),
$$
while
$$
N'(\lambda,\mu,1)=\dim\Hom_{\lie g}(V(\mu),\lie g\otimes
V(\lambda))= \dim\Ext^1_{\mathfrak A(\Gamma)}
(S_{\lambda,r},S_{\mu,r-1})=N((\lambda,r),(\mu,r-1)),
$$
where we used~\eqref{sim}.
We now prove that $N$ and $N'$ satisfy the same recurrence relation,
which establishes the proposition. It is clear that
$$
N((\lambda,r),(\mu,r-s))=\sum_{\nu\in P^+} N((\nu,r-s+1),(\mu,r-s))
N((\lambda,r),(\nu,r-s+1)).
$$
On the other hand, note that we can write
$$
\lie g^{\otimes (s-1)}\otimes V(\lambda)\cong\bigoplus_{\nu\in P^+}
V(\nu)^{N'(\lambda,\nu,s-1)}.
$$
Tensoring with $\lie g$ gives,
\begin{equation*}
N'(\lambda,\mu,s)=\dim\bigoplus_{\nu\in P^+}\Hom_{\lie g}(V(\mu),
\lie g\otimes V(\nu))^{N'(\lambda,\nu,s-1)}
\\=\sum_{\nu\in P^+} N'(\lambda,\nu,s-1) N'(\nu,\mu,1),
\end{equation*}
and the proof is complete.
\end{pf}
\begin{cor}\label{TMPALG40.cor}
Given $(\mu,r-s),
(\lambda,r)\in\Gamma$, we have
\begin{equation*}
\dim 1_{\mu,r-s}R(\Gamma) 1_{\lambda,r} =\dim\Hom_{\lie g}(V(\mu),\lie g^{\otimes s}\otimes V(\lambda))-\dim\Hom_{\lie g}(V(\mu),S^{(s)}(\lie
g)\otimes V(\lambda)).
\end{equation*}
In particular, the algebra $\mathfrak A(\Gamma)$ is hereditary if and only if
\begin{equation*} \dim\Hom_{\lie g}(V(\mu),\lie g^{\otimes s}\otimes
V(\lambda))=\dim\Hom_{\lie g}(V(\mu),S^{(s)}(\lie g)\otimes V(\lambda))
\end{equation*}
for all $(\mu,r-s),(\lambda,r)\in\Gamma$.
\end{cor}
\begin{pf} Observe that
\begin{equation*}
\begin{split}\dim 1_{\mu,r-s}R(\Gamma)
1_{\lambda,r}&=N((\lambda,r),(\mu,r-s))-\dim \operatorname{id}_{\mu,r-s}\mathfrak
A(\Gamma) \operatorname{id}_{\lambda,r}\\
&=N((\lambda,r),(\mu,r-s))-\dim\Hom_{\cal G}(I(\lambda,r),I(\mu,r-s)).
\end{split}
\end{equation*}
The first assertion is now immediate from the above Proposition and~\propref{injG}\eqref{injG.ii}. For the second,
it is enough to observe that $\mathfrak A(\Gamma)$ is hereditary if and only if $R(\Gamma)=0$ and that
$R(\Gamma)=\bigoplus_{(\lambda,r),(\mu,r-s)\in\Gamma} 1_{\mu,r-s} R(\Gamma) 1_{\lambda,r}$.
\end{pf}
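For example, if $\lie g=\lie{sl}_2$, then $\lie g^{\otimes 2}\cong V(2\theta)\oplus V(\theta)\oplus V(0)\cong S^{(2)}(\lie g)$, so $\mathfrak A(\Gamma)$ is hereditary for any finite interval closed $\Gamma$ satisfying $|r-s|\le 2$ for all $(\lambda,r),(\mu,s)\in\Gamma$. On the other hand, $\dim\lie g^{\otimes 3}=27>22=\dim S^{(3)}(\lie g)$, so the criterion can fail as soon as $\Gamma$ spans four consecutive degrees.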
\subsection{}\label{TMPALG50}
Given $\Gamma\subset \Lambda_{\le r}$, let~$\Gamma^{\#_r}=\{ (-w_\circ \mu,r-s)\,:\, (\mu,s)\in\Gamma\}$.
It is easy to see that $\Gamma^{\#_r}$ is interval closed if and only if $\Gamma$ is interval closed.
\begin{prop}
Suppose that $\Gamma\subset \Lambda_{\le r}$ is finite and interval closed. Then $\mathfrak A(\Gamma^{\#_r})
\cong \mathfrak A(\Gamma)^{op}$.
\end{prop}
\begin{pf}
Let~$(\lambda,r)\in\Gamma$. Then~$V(\lambda,r)^{\#_r}$ is an object in~$\cal G[\Gamma^{\#_r}]$ and it follows
that $(\cal G[\Gamma])^{\#_r}=\cal G[\Gamma^{\#_r}]$. Thus, $\cal G[\Gamma]$ is dual to $\cal G[\Gamma^{\#_r}]$.
It follows from~\ref{TMPALG10} that $\mathfrak A(\Gamma)^{op}$ and $\mathfrak A(\Gamma^{\#_r})$ are Morita
equivalent. Since they are both basic, the assertion follows.
\end{pf}
\section{Examples of $\Gamma$ with $\mathfrak A(\Gamma)$ hereditary}\label{EXH}
Throughout this section we use the notations of~\cite{RinBook} for
the various types of quivers. This should eliminate confusion with
the notation for the types of simple Lie algebras. For instance,
$\mathbb T_{n_1,\dots,n_r}$ denotes a quiver whose underlying graph
is a star with $r$ branches where the $i$th branch contains~$n_i$
vertices, while $\tilde{\mathbb X}_k$ denotes the quiver whose
underlying graph is of affine Dynkin type $X_k^{(1)}$. Notice that if~$\Gamma\subset\Lambda_{\le r}$, then~\propref{TMPALG50} implies
that $Q(\Gamma^{\#_r})$ is the opposite quiver of~$Q(\Gamma)$.
\subsection{The generalized Kronecker quivers} Let~$\lambda\in P^+$ be non-zero and
let $$k_\lambda=|\{ i\in I\,:\, \lambda(h_i)>0\}|.$$ It is easily
checked, by using Lemma~\ref{CAT10}, that
$$
\dim\Hom_{\lie g}(V(\lambda),\lie g\otimes
V(\lambda))=k_\lambda.$$ For $r\in\bz_+$, set $\Gamma_{\lambda,r}=\{
(\lambda,r),(\lambda,r+1)\}$. It follows that
$Q(\Gamma_{\lambda,r})$ is the quiver with $k_\lambda$ arrows from
$(\lambda,r+1)$ to $(\lambda,r)$. If $k_\lambda=1$, the quiver is of
type $\mathbb A_2$, if $k_\lambda=2$ it is the Kronecker quiver
$\tilde{\mathbb A}_1$, while for $k_\lambda>2$ we get the
generalized Kronecker quiver. Since there are no paths of length two
in these quivers, it follows that $\mathfrak
A(\Gamma_{\lambda,r})\cong \bc Q(\Gamma_{\lambda,r})$.
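For instance, if $\lie g=\lie{sl}_3$ and $\lambda=\theta=\omega_1+\omega_2$, then $\lambda(h_1),\lambda(h_2)>0$, so $k_\theta=2$ and $Q(\Gamma_{\theta,r})$ is the Kronecker quiver $\tilde{\mathbb A}_1$, reflecting the two copies of $\lie g$ inside $\lie g\otimes\lie g$.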
\subsection{Quivers of type $\tilde{\mathbb D}_4$} Suppose that $\lie g$ is not of type~$\lie{sl}_2$.
Let~$I_{\bullet}=\{i\in I\,:\, \theta-\alpha_i\in R^+\}$. Note that $|I_{\bullet}|=1$
if $\lie g$ is not of type~$\lie{sl}_{n+1}$ and let~$i_\bullet$ be the unique element of~$I_{\bullet}$.
If $\lie g\cong\lie{sl}_{n+1}$, $n>1$, then $I_\bullet=\{1,n\}$.
Let~$r\in\bz_+$. If $\lie g$ is not of type~$\lie{sl}_{n+1}$ set
$$
\Gamma=\{(\theta,r),(0,r+1), (2\theta,r+1),(2\theta-\alpha_{i_\bullet},r+1),
(\theta,r+1) \}.
$$
Otherwise, set
$$
\Gamma=\{(\theta,r), (0,r+1), (2\theta,r+1), (2\theta-\alpha_1,r+1),
(2\theta-\alpha_n,r+1)\}.
$$
In the first case, we find that $Q(\Gamma)$ is
$$
\makeatletter
\def\unitlength=0.025pt\dg@YGRID=1\dg@XGRID=1{\unitlength=0.018pt\dg@YGRID=2\dg@XGRID=3}
\def\scriptstyle{\scriptstyle}
\begin{diagram}
\node{}\node{(2\theta-\alpha_{i_\bullet},r+1)}\arrow{s}\\
\node{(0,r+1)}\arrow{e}\node{(\theta,r)}\node{(2\theta,r+1)}\arrow{w} \\
\node{}\node{(\theta,r+1)}\arrow{n}
\end{diagram}
$$
while in the second case $Q(\Gamma)$ is
$$ \makeatletter
\def\unitlength=0.025pt\dg@YGRID=1\dg@XGRID=1{\unitlength=0.018pt\dg@YGRID=2\dg@XGRID=3}
\def\scriptstyle{\scriptstyle}
\begin{diagram}
\node{}\node{(2\theta-\alpha_{1},r+1)}\arrow{s}\\
\node{(0,r+1)}\arrow{e}\node{(\theta,r)}\node{(2\theta,r+1)}\arrow{w} \\
\node{}\node{(2\theta-\alpha_n,r+1)}\arrow{n}
\end{diagram}
$$
The algebra $\mathfrak A(\Gamma)$ is hereditary
since any path in $Q(\Gamma)$ has length at most one.
Note that $\Gamma$ can be shifted by any $\lambda\in P^+$ such that
$\dim\Hom_{\lie g}(V(\lambda+\theta),\lie g\otimes
V(\lambda+\theta))=1$.
\subsection{Quivers of type $\mathbb A_\ell$} If $\lie g$ is not of type $\lie{sl}_2$, choose $i_\bullet\in I_\bullet$.
\begin{prop}\label{aquiver} Fix~$\lambda\in P^+$ with $\lambda(h_{i_\bullet})\ne 0$, $\ell\in\bz_+$. Let $r_j\in\bz_+$, $0\le j\le \ell$ be
such that $|r_k-r_{k+1}|=1$ for $0\le k\le \ell-1$. Let
$\alpha\in\{\theta, \theta-\alpha_{i_\bullet}\}$. The set
$$
\Gamma=\{(\lambda+j\alpha, r_j):0\le j\le \ell
\}
$$
is interval closed, the quiver $Q(\Gamma)$ is of type
$\mathbb A_{\ell+1}$ and the algebra $\mathfrak A(\Gamma)$ is
hereditary.
\end{prop}
\begin{pf} We prove the proposition in the case when $\alpha=
\theta-\alpha_{i_\bullet}$, the proof in the other case being similar
and in fact simpler. Suppose that for some $0\le j,j'\le\ell$ and
$(\mu,s)\in\Lambda$ we have
$$(\lambda+j\alpha,r_j)\prec(\mu,s)\prec
(\lambda+j'\alpha,r_{j'}).$$ Then
$$\mu=\lambda+j'\alpha-\sum_{p=1}^{r_{j'}-s}\beta_p=\lambda+j\alpha+\sum_{q=1}^{s-r_j}\gamma_q
\quad\text{and}\quad r_j<s<r_{j'}$$
for some $\beta_p, \gamma_q\in R\sqcup\{0\}$. Assuming without loss
of generality that $j\le j'$, we find in particular,
$0<r_{j'}-r_j\le j'-j$ and
$$ (j'-j)(\theta-\alpha_{i_\bullet})=\sum_{p=1}^{r_{j'}-s}
\beta_p+\sum_{q=1}^{s-r_j}\gamma_q.
$$ Equating the coefficients of
$\alpha_{i}$, $i\not=i_\bullet$ on both sides of the above
expression we conclude that $$\beta_p,\gamma_q\in
\{\theta,\theta-\alpha_{i_\bullet}\}, \ \ \ r_{j'}-r_j=j'-j,$$ for
$1\le p\le r_{j'}-s$ and $1\le q\le s-r_j$, which gives
$$r_p=r_j+(p-j),\ \ j\le p\le j'.$$
Next, equating the
coefficient of~$\alpha_{i_\bullet}$ on both sides now gives
$\beta_p=\gamma_q=\theta-\alpha_{i_\bullet}$ for all $1\le p\le
r_{j'}-s$, $1\le q\le s-r_j$. This proves that
$$(\mu,s)=(\lambda+s\alpha,s)=(\lambda+s\alpha, r_s),$$ and hence
$(\mu,s)\in\Gamma$.
It follows from~\propref{extalg} that
$$\dim\Ext^1_{\mathfrak A(\Gamma)}(S_{\lambda+j\alpha, r_j}, S_{\lambda+ k\alpha,
r_{k}})=\delta_{r_{j}-r_k,1}\dim\Hom_{\lie
g}(V(\lambda+k\alpha),\lie g\otimes V(\lambda+j\alpha)), $$ and
applying Lemma \ref{CAT10} now gives
$$\dim\Ext^1_{\mathfrak A(\Gamma)}(S_{\lambda+j\alpha, r_j}, S_{\lambda+ k\alpha,
r_{k}})=\delta_{r_{j}-r_k,1}\delta_{|j-k|,1}.$$
This shows that there is
precisely one arrow between $(\lambda+j\alpha,r_j)$ and
$(\lambda+(j\pm 1)\alpha,r_{j\pm1})$ and no other arrow which has
$(\lambda+j\alpha,r_j)$ as its head or tail. Therefore,
$Q(\Gamma)$ is of type~$\mathbb A_{\ell+1}$.
To prove that the algebra $\mathfrak A(\Gamma)$ is hereditary, let
$(\lambda+k\alpha,r_k), (\lambda+k'\alpha, r_{k'})\in \Gamma$. The
number of paths in $Q(\Gamma)$ between these vertices is zero unless
$(\lambda+k\alpha, r_k)$, $(\lambda+k'\alpha, r_{k'})$ are strictly
comparable. Assume without loss of generality that
$(\lambda+k\alpha,r_k)\prec (\lambda+k'\alpha,r_{k'})$ and also that
$k\le k'$. But in this case, we have proved that $r_k=r_{k'}-k'+k$
and that there is exactly one path from $(\lambda+k'\alpha,r_{k'})$
to $(\lambda+k\alpha,r_{k'}-k'+k)$. The result now follows from
\corref{TMPALG40} if we prove that
$$\dim\Hom_{\lie g}(V(\lambda+k\alpha), S^{(k'-k)}(\lie g)\otimes
V(\lambda+k'\alpha))=1.
$$
Since $\Hom_{\lie g}(V((k'-k)\theta),S^{(k'-k)}(\lie g))\ne 0$ it
suffices to prove that $$\dim\Hom_{\lie g}(V(\lambda+k\alpha),
V((k'-k)\theta)\otimes V(\lambda+k'\alpha))=1.
$$
But this again follows from Lemma~\ref{CAT10}.
\end{pf}
\begin{rem} The restriction $\lambda(h_{i_\bullet})\ne 0$ is not necessary if
$\alpha=\theta$.\end{rem}
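For instance, taking $\ell=1$, $r_0=r$ and $r_1=r+1$ gives $\Gamma=\{(\lambda,r),(\lambda+\alpha,r+1)\}$ and recovers a quiver of type $\mathbb A_2$ with a single arrow from $(\lambda+\alpha,r+1)$ to $(\lambda,r)$.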
\subsection{Quivers of
type $\tilde{\mathbb D}_{\ell+1}$, $\ell\ge 4$} The arguments given
in the previous section can be used with obvious modifications to
prove the following.
Let $\alpha=\theta-\alpha_{i_\bullet}$, $\ell\in\bz_+$ with
$\ell\ge 4$ and $\lambda\in P^+$.
Let $r_j\in\bz_+$, $2\le j\le \ell-1$ be
such that $|r_k-r_{k+1}|=1$ for $2\le k\le \ell-2$ and let
$$\Gamma=\Gamma_1\cup\Gamma_2$$ where $\Gamma_1= \{(\lambda+j\theta, r_j):2\le j\le \ell-1
\}$ and $$\Gamma_2=\{(\lambda+\theta, r_2+1),\, (\lambda+\theta+\alpha, r_2+1),\, (\lambda+\ell\theta, r_{\ell-1}-1),\,
( \lambda+(\ell-1)\theta+\alpha, r_{\ell-1}-1)\}.
$$
Then $\Gamma$ is interval closed and the quiver $Q(\Gamma)$ is
of type $\tilde{\mathbb D}_{\ell+1}$, where the set $\Gamma_2$ consists of precisely
those vertices which are either the head or the tail of precisely
one arrow. Moreover, the algebra $\mathfrak A(\Gamma)$ is
hereditary.
\subsection{Star-shaped quivers} Suppose that $\lie g$ is not of type~$C_n$.
Fix~$\lambda\in P^+$. For $1\le p\le 3$ and $0\le j\le \ell_p$,
let~$r_{p,j}\in\bz_+$ be such that $|r_{p,j}-r_{p,j+1}|=1$. Let
\begin{align*}
\Gamma_1&=\{ (\lambda+(\ell_1-j)\theta,r_{1,j}): 0\le j\le \ell_1\},\\
\Gamma_2&=\{ (\lambda+(\ell_1+j)\theta,r_{2,j}): 0\le j\le \ell_2\},\\
\Gamma_3&=\{
(\lambda+(\ell_1+j)\theta-j\alpha_{i_\bullet},r_{3,j}):0\le j\le
\ell_3-1\}.
\end{align*}
Set~$\Gamma=\Gamma_1\cup\Gamma_2\cup\Gamma_3$.
\begin{prop}
Suppose that for all $1\le p\le 3$ and~$0\le j_p<\ell_p$ we have
$$|r_{3,j_3}-r_{2,j_2}|\le |j_3-j_2|,\ \ |r_{3,j_3}-r_{1,j_1}|\le
|j_3-j_1|.$$ Then~$\Gamma$ is interval closed, $Q(\Gamma)$ is of
type~$\mathbb T_{\ell_1,\ell_2,\ell_3}$ and~$\mathfrak A(\Gamma)$ is
hereditary.
\end{prop}
\begin{pf}
Note that the conditions imply that
$$r_{1,j}=r_{2,j}=r_{3,j},\quad 0\le j\le \ell_3,\qquad
\Gamma_1\cap\Gamma_2\cap\Gamma_3=\{(\lambda+\ell_1\theta,r_{1,0})\}.$$
Using \propref{aquiver} we see that $\Gamma_1\cup\Gamma_2$ and
$\Gamma_3$ are interval closed and that $\frak
A(\Gamma_1\cup\Gamma_2)$ and $\frak A(\Gamma_3)$ are hereditary of
type $\mathbb A_{\ell_1+\ell_2-1}$ and $\mathbb A_{\ell_3}$.
The
proposition follows at once if we prove that an element in
~$\Gamma_3$ is not comparable in the partial order $\prec$ to any
element in~$(\Gamma_1\cup\Gamma_2)$ except possibly to
$(\lambda+\ell_1\theta,r_{1,0})$.
Suppose first that
$(\lambda+(\ell_1+j_3)\theta-j_3\alpha_{i_\bullet}, r_{3,j_3})$ is
strictly comparable with $(\lambda+(\ell_1+j_2)\theta,r_{2,j_2})$.
Then~$0<|r_{2,j_2}-r_{3,j_3}|\le
|j_3-j_2|$ and
\begin{equation}\label{TMPPP.20}
(j_2-j_3)\theta+j_3\alpha_{i_\bullet}=\sum_{p=1}^{|r_{2,j_2}-r_{3,j_3}|}
\beta_p,\qquad \beta_p\in R\sqcup\{0\}.
\end{equation}
If~$j_3>j_2$, then we see by comparing the coefficients of
$\alpha_i$ with $i\ne i_\bullet$ on both sides, that $\beta_p\in
\{-\theta,-\theta+\alpha_{i_\bullet}\}$ and that
$|r_{2,j_2}-r_{3,j_3}|=j_3-j_2$. Suppose that $-\theta$ occurs $s$
times in the set $\{\beta_p:1\le p\le j_3-j_2\}$. Then $0\le s\le
j_3-j_2$ and $-\theta+\alpha_{i_\bullet}$ occurs $j_3-j_2-s$ times.
It follows that $-(j_3-j_2)\theta+j_3\alpha_{i_\bullet}=\sum_p
\beta_p = -s\theta+(j_3-j_2-s)(-\theta+\alpha_{i_\bullet})
=-(j_3-j_2)\theta+(j_3-j_2-s)\alpha_{i_\bullet}$, which implies
$j_2=s=0$.
Furthermore, suppose that~$j_2>j_3$. By comparing the coefficients
of $\alpha_i$ with $i\ne i_\bullet$ in both sides
of~\eqref{TMPPP.20}, we conclude
that~$\beta_p\in\{\theta,\alpha_{i_\bullet}\}$ and
$|r_{2,j_2}-r_{3,j_3}|=j_2-j_3$. Suppose that $\alpha_{i_\bullet}$
occurs $s'$ times. Then we must have $s'=j_3$ and
$j_2-j_3-s'=(j_2-j_3)$ which is only possible if~$s'=j_3=0$.
Suppose now that
$(\lambda+(\ell_1+j_3)\theta-j_3\alpha_{i_\bullet},r_{3,j_3})$ is
strictly comparable with $(\lambda+(\ell_1-j_1)\theta,r_{1,j_1})$. Then
we must have $0\le r=|r_{3,j_3}-r_{1,j_1}|\le |j_3-j_1|$ and
$$
(j_1+j_3)\theta-j_3\alpha_{i_\bullet}=\sum_{p=1}^{r} \gamma_p,\qquad
\gamma_p\in R\sqcup\{0\}.
$$
Comparing the coefficients of~$\alpha_i$, $i\not=i_\bullet$ in both
sides shows that $\gamma_p\in\{\theta,\theta-\alpha_{i_\bullet}\}$.
If $\theta$ appears $s$ times, then we have
$$
r\theta-(r-s)\alpha_{i_\bullet}=(j_1+j_3)\theta-j_3\alpha_{i_\bullet},
$$
which implies $r=j_1+j_3$ and~$r-s=j_3$. Since $r\le |j_1-j_3|$ the
first equality implies that either~$j_1=0$ or~$j_3=0$.
\end{pf}
\section{Quivers with relations}\label{QR}
In this section we give an example of $\Gamma$ for which $\frak
A(\Gamma)$ is not hereditary. The example is motivated by a family
of $\lie g[t]$-modules called the Kirillov-Reshetikhin modules
(cf.~\cite{CMkir1}). We assume in this section that $\lie g$ is of
type $D_n$, $n\ge 6$. Recall that for $i\in I$ with $i\ne n-1,n$ we
have $$\omega_i=\sum_{j=1}^i j\alpha_j+
i\sum_{j=i+1}^{n-2}\alpha_j+\frac i2(\alpha_{n-1}+\alpha_n).$$
Let~$\Gamma$ be the interval $[(2\omega_4,0),(0,4)]$. It is easily
checked that $$\Gamma=\{(2\omega_4,0),(\omega_2+\omega_4,1),
(\omega_4,2),(2\omega_2,2),(\omega_1+\omega_3,2),(\omega_2,3),(0,4)\}.$$
Since $\lie g\cong_{\lie g} V(\omega_2)$, it is now not hard to see
by using \propref{sim} that the quiver $Q(\Gamma)$ is as follows:
$$
\makeatletter
\def\unitlength=0.025pt\dg@YGRID=1\dg@XGRID=1{\unitlength=0.04pt\dg@YGRID=1\dg@XGRID=1}
\def\scriptstyle{\scriptstyle}
\begin{diagram}
\node{}\node{(0,4)}\arrow{s,l}{a}\\
\node{}\node{(\omega_2,3)}\arrow{se,l}{b_3}\arrow{s,l}{b_2}\arrow{sw,l}{b_1}\\
\node{(\omega_4,2)}\arrow{se,r}{c_1}\node{(2\omega_2,2)}\arrow{s,l}{c_2}\node{(\omega_1+\omega_3,2)}\arrow{sw,r}{c_3}\\
\node{}\node{(\omega_2+\omega_4,1)}\arrow{s,l}{d}\\
\node{}\node{(2\omega_4,0)}
\end{diagram}
$$
The path algebra $\bc
Q(\Gamma)$ has a basis consisting of the paths of length at most four
which we list below for the reader's convenience:
\begin{alignat*}{3}&\{1_{\lambda,r}:(\lambda,r)\in\Gamma\},&\qquad
&\{a,b_i,c_i,d: 1\le i\le 3\},&\\
&\{b_ia,dc_i, c_ib_i:1\le i\le 3\},& &\{c_ib_ia, dc_ib_i:1\le i\le
3\},& \qquad \{dc_ib_ia: 1\le i\le 3\}
\end{alignat*}
We now compute $\dim 1_{\mu,s} R(\Gamma)
1_{\lambda,r}$ for $(\mu,s),(\lambda,r)\in\Gamma$ with $r-s\ge 2$.
By \corref{TMPALG40} it suffices to calculate $\dim \Hom_{\cal
G}(I(\lambda,r),I(\mu,s))$.
Using~\propref{projinjgamma}\eqref{projinjgamma.ii} and the graded
characters of injective envelopes of simples in $\cal G[\Gamma]$
listed in Appendix~\ref{app.1} we find that
$$\dim 1_{\mu,s} R(\Gamma) 1_{\lambda,r}= 1,$$ if
$$
((\lambda,r),(\mu,s))\in\{((0,4),(\omega_1+\omega_3,2)),
((\omega_2,3),(\omega_2+\omega_4,1)),((\omega_1+\omega_3,2),(2\omega_4,0))\},$$
and $$\dim 1_{\mu,s} R(\Gamma) 1_{\lambda,r}= 2,$$ if
$$((\lambda,r),(\mu,s))\in\{((0,4),(\omega_2+\omega_4,1)),
((0,4),(2\omega_4,0))\},$$ while $\dim 1_{\mu,s} R(\Gamma)
1_{\lambda,r}= 0$ otherwise. This implies that there exists a unique
(up to multiplication by non-zero constants) choice of complex
numbers $x_i$, $1\le i\le 3$ and $\xi_j$, $\zeta_j$, $\eta_j$,
$j=1,2$ such that $\mathfrak A(\Gamma)$ is the quotient of $\bc
Q(\Gamma)$ by the following relations
\begin{equation}\label{quad}
b_3 a=0=d c_3,\qquad x_1 c_1 b_1+x_2 c_2 b_2+x_3 c_3 b_3 = 0,
\end{equation}
\begin{equation}\label{cub}
c_3 b_3 a =0,\qquad \xi_1 c_1 b_1 a +\xi_2 c_2 b_2 a=0,\qquad
\zeta_1 d c_1 b_1+\zeta_2 d c_2 b_2 =0,
\end{equation}
and \begin{equation}\label{quart}\eta_1 dc_1 b_1a+\eta_2 d c_2 b_2 a=0,\qquad
dc_3 b_3 a=0.
\end{equation}
\begin{prop} The algebra $\frak A(\Gamma)$ is the quotient of
$\bc Q(\Gamma)$ by the relations:
\begin{equation}\label{min} b_3a=0,\qquad dc_3=0,\qquad
c_1b_1+c_2b_2+c_3b_3=0.
\end{equation}
In particular $\frak
A(\Gamma)$ is quadratic, of global dimension $2$ and of tame
representation type.
\end{prop}
\begin{rem} It is not hard to see by using the results of \cite{CMkir1} and the equivalence of categories between $\mathfrak A(\Gamma)$-modules
and $\cal G[\Gamma]$, that the projective cover in $\mathfrak A(\Gamma)-\operatorname{mod}_f$ of $S_{0,4}$
or the injective envelope of~$S_{2\omega_4,0}$ corresponds to the
Kirillov-Reshetikhin module $KR(2\omega_4)$, which is thus injective and projective in~$\cal G[\Gamma]$.
This connection will be
pursued elsewhere.
\end{rem}
\begin{pf} The relations in \eqref{min} are clearly independent of each other. To see that all relations in
$\mathfrak A(\Gamma)$ are consequences of those in~\eqref{min} it is enough to prove that the space spanned by
$c_ib_i$, $c_jb_j$ with $1\le i< j\le 3$ is always of dimension two.
Using the equivalence of categories, \propref{TMPALG50},
\propref{injG} and~\propref{CAT90}\eqref{CAT90.i} this can be
reformulated into the following question on morphisms in $\cal G$.
Thus, for $\mu\in\{2\omega_2, \omega_4, \omega_1+\omega_3\}$ fix
non-zero elements $f_\mu\in\Hom_{\wh{\cal
G}}(P(\omega_2+\omega_4,2), P(\mu,1))$ and $g_\mu\in\Hom_{\wh{\cal
G}}( P(\mu,1),P(\omega_2,0))$. We have to prove that for $\mu\ne\lambda$
in this set the elements
$g_\mu f_\mu$ and $g_\lambda f_\lambda$ are linearly independent in
$\Hom_{\wh{\cal G}}(P(\omega_2+\omega_4,2),P(\omega_2,0))$. In turn,
using~\propref{CAT150} this question translates into the following
question in the category $\cal F(\lie g)$. Let $\bar f_\mu$, $\bar
g_\mu$ be the restrictions of $f_\mu$ and~$g_\mu$ to
$V(\omega_2+\omega_4)$ and $V(\mu)$ respectively and $\bop: T^3(\lie
g)\to S^2(\lie g)\otimes \lie g$ be the canonical projection. The
elements $\bop\circ(1\otimes\bar g_\mu)\circ \bar f_\mu$ and
$\bop\circ(1\otimes\bar g_\lambda)\circ \bar f_\lambda$ are linearly
independent elements of $\Hom_{\lie g}(V(\omega_2+\omega_4),
S^2(\lie g)\otimes \lie g)$. This is done by an explicit computation
of the maps, and the details can be found in the Appendix~\ref{app.2}.
Since $\mathfrak A(\Gamma)$ is quadratic, it follows from~\cite[Theorem~1.1]{Bon}
that $\Ext^2_{\mathfrak A(\Gamma)}(S_{\lambda,r},S_{\mu,s})=0$ unless~$r=s+2$. We have
\begin{multline*}
\dim\Ext^2_{\mathfrak A(\Gamma)}(S_{0,4},S_{\omega_1+\omega_3,2})=
\dim \Ext^2_{\mathfrak A(\Gamma)}(S_{\omega_2,3}, S_{\omega_2+\omega_4,1})\\
=\dim\Ext^2_{\mathfrak
A(\Gamma)}(S_{\omega_1+\omega_3,2},S_{2\omega_4,0})=1
\end{multline*}
and $\Ext^2_{\mathfrak A(\Gamma)}(S_{\lambda,r},S_{\mu,r-2})=0$ in
all other cases. We claim that~$\Ext^3_{\mathfrak A(\Gamma)}(S_{\lambda,r},S_{\mu,s})=0$
for all~$(\lambda,r),(\mu,s)\in\Gamma$. Indeed, by~\cite[Theorem~1.1]{Bon}
\begin{multline*}
\dim\Ext^3_{\mathfrak A(\Gamma)}(S_{\lambda,r},S_{\mu,s})\\= \dim
1_{\mu,s} \left((\bc Q(\Gamma)_+ R(\Gamma)\cap R(\Gamma)\bc
Q(\Gamma)_+)/(R(\Gamma)^2+\bc Q(\Gamma)_+ R(\Gamma)\bc Q(\Gamma)_+)\right)
1_{\lambda,r},
\end{multline*}
where~$\bc Q(\Gamma)_+$ is the radical of~$\bc Q(\Gamma)$.
If $r-s<4$, it is clear that $\dim 1_{\mu,s} (\bc Q(\Gamma)_+
R(\Gamma)\cap R(\Gamma)\bc Q(\Gamma)_+) 1_{\lambda,r}=0$. For
$r-s=4$, we have a unique pair $(\lambda,r),(\mu,r-4)\in\Gamma$,
namely $(0,4)$ and~$(2\omega_4,0)$, and two linearly independent
elements in $1_{2\omega_4,0}(\bc Q(\Gamma)_+ R(\Gamma) \cap R(\Gamma) \bc
Q(\Gamma)_+)1_{0,4}$, namely $d c_3 b_3 a$ and $d c_2 b_2 a+d c_1 b_1 a$.
The first is contained in~$R(\Gamma)^2$, since it can be written as
$(d c_3)(b_3 a)$ and $d c_3, b_3 a\in R(\Gamma)$, while the second
is contained in $\bc Q(\Gamma)_+ R(\Gamma)\bc Q(\Gamma)_+$ since it
can be written as $d(c_1 b_1+c_2 b_2+c_3 b_3)a$. Thus,
$\dim\Ext^3_{\mathfrak A(\Gamma)}(S_{0,4},S_{2\omega_4,0})=0$.
It remains to prove that the algebra is tame.
Let~$\Gamma_0=\Gamma\setminus\{(2\omega_4,0), (0,4)\}$. Note that~$\Gamma_0$
is interval closed and
consider the subalgebra $\mathfrak A(\Gamma_0)$. This
algebra is canonical (cf.~\cite[3.7]{Rin}) and of type~$(2,2,2)$,
hence tame concealed (\cite[4.3(5)]{Rin}). Let $K$ be the subspace
of~$\mathfrak A(\Gamma_0)$ spanned by~$\{b_3, c_3 b_3\}$. Clearly,
$K$ is a $\mathfrak A(\Gamma_0)$-submodule of~$\mathfrak
A(\Gamma_0)1_{\omega_2,3}$. Let $M$ be the quotient of $\mathfrak
A(\Gamma_0)1_{\omega_2,3}$ by $K$. This $\mathfrak
A(\Gamma_0)$-module has dimension vector
$$
\makeatletter
\def\unitlength=0.025pt\dg@YGRID=1\dg@XGRID=1{\unitlength=0.025pt\dg@YGRID=1\dg@XGRID=1}
\def\scriptstyle{\scriptstyle}
\begin{diagram}
\node{}\node{1}\arrow{se,-}{}\arrow{s,-}{}\arrow{sw,-}{}\\
\node{1}\arrow{se,-}{}\node{1}\arrow{s,-}{}\node{0}\arrow{sw,-}{}\\
\node{}\node{1}
\end{diagram}
$$
and hence belongs to the tubular family of type~$(2,2,2)$.
Then it is easy to check that $\mathfrak A(\Gamma)$ is obtained as
one-point extension and one-point coextension of $\mathfrak
A(\Gamma_0)$ at~$M$ and hence is tame (even domestic). We refer the reader to~\cite[4.7]{RinBook}
for details.
\end{pf}
\section{Derivation of the Search Objective}\label{sec:appx_search_objective}
The search objective presented in~\eqref{eq:search_objective_2} is derived in the following way, starting from the $\mathcal{KL}$ divergence between our approximation and the true posterior $\mathcal{KL}(q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w) \mid \mid p(\boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi} \mid \boldsymbol{x}, \boldsymbol{y}))$.
\begin{multline}
\mathcal{KL}(q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w) \mid \mid p(\boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi} \mid \boldsymbol{x}, \boldsymbol{y})) \\
= \int\int q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w) \log\frac{q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}{p(\boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi} \mid \boldsymbol{y}, \boldsymbol{x})} d\boldsymbol{\alpha}d\boldsymbol{w}
\end{multline}
Then we use Bayes' rule to decompose the posterior $p(\boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi} \mid \boldsymbol{x}, \boldsymbol{y}) = \frac{p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})p(\boldsymbol{\alpha})p(\boldsymbol{w})p(\boldsymbol{\Psi})}{p(\boldsymbol{y}\mid\boldsymbol{x})}$. We can separate the joint prior $p(\boldsymbol{\alpha}, \boldsymbol{w},\boldsymbol{\Psi})$ into the individual terms $p(\boldsymbol{\alpha})p(\boldsymbol{w})p(\boldsymbol{\Psi})$ as they are independent of one another in our model.
\begin{multline}\hspace{-0.5cm}
= \int\int q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w) \log\frac{q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)p(\boldsymbol{y}\mid\boldsymbol{x})}{p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})p(\boldsymbol{\alpha})p(\boldsymbol{w})p(\boldsymbol{\Psi})} \\ d\boldsymbol{\alpha}d\boldsymbol{w}
\end{multline}
Since the marginal term $p(\boldsymbol{y}\mid \boldsymbol{x})$ is independent of the parameters, it can be moved outside of the integral, where it is a constant, along with the term $p(\boldsymbol{\Psi})$, for which we assume a uniform prior.
\begin{multline}
= \int\int q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w) \log\frac{q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}{p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})p(\boldsymbol{\alpha})p(\boldsymbol{w})}\\
d\boldsymbol{\alpha}d\boldsymbol{w} +const.
\end{multline}
Next, we separate the terms under the logarithm into two: one involving the log-likelihood with respect to the data $p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})$, and the other consisting of the priors $p(\boldsymbol{\alpha})p(\boldsymbol{w})p(\boldsymbol{\Psi})$ and the approximate posterior $q(\boldsymbol{\alpha}, \boldsymbol{w}\mid\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)$. Then, since not only the priors $p(\boldsymbol{\alpha}),p(\boldsymbol{w})$, but also the approximations $q(\boldsymbol{\alpha}\mid\boldsymbol{\theta}_\alpha),q(\boldsymbol{w}\mid\boldsymbol{\theta}_w)$ are independent, we can split the integral between $q(\boldsymbol{\alpha}\mid\boldsymbol{\theta}_\alpha)$ and $q(\boldsymbol{w}\mid\boldsymbol{\theta}_w)$, which again results in two $\mathcal{KL}$ divergence terms in addition to the log-likelihood.
\begin{multline}
= -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w}\mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] +\\ \qquad \int\int q( \boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha) q(\boldsymbol{w}\mid\boldsymbol{\theta}_w) \log\frac{q( \boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha) q(\boldsymbol{w}\mid\boldsymbol{\theta}_w)}{p(\boldsymbol{\alpha})p(\boldsymbol{w})}\\ d\boldsymbol{\alpha}d\boldsymbol{w} +const.
\end{multline}
\begin{multline}
= -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w}\mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] + \\ \qquad \mathcal{KL}( q(\boldsymbol{w}\mid\boldsymbol{\theta}_w)\mid\mid p(\boldsymbol{w})) + \mathcal{KL}( q(\boldsymbol{\alpha}\mid\boldsymbol{\theta}_\alpha)\mid\mid p(\boldsymbol{\alpha}))\\ +const.
\end{multline}
$\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w}\mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}\mid \boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})]$ represents the log-likelihood with respect to the samples from the approximate posterior $q(\boldsymbol{\alpha},\boldsymbol{w}\mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)$ and the data, which in our case is the standard cross-entropy term. Note that the expectation is thus approximated through Monte Carlo sampling with respect to these variables and also the data $\boldsymbol{x}, \boldsymbol{y}$. The weights $\boldsymbol{w}$ as well as the architecture weights $\boldsymbol{\alpha}$ are independent for each operation $o^{i,j}_{k,c}(.)$ and therefore the $\mathcal{KL}$ divergence can be computed independently for each term, resulting in sums indexed by $i,j,k,c$.
\begin{multline}
= -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}|\boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] \\ + \sum_{i,j,k,c} \mathcal{KL}(q(\boldsymbol{w}_{k,c}^{i,j}\mid \boldsymbol{\theta}_w) || p(\boldsymbol{w}_{k,c}^{i,j}))
\\ + \sum_{i,j} \mathcal{KL}(q(\boldsymbol{\alpha}^{i,j}\mid \boldsymbol{\theta}_\alpha) || p(\boldsymbol{\alpha}^{i,j})) + const.
\end{multline}
Furthermore, we introduce arbitrary constants $\gamma_1$ and $\gamma_2$ to balance the effect of the regulariser terms $\mathcal{KL}(.)$. Note that we compute the $\mathcal{KL}$ divergence with respect to the approximation provided by Molchanov \textit{et al.}~\cite{molchanov2017variational}.
\begin{multline}
= -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}|\boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] \\ + \gamma_1 \sum_{i,j,k,c} \mathcal{KL}(q(\boldsymbol{w}_{k,c}^{i,j}\mid \boldsymbol{\theta}_w) || p(\boldsymbol{w}_{k,c}^{i,j}))
\\ + \gamma_2 \sum_{i,j} \mathcal{KL}(q(\boldsymbol{\alpha}^{i,j}\mid \boldsymbol{\theta}_\alpha) || p(\boldsymbol{\alpha}^{i,j})) + const.
\end{multline}
Lastly, we add the entropy term $\mathcal{H}$ to increase the certainty of the operations' selection. In our case, we want to achieve certainty in the operations' selection across $\boldsymbol{\alpha}^{i,j}$, which is equivalent to minimising their joint entropy across the potential operations $K$ as $\sum_{i,j}\mathcal{H}(\mathbb{E}_{q(\boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha)}[\boldsymbol{z}^{i,j}])$. The $\boldsymbol{z}^{i,j}$ are computed with respect to the samples from $q(\boldsymbol{\alpha} \mid \boldsymbol{\theta}_\alpha)$ in \eqref{eq:darts}. Applying a regularising coefficient $\gamma_3$ to the entropy term gives the final search objective.
\begin{multline}
= -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)}[\log p(\boldsymbol{y}|\boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] \\ + \gamma_1 \sum_{i,j,k,c} \mathcal{KL}(q(\boldsymbol{w}_{k,c}^{i,j}\mid \boldsymbol{\theta}_w) || p(\boldsymbol{w}_{k,c}^{i,j}))
\\ + \gamma_2 \sum_{i,j} \mathcal{KL}(q(\boldsymbol{\alpha}^{i,j}\mid \boldsymbol{\theta}_\alpha) || p(\boldsymbol{\alpha}^{i,j})) \\ + \gamma_3 \sum_{i,j}\mathcal{H}(\mathbb{E}_{q(\boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha)}[\boldsymbol{z}^{i,j}]) + const.
\end{multline}
The same logic, but with fewer terms, can be applied to derive the original ELBO in~\eqref{eq:elbo}.
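For concreteness, the following is a minimal PyTorch-style sketch of how the final objective above can be evaluated. It assumes fully factorised Gaussian approximate posteriors and, purely for illustration, standard-normal priors in the $\mathcal{KL}$ terms (whereas the model itself uses the approximation of Molchanov \textit{et al.}~\cite{molchanov2017variational}); all names are illustrative and not the exact implementation.
\begin{verbatim}
# Illustrative sketch only: factorised Gaussian q's and N(0, 1) priors.
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def search_objective(logits, targets, w_mu, w_logvar,
                     a_mu, a_logvar, gamma1, gamma2, gamma3):
    # Single-sample Monte Carlo estimate of the expected negative
    # log-likelihood; `logits` come from a network whose weights and
    # architecture were sampled from q(w) and q(alpha).
    nll = F.cross_entropy(logits, targets)

    # KL(q(w)||p(w)) and KL(q(alpha)||p(alpha)) summed over all
    # operations (indices i, j, k, c flattened into the tensors).
    prior = Normal(0.0, 1.0)
    kl_w = kl_divergence(Normal(w_mu, (0.5 * w_logvar).exp()), prior).sum()
    kl_a = kl_divergence(Normal(a_mu, (0.5 * a_logvar).exp()), prior).sum()

    # Entropy of the expected operation-selection weights z^{i,j};
    # E_q[z] is approximated here by a softmax over the means of q(alpha).
    z = F.softmax(a_mu, dim=-1)        # shape: (num_edges, K)
    entropy = -(z * (z + 1e-8).log()).sum()

    return nll + gamma1 * kl_w + gamma2 * kl_a + gamma3 * entropy
\end{verbatim}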
Below in Tables~\ref{tab:mnist} and \ref{tab:fashion} we present the comparison of \textit{VINNAS} with respect to other related hand-made and NAS-found architectures for MNIST and FashionMNIST datasets.
\begin{table}[H]
\caption{Comparison of VINNAS for MNIST.}
\label{tab:mnist}
\centering
\scalebox{.9}{
\begin{tabular}{c|c|c|c|c}
\toprule
\begin{tabular}[x]{@{}c@{}}\textbf{Search}\\\textbf{method}\end{tabular} &
\begin{tabular}[x]{@{}c@{}}\textbf{Principal}\\\textbf{algorithm}\end{tabular}&
\begin{tabular}[x]{@{}c@{}}\textbf{Test Accuracy}\\ (\%)\end{tabular}&
\begin{tabular}[x]{@{}c@{}}\textbf{\# Params}\\ (M)\end{tabular}& \begin{tabular}[x]{@{}c@{}}\textbf{Search Cost}\\(GPU days)\end{tabular} \\
\midrule
LeCun \textit{et al.}~\cite{lecun1998gradient} & hand-made & 99.45 & 0.37 & - \\
Jin \textit{et al.}~\cite{jin2019auto} & Bayes. opt. & 99.45 & -& 0.5 \\
Fedorov \textit{et al.}~\cite{fedorov2019sparse} & Bayes. opt. & 99.17 & 0.001& 1 \\
Byla \textit{et al.}~\cite{byla2019deepswarm} & swarm. opt. & \textbf{99.61} & -& 0.33 \\
Gaier \textit{et al.}~\cite{gaier2019weight} & genetic alg. & 91.9 & $\approx$ \textbf{0}& - \\
\midrule
\textit{VINNAS} [Ours] & gradient & 99.57 & 0.01 & \textbf{0.02} \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[H]
\caption{Comparison of VINNAS for FashionMNIST.}
\label{tab:fashion}
\centering
\scalebox{.9}{
\begin{tabular}{c|c|c|c|c}
\toprule
\begin{tabular}[x]{@{}c@{}}\textbf{Search}\\\textbf{method}\end{tabular} &
\begin{tabular}[x]{@{}c@{}}\textbf{Principal}\\\textbf{algorithm}\end{tabular}&
\begin{tabular}[x]{@{}c@{}}\textbf{Test Accuracy}\\ (\%)\end{tabular}&
\begin{tabular}[x]{@{}c@{}}\textbf{\# Params}\\ (M)\end{tabular}& \begin{tabular}[x]{@{}c@{}}\textbf{Search Cost}\\(GPU days)\end{tabular} \\
\midrule
Zhong \textit{et al.}~\cite{zhong2017random} & hand-made & $96.2 \pm 0.05$ & 11 & - \\
Nøkland \& Eidnes~\cite{nokland2019training} & hand-made & 95.47& 7.3& - \\
Jin \textit{et al.}~\cite{jin2019auto} & Bayes. opt. & 92.58 & -& 0.5 \\
Kyriakides \textit{et al.}~\cite{10.1007/978-3-030-49186-4_10} & genetic alg. & 94.46 & 3.1& - \\
Byla \textit{et al.}~\cite{byla2019deepswarm} & swarm. opt. & 93.56 & -& 0.33 \\
Xue \textit{et al.}~\cite{xue2019transferable} & clustering & 93.9 & -& \textbf{0.013 } \\
Noy \textit{et al.}~\cite{noy2020asap} & gradient & 96.27 & -& 0.2 \\
Nayman \textit{et al.}~\cite{nayman2019xnas} & gradient & 96.36 & 3.7& 0.3 \\
Tanveer \textit{et al.}~\cite{tanveer2020fine} & gradient & \textbf{96.91} & 3.2& - \\
\midrule
\textit{VINNAS} [Ours] & gradient & 96.14 & \textbf{1.98} & 0.46 \\
\bottomrule
\end{tabular}}
\end{table}
As can be seen in Tables~\ref{tab:mnist} and~\ref{tab:fashion}, our method is comparable to the state-of-the-art results in terms of accuracy as well as the number of non-zero parameters. \textit{VINNAS} can find architectures with performance comparable to other works for classifying MNIST digits as well as FashionMNIST images. These results demonstrate the versatility of our method, which can be used for finding CNN architectures for simple and more challenging tasks alike.
\section{Conclusion}\label{sec:conclusion}
In summary, our work proposes a combined approach of probabilistic modelling and neural architecture search. Specifically, we give the operations' strengths a probabilistic interpretation by viewing them as learnable random variables. Automatic relevance determination-like prior is imposed on these variables, along with their corresponding operation weights, which incentivises automatic detection of pertinent operations and zeroing-out the others. Additionally, we promote certainty in the operations selection, through a custom loss function which allows us to determine the most relevant operations in the architecture. We demonstrated the effectiveness of \textit{VINNAS} on three different datasets and search spaces.
In future work, we aim to explore a hierarchical Bayesian model for the architecture parameters, which could lead to architectures composed of more diverse cell types, instead of just two. Additionally, all of the evaluated NNs shared the same evaluation hyperparameters, and in the future we want to investigate an approach which can automatically determine suitable hyperparameters for the found architecture.
\paragraph{Expected Calibration Error}
We measure the calibration of the found architectures and their sensitivity through expected calibration error (ECE)~\cite{guo2017calibration}. ECE relates confidence with which a network makes predictions to accuracy, thus it measures whether a network is overconfident in its predictions. To compute ECE, the authors propose to discretize the prediction probability interval into a fixed number of bins, and assign each predicted probability to the bin that encompasses it. The calibration error is the difference between the fraction of predictions in the bin that are correct (accuracy) and the mean of the probabilities in the bin (confidence). ECE computes a weighted average of this error across bins as shown in~\eqref{eq:ece}.
\begin{equation}
ECE = \sum_{b=1}^{B}\frac{n_{b}}{N}\mid \text{accuracy}(b) - \text{confidence}(b) \mid
\label{eq:ece}
\end{equation}
where $n_b$ is the number of predictions in bin $b$, $N$ is the total number of data points, and accuracy($b$) and confidence($b$) are the accuracy and confidence of bin $b$, respectively. We set $B=10$.
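For reference, a small self-contained sketch of this computation, with $B=10$ equal-width bins and hypothetical helper names, could look as follows.
\begin{verbatim}
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    confidences = probs.max(axis=1)      # confidence of each prediction
    predictions = probs.argmax(axis=1)   # predicted class
    accuracies = (predictions == labels).astype(float)
    ece, n = 0.0, len(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = accuracies[in_bin].mean()    # accuracy(b)
            conf = confidences[in_bin].mean()  # confidence(b)
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
\end{verbatim}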
\subsection{Concrete Distribution}\label{sec:background_concrete_distribution}
The concrete categorical or Gumbel-softmax distribution was proposed concurrently by Jang \textit{et al.}~\cite{jang2016categorical} and Maddison \textit{et al.}~\cite{maddison2016concrete}. It is a fully reparametrisable continuous relaxation of the Categorical distribution that lets us backpropagate gradients through an otherwise discrete distribution. Sampling an $N$-dimensional probability vector $\boldsymbol{n} \in \mathbb{R}^N$ can now be expressed as a deterministic function of the parameters, namely the probability vector $\boldsymbol{\mu}$, and an external source of randomness $\eta$, which is sampled from a Gumbel distribution; the density $p(\textbf{n}\mid\boldsymbol{\mu}, \tau)$ is given in~\eqref{eq:concrete_multiple}.
\begin{align}
n_i &= \frac{e^{\frac{\log \mu_i + \eta_i}{\tau}}}
{\sum_j e^{\frac{\log \mu_j + \eta_j}{\tau}}} \nonumber \\
\eta_i &\sim -\log(-\log(\textrm{Uniform}(0,1))) \nonumber \\
p(\textbf{n}|\boldsymbol{\mu}, \tau) &= (N-1)!\tau^{N-1} \frac{\prod_{i=1}^{N}\mu_i n^{-\tau - 1}_i}{\left(\sum_{i=1}^{N}\mu_i n^{-\tau}_i\right)^{N}}
\label{eq:concrete_multiple}
\end{align}
The $\tau \in \mathbb{R}_{>0}$ is a temperature hyperparameter that can be tuned to relax or restrict the distribution's representation towards a continuous or a discrete domain. In the limit $\tau \rightarrow 0$ the samples become one-hot vectors, as for a Categorical distribution, while $\tau \rightarrow \infty$ means that $n_i = n_j$ for all $i,j$. The authors recommend starting with a value of $2/3$ for $\tau$ and annealing it during training of the NN that samples from this distribution.
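As a minimal illustration of this sampling procedure, the following hypothetical numpy helper draws one relaxed sample given $\boldsymbol{\mu}$ and $\tau$, following~\eqref{eq:concrete_multiple}.
\begin{verbatim}
import numpy as np

def sample_concrete(mu, tau, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Gumbel(0, 1) noise: eta_i = -log(-log(U)), U ~ Uniform(0, 1)
    eta = -np.log(-np.log(rng.uniform(size=np.shape(mu))))
    logits = (np.log(mu) + eta) / tau
    logits -= logits.max()               # numerical stability
    n = np.exp(logits)
    return n / n.sum()                   # relaxed one-hot sample n_i
\end{verbatim}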
- Uncertainty in the weights is complementary in that it captures uncertainty about which neural network is appropriate, leading to regularisation of the weights and to model averaging~\cite{blundell2015weight}
The concrete distribution is ideal for considering whether a particular edge or operation in a NN is active or not. During training, one starts with a relatively high temperature $\tau$ and gradually decreases it so that sampling resembles that from a Categorical distribution. It is necessary to consider this form of approximation, since it is impossible to propagate gradients through a discrete distribution without resorting to a gradient estimator~\cite{bengio2013estimating}.
- CHALLENGE:
- Providing prior knowledge about the architecture, which is required for every neural network, is challenging and needs to be investigated
- If we simply decide to use the MAP estimate as the prior, we would have a delta function and we are not going to learn anything new
- With the current methodology we cannot express our confidence in selecting that particular architecture - there is no measure of uncertainty
- SOLUTION:
- Methods borrowed from neural architecture search to perform Bayesian inference not only on the weights of the network but also on the structure itself, to achieve the best performance and hopefully approach the posterior as closely as possible
- We do this by adopting an approach from differentiable neural architecture search and searching for weights that decide which operations are eventually going to be performed
- Others do it simply by replicating the same cell, which is unintuitive
- Each cell shares the statistical strength of the other cells but at the same time enables personalisation
- These cells are then stacked together to create one network
- We are then able to express the uncertainty of the individual weights as well as of the operations and the overall architecture
Gradient-based optimisation is typically based on starting with an over-parametrised NN that is gradually pruned or compressed to achieve a trade-off between the model size and the desired objective. This approach became attractive because it reduces the prohibitive computational cost immensely~\cite{deng2019dbsn}.
- Cell-based NAS
- A sequence of cells which have the same internal structure and are separated by upsampling or downsampling modules
- Each cell contains sequential nodes that are connected through operations
- Differentiable NAS tries to find the strength of these connections by introducing specific parameters representing the strength and then multiplying the output to enable backpropagation
- Introduce the states inside the cell, relate to the introduced figures
The cell structure is then defined with respect to these parameters, given as $\boldsymbol{\beta}= \{\beta_{i,j}^c;\ 1 \leq i < j,\ c = 1, \dots, C\}$, where the indices $i,j$ signify the potential connections between states $S$ inside the cell $c$. In traditional NAS the gating weights $\boldsymbol{\beta}^c$ are the same for all cells $c$, or are split into categories whose structure is shared within each category. The information for a state inside the cell is then a weighted sum of the outputs of the $K$ different operations on $\boldsymbol{S}_{i}^c$. The output of the cell $\boldsymbol{S}_{j}^{c}$ is then a concatenation of all the previous states $\boldsymbol{S}_{i}^c$, $i<j$.
\begin{equation}
\boldsymbol{S}_{i,j}^c = \sum_{k=1}^{K} \boldsymbol{\beta}_{i,j}^{c,k} o_{i,j}^{c,k}(\boldsymbol{S_{i}^{c}},\boldsymbol{w}_{i,j}^{c})
\end{equation}
The pruning can then be based on the magnitude of the weights, penalty terms or thresholds~\cite{zhou2019bayesnas}. There is usually no proxy model or controller to guide the search; instead, the objective $\mathcal{L}$ is formed in such a fashion that it can be optimised both with respect to the weights $\boldsymbol{\theta}$ and the architecture parameters $\boldsymbol{\phi}$, given data $\mathcal{D}$, as
\begin{equation}
\boldsymbol{\theta^{*},\phi^{*}}=\textrm{arg}\min_{\boldsymbol{\theta, \phi}}\mathbb{E}[\mathcal{L}(\mathcal{D},\boldsymbol{\theta}, \boldsymbol{\phi})]
\end{equation}
- The main point is that the objective needs to be differentiable
In DARTS, Liu \textit{et al.}~\cite{liu2018darts} used a softmax over the architecture parameters to weight the candidate operations and so determine which operation a given node should perform.
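A minimal PyTorch-style sketch of such a softmax-weighted mixed operation (names illustrative, not DARTS' actual implementation) is given below.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)    # the K candidate operations
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)   # operation strengths
        return sum(w * op(x) for w, op in zip(weights, self.ops))
\end{verbatim}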
\paragraph{Bayesian learning}
Dikov \textit{et al.}~\cite{dikov2019bayesian} optimised specifically the BNNs' layer size and network depth through VI; however, the network operations were fixed and their performance was not comparable to the current state-of-the-art. Zhou \textit{et al.}~\cite{zhou2019bayesnas} jointly learnt the parameters and architecture of a BNN through a Laplace approximation that rapidly reduced the search time, and they also obtained results that are more comparable to the current state-of-the-art than those of Dikov \textit{et al.}
By jointly optimising both the parameters and the architecture of a NN via SGD, it is not necessary to introduce any bias in the form of a controller that reflects the bias of the author. Note that, since there is no controller, it is not necessary to spend any time training the controller and we can focus completely on traversing the search space. Moreover, it is not necessary to add any metaheuristics, similar to EA, that need to be tuned, and the search is unified under one pattern whose core is SGD. However, formulating an objective in a tangible way to induce optimisation not only in terms of the weights, but also in terms of the architecture parameters, is a challenging task. Nevertheless, the frequency of the respective updates to $\boldsymbol{\theta}$ as well as $\boldsymbol{\phi}$ needs to be investigated.
- Dikov et al.
- A Bayesian method for structure optimisation, treating hyperparameters such as layer size and network depth as random variables whose parametrised distributions are learnt together with the rest of the network weights
- They kept the structure the same
The main observations that have been noted in this research area are the importance of (i) weight sharing among architectures to avoid starting from scratch and to save computational resources, (ii) transfer learning and transferring the same architecture from a small proxy task to a more challenging task (iii)
Apart from methods based on a continuous relaxation of the architecture parameters and differentiable search~\cite{xie2018snas, liu2018darts, dikov2019bayesian, zhou2019bayesnas, cai2018proxylessnas, deng2019dbsn, antoran2020variational}, the search strategies can be divided, based on the underlying search algorithm, into three further categories: (i) reinforcement learning based on an actor-critic framework~\cite{zoph2016neural, pham2018efficient, cai2018efficient, DBLP:journals/corr/ZophVSL17, Schrimpf2018, Zhong2018, Baker2019, Cai}, (ii) evolutionary methods based on evolutionary algorithms~\cite{stanley2002evolving, real2017large, xie2017genetic, real2019regularized, yang2019cars} and (iii) Bayesian optimisation based on proxy models~\cite{jin2019auto, fedorov2019sparse}; together with (iv) the gradient-based methods these make up four families in total. A comprehensive review was given by Elsken \textit{et al.}~\cite{elsken2018neural} and Hu and Yang~\cite{hu2020technical}.
Reinforcement learning methods use a controller to sample a sequence of operations that then represents the architecture, and they obtain their reward from the performance on the validation dataset. Evolutionary methods initialise a set of models as a population, which they evolve through self-defined mutation and crossover towards better fitness values. Last but not least, Bayesian optimisation tries to find new architectures through a proxy model, usually a Gaussian process~\cite{williams1996gaussian}, which guides the sampling by balancing exploitation and exploration. Given an architecture, these methods have to train it for a large number of epochs, and then evaluate its performance or optimise the controller, population or proxy model, which makes the searching stage less efficient~\cite{yang2019cars}.
\subsection{Automatic Relevance Determination}\label{sec:background_ard}
ARD~\cite{mackay1995probable, neal1996bayesian} is used as a principled criterion to determine the number of latent variables
needed to model the data, especially in high-dimensional regimes. The idea consists of using a prior distribution on the weights which encourages them to be zero, for example for a r.v. $x$: $p(x) = \mathcal{N}(x \mid 0, 1)$. ARD was predominantly used in relevance vector machines~\cite{tipping2001sparse} to compute a mask for the support vectors, which resulted in a sparser model in comparison to support vector machines.
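In its standard formulation, ARD places an individual zero-mean Gaussian prior $p(w_i)=\mathcal{N}(w_i \mid 0, \lambda_i^{-1})$ on each weight, with its own learnable precision $\lambda_i$; if the inferred $\lambda_i \to \infty$, the posterior of $w_i$ concentrates at zero and the corresponding weight is effectively pruned from the model.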
- Factorisable Gaussians~\cite{titsias2014doubly}
Our main contribution is concerned with regularising the previously deterministic weights by incorporating uncertainty through prior distributions, not at the global level but at the cell level, by introducing auxiliary parameters $\boldsymbol{\beta}_{c,normal}$ for $c=1,\ldots, \sum_i^{C_{normal}}C_i$, where $C_i$ is the number of normal cells replicated with a specific $\boldsymbol{\beta}_{c,normal}$, and $\boldsymbol{\beta}_{c,reduce}$ for $c=1,\ldots, C_{reduce}$. We model $\boldsymbol{\beta}$ using a Gaussian prior distribution $p(\boldsymbol{\beta})= \mathcal{N}(0,\boldsymbol{I})$, where $\boldsymbol{I}$ denotes the identity covariance matrix. In Figure~\ref{fig:graphical_model} we sketch the graphical model along with the prior and approximate posterior distributions.
This particular structure of the architecture parameters forms a mask over the latent variables $\boldsymbol{\alpha}$, which would otherwise select operations for all cells uniformly. The goal of adding this structure to the model is to allow the relevance weights to automatically reach distributions peaked around zero, in order to effectively prune the irrelevant operations suggested by the global parameters, or otherwise to strengthen others. Apart from regularisation and model selection, the added variable also provides interpretability when using the model after it has been
successfully inferred, by allowing the user to inspect the relevant latent variables $\boldsymbol{\beta}$ directly.
Nevertheless, inferring the parameters $\boldsymbol{\beta}$ exactly becomes intractable; therefore we introduce an approximate posterior distribution $q(\boldsymbol{\beta}_c \mid \mathcal{D}, \boldsymbol{\mu}_c, \boldsymbol{\Sigma}_c)$ that we optimise through variational inference.
To stay consistent with research in NAS~\cite{zoph2016neural, real2019regularized, liu2018darts}, we follow the search space originally formulated by Zoph \textit{et al.}~\cite{DBLP:journals/corr/ZophVSL17}.
We seek to fill in a pre-determined network structure, defined in the figure, which consists of two types of cells: a \textit{normal cell} and a \textit{reduction cell}. A sample cell structure is shown in Figure~\ref{fig:nas_cell}. It contains operations, each represented as an edge $(i,j)$, while the nodes $S_i$ represent information. Each node $S_i$ is formed by the outputs of two operations, $o_{k,i}(S_k) + o_{j,i}(S_j)$, $k,j<i$.
However, our search space differs in that each normal cell is custom, while sharing the statistical strength of the other cells through parameters $\boldsymbol{\alpha}=\{\boldsymbol{\alpha}_{reduce}, \boldsymbol{\alpha}_{normal}\}$ for the normal and the reduce cells. The cells are stacked to create a network. We model the distribution over the parameters through parameters $\boldsymbol{\beta}_i$, $i \in \{1, \ldots, N\}$ for the $N$ normal cells and $\boldsymbol{\beta}_i$, $i \in \{1, \ldots, R\}$ for the $R$ reduce cells, which can be jointly interleaved.
The learned cell can either be stacked to form a convolutional network or recursively connected to form a recurrent network.
A cell is a directed acyclic graph consisting of an ordered sequence of $N$ nodes. Each node $x^{(i)}$ is a latent representation (e.g. a feature map in convolutional networks) and each directed edge $(i, j)$ is associated with some operation $o^{(i,j)}$ that transforms $x^{(i)}$. We assume the cell to have two input nodes and a single output node. For convolutional cells, the input nodes are defined as the cell outputs in the previous two layers (Zoph et al., 2018). For recurrent cells, these are defined as the input at the current step and the state carried from the previous step. The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to all the intermediate nodes.
The performance degradation is usually the price of Bayesian inference, which, however, allows us to add all the previously mentioned advantages to the predictive model. The inference is often done on the weights $w$ of the NN, which are, under a Bayesian framework, parametrisable random variables~\cite{titsias2014doubly, blundell2015weight}, e.g.: $q(w) = \mathcal{N}(w \mid \mu,\mu^2\lambda)$ with variational parameters $\mu,\lambda$~\cite{molchanov2017variational,kingma2015variational}. Given the high dimensionality of these random variables across the NN, calculating the posterior distribution analytically becomes intractable.
- Some methods utilise the differentiable Gumbel-softmax [22,15] to mimic one-hot encoding. However, the one-hot nature implies an exclusive competition, which risks being exploited by unfair advantages~\cite{chu2019fair}
In this work we argue that the performance degradation is due to a lack of attention to searching for the right BNN architecture for the task at hand. Most architecture search has been focused on pointwise NNs through highly successful neural architecture search (NAS)~\cite{zoph2016neural} automation. The main NAS techniques cover reinforcement learning, genetic algorithms, Bayesian optimisation or, lately more successfully, differentiable search in which the architecture is found simply through backpropagation.
Thus, we automatically assume that all the operations and their organisation in the existing pointwise NN architecture are the most suitable ones, given our representation and chosen prior. This constitutes a strong assumption on the shape of our model, which might not hold~\cite{osawa2019practical, neal1996bayesian} and
can lead to poorer performance than simply thinking of the weights as scalars.
Therefore, we hypothesise that in order to get better performance from BNNs it is necessary to actively search for their architecture, rather than naively convert pointwise NNs into a Bayesian counterpart. We propose to model the architecture $\mathcal{A}$ of the BNN itself as a set of additional random variables $\mathcal{A} \Leftarrow \boldsymbol{\beta}\sim q(\boldsymbol{\beta}\mid\boldsymbol{\theta}_\beta, \boldsymbol{x}, \boldsymbol{y})$ paired with learnable parameters $\boldsymbol{\theta}_\beta$, where $\boldsymbol{\beta}$ expresses the cell-wise relevance of the candidate operations from which the architecture is built. To navigate through the search space, we borrow and extend methods from one-shot cell-based differentiable neural architecture search (NAS)~\cite{liu2018darts,xie2018snas} and variational inference~\cite{molchanov2016dropout,kingma2015variational,blundell2015weight, kingma2013auto}. We specifically search for a structure that is composed of groups of cells, each gathering a variety of operations, which are then stacked on top of each other. The operations are organised into two types of cells, \textit{normal} organised into $N$ groups and \textit{reduction} organised into $R$ groups, which, similarly to cell-based NAS~\cite{DBLP:journals/corr/ZophVSL17, pham2018efficient,xie2018snas,liu2018darts,deng2019dbsn, liu2017hierarchical}, are replicated and then used to construct the complete BNN. In our \textit{variational inference-based neural network architecture search (VINNAS)} model, each cell shares the statistical strength of the other cells of the same type through global parameters $\boldsymbol{\alpha}$, but at the same time enables personalisation by learning cell-specific relevance parameters. The model is shown in Figure~\ref{fig:nas_complete}. We determine the importance of using particular operations through variational dropout~\cite{molchanov2016dropout, molchanov2017variational, kingma2015variational} and by imposing automatic relevance determination (ARD)~\cite{mackay1995probable, neal1996bayesian} priors. To encourage traversal through the operations' search space, we formulate an auto-regularising objective that promotes exploration, but at the same time motivates certainty in the selection. We further demonstrate that the algorithm is not limited to finding BNN architectures, but can also be used to search for a pointwise NN architecture.
\section{Experiments}\label{sec:experiments}
To demonstrate the effectiveness of the proposed VINNAS method, we perform experiments on three different datasets, namely MNIST (M), FashionMNIST (F) and CIFAR-10 (C).
\subsection{Experimental Settings}\label{sec:experiments_settings}
For each dataset, we search for a separate architecture involving operations commonly used in CNNs, namely: $\mathcal{O}= \{$ $3 \times3$, $5 \times 5$ and $7 \times 7$ separable convolutions, $3 \times 3$ and $5 \times 5$ dilated separable convolutions, $ 7 \times 1$ followed by $1\times 7$ convolution, $3 \times 3$ max pooling, $3 \times 3$ average pooling, skip-connection, and zero - meaning no connection$\}$ making $K=10$. Note that we clip the strength of the zero operation to avoid scaling problems with respect to other operations. All operations are followed by BN and ReLU activation except zero and skip-connection.
Each cell accepts an input from the previous cells $c-1$ and $c-2$. Each input is processed through a ReLU-convolution-BN block to match the input shape required by that particular cell. For M, we search for an architecture comprising a single reduction cell (R), $l=1$ with $I=2$ states.
For F, we search for an architecture comprising 6 normal (N) and 2 reduction cells, $l=2$ (NNRNNRNN), with $I=3$ states each. Both of these architectures have the same layout during evaluation; however, for F, the number of channels is increased by a factor of 6.4 during evaluation. For C, during the search phase we optimise a network consisting of 8 cells, $l=2$, with $I=4$ states (NNRNNRNN) that is then scaled to 20 cells during evaluation (6NR6NR6N), along with the channel sizes, which are increased threefold. Each state always accepts 2 inputs processed through 2 operations. Each net also has a stem, which is a $3\times 3$ convolution followed by BN.
At the end of the network, we perform average pooling followed by a linear classifier with the softmax activation. Scaling of the found architectures and the followed building principles are based on previous successful work~\cite{elsken2017simple}.
The search space complexity for each net is given as $K^{(\sum_{i=0}^{I-1}(2+i))\times l}$, which for M is $\approx 10^{5}$, for F is $\approx 10^{18}$ and for C is $\approx 10^{28}$. Weights learnt during the search phase are not kept and we retrain the resultant architectures from scratch. We train the networks using a single Monte Carlo sample from the $q(.)$s together with the LRT.
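For instance, for C we have $I=4$ states and $l=2$ cell types, giving $(2+3+4+5)\times 2=28$ edges with $K=10$ choices each, and hence $\approx 10^{28}$ candidate configurations.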
Instead of cherry-picking the found architectures through further evaluation and then selecting the resultant architectures by hand~\cite{liu2018darts}, we directly report the results of the architectures found by \textit{VINNAS}.
\paragraph{Search Settings} For optimising both the architecture parameters as well as the weight parameters, we use Adam~\cite{kingma2014adam} with different initial learning rates. We use cosine scheduling~\cite{loshchilov2016sgdr} for the learning rate of the weights' parameters and we keep the architecture's learning rate constant throughout the search. We initialise the $\gamma$s and then start applying them, increasing them gradually and linearly during the search process. We disable BN's learnable affine parameters and running-statistics tracking. We initialise the operation strengths' $\boldsymbol{\mu}_\alpha$ by sampling from $\mathcal{N}(0, 0.001)$. We utilise label smoothing~\cite{muller2019does} to prevent the architecture parameters from hard-committing to a certain pattern. To speed up the search, we not only search reduced architectures in terms of the number of channels and cells, but also search on 25\% (M), 50\% (F) and 50\% (C) of the data, while using 50\% of that portion as the dataset for learning the architecture parameters. For M we use z-normalisation.
For F and C we use random crops, flips and erasing~\cite{zhong2017random}, together with input channel normalisation. We search for 20, 50 and 100 epochs for M, F and C respectively.
\paragraph{Evaluation Settings} During evaluation we scale up the found architectures in terms of channels and cells as described previously. We again use the Adam optimiser with varying learning rates and cosine learning rate scheduling. We similarly initialise $\gamma_1$ and start to increase it linearly from a given epoch. We do so to avoid over-regularisation and clamping of the weights to zero too early during the optimisation. We train on the full datasets for M, F and C for 100, 400 and 600 epochs respectively, and we preserve the data augmentation strategies also during retraining; we add drop-path~\cite{larsson2016fractalnet} and auxiliary tower~\cite{szegedy2015going} regularisation for C and F. For both the search and evaluation we initialise the weights' means with Xavier uniform initialisation~\cite{glorot2010understanding}. We initialise all the log-variances to $-10$.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figures/cells/mnist_R.png}
\caption{MNIST reduction cell with its positive SNR.}
\label{fig:mnist}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{Figures/cells/fashion_N.png}
\includegraphics[width=.9\linewidth]{Figures/cells/fashion_R.png}
\caption{FashionMNIST normal and reduction cells with their positive SNR.}
\label{fig:fashion}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Figures/cells/cifar_N.png}
\includegraphics[width=1\linewidth]{Figures/cells/cifar_R.png}
\caption{CIFAR-10 normal and reduction cells with their positive SNR.}
\label{fig:cifar}
\end{figure}
\subsection{Evaluation}\label{sec:experiments_evaluation}
The evaluation is condensed in Tables~\ref{tab:comp_random} and~\ref{tab:cifar_10}. The numbers in bold represent the score of the best performing model for the given selection method (positive SNR or magnitude) and dataset. The best performing architectures that were found are shown in Figures~\ref{fig:mnist},~\ref{fig:fashion} and~\ref{fig:cifar}. Specifically for CIFAR-10, which is popular in the NAS community, Table~\ref{tab:cifar_10} shows that \textit{VINNAS} found an architecture that is comparable to the SOTA, however, with $2 \times$ fewer non-zero parameters.
We first perform random search on our search spaces for M, F and C. Note that the search spaces are vast and we deem it impossible to evaluate all architectures in them given our available resources; we thus sample 10 separate architectures from each search space and train them with the same hyperparameter settings as the found architectures to avoid any bias. The number of parameters for \textit{VINNAS} is reported as the amount remaining after pruning with respect to $\log \lambda \geq 3$.
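As an illustration, such a pruning mask can be computed as follows (a minimal PyTorch-style snippet; only the threshold comes from the text, while the helper name and the $10^{-8}$ stabiliser are ours):
\begin{verbatim}
import torch

def prune_mask(mu, log_sigma2, thresh=3.0):
    # log lambda = log(sigma^2 / mu^2); a large value means the
    # parameter is dominated by noise and can be removed.
    log_lambda = log_sigma2 - torch.log(mu ** 2 + 1e-8)
    return (log_lambda < thresh).float()  # 1 = keep, 0 = prune
\end{verbatim}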
When comparing the found architectures for the different datasets in Table~\ref{tab:comp_random}, we noticed that in the case of M there are certain connections on which an operation could potentially be omitted entirely, the positive SNR being relatively small. We attribute this to the fact that this dataset is easy to generalise to, which can also be seen from the overall performance of the random search on it. However, on CIFAR-10, it can be seen that the inferred importance of all the operations and of the structure is very high. The results also demonstrated that using the learnt uncertainty in the operation selection, in addition to the magnitude, marginally benefits the operation selection. Compared with DARTS~\cite{liu2018darts}, which only uses $3 \times 3$ separable convolutions and max pooling everywhere, it can be observed that the found architectures are rich in the variety of operations that they employ and the search does not collapse into a mode where all the operations are the same. For future reference regarding deeper models, such as those for F and C, we observe that the found cells of the best performing architectures do contain skip-connections to enable efficient propagation of gradients and better generalisation.
The main limiting factor of this work is the GPU search cost, which is higher in comparison to other NAS methods due to the use of the LRT, which requires two forward passes during both search and evaluation. Most importantly, all the found architectures demonstrate good generalisation performance in terms of the measured test accuracy.
\section{Introduction}\label{sec:introduction}
Neural networks (NNs) have demonstrated their great potential in a wide range of artificial intelligence tasks such as image classification, object detection or speech recognition~\cite{zoph2016neural, ding2020autospeech}. Nevertheless, designing a NN for a given task or dataset requires significant human expertise, restricting their application in the real world~\cite{elsken2018neural}. Recently, neural architecture search (NAS) has been demonstrated to be a promising solution to this issue~\cite{zoph2016neural}: it automatically designs a NN for a given task and target objective. Current NAS methods are already able to automatically find better neural architectures than hand-made NNs~\cite{zoph2016neural, ding2020autospeech, real2019regularized}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\linewidth]{Figures/figure_1a.pdf}
\hspace{1.5em}
\includegraphics[height=0.37\textheight]{Figures/figure_1b.pdf}
\caption{(a) The structure of a computational cell which accepts inputs from two previous cells. Each cell accepts inputs from the immediately preceding cell $C_{c-1};c-1\geq0$ as well as the penultimate cell $C_{c-2};c-2\geq0$. The random variables $\boldsymbol{\alpha}$ represent the learnable relevance over the operations. The coloured lines represent different candidate operations and their thickness represents their likelihood. All outputs from the processed states $\boldsymbol{S}_0$ and $\boldsymbol{S}_1$ are concatenated along the channel dimension into the output $C_{c+1}$, symbolised by the dashed lines. Green rectangles (states) signify data. (b) Network skeleton comprising $N_1, N_2$ and $N_3$ normal cells and two reduction cells $R_1$ and $R_2$ which share the same structure, in total giving $C=N_1 + N_2 + N_3 + 2$ cells. The network also contains a stem comprised of a convolution, and the network ends with average pooling followed by a linear classifier.}
\label{fig:nas_complete}
\end{figure}
NAS is a challenging optimisation problem on a constrained discrete search space, which can be simplified into reasoning about which operations should be present in the NN architecture and how they should be interconnected. Common operation types considered in NAS are, for example, different types of convolutions or pooling~\cite{zoph2016neural}. However, if the search is not approached with caution, the resultant NN might not be flexible enough to learn useful patterns. Additionally, the ability of the model to generalise is also directly dependent on the NN architecture~\cite{zoph2016neural,liu2018darts}. Therefore, there is a pressing need for finding architectures that are expressive enough while achieving good generalisation performance.
Based on the core algorithmic principle operating during the search, NAS can be divided into four categories: (i) reinforcement learning based on an actor-critic framework~\cite{zoph2016neural}, (ii) evolutionary methods based on genetic algorithms~\cite{real2019regularized}, (iii) Bayesian optimisation based on proxy models~\cite{cai2018path} or (iv) gradient-based methods~\cite{liu2018darts}. In particular, gradient-based NAS~\cite{liu2018darts} has recently been popularised for convolutional NN (CNN) architecture search due to its computational efficiency during the search. Nevertheless, gradient-based NAS is likely to collapse into a situation where it selects all operations to be the same~\cite{zela2019understanding}, treats operations unfairly~\cite{chu2019fair} or is hard to adapt across different datasets and search spaces~\cite{li2019random}.
To solve the issues in the existing gradient-based NAS methods, this paper proposes \textit{Variational Inference-based Neural Network Architecture Search (VINNAS)}. Under the same search space as in other NAS methods~\cite{liu2018darts,chu2020noisy,zela2019understanding}, our approach does not require any computation additional to the standard backpropagation algorithm. In \textit{VINNAS}, we tackle NAS using Bayesian inference: by modelling the architecture search through additional random variables $\boldsymbol{\alpha}$, which determine the operation types or the connections between operations, our algorithm is able to conduct effective NN architecture search. The importance of particular operations is determined using a variational dropout scheme~\cite{molchanov2017variational, kingma2015variational} with the automatic relevance determination (ARD)~\cite{mackay1995probable} prior. We specifically search for a network structure that is composed of cells containing a variety of operations. The operations are organised into two types of cells, \textit{normal} and \textit{reduction}, and similarly to cell-based NAS~\cite{liu2018darts}, the cells are replicated and then used to construct the complete CNN. The model is shown in \figref{fig:nas_complete}. To encourage traversal through the NN architecture search space, we formulate an auto-regularising objective that promotes exploration, while ensuring high levels of certainty in the selection phase.
We performed experiments on searching CNNs for classification on the image datasets MNIST, FashionMNIST and CIFAR-10. Our results demonstrate state-of-the-art (SOTA) performance, thanks to targeting sparse architectures that focus on learning efficient representations, which is enforced by strict regularisation. For example, on CIFAR-10, we demonstrate that our approach is able to find an architecture that contains $2 \times$ fewer non-zero parameters in comparison to the SOTA, without any human intervention.
In summary, our main contributions are as follows:
\begin{itemize}
\item[1.] A differentiable neural architecture search method adopting variational dropout, which is effective in searching neural network architectures with the state-of-the-art performance on multiple datasets.
\item[2.] An architecture search objective using scheduled regularisation to promote exploration, but at the same time motivates certainty in the operation selection.
\item[3.] An updated rule for selecting the most dominant operations based on their inferred uncertainty.
\end{itemize}
In the sequel, we describe our approach in detail. In Section~\ref{sec:related_work} we review related work, and in Section~\ref{sec:preliminaries} we introduce variational learning and gradient-based NAS. In Section~\ref{sec:vinnas} we introduce our search objective, search space and the proposed overall algorithm. Section~\ref{sec:experiments} documents the performance of our search method in experiments and lastly, in Section~\ref{sec:conclusion}, we draw our conclusions.
\begin{table*}[t]
\centering
\caption{Notation used in this paper.}
\label{tb:notation}
\scalebox{.95}{
\setlength\tabcolsep{6pt}
\begin{tabular}{ccccc}
\toprule
$\mathcal{A}$ Architecture & $\boldsymbol{\mathcal{M}}$ Architecture search space (supergraph) & $\boldsymbol{S}$ Data/State in architecture & $\boldsymbol{\alpha}$ Architecture var. & $\mathcal{D}/D$ Dataset / Dataset size \\
$K$ Operation candidates & $o(.)$ Candidate operations & $C$ Total number of cells & N Normal cell & R Reduction cell \\
$p(.)$ Prior density & $q(.)$ Approximation density & $\boldsymbol{w}$ Weights & $\boldsymbol{\Psi}$ Other params. &
$\boldsymbol{\theta}$ Reparametrisation params. \\
\bottomrule
\end{tabular}}
\end{table*}
We have since come across a concurrent publication~\cite{wang2020si} that overlaps with this work. In particular,~\cite{wang2020si} also proposes a NAS methodology for finding CNNs based on ideas coming from variational dropout~\cite{kingma2015variational}. Additionally, the authors in~\cite{wang2020si} propose a hierarchical semi-implicit distribution over the operation as well as the connectivity selection, which enables them to find CNN architectures with state-of-the-art accuracy. In our work, we impose a distribution over the operation selection, while keeping the connectivity pattern fixed as shown in Figure~\ref{fig:nas_complete}, as well as over the individual operation weights, which allows us to find sparse and memory-lightweight architectures.
\section{Preliminaries}\label{sec:preliminaries}
In this Section we introduce variational learning and cell-based differentiable neural architecture search, which serve as the basic building blocks for developing \textit{VINNAS}. The notation used in this paper is summarised in Table~\ref{tb:notation}.
\subsection{Variational Learning}\label{sec:preliminaries_variational_learning}
We specify a CNN as a parametrisable function approximator with some architecture $\mathcal{A}$ learnt on $D$ data samples consisting of inputs $\boldsymbol{x}_i$ and targets $\boldsymbol{y}_i$ forming a dataset $\mathcal{D}=\{(\boldsymbol{x}_1, \boldsymbol{y}_1), (\boldsymbol{x}_2, \boldsymbol{y}_2), (\boldsymbol{x}_3, \boldsymbol{y}_3), \ldots, (\boldsymbol{x}_D, \boldsymbol{y}_D)\}$. The architecture $\mathcal{A}$, composed of operations, might have certain parameters, for example weights $\boldsymbol{w}^{\mathcal{A}}$, which are distributed according to some prior distributions $\boldsymbol{w}^{\mathcal{A}} \sim p(\boldsymbol{w})$. $\boldsymbol{w}^{\mathcal{A}}$ and $\mathcal{A}$ jointly define the model and the likelihood $p_{\mathcal{A}}(\boldsymbol{y}\mid\boldsymbol{x}, \boldsymbol{w}^{\mathcal{A}})$. We seek to learn the posterior distribution over the parameters $p_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}} \mid \boldsymbol{x},\boldsymbol{y})$ using Bayes' rule. However, that is analytically intractable due to the normalising factor $p_{\mathcal{A}}(\boldsymbol{y} \mid \boldsymbol{x})$, which cannot be computed exactly due to the high dimensionality of $\boldsymbol{w}^{\mathcal{A}}$.
Therefore, we need to formulate an approximate parametrisable posterior distribution $q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}} \mid \boldsymbol{\theta}_w^\mathcal{A}, \boldsymbol{x},\boldsymbol{y})$\footnote{ From now on we drop the conditioning on the data $\{\boldsymbol{x},\boldsymbol{y}\}$ to avoid clutter in the notation, such that any parametrisable $q(.)$ will become $q(\boldsymbol{w} \mid \boldsymbol{\theta}_w)$.}
whose parameters $\boldsymbol{\theta}_w^\mathcal{A}$ can be learnt in order to approach the true posterior $p_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}} \mid \boldsymbol{x},\boldsymbol{y})$. Moving the distribution $q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}} \mid \boldsymbol{\theta}_w^\mathcal{A})$ closer to $p_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}} \mid \boldsymbol{x},\boldsymbol{y})$ in terms of $\boldsymbol{\theta}_w^\mathcal{A}$ naturally gives rise to an objective: minimising their separation, expressed as the Kullback--Leibler ($\mathcal{KL}$) divergence~\cite{kullback1951information}. This objective $\mathcal{L}_\mathcal{A}(\boldsymbol{\theta}_w^\mathcal{A}, \boldsymbol{\Psi}^\mathcal{A})= \mathcal{KL}(q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid\boldsymbol{\theta}_w^\mathcal{A})\mid\mid p_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid \boldsymbol{x}, \boldsymbol{y}))$ is approximated through the evidence lower bound (ELBO), shown in~\eqref{eq:elbo}. The $\boldsymbol{\Psi}^\mathcal{A}$ represent other learnable pointwise parameters that are assumed to have a uniform prior.
\begin{multline}
\textrm{arg} \min_{\boldsymbol{\theta}_w^\mathcal{A},\boldsymbol{\Psi}^\mathcal{A}} \mathcal{KL}(q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid\boldsymbol{\theta}_w^\mathcal{A})\mid\mid p_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid \boldsymbol{x}, \boldsymbol{y})) = \\ = \textrm{arg} \min_{\boldsymbol{\theta}_w^\mathcal{A},\boldsymbol{\Psi}^\mathcal{A}} -\mathbb{E}_{q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid \ \boldsymbol{\theta}_w^\mathcal{A})}[\log p_{\mathcal{A}}(\boldsymbol{y} \mid \boldsymbol{x}, \boldsymbol{w}^{\mathcal{A}}, \boldsymbol{\Psi}^\mathcal{A})] + \\+\gamma \times \mathcal{KL}(q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid \boldsymbol{\theta}_w^\mathcal{A}) \mid \mid p(\boldsymbol{w})) + const.
\label{eq:elbo}
\end{multline}
The first term is the negative log-likelihood of the data which measures the data-fit, while the second term is a regulariser whose influence can be managed through $\gamma$. The $\boldsymbol{\Psi}^\mathcal{A}$ contribute to the $const.$ term that is independent of the parameters, due to the uniform prior.
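For intuition, the resulting loss can be sketched as follows (a minimal PyTorch-style snippet for a classification likelihood; the function name is ours and \texttt{kl\_w} stands for the already-computed $\mathcal{KL}$ term):
\begin{verbatim}
import torch.nn.functional as F

def elbo_loss(logits, targets, kl_w, gamma):
    # Negative log-likelihood (data fit) plus the gamma-scaled
    # KL regulariser of the ELBO; the constant term is dropped.
    nll = F.cross_entropy(logits, targets)
    return nll + gamma * kl_w
\end{verbatim}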
Kingma \textit{et al.} introduced the local reparametrisation trick (LRT)~\cite{kingma2015variational} that allows us to solve the objective in \eqref{eq:elbo} with respect to $\boldsymbol{\theta}_w^\mathcal{A}$ through stochastic gradient descent (SGD) with low variance. We can backpropagate gradients with respect to the distribution $q_{\mathcal{A}}(\boldsymbol{w}^{\mathcal{A}}\mid \boldsymbol{\theta}_w^\mathcal{A})$ by sampling $\boldsymbol{z}$ obtained through a deterministic transformation $t(.)$ as $\boldsymbol{z}=t(\boldsymbol{\theta}_w^\mathcal{A},\boldsymbol{\epsilon})$, where $\boldsymbol{\epsilon}$ is parameter-free noise, e.g.\ $\boldsymbol{\epsilon} \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{I})$.
Moreover, using this trick, Molchanov \textit{et al.}~\cite{molchanov2017variational} were able to learn an unbounded approximation\footnote{$\odot$ represents a Hadamard product.} for the weights $\boldsymbol{w}$, as shown in \eqref{eq:variational_dropout}, which corresponds to a Gaussian dropout model with learnable parameters $\boldsymbol{\theta}_w^\mathcal{A}=\{\boldsymbol{\mu}_w,\boldsymbol{\sigma}_w\}$~\cite{srivastava2014dropout}.
\begin{equation}
\boldsymbol{w} \sim q_{\mathcal{A}}(\boldsymbol{w} \mid \boldsymbol{\mu}_w, \boldsymbol{\sigma}^2_w) \Leftrightarrow \boldsymbol{w} = \boldsymbol{\mu}_w + \boldsymbol{\sigma}_w \odot \boldsymbol{\epsilon}
\label{eq:variational_dropout}
\end{equation}
After placing a factorised log-uniform prior on the weights, such that $p(\boldsymbol{w}) \propto \frac{1}{\mid \boldsymbol{w} \mid}$, the authors observed an effect similar to ARD~\cite{molchanov2017variational}, however, without the need to modify the prior. Throughout the inference, the learnt weights tend to a delta function centred at $\boldsymbol{0}$, leaving the model only with the important non-zero weights. The relevance determination is achieved by optimising both $\boldsymbol{\mu}_w$ and $\boldsymbol{\sigma}_w$, and if both are close to zero, the corresponding weights can be pruned.
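As an illustration, a minimal sketch of the LRT for a linear layer with independent Gaussian weights (the helper name and the $10^{-8}$ stabiliser are our own assumptions):
\begin{verbatim}
import torch

def lrt_linear(x, mu_w, log_sigma2_w):
    # Local reparametrisation: sample the pre-activations
    # instead of the weights (Kingma et al., 2015).
    out_mu = x @ mu_w
    out_var = (x * x) @ torch.exp(log_sigma2_w)
    eps = torch.randn_like(out_mu)
    return out_mu + torch.sqrt(out_var + 1e-8) * eps
\end{verbatim}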
\subsection{Cell-based Differentiable Neural Architecture Search}\label{sec:preliminaries_nas_grad}
As shown above, Bayesian inference can be used to induce sparsity in the weight space; however, we also wish to find $\mathcal{A}$ from some architecture space $\boldsymbol{\mathcal{M}};\mathcal{A} \subset \boldsymbol{\mathcal{M}}$.
The authors of DARTS~\cite{liu2018darts} defined the search for an architecture as finding specific $\boldsymbol{\alpha}$ associated with choosing operations $o(.)$ in an overparametrised directed acyclic graph (DAG) $\boldsymbol{\mathcal{M}}; \mathcal{A} \subset \boldsymbol{\mathcal{M}}$, where the learnt values of $\boldsymbol{\alpha}$ are then used to specify $\mathcal{A}$ at test time. For computational feasibility, the search space over all potential architectures is simplified into finding cells. The cell structure is defined with respect to $\boldsymbol{\alpha}; \boldsymbol{\alpha}^{i,j}_l \in \mathbb{R}^K; 1 \leq i < j < I$, where the indices $i,j$ signify the potential connections and operations $o_k(.)$ between information states $\boldsymbol{S}^{i}_c$ and $\boldsymbol{S}^{j}_c$ inside the cell $c$ with $I$ states, where $k\in 1,\ldots,K$. The information state $\boldsymbol{S}$ is a 4-dimensional tensor $\boldsymbol{S}\in \mathbb{R}^{B \times P \times H \times W}$ with $B$ samples, containing $P$ channels, height $H$ and width $W$. The index $l$ ranges over the different types of cells, where $l \in \{normal,reduce\}$ represents 2 different cell types: \textit{normal} (N) cells preserve the input dimensionality while \textit{reduce} (R) cells decrease the spatial dimensionality, but increase the number of channels~\cite{liu2018darts}. The cells can be interleaved and repeated, giving $C$ total cells. The information for a state inside the cell $c$ is a weighted sum of the outputs generated from the $K$ different operations on $\boldsymbol{S}^{j}_c$. Choosing one of the operations can be approximated through performing $\text{softmax};\ \text{softmax}(\alpha^{i,j}_{l,k}) = \frac{\exp(\alpha^{i,j}_{l,k})}{\sum_{k^\prime} \exp(\alpha^{i,j}_{l,k^\prime})}$ on the architecture variables $\boldsymbol{\alpha}$, instead of argmax, which provides the method with differentiable strengths of potential operations as shown in \eqref{eq:darts}.
The last state $\boldsymbol{S}_{c}^{I}$, which is the output of the cell, is then a concatenation of all the previous states except the input states, $\boldsymbol{S}_{c}^{I}=\bigoplus_{j}\boldsymbol{S}_{c}^{j}$, where $j$ runs over the non-input states with $j<I$.
\begin{equation}
\boldsymbol{S}^{i}_c = \sum_{j=1}^{j<i}\sum_{k=1}^{K} z_{c,k}^{i,j} o_{c,k}(\boldsymbol{S}^{j}_{c},\boldsymbol{w}^{i,j}_{c,k}) \quad \boldsymbol{z}_{c}^{i,j} = \textrm{softmax}(\boldsymbol{\alpha}_{l}^{i,j})
\label{eq:darts}
\end{equation}
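A minimal PyTorch-style sketch of the mixing in \eqref{eq:darts} (the class name and structure are illustrative rather than taken from~\cite{liu2018darts}):
\begin{verbatim}
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    # Weighted sum over the K candidate operations on one edge.
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)

    def forward(self, s, alpha):
        z = torch.softmax(alpha, dim=-1)  # operation strengths
        return sum(z[k] * op(s) for k, op in enumerate(self.ops))
\end{verbatim}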
After the search, each state $\boldsymbol{S}_{c}^{l}$ is connected to the outputs of the two operations $o_{c,k}^{j,l}(\boldsymbol{S}_c^j) + o_{c,k}^{i,l}(\boldsymbol{S}_c^i); i,j<l$ whose strengths $\boldsymbol{\alpha}$ have the highest magnitude. The learnt weights $\boldsymbol{w}$ are discarded and the resultant architecture is retrained from scratch.
DARTS has been widely adopted by the NAS community due to its computational efficiency in comparison to other NAS methods. However, upon careful inspection, it can be observed that it does not promote choosing a particular operation, and it often collapses into such a mode because the graph is overparameterised through a variety of parallel operations~\cite{chu2019fair}. The supergraph then focuses on improving the performance of the whole graph, without providing a dominant architecture. Additionally, others have observed~\cite{chu2019fair, chu2020noisy} that the method requires careful hyperparameter tuning, without which it might collapse into preferring only one operation type over the others.
\section{Related Work}\label{sec:related_work}
\paragraph{Differentiable Neural Architecture Search} Since Zoph \textit{et al.}~\cite{zoph2016neural} popularised NAS for CNNs, the field has been growing thanks to intense scientific~\cite{liu2018darts, zhou2019bayesnas} and industrial~\cite{zoph2016neural,real2019regularized} interest. NAS techniques automate the design of CNNs, mainly in terms of high-level operations, such as different types of convolutions or pooling, and their corresponding connections. The core of these techniques is the search space of potential architectures, the optimisation objective and the search algorithm. For further details on NAS, we refer the reader to the review by Elsken \textit{et al.}~\cite{elsken2018neural}. It is common practice to organise the search space over all potential architectures into finding cells that specify the operations and their connections~\cite{liu2018darts}; the cells are then stacked on top of each other to construct the final NN, as previously shown in Figure~\ref{fig:nas_complete}. Modern NAS methods often apply a weight-sharing~\cite{pham2018efficient} approach, where the search is optimised over several architectures in parallel by sharing the weights of their operations to reduce memory consumption.
Among these approaches, gradient-based NAS has become one of the most popular methods~\cite{liu2018darts}, mainly due to its compute feasibility. DARTS~\cite{liu2018darts} defines the search for an architecture as optimising continuous weights associated to operations in an overparametrised supergraph $\boldsymbol{\mathcal{M}}$, while utilising weight-sharing. After the best combination of operations $\mathcal{A}; \mathcal{A} \subset \boldsymbol{\mathcal{M}}$ in the supergraph is identified, it is then used to construct the final architecture for evaluation. However, Zela \textit{et al.}~\cite{zela2019understanding} identified a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. Chu \textit{et al.}~\cite{chu2019fair} observed critical problems in the two-stage weight-sharing NAS due to inherent unfairness in operation selection during the search in the supergraph. Chu \textit{et al.}~\cite{chu2020noisy} attempted to fix this problem by adding noise to the skip-connection operation during the search. Our approach is similar to~\cite{chu2020noisy}, however, we do not bias the search only towards skip-connections, but rather, infer the properties of the noise distribution with respect to ARD.
\paragraph{Pruning} Gradient-based NAS can be regarded as a subset of pruning in NNs, applied at the end of the search in the operations' space. Many approaches have been introduced for pruning, such as that of LeCun \textit{et al.}~\cite{lecun1990optimal}, who pruned networks by analysing second-order derivatives. Other approaches~\cite{scardapane2017group} considered removing groups of filters in convolutions. Kingma \textit{et al.}~\cite{kingma2015variational} pruned NNs at a node level by noticing connections between dropout~\cite{srivastava2014dropout} and variational inference. Molchanov \textit{et al.}~\cite{molchanov2017variational} showed that the interpretation of Gaussian dropout as performing variational inference in a network with a log-uniform prior over weights leads to high sparsity in the weights. Blundell \textit{et al.}~\cite{blundell2015weight} introduced a mixture-of-Gaussians prior on the weights, with one mixture tightly concentrated around zero, thus approximating a spike-and-slab prior over weights. Ghosh \textit{et al.}~\cite{ghosh2018structured} and Louizos \textit{et al.}~\cite{louizos2017bayesian} simultaneously considered a grouped Horseshoe prior~\cite{carvalho2009handling} for neural pruning. Zhou \textit{et al.}~\cite{zhou2020posterior-guided} used variational dropout~\cite{kingma2015variational} to select filters for convolutions. Our method differs from these approaches by not only inferring sparse weights for the operations, but also inferring weights over the operations' search space to search NN architectures.
\section{VINNAS}\label{sec:vinnas}
In this Section, we first describe the search space assumptions of \textit{VINNAS} in detail, followed by the objective that guides the exploration among different architectures. Lastly, we present the algorithm of \textit{VINNAS} that couples everything together.
\subsection{Search Space}\label{sec:vinnas_search_space}
Our method extends the idea behind gradient-based NAS, while using variational learning to address the aforementioned defects of previous work. \textit{VINNAS} builds its search space as an overparametrised DAG $\boldsymbol{\mathcal{M}}$ in which the algorithm searches for the right cell patterns to be used to build the final architecture $\mathcal{A}$. Similarly to DARTS, we aim to search for two repeated cells, namely a normal and a reduction cell, that will be replicated as shown in \figref{fig:nas_complete}. Therefore, $\boldsymbol{\mathcal{M}}$ contains several normal and reduction cells laid out in a sequence, each containing the $K$ parallel operation options. However, $\boldsymbol{\mathcal{M}}$ is downscaled in the number of cells and channels in comparison to the $\mathcal{A}$ considered during the evaluation, such that the supergraph can fit into GPU memory. Nevertheless, the pattern and the ratio of the numbers of cells $N_1, N_2$ and $N_3$ or $R$s in $\boldsymbol{\mathcal{M}}$ are preserved in accordance with the model shown in Figure~\ref{fig:nas_complete}. To apply variational inference, and subsequently ARD through variational dropout, we give the structural strengths $\boldsymbol{\alpha}_{normal}$ for normal cells and $\boldsymbol{\alpha}_{reduce}$ for reduction cells a probabilistic interpretation. The graphical model of the supergraph $\boldsymbol{\mathcal{M}}$, which pairs its weights $\boldsymbol{w}$ with the architecture strengths $\boldsymbol{\alpha}$, is shown in \figref{fig:graphical_model}.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=1.5]
\node[obs] (y) {$y_i$};%
\node[latent, left=of y] (x) {$\boldsymbol{x}_i$}; %
\node[latent, right= of y] (w) {$\mathbf{w}$}; %
\node[const, right= 0.5cm of w] (paramw) {$\boldsymbol{\mu}_w, \boldsymbol{\sigma}^2_w$};
\node[latent, above=of y] (alphas) {$\boldsymbol{\alpha}$};
\node[const, right= 0.5cm of alphas] (paramalphas_normal) {$\boldsymbol{\mu}_{normal}, \boldsymbol{\sigma}^2_{normal}$};
\node[const, left= 0.5cm of alphas] (paramalphas_reduce) {$\boldsymbol{\mu}_{reduce}, \boldsymbol{\sigma}^2_{reduce}$};
\plate[] {plate1} {(x)(y)} {$D$}; %
\edge {paramalphas_normal, paramalphas_reduce}{alphas}
\edge {x, w, alphas}{y}
\edge{paramw}{w}
\end{tikzpicture}
\caption{Graphical model capturing the search space in terms of the structural random variables $\boldsymbol{\alpha}$ and the weights $\boldsymbol{w}$. Note that the parameters for $\boldsymbol{w}$ will be discarded after the search.}
\label{fig:graphical_model}
\end{figure}
For simplicity, we assume a fully factorisable log-uniform prior for $\boldsymbol{\alpha}= \{\boldsymbol{\alpha}_{normal}, \boldsymbol{\alpha}_{reduce}\}$. The prior biases the distributions of the operations' strengths towards zero, which avoids giving an advantage to certain operations over the others. We similarly model the weights $\boldsymbol{w}$ of the supergraph $\boldsymbol{\mathcal{M}}$ as random variables, such that the joint prior distribution is $p(\boldsymbol{\alpha}, \boldsymbol{w})$ $=$ $ p(\boldsymbol{\alpha}_{normal}) p(\boldsymbol{\alpha}_{reduce}) p(\boldsymbol{w})$. It is not analytically possible to find the true posterior $p(\boldsymbol{\alpha}, \boldsymbol{w} \mid \boldsymbol{x}, \boldsymbol{y})$, therefore, we resort to formulating an approximation $q(\boldsymbol{\alpha}, \boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)$. We again set factorisable approximations for both $\boldsymbol{\alpha}$ and $\boldsymbol{w}$, such that the joint distribution factorises as $q(\boldsymbol{\alpha}, \boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)$ $=$ $ q(\boldsymbol{\alpha}_{normal} \mid \boldsymbol{\theta}_{\alpha_{normal}}) q(\boldsymbol{\alpha}_{reduce} \mid \boldsymbol{\theta}_{\alpha_{reduce}}) q(\boldsymbol{w} \mid \boldsymbol{\theta}_w)$ with respect to the optimisable parameters $\boldsymbol{\theta}_w$ for $\boldsymbol{w}$ and $\boldsymbol{\theta}_\alpha=\{\boldsymbol{\theta}_{\alpha_{normal}},\boldsymbol{\theta}_{\alpha_{reduce}}\}$ for $\boldsymbol{\alpha}$. The prior $p(.)$ and the approximations $q(.)$ are detailed in \eqref{eq:priors} and \eqref{eq:approxs} respectively. The indices $i,j$ stand for different states in the cells with $i<j$, and $k$ indexes the $K$ available operations.
\begin{align}
p(\boldsymbol{w}) &=\prod_{i,j,k} p(\boldsymbol{w}_{k}^{i,j});\ p(\boldsymbol{w}_{k}^{i,j})\propto \frac{1}{\mid \boldsymbol{w}_{k}^{i,j} \mid} \label{eq:priors} \\
p(\boldsymbol{\alpha}_{normal}) &=\prod_{i,j} p(\boldsymbol{\alpha}_{normal}^{i,j});\ p(\boldsymbol{\alpha}_{normal}^{i,j}) \propto \frac{1}{\mid \boldsymbol{\alpha}_{normal}^{i,j} \mid} \nonumber\\
p(\boldsymbol{\alpha}_{reduce}) &=\prod_{i,j} p(\boldsymbol{\alpha}_{reduce}^{i,j});\ p(\boldsymbol{\alpha}_{reduce}^{i,j}) \propto \frac{1}{\mid \boldsymbol{\alpha}_{reduce}^{i,j}\mid} \nonumber
\end{align}
\begin{align}
q(\boldsymbol{w}) &=\prod_{i,j,k} \mathcal{N}( \boldsymbol{\mu}^{i,j}_{w,k}, \boldsymbol{\sigma^2}_{w,k}^{i,j}) \label{eq:approxs} \\
q(\boldsymbol{\alpha}_{normal}) &= \prod_{i,j} \mathcal{N}( \boldsymbol{\mu}^{i,j}_{\alpha_{normal}}, \boldsymbol{\sigma^2}^{i,j}_{\alpha_{normal}}) \nonumber \\
q(\boldsymbol{\alpha}_{reduce}) &= \prod_{i,j} \mathcal{N}(\boldsymbol{\mu}^{i,j}_{\alpha_{reduce}},
\boldsymbol{\sigma^2}^{i,j}_{\alpha_{reduce}}) \nonumber
\end{align}
The approximate posteriors were selected as Gaussians with diagonal covariance matrices. We used the formulation by Molchanov \textit{et al.}~\cite{molchanov2017variational} for both $\boldsymbol{\alpha}$, during the search phase, and $\boldsymbol{w}$, during both the search and test phases. We aim to induce sparsity in the operations' space, which would result in most operations' strengths in the DAG being zero, while the most relevant operations are expected to be non-zero. At the same time, the method induces sparsity in the weight space and thus motivates the individual operations to be extremely efficient in their learnt patterns.
Also, the Gaussian noise used in our method effectively disrupts the previously observed unfairness in operation selection during NAS, as partially demonstrated by~\cite{chu2020noisy} for the skip-connection operation. Circling back to \eqref{eq:darts}, the information in each cell during the search is now calculated with respect to a sample of $\boldsymbol{\alpha}$ from the inferred distributions $q(.)$. The second-level parameters, such as the individual means and variances, are assumed to have a non-informative uniform prior.
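In code, a search-time sample of the operation strengths can be sketched as follows (our notation; the sample is then passed through the softmax mixing of \eqref{eq:darts}):
\begin{verbatim}
import torch

def sample_alpha(mu_alpha, log_sigma2_alpha):
    # One posterior sample of the operation strengths per pass;
    # the Gaussian noise perturbs the softmax inputs during search.
    eps = torch.randn_like(mu_alpha)
    return mu_alpha + torch.exp(0.5 * log_sigma2_alpha) * eps
\end{verbatim}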
\subsection{Search Objective}\label{sec:vinnas_search_objective}
The goal of the search is to determine the right set of structural variables $\boldsymbol{\alpha}$ or their corresponding parameters such that they can be later used to construct the desired architecture $\mathcal{A}$. Therefore, the search objective is to determine $\boldsymbol{\theta}_\alpha$ by solving $\mathcal{L}(\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi})$. $\mathcal{L}(\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi})$ is in fact a secondary objective to the primary objective of minimising~\eqref{eq:elbo} with respect to some unknown parameters implied by the chosen $\mathcal{A}$ as shown in \eqref{eq:min_min_formulation}.
\begin{equation}
\textrm{arg} \min_{\boldsymbol{\theta}_{w}^\mathcal{A},\boldsymbol{\Psi}^{\mathcal{A}}, \boldsymbol{\theta}_\alpha} \mathcal{L}_{\mathcal{A}}( \boldsymbol{\theta}_{w}^\mathcal{A}, \boldsymbol{\Psi}^\mathcal{A}, \mathcal{L}(\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi}))
\label{eq:min_min_formulation}
\end{equation}
The $\boldsymbol{\theta}_w, \boldsymbol{\theta}_\alpha$ and $\boldsymbol{\Psi}$ refer to the reparametrisations for the supergraph.
Therefore, it is at the same time necessary to optimise the objective with respect to the structural parameters $\boldsymbol{\theta}_\alpha$, the operations' weight parameters $\boldsymbol{\theta}_w$ and $\boldsymbol{\Psi}$, indicating their usefulness in the final architecture $\mathcal{A}$. Derived from the original ELBO in \eqref{eq:elbo}, optimising the supergraph $\boldsymbol{\mathcal{M}}$ with respect to the learnable parameters gives rise to the objective in \eqref{eq:search_objective_1} below.
\begin{multline}
\mathcal{A} \Leftarrow \boldsymbol{\theta}_{\boldsymbol{\alpha}}^{*} = \textrm{arg} \min_{\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi}} -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w})}[\log p(\boldsymbol{y}|\boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] + \\ + \gamma_1 \sum_{i,j,k,c} \mathcal{KL}(q(\boldsymbol{w}_{k,c}^{i,j}\mid \boldsymbol{\theta}_w) || p(\boldsymbol{w}_{k,c}^{i,j}))
+ \\ + \gamma_2 \sum_{i,j} \mathcal{KL}(q(\boldsymbol{\alpha}^{i,j}\mid \boldsymbol{\theta}_\alpha) || p(\boldsymbol{\alpha}^{i,j})) + const.
\label{eq:search_objective_1}
\end{multline}
The first term again corresponds to the data-fitting term, which pushes the parameters towards maximising the expectation of the log-likelihood of the targets $\boldsymbol{y}$ under the variational distributions $q(\boldsymbol{\alpha}, \boldsymbol{w} \mid \boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w)$. The other two terms are regularisers, which, thanks to the factorisation of the joint distribution $q(\boldsymbol{\alpha}, \boldsymbol{w})$ and the priors $p(\boldsymbol{\alpha}, \boldsymbol{w})$, can be separated and scaled by arbitrary constants $\gamma_1, \gamma_2$. As previously stated, $\gamma_1$ and $\gamma_2$ enable the trade-off between the data fit and the regularisation. Molchanov \textit{et al.}~\cite{molchanov2017variational} approximated the $\mathcal{KL}$ divergence between the prior and the posterior using $\lambda=\frac{\sigma^2}{\mu^2}$ as $\mathcal{KL}(.) \approx k_1 \sigma(k_2 + k_3 \log \lambda)-0.5\log(1+\lambda^{-1}) - k_1;\ k_1= 0.63576, k_2=1.8732, k_3 = 1.48695$. After the search, or after training the final architecture for evaluation, the variances are only used to determine which weights can be pruned; they are not otherwise used during evaluation.
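The quoted approximation translates directly into code (a sketch that follows the formula above; the $10^{-8}$ stabiliser is ours):
\begin{verbatim}
import torch

K1, K2, K3 = 0.63576, 1.8732, 1.48695

def kl_approx(mu, log_sigma2):
    # Approximate KL between q and the log-uniform prior,
    # expressed through log lambda = log(sigma^2 / mu^2).
    log_lambda = log_sigma2 - torch.log(mu ** 2 + 1e-8)
    kl = K1 * torch.sigmoid(K2 + K3 * log_lambda) \
         - 0.5 * torch.log1p(torch.exp(-log_lambda)) - K1
    return kl.sum()
\end{verbatim}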
Additionally, we are inspired by~\cite{chu2019fair}, which promotes confidence in selecting connections in a graph by explicitly minimising their entropy $\mathcal{H}$ in a similar NAS setup. In our case, we want to achieve a high level of certainty in the operation selection across $\boldsymbol{\alpha}^{i,j}$, which is equivalent to minimising their joint entropy across the potential operations $K$ as $\sum_{i,j}\mathcal{H}(\mathbb{E}_{q(\boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha)}[\boldsymbol{z}^{i,j}])$. Applying a regularisation coefficient $\gamma_3$ to the entropy term, the final search objective $\mathcal{L}(.)$ is formulated in \eqref{eq:search_objective_2}.
\begin{multline}
\mathcal{A} \Leftarrow \boldsymbol{\theta}_{\boldsymbol{\alpha}}^{*} = \textrm{arg} \min_{\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi}} \mathcal{L}(\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w, \boldsymbol{\Psi}) = \\ = -\mathbb{E}_{q(\boldsymbol{\alpha},\boldsymbol{w})}[\log p(\boldsymbol{y}|\boldsymbol{x}, \boldsymbol{\alpha}, \boldsymbol{w}, \boldsymbol{\Psi})] + \\ + \gamma_1 \sum_{i,j,k,c} \mathcal{KL}(q(\boldsymbol{w}_{k,c}^{i,j}\mid \boldsymbol{\theta}_w) || p(\boldsymbol{w}_{k,c}^{i,j}))
+ \\ + \gamma_2 \sum_{i,j} \mathcal{KL}(q(\boldsymbol{\alpha}^{i,j}\mid \boldsymbol{\theta}_\alpha) || p(\boldsymbol{\alpha}^{i,j})) + \\ + \gamma_3 \sum_{i,j}\mathcal{H}(\mathbb{E}_{q(\boldsymbol{\alpha}\mid \boldsymbol{\theta}_\alpha)}[\boldsymbol{z}^{i,j}]) + const.
\label{eq:search_objective_2}
\end{multline}
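The entropy term can be estimated with a few Monte Carlo samples of $\boldsymbol{\alpha}$, reusing the \texttt{sample\_alpha} helper sketched earlier (the sample count is a hypothetical choice):
\begin{verbatim}
import torch

def entropy_reg(mu_a, log_sigma2_a, n_samples=4, eps=1e-8):
    # Estimate E_q[z] by averaging softmaxes over alpha samples,
    # then penalise the entropy of the expected probabilities.
    z_mean = torch.stack([
        torch.softmax(sample_alpha(mu_a, log_sigma2_a), dim=-1)
        for _ in range(n_samples)]).mean(dim=0)
    h = -(z_mean * torch.log(z_mean + eps)).sum(dim=-1)
    return h.sum()
\end{verbatim}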
\subsection{Algorithm}\label{sec:vinnas_algorithm}
Our algorithm, shown in Algorithm~\ref{alg:vinnas}, is based on SGD and relies on complete differentiation of all the operations. \textit{VINNAS} iterates between two stages: (1, lines 6--8) optimisation of $\boldsymbol{\theta}_w$ and $\boldsymbol{\Psi}$, and (2, lines 10--14) optimisation of $\boldsymbol{\theta}_\alpha$. This two-stage optimisation aims to avoid over-adaptation of the parameters, as suggested in~\cite{liu2018darts}.
\begin{algorithm}
\caption{\textit{VINNAS}}
\label{alg:vinnas}
\begin{algorithmic}[1]
\State Initialise $\boldsymbol{\mu}_{w}, \boldsymbol{\mu}_{\alpha}, \log\boldsymbol{\sigma}^2_{w}, \log\boldsymbol{\sigma}^2_{\alpha}$\;
\State Initialise scaling factors $\gamma_1, \gamma_2, \gamma_3 = 0$\;
\State Initialise $error=\infty$
\For{$epoch$ in search budget}
\State \textbf{Stage (1)}
\State Sample one $batch$ for updating $\boldsymbol{\theta}_w, \boldsymbol{\Psi}$ from $\mathcal{D}_{\boldsymbol{\theta}_w, \boldsymbol{\Psi}}$
\State Compute loss $\mathcal{L}_{\boldsymbol{\theta}_w, \boldsymbol{\Psi}}$ based on \eqref{eq:search_objective_2} with respect to $batch$
\State Update $\boldsymbol{\theta}_w, \boldsymbol{\Psi}$ by gradient descent: $\boldsymbol{\theta}_w \leftarrow \boldsymbol{\theta}_w - \nabla_{\boldsymbol{\theta}_w}\mathcal{L}_{\boldsymbol{\theta}_w, \boldsymbol{\Psi}}$; $\boldsymbol{\Psi} \leftarrow \boldsymbol{\Psi} - \nabla_{\boldsymbol{\Psi}}\mathcal{L}_{\boldsymbol{\theta}_w, \boldsymbol{\Psi}}$\;
\State \textbf{Stage (2)}
\If{$epoch \geq weight\ epochs$}
\State Sample one $batch$ for updating $\boldsymbol{\theta}_\alpha$ from $\mathcal{D}_{\boldsymbol{\theta}_\alpha}$\;
\State Compute loss $\mathcal{L}_{\boldsymbol{\theta}_\alpha}$ based on \eqref{eq:search_objective_2} with respect to $batch$\;
\State Update $\boldsymbol{\theta}_{\alpha}$ by gradient descent: $\boldsymbol{\theta}_{\alpha} \leftarrow \boldsymbol{\theta}_{\alpha} - \nabla_{\boldsymbol{\theta}_{\alpha}}\mathcal{L}_{ \boldsymbol{\theta}_{\alpha}}$\;
\EndIf
\State Compute error on $\mathcal{D}_{\boldsymbol{\theta}_\alpha}$
\If{Error on $\mathcal{D}_{\boldsymbol{\theta}_\alpha} <$ $error$}
\State Save $\boldsymbol{\theta}_\alpha$ and update $error$
\EndIf
\State Linearly increase $\gamma_1, \gamma_2, \gamma_3$\;
\EndFor
\State Choose $\mathcal{A}$ based on the positive signal to noise ratio $\frac{\boldsymbol{\mu}_\alpha}{\boldsymbol{\sigma}^2_\alpha}$
\end{algorithmic}
\end{algorithm}
After the initialisation of the parameters, the optimisation loops over stages (1) and (2) using two same-sized portions of the dataset. The optimisation of stage (2) is not started from the very beginning, but only after a certain number of epochs - \textit{weight epochs} - which serve as a warm-up for training the weights of the individual operations, to avoid oscillations and settling in local minima~\cite{liu2018darts}. The variance parameters are optimised as logarithms to guarantee numerical stability. We linearly increase the values of $\gamma_1,\gamma_2$ and $\gamma_3$ to force the cells to gradually choose the most relevant operations and weight patterns with respect to $\boldsymbol{\theta}_\alpha, \boldsymbol{\theta}_w$ and $\boldsymbol{\Psi}$. To avoid getting stranded in a local minimum, we do not enforce the regularisation from the very start of the search, meaning the $\gamma$s are initialised as zero. After each iteration of (1) and (2), we compute the error on the data sampled from $\mathcal{D}_{\boldsymbol{\theta}_\alpha}$ and save the $\boldsymbol{\theta}_\alpha$ if that error is lower than in previous iterations. The search is repeated until the search budget, defined as the number of epochs that the search is allowed to run, is depleted. Note that the parameters for the weights $\boldsymbol{\theta}_w$ or $\boldsymbol{\Psi}$ are discarded after the search. The main outcome of the search algorithm is the parameters $\boldsymbol{\theta}_\alpha$, which are used further to perform the architecture selection that leads to $\mathcal{A}$.
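The scheduling of the $\gamma$s can be sketched as a simple linear ramp (the concrete warm-up and ramp lengths are hypothetical):
\begin{verbatim}
def gamma_value(epoch, start, ramp, gamma_max):
    # Zero during warm-up, then a linear increase up to gamma_max.
    if epoch < start:
        return 0.0
    return min(gamma_max, gamma_max * (epoch - start) / ramp)
\end{verbatim}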
Signal-to-noise ratio (SNR) is a commonly used measure in signal processing to distinguish useful information from the unwanted noise contained in a signal. In the context of a NN architecture, the SNR can be used as an indicator of parameter importance; the higher the SNR, the more effective or important the parameter is to the model's predictions for a given task. In this work we propose to take the SNR into account when choosing the operations, through the learnt variances $\boldsymbol{\sigma}^2_\alpha$, which can be used to compute the positive SNR as $\frac{\boldsymbol{\mu}_\alpha}{\boldsymbol{\sigma}^2_\alpha}$. We consider the positive SNR due to the sign-sensitive softmax with respect to which the means were inferred. The positive SNR can then be used as the metric based on which the right operations are chosen, instead of relying only on the means $\boldsymbol{\mu}_\alpha$ as in previous work~\cite{liu2018darts}.
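A sketch of this selection rule, assuming \texttt{mu\_alpha} and \texttt{log\_sigma2\_alpha} hold the per-edge parameters with the $K$ candidate operations in the last dimension:
\begin{verbatim}
import torch

def select_ops(mu_alpha, log_sigma2_alpha):
    # Positive SNR mu / sigma^2 per candidate operation;
    # pick the operation with the highest ratio on each edge.
    snr = mu_alpha / torch.exp(log_sigma2_alpha)
    return snr.argmax(dim=-1)
\end{verbatim}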
The Moriond proceedings are printed from camera-ready manuscripts.
The following guidelines are intended to achieve a uniform rendering of the
proceedings. Authors with no connection to \LaTeX{} should use this
sample text as a guide for their presentation using their favorite
text editor (see section~\ref{subsec:final}).
\subsection{Producing the Hard Copy}\label{subsec:prod}
The hard copy may be printed using the procedure given below.
You should use
two files: \footnote{You can get these files from
our site at \url{https://indico.cern.ch/event/EDS2019}.}
\begin{description}
\item[\texttt{moriond.cls}] the style file that provides the higher
level \LaTeX{} commands for the proceedings. Don't change these parameters.
\item[\texttt{moriond.tex}] the main text. You can delete our sample
text and replace it with your own contribution to the volume, however we
recommend keeping an initial version of the file for reference.
\end{description}
The command for (pdf)\LaTeX ing is \texttt{pdflatex moriond}: do this twice to
sort out the cross-referencing.
{\bf Page numbers should not appear.}
\subsection{Headings and Text and Equations}
Please preserve the style of the
headings, text fonts and line spacing to provide a
uniform style for the proceedings volume.
Equations should be centered and numbered consecutively, as in
Eq.~\ref{eq:murnf}, and the {\em eqnarray} environment may be used to
split equations into several lines, for example in Eq.~\ref{eq:sp},
or to align several equations.
An alternative method is given in Eq.~\ref{eq:spa} for long sets of
equations where only one referencing equation number is wanted.
In \LaTeX, it is simplest to give the equation a label, as in
Eq.~\ref{eq:murnf}
where we have used \verb^\label{eq:murnf}^ to identify the
equation. You can then use the reference \verb^\ref{eq:murnf}^
when citing the equation in the
text which will avoid the need to manually renumber equations due to
later changes. (Look at
the source file for some examples of this.)
The same method can be used for referring to sections and subsections.
\subsection{Tables}
The tables are designed to have a uniform style throughout the proceedings
volume. It doesn't matter how you choose to place the inner
lines of the table, but we would prefer the border lines to be of the style
shown in Table~\ref{tab:exp}.
The top and bottom horizontal
lines should be single (using \verb^\hline^), and
there should be single vertical lines on the perimeter,
(using \verb^\begin{tabular}{|...|}^).
For the inner lines of the table, it looks better if they are
kept to a minimum. We've chosen a more complicated example purely as
an illustration of what is possible.
The caption heading for a table should be placed at the top of the table.
\begin{table}[t]
\caption[]{Experimental Data bearing on $\Gamma(K \rightarrow \pi \pi \gamma)$
for the $K^0_S, K^0_L$ and $K^-$ mesons.}
\label{tab:exp}
\vspace{0.4cm}
\begin{center}
\begin{tabular}{|c|c|c|l|}
\hline
& & & \\
&
$\Gamma(\pi^- \pi^0)\; s^{-1}$ &
$\Gamma(\pi^- \pi^0 \gamma)\; s^{-1}$ &
\\ \hline
\multicolumn{2}{|c|}{Process for Decay} & & \\
\cline{1-2}
$K^-$ &
$1.711 \times 10^7$ &
\begin{minipage}{1in}
$2.22 \times 10^4$ \\ (DE $ 1.46 \times 10^3)$
\end{minipage} &
\begin{minipage}{1.5in}
No (IB)-E1 interference seen but data shows excess events relative to IB over
$E^{\ast}_{\gamma} = 80$ to $100MeV$
\end{minipage} \\
& & & \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsection{Figures}\label{subsec:fig}
If you wish to `embed' an image or photo in the file, you can use
the present template as an example. The command
\verb^\includegraphics^ can take several options, like
\verb^draft^ (just for testing the positioning of the figure)
or \verb^angle^ to rotate a figure by a given angle.
The caption heading for a figure should be placed below the figure.
\subsection{Limitations on the Placement of Tables,
Equations and Figures}\label{sec:plac}
Very large figures and tables should be placed on a page by themselves. One
can use the instruction \verb^\begin{figure}[p]^ or
\verb^\begin{table}[p]^
to position these, and they will appear on a separate page devoted to
figures and tables. We would recommend making any necessary
adjustments to the layout of the figures and tables
only in the final draft. It is also simplest to sort out line and
page breaks in the last stages.
\subsection{Acknowledgments, Appendices, Footnotes and the Bibliography}
If you wish to have
acknowledgments to funding bodies etc., these may be placed in a separate
section at the end of the text, before the Appendices. This should not
be numbered so use \verb^\section*{Acknowledgments}^.
It's preferable to have no appendices in a brief article, but if more
than one is necessary then simply copy the
\verb^\section*{Appendix}^
heading and type in Appendix A, Appendix B etc. between the brackets.
Footnotes are denoted by a letter superscript
in the text,\footnote{Just like this one.} and references
are denoted by a number superscript.
Bibliography can be generated either manually or through the BibTeX
package (which is recommended). In this sample we
have used \verb^\bibitem^ to produce the bibliography.
Citations in the text use the labels defined in the bibitem declaration,
for example, the first paper by Jarlskog~\cite{ja} is cited using the command
\verb^\cite{ja}^.
\subsection{Photograph}
You may want to include a photograph of yourself below the title
of your talk. A scanned photo can be
directly included using the default command\\
\verb^\newcommand{}{\includegraphics[height=35mm]{mypicture}}^\\
just before the
\verb^\begin{document}^
line. If you don't want to include your photo, just comment this line
by adding a \verb^%^ at the beginning of the line and uncomment the next one.
\subsection{Final Manuscript}\label{subsec:final}
All files (.tex, figures and .pdf) should be sent by the {\bf 15th of May 2017}
by e-mail
to \\
{\bf [email protected]}.\\
\section{Sample Text }
The following may be (and has been) described as `dangerously irrelevant'
physics. The Lorentz-invariant phase space integral for
a general n-body decay from a particle with momentum $P$
and mass $M$ is given by:
\begin{equation}
I((P - k_i)^2, m^2_i, M) = \frac{1}{(2 \pi)^5}\!
\int\!\frac{d^3 k_i}{2 \omega_i} \! \delta^4(P - k_i).
\label{eq:murnf}
\end{equation}
The only experiment on $K^{\pm} \rightarrow \pi^{\pm} \pi^0 \gamma$ since 1976
is that of Bolotov {\it et al}.~\cite{bu}
There are two
necessary conditions required for any acceptable
parametrization of the
quark mixing matrix. The first is that the matrix must be unitary, and the
second is that it should contain a CP violating phase $\delta$.
In Sec.~\ref{subsec:fig} the connection between invariants (of
form similar to J) and unitarity relations
will be examined further for the more general $ n \times n $ case.
The reason is that such a matrix is not a faithful representation of the group,
i.e. it does not cover all of the parameter space available.
\begin{equation}
\renewcommand{\arraystretch}{1.2}
\begin{array}{rc@{\,}c@{\,}l}
\bf{K} & = && Im[V_{j, \alpha} {V_{j,\alpha + 1}}^*
{V_{j + 1,\alpha }}^* V_{j + 1, \alpha + 1} ] \\
& & + & Im[V_{k, \alpha + 2} {V_{k,\alpha + 3}}^*
{V_{k + 1,\alpha + 2 }}^* V_{k + 1, \alpha + 3} ] \\
& & + & Im[V_{j + 2, \beta} {V_{j + 2,\beta + 1}}^*
{V_{j + 3,\beta }}^* V_{j + 3, \beta + 1} ] \\
& & + & Im[V_{k + 2, \beta + 2} {V_{k + 2,\beta + 3}}^*
{V_{k + 3,\beta + 2 }}^* V_{k + 3, \beta + 3}] \\
& & \\
\bf{M} & = && Im[{V_{j, \alpha}}^* V_{j,\alpha + 1}
V_{j + 1,\alpha } {V_{j + 1, \alpha + 1}}^* ] \\
& & + & Im[V_{k, \alpha + 2} {V_{k,\alpha + 3}}^*
{V_{k + 1,\alpha + 2 }}^* V_{k + 1, \alpha + 3} ] \\
& & + & Im[{V_{j + 2, \beta}}^* V_{j + 2,\beta + 1}
V_{j + 3,\beta } {V_{j + 3, \beta + 1}}^* ] \\
& & + & Im[V_{k + 2, \beta + 2} {V_{k + 2,\beta + 3}}^*
{V_{k + 3,\beta + 2 }}^* V_{k + 3, \beta + 3}],
\\ & &
\end{array}
\label{eq:spa}
\end{equation}
where $ k = j$ or $j+1$ and $\beta = \alpha$ or $\alpha+1$, but if
$k = j + 1$, then $\beta \neq \alpha + 1$ and similarly, if
$\beta = \alpha + 1$ then $ k \neq j + 1$.\footnote{An example of a
matrix which has elements
containing the phase variable $e^{i \delta}$ to second order, i.e.
elements with a
phase variable $e^{2i \delta}$ is given at the end of this section.}
There are only 162 quark mixing matrices using these parameters
which are
to first order in the phase variable $e^{i \delta}$ as is the case for
the Jarlskog parametrizations, and for which J is not identically
zero.
It should be noted that these are physically identical and
form just one true parametrization.
\begin{eqnarray}
T & = & Im[V_{11} {V_{12}}^* {V_{21}}^* V_{22}] \nonumber \\
& & + Im[V_{12} {V_{13}}^* {V_{22}}^* V_{23}] \nonumber \\
& & - Im[V_{33} {V_{31}}^* {V_{13}}^* V_{11}].
\label{eq:sp}
\end{eqnarray}
\begin{figure}
\begin{minipage}{0.33\linewidth}
\centerline{\includegraphics[width=0.7\linewidth,draft=true]{figexamp}}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[width=0.7\linewidth]{figexamp}}
\end{minipage}
\hfill
\begin{minipage}{0.32\linewidth}
\centerline{\includegraphics[angle=-45,width=0.7\linewidth]{figexamp}}
\end{minipage}
\caption[]{same figure with draft option (left), normal (center) and rotated (right)}
\label{fig:radish}
\end{figure}
\section*{Acknowledgments}
This is where one places acknowledgments for funding bodies etc.
Note that there are no section numbers for the Acknowledgments, Appendix
or References.
\section*{Appendix}
We can insert an appendix here and place equations so that they are
given numbers such as Eq.~\ref{eq:app}.
\begin{equation}
x = y.
\label{eq:app}
\end{equation}
\section*{References}
\section{Introduction}
It was long anticipated that at sufficiently high temperatures and energy densities strongly interacting matter would no longer be confined into hadrons.\cite{Hagedorn:1965st,Bondorf:1978kz} Such deconfined matter, the Quark--Gluon Plasma (QGP), is assumed to have filled the early universe during its first microseconds.
In the past decades, large experiments at RHIC and the LHC have shown that the QGP exhibits strong collective behavior, similar to an extremely hot and almost perfect fluid.\cite{Adcox:2004mh,Aamodt:2010pa}
Data from the Run 2 phase of the LHC allowed for precision measurements aimed at a detailed understanding of QGP properties.
ALICE is a dedicated heavy-ion experiment at the CERN LHC accelerator with excellent identification capabilities in collisions with high particle multiplicities in the final state.\cite{Abelev:2014ffa} This contribution summarizes some of the most intriguing results.
\section{Production of identified particles}
ALICE carried out a broad set of high-precision measurements of identified particles at several collision energies and in different colliding systems.\cite{Acharya:2018qsh,Acharya:2018eaq,Aamodt:2010my} The mass-dependent hardening of light-particle spectra with increasing multiplicity suggests that spectral slopes are determined by a statistical freezeout temperature that is modified by the radial expansion of the freezeout surface. An oft-used parametrization is the blast-wave model, where particles are produced on an expanding hypersurface.\cite{Schnedermann:1993ws} The spectra can then be determined by the radial expansion velocity $\beta_{\rm T}$ and the kinetic freeze-out temperature $T_{\rm kin}$.
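For reference, in this parametrization the transverse-mass spectrum takes the form (a sketch of the standard expression from Ref.~\cite{Schnedermann:1993ws}, with transverse rapidity $\rho(r)=\tanh^{-1}\beta_{\rm T}(r)$ and modified Bessel functions $I_0$, $K_1$):
\[
\frac{1}{m_{\rm T}}\frac{{\rm d}N}{{\rm d}m_{\rm T}} \propto \int_0^R r\,{\rm d}r\; m_{\rm T}\, I_0\!\left(\frac{p_{\rm T}\sinh\rho}{T_{\rm kin}}\right) K_1\!\left(\frac{m_{\rm T}\cosh\rho}{T_{\rm kin}}\right).
\]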
The results of a simultaneous fit to the spectra of light particles are shown in Fig.~\ref{fig:ToPionRatios} (left) as a function of multiplicity for various collision systems and energies.
Although the trends are similar in all three collision systems, similar values of the expansion velocity correspond to smaller freeze-out temperatures in small systems than in large ones. On the other hand, the dependence on collision energy within a given collision system is weak.
\begin{figure}[h]
\center
\includegraphics[width=0.6\columnwidth]{BlastWaveFits.pdf}%
\includegraphics[width=0.4\columnwidth]{ToPionRatios.pdf}%
\caption{\label{fig:ToPionRatios}%
{\it Left:} $T_{\rm kin}$ and $\beta_{\rm T}$ parameters from blast-wave fits in different colliding systems and collision energies. {\it Right:} p, $\mathrm{K}^0_{\rm S}$, $\Lambda+\bar{\Lambda}$, $\Xi^-+\bar{\Xi}^+$, $\Omega^-+\bar{\Omega}^+$ and $\phi$ to $\pi^\pm$ ratios in function of event multiplicity in pp collisions at $\sqrt{s}$ = 7 and 13 TeV, p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 and 8.16 TeV, PbPb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV and Xe-Xe collisions at $\sqrt{s_\mathrm{NN}}$ = 5.44 TeV.}
\end{figure}
Strangeness enhancement was traditionally considered as a smoking-gun signature of the QGP formation.\cite{Rafelski:1982pu}
Figure~\ref{fig:ToPionRatios} (right) summarizes ALICE measurements of strange and non-strange hadron yields normalized by the yield of pions, across several collision systems and energies as a function of charged-hadron event multiplicity at mid-rapidity. There is a clear sign of enhancement that increases with strangeness content. However, no significant energy and system dependence is present at any given multiplicity, and a universal smooth evolution can be observed with event multiplicity regardless of collision system or energy.
These observations suggest that the production of light and strange particles are driven by the characteristics of the final state. An implication of this is that penetrating probes are required to learn about the onset and the nature of QGP production.
\section{Collective phenomena in small and large systems}
In the picture of the strongly interacting QGP emerging in the era of RHIC, collective phenomena were associated with the production of the QGP in high-energy heavy-ion collisions. The LHC experiments, however, discovered several collective features in smaller pp and pA systems with sufficiently high multiplicity.\cite{Khachatryan:2016txc,Abelev:2012ola}
The azimuthal momentum anisotropy of the final-state particles, also called flow, is often described in a Fourier decomposition.\cite{Voloshin:1994mz}
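Explicitly, the azimuthal distribution is expanded as
\[
\frac{{\rm d}N}{{\rm d}\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\left[n\left(\varphi-\Psi_n\right)\right],
\]
where $\Psi_n$ denotes the $n$-th harmonic symmetry plane.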
While a substantial second Fourier component $v_2$ (``elliptic flow'') has traditionally been associated with the collective motion of the final state, the presence of higher-order (especially the odd) coefficients highlights the importance of the initial state in the development of the azimuthal anisotropy.\cite{Takahashi:2009na} In fact, the $v_n$ are sensitive to the full evolution of the system from the initial conditions through the QGP until the hadronic phase.
Figure~\ref{fig:vnCoeffs} (left) shows the comparison of $v_2$ coefficients for several particle species in semi-central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV.\cite{Acharya:2018zuq} At low $p_\mathrm{T}$ a clear mass ordering of $v_2$ is present. At intermediate $p_\mathrm{T}$ (in the range $2.5 \lesssim p_\mathrm{T} \lesssim 6$ GeV/$c$) an approximate constituent-quark-number scaling can be observed: baryons and mesons each group together, with a clear separation between the two groups, so that $v_2/n_q$ (with $n_q=3$ for baryons and $n_q=2$ for mesons) approximately collapses onto a single curve when plotted against $p_\mathrm{T}/n_q$. Above $p_\mathrm{T}\approx 6$ GeV/$c$, however, parton energy loss becomes dominant and the scaling breaks down.
The right panels of Fig.~\ref{fig:vnCoeffs} present the $v_n$ coefficients in pp, p--Pb, Xe--Xe, and Pb--Pb systems.\cite{Acharya:2019vdf} Long-range multiparticle correlations are clearly observed in all systems, and the two-particle, multi-particle and subevent methods yield qualitatively the same results. The slight systematic difference between the two-particle method and the other methods is attributed to non-flow contributions (non-collective correlations).
The ordering of $v_2$, $v_3$ and $v_4$ is the same regardless of the system, and there is a quantitative match of the $v_n$ coefficients across the systems at low charged-hadron multiplicity ($N_{\rm ch}$). At higher $N_{\rm ch}$ values, however, $v_2$ does not scale with $N_{\rm ch}$, which suggests different initial geometries in small and large systems. Also, neither pQCD-based nor hydrodynamics-based models\cite{Sjostrand:2014zea,Mantysaari:2017cni} provide a satisfactory description of the pp and p--Pb data.
\begin{figure}[h]
\center
\includegraphics[width=0.6\columnwidth]{v2_pid_V0A.pdf}%
\includegraphics[width=0.4\columnwidth]{vnAll_PYTHIASchenke.pdf}%
\caption{\label{fig:vnCoeffs}%
{\it Left:} The $p_\mathrm{T}$-differential $v_2$ of $\pi^\pm$, K$^\pm$, K$^0_{\rm S}$, p+$\bar{\mathrm{p}}$, $\Lambda+\bar{\Lambda}$, and $\phi$ in the 10--20\% centrality class of Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV. {\it Top right:} Multiplicity dependence of $v_n$ obtained with two-particle cumulants in pp collisions at $\sqrt{s}$ = 13 TeV, p--Pb and Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV, and Xe--Xe collisions at $\sqrt{s_\mathrm{NN}}$ = 5.44 TeV. {\it Bottom right:} Multiplicity dependence of $v_2$ coefficients obtained with multiparticle cumulants.}
\end{figure}
\section{Medium interactions}
Interactions of high-$p_\mathrm{T}$ self-generated probes with the hot medium have traditionally been addressed by the measurement of nuclear modification factors, $R_\mathrm{AA}$, where the yields of particles or jets in heavy-ion collisions are compared to reference yields in pp collisions, scaled by the average number of binary nucleon--nucleon collisions within a nucleus--nucleus collision.
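In a common convention,
\[
R_\mathrm{AA}(p_\mathrm{T})\ =\ \frac{\mathrm{d}N_\mathrm{AA}/\mathrm{d}p_\mathrm{T}}
{\langle N_{\rm coll}\rangle\; \mathrm{d}N_\mathrm{pp}/\mathrm{d}p_\mathrm{T}}\ ,
\]
so that $R_\mathrm{AA}=1$ is expected for hard processes in the absence of nuclear effects, while $R_\mathrm{AA}<1$ signals suppression.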
While the $R_\mathrm{AA}$ is sensitive to hadronization and radial flow at low $p_\mathrm{T}$, a universal high-$p_\mathrm{T}$ suppression has been found among all the light and strange hadrons at RHIC and the LHC,\cite{Adler:2006hu,Adam:2017zbf} which can be associated with parton energy loss in the colored medium.
The high delivered luminosities and the high-precision capabilities of the current experiments have recently opened the possibility of measuring more refined observables, such as correlation or jet-structure observables, which aim to study the development of jets within the medium. Grooming techniques allow us to isolate hard jet substructure while mitigating the effects of soft fragmentation.\cite{Asquith:2018igt}
ALICE has measured the jet substructure variable $z_g=\frac{\min({p_\mathrm{T}}_1,{p_\mathrm{T}}_2)}{{p_\mathrm{T}}_1+{p_\mathrm{T}}_2}$,
where ${p_\mathrm{T}}_1$ and ${p_\mathrm{T}}_2$ are the leading and subleading prongs from the first intra-jet splitting determined using an iterative declustering.\cite{Acharya:2019djg}
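In the soft-drop grooming procedure typically used for such measurements, the jet is declustered iteratively, discarding the softer branch, until a splitting satisfies
\[
z\ >\ z_{\rm cut}\left(\frac{\Delta R}{R}\right)^{\beta}\,,
\]
with the common parameter choice $z_{\rm cut}=0.1$ and $\beta=0$, for which the condition reduces simply to $z_g>0.1$.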
Figure~\ref{fig:JetSubstruct} shows $z_g$ distributions in central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}=2.76$ TeV, in four different categories by the opening angle between the two prongs, $\Delta R$.
\begin{figure}[h]
\center
\includegraphics[width=\columnwidth]{JetSubstructure.pdf}%
\caption{\label{fig:JetSubstruct}%
Detector-level Pb--Pb distributions of $z_g$~for $R$=0.4 jets with varying minimum/maximum angular separation of subjets ($\Delta R$) for jets in the charged jet momentum range $80\le p_\mathrm{T}\le 120$ GeV/$c$, compared to model calculations.
}
\end{figure}
While embedded PYTHIA pp simulations\cite{Sjostrand:2014zea} describe the Pb--Pb data reasonably well overall, a reduction of small-angle splittings and an enhancement of large-angle splittings are observed in data compared to the embedded simulations. Models that include the medium response from jet--medium interactions provide better agreement with the data.\cite{KunnawalkamElayavalli:2017hxo} This highlights the importance of the interplay between early jet development and the medium.
\section{Direct photons}
The strongly interacting deconfined matter created in high-energy heavy-ion collisions is transparent to electromagnetic probes. Direct photons (photons not originating from hadron decays) are therefore able to carry information from all stages of the reaction, including hard scattering, jet radiation, the QGP, as well as the hadron gas. An excess in the low-$p_\mathrm{T}$ direct-photon spectrum above the yields expected from pp measurements is attributed to the thermal radiation of the hot medium, and implies the presence of the QGP\cite{Adare:2008ab,Adam:2015lda} with an initial temperature between 300 and 600 MeV in central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}=2.76$ TeV. Figure~\ref{fig:DirectPhotons} (left) shows recent ALICE measurements of direct-photon yields in p--Pb collisions. No excess above pQCD-based models including cold-nuclear-matter effects is present in the thermal region, corroborating the above interpretation.
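Experimentally, the direct-photon signal is commonly quantified through the double ratio
\[
R_\gamma\ =\ \frac{(\gamma_{\rm inc}/\pi^0)_{\rm meas}}{(\gamma_{\rm decay}/\pi^0)_{\rm calc}}\,,
\qquad
\gamma_{\rm dir}\ =\ \Big(1-\frac{1}{R_\gamma}\Big)\,\gamma_{\rm inc}\,,
\]
in which many systematic uncertainties cancel; $R_\gamma>1$ signals a direct-photon excess over the decay-photon expectation.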
Figure~\ref{fig:DirectPhotons} (right) shows the azimuthal anisotropy of direct photons in semi-central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}}=2.76$ TeV. The $v_2$ of direct photons is in agreement with that of hadron decay photons. All current models, including those that assume the dominance of late stages in the observed flow,\cite{Shen:2016zpp} predict lower flow for the direct photons. This observation questions our current understanding of the role of thermal photons.
\begin{figure}[h]
\center
\includegraphics[width=0.4\columnwidth]{DirGamma_pPb.pdf}%
\hspace{10mm}
\includegraphics[width=0.5\columnwidth]{v2dir_combined_theory.pdf} \caption{\label{fig:DirectPhotons}%
{\it Left:} Invariant yield of the measured direct photons for several multiplicity bins and the full non-single diffractive sample of p--Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV, compared to models.
{\it Right:} Elliptic flow of direct photons compared to the expected flow of decay photons as well as model calculations in the 20--40\% centrality class.%
}
\end{figure}
\section{Heavy-flavor mesons and quarkonia}
Heavy-flavor (charm and beauty) quarks are produced almost exclusively in early hard processes. Measurements of heavy-flavor production in small collision systems can therefore be used as benchmarks of perturbative quantum chromodynamics (pQCD) models. Heavy-flavor particles, especially when compared to light flavor, also provide insight into softer QCD mechanisms such as multiple-parton interactions and flavor-dependent fragmentation. Because of their long lifetime, heavy quarks act as self-generated penetrating probes in collisions where a nuclear medium is present, providing a means to understand the properties of hot and cold nuclear matter (in nucleus--nucleus and proton--nucleus collisions, respectively). While the high-$p_\mathrm{T}$ range mostly brings information about the collisional and radiative energy-loss mechanisms in the perturbative regime, measurements at lower $p_\mathrm{T}$ can address collective behavior and give insight into coalescence mechanisms between heavy and light flavor.\cite{Andronic:2015wma}
Both the ALICE heavy-flavor electron (HFE) and the D-meson measurements in p--Pb collisions agree with the expectations from pp collisions, suggesting that charm production is not modified substantially by cold-nuclear-matter effects at mid-rapidity.\cite{Adam:2015qda,Adam:2016ich} Figure~\ref{fig:pPbHFjets} (left) shows measurements of the nuclear modification of jets containing a heavy-flavor electron. Regardless of the choice of the jet resolution parameter, the corresponding $R_{\rm pPb}$ is consistent with unity.
Figure~\ref{fig:pPbHFjets} (right) shows that the cross-section of jets containing a beauty quark is consistent with POWHEG HVQ\cite{Frixione:2007nu} pQCD-based predictions. Although the uncertainties are rather sizeable, these new results indicate that the production of jets initiated by charm and beauty quarks is not influenced strongly by the presence of cold nuclear matter.
\begin{figure}[h]
\center
\includegraphics[width=.5\columnwidth]{HFeJetRpPbAllR.pdf}%
\includegraphics[width=.5\columnwidth]{xsec_ratio_bjets_powHVQ.pdf}
\caption{\label{fig:pPbHFjets}%
{\it Left:} Nuclear modification factor of jets containing a HFE in p--Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV, reconstructed with resolution parameters $R = 0.3$, 0.4, and 0.5. {\it Right:} Cross-section of beauty jets in p--Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV, reconstructed with the anti-$k_{\rm T}$ algorithm with a resolution parameter $R=0.4$, obtained with secondary-vertex tagging. The data are compared to the POWHEG HVQ model scaled by the Pb nuclear mass number, and their ratio is shown in the bottom panel.
}
\end{figure}
Figure~\ref{fig:Dmesons} presents the nuclear modification factor $R_\mathrm{AA}$ and the azimuthal anisotropy parameter $v_2$ of non-strange as well as strange D mesons. At high $p_\mathrm{T}$, a substantial suppression can be observed, consistent with that of light mesons (not shown). The absence of mass ordering between light and heavy flavors contradicts na\"{i}ve expectations of mass-ordered energy loss, but can be understood by models taking dead-cone and color-charge effects in fragmentation into account.\cite{Djordjevic:2014hka}
\begin{figure}[h!]
\center
\includegraphics[width=0.45\columnwidth]{DmesonAverageDs_vs_models_010.pdf}%
\hspace{0.05\columnwidth}%
\includegraphics[width=0.45\columnwidth]{PromptDsV2_SP_PbPb_5TeV_wModels.pdf}%
\caption{\label{fig:Dmesons}%
{\it Left}: Average non-strange D-meson $R_\mathrm{AA}$ and prompt ${\rm D}_{\rm s}^+$ $R_\mathrm{AA}$ in 30--50\% central Pb--Pb collisions at $\sqrt{s_\mathrm{NN}} = 5.02$ TeV, compared with theoretical predictions from transport models.
{\it Right}: Prompt ${\rm D}_{\rm s}^+$ and average non-strange D-meson $v_2$ as a function of $p_\mathrm{T}$ in the 30--50\% centrality class of Pb--Pb collisions at $\sqrt{s_\mathrm{NN}} = 5.02$ TeV, compared to models implementing heavy-quark transport in an hydrodynamically expanding medium.
}
\end{figure}
Focusing on the low-$p_\mathrm{T}$ regime, the D mesons show less suppression than light flavor, and there is an indication of weaker suppression of strange than of non-strange D mesons. At the same time, both strange and non-strange D mesons exhibit an azimuthal anisotropy that is comparable to that of light mesons. This is well described by models assuming the coalescence of charm with light quarks in an environment of relative strangeness enhancement.\cite{He:2014cla,Song:2015sfa,Plumari:2017ntm}
While open heavy flavor mostly serves the tomographic study of the medium, the thermodynamic properties of the QGP can be addressed by looking at the production of quarkonia (bound states of a heavy quark and its antiquark).
Sequential suppression of different quarkonium states in a colored medium by the Debye screening of the Q$\bar{\rm Q}$ potential has long been proposed as a sensitive thermometer of the QGP.\cite{Mocsy:2007jz}
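Schematically, the color-screened potential may be pictured as $V(r)\propto -\,{\rm e}^{-r/\lambda_{\rm D}(T)}/r$: once the Debye length $\lambda_{\rm D}(T)$ drops below the radius of a given bound state, that state dissociates, so loosely bound states such as the $\psi(2S)$ or $\Upsilon(3S)$ melt at lower temperatures than the tightly bound $J/\psi$ and $\Upsilon(1S)$.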
The $\Upsilon$ bottomonium states are found to follow the predicted sequential suppression pattern in heavy-ion collisions at both RHIC and the LHC.\cite{Adamczyk:2016dzv,Chatrchyan:2012lxa,Acharya:2018mni}
The production of the $J/\psi$ mesons is, however, enhanced by the late regeneration of charmonia, especially at LHC energies.\cite{Chen:2019sak,Abelev:2012rv}
Figure~\ref{fig:OniumFlow} shows the azimuthal anisotropy of $J/\psi$ and, for the first time, of the $\Upsilon$ mesons.
The $J/\psi$ flow patterns exhibit substantial collective behavior, although less so than D mesons. This is in qualitative agreement with strong charmonium recombination, although it is challenging for models to obtain a quantitative description.\cite{Acharya:2018pjd}
The $\Upsilon(1S)$ state, however, appears to be the only hadron measured at the LHC that shows no $v_2$ within the current precision.\cite{Acharya:2019hlv} This suggests that bottomonia are produced early, and are decoupled from the collectively moving medium, and that late recombination is substantially weaker than in the case of charmonia.
\begin{figure}[h!]
\begin{minipage}{0.48\columnwidth}
\center
\includegraphics[width=.86\columnwidth]{Upsilon_v2.pdf}%
\caption{\label{fig:OniumFlow}%
Elliptic flow of $\Upsilon(1S)$ mesons in Pb--Pb collisions at $\sqrt{s_\mathrm{NN}} = 5.02$ TeV as a function of $p_\mathrm{T}$, compared to that of $J/\psi$ mesons as well as to model calculations.
}
\end{minipage}%
\hfill
\begin{minipage}{0.5\columnwidth}
\center
\vspace{-7mm}
\includegraphics[width=\columnwidth]{LcDAverageMc_pp.pdf}%
\caption{\label{fig:LambdaC}%
The $\Lambda^+_{\rm c}/{\rm D}^0$ ratio as a function of $p_\mathrm{T}$ measured in pp collisions at $\sqrt{s}= 5.02$ and 7 TeV, compared to model calculations.
}
\end{minipage}%
\end{figure}
Measurements of baryons containing heavy flavor provide valuable input for the theoretical understanding of heavy-flavor fragmentation. Figure~\ref{fig:LambdaC} presents the charmed baryon-to-meson ratio $\Lambda_{\rm c}^+/{\rm D}^0$ from recent ALICE measurements in pp collisions. A significant excess is observed in the low to intermediate $p_\mathrm{T}$ range, which cannot be reproduced by commonly used models such as PYTHIA 8.\cite{Sjostrand:2014zea} Note that a similar excess is observed in the $\Xi_{\rm c}^0/{\rm D}^0$ ratio.\cite{Acharya:2017lwf}
As the fragmentation models are tuned using $e^+e^-$ collision data, this raises the question of whether heavy-flavor fragmentation depends on the collision system. Some recent model developments, however, are able to capture the observed trends in the $\Lambda_{\rm c}^+/{\rm D}^0$ ratio, either with string formation beyond the leading-color approximation\cite{Christiansen:2015yqa} or via feed-down contributions from an augmented set of charmed baryon states\cite{He:2019tik}.
\section{Summary and outlook}
During the Run 1 and Run 2 data-taking periods, the ALICE experiment collected large datasets of pp, p--Pb and nucleus--nucleus collisions at several LHC energies. These data allow for an understanding of the system-size and energy dependence of hadron production, as well as for studying the onset of QGP effects and the origin of collective-like behavior in small systems.
These results contribute to a detailed understanding of the properties of the QGP. This small selection includes intriguing results on global observables that inform us about particle production, bulk-property measurements of several species that aim to understand collectivity in the hot matter, and penetrating probes that study energy loss and jet development within the medium. The flavor-dependent studies include precision charm measurements and a wide set of beauty measurements.
After the second long shutdown (LS2), in the Run 3 phase from 2021 on, the LHC will provide a much increased interaction rate, up to 50 kHz in Pb--Pb collisions. The events will be recorded with upgraded ITS, TPC, MFT and FIT detectors, paired with a new continuous-readout and computing system. In Run 3 and later in Run 4, ALICE anticipates an integrated luminosity of 13 nb$^{-1}$ altogether, up to two orders of magnitude more than collected in Run 1 and Run 2 together. This will allow for a more detailed understanding of the heavy-flavor baryonic sector and a wide range of beauty measurements. With the study of jet structures and event shapes, the soft-hard boundary regime of the strong interaction can be understood in great detail.\cite{Noferini:2018are}
\section*{Acknowledgments}
This work has been supported by the Hungarian NKFIH/OTKA K 120660 grant and the J\'anos Bolyai scholarship of the Hungarian Academy of Sciences.
\section*{References}
\section{Introduction}
\vspace*{-0.5pt}
\noindent
Present astronomical observations related to abundances of the light
elements $^4{\rm He}$ and $^4\overline{\rm He}$, the content of
protons versus antiprotons in cosmic rays, etc, lead to the
conclusion\cite{RDTMQ,KT} that, before the nucleosynthesis epoch, the
Universe must have possessed an excess in the baryon number $B$ which
is expressed by the small baryon-to-photon ratio of number densities
\begin{equation}
\label{nB}
\frac{n_{\Delta B}}{n_\gamma}\ =\ (4-7)\times 10^{-10}\ .
\end{equation}
This baryonic asymmetry, $n_{\Delta B} = n_B - n_{\bar{B}} \approx
n_B$, should have survived until today if there had been no processes
that violate the $B$ number and/or modify the number density of
photons $n_\gamma$. Sakharov, assuming that the Universe was created
initially in a $B$-conserving symmetric state, was able to derive
three necessary conditions\cite{ADS} to explain the baryon asymmetry
in the Universe (BAU):
\begin{itemize}
\item[ (i)] There must be $B$-violating interactions in nature, so
that a net $B$ number can in principle be generated.
\item[ (ii)] The $B$-violating interactions should violate the
discrete symmetries of charge conjugation (C) and that resulting
from the combined action of charge and parity transformations (CP).
In this way, an excess in baryons over antibaryons, $\Delta B$, is
produced.
\item[(iii)] The $B$- and CP-violating processes must be out of
thermal equilibrium, \cite{KW,EWK&SW} namely they should have an
interaction rate smaller than the expansion rate of the Universe.
This last requirement ensures that the produced $\Delta B$ is not
washed out by the inverse processes.
\end{itemize}
Grand unified theories (GUT's) can in principle contain all the above
necessary ingredients for baryogenesis.\cite{MY} In such theories,
out-of-equilibrium $B$- and CP-violating decays of super-heavy bosons
with masses near to the grand unification scale $M_X\approx 10^{15}$
GeV can produce the BAU. However, this solution to the BAU has its
own problems. The main difficulty is the generic feature that minimal
GUT's predict very small CP violation, since it occurs at very high
orders in perturbation theory. This problem may be avoided by
augmenting GUT's with extra Higgs representations.\cite{RDTMQ} Also,
GUT's must comply with limits obtained by experiments on the stability
of the proton. Such experiments put tight constraints on the masses
of the GUT bosons mediating $B$ violation and their couplings to the
matter. Another severe limitation to scenarios of baryogenesis arises
from the anomalous $B+L$-violating processes, also known as
sphalerons,\cite{hooft,sphal,KRS} which are in thermal equilibrium for
temperatures\cite{AMcL,BS} $200 \stackrel{\displaystyle <}{\sim} T
\stackrel{\displaystyle <}{\sim} 10^{12}$ GeV. Unlike $B+L$,
sphalerons preserve the quantum number $B-L$. Therefore, any
primordial BAU generated at the GUT scale should not rely on
$B+L$-violating operators, which imposes a further non-trivial
constraint on unified theories. In that vein, Kuzmin, Rubakov and
Shaposhnikov\cite{KRS} suggested that the same anomalous
$B+L$-violating electroweak interactions may produce the observed
excess in $B$ during a first-order electroweak phase transition. Such
a mechanism crucially depends on the Higgs-boson mass
$M_H$,\cite{ADD,RS} and the experimental fact $M_H>80$ GeV practically
rules out this scenario of electroweak baryogenesis.\cite{EWphase}
Therefore, baryogenesis provides the strongest indication against the
completeness of the SM, as well as poses limits on its possible
new-physics extensions.
Among the many baryogenesis scenarios invoked in the literature, the
most attractive one is due to Fukugita and Yanagida\cite{FY}, and our
emphasis will be put on their scenario in this review article. In
such a scenario, out-of-equilibrium $L$-violating decays of heavy
Majorana neutrinos $N_i$, with masses $m_{N_i} \gg T_c$, produce an
excess in the lepton number $L$ which is converted into the desired
excess in $B$ by means of $B+L$-violating sphaleron interactions,
which are in thermal equilibrium above the critical temperature $T_c$.
Over the last years, many authors have discussed such a scenario, also
known as baryogenesis through
leptogenesis.\cite{MAL,CEV,epsilonprime,LS1,Paschos,APRD} However, we
should remark that heavy isosinglet neutrinos are not indispensable
for creating an excess in the $L$ number. Recently,
Ma and Sarkar\cite{Ma/Sarkar} suggested a leptogenesis scenario based
on a generalized Higgs-triplet model,\cite{Triplet1,Triplet2} where
the leptonic asymmetry is generated by out-of-equilibrium CP-violating
decays of heavy doubly charged Higgs triplets into charged leptons.
However, such alternatives seem to face the known gravitino problem if
they are to be embedded in a supersymmetric theory.\cite{Del/Sar} The
charged Higgs triplets or their supersymmetric partners may interact
strongly with gravitinos and produce them in large abundances. The
slow decay rate of gravitinos during the nucleosynthesis epoch
distorts the abundances of the light elements at a level inconsistent
with present observations.
Mechanisms that enhance CP violation play a decisive role in
baryogenesis. Using the terminology known from the $K^0\bar{K}^0$
system,\cite{reviewCP} one may distinguish the following two cases:
\begin{itemize}
\item[ (i)] CP violation originating from the interference between the
tree-level decay amplitude and the absorptive part of the one-loop
vertex. Such a mechanism is usually called $\varepsilon'$-type CP
violation.\cite{FY,MAL,CEV,epsilonprime}
\item[(ii)] CP violation induced by the interference of the tree-level
graph and the absorptive part of a one-loop self-energy transition.
This mechanism is termed $\varepsilon$-type CP
violation.\cite{IKS,BR,KRS,LS1,Paschos,APRD}
\end{itemize}
\begin{figure}
\begin{center}
\begin{picture}(360,110)(0,0)
\SetWidth{0.8}
\ArrowLine(0,70)(30,70)\ArrowLine(30,70)(70,70)
\Line(70,70)(100,70)\DashArrowArc(50,70)(20,0,180){5}
\Text(85,70)[]{{\boldmath $\times$}}
\Text(0,77)[bl]{$N_i$}\Text(85,77)[b]{$N_j$}
\Text(50,95)[b]{$\Phi^\dagger$}\Text(50,65)[t]{$L$}
\ArrowLine(130,40)(100,70)\DashArrowLine(100,70)(130,90){5}
\Text(140,30)[r]{$L^C$}\Text(140,100)[r]{$\Phi$}
\Text(80,10)[]{\bf (a)}
\ArrowLine(250,70)(280,70)\ArrowLine(280,70)(310,70)\Line(310,70)(310,30)
\ArrowLine(340,30)(310,30)\DashArrowLine(310,70)(340,90){5}
\DashArrowLine(310,30)(280,70){5}\Text(310,50)[]{{\boldmath $\times$}}
\Text(250,65)[lt]{$N_i$}\Text(295,80)[]{$L$}\Text(315,50)[l]{$N_j$}
\Text(345,90)[l]{$\Phi$}\Text(345,30)[l]{$L^C$}\Text(290,50)[r]{$\Phi^\dagger$}
\Text(280,10)[]{\bf (b)}
\end{picture}\\[0.4cm]
\end{center}
\fcaption{One-loop (a) self-energy and (b) vertex graphs in heavy Majorana
neutrino decays.}\label{fig:0}
\end{figure}
As can be seen from Fig.\ \ref{fig:0}, both of the above two
mechanisms of CP violation are present\cite{LS1,Paschos,APRD} in the
usual leptogenesis scenario\cite{FY} of heavy Majorana neutrino
decays. CP violation of the $\varepsilon'$ type was extensively
discussed in the literature. \cite{FY,MAL,CEV,epsilonprime} If all
Yukawa couplings of the Higgs fields to $N_i$ and the ordinary lepton
isodoublets are of comparable order,\cite{MAL,epsilonprime} then
baryogenesis through the $\varepsilon'$-type mechanism requires very
heavy Majorana neutrinos with masses of order $10^7$--$10^8$ GeV. Such
a high mass bound may be lifted if a strong hierarchy for Yukawa
couplings and $m_{N_i}$ is assumed.\cite{MAL,CEV} However, without the
latter assumption,\cite{epsilonprime} one obtains
$\varepsilon'<10^{-15}$ for $m_{N_i}\approx 1$ TeV, and hence very
heavy neutrinos are needed to account for the BAU.
Recently, $\varepsilon$-type CP violation and its implications for the
BAU has received much attention.\cite{Paschos,APRD} In particular, it
has been observed\cite{Paschos,APRD} that CP violation can be
considerably enhanced through the mixing of two nearly degenerate
heavy Majorana neutrinos. Such an analysis cannot be performed in the
conventional field-theoretic framework, since finite-order
perturbation theory breaks down in the limit of degenerate particles.
To be specific, the wave-function amplitude that describes the
CP-asymmetric mixing of two heavy Majorana neutrinos, $N_1$ and $N_2$,
say, is inversely proportional to the mass splitting
$m_{N_1}-m_{N_2}$, and it becomes singular if degeneracy is exact.
Solutions to this problem have been based on the Weisskopf and Wigner
(WW)\cite{WW} approximation,\cite{Paschos} and the resummation
approach.\cite{ANPB,APRD} Both approaches lead to similar conclusions
concerning the resonant enhancement of CP violation. Here, we shall
follow the latter method, as the discussion of many crucial
field-theoretic issues, such as renormalization, CPT invariance and
unitarity, is conceptually more intuitive in this framework.
To describe the dynamics of CP violation through mixing of two
unstable particles, one is compelled to rely on resummation
approaches, which treat unstable particles in a consistent
way.\cite{APCP,unstable} In fact, to any finite order in perturbation
theory, physical amplitudes reflect the local gauge symmetry, respect
unitarity, are invariant under the renormalization group, and satisfy
the equivalence theorem. All of the above properties should also be
present after resummation. Unfortunately, resummation methods often
end up violating one or more of them. The reason is that subtle
cancellations are distorted when certain parts of the amplitude are
resummed to all orders in perturbation theory, whereas others,
carrying important physical information, are only considered to a
finite order. In this context, a novel diagrammatic resummation
approach has been developed,\cite{PP} which is based on the pinch
technique (PT)\cite{Pinch} and devoid of the above pathologies. In
the PT resummation approach, basic field-theoretic requirements, such
as analyticity, unitarity, gauge invariance and
renormalizability,\cite{PP} are naturally satisfied. Apart from the
great phenomenological importance of such a resummation formalism for
the proper definition of the mass and the width of unstable particles,
such as the $W$, the $Z$ boson and the Higgs boson,\cite{PP,ET} this
formalism may also be extended to the case of mixing between two
intermediate resonant states in scattering processes\cite{APRL,ANPB}
retaining all the required field-theoretic properties mentioned above.
The afore-mentioned resummation formalism has been proved to be very
successful in describing resonant transitions taking place in collider
experiments. These are situations where the unstable particles are
produced by given asymptotic states, $e^+e^-$, say, and their
subsequent decay is observed by detecting some other asymptotic states
in the final state, e.g.\ $e^+e^-\to Z^*\to \mu^+\mu^-$. However, in
an expanding Universe, the unstable particles may undergo a huge
number of collisions before they eventually decay. Each of these
collisions contributes a Coulomb phase shift, and hence the mixed
heavy particles are practically uncorrelated when they decay. To some
extent, this thermodynamic phenomenon may be described by Boltzmann
equations.\cite{KW} In this context, a related formalism for decays
has been developed,\cite{APRD} which effectively takes into account
{\em decoherence} phenomena in the mixing and subsequent decay of
heavy particles, namely heavy Majorana neutrinos in our case.
Specifically, it is shown that $\varepsilon$-type CP violation can
even be of order unity.\cite{APRD} This is in agreement with earlier
studies on resonant CP violation through mixing in scatterings
involving top quarks, supersymmetric quarks or Higgs particles in the
intermediate state.\cite{APCP,APRL,ANPB} Finally, we must remark that
alternative formulations of Boltzmann equations already exist in the
recent literature\cite{BEqs} but they are expected not to alter
drastically the existing conclusions\cite{Paschos,APRD} as far as the
resonant phenomenon of CP violation is concerned.
The organization of the review article is as follows: in Section 2 we
briefly review the basic theoretical background concerning the $B+L$
anomaly in the Standard Model (SM), and the effect of sphaleron
processes on the chemical potentials of SM particles. These
considerations lead to a relation between the generated leptonic
asymmetry and the observed baryonic asymmetry induced by sphaleron
interactions. In Section 3 we discuss theories that naturally include
heavy Majorana neutrinos. For illustration, we consider a minimal
model with two isosinglet neutrinos and demonstrate how CP and L can
simultaneously be violated in this model. In Section 4 we address the
issue of renormalizability of the minimal iso-singlet neutrino model.
Section 5 discusses in detail the resummation approach and its
effective extension to describe incoherent decays of heavy unstable
fermions. In Section 6 we apply the effective approach to the decays
of heavy Majorana neutrinos. In Section 7 we explicitly demonstrate
how the resummation approach satisfies unitarity. In Section 8, we
solve numerically the Boltzmann equations for representative
leptogenesis scenarios, and give numerical estimates and comparisons
for the BAU generated through $\varepsilon$- and/or
$\varepsilon'$-type CP violation. Furthermore, we estimate the impact
of finite-temperature effects on the resonant phenomenon of CP
violation. Heavy Majorana neutrinos may also have important
phenomenological implications for low-energy observables, as they can
give rise to a non-vanishing electric dipole moment (EDM) of the
electron at two loops or induce $L$-violating decays of the $Z$ boson
and the $\tau$ lepton. These new-physics effects are detailed in
Section 9. Section 10 summarizes our conclusions.
\setcounter{section}{2}
\setcounter{equation}{0}
\section{$B+L$ anomaly and sphaleron processes \label{sec:2}}
\noindent
In the SM, the $B$ and $L$ numbers are only conserved in the classical
action. After quantization, however, both baryonic and leptonic
currents are violated by triangle anomalies, i.e.,
\begin{equation}
\label{anomaly}
\partial_\mu J^\mu_B\ =\ \partial_\mu J^\mu_L\ =\ i\,
\frac{N_F}{8\pi}\ \Big( -\alpha_w W^{\mu\nu,a}\widetilde{W}^a_{\mu\nu}
+ \alpha_{{}_Y} Y^{\mu\nu}\widetilde{Y}_{\mu\nu}\, \Big)\, ,
\end{equation}
where $N_F$ is the number of flavours, and $\alpha_w = g^2_w/(4\pi)$,
$\alpha_{{}_Y} = g^2_{{}_Y}/(4\pi)$, are the SU(2)$_L$ and U(1)$_Y$
fine-structure constants, respectively. Similarly, $W^{\mu\nu}$,
$Y^{\mu\nu}$ are their respective field-strength tensors, and the
antisymmetric tensors $\widetilde{W}_{\mu\nu} = \frac{1}{2}
\varepsilon_{\mu\nu\lambda\rho} W^{\lambda\rho}$,
$\widetilde{Y}_{\mu\nu} = \frac{1}{2} \varepsilon_{\mu\nu\lambda\rho}
Y^{\lambda\rho}$ are their associated duals. Furthermore, baryonic
and leptonic currents are defined as
\begin{eqnarray}
\label{JB}
J^\mu_B &=& \frac{1}{3}\ \sum_{q,\alpha} \bar{q}^\alpha \gamma^\mu
q^\alpha\, \\
\label{JL}
J^\mu_L &=& \sum_{l,\nu_l}\, (\, \bar{l} \gamma^\mu
l\, +\ \bar{\nu}_l \gamma^\mu \nu_l\, )\, ,
\end{eqnarray}
where $q$, $l$ and $\nu_l$ denote quarks, charged leptons and
neutrinos, respectively, and the index $\alpha$ indicates the colour
degrees of freedom of the quarks. Since Eq.\ (\ref{anomaly}) also
holds for individual lepton families, the actual anomaly-free charges
are
\begin{equation}
\label{Li}
\frac{1}{3} B\ -\ L_e\, ,\quad \frac{1}{3} B\ -\ L_\mu\, ,\quad
\frac{1}{3} B\ -\ L_\tau\, .
\end{equation}
It is then obvious that $B+L$ symmetry is anomalously broken at the
quantum level.
The different gauge field configurations are characterized by
different Chern-Simons numbers $n_{\rm CS}$. The CS numbers label the
infinitely many degenerate vacua of the system. The variation of $B+L$
number due to a quantum tunnelling from one vacuum state into another
is given by
\begin{equation}
\label{DBL}
\Delta (B+L) \ =\ 2N_F\, \frac{\alpha_w}{8\pi}\int d^4x\
W^{\mu\nu,a}\widetilde{W}^a_{\mu\nu}\ =\ 2N_F \Delta n_{\rm CS}\, .
\end{equation}
At zero temperature, 't Hooft\cite{hooft} estimated the probability of
$B$-violating processes, and found them to be extremely suppressed by
a factor $\exp (-4\pi n_{\rm CS}/\alpha_w) \approx \exp (-150 n_{\rm
CS})$ relative to the $B$-conserving ones with $n_{\rm CS} = 0$.
The situation changes drastically at finite temperatures. The effect
of non-trivial topological instanton-type solutions, termed
sphalerons,\cite{sphal} is amplified at high temperatures, thereby
enhancing also the rate of the $B$-violating processes. To be precise,
sphaleron interactions are in thermal equilibrium for temperatures in
the interval
\begin{equation}
\label{Tsphal}
100\ {\rm GeV}\ \stackrel{\displaystyle <}{\sim }\ T\
\stackrel{\displaystyle <}{\sim }\ 10^{12}\ {\rm GeV}\, .
\end{equation}
Sphalerons may be thought of as the creation out of the vacuum of a
state
\begin{equation}
\label{vsphal}
\prod_{i=1}^{N_F}\ (u_L d_L d_L \nu_L)_i\ .
\end{equation}
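As a simple consistency check, each factor $(u_L d_L d_L \nu_L)_i$ in
Eq.\ (\ref{vsphal}) carries $B = 3\times\frac{1}{3} = 1$ and $L = 1$,
so that the full state has $\Delta B = \Delta L = N_F$ and hence
$\Delta (B+L) = 2N_F$, in agreement with Eq.\ (\ref{DBL}) for a
minimal transition with $\Delta n_{\rm CS} = 1$.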
Since these interactions violate $B+L$, any primordial baryonic
asymmetry $B$ should have a significant component in $B-L$ or in the
charges stated in Eq.\ (\ref{Li}), which are preserved by sphalerons,
whereas any $B+L$ component will be washed out. Decays of heavy
Majorana neutrinos produce an excess in $L$, which can naively be
written as a sum of an excess in $\frac{1}{2} (B+L)$ and in
$\frac{1}{2} (B-L)$. Sphalerons will then erase the $B+L$ component
but preserve the $B-L$, so one expects that about half of the leptonic
asymmetry $L$ will be converted into the baryonic asymmetry $B$, and
also be preserved as $B-L$ asymmetry. As we will see below, a more
careful analysis based on chemical potentials leads to the conclusion
that sphalerons approximately convert one-third of an initial leptonic
asymmetry $L$ into the observed baryonic asymmetry $B$.
For illustration, we shall assume that all SM particles are almost
massless at temperatures above the critical temperature $T_c$.
Actually, they have thermal masses but, to leading order, these are
small and may be neglected. The number density of a particle $\alpha$
is given by
\begin{equation}
\label{nalpha}
n_\alpha\ =\ g_\alpha\, \int\, \frac{d^3\vec{p}_\alpha}{(2\pi)^3}\
\frac{1}{\exp [ (E_\alpha - \mu_\alpha )/T ]\, \pm 1 }\, ,
\end{equation}
where $g_\alpha$ counts the internal degrees of freedom of $\alpha$,
$\vec{p}_\alpha$ and $E_\alpha = (|\vec{p}_\alpha|^2 +
m^2_\alpha)^{1/2}$ are the three-momentum and the energy of the particle,
respectively. The plus sign in Eq.\ (\ref{nalpha}) is for particles
obeying the Fermi-Dirac statistics and the minus for particles
governed by the Bose-Einstein statistics. The chemical potential for
anti-particles, e.g.\ that of $\bar{\alpha}$, is opposite to that of
the particles, i.e.\ $\mu_\alpha = -\mu_{\bar{\alpha}}$. The latter
relation is valid if particles and antiparticles have interaction
rates with photons or other gauge particles much higher than the
expansion rate of the Universe. This is almost the case for all SM
particles. However, this is not generally true for non-SM particles,
such as isosinglet or right-handed neutrinos, which do not have any
tree-level coupling to the $W$- and $Z$- bosons; their couplings are
suppressed by loops and small Yukawa couplings. Under these
assumptions, the number-density asymmetry of a SM particle $\alpha$
versus its antiparticle $\bar{\alpha}$ is easily estimated by
\begin{equation}
\label{nasym}
n_{\Delta \alpha}\ =\ n_\alpha\, -\, n_{\bar{\alpha}}\ \approx\
\frac{g_\alpha}{\pi^2}\ T^3\ \Big(\frac{\mu_\alpha}{T}\Big)\ .
\end{equation}
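For reference, carrying out the integral in Eq.\ (\ref{nalpha}) for
massless species and expanding to leading order in $\mu_\alpha/T \ll 1$
gives
\[
n_{\Delta\alpha}\ =\ \frac{g_\alpha}{6}\, T^3\,
\Big(\frac{\mu_\alpha}{T}\Big) \quad \mbox{(fermions)}\, ,\qquad
n_{\Delta\alpha}\ =\ \frac{g_\alpha}{3}\, T^3\,
\Big(\frac{\mu_\alpha}{T}\Big) \quad \mbox{(bosons)}\, ,
\]
i.e.\ at equal chemical potential a bosonic degree of freedom
contributes twice as much to the number-density asymmetry as a
fermionic one; the estimate above captures the correct parametric
behaviour.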
We shall now turn to an analysis of chemical potentials in the SM.
Since flavour-changing quark interactions are sufficiently fast, we assign the same
chemical potentials to all families of up and down quarks,
i.e.\ $(\mu_{u_L} , \mu_{d_L}, \mu_{u_R} , \mu_{d_R})$. In contrast
to quarks, individual leptons possess different chemical potentials,
i.e.\ $(\mu_{l_L}, \mu_{\nu_{lL}}, \mu_{l_R})$, where $l = e, \mu,
\tau$. Furthermore, the chemical potential of all neutral gauge
bosons, such as gluons, photons, and $Z$ bosons, vanish, and $\mu_W$
is the chemical potential of the $W^-$ boson. Finally, the components
of the Higgs doublet $[\chi^-, \phi^0 = (H-i\chi^0)/\sqrt{2}]$ have
chemical potentials $(\mu_-, \mu_0)$, respectively. Many chemical potentials can be
eliminated by means of chemical equilibrium reactions in the SM. More
explicitly, we have
\begin{equation}
\label{chem}
\begin{array}{rclcrcl}
W^- &\leftrightarrow&\bar{u}_L + d_L\, ,& ~~~~ & \mu_W &=& -\mu_{u_L} +
\mu_{d_L},\\
W^- &\leftrightarrow& \bar{\nu}_{lL} + l_L\, ,& &
\mu_W & =& -\mu_{\nu_{lL}} + \mu_{l_L}, \\
W^- &\leftrightarrow& \chi^- + \phi^0\, ,&& \mu_W &=& \mu_- +
\mu_0, \\
\phi^0 &\leftrightarrow& \bar{u}_L + u_R\, , && \mu_0 &=& -\mu_{u_L} +
\mu_{u_R}, \\
\phi^0 &\leftrightarrow& \bar{d}_L + d_R\, , && \mu_0 &=& -\mu_{d_L} +
\mu_{d_R}, \\
\phi^0 &\leftrightarrow& \bar{l}_L + l_R\, , &&
\mu_0 &=& -\mu_{l_L} + \mu_{l_R}.
\end{array}
\end{equation}
As independent parameters, we consider $\mu_u = \mu_{u_L}$, $\mu =
\sum_l \mu_{\nu_{lL}} = \sum_l \mu_{l_L}$, $\mu_0$ and $\mu_W$. In
the SM with $N_F$ families and $N_H$ Higgs doublets, the baryon and
lepton number $B$ and $L$ as well as the electric charge $Q$ and
hypercharge $Q_3$ may be expressed in terms of these quantities, as
follows:
\begin{eqnarray}
\label{BLQQ}
B &=& 4N_F\mu_u\, +\, 2N_F\mu_W\, , \nonumber\\
L &=& 3\mu\, +\, 2N_F\mu_W\, -\, N_F \mu_0\, ,\nonumber\\
Q &=& 2N_F\mu_u\, -\, 2\mu\, +\, 2(2N_F + N_H)\mu_0
- 2(2N_F + 2 + N_H) \mu_W\, ,\nonumber\\
Q_3 &=& - (2N_F + 4 + 2N_H) \mu_W\, .
\end{eqnarray}
Furthermore, the sphaleron interactions in Eq.\ (\ref{vsphal}) give
rise to the additional relation
\begin{equation}
\label{chemspal}
N_F(3\mu_u\ +\ 2\mu_W)\ +\ \mu\ =\ 0\, .
\end{equation}
Above the electroweak phase transition, both charges $Q$ and $Q_3$ are
conserved, i.e.\ $\langle Q \rangle = \langle Q_3 \rangle = 0$. Thus,
we have: $\mu_W = 0$, $\mu = -3N_F\mu_u$, and $\mu_0 = -
8N_F\mu_u/(4N_F + 2N_H)$. Using these relationships among the chemical
potentials, it is not difficult to obtain\cite{HT,Dreiner/Ross}
\begin{equation}
\label{BLrel}
B(T > T_c) \ =\ \frac{8N_F + 4N_H}{22N_F + 13 N_H}\ (B - L)\ .
\end{equation}
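Indeed, the above relations give $B = 4N_F\mu_u$ and
\[
B\, -\, L\ =\ 13 N_F\, \mu_u\ +\ N_F\, \mu_0\ =\
\frac{44 N_F\, +\, 26 N_H}{4N_F\, +\, 2N_H}\ N_F\, \mu_u\, ,
\]
from which the quoted ratio follows directly. As a numerical
illustration, for $N_F = 3$ and $N_H = 1$, Eq.\ (\ref{BLrel}) gives
$B = \frac{28}{79}\, (B-L) \approx 0.35\, (B-L)$.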
{}From Eq.\ (\ref{BLrel}), one concludes that almost independently of
the number of generations and Higgs doublets, roughly one-third of the
initial $B-L$ and/or $L$ asymmetry will be reprocessed by sphalerons
into an asymmetry in $B$. This amount of $B$ asymmetry persists even
after the electroweak phase transition.
\setcounter{section}{3}
\setcounter{equation}{0}
\section{Models with heavy Majorana neutrinos}\label{sec:3}
\noindent
GUT's such as SO(10) \cite{FM,Wol/Wyl} or E${}_6$ \cite{witten} models
naturally predict heavy Majorana neutrinos. These theories also
contain several other particles, e.g.\ leptoquarks, additional charged
and neutral gauge bosons, etc., which may have significant
interactions with heavy Majorana neutrinos and so affect the number
density of heavy neutrinos. To avoid excessive complication, we shall
assume that these new particles are much heavier than the lightest
heavy Majorana neutrino, and are therefore expected to decouple
sufficiently fast from the process of leptogenesis.
As already mentioned, SO(10) \cite{FM,Wol/Wyl} and/or E${}_6$
\cite{witten} models may naturally accommodate heavy Majorana
neutrinos. Specifically, SO(10) models can break down to the SM gauge
group in the following schematic way:
\begin{eqnarray}
\mbox{SO(10)} &\to & G_{422}=\mbox{SU}(4)_{\mbox{\scriptsize PS}}
\otimes\mbox{SU}(2)_R \otimes
\mbox{SU}(2)_L\nonumber\\
&\to & G_{3221} =
\mbox{SU}(3)_c\otimes \mbox{SU}(2)_R\otimes \mbox{SU}(2)_L\otimes
\mbox{U}(1)_{(B-L)}\nonumber\\
&\to & \mbox{SM} = G_{321} = \mbox{SU}(3)\otimes \mbox{SU}(2)_L\otimes
\mbox{U}(1)_Y \, ,
\end{eqnarray}
where the subscript PS characterizes the Pati-Salam gauge
group.\cite{PS} The spinor representation of SO(10) is 16-dimensional
and its decomposition under $G_{422}$ reads
\begin{equation}
\label{G422}
G_{422}\, :\ \mbox{{\bf 16}}\ \to\ (4,1,2)\, \oplus\, (\bar{4},2,1)\, .
\end{equation}
Evidently, SO(10) contains the left-right-symmetric gauge group
SU(2)$_R\otimes$SU(2)$_L\otimes$ U(1)$_{(B-L)}$, which necessitates
the presence of right-handed neutral leptons. In this scenario, there
can exist several Higgs-boson representations that may cause a
breaking of the groups $G_{422}$ and $G_{3221}$ down to the SM gauge
group $G_{321}$.\cite{Wol/Wyl,PalMoh}
E$_6$ theories \cite{witten} may also have a breaking pattern related
to SO(10) theories. In fact, the {\bf 27} spinor representation
decomposes into {\bf 16} $\oplus$ {\bf 10} $\oplus$ {\bf 1} under
SO(10). This leads to four neutral lepton states per SM family beyond
the ordinary neutrino: one isosinglet neutrino in {\bf 16}, two neutrinos
as isodoublet members in {\bf 10}, and one singlet neutrino in {\bf 1}. In these
models, it is argued that depending on the representation of the
E${}_6$ Higgs multiplets, two of the four isosinglets can have
Majorana masses of few TeV,\cite{witten} whereas the other two may be
very heavy with masses of the order of the unification scale.
We shall now discuss a generic subgroup that may be derived from
SO(10) and/or E$_6$ models. This generic subgroup may be realized in
the usual SM augmented by a number $n_R$ of right-handed neutrinos
$\nu_{Ri}$, with $i=1, 2, \dots, n_R$. As we have discussed above, in
E$_6$ theories the active isosinglet neutrinos may be more than three.
In SO(10) models, left-right symmetry is more naturally realized with
one right-handed neutrino per family (for interesting alternatives,
see Ref.\cite{Wol/Wyl}). For the sake of generality, we shall keep the
number of iso-singlet neutrinos arbitrary. For definiteness, the quark
sector of the minimal model has the SM form, while the leptonic sector
consists of the fields:
\begin{displaymath}
\left( \begin{array}{c} \nu_{lL} \\ l_L \end{array} \right)\ ,\qquad l_R\ ,
\qquad \nu_{Ri}\ ,
\end{displaymath}
where $l=e,\mu,\tau$. At $T\gg T_c \stackrel{\displaystyle >}{\sim}
v$, the vacuum expectation value (VEV) of the SM Higgs doublet $\Phi$
at temperature $T$ (with $v=v(0)$) vanishes, $v(T)=0$. At these high
temperatures, all SM particles including Higgs fields are massless;
they only acquire thermal masses. However, one may have Majorana
masses in the Lagrangian given by
\begin{equation}
\label{Majmass}
-{\cal L}_M\ =\ \frac{1}{2}\, \sum\limits_{i,j=1}^{n_R}\,
\Big( \bar{\nu}^C_{Ri} M^\nu_{ij} \nu_{Rj}\ +\
\bar{\nu}_{Ri} M^{\nu *}_{ij} \nu^C_{Rj}\, \Big)\, ,
\end{equation}
where $M^\nu$ is an $n_R\times n_R$-dimensional symmetric matrix,
which is in general complex. In Eq.\ (\ref{Majmass}), the superscript
$C$ denotes the operation of charge conjugation, which acts on the
four-component chiral spinors $\psi_L$ and $\psi_R$ as follows:
\begin{equation}
\label{psiC}
(\psi_L)^C\ =\ P_R C\bar{\psi}^T\, ,
\qquad (\psi_R)^C\ =\ P_L C\bar{\psi}^T\, ,
\end{equation}
where $P_{L(R)}=[1 -(+)\gamma_5]/2$ is the chirality projection
operator. The mass matrix $M^\nu$ can be diagonalized by means of a
unitary transformation
\begin{equation}
\label{diagMnu}
U^T M^\nu U\ =\ \widehat{M}^\nu\, ,
\end{equation}
where $U$ is an $n_R\times n_R$-dimensional unitary matrix and the
diagonal matrix $\widehat{M}^\nu$ contains the $n_R$ heavy Majorana
masses. Then, the respective $n_R$ mass eigenstates $N_i$ are related
to the flavour states $\nu_{Ri}$ through
\begin{equation}
\label{nuRi}
\nu_{Ri}\ =\ P_R\sum_{j=1}^{n_R} U_{ij} N_j\, ,\qquad
\nu^C_{Ri}\ =\ P_L\sum_{j=1}^{n_R} U^*_{ij} N_j\, .
\end{equation}
In the mass basis of heavy Majorana neutrinos, the Yukawa sector
governing the interactions of the heavy neutrinos with the Higgs and
lepton isodoublets is given by
\begin{equation}
\label{LYint}
{\cal L}_Y\ =\ -\, \sum\limits_{l=1}^{n_L} \sum\limits_{j=1}^{n_R}\,
h_{lj}\, (\bar{\nu}_{lL},\ \bar{l}_L)\,
\left( \begin{array}{c} (H\, -\, i\chi^0)/\sqrt{2} \\ -\, \chi^- \end{array}
\right)\, N_j\ +\ \mbox{H.c.}
\end{equation}
At high temperatures, the CP-even Higgs field $H$, the CP-odd Higgs
scalar $\chi^0$ and the charged Higgs scalars $\chi^\pm$ given in
(\ref{LYint}) are massless. In the low-$T$ limit $T\ll T_c$, the
field $H$ becomes the massive SM Higgs boson, whereas $\chi^0$ and
$\chi^\pm$ are the would-be Goldstone bosons eaten by the longitudinal
degrees of freedom of the gauge bosons $Z$ and $W^\pm$, respectively.
In the calculations in Section 6, we shall include the $M_H$
dependence in the CP asymmetries.
Let us now consider a simple one-generation model where the standard
fermionic content is extended by adding two isosinglet neutrinos,
e.g.\ $\nu_R$ and $S_L$. Then, the most general Yukawa sector that
preserves lepton number reads
\begin{equation}
\label{WW}
- {\cal L}\ =\ \frac{1}{2}\, (\bar{S}_L,\ (\bar{\nu}_R)^C )\,
\left( \begin{array}{cc}
0 & M \\
M & 0 \end{array} \right)\, \left( \begin{array}{c}
(S_L)^C \\ \nu_R \end{array} \right)\ +\
h_R\, (\bar{\nu}_L,\ \bar{l}_L) \tilde{\Phi} \nu_R\ +\ \mbox{H.c.},
\end{equation}
where $\tilde{\Phi}=i\sigma_2\Phi$ is the isospin conjugate Higgs
doublet and $\sigma_2$ is the usual Pauli matrix. The kinematic
parameters $M$ and $h_R$ may in general be complex but their phases
are not physical. One can make both real by appropriate redefinitions
of the fermionic fields, i.e.
\begin{equation}
\label{CProt}
L^T_L \equiv (\nu_L, l_L) \to e^{i\phi_L} L^T_L \, ,\qquad
\nu_R \to e^{i\phi_R} \nu_R,\qquad
S_L \to e^{i\phi_S} S_L .
\end{equation}
One choice could be: $\phi_R = 0$, $\phi_S = {\rm arg}(M)$, and
$\phi_L = {\rm arg}(h_R)$. Retaining the $L$-conserving structure of
the isosinglet mass matrix, such scenarios require a non-trivial
mixing among the generations to describe CP violation.\cite{BRV}
Furthermore, such scenarios do not produce the necessary leptonic
asymmetry for baryogenesis; however, see Ref.\cite{ARS} for an
interesting variant based on individual lepton-flavour violation.
In order to break both $L$ and CP symmetries of the Lagrangian in Eq.\
(\ref{WW}), one must consider at least two extensions in the model:
\begin{itemize}
\item[ (i)] The inclusion of two complex $L$-violating Majorana masses
$\mu_R\bar{\nu}_R\nu^C_R$ and $\mu_L\bar{S}^C_L S_L$.
\item[(ii)] The addition of the $L$-violating coupling $h_L\,
(\bar{\nu}_L,\ \bar{l}_L) \tilde{\Phi} (S_L)^C$ and one
$L$-violating mass parameter, e.g.\ $\mu_R\bar{\nu}_R\nu^C_R$.
\end{itemize}
The two models are related by a unitary rotation and are therefore
equivalent. The necessary conditions for CP invariance in these two
scenarios may be found to be
\begin{eqnarray}
\label{CPi_ii}
\mbox{(i)} && |h_R|^2\, {\rm Im} ( M^{*2}\mu_L\mu_R )\ =\ 0\, ,\nonumber\\
\mbox{(ii)} && {\rm Im} (h_Lh_R^*\mu_R M^*)\ =\ 0\, .
\end{eqnarray}
It is now interesting to remark that $\mu_L$ and $\mu_R$ can be much
smaller than $M$ within E$_6$ scenarios.\cite{witten} These parameters
may be induced by higher-dimensional operators generated after
integrating out ultra-heavy non-active neutrinos. One may think that the
lepton number is somehow violated, at the GUT or Planck scale $M_X$,
by these additional non-active isosinglet fields, and it is
communicated to the active isosinglet sector where $M\ll M_X$. In
this way, one can naturally obtain a see-saw-like relation for the
sizes of $\mu_L$ and $\mu_R$, i.e.
\begin{equation}
\label{muLR}
\mu_L,\ \mu_R\ \sim\ \frac{M^2}{M_X}\quad {\rm or}\quad
\frac{M^2}{M_S}\ ,
\end{equation}
where $M_S\approx 10^{-3}\, M_X$ could be some intermediate see-saw
scale. In such generic mass models, the heavy Majorana neutrinos $N_1$
and $N_2$ have a very small mass splitting given by
\begin{equation}
\label{xN}
x_N\ =\ \frac{m_{N_2}}{m_{N_1}}\ -\ 1\ \sim\ \frac{\mu_L}{M}\quad
{\rm or}\quad \frac{\mu_R}{M}\ .
\end{equation}
For instance, if $M=10$ TeV and $\mu_L=\mu_R = M^2/M_X$, one then
finds $x_N\approx 10^{-12}$--$10^{-11}$. As we will see in Section 6,
such small values of $x_N$ can lead to a resonant enhancement of CP
asymmetries in the heavy Majorana neutrino decays.
To obtain the sufficient and necessary conditions of CP invariance for
any flavour structure of the one-generation model with two isosinglet
neutrinos, one should use a more general approach, based on
generalized CP transformations \cite{BBG} for the fermionic fields:
\begin{equation}
\label{genCP}
L_L \to e^{i\phi_L} (L_L)^C\,, \qquad \nu_{Ri} \to V_{ij} (\nu_{Rj})^C,
\end{equation}
where $V$ is a $2\times 2$ dimensional unitary matrix. Notice that
the transformations given by Eq.\ (\ref{genCP}) preserve the SM
symmetry of the mass-independent, conformally invariant part of the
Lagrangian; only $M^\nu$ breaks this symmetry softly. In such an
approach,\cite{BRV,BBG} one looks for all possible weak-basis
independent combinations that can be formed by Yukawa couplings and
the neutrino mass matrix $M^\nu$, and are simultaneously invariant
under the transformations (\ref{genCP}). In this way, we find the
condition
\begin{equation}
\label{CPinv}
{\rm Im}\, \mbox{Tr} ( h^\dagger h M^{\nu\dagger} M^\nu M^{\nu\dagger}
h^T h^* M^\nu )\ =\ m_{N_1}m_{N_2} (m^2_{N_1} - m^2_{N_2})\,
{\rm Im} (h_{l1}h^*_{l2})^2\ =\ 0\, ,
\end{equation}
where $h = (h_{l1}, h_{l2})$ is a row vector that contains the Higgs
Yukawa couplings defined in the mass basis of isosinglet neutrinos.
{}From Eq.\ (\ref{CPinv}), one readily observes that CP invariance
holds if $m_{N_1}=m_{N_2}$ and/or one of the isosinglet neutrinos is
massless. These considerations may be extended to models with $n_L$
weak isodoublets and $n_R$ neutral isosinglets. In this case, there
exist many conditions analogous to Eq.\ (\ref{CPinv}), which involve
high-order terms in the Yukawa-coupling matrix $h$. However, not all
of the conditions are sufficient and necessary for CP invariance. If
we assume that Higgs triplets are not present in the theory, the total
number of all non-trivial CP-violating phases is ${\cal N}_{CP} = n_L
(n_R-1)$.\cite{KPS}
\begin{figure}
\begin{center}
\begin{picture}(360,300)(0,0)
\SetWidth{0.8}
\ArrowLine(0,270)(30,270)\ArrowLine(30,270)(60,270)\Line(60,270)(60,230)
\ArrowLine(90,230)(60,230)\DashArrowLine(60,270)(90,290){5}
\DashArrowLine(30,270)(60,230){5}\Text(60,250)[]{{\boldmath $\times$}}
\Text(0,265)[lt]{$N_i$}\Text(45,280)[]{$l$}\Text(65,250)[l]{$N_j$}
\Text(95,290)[l]{$\chi^-$}\Text(95,230)[l]{$l$}\Text(40,250)[r]{$\chi^+$}
\Text(50,210)[]{\bf (a)}
\ArrowLine(120,270)(150,270)\ArrowLine(150,270)(180,270)
\ArrowLine(180,270)(180,230)\ArrowLine(180,230)(210,230)
\DashArrowLine(180,270)(210,290){5}\DashArrowLine(150,270)(180,230){5}
\Text(120,265)[lt]{$N_i$}\Text(165,280)[]{$\nu_l$}\Text(185,250)[l]{$N_j$}
\Text(215,290)[l]{$\chi^0$}\Text(215,230)[l]{$\nu_l$}
\Text(170,240)[r]{$\chi^0,H$}
\Text(170,210)[]{\bf (b)}
\ArrowLine(240,270)(270,270)\ArrowLine(270,270)(300,270)
\ArrowLine(300,270)(300,230)\ArrowLine(300,230)(330,230)
\DashArrowLine(300,270)(330,290){5}\DashArrowLine(270,270)(300,230){5}
\Text(240,265)[lt]{$N_i$}\Text(285,280)[]{$\nu_l$}\Text(305,250)[l]{$N_j$}
\Text(335,290)[l]{$H$}\Text(335,230)[l]{$\nu_l$}
\Text(290,240)[r]{$\chi^0,H$}
\Text(290,210)[]{\bf (c)}
\DashArrowLine(0,150)(30,150){5}\ArrowArc(50,150)(20,0,180)
\ArrowArc(50,150)(20,180,360)\DashArrowLine(70,150)(100,150){5}
\Text(0,155)[bl]{$\chi^-$} \Text(100,155)[br]{$\chi^-$}
\Text(50,175)[b]{$N_i$}\Text(50,125)[t]{$l$}
\Text(50,100)[]{\bf (d)}
\DashArrowLine(120,150)(150,150){5}\ArrowArc(170,150)(20,0,180)
\ArrowArc(170,150)(20,180,360)\DashArrowLine(190,150)(220,150){5}
\Text(120,155)[bl]{$\chi^0$} \Text(220,155)[br]{$\chi^0$}
\Text(170,175)[b]{$N_i$}\Text(170,125)[t]{$\nu_l$}
\Text(170,100)[]{\bf (e)}
\DashArrowLine(240,150)(270,150){5}\ArrowArc(290,150)(20,0,180)
\ArrowArc(290,150)(20,180,360)\DashArrowLine(310,150)(340,150){5}
\Text(240,155)[bl]{$H$} \Text(340,155)[br]{$H$}
\Text(290,175)[b]{$N_i$}\Text(290,125)[t]{$\nu_l$}
\Text(290,100)[]{\bf (f)}
\ArrowLine(0,40)(30,40)\ArrowLine(30,40)(70,40)
\ArrowLine(70,40)(100,40)\DashArrowArc(50,40)(20,0,180){5}
\Text(0,45)[bl]{$l'$}\Text(100,45)[br]{$l$}
\Text(50,65)[b]{$\chi^+$}\Text(50,35)[t]{$N_i$}
\Text(50,0)[]{\bf (g)}
\ArrowLine(120,40)(150,40)\ArrowLine(150,40)(190,40)
\ArrowLine(190,40)(220,40)\DashArrowArc(170,40)(20,0,180){5}
\Text(120,45)[bl]{$\nu_{l'}$}\Text(220,45)[br]{$\nu_l$}
\Text(170,65)[b]{$\chi^0,H$}\Text(170,35)[t]{$N_i$}
\Text(170,0)[]{\bf (h)}
\ArrowLine(240,40)(270,40)\ArrowLine(270,40)(310,40)
\ArrowLine(310,40)(340,40)\DashArrowArc(290,40)(20,0,180){5}
\Text(240,45)[bl]{$N_j$}\Text(340,45)[br]{$N_i$}
\Text(290,65)[b]{$\chi^\mp,\chi^0,H$}\Text(290,35)[t]{$l^\mp,\nu_l,\nu_l$}
\Text(290,0)[]{\bf (j)}
\end{picture}\\[0.7cm]
\end{center}
\fcaption{One-loop graphs contributing to the renormalization of the
couplings $\chi^-lN_i$, $\chi^0\nu_l N_i$ and $H\nu_lN_i$.}\label{fig:1}
\end{figure}
\newpage
\setcounter{section}{4}
\setcounter{equation}{0}
\section{Renormalization}\label{sec:4}
\noindent
At the tree level, a CP asymmetry in particle decays would amount to CPT
violation; it therefore vanishes identically. A non-vanishing
contribution to CP asymmetries only arises at the one-loop level,
considering the diagrams shown in Fig.\ \ref{fig:1}. For this reason,
it is important to discuss how one-loop renormalization applies to
heavy Majorana-neutrino models\cite{KP} and its possible consequences
on CP asymmetries.
We start our discussion by expressing all bare quantities in terms of
renormalized ones in the following way:
\begin{eqnarray}
\label{Rbare}
\nu^0_{lL} &=& \sum\limits_{l'=1}^{n_L}\, \Big( \delta_{ll'}\, +\,
\frac{1}{2}\delta Z^\nu_{ll'} \Big) \nu_{l'L}\, ,\qquad
l^0_L\ =\ \sum\limits_{l'=1}^{n_L}\, \Big( \delta_{ll'}\, +\,
\frac{1}{2}\delta Z^l_{ll'} \Big) l'_L\, ,\\
N^0_i &=& \sum\limits_{j=1}^{n_R}\, \Big( \delta_{ij}\, +\,
\frac{1}{2}\delta Z^N_{ij} \Big) N_j\, ,\quad
\tilde{\Phi} \ =\ \Big( 1\, +\, \frac{1}{2}\delta Z_\Phi \Big) \tilde{\Phi}
\, ,\quad
h^0_{lj} \ =\ h_{lj}\ +\ \delta h_{lj}\, ,\nonumber
\end{eqnarray}
where unrenormalized kinematic parameters and fields are indicated by
a superscript `0'. Note that $\delta Z_\Phi$ collectively represents
the wave-function renormalization constants of all components of the
Higgs doublet $\tilde{\Phi}$ (or $\Phi$), i.e.\ the fields $\chi^\pm$,
$\chi^0$ and $H$. In Appendix A, we give analytic expressions for
Higgs and fermion self-energies. {}From these, one can easily see that
the divergent part of the Higgs wave-function renormalization is
exactly the same. In fact, $\delta Z_\Phi$ is universal in the limit
$M_H\to 0$.
Let us now consider that all quantities in the Lagrangian
(\ref{LYint}) are bare and we can substitute Eqs.\ (\ref{Rbare}) into
that bare Lagrangian. In addition to the renormalized Lagrangian,
which has the same structural form as the bare one, we then find the
counter-term (CT) Lagrangian
\begin{equation}
\label{deltaLY}
-\, \delta{\cal L}_Y\ =\ \sum\limits_{l'=1}^{n_L}\sum\limits_{k=1}^{n_R}\,
\Big(\, \delta h_{l'k}\ +\ \frac{1}{2}\, \delta Z_\Phi\, h_{l'k}\ +\
\frac{1}{2}\sum\limits_{l=1}^{n_L} \delta Z^{L*}_{ll'}\, h_{lk}\ +\
\frac{1}{2}\sum\limits_{j=1}^{n_R} h_{l'j}\, \delta Z^N_{jk}\, \Big)\,
\bar{L}_{l'}\tilde{\Phi} N_k\ +\ \mbox{H.c.},
\end{equation}
where $L_l=(\nu_{lL},\ l_L )^T$ and $\delta Z^L = (\delta Z^l, \delta
Z^\nu)$. Owing to charge and hypercharge conservation on the vertices,
it is not difficult to show by naive power-counting that the one-loop
vertex corrections in Fig.\ \ref{fig:1}(a)--(c) are ultra-violet (UV)
finite (see also Appendix A).
Despite the fact that vertex corrections are UV finite by themselves,
the wave-function renormalizations of the Higgs and lepton isodoublets
and that of neutrino isosinglets contain UV divergences that do not
cancel. In accordance with the CT Lagrangian (\ref{deltaLY}), one may
require that all UV terms are to be absorbed into the definition of
$h_{lj}$, i.e.
\begin{equation}
\label{deltah}
\delta h_{lj}\ =\ -\, \frac{1}{2}\, \Big(\, h_{lj}\delta Z^{\rm div}_{\Phi}\,
+\, \sum\limits_{l'=1}^{n_L} h_{l'j}\delta Z^{L*}_{l'l}\, +\,
\sum\limits_{k=1}^{n_R} h_{lk}\delta Z^N_{kj}\, \Big)\, .
\end{equation}
It is important to stress that one-loop renormalization involves the
dispersive parts of self-energies and effectively leads to a
redefinition of the kinematic parameters, whereas all absorptive
corrections remain unaffected. Even though there might be some
higher-order dependence on the choice of renormalization
scheme, we carry out the mass renormalization in the on-shell (OS)
scheme.\cite{OS} As we will see in Section 7, this scheme has some
field-theoretic advantages over other schemes, when applied to the
resummation approach describing the mixing of two unstable particles.
\setcounter{section}{5}
\setcounter{equation}{0}
\section{Resummation approach to unstable-particle mixing}\label{sec:5}
\noindent
The consistent description of unstable particles within the
conventional framework of perturbative S-matrix theory is an issue
related to a number of field-theoretic difficulties. Since unstable
particles decay exponentially with time, they cannot appear as
asymptotic {\em in} or {\em out} states in a process. Furthermore,
finite-order perturbation theory breaks down. The usual propagator
describing the unstable particle in the intermediate state of a given
process displays a physical singularity when the particle comes on its
mass shell. One is therefore compelled to use resummation methods that
treat unstable particles and unstable-particle mixing in a consistent
way; this is a rather subtle issue within the context of gauge
theories.\cite{PP,ANPB}
In a simple scalar theory with one unstable particle, Veltman
\cite{Velt} was able to show that, even if one removes the unstable
particle from the initial and final states and substitutes it in terms
of asymptotic states, the so-truncated S-matrix theory will still
maintain the field-theoretic properties of unitarity and causality.
Veltman's truncated S-matrix theory is rather useful to describe
resonant processes in collider experiments where the initial and final
states can be well prepared and detected. However, this formalism
cannot directly be applied to the early Universe, as it does not take
into account the many decoherence-inducing collisions that an unstable
particle may undergo with the thermal background before it decays.
Therefore, one must seek a method that isolates the {\em
incoherent}\cite{LES} part of an S-matrix amplitude. The new
resummation method should include finite width effects in the mixing
and decay of unstable particles. This will be done in an effective
manner, by employing a procedure related to the
Lehmann--Symanzik--Zimmermann formalism (LSZ).\cite{LSZ} Then, the
{\em incoherent} decay amplitude derived with this method may
equivalently be embedded into a transition element \cite{PP,ANPB} in
line with Veltman's S-matrix formulation. As we will see in Section
9, the squared resummed decay amplitudes thus obtained will become the
relevant collision terms entering the Boltzmann equations for the
thermodynamic evolution of the Universe.
\begin{figure}
\begin{center}
\begin{picture}(360,100)(0,0)
\SetWidth{0.8}
\Text(0,60)[l]{$S_{i,\dots}$}\Text(30,60)[]{$=$}
\Text(60,60)[]{$\lim\limits_{p^2\to M^2_i}$}
\GOval(150,60)(30,20)(0){0.75}\Line(170,60)(200,60)\GCirc(215,60){15}{0.75}
\Line(230,60)(260,60)\Vertex(260,60){2}
\Line(120,90)(135,80)\Line(120,30)(135,40)\Vertex(120,90){2}
\Vertex(120,30){2}\Vertex(113,60){2}\Vertex(115,75){2}\Vertex(115,45){2}
\Text(115,90)[r]{$S_{i_1}$}\Text(115,30)[r]{$S_{i_n}$}
\Text(185,65)[b]{$S_k$}\Text(245,65)[b]{$S_j$}
\Text(270,80)[l]{$Z^{-1/2T}_{ji}\, \hat{\Delta}^{-1}_{ii}(p^2)$}
\LongArrow(250,50)(235,50)\Text(255,50)[l]{$p$}
\end{picture}\\[0.7cm]
\end{center}
\fcaption{Diagrammatic representation of the renormalized amplitude
$S_{i,\dots}$, non-amputated in the external unstable-particle leg,
and of the LSZ reduction formalism.}\label{fig:2}
\end{figure}
We shall now demonstrate the effective resummation approach to
unstable particle mixing. Let us consider a theory with two neutral
unstable scalars, e.g.\ $S_1$ and $S_2$. The approach can then be
extended to the case of unstable fermions such as heavy Majorana
neutrinos. The bare (unrenormalized) fields $S^0_i$ and their
respective masses $M^0_i$ may then be expressed in terms of
renormalized fields $S_i$ and masses $M_i$ in the following way:
\begin{eqnarray}
\label{RenS0}
S^0_i & = & Z^{1/2}_{ij}\, S_j \ =\ \Big( \delta_{ij}\, +\,
\frac{1}{2} \delta Z_{ij}\Big) S_j\ ,\\
\label{RenMass}
(M^0_i)^2 & = & M^2_i\, +\, \delta M^2_i\ .
\end{eqnarray}
Here and henceforth, summation is understood over repeated indices
that do not appear on both sides of an equation. In Eqs.\
(\ref{RenS0}) and (\ref{RenMass}), $Z^{1/2}_{ij}$ and $\delta M^2_i$ are
the wave-function and mass renormalization constants, respectively,
which can be determined from renormalization conditions imposed on the
two-point correlation functions, $\Pi_{ij}(p^2)$, for the transitions
$S_j\to S_i$ in some physical scheme, such as the on-mass-shell (OS)
renormalization scheme.\cite{OS} More details may be found in the
appendix.
In order to include the mixing of the unstable scalars, we must first
calculate all the $S_iS_j$ Green functions, with $i,j =1,2$. After
summing up a geometric series of the self-energies $\Pi_{ij}(p^2)$,
the full propagators may be obtained by inverting the following
inverse propagator matrix:
\begin{equation}
\label{InvD12}
\Delta^{-1}_{ij} (p^2)\ =\
\left[
\begin{array}{cc}
p^2\, -\, (M^0_1)^2\, +\, \Pi_{11}(p^2) & \Pi_{12}(p^2)\\
\Pi_{21}(p^2) & p^2\, -\, (M^0_2)^2\, +\, \Pi_{22}(p^2)
\end{array} \right]\, .
\end{equation}
The result of inverting the matrix in Eq.\ (\ref{InvD12}) may be given by
\begin{eqnarray}
\label{D11}
\Delta_{11}(p^2) &=& \left[ \, p^2\, -\, (M^0_1)^2
+\Pi_{11}(p^2)-\, \frac{\Pi^2_{12}(p^2)}{p^2-(M^0_2)^2+
\Pi_{22}(p^2)}\right]^{-1}\,
,\\
\label{D22}
\Delta_{22}(p^2) &=& \left[ \, p^2\, -\, (M^0_2)^2
+\Pi_{22}(p^2)-\, \frac{\Pi^2_{12}(p^2)}{p^2-(M^0_1)^2+
\Pi_{11}(p^2)}\right]^{-1}\,
,\\
\label{D12}
\Delta_{12}(p^2) &=& \Delta_{21}(p^2)\ =\
-\, \Pi_{12}(p^2) \Bigg[ \Big(p^2-(M^0_2)^2+\Pi_{22}(p^2)\Big)\nonumber\\
&&\times \Big( p^2 - (M^0_1)^2 +\Pi_{11}(p^2)\Big)\,
-\, \Pi^2_{12}(p^2)\, \Bigg]^{-1}\, ,
\end{eqnarray}
where $\Pi_{12}(p^2)=\Pi_{21}(p^2)$. Moreover, we observe the crucial
factorization property for the off-diagonal ($i\not=j$) resummed
scalar propagators
\begin{eqnarray}
\label{Drel}
\Delta_{ij}(p^2) &=& -\, \Delta_{ii}(p^2)\
\frac{\Pi_{ij}(p^2)}{p^2\, -\, (M^0_j)^2\, +\, \Pi_{jj}(p^2)}\nonumber\\
&=& -\, \frac{\Pi_{ij}(p^2)}{p^2\, -\, (M^0_i)^2\, +\,
\Pi_{ii}(p^2)}\ \Delta_{jj}(p^2)\ .
\end{eqnarray}
The resummed unrenormalized scalar propagators $\Delta_{ij}(p^2)$ are
related to the respective renormalized ones $\hat{\Delta}_{ij}(p^2)$
through the expression
\begin{equation}
\label{D_Dhat}
\Delta_{ij}(p^2)\ =\
Z^{1/2}_{im}\, \hat{\Delta}_{mn}(p^2)\, Z^{1/2T}_{nj}\, ,
\end{equation}
where $\hat{\Delta}_{ij}(p^2)$ may be obtained from Eqs.\
(\ref{D11})--(\ref{D12}), just by replacing $M^0_i$ with $M_i$ and
$\Pi_{ij}(p^2)$ with $\widehat{\Pi}_{ij}(p^2)$. Note that the
property given in Eq.\ (\ref{Drel}) will also hold true for the
renormalized scalar propagators $\hat{\Delta}_{ij}(p^2)$.
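As a quick numerical cross-check of the explicit inversion formulas
(\ref{D11})--(\ref{D12}) and of the factorization property (\ref{Drel}),
one may invert the matrix (\ref{InvD12}) directly. A minimal Python
sketch, with assumed toy values for $p^2$, the masses and the
self-energies, reads:

\begin{verbatim}
import numpy as np

p2 = 1.05                                 # p^2 (toy value)
M1s, M2s = 1.0, 1.2                       # (M^0_i)^2 (toy values)
Pi = np.array([[0.020 + 0.010j, 0.005 + 0.003j],
               [0.005 + 0.003j, 0.030 + 0.015j]])  # Pi_ij(p^2), symmetric

# Inverse propagator matrix, Eq. (InvD12)
Dinv = np.array([[p2 - M1s + Pi[0, 0], Pi[0, 1]],
                 [Pi[1, 0], p2 - M2s + Pi[1, 1]]])
D = np.linalg.inv(Dinv)

# Eq. (D11): diagonal element from the explicit formula
D11 = 1.0 / (p2 - M1s + Pi[0, 0] - Pi[0, 1]**2 / (p2 - M2s + Pi[1, 1]))
assert np.isclose(D[0, 0], D11)

# Eq. (Drel): Delta_12 = -Delta_11 Pi_12 / (p^2 - (M^0_2)^2 + Pi_22)
D12 = -D[0, 0] * Pi[0, 1] / (p2 - M2s + Pi[1, 1])
assert np.isclose(D[0, 1], D12)
print("inversion formulas and factorization property verified")
\end{verbatim}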
Suppose that we wish to find the effective resummed decay amplitude
$\widehat{\cal T}_{S_i}$ for the decay $S_i$ to $n$ light stable
scalars $S_{i_1}$, \dots, $S_{i_n}$. In analogy to the LSZ formalism,
one starts with the Green function describing the transition shown in
Fig.\ \ref{fig:2}, and amputates the external legs by their inverse
propagators. For the stable external lines $S_{i_1}$, \dots,
$S_{i_n}$, the procedure is essentially the same as in the usual LSZ
formalism. The formalism may then be extended to the external line
describing the $S_i S_j$ system. The intermediate steps of this
procedure are given by
\begin{eqnarray}
\label{LSZ2}
\widehat{\cal T}_{i\dots}& = & \lim\limits_{p^2\to M^2_i}\
{\cal T}^{amp}_{k\dots}
Z^{1/2}_{km}\, \hat{\Delta}_{mn}(p^2)\, Z^{1/2T}_{nj}
Z^{-1/2T}_{ji} \hat{\Delta}^{-1}_{ii}(p^2) \nonumber\\
&=& \lim\limits_{p^2\to M^2_i}\Big[ {\cal T}^{amp}_{k\dots}Z^{1/2}_{ki}\
-\ {\cal T}^{amp}_{k\dots}Z^{1/2}_{km} \frac{\widehat{\Pi}_{mi}(p^2)
(1-\delta_{mi})}{p^2-M^2_m+\widehat{\Pi}_{mm}(p^2)}\, \Big]\nonumber\\
&=& {\cal T}_{i\dots}\ -\ {\cal T}_{j\dots}\frac{\widehat{\Pi}_{ji}(M^2_i)
(1-\delta_{ij})}{M^2_i-M^2_j+\widehat{\Pi}_{jj}(M^2_i)}\ ,
\end{eqnarray}
where ${\cal T}_{i\dots}$ and ${\cal T}_{j\dots}$ are the renormalized
transition elements evaluated in the stable-particle approximation.
One should bear in mind that the OS renormalized self-energies
$\widehat{\Pi}_{ji}(M^2_i)$ in Eq.\ (\ref{LSZ2}) have non-vanishing
absorptive parts, as renormalization can only modify the dispersive
(real) part of these self-energies. The reason is that the CT
Lagrangian must be Hermitian as opposed to the absorptive parts which
are anti-Hermitian. In fact, these additional width mixing effects are
the ones we wish to include in our formalism for decay amplitudes and
are absent in the conventional perturbation theory. It is also
important to observe that our approach to decays is not singular,
i.e.\ $\widehat{\cal T}_{i\dots}$ displays an analytic behaviour in the
degenerate limit $M^2_i\to M^2_j$, because of the appearance of the
imaginary term $i{\rm Im}\widehat{\Pi}_{jj}(M^2_i)$ in the denominator
of the mixing factor present in the last equality of Eq.\
(\ref{LSZ2}). Finally, we must stress that the inclusion of these
phenomena has been performed in an effective manner. Since the
decaying unstable particle cannot appear in the initial
state,\cite{Velt} the resummed decay amplitude must be regarded as
being a part which can effectively be embedded into a resummed
S-matrix element.\cite{PP} This resummed S-matrix element describes
the dynamics of the very same unstable particle, which is produced by
some asymptotic states, resides in the intermediate state, and
subsequently decays either directly or indirectly, through mixing,
into the observed final states.
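The analytic behaviour of the resummed amplitude in the degenerate
limit can be made explicit numerically. The following minimal sketch
evaluates the last equality of Eq.\ (\ref{LSZ2}) for assumed toy values
of the renormalized self-energies; the absorptive part of
$\widehat{\Pi}_{22}$ regulates the would-be singularity at
$M^2_2\to M^2_1$:

\begin{verbatim}
import numpy as np

T1, T2 = 1.0, 0.8            # stable-particle amplitudes (toy values)
Pi21 = 0.010 + 0.010j        # \hat{Pi}_{21}(M_1^2) (toy value)
Pi22 = 0.000 + 0.020j        # absorptive part survives renormalization

M1sq = 1.0
for M2sq in [1.5, 1.01, 1.0001, 1.0]:
    T_res = T1 - T2 * Pi21 / (M1sq - M2sq + Pi22)
    print(f"M2^2 = {M2sq}:  |T_res| = {abs(T_res):.3f}")

# The conventional expression, T1 - T2*Pi21/(M1sq - M2sq), would
# instead diverge as M2sq -> M1sq.
\end{verbatim}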
The resummation approach outlined above can now be carried over to the
mixing between two unstable fermions, call them $f_1$ and $f_2$. As
we did for the case of scalars, we express the bare left- and
right-handed chiral fields, $f^0_{Li}$ and $f^0_{Ri}$ (with $i=1,2$),
in terms of renormalized fields as follows:
\begin{equation}
f^0_{Li}\ =\ Z^{1/2}_{Lij}\, f_{Lj}\ , \quad\qquad
f^0_{Ri}\ =\ Z^{1/2}_{Rij}\, f_{Rj}\ ,
\end{equation}
where $Z^{1/2}_{Lij}$ ($Z^{1/2}_{Rij}$) is the wave-function
renormalization constant for the left- (right-)handed chiral fields,
which may be determined from the fermionic self-energy transitions
$f_j\to f_i$, $\Sigma_{ij} (\not\!\! p)$, e.g.\ in the OS
renormalization scheme.\cite{KP} Analogously to Eq.\ (\ref{InvD12}),
the resummed fermion propagator matrix may be obtained from
\begin{equation}
\label{InvS}
S_{ij}(\not\! p)\ =\ \left[ \begin{array}{cc} \not\! p - m^0_1 +
\Sigma_{11}(\not\! p) & \Sigma_{12}(\not\! p)\\
\Sigma_{21}(\not\! p) & \not\! p - m^0_2 + \Sigma_{22}(\not\! p)
\end{array} \right]^{-1},\qquad
\end{equation}
where $m^0_{1,2}$ are the bare fermion masses, which can be decomposed
into the OS renormalized masses $m_{1,2}$ and the CT mass terms
$\delta m_{1,2}$ as $m^0_{1,2}=m_{1,2} + \delta m_{1,2}$. Inverting
the matrix-valued $2\times 2$ matrix in Eq.\ (\ref{InvS}) yields
\begin{eqnarray}
\label{S11}
S_{11}(\not\! p) &=& \Big[\not\! p\, -\, m^0_1\, +\,
\Sigma_{11}(\not\! p)\, -\, \Sigma_{12}(\not\! p)
\frac{1}{\not\! p - m^0_2 + \Sigma_{22}(\not\! p)}
\Sigma_{21}(\not\! p) \Big]^{-1}, \\
\label{S22}
S_{22}(\not\! p) &=& \Big[\not\! p\, -\, m^0_2\, +\,
\Sigma_{22}(\not\! p)\, -\, \Sigma_{21}(\not\!
p) \frac{1}{\not\! p - m^0_1 + \Sigma_{11}(\not\! p)}
\Sigma_{12}(\not\! p) \Big]^{-1}, \\
\label{S12}
S_{12}(\not\! p) &=& -\, S_{11}(\not\! p)\,
\Sigma_{12}(\not\! p)\, \Big[ \not\! p\, -\, m^0_2\, +\,
\Sigma_{22}(\not\! p) \Big]^{-1} \nonumber\\ &=& -\, \Big[
\not\! p\, -\, m^0_1\, +\, \Sigma_{11}(\not\! p) \Big]^{-1}
\Sigma_{12}(\not\! p)\, S_{22}(\not\! p)\ ,\\
\label{S21}
S_{21}(\not\! p) &=& -\, S_{22}(\not\! p)\,
\Sigma_{21}(\not\! p)\, \Big[ \not\! p\, -\, m^0_1\, +\,
\Sigma_{11}(\not\! p) \Big]^{-1}\, \nonumber\\ &=& -\, \Big[
\not\! p\, -\, m^0_2\, +\, \Sigma_{22}(\not\! p)
\Big]^{-1} \Sigma_{21}(\not\! p)\, S_{11}(\not\! p)\ .
\end{eqnarray}
Equations (\ref{S12}) and (\ref{S21}) show that the resummed
propagators $S_{12}(\not\!\! p)$ and $S_{21}(\not\!\! p)$ are
endowed with a factorization property analogous to Eq.\ (\ref{Drel}).
Similarly, the renormalized and unrenormalized resummed propagators
are related by
\begin{equation}
\label{S_Shat}
S_{ij}(\not\! p)\ =\ (Z^{1/2}_{Lim}\, P_L\ +\ Z^{1/2}_{Rim}\, P_R )\,
\widehat{S}_{mn}(\not\! p)\, (Z^{1/2\dagger}_{Lnj}\, P_R\ +\
Z^{1/2\dagger}_{Rnj}\, P_L )\, ,
\end{equation}
where the caret indicates that the resummed
fermionic propagators have been renormalized in the OS scheme.
Moreover, the renormalized propagators $\widehat{S}_{ij}(\not\! p)$
may be obtained from $S_{ij}(\not\!\! p)$ in Eqs.\
(\ref{S11})--(\ref{S21}), if the obvious replacements $m^0_i\to m_i$
and $\Sigma_{ij}(\not\! p)\to \widehat{\Sigma}_{ij}(\not\! p)$ are
made.
By analogy, one can derive the resummed amplitude,
$\widehat{\cal T}_{i\dots}$, for the decay $f_i\to X$ of an unstable
fermion, as we did for the scalar case. More explicitly, we have
\begin{eqnarray}
\label{LSZ3}
\widehat{\cal T}_{i\dots}\, u_i(p) &=&
{ \cal T}^{amp}_{k\dots}\, (Z^{1/2}_{Lkm}\, P_L\, +\, Z^{1/2}_{Rkm}\, P_R )\,
\widehat{S}_{mn}(\not\! p)\, (Z^{1/2\dagger}_{Lnj}\, P_R\, +\,
Z^{1/2\dagger}_{Rnj}\, P_L )\nonumber\\
&&\times (Z^{-1/2\dagger}_{Lji}\, P_R\, +\,
Z^{-1/2\dagger}_{Rji}\, P_L )\, \widehat{S}^{-1}_{ii}(\not\! p)\,
u_i(p)\\
&=& {\cal T}_{i\dots}\, u_i(p)\ -\ (1-\delta_{ij}) {\cal T}_{j\dots}\,
\widehat{\Sigma}_{ji}(\not\! p)\, \Big[ \not\! p\, -\, m_j\, +\,
\widehat{\Sigma}_{jj}(\not\! p) \Big]^{-1} u_i(p)\, .\nonumber
\end{eqnarray}
Again, ${\cal T}_{i\dots}$ represent the respective renormalized
transition amplitudes evaluated in the stable-particle approximation.
The amplitudes ${\cal T}_{i\dots}$ also include all higher-order
$n$-point functions, such as vertex corrections. Based on the formula
(\ref{LSZ3}), we shall calculate the CP asymmetries in the decays of
heavy Majorana neutrinos in the next section.
\setcounter{section}{6}
\setcounter{equation}{0}
\section{CP asymmetries}\label{sec:6}
\noindent
The resummation approach presented in the previous section may be
applied to describe $\varepsilon$- and $\varepsilon'$-type CP
violation in heavy Majorana neutrino decays shown in Fig.\
\ref{fig:3}. The same formalism may also be used to determine the
collision terms for the inverse decays, which occur in the formulation
of the Boltzmann equations (see also Section 8).
\begin{figure}
\begin{center}
\begin{picture}(300,100)(0,0)
\SetWidth{0.8}
\Vertex(50,50){2}
\Line(50,50)(90,50)\Text(70,62)[]{$N_i$}
\Line(130,50)(170,50)\Text(150,62)[]{$N_j$}
\GCirc(110,50){20}{0.9}\Text(110,50)[]{{\boldmath $\varepsilon$}}
\GCirc(180,50){10}{0.9}\Text(180,50)[]{\boldmath $\varepsilon$\bf'}
\DashArrowLine(187,55)(220,80){5}\Text(225,80)[l]{$\Phi^\dagger$}
\ArrowLine(187,45)(220,20)\Text(225,20)[l]{$L$}
\end{picture}\\
\end{center}
\fcaption{$\varepsilon$- and $\varepsilon'$-type CP violation in the
decays of heavy Majorana neutrinos.}\label{fig:3}
\end{figure}
Let us consider the decay $N_1\to l^-\chi^+$ in a model with two
right-handed neutrinos. The inclusion of all other decay channels is
then obvious. We shall first write down the transition amplitude
responsible for $\varepsilon$-type CP violation, denoted as ${\cal
T}^{(\varepsilon)}_N$, and then take CP-violating vertex corrections
into account. Applying (\ref{LSZ3}) to heavy Majorana neutrino decays,
we obtain
\begin{equation}
\label{TN1eps}
{\cal T}^{(\varepsilon)}_{N_1}\ =\ h_{l1}\, \bar{u}_lP_R u_{N_1}\ -\
ih_{l2}\, \bar{u}_l P_R \Big[\not\! p - m_{N_2} + i\Sigma_{22}^{abs}
(\not\! p)\Big]^{-1} \Sigma_{21}^{abs}(\not\! p) u_{N_1}\, ,
\end{equation}
where the absorptive part of the one-loop transitions $N_j\to N_i$,
with $i,j=1,2$, has the general form
\begin{equation}
\label{Sigabs}
\Sigma^{abs}_{ij} (\not\! p)\ =\ A_{ij}(p^2)\not\! p P_L\, +\,
A^*_{ij}(p^2)\not\! p P_R\, ,
\end{equation}
with
\begin{equation}
\label{Aij}
A_{ij}(p^2)\ =\ \frac{h_{l'i}h^*_{l'j}}{32\pi}\, \Big[\, \frac{3}{2}\, +\,
\frac{1}{2}\, \Big( 1-\frac{M^2_H}{p^2} \Big)^2\, \Big]\, .
\end{equation}
In the limit $M_H\to 0$, Eq.\ (\ref{Aij}) gives $A_{ij} = h_{l'i}
h^*_{l'j} / (16\pi)$. The CP-transformed resummed amplitude describing
the decay $N_1\to l^+\chi^-$, $\overline{{\cal
T}}^{(\varepsilon)}_{N_1}$, reads
\begin{eqnarray}
\label{TCPN1eps}
\overline{{\cal T}}^{(\varepsilon)}_{N_1} &=&
h^*_{l1}\, \bar{v}_{N_1}P_L v_l\ -\
ih_{l2}\, \bar{v}_{N_1} \Sigma_{12}^{abs}(-\not\! p) \Big[\, -\not\! p -
m_{N_2} + i\Sigma_{22}^{abs} (-\not\! p)\Big]^{-1} P_L v_l\nonumber\\
&=& h^*_{l1}\, \bar{u}_lP_L u_{N_1}\ -\
ih^*_{l2}\, \bar{u}_l P_L \Big[\not\! p - m_{N_2} +
i\overline{\Sigma}_{22}^{abs}
(\not\! p)\Big]^{-1} \overline{\Sigma}_{21}^{abs}(\not\! p) u_{N_1}\, ,
\end{eqnarray}
where
\begin{equation}
\label{SigCabs}
\overline{\Sigma}^{abs}_{ij} (\not\! p)\ =\ A_{ij}(p^2)\not\! p P_R\, +\,
A^*_{ij}(p^2)\not\! p P_L
\end{equation}
is the charge-conjugate absorptive self-energy. The last step of Eq.\
(\ref{TCPN1eps}) is derived by making use of the identities
\begin{equation}
\label{idCP}
u(p,s)\ =\ C\bar{v}^T(p,s)\, ,\qquad C\gamma_\mu C^{-1}\ =\ -\gamma^T_\mu\, .
\end{equation}
The expressions in Eqs.\ (\ref{TN1eps}) and (\ref{TCPN1eps}) may be
simplified even further, if the Dirac equation of motion is employed
for the external spinors. Then, the two resummed decay amplitudes,
${\cal T}^{(\varepsilon)}_{N_1}$ and $\overline{{\cal
T}}^{(\varepsilon)}_{N_1}$, take the simple form
\begin{eqnarray}
\label{TN}
{\cal T}^{(\varepsilon)}_{N_1} &=& \bar{u}_lP_R u_{N_1}\,
\Big[\, h_{l1}\, -\, ih_{l2}\, \frac{m^2_{N_1}(1+iA_{22})A^*_{21}
+m_{N_1}m_{N_2}A_{21}}{m^2_{N_1}(1+iA_{22})^2 -m^2_{N_2}}\, \Big]\, ,\\
\label{TCPN}
\overline{{\cal T}}^{(\varepsilon)}_{N_1} &=&\bar{u}_lP_L u_{N_1}\,
\Big[\, h^*_{l1}\, -\, ih^*_{l2}\, \frac{m^2_{N_1}(1+iA_{22})A_{21}
+m_{N_1}m_{N_2}A^*_{21}}{m^2_{N_1}(1+iA_{22})^2 -m^2_{N_2}}\, \Big]\, .
\end{eqnarray}
In addition, the respective transition amplitudes involving the decays
$N_2\to l^-\chi^+$, ${\cal T}^{(\varepsilon)}_{N_2}$, and $N_2\to
l^+\chi^-$, $\overline{{\cal T}}^{(\varepsilon)}_{N_2}$, may be
obtained by interchanging the indices `1' and `2' everywhere in Eqs.\
(\ref{TN}) and (\ref{TCPN}).
In order to study the $\varepsilon$- and $\varepsilon'$-type
mechanisms of CP violation in heavy Majorana neutrino decays, we
define the following CP-violating quantities:
\begin{eqnarray}
\label{epsNi}
\varepsilon_{N_i} & =& \frac{|{\cal T}^{(\varepsilon)}_{N_i}|^2\, -\,
|\overline{{\cal T}}^{(\varepsilon)}_{N_i}|^2}{
|{\cal T}^{(\varepsilon)}_{N_i}|^2\, +\,
|\overline{{\cal T}}^{(\varepsilon)}_{N_i}|^2}\ ,\qquad \mbox{for}\ i=1,2\, ,\\
\label{epsN}
\varepsilon_N & =& \frac{|{\cal T}^{(\varepsilon)}_{N_1}|^2\, +\,
|{\cal T}^{(\varepsilon)}_{N_2}|^2\,
-\, |\overline{{\cal T}}^{(\varepsilon)}_{N_1}|^2
\, -\, |\overline{{\cal T}}^{(\varepsilon)}_{N_2}|^2}{
|{\cal T}^{(\varepsilon)}_{N_1}|^2\, +\, |{\cal T}^{(\varepsilon)}_{N_2}|^2
\, +\, |\overline{{\cal T}}^{(\varepsilon)}_{N_1}|^2\, +\,
|\overline{{\cal T}}^{(\varepsilon)}_{N_2}|^2}\ .
\end{eqnarray}
Correspondingly, the CP-violating parameters $\varepsilon'_{N_i}$ and
$\varepsilon'_{N}$ may be defined by
\begin{eqnarray}
\label{epsNipr}
\varepsilon'_{N_i} & =& \frac{|{\cal T}^{(\varepsilon')}_{N_i}|^2\, -\,
|\overline{{\cal T}}^{(\varepsilon')}_{N_i}|^2}{
|{\cal T}^{(\varepsilon')}_{N_i}|^2\, +\,
|\overline{{\cal T}}^{(\varepsilon')}_{N_i}|^2}\ ,
\qquad \mbox{for}\ i=1,2\, ,\\
\label{epsNpr}
\varepsilon'_N & =& \frac{|{\cal T}^{(\varepsilon')}_{N_1}|^2\, +\,
|{\cal T}^{(\varepsilon')}_{N_2}|^2\,
-\, |\overline{{\cal T}}^{(\varepsilon')}_{N_1}|^2
\, -\, |\overline{{\cal T}}^{(\varepsilon')}_{N_2}|^2}{
|{\cal T}^{(\varepsilon')}_{N_1}|^2\, +\, |{\cal T}^{(\varepsilon')}_{N_2}|^2
\, +\, |\overline{{\cal T}}^{(\varepsilon')}_{N_1}|^2\, +\,
|\overline{{\cal T}}^{(\varepsilon')}_{N_2}|^2}\ .
\end{eqnarray}
The last parameters quantify CP violation coming exclusively from the
one-loop irreducible vertices. In Eqs.\ (\ref{epsNi}) and
(\ref{epsN}), the parameters $\varepsilon_{N_i}$ and $\varepsilon_N$
share the common property that they do not depend on the final state
into which $N_i$ decays, despite the fact that the individual squared matrix
elements do. In general, both $\varepsilon$- and $\varepsilon'$-type
contributions are not directly distinguishable in the decay widths
$\Gamma (N_i\to l^\mp\chi^\pm)$, unless $\varepsilon_{N_i}\gg
\varepsilon'_{N_i}$ or vice versa, for some range of the kinematic
parameters. Evidently, the physical CP asymmetries are given by
\begin{eqnarray}
\label{deltaNi}
\delta_{N_i} &=& \frac{\Gamma (N_i\to L\Phi^\dagger )\, -\,
\Gamma (N_i\to L^C \Phi)}{\Gamma (N_i\to L\Phi^\dagger )\, +\,
\Gamma (N_i\to L^C \Phi)}\ , \qquad \mbox{for}\ i=1,2\, ,\\
\label{deltaN}
\delta_N &=& \frac{\sum_{i=1}^2\Gamma (N_i\to L\Phi^\dagger )\, -\,
\sum_{i=1}^2\Gamma (N_i\to L^C \Phi)}{\sum_{i=1}^2
\Gamma (N_i\to L\Phi^\dagger )\, +\, \sum_{i=1}^2\Gamma (N_i\to L^C \Phi)}\ ,
\end{eqnarray}
where $L$ refers to all fermionic degrees of freedom of the leptonic
isodoublet into which heavy Majorana neutrinos can decay. Nevertheless, the
parameters $\varepsilon_{N_i}$, $\varepsilon_N$, $\varepsilon'_{N_i}$
and $\varepsilon'_N$ defined above are very useful to determine the
contributions due to the different mechanisms of CP violation.
We now turn to the calculation of the CP-violating contribution, which
is entirely due to the heavy-neutrino self-energy effects.
Substituting Eqs.\ (\ref{TN}) and (\ref{TCPN}) into (\ref{epsNi}), we
arrive at the simple formulas\cite{APRD}
\begin{eqnarray}
\label{epsN1}
\varepsilon_{N_1} &\approx& \frac{{\rm Im} ( h^*_{l1}h_{l2})^2}{
|h_{l1}|^2|h_{l2}|^2}\
\frac{\Delta m^2_N m_{N_1} \Gamma_{N_2} }{(\Delta m^2_N)^2\, +\,
m^2_{N_1}\Gamma^2_{N_2}}\, ,\\
\label{epsN2}
\varepsilon_{N_2} &\approx& \frac{{\rm Im} ( h^*_{l1}h_{l2})^2}{
|h_{l1}|^2|h_{l2}|^2}\
\frac{\Delta m^2_N m_{N_2} \Gamma_{N_1} }{(\Delta m^2_N)^2\, +\,
m^2_{N_2}\Gamma^2_{N_1}}\, ,
\end{eqnarray}
where $\Delta m^2_N = m^2_{N_1} - m^2_{N_2}$ and
\begin{equation}
\label{GammaN}
\Gamma_{N_i}\ =\ \frac{|h_{li}|^2}{8\pi}\ m_{N_i}
\end{equation}
are the decay widths of the heavy Majorana neutrinos. Equations
(\ref{epsN1}) and (\ref{epsN2}) are a very good approximation for any
range of heavy-neutrino masses of interest. Both CP asymmetries
$\varepsilon_{N_1}$ and $\varepsilon_{N_2}$ are of the same sign and
individually tend to zero as $\Delta m^2_N\to 0$, as they should on
account of Eq.\ (\ref{CPinv}). In the conventional perturbation
theory, the width terms $m^2_{N_1} \Gamma^2_{N_2}$ and $m^2_{N_2}
\Gamma^2_{N_1}$ occurring in the last denominators on the RHS of Eqs.\
(\ref{epsN1}) and (\ref{epsN2}) are absent. This very last fact is
precisely what causes a singular behaviour when the degeneracy between
the two heavy Majorana neutrinos is exact. On physical grounds,
however, the only natural parameter that can regulate such a
singularity is the finite width of the heavy neutrinos, which is
naturally implemented within the resummation approach.
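The resonant structure of Eqs.\ (\ref{epsN1}) and (\ref{epsN2}) is
easily explored numerically. The sketch below evaluates
$\varepsilon_{N_1}$ for the couplings of scenario I in Eq.\
(\ref{scenario}) below, scanning the degeneracy parameter
$x_N = m_{N_2}/m_{N_1}-1$:

\begin{verbatim}
import numpy as np

mN1 = 1.0e4                             # GeV (10 TeV)
h1, h2 = 1.0e-6, 1.0e-6 * (1 + 1j)      # scenario I couplings
G2 = abs(h2)**2 * mN1 / (8 * np.pi)     # Gamma_{N_2}, Eq. (GammaN)
dCP = np.imag((np.conj(h1) * h2)**2) / (abs(h1)**2 * abs(h2)**2)

for xN in [1e-15, 1e-14, 1e-13, 1e-12, 1e-11]:
    mN2 = mN1 * (1 + xN)                # x_N = m_N2/m_N1 - 1
    dm2 = mN1**2 - mN2**2
    eps1 = dCP * dm2 * mN1 * G2 / (dm2**2 + mN1**2 * G2**2)
    print(f"x_N = {xN:.0e}:  eps_N1 = {eps1:+.2e}")

# |eps_N1| peaks at |Delta m^2_N| ~ m_N1 Gamma_N2, i.e. at
# x_N ~ Gamma_N2/(2 m_N1) ~ 4e-14, where it reaches dCP/2,
# in accordance with the conditions (CPcond) and (dCP).
\end{verbatim}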
{}From Eqs.\ (\ref{epsN1}) and (\ref{epsN2}), it is not difficult to
derive the sufficient and necessary conditions for resonant
enhancement of CP violation. To be specific, CP violation can be of
order unity if and only if
\begin{eqnarray}
\label{CPcond}
{\rm (i)}&&\hspace{-0.3cm}
m_{N_1}\, -\, m_{N_2}\ \sim\ \pm\, A_{22}\, m_{N_2}\ =\
\pm\, \frac{\Gamma_{N_2}}{2}\quad \mbox{and/or}\quad
m_{N_1}\, -\, m_{N_2}\ \sim\ \pm\, A_{11}\, m_{N_1}\ =\
\pm\, \frac{\Gamma_{N_1}}{2}\, ,\\
\label{dCP}
{\rm (ii)}&&\hspace{-0.3cm}
\delta_{CP}\ =\ \frac{|{\rm Im} (h^*_{l1}h_{l2})^2|}{|h_{l1}|^2
|h_{l2}|^2}\ \approx 1\ .
\end{eqnarray}
Before we present numerical estimates of CP asymmetries, we calculate
for completeness the contributions to CP violation arising entirely
from vertex effects. The $\varepsilon'$-type contributions can be
significant for large differences of heavy neutrino masses, e.g.\ for
$m_{N_1}-m_{N_2}\sim m_{N_1}$ or $m_{N_2}$. In this regime, both
$\varepsilon$-type and $\varepsilon'$-type effects are of comparable
order.\cite{LS1} It is useful to define first the function
\begin{equation}
\label{Fxa}
F(x,\alpha)\ =\ \sqrt{x}\, \Big[\, 1\, -\, \alpha\, -\, (1+x)
\ln\Big(\frac{1-\alpha+x}{x}\Big)\, \Big]\, .
\end{equation}
With $\alpha = 0$, $F(x,\alpha)$ reduces to the Fukugita-Yanagida loop
function $f(x) = \sqrt{x} [ 1 - (1+x) \ln (1 + 1/x)]$.\cite{FY} Then,
$L$-violating absorptive parts of the one-loop vertices $\chi^+lN_i$,
$\chi^0 \nu_l N_i$ and $H \nu_l N_i$, shown in Figs.\
\ref{fig:1}(a)--(c), are given by
\begin{eqnarray}
\label{eps'lN}
{\cal V}^{abs}_{\chi^+lN_i}(\not\! p) &=& -\, \frac{h^*_{l'i}h_{l'j}h_{lj}}{
16\pi\sqrt{p^2}}\ \not\! p P_L\, F\Big(\frac{m^2_{Nj}}{p^2}\ ,0\Big)\, ,\\
\label{eps'nuN}
{\cal V}^{abs}_{\chi^0\nu_lN_i}(\not\! p) &=& {\cal V}^{abs}_{H\nu_lN_i}
(\not\! p)\nonumber\\
&=& -\, \frac{h^*_{l'i}h_{l'j}h_{lj}}{
32\pi\sqrt{p^2}}\ \not\! p P_L\, \Big[\, F\Big(\frac{m^2_{Nj}}{p^2}\ ,0\Big)
\ +\ F\Big(\frac{m^2_{Nj}}{p^2}\ , \frac{M^2_H}{p^2}\Big)\, \Big]\, .\quad
\end{eqnarray}
Here, we have assumed that the external decaying heavy Majorana
neutrinos are off-shell, whereas the leptons and Higgs fields are on
their mass shell. The complete analytic expressions are calculated in
the appendix. Using Eqs.\ (\ref{eps'lN}) and (\ref{eps'nuN}) and
neglecting wave-function contributions, we compute the
$\varepsilon'$-type CP asymmetry in the conventional perturbation
theory. Considering all decay channels for the decaying heavy
Majorana neutrino, e.g.\ $N_1$, we find
\begin{eqnarray}
\label{eps'N1}
\varepsilon'_{N_1} &=& \frac{{\rm Im}(h^*_{l1}h_{l2})^2}{
16\pi |h_{l1}|^2\, [\frac{3}{4} +\frac{1}{4}(1-M^2_H/m^2_{N_1})^2]}\
\Big\{\, \frac{5}{4}\, F\Big(\frac{m^2_{N_2}}{m^2_{N_1}}\ ,0\Big)\, +\,
\frac{1}{4}\, F\Big(\frac{m^2_{N_2}}{m^2_{N_1}}\ ,
\frac{M^2_H}{m^2_{N_1}}\Big)\nonumber\\
&&+\, \frac{1}{4}\, \Big( 1\, -\, \frac{M^2_H}{m^2_{N_1}}\Big)^2\,
\Big[\, F\Big(\frac{m^2_{N_2}}{m^2_{N_1}}\ ,0\Big)\, +\,
F\Big(\frac{m^2_{N_2}}{m^2_{N_1}}\ , \frac{M^2_H}{m^2_{N_1}}\Big)
\, \Big]\, \Big\}\, .
\end{eqnarray}
In the limit $M_H\to 0$, the last formula simplifies to the known
result\cite{FY,MAL,CEV,epsilonprime}
\begin{equation}
\label{eps'}
\varepsilon'_{N_1}\ =\ \frac{{\rm Im}(h^*_{l1}h_{l2})^2}{
8\pi |h_{l1}|^2 }\ f\Big(\frac{m^2_{N_2}}{m^2_{N_1}}\Big)\, .
\end{equation}
Unlike $\varepsilon_{N_1}$, $\varepsilon'_{N_1}$ does not vanish in
the degenerate limit of the two heavy Majorana neutrinos $N_1$ and
$N_2$. However, when the value of $m_{N_1}$ approaches that of
$m_{N_2}$, the $\varepsilon'$-type part of the transition amplitude
squared for the $N_1$ decay becomes equal but opposite in sign to the
respective one of the $N_2$ decay. As a result, these two
$\varepsilon'$-type terms cancel one another, leading to the vanishing
of the CP-violating parameter $\varepsilon'_N$ defined in Eq.\
(\ref{epsNpr}). Consequently, as opposed to $\varepsilon$ effects,
$\varepsilon'$ effects cannot become resonant for any kinematic region
of mass parameters.
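For orientation, the loop functions of Eq.\ (\ref{Fxa}) and the
limit (\ref{eps'}) can be evaluated with a few lines of Python; the
couplings anticipate scenario II of Eq.\ (\ref{scenario}) below, and
the mass ratio is illustrative:

\begin{verbatim}
import numpy as np

def F(x, a):                      # Eq. (Fxa)
    return np.sqrt(x) * (1 - a - (1 + x) * np.log((1 - a + x) / x))

def f(x):                         # Fukugita-Yanagida function
    return np.sqrt(x) * (1 - (1 + x) * np.log(1 + 1.0 / x))

x = 4.0                           # (m_N2/m_N1)^2, illustrative
assert np.isclose(F(x, 0.0), f(x))      # F(x,0) reduces to f(x)

h1, h2 = 1.0e-2, 1.0e-2 * (1 + 1j)      # scenario II couplings
eps1p = np.imag((np.conj(h1) * h2)**2) / (8 * np.pi * abs(h1)**2) * f(x)
print(f"f({x}) = {f(x):+.4f},  eps'_N1 = {eps1p:+.3e}")
\end{verbatim}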
Both $\varepsilon$ and $\varepsilon'$ contributions can be included
into the resummed decay amplitudes. Taking Eqs.\ (\ref{eps'lN})
and (\ref{eps'nuN}) into account, we obtain
\begin{eqnarray}
\label{TN1}
{\cal T}_{N_1} \hspace{-2pt}&=&\hspace{-2pt} \bar{u}_l P_R\, \Big\{
h_{l1}+i{\cal V}^{abs}_{l1} (\not\! p) -
i\Big[h_{l2}+i{\cal V}^{abs}_{l2} (\not\! p) \Big]\nonumber\\
&&\times \Big[\not\! p - m_{N_2} + i\Sigma_{22}^{abs}(\not\! p)\Big]^{-1}
\Sigma_{21}^{abs}(\not\! p)\Big\} u_{N_1} ,\\
\label{TCPN1}
\overline{{\cal T}}_{N_1} \hspace{-2pt}&=&\hspace{-2pt}
\bar{u}_l P_L\, \Big\{
h^*_{l1}+i\overline{{\cal V}}^{abs}_{l1} (\not\! p)
- i\Big[h^*_{l2}+i\overline{{\cal V}}^{abs}_{l2} (\not\! p) \Big]\nonumber\\
&&\times
\Big[\not\! p - m_{N_2} + i\overline{\Sigma}_{22}^{abs}(\not\! p)\Big]^{-1}
\overline{\Sigma}_{21}^{abs}(\not\! p)\Big\} u_{N_1} ,
\end{eqnarray}
where the notation of the off-shell one-loop vertices has been
simplified to ${\cal V}^{abs}_{li}(\not\!\! p)$. The vertex functions
$\overline{{\cal V}}^{abs}_{li}(\not\! p)$ are the charge conjugates
of ${\cal V}^{abs}_{li}(\not\! p)$ and may hence be recovered from
Eqs.\ (\ref{eps'lN}) and (\ref{eps'nuN}), by taking the complex
conjugate for the Yukawa couplings and replacing $P_R$ with $P_L$.
Although the calculation of the CP-violating observables
$\delta_{N_i}$ defined in Eq.\ (\ref{deltaNi}) is quite
straightforward from Eqs.\ (\ref{TN1}) and (\ref{TCPN1}), it is not
very easy to present analytic expressions in a compact form.
\begin{figure}[hb]
\leavevmode
\begin{center}
\epsfxsize=11.cm
\epsffile[0 0 539 652]{baufig4.eps}
\end{center}
\fcaption{Numerical estimates of CP asymmetries in scenario
I.}\label{fig:4}
\end{figure}
\begin{figure}[ht]
\leavevmode
\begin{center}
\epsfxsize=11.cm
\epsffile[0 0 539 652]{baufig5.eps}
\end{center}
\fcaption{Numerical estimates of CP asymmetries versus
$m_{N_2}/m_{N_1} -1 $ in scenario II.}\label{fig:5}
\end{figure}
\begin{figure}[hb]
\leavevmode
\begin{center}
\epsfxsize=11.cm
\epsffile[0 0 425 425]{baufig6.eps}
\end{center}
\fcaption{Numerical estimates of CP asymmetries as a function
of $m_{N_2}/m_{N_1} -1 $ in scenario II.}\label{fig:6}
\end{figure}
To gauge better the dependence of CP asymmetries on the heavy neutrino
masses, we shall adopt two simple scenarios with two right-handed
neutrinos that mix actively with only one lepton family $l$:
\begin{eqnarray}
\label{scenario}
\mbox{I.} && m_{N_1}\, =\, 10\ \mbox{TeV}\, ,\qquad h_{l1}=10^{-6},\
\quad h_{l2}=10^{-6}(1+i)\, ,\nonumber\\
\mbox{II.} && m_{N_1}\, =\, 10^9\ \mbox{TeV}\, ,\qquad h_{l1}=10^{-2},\
\quad h_{l2}=10^{-2}(1+i)\, .
\end{eqnarray}
We assume that $N_2$ is always heavier than $N_1$, i.e.\ $m_{N_1}\leq
m_{N_2}$. The above two scenarios comply qualitatively with
Sakharov's third requirement, the out-of-equilibrium condition (see also
the discussion in Section 8). In view of Eq.\ (\ref{dCP}), both scenarios
I and II given above represent maximal cases of CP violation with
$\delta_{CP}=1$. Therefore, results for any other model may readily
be read off by multiplying the CP asymmetries with the appropriate
model-dependent factor $\delta_{CP}$.
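For a rough orientation, the widths (\ref{GammaN}) and the mass
splittings at which condition (\ref{CPcond}) is met can be estimated
directly; the following back-of-the-envelope sketch uses $|h_{l2}|$ as
the relevant coupling in each scenario:

\begin{verbatim}
import numpy as np

for name, mN1, h in [("I", 1.0e4, 1.0e-6 * (1 + 1j)),    # masses in GeV
                     ("II", 1.0e12, 1.0e-2 * (1 + 1j))]:
    Gamma = abs(h)**2 * mN1 / (8 * np.pi)      # Eq. (GammaN)
    xN_res = Gamma / (2 * mN1)                 # m_N2 - m_N1 ~ Gamma/2
    print(f"scenario {name}: Gamma = {Gamma:.1e} GeV,"
          f" resonant x_N ~ {xN_res:.1e}")
\end{verbatim}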
Figure \ref{fig:4} exhibits the CP asymmetries as a
function of the parameter $x_N$ for scenario I. The parameter $x_N$
defined in Eq.\ (\ref{xN}) is a measure of mass degeneracy for the two
heavy Majorana neutrinos $N_1$ and $N_2$. We divide the range of
values for the parameter $x_N$ into two regions: the first region is
plotted in Fig.\ \ref{fig:4}(a) and pertains to the kinematic domain
where resonant CP violation due to heavy-neutrino mixing occurs. The
second one, shown in Fig.\ \ref{fig:4}(b), represents the kinematic
range far away from the resonant CP-violating phenomenon. The dotted
line in Fig.\ \ref{fig:4}(a) gives the prediction of
$\varepsilon_{N_1}$, when Eq.\ (\ref{epsN1}) is calculated in the
conventional finite-order perturbation theory. Obviously,
$\varepsilon^{pert}_{N_1}$ diverges for sufficiently small values of
$x_N$, e.g.\ $x_N < 10^{-13}$. If resummation of the relevant
fermionic self-energy graphs is considered, the prediction for
$\varepsilon_{N_1}$ becomes analytic and is given by the dashed line
in Fig.\ \ref{fig:4}. The $\varepsilon_{N_1}$ line shows a maximum
for $x_N\approx 10^{-13}$. In agreement with the conditions in Eqs.\
(\ref{CPcond}) and (\ref{dCP}), CP violation may resonantly increase
up to order unity.\cite{APRL,ANPB} The solid line in Fig.\ \ref{fig:4}
displays the dependence of the CP-violating parameter $\delta_N$ in
Eq.\ (\ref{deltaN}) on $x_N$, where $\varepsilon'$-type contributions
are included. The latter are far too small in this scenario to
affect the BAU, e.g.\ $\varepsilon'_{N_1}\approx 10^{-16}$.
Finally, we comment on the fact that $\delta_N$ vanishes in the
CP-invariant limit $x_N\to 0$, as it should be on account of Eq.\
(\ref{CPinv}).
Figures \ref{fig:5} and \ref{fig:6} give numerical estimates of CP
asymmetries in scenario II. The difference between this model and
scenario I is that the $\varepsilon'$-type effects may not be
negligible in the off-resonant region, as can be seen from Figs.\
\ref{fig:5}(a) and \ref{fig:6}. In particular, for values of the
parameter $x_N < 10^{-11}$ or $x_N > 1$, the individual
$\varepsilon'_{N_1}$- and $\varepsilon'_{N_2}$-type contributions may
prevail over the $\varepsilon$-type ones. Models with $x_N> 1$ have
been extensively discussed in the
literature.\cite{FY,MAL,CEV,epsilonprime} Numerical estimates for such
models are displayed in Fig.\ \ref{fig:6}. We first focus our
attention on the domain with $x_N<10^{-2}$. In Fig.\ \ref{fig:5}(a),
we observe that $\varepsilon'_{N_1}$ and $\varepsilon'_{N_2}$,
represented by the dotted lines, do not vanish in the CP-invariant
limit $x_N\to 0$, as opposed to $\varepsilon_{N_1}$. As a
consequence, the CP asymmetry $\delta_{N_1}$ in Eq.\ (\ref{deltaNi}),
in which both $\varepsilon_{N_1}$- and $\varepsilon'_{N_1}$-type terms
are considered within our formalism, does not vanish either. The
reason is that the physical CP-violating parameter in this highly
degenerate mass regime for $N_1$ and $N_2$ is the observable
$\delta_N$ defined in Eq.\ (\ref{deltaN}). In fact, $\delta_N$ and
$\varepsilon_{N_1}$ share the common feature that both tend
consistently to zero as $x_N\to 0$. This fact must be considered to
be one of the successes of the resummation approach. Again, CP
violation is resonantly amplified, when the condition in Eq.\
(\ref{CPcond}) is satisfied, as can be seen from Fig.\ \ref{fig:5}(b).
Finally, we must remark that $-\delta_N$ flips sign and eventually
becomes negative for $x_N\gg 1$, as can be seen from Fig.\
\ref{fig:6}. However, in this kinematic range, we must consider a
further refinement of the definition of $\delta_N$. The effect of
the different dissipative Boltzmann factors multiplying the decay
rates of the heavy Majorana neutrinos $N_1$ and $N_2$ must also be
included in $\delta_N$. These phenomena will be taken into account in
Section 8.
\setcounter{section}{7}
\setcounter{equation}{0}
\section{Unitarity and CPT invariance in the resummation approach}\label{sec:7}
\noindent
It is interesting to see how the resummation approach preserves CPT
invariance and unitarity.\cite{ANPB} An immediate consequence of
unitarity and CPT symmetry is that CP violation in the $L$-violating
scattering process $L\Phi^\dagger \to L^C\Phi$ is absent to order
$h^6_{li}$.\cite{RCV,BP} We will concentrate on the resonant part of
the amplitude, as it is the dominant one.
Our aim is to show that to `one loop',
\begin{equation}
\label{DCP}
\Delta_{\rm CP}\ =\ \int d{\rm LIPS}\
|{\cal T}^{\rm res}_{L\Phi^\dagger \to L^C\Phi}|^2\
-\ \int d{\rm LIPS}\
|\overline{\cal T}^{\rm res}_{L^C\Phi \to L\Phi^\dagger}|^2\ =\ 0,
\end{equation}
where LIPS stands for the two-body Lorentz-invariant phase space. For
simplicity, we omit external spinors and absorb irrelevant constants
in the definition of the Yukawa-coupling matrix $h = (h_{l1},h_{l2})$.
Using matrix notation, the resummed transition amplitudes are written
\begin{equation}
\label{Tres}
{\cal T}^{\rm res}_{L\Phi^\dagger \to L^C\Phi}\ =\
hP_R\, S(\not\! p)\, P_R h^T\, ,\qquad
\overline{\cal T}^{\rm res}_{L^C\Phi \to L\Phi^\dagger}\ =\
h^*P_L\, \bar{S}(\not\! p)\,P_L h^\dagger\, ,
\end{equation}
with $\bar{S}(\not\!\! p) = S^T(\not\!\! p)$ being the
CP/T-conjugate propagator matrix of $S (\not\!\! p)$. In writing the
CP/T-conjugate amplitude $\overline{\cal T}^{\rm res}_{L^C\Phi \to
L\Phi^\dagger}$, we have employed the identities (\ref{idCP}) for
spinor objects and made use of the rotational symmetry of the
amplitude. The latter has the effect of reversing the spatial
components of the four momenta. We also neglect possible P-odd spin
correlations involving external leptons since they will be averaged
away when forming the matrix element squared.
We start the proof by noticing that as a consequence of CPT
invariance,
\begin{equation}
\label{CPT}
|{\cal T}^{\rm res}_{L\Phi^\dagger \to L\Phi^\dagger}|^2\ =\
|{\cal T}^{\rm res}_{L^C\Phi \to L^C \Phi}|^2\, .
\end{equation}
This equality is indeed valid, since
\begin{equation}
|hP_R\, S(\not\! p)\, P_Lh^\dagger|\ =\ |h^*P_L\, S^T(\not\! p)\, P_R h^T|\
=\ |h^*P_L\, \bar{S}(\not\! p)\, P_R h^T|\ .
\end{equation}
Unitarity of the theory prescribes the following relation governing
the resummed propagators:
\begin{equation}
\label{OT}
S^{-1}(\not\! p)\ -\ S^{-1\dagger}(\not\! p) \ =\ -i \int
d{\rm LIPS} \not\! p (h^T h^* P_L\, +\, h^\dagger h P_R)\ .
\end{equation}
This last relation is also known as the optical theorem. Based on the
optical theorem, we can prove the equality
\begin{equation}
\label{Uni}
\int d{\rm LIPS}\
|{\cal T}^{\rm res}_{L\Phi^\dagger \to L\Phi^\dagger, L^C\Phi}|^2\
=\ \int d{\rm LIPS}\
|\overline{\cal T}^{\rm res}_{L^C\Phi \to L\Phi^\dagger,
L^C\Phi}|^2\ .
\end{equation}
Indeed, using Eq.\ (\ref{OT}), we find
\begin{eqnarray}
\label{Tres1}
\int d{\rm LIPS}\
|{\cal T}^{\rm res}_{L\Phi^\dagger \to L\Phi^\dagger, L^C\Phi}|^2
&& \nonumber\\
&&\hspace{-4cm}= \int d{\rm LIPS}\
hP_R\, S(\not\! p)\, \not\! p (h^T h^* P_L\, +\, h^\dagger h P_R)\,
S^\dagger (\not\! p)\, P_L h^\dagger\nonumber\\
&&\hspace{-4cm}= -i\, hP_R\, [S(\not\! p)\, -\, S^\dagger (\not\! p)]\,
P_L h^\dagger\ =\ 2\, hP_R\, {\rm Im} S(\not\! p)\, P_Lh^\dagger\, ,
\end{eqnarray}
and for the CP-conjugate total rate,
\begin{eqnarray}
\label{Tres2}
\int d{\rm LIPS}\
|\overline{\cal T}^{\rm res}_{L^C\Phi \to L\Phi^\dagger, L^C\Phi}|^2
& =& 2\, h^*P_L\, {\rm Im} \bar{S}(\not\! p)\, P_Rh^T\, \nonumber\\
&=& 2\, hP_R\, {\rm Im} S(\not\! p)\, P_Lh^\dagger\, .
\end{eqnarray}
As the RHSs of Eqs.\ (\ref{Tres1}) and (\ref{Tres2}) are equal, the
equality (\ref{Uni}) is obvious. Subtracting Eq.\ (\ref{CPT}) from
Eq.\ (\ref{Uni}), it is not difficult to show that $\Delta_{\rm CP}$
vanishes at the one-loop resummed level. We should remark that the
resummation approach\cite{ANPB} satisfies CPT and unitarity {\em
exactly}, without recourse to any re-expansion of the resummed
propagator. If we also include resummed amplitudes subleading in the
Yukawa couplings, then residual CP-violating terms that are formally
of order $h^8_{li}$ and higher occur in $\Delta_{\rm CP}$. These
terms result from the interference of two resummed amplitudes
containing one-loop vertex graphs. Because of unitarity, however, the
residual CP-violating terms of order $h^8_{li}$ and $h^{10}_{li}$ will
cancel at two loops together with respective CP-violating terms coming
from one-loop $2\to 4$ scatterings, and so on.
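The chain of equalities in Eqs.\ (\ref{Tres1}) and (\ref{Tres2}) can
be verified numerically in a scalar analogue, in which a Hermitian,
symmetric matrix $B$ plays the role of the phase-space-integrated
absorptive kernel of the optical theorem (\ref{OT}); all inputs are
assumed toy values:

\begin{verbatim}
import numpy as np

h = np.array([1.0e-2, 1.0e-2 * (1 + 1j)])          # Yukawa row vector
B = np.outer(h.conj(), h) + np.outer(h, h.conj())  # Hermitian, symmetric
D = np.diag([1.0, 1.21])                           # p^2 - M_i^2 part
Sinv = D - 0.5j * B            # satisfies S^-1 - S^-1+ = -i B, Eq. (OT)
S = np.linalg.inv(Sinv)        # symmetric resummed propagator

rate  = (h @ S @ B @ S.conj().T @ h.conj()).real        # Eq. (Tres1)
rateC = (h.conj() @ S.T @ B @ S.conj() @ h).real        # Eq. (Tres2)
opt   = (-1j * h @ (S - S.conj().T) @ h.conj()).real    # optical theorem

assert np.allclose([rate, rateC], opt)
print("Delta_CP =", rate - rateC)    # vanishes to machine precision
\end{verbatim}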
In the approach under consideration,\cite{ANPB} the physical
transition amplitude is obtained by sandwiching the resummed
propagators between matrix elements related to initial and final
states of the resonant process. Therefore, diagonalization of $S
(\not\! p)$ is no longer necessary, thereby avoiding possible
singularities emanating from non-diagonalizable (Jordan-like)
effective Hamiltonians [or equivalently $S^{-1} (\not\!\!
p)$].\cite{ANPB} In fact, such effective Hamiltonians represent
situations in which the CP-violating mixing between the two unstable
particles reaches its maximum and physical CP asymmetries are
therefore large. In such a case, the complex mass eigenvalues of the
effective Hamiltonian are exactly equal.
To see this point in more detail, let us consider the following
effective Hamiltonian for the $N_1N_2$ system:
\begin{equation}
\label{effHam}
{\cal H}(\not\! p)\ =\ \left[ \begin{array}{cc}
m_1 - \widehat{\Sigma}_{11}(\not\! p) & -\widehat{\Sigma}_{12}(\not\! p)\\
-\widehat{\Sigma}_{21}(\not\! p) & m_2 - \widehat{\Sigma}_{22}(\not\! p)
\end{array} \right]\ \approx\
\left[ \begin{array}{cc}
m_N + a - i|b| & -ib\\
-ib^* & m_N - a - i|b| \end{array} \right],
\end{equation}
in the approximation $\not\!\! p \to m_N \approx m_1\approx m_2$. In
Eq.\ (\ref{effHam}), the parameters $a$ and $b$ are real and complex,
respectively, and $m_1 = m_N+a$, $m_2 = m_N-a$. The complex parameter
$b$ represents the absorptive part of the one-loop neutrino
transitions $N_i\to N_j$. Unitarity requires that the determinant of
the absorptive part of ${\cal H}(\not\!\! p)$ be non-negative. For
the effective Hamiltonian (\ref{effHam}), the corresponding
determinant is zero. One-generation models naturally lead to such an
absorptive effective Hamiltonian. If $a = |b|$, the two complex mass
eigenvalues of ${\cal H} (\not\!\! p)$ are exactly degenerate and
equal to $m_N-i|b|$. Then, the effective Hamiltonian cannot be
diagonalized via a similarity transformation in this limit, i.e.\ the
respective diagonalization matrices become singular.\cite{ANPB}
An interesting question one may raise in this context is the
following. Since models with non-diagonalizable effective Hamiltonians
lead to an exact equality between their complex mass eigenvalues, how
then can this fact be reconciled with the CP-invariance condition
(\ref{CPinv})? According to the condition (\ref{CPinv}), any effect
of CP violation must vanish identically, let alone be resonantly large!
To resolve this paradox, one should notice that in the presence of a
large particle mixing, the mass eigenstates of $S^{-1} (\not\!\! p)$
are generally non-unitary among themselves, whereas the
OS-renormalized mass eigenstates\cite{KP} form a well-defined unitary
basis (or any other renormalization scheme that preserves
orthonormality of the Hilbert space), upon which perturbation theory
can be formulated order by order. Therefore, the field-theoretic OS
renormalized masses are those that enter the condition of CP
invariance given by Eq.\ (\ref{CPinv}). Consequently, if the two
complex mass eigenvalues of the effective Hamiltonian are equal, this
does not necessarily entail an equality between their respective OS
renormalized masses, and therefore absence of CP violation as
well.\cite{ANPB}
\setcounter{section}{8}
\setcounter{equation}{0}
\section{Boltzmann equations}\label{sec:8}
\noindent
The thermodynamic evolution of the system in the radiation-dominated
era of the Universe may be described by a set of coupled Boltzmann
equations (BE's).\cite{KW,EWK&SW,KT,HT} These equations determine the
time evolution of the lepton-number asymmetry which will be converted
into the observed BAU by sphalerons. We shall solve the BE's
numerically and present results for the expected BAU within the two
different democratic-type scenarios I and II discussed in Section 6.
Finally, we will give estimates of the finite-temperature effects, and
discuss their impact on resonant CP violation through mixing.
Before solving the BE's numerically, it is instructive to discuss
first the out-of-equilibrium constraints on heavy neutrino decays.
Sakharov's third necessary condition requires that the decay rate
of any $L$-violating process be smaller than the expansion rate of the
Universe. The dominant $L$-violating process is the decay of the
heavy Majorana neutrinos themselves, $\Gamma_{N_i}$ (cf.\ Eq.\
(\ref{GammaN})). To a good approximation, we have the
inequality
\begin{equation}
\label{Sakh3}
\Gamma_{N_i} (T=m_{N_i})\ \stackrel{\displaystyle <}{\sim}\
2\, K\, H(T=m_{N_i})\, ,
\end{equation}
where $K\approx 1$--$1000$ is a factor quantifying the deviation of
the decay rates from the expansion rate of the Universe, and $H(T)$ is
the Hubble parameter
\begin{equation}
\label{Hubble}
H(T)\ =\ 1.73\, g_*^{1/2}\, \frac{T^2}{M_{\rm Planck}}\ ,
\end{equation}
with $M_{\rm Planck} = 1.2\times 10^{19}$ GeV and $g_*\approx
100$--$400$ being the number of active degrees of freedom in usual
extensions of the SM. Then, the out-of-equilibrium constraint in Eq.\
(\ref{Sakh3}) translates into the bound
\begin{equation}
\label{hli_bound}
|h_{li}|^2\ \stackrel{\displaystyle <}{\sim}\
7.2\,K \times 10^{-14}\, \Big( \frac{m_{N_i}}{1\ \mbox{TeV}}\Big)\, .
\end{equation}
Although not mandatory, this very last constraint may be applied to
all Yukawa couplings.
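The numerical coefficient in Eq.\ (\ref{hli_bound}) follows directly
from Eqs.\ (\ref{GammaN}) and (\ref{Hubble}). A minimal sketch,
assuming $K=1$ and $g_*=100$:

\begin{verbatim}
import numpy as np

M_Planck = 1.2e19                  # GeV
gstar, K = 100.0, 1.0

def hubble(T):                     # Eq. (Hubble)
    return 1.73 * np.sqrt(gstar) * T**2 / M_Planck

for mN in [1.0e3, 1.0e4]:          # 1 TeV and 10 TeV
    h2max = 16 * np.pi * K * hubble(mN) / mN   # from Gamma < 2 K H
    print(f"m_N = {mN:.0e} GeV:  |h|^2 < {h2max:.1e}")
\end{verbatim}

For $m_{N_i}=1$ TeV and $K=1$, this reproduces the coefficient
$7.2\times 10^{-14}$ quoted in Eq.\ (\ref{hli_bound}).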
As we have discussed in Section 2 (see also Eq.\ (\ref{BLrel})), above
the electroweak phase transition the $B+L$-sphaleron interactions
convert approximately one-third of the lepton-to-entropy density ratio
$Y_L = n_L/s$ into a baryon-to-entropy density ratio $Y_B = n_B/s$,
i.e.\ \cite{BS,HT}
\begin{equation}
\label{YB_YL}
Y_B\approx -\, \frac{1}{3}\, Y_L\, \approx -\,\frac{1}{3K}\
\frac{\delta_{N_i}}{g_*}\ .
\end{equation}
The last approximate equality represents the asymptotic solution of
the relevant BE's.\cite{MAL,APRD} {}From Eq.\ (\ref{YB_YL}), we see
that $Y_B$ can be in the observed ball park, i.e.\ $Y_B\approx
10^{-10}$, if $|\delta_{N_i}|/K$ is of order $10^{-7}$--$10^{-6}$.
Clearly, CP asymmetries of order unity allow for very large values of
$K$. As a consequence, the thermal plasma can then be rather dense
and the conditions of kinetic equilibrium in BE's can comfortably be
satisfied even within the minimal leptogenesis scenario under study.
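As a one-line numerical estimate of Eq.\ (\ref{YB_YL}), assuming
$g_*=100$:

\begin{verbatim}
gstar = 100.0
for dK in [1.0e-7, 1.0e-6]:        # |delta_N|/K
    print(f"|delta_N|/K = {dK:.0e}  ->  |Y_B| ~ {dK/(3*gstar):.1e}")
\end{verbatim}

which is consistent with the estimate quoted above.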
We now turn to the discussion of BE's. The lepton asymmetry for a
system with two heavy Majorana neutrinos is determined by the coupled
system of BE's\cite{KT,MAL}
\begin{eqnarray}
\label{BENi}
\frac{dn_{N_i}}{dt}\, +\, 3Hn_{N_i} &=&
-\, \Big( \frac{n_{N_i}}{n^{eq}_{N_i}}\, -\, 1\Big)\, \gamma_{N_i}\\
\label{BElept}
\frac{dn_L}{dt}\, +\, 3Hn_{L} &=& \sum\limits_{i=1}^2\, \Big[\, \delta_{N_i}
\, \Big( \frac{n_{N_i}}{n^{eq}_{N_i}}\, -\, 1\Big)\, -\, \frac{n_L}{2n^{eq}_l}
\, \Big]\, \gamma_{N_i}\ -\
\frac{n_L}{n^{eq}_l}\, \gamma_{\sigma}\, ,
\end{eqnarray}
where $n_{N_i}$, $n_L=n_l-n_{\bar{l}}$ are the densities of the number
of $N_i$ and the lepton-number asymmetry, respectively, and
$n^{eq}_{N_i}$ and $n^{eq}_l$ are their values in thermal equilibrium.
The Hubble parameter $H=(dR/dt)/R$ determines the expansion rate of
the Universe and also depends on the temperature $T$, through the
relation in Eq.\ (\ref{Hubble}). In Eqs.\ (\ref{BENi}) and
(\ref{BElept}), $\gamma_{N_i}$ and $\gamma_{\sigma}$ are the decay and
scattering collision terms, respectively:
\begin{eqnarray}
\label{gNi}
\gamma_{N_i}& =& n^{eq}_{N_i}\, \frac{K_1 (m_{N_i}/T)}{K_2(m_{N_i}/T)}\,
\Gamma_{N_i}\, ,\\
\label{gsigma}
\gamma_\sigma &=& \frac{T}{8\pi^4}\, \int_{s_{\rm thr}}^\infty\,
ds\, s^{3/2}\, K_1 (\sqrt{s}/T)\, \sigma'(s)\, .
\end{eqnarray}
Here, $s_{\rm thr}$ is the threshold of a generic process $a+b \to
c+d$, and
\begin{equation}
\label{sigmapr}
\sigma'(s)\ =\ \frac{1}{4}\ \theta(\sqrt{s} - m_a - m_b)\
\lambda\Big( 1,\ \frac{m^2_a}{s},\ \frac{m^2_b}{s}\,\Big)\
\hat{\sigma} (s)
\end{equation}
with $\lambda (x,y,z) = (x-y-z)^2 - 4yz$. In Eq.\ (\ref{gNi}),
$K_1(z)$ and $K_2(z)$ are the modified Bessel functions defined in
Ref.\cite{MA&IAS} The cross section $\hat{\sigma}(s)$ mainly comprises
the scatterings $L^C\Phi\to L\Phi^\dagger$ and its CP-conjugate
process $L\Phi^\dagger\to L^C\Phi$, and is evaluated at $T=0$ by
subtracting all those real intermediate contributions that have
already been taken into account in the direct and inverse decays of
heavy Majorana neutrinos.\cite{EWK&SW} The collision term
$\gamma_{\sigma}$ acts as a CP-conserving depletion term, which is
formally of order $\gamma^2_{N_i}$ at $T\approx m_{N_i}$.\cite{KT}
There is also the $\Delta L =2$ reaction $\Phi\Phi \leftrightarrow
LL$, which is much weaker than the above scatterings, as long as the
out-of-equilibrium constraint on the Yukawa couplings in Eq.\
(\ref{hli_bound}) is imposed. Finally, there exist additional
contributions to the BE's,\cite{MAL} coming from processes such as
$N_i L \leftrightarrow Q_i \bar{t}_R$, $N_i Q_i \leftrightarrow L
t_R$. These contributions are quite strong at very high temperatures,
$T \gg m_{N_i}$, and lead to a decoherence phenomenon between the
heavy neutrinos $N_1$ and $N_2$. At the crucial leptogenesis epoch,
when $T\approx m_{N_i}$, the rates of the latter processes are
kinematically suppressed and smaller than the decay rates of the heavy
Majorana neutrinos.\cite{KT}
Several simplifying assumptions are involved in the BE's (\ref{BENi}) and
(\ref{BElept}). More details may be found in Ref.\cite{KT} First, we
have considered the Friedmann-Robertson-Walker model in the
non-relativistic limit. Second, we have adopted the Maxwell-Boltzmann
statistics, which is a good approximation in the absence of effects
that originate from Bose condensates or arise from a degeneracy of
many Fermi degrees of freedom. Third, we have assumed that the lepton
and Higgs weak isodoublets, $L$ and $\Phi$, are practically in thermal
equilibrium, and neglected high orders in $n_L/n^{eq}_l$ and
$\delta_{N_i}$. In this context, it has also been assumed that the
different particle species are in kinetic equilibrium, i.e.\ that the
particles may rapidly change their kinetic energy through elastic
scatterings but the processes responsible for a change of the number
of particles are out of equilibrium. These out-of-equilibrium
reactions are described by the BE's (\ref{BENi}) and (\ref{BElept}).
To solve these BE's numerically, it proves useful to make the following
change of variables:
\begin{equation}
x\ =\ \frac{m_{N_1}}{T}\ ,\qquad t\ =\ \frac{1}{2H(T)}\ =\
\frac{x^2}{2H(x=1)}\ .
\end{equation}
Such an ansatz is also valid for the radiation-dominated phase of the
Universe while baryogenesis takes place. Then, we define the
parameters
\begin{equation}
\label{Kparam}
K\ =\ \frac{K_1(x)}{K_2(x)}\, \frac{\Gamma_{N_1}}{H(x=1)}\ ,
\qquad \gamma\ =\ \frac{K_2(x)K_1(\xi x)}{K_1(x)K_2(\xi x)}\,
\frac{\Gamma_{N_2}}{\Gamma_{N_1}}\ ,
\end{equation}
with $\xi = m_{N_2}/m_{N_1}\ge 1$. In addition, we introduce the
quantities $Y_{N_i} = n_{N_i}/s$ and $Y_L = n_L/s$, where $s$ is the
entropy density. In an isentropically expanding Universe, the entropy
density has the time dependence $s(t)=\mbox{const.}\times R^{-3}(t)$
and may be related to the number density of photons, $n_\gamma$, as
$s=g_* n_\gamma$, where $g_*$ is given after Eq.\ (\ref{Hubble}).
Employing the above definitions and relations among the parameters, we
obtain the BE's in terms of the new quantities $Y_{N_1}$, $Y_{N_2}$
and $Y_L$:
\begin{eqnarray}
\label{BEYN1}
\frac{dY_{N_1}}{dx} &=& -\, (Y_{N_1} - Y^{eq}_{N_1}) Kx^2\, ,\\
\label{BEYN2}
\frac{dY_{N_2}}{dx} &=& -\, (Y_{N_2} - Y^{eq}_{N_2}) \gamma Kx^2\, ,\\
\label{BEYL}
\frac{dY_L}{dx} &=& \Big[\, (Y_{N_1}-Y^{eq}_{N_1})\delta_{N_1}\, +\,
(Y_{N_2} - Y^{eq}_{N_2} )\gamma\delta_{N_2}\, -\, \frac 12 g_* Y_L
(Y^{eq}_{N_1}+\gamma Y^{eq}_{N_2}) \nonumber\\
&&-\, g_* Y_L Y^{eq}_{N_1} \frac{\gamma_\sigma}{\gamma_{N_1}}\,
\Big]\, Kx^2\, .
\end{eqnarray}
The heavy-neutrino number-to-entropy densities in equilibrium
$Y^{eq}_{N_i}(x)$ are given by\cite{EWK&SW}
\begin{equation}
\label{YeqN}
Y^{eq}_{N_1}(x)\ =\ \frac{3}{8g_*}\, \int_{x}^\infty\,
dz\, z\, \sqrt{z^2-x^2}\, e^{-z}\ =\ \frac{3}{8g_*}\, x^2\, K_2(x)\, ,
\end{equation}
and $Y^{eq}_{N_2}(x)=Y^{eq}_{N_1}(\xi x)$. The differential equations
(\ref{BEYN1})--(\ref{BEYL}) are solved numerically, using the initial
conditions
\begin{equation}
\label{InBE}
Y_{N_1}(0)\ =\ Y_{N_2}(0)\ =\ Y^{eq}_{N_1}(0)\ =\ Y^{eq}_{N_2}(0)
\quad \mbox{and}\quad Y_L(0)=0\, .
\end{equation}
These initial conditions merely reflect the fact that our Universe
starts evolving from a lepton-symmetric state, in which the heavy
Majorana neutrinos are originally in thermal equilibrium. Here, we
should remark that the low-temperature limit of the numerical
predictions does not strongly depend on the initial conditions
(\ref{InBE}), if $L$-violating interactions are in thermal equilibrium
at $T\gg m_{N_i}$. The reason is that at very high temperatures, the
BE's (\ref{BEYN1})--(\ref{BEYL}) exhibit an attractive fixed
point, and any initial value for $Y_{N_1}$, $Y_{N_2}$ and $Y_L$ is
rapidly driven to the thermal-equilibrium values given by Eq.\
(\ref{InBE}).\cite{EAP} After the evolution of the Universe to
temperatures much below $m_{N_1}$, a net lepton asymmetry has been
created. This lepton asymmetry will then be converted into the BAU via
the sphalerons. During a first order electroweak phase transition,
the produced excess in $L$ is also encoded as an excess in $B$, which
is given by Eq.\ (\ref{BLrel}).\cite{BS,HT} The observed BAU is
$Y^{obs}_B = (0.6 - 1)\times 10^{-10}$,\cite{KT} which corresponds to
an excess of leptons $-Y^{obs}_L \approx 10^{-9} - 10^{-10}$. In the
latter estimate, we have included the possibility of generating the
BAU via an individual lepton asymmetry.\cite{Dreiner/Ross}
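The system (\ref{BEYN1})--(\ref{BEYL}) is straightforward to integrate
with standard ODE solvers. The following minimal Python sketch makes
several simplifying assumptions, spelled out in the comments: the CP
asymmetries $\delta_{N_i}$ are taken constant,
$\Gamma_{N_2}\approx\Gamma_{N_1}$, the $\gamma_\sigma$ depletion term
is dropped, and all numerical inputs are illustrative:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kv      # modified Bessel functions K_nu

gstar, xi = 100.0, 1.0 + 1e-9     # xi = m_N2/m_N1 (illustrative)
GoverH = 10.0                     # Gamma_N1 / H(x=1) (illustrative)
d1, d2 = 1e-6, 1e-6               # delta_N1, delta_N2, taken constant

Yeq = lambda z: 3.0 / (8 * gstar) * z**2 * kv(2, z)   # Eq. (YeqN)

def rhs(x, Y):
    YN1, YN2, YL = Y
    K = kv(1, x) / kv(2, x) * GoverH                  # Eq. (Kparam)
    # gamma of Eq. (Kparam), taking Gamma_N2 ~ Gamma_N1:
    g = kv(2, x) * kv(1, xi * x) / (kv(1, x) * kv(2, xi * x))
    e1, e2 = Yeq(x), Yeq(xi * x)
    dYN1 = -(YN1 - e1) * K * x**2                     # Eq. (BEYN1)
    dYN2 = -(YN2 - e2) * g * K * x**2                 # Eq. (BEYN2)
    dYL = ((YN1 - e1) * d1 + (YN2 - e2) * g * d2      # Eq. (BEYL),
           - 0.5 * gstar * YL * (e1 + g * e2)) * K * x**2  # no gamma_sigma
    return [dYN1, dYN2, dYL]

x0 = 0.1
sol = solve_ivp(rhs, [x0, 30.0], [Yeq(x0), Yeq(xi * x0), 0.0],
                method="LSODA", rtol=1e-8, atol=1e-20)
print("Y_L(x=30) ~", sol.y[2, -1])
\end{verbatim}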
\begin{figure}[ht]
\leavevmode
\begin{center}
\epsfxsize=11.cm
\epsffile[0 0 539 652]{baufig8.eps}
\end{center}
\fcaption{Lepton asymmetries for selected heavy Majorana neutrino
scenarios.}\label{fig:7}
\end{figure}
Figure \ref{fig:7} shows the dependence of $Y_L(x)$ on $x=m_{N_1}/T$
for two representative scenarios defined in Eq.\ (\ref{scenario}) for
different values of $x_N = m_{N_2}/m_{N_1} - 1$.\cite{APRD} The
observed range for $Y_L$, $Y^{obs}_L$, is indicated with two confining
horizontal dotted lines. In scenario I (Fig.\ \ref{fig:7}(a)), a
heavy-neutrino mass splitting $x_N$ of order $10^{-9}$--$10^{-8}$ is
sufficient to account for the BAU. For comparison, it is worth
mentioning that the degree of mass degeneracy between $K_L$ and $K_S$
is of order $10^{-15}$, which is by far smaller than the one
considered here. We find that the $\varepsilon$-type CP violation is
dominant, whereas $\varepsilon'$-type effects are extremely suppressed.
Numerical estimates for the second scenario are displayed in Fig.\
\ref{fig:7}(b). This scenario is closer to the traditional one
considered in Ref.\cite{FY} Here, it is not necessary to have a high
degree of degeneracy for $N_1$ and $N_2$ to get sufficient CP
violation for the BAU. In this case, both $\varepsilon$- and
$\varepsilon'$-type mechanisms of CP violation are equally important.
Therefore, the main consequence of $\varepsilon$-type CP violation is
that the leptogenesis scale may be as low as 1 TeV, even for models
with universal Yukawa couplings.\cite{APRD}
In the scenario of leptogenesis induced by mixing of heavy Majorana
neutrinos, one may have to worry about thermal effects that could
affect the resonant condition of CP violation in Eq.\ (\ref{CPcond}). For
instance, there may be broadening effects at high temperatures due to
collisions among particles. Such effects will contribute terms of
order $h^4_{li}$ to the $N_i$ widths and are small in
general.\cite{EWK&SW,Roulet} On the other hand, finite temperature
effects on the $T=0$ masses of particles may be significant. Because
of the SM gauge interactions, the leptons and the Higgs fields receive
appreciable thermal masses,\cite{HAW,MEC,CKO} i.e.
\begin{eqnarray}
\label{thermal}
\frac{m^2_L(T)}{T^2} &=& \frac{1}{32}\, (3g^2\, +\, g'^2)\ \approx\
0.044\, ,
\end{eqnarray}
where $g$ and $g'$ are the SU(2)$_L$ and U(1)$_Y$ gauge couplings at
the running scale $M_Z$. The isosinglet heavy neutrinos also acquire
thermal masses through Yukawa interactions \cite{HAW}, i.e.
\begin{equation}
\label{mN(T)}
\frac{m^2_{N_i}(T)\, -\, m^2_{N_i}(0)}{T^2}\ =\ \frac{1}{16}\, |h_{li}|^2\, .
\end{equation}
Such a $T$-dependent mass shift is small and comparable to the $N_i$
widths at $T\approx m_{N_i}$. Therefore, it is easy to see that the
condition for resonant CP violation through mixing in Eq.\
(\ref{CPcond}) is qualitatively satisfied. Finally, the Higgs field
also receives appreciable thermal contributions. The authors of
Ref.\cite{CKO} have computed the one-loop Higgs thermal mass, and
found that $M_\Phi (T)/T \stackrel{\displaystyle <}{\sim} 0.6$ for
values of the Higgs-boson mass favoured by LEP2, i.e.\ $M_H < 200$ GeV.
In this range of Higgs masses, the thermal widths $\Gamma_{N_i}(T)$
will be reduced with respect to $\Gamma_{N_i}(0)$ by a factor of 2 or
3 due to sizeable phase-space corrections. Nevertheless, the
influence on the resonant phenomenon of CP violation through mixing is
not dramatic when the latter effects are included, and therefore large
leptonic CP asymmetries are still conceivable.
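For definiteness, the thermal-mass estimates (\ref{thermal}) and
(\ref{mN(T)}) may be evaluated with representative inputs; the gauge
couplings below are assumed values at the scale $M_Z$:

\begin{verbatim}
import numpy as np

g, gp = 0.65, 0.35                      # SU(2)_L and U(1)_Y couplings
print(f"m_L(T)/T ~ {np.sqrt((3*g**2 + gp**2)/32):.2f}")   # Eq. (thermal)

h2 = 2.0e-4                             # |h_l2|^2 of scenario II
print(f"thermal N shift/T ~ {np.sqrt(h2/16):.1e}")        # Eq. (mN(T))
\end{verbatim}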
\newpage
\setcounter{section}{9}
\setcounter{equation}{0}
\section{Low-energy phenomenology of heavy Majorana neutrinos}\label{sec:9}
\noindent
Whether heavy Majorana neutrinos can lead to interesting phenomenology
at collider and lower energies is an issue that strongly depends on
the out-of-equilibrium constraint given by Eq.\ (\ref{hli_bound}). If
this constraint is applied to all lepton families, heavy Majorana
neutrinos have very little impact on collider phenomenology. However,
this last statement is rather model dependent. One can imagine,
for example, a scenario in which $\Delta L_e$-violating operators
exist and are out-of-equilibrium, and the $\mu$ and $\tau$ sectors do
not communicate any interaction to the electron sector, i.e.\ $\Delta
(L_e - L_\mu) = \Delta (L_e - L_\tau) = 0$. Since sphalerons conserve
the individual quantum number $B/3 - L_e$ (see also Section 2), the
observed baryonic asymmetry can be preserved in an excess of $L_e$,
independently of whether $\Delta L_\mu$- and/or $\Delta
L_\tau$-non-conserving operators are in thermal equilibrium or not. As
we will discuss below, such scenarios with strong mixing in the muon
and tau sectors only can give rise to a variety of new-physics
phenomena at a strength that can be probed in laboratory experiments.
\subsection{Lepton-flavour- and/or number-violating processes}
\noindent
Heavy Majorana neutrinos with masses in the range 0.2 -- 1 TeV may be
produced directly at high-energy $ee$,\cite{PRODee} $ep$,\cite{PRODep}
and $pp$ colliders,\cite{PRODpp} whose subsequent decays can give rise
to distinct like-sign dilepton signals. If heavy Majorana neutrinos
are not accessible at high-energy colliders, they can still induce
lepton-flavour-violating decays of the $Z$ boson,\cite{KPS_Z,BSVMV}
the Higgs particle,\cite{APhiggs} and the $\tau$ and $\mu$
leptons.\cite{IP,IP2} As we will see, non-decoupling quantum effects
due to potentially large SU(2)$_L$-breaking masses play a key role in
these flavour-changing-neutral-current (FCNC) phenomena.\cite{IP}
Heavy Majorana neutrinos may cause breaking of universality in
leptonic diagonal $Z$-boson\cite{BKPS} and $\pi$ decays\cite{KP} or
influence\cite{SB&AS} the size of the electroweak oblique parameters
$S$, $T$ and $U$.\cite{STU} In fact, there exist many observables
summarized in Ref.\cite{LL} to which heavy Majorana neutrinos may have
sizeable contributions. These observables include $\tau$-polarization
asymmetries, neutrino-counting experiments at the CERN Large Electron
Positron Collider (LEP1) or at the Stanford Linear Accelerator (SLC),
etc.
In the following we shall show that high SU(2)$_L$-breaking masses in
a class of neutrino models can lead to large FCNC effects. Because
these effects are not correlated with light neutrino masses, one can
overcome the see-saw suppression relations that usually accompany such
new-physics phenomena.\cite{Cheng/Li} Let us consider a two-generation
model of this kind. The model is similar to the one discussed in
Section 3. It has two isosinglet neutrinos $\nu'_R$ and $S'_L$. In
the weak basis $( (\nu_{\mu L})^C,\ (\nu_{\tau L})^C,\ (S'_L)^C,\
\nu'_R )$ the neutrino mass matrix then takes the form
\begin{equation}
\label{Mnmatr}
{\cal M}^\nu \ =\ \left(
\begin{array}{cccc}
0 & 0 & 0 & m_1\\
0 & 0 & 0 & m_2\\
0 & 0 & 0 & M\\
m_1 & m_2 & M & \mu
\end{array} \right).
\end{equation}
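It is instructive to record the characteristic polynomial of this
matrix (a short computation we add for the reader's convenience):
\[
\det\left({\cal M}^\nu-\lambda\,{\bf 1}_4\right)\ =\
\lambda^2\left[\,\lambda^2\ -\ \mu\,\lambda\ -\
\left(M^2+m_1^2+m_2^2\right)\right] ,
\]
so that two eigenvalues vanish exactly, while the two non-vanishing
ones obey $\lambda_+ +\lambda_-=\mu$ and
$\lambda_+\lambda_-=-(M^2+m_1^2+m_2^2)$.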
Diagonalization of ${\cal M}^\nu$ yields two zero eigenvalues, which
would correspond to massless $\mu$ and $\tau$ neutrinos. If $\mu \neq
0$ and $\mu/M \ll 1$, they receive small radiative masses at higher
orders.\cite{ZPCAP} The other two states are very heavy, of order
$M\pm \mu$. In contrast to the usual seesaw scenario, the ratios
$m_1/M$ and $m_2/M$ remain fully unconstrained. Global analyses of
low-energy data\cite{LL} restrict their values to $m_1/M,\ m_2/M
\stackrel{\displaystyle <}{\sim} 0.1$. For the reader's convenience, we
define the parameters
\begin{equation}
\label{snul}
(s^{\nu_\mu}_L)^2\ \simeq\ \frac{m_1^2}{M^2}\ ,\qquad
(s^{\nu_\tau}_L)^2\ \simeq\ \frac{m_2^2}{M^2}\ .
\end{equation}
The newly introduced parameters quantify neutrino mixings between the
light and heavy Majorana states. They also parameterize the deviation
of the modified $Wl\nu_l$ coupling from the SM one. An extensive
discussion is given in Refs.\cite{LL,IP} Including renormalization
effects into the definition of light--heavy neutrino mixings,\cite{KP}
one may tolerate the following upper limits
\begin{equation}
\label{snubound}
(s^{\nu_\mu}_L)^2\ \ < \ \ 0.010, \quad
(s^{\nu_\tau}_L)^2\ \ < \ \ 0.035,\quad {\rm and}\quad
(s^{\nu_e}_L)^2 \ < \ \ 1\times 10^{-8}.
\end{equation}
The last limit comes from the requirement that only the electron
family is responsible for baryogenesis. Of course, electron and muon
families may interchange their roles in Eq.\ (\ref{snubound}).
\begin{figure}
\begin{center}
\begin{picture}(360,400)(0,0)
\SetWidth{0.8}
\ArrowLine(0,360)(20,360)\ArrowLine(60,360)(80,360)
\GCirc(40,360){20}{0.7}\Photon(40,340)(40,320){3}{2}
\Text(0,365)[b]{$l$}\Text(80,365)[b]{$l'$}
\Text(42,320)[l]{$Z,\gamma$}
\Text(100,360)[]{$=$}
\ArrowLine(120,360)(140,360)\ArrowLine(180,360)(200,360)
\ArrowLine(140,360)(180,360)\Text(160,367)[b]{$n_i$}
\DashArrowArc(160,360)(20,180,270){3}\PhotonArc(160,360)(20,270,360){2}{3}
\Text(145,340)[r]{$G^-$}\Text(175,340)[l]{$W^-$}
\Photon(160,340)(160,320){3}{2}
\Text(120,365)[b]{$l$}\Text(200,365)[b]{$l'$}
\Text(162,320)[l]{$Z,\gamma$}
\Text(160,300)[]{\bf (a)}
\ArrowLine(240,360)(260,360)\ArrowLine(300,360)(320,360)
\ArrowLine(260,360)(300,360)\Text(280,367)[b]{$n_i$}
\DashArrowArc(280,360)(20,270,360){3}\PhotonArc(280,360)(20,180,270){2}{3}
\Text(265,340)[r]{$W^-$}\Text(295,340)[l]{$G^-$}
\Photon(280,340)(280,320){3}{2}
\Text(240,365)[b]{$l$}\Text(320,365)[b]{$l'$}
\Text(282,320)[l]{$Z,\gamma$}
\Text(280,300)[]{\bf (b)}
\ArrowLine(0,260)(20,260)\ArrowLine(60,260)(80,260)
\ArrowLine(20,260)(60,260)\Text(40,267)[b]{$n_i$}
\PhotonArc(40,260)(20,180,270){2}{3}\PhotonArc(40,260)(20,270,360){2}{3}
\Text(25,240)[r]{$W^-$}\Text(55,240)[l]{$W^-$}
\Photon(40,240)(40,220){3}{2}
\Text(0,265)[b]{$l$}\Text(80,265)[b]{$l'$}
\Text(42,220)[l]{$Z,\gamma$}
\Text(40,200)[]{\bf (c)}
\ArrowLine(120,260)(140,260)\ArrowLine(180,260)(200,260)
\ArrowLine(140,260)(180,260)\Text(160,267)[b]{$n_i$}
\DashArrowArc(160,260)(20,180,270){3}\DashArrowArc(160,260)(20,270,360){2}
\Text(145,240)[r]{$G^-$}\Text(175,240)[l]{$G^-$}
\Photon(160,240)(160,220){3}{2}
\Text(120,265)[b]{$l$}\Text(200,265)[b]{$l'$}
\Text(162,220)[l]{$Z,\gamma$}
\Text(160,200)[]{\bf (d)}
\ArrowLine(240,260)(260,260)\ArrowLine(300,260)(320,260)
\Photon(260,260)(300,260){2}{4}\Text(280,267)[b]{$W^-$}
\ArrowLine(260,260)(280,240)\ArrowLine(280,240)(300,260)
\Text(270,240)[r]{$n_i$}\Text(295,240)[l]{$n_j$}
\Photon(280,240)(280,220){3}{2}
\Text(240,265)[b]{$l$}\Text(320,265)[b]{$l'$}
\Text(282,220)[l]{$Z$}
\Text(280,200)[]{\bf (e)}
\ArrowLine(0,160)(20,160)\ArrowLine(60,160)(80,160)
\DashArrowLine(20,160)(60,160){3}\Text(40,167)[b]{$G^-$}
\ArrowLine(20,160)(40,140)\ArrowLine(40,140)(60,160)
\Text(30,140)[r]{$n_i$}\Text(55,140)[l]{$n_j$}
\Photon(40,140)(40,120){3}{2}
\Text(0,165)[b]{$l$}\Text(80,165)[b]{$l'$}
\Text(42,120)[l]{$Z$}
\Text(40,100)[]{\bf (f)}
\ArrowLine(120,160)(135,160)\ArrowLine(135,160)(150,160)
\ArrowLine(150,160)(180,160)\ArrowLine(180,160)(200,160)
\Text(120,165)[b]{$l$}\Text(142,165)[b]{$l$}
\Text(165,165)[b]{$n_i$}\Text(200,165)[b]{$l'$}
\Photon(135,160)(135,120){3}{4}
\Text(137,120)[l]{$Z,\gamma$}
\PhotonArc(165,160)(15,180,360){2}{5}\Text(165,135)[]{$W^-$}
\Text(160,100)[]{\bf (g)}
\ArrowLine(240,160)(255,160)\ArrowLine(255,160)(270,160)
\ArrowLine(270,160)(300,160)\ArrowLine(300,160)(320,160)
\Text(240,165)[b]{$l$}\Text(262,165)[b]{$l$}
\Text(285,165)[b]{$n_i$}\Text(320,165)[b]{$l'$}
\Photon(255,160)(255,120){3}{4}
\Text(257,120)[l]{$Z,\gamma$}
\DashArrowArc(285,160)(15,180,360){3}\Text(285,135)[]{$G^-$}
\Text(280,100)[]{\bf (h)}
\ArrowLine(0,60)(20,60)\ArrowLine(20,60)(50,60)
\ArrowLine(50,60)(65,60)\ArrowLine(65,60)(80,60)
\Text(0,65)[b]{$l$}\Text(57,65)[b]{$l'$}
\Text(35,65)[b]{$n_i$}\Text(80,65)[b]{$l'$}
\Photon(65,60)(65,20){3}{4}
\Text(67,20)[l]{$Z,\gamma$}
\PhotonArc(35,60)(15,180,360){2}{5}\Text(35,35)[]{$W^-$}
\Text(40,0)[]{\bf (i)}
\ArrowLine(120,60)(140,60)\ArrowLine(140,60)(170,60)
\ArrowLine(170,60)(185,60)\ArrowLine(185,60)(200,60)
\Text(120,65)[b]{$l$}\Text(177,65)[b]{$l'$}
\Text(155,65)[b]{$n_i$}\Text(200,65)[b]{$l'$}
\Photon(185,60)(185,20){3}{4}
\Text(187,20)[l]{$Z,\gamma$}
\DashArrowArc(155,60)(15,180,360){3}\Text(155,35)[]{$G^-$}
\Text(160,0)[]{\bf (j)}
\end{picture}\\[0.7cm]
\end{center}
\fcaption{Feynman graphs pertaining to the decay $Z\to ll'$. Graphs
related to the effective $\gamma ll'$ vertex are also
displayed.}\label{fig:8}
\end{figure}
\begin{figure}
\begin{center}
\begin{picture}(360,300)(0,0)
\SetWidth{0.8}
\ArrowLine(0,360)(20,360)\ArrowLine(60,360)(80,360)
\GCirc(40,360){20}{0.7}\Photon(40,340)(40,320){3}{2}
\Text(0,365)[b]{$\tau$}\Text(80,365)[b]{$l'$}
\Text(45,333)[l]{$Z,\gamma$}
\ArrowLine(40,320)(0,320)\ArrowLine(80,320)(40,320)
\Text(0,317)[t]{$l_1$}\Text(80,317)[t]{$l_1$}
\Text(40,295)[]{\bf (a)}
\ArrowLine(120,360)(140,360)\ArrowLine(180,360)(200,360)
\ArrowLine(140,360)(180,360)\Text(160,368)[b]{$n_i$}
\ArrowLine(140,320)(120,320)\ArrowLine(200,320)(180,320)
\ArrowLine(180,320)(140,320)\Text(160,314)[t]{$n_j$}
\Photon(140,360)(140,320){2}{4}\Photon(180,360)(180,320){2}{4}
\Text(120,365)[b]{$\tau$}\Text(200,365)[b]{$l'$}
\Text(120,317)[t]{$l_1$}\Text(200,317)[t]{$l_2$}
\Text(137,340)[r]{$W^-$}\Text(185,340)[l]{$W^+$}
\Text(160,295)[]{\bf (b)}
\ArrowLine(240,360)(260,360)\ArrowLine(300,360)(320,360)
\ArrowLine(260,360)(300,360)\Text(280,368)[b]{$n_i$}
\ArrowLine(260,320)(240,320)\ArrowLine(320,320)(300,320)
\ArrowLine(300,320)(260,320)\Text(280,314)[t]{$n_j$}
\DashArrowLine(260,360)(260,320){3}\Photon(300,360)(300,320){2}{4}
\Text(240,365)[b]{$\tau$}\Text(320,365)[b]{$l'$}
\Text(240,317)[t]{$l_1$}\Text(320,317)[t]{$l_2$}
\Text(257,340)[r]{$G^-$}\Text(305,340)[l]{$W^+$}
\Text(280,295)[]{\bf (c)}
\ArrowLine(0,260)(20,260)\ArrowLine(60,260)(80,260)
\ArrowLine(20,260)(60,260)\Text(40,268)[b]{$n_i$}
\ArrowLine(20,220)(0,220)\ArrowLine(80,220)(60,220)
\ArrowLine(60,220)(20,220)\Text(40,214)[t]{$n_j$}
\Photon(20,260)(20,220){2}{4}\DashArrowLine(60,260)(60,220){3}
\Text(0,265)[b]{$\tau$}\Text(80,265)[b]{$l'$}
\Text(0,217)[t]{$l_1$}\Text(80,217)[t]{$l_2$}
\Text(17,240)[r]{$W^-$}\Text(65,240)[l]{$G^+$}
\Text(40,195)[]{\bf (d)}
\ArrowLine(120,260)(140,260)\ArrowLine(180,260)(200,260)
\ArrowLine(140,260)(180,260)\Text(160,268)[b]{$n_i$}
\ArrowLine(140,220)(120,220)\ArrowLine(200,220)(180,220)
\ArrowLine(180,220)(140,220)\Text(160,214)[t]{$n_j$}
\DashArrowLine(140,260)(140,220){3}\DashArrowLine(180,260)(180,220){3}
\Text(120,265)[b]{$\tau$}\Text(200,265)[b]{$l'$}
\Text(120,217)[t]{$l_1$}\Text(200,217)[t]{$l_2$}
\Text(137,240)[r]{$G^-$}\Text(185,240)[l]{$G^+$}
\Text(160,195)[]{\bf (e)}
\ArrowLine(240,260)(260,260)\ArrowLine(320,260)(300,260)
\Line(260,260)(300,260)\Text(280,268)[b]{$n_i$}\Text(280,260)[]{{\boldmath
$\times$}}
\ArrowLine(260,220)(240,220)\ArrowLine(300,220)(320,220)
\Line(300,220)(260,220)\Text(280,214)[t]{$n_j$}\Text(280,220)[]{{\boldmath
$\times$}}
\Photon(260,260)(260,220){2}{4}\Photon(300,260)(300,220){2}{4}
\Text(240,265)[b]{$\tau$}\Text(320,265)[b]{$l_2$}
\Text(240,217)[t]{$l'$}\Text(320,217)[t]{$l_1$}
\Text(257,240)[r]{$W^-$}\Text(305,240)[l]{$W^+$}
\Text(280,195)[]{\bf (f)}
\ArrowLine(0,160)(20,160)\ArrowLine(80,160)(60,160)
\Line(20,160)(60,160)\Text(40,168)[b]{$n_i$}\Text(40,160)[]{{\boldmath
$\times$}}
\ArrowLine(20,120)(0,120)\ArrowLine(60,120)(80,120)
\Line(60,120)(20,120)\Text(40,114)[t]{$n_j$}\Text(40,120)[]{{\boldmath
$\times$}}
\Photon(20,160)(20,120){2}{4}\DashArrowLine(60,160)(60,120){3}
\Text(0,165)[b]{$\tau$}\Text(80,165)[b]{$l_2$}
\Text(0,117)[t]{$l'$}\Text(80,117)[t]{$l_1$}
\Text(17,140)[r]{$W^-$}\Text(65,140)[l]{$G^+$}
\Text(40,95)[]{\bf (g)}
\ArrowLine(120,160)(140,160)\ArrowLine(200,160)(180,160)
\Line(140,160)(180,160)\Text(160,168)[b]{$n_i$}\Text(160,160)[]{{\boldmath
$\times$}}
\ArrowLine(140,120)(120,120)\ArrowLine(180,120)(200,120)
\Line(180,120)(140,120)\Text(160,114)[t]{$n_j$}\Text(160,120)[]{{\boldmath
$\times$}}
\DashArrowLine(140,160)(140,120){3}\Photon(180,160)(180,120){2}{4}
\Text(120,165)[b]{$\tau$}\Text(200,165)[b]{$l_2$}
\Text(120,117)[t]{$l'$}\Text(200,117)[t]{$l_1$}
\Text(137,140)[r]{$G^-$}\Text(185,140)[l]{$W^+$}
\Text(160,95)[]{\bf (h)}
\ArrowLine(240,160)(260,160)\ArrowLine(320,160)(300,160)
\Line(260,160)(300,160)\Text(280,168)[b]{$n_i$}\Text(280,160)[]{{\boldmath
$\times$}}
\ArrowLine(260,120)(240,120)\ArrowLine(300,120)(320,120)
\Line(300,120)(260,120)\Text(280,114)[t]{$n_j$}\Text(280,120)[]{{\boldmath
$\times$}}
\DashArrowLine(260,160)(260,120){3}\DashArrowLine(300,160)(300,120){3}
\Text(240,165)[b]{$\tau$}\Text(320,165)[b]{$l_2$}
\Text(240,117)[t]{$l'$}\Text(320,117)[t]{$l_1$}
\Text(257,140)[r]{$G^-$}\Text(305,140)[l]{$G^+$}
\Text(280,95)[]{\bf (i)}
\Text(40,60)[]{\boldmath $+\quad ( l_1 \leftrightarrow l$\bf')}
\end{picture}\\
\end{center}
\fcaption{Feynman graphs pertaining to the decay $\tau\to
l'l_1l_2$.}\label{fig:9}
\end{figure}
Heavy Majorana neutrinos can induce sizeable FCNC decays of the type
$Z\to \mu\tau$, $\tau\to \mu^- e^-e^+$ or $\tau\to \mu^-\mu^-\mu^+$
through quantum corrections presented in Figs.\ \ref{fig:8} and
\ref{fig:9}. Thus, the matrix element relevant for the generic decay
$\tau (p_\tau)\to l(p_l)l_1(p_1)\bar{l}_2(p_2)$ acquires contributions
from $\gamma$- and $Z$-mediated graphs as well as from graphs with box
diagrams. The respective transition elements are given by
\begin{eqnarray}
\label{Tgamma}
i{\cal T}_\gamma (\tau\to l l_1 \bar{l}_2) &=&
\frac{\alpha_w^2s_w^2}{4M_W^2}
\delta_{l_1 l_2} \bar{u}_{l_1}\gamma^\mu v_{l_2}\
\bar{u}_{l}\Big[ F^{\tau l}_\gamma (\gamma_\mu-\frac{q_\mu\not\! q}{q^2})
(1-\gamma_5)\nonumber\\
&& -iG_\gamma^{\tau l} \sigma_{\mu\nu}\frac{q^\nu}{q^2}
(m_\tau(1+\gamma_5)+m_{l}(1-\gamma_5))\Big]u_\tau\, ,\\
\label{TZ}
i{\cal T}_Z(\tau\to l l_1 \bar{l}_2)&=& \frac{\alpha_w^2}{16M_W^2}\
\delta_{l_1 l_2}
F_Z^{\tau l} \bar{u}_{l}\gamma_\mu(1-\gamma_5)u_\tau\
\bar{u}_{l_1}\gamma^\mu
(1-4s_w^2-\gamma_5)v_{l_2}\, ,\qquad\\
\label{Tbox}
i{\cal T}_{\rm box}(\tau \to l l_1 \bar{l}_2) &=&
\frac{\alpha_w^2}{16M_W^2}\ F_{\rm box}^{\tau ll_1l_2}\
\bar{u}_{l}\gamma_\mu(1-\gamma_5)u_\tau\ \bar{u}_{l_1}\gamma^\mu
(1-\gamma_5)v_{l_2}\ ,
\end{eqnarray}
where $q=p_1+p_2$, $s^2_w=1-M^2_W/M^2_Z$, and $F_\gamma^{\tau l}$,
$G_\gamma^{\tau l}$, $F_Z^{\tau l}$, $F_{\rm box}^{\tau l l_1 l_2}$
are certain composite form factors whose analytic form is given in
Ref.\cite{IP} Nevertheless, it is useful to examine the asymptotic
behaviour of the composite form factors for large values of
$\lambda_{N_1} = m^2_{N_1}/M^2_W$ and $\lambda_{N_2} =
m^2_{N_2}/M^2_W$ in the two-generation model under discussion. For
simplicity, we consider $\lambda_{N_1} \sim \lambda_{N_2} \sim
\lambda_N \gg 1$. In this limit, we find
\begin{eqnarray}
\label{CFgam}
F_\gamma^{\tau l} &\to & -\ \frac{1}{6}\; s_L^{\nu_\tau}s_L^{\nu_{l}}
\ln\lambda_N\, , \\
\label{CGZ}
G_\gamma^{\tau l} &\to & \frac{1}{2}\; s_L^{\nu_\tau}s_L^{\nu_{l}}\, ,\\
\label{CFZ}
F_Z^{\tau l} &\to & -\; \frac{3}{2}s_L^{\nu_\tau}s_L^{\nu_{l}}\ln\lambda_N\;
-\; \frac{1}{2} s_L^{\nu_\tau}s_L^{\nu_{l}}\sum_{i=1}^{3}\
(s_L^{\nu_i})^2\lambda_N\, ,\\
\label{CFbox}
F_{\rm box}^{\tau ll_1l_2} &\to &
-\; (s_L^{\nu_\tau}s_L^{\nu_{l}}\delta_{l_1l_2}
+s_L^{\nu_\tau}s_L^{\nu_{l_1}}\delta_{ll_2})\;
+\; \frac{1}{2}s_L^{\nu_\tau}s_L^{\nu_{l}}s_L^{\nu_{l_1}}
s_L^{\nu_{l_2}}\; \lambda_N\, .
\end{eqnarray}
If all light--heavy neutrino mixings $s_L^{\nu_l}$ are held fixed to a
constant, the one-loop functions $F_\gamma^{\tau l}$, $G_\gamma^{\tau
l}$, $F_Z^{\tau l}$, and $F_{\rm box}^{\tau ll_1l_2}$ in Eqs.\
(\ref{CFgam})--(\ref{CFbox}) do not vanish in the heavy neutrino
limit, and therefore seem to violate the decoupling theorem due to
Appelquist and Carazzone.\cite{AC} However, it is known that the
decoupling theorem does not apply to theories based on the spontaneous
or dynamical breaking of gauge symmetries. Since we hold
$s_L^{\nu_l}$ fixed but increase the heavy-neutrino masses, the
violation of the decoupling theorem originates from the
SU(2)$_L$-breaking Dirac mass terms $m_1$ and
$m_2$.\cite{ZPCAP,APhiggs} The expected decoupling of the isosinglet
mass $M$\cite{Senjan} can also be seen in Eqs.\
(\ref{CFgam})--(\ref{CFbox}). This time we keep the Dirac masses
$m_1$ and $m_2$ fixed and increase $M$. Taking Eq.\ (\ref{snul}) into
account, it is then easy to show that all composite form factors
vanish for large values of $M\approx m_{N_1},\ m_{N_2}$.
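The screening can be made fully explicit (a one-line substitution,
spelled out here for convenience): inserting Eq.\ (\ref{snul}) into the
$\lambda_N$-enhanced term of $F_Z^{\tau l}$ in Eq.\ (\ref{CFZ}), with
$\lambda_N\approx M^2/M^2_W$ and $\sum_i (s_L^{\nu_i})^2 =
(m^2_1+m^2_2)/M^2$ in the two-generation model, one finds
\[
-\,\frac{1}{2}\, s_L^{\nu_\tau}s_L^{\nu_l}\sum_i
(s_L^{\nu_i})^2\,\lambda_N\ \simeq\
-\,\frac{m_1 m_2\,(m^2_1+m^2_2)}{2\,M^2 M^2_W}\ \longrightarrow\ 0\, ,
\qquad M\to\infty\, ,
\]
at fixed Dirac masses $m_{1,2}$, and analogously for the
$\lambda_N$-term of $F_{\rm box}$.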
Consequently, there is a competitive loop effect of two scales, namely
Dirac versus Majorana scale. High Dirac masses lead to non-decoupling
loop effects while large Majorana masses give rise to a screening and
reduce the strength of the effective FCNC coupling. Nevertheless,
extensive analyses have shown that there exists a non-decoupling
`window' confined by the two mass scales, within which FCNC and other
new-physics phenomena turn out to be large enough to be probed in
next-round experiments.\cite{APhiggs,IP,IP2}
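To get a first feel for the size of such effects, consider the
following rough numerical estimate (an illustration added here, using
the maximal mixings of Eq.\ (\ref{snubound}) and the heavy-neutrino
mass $m_N\simeq 4$ TeV employed below): the $\lambda_N$-enhanced term
of Eq.\ (\ref{CFZ}) then gives
\[
|F_Z^{\tau\mu}|\ \approx\ \frac{1}{2}\, s_L^{\nu_\tau}s_L^{\nu_\mu}
\Big[\sum_i (s_L^{\nu_i})^2\Big]\,\lambda_N\ \approx\
\frac{1}{2}\,(0.187)(0.10)(0.045)\left(\frac{4000}{80.4}\right)^{\!2}
\ \approx\ 1\, ,
\]
i.e.\ a composite form factor of order unity, while the logarithmic
term contributes only about $0.2$.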
As examples, we calculate neutrinoless tau-lepton decays and
flavour-violating $Z$-boson decays. Taking the dominant
non-decoupling parts of the composite form factors into account, we
arrive at the simple expressions for the branching ratios
\begin{eqnarray}
\label{Btauemu}
B(\tau^-\to \mu^- e^- e^+) &\simeq &
\frac{\alpha_w^4}{24576\pi^3}\ \frac{m_\tau^4}{M_W^4}\
\frac{m_\tau}{\Gamma_\tau} \Big[ \ |F_{\rm box}^{\tau \mu ee }|^2\nonumber\\
&&\hspace{-1.5cm} +\
2 (1-2s^2_w){\rm Re} [F_Z^{\tau \mu }F_{\rm box}^{\tau \mu ee *}]\
+\ 8s^4_w|F_Z^{\tau \mu}|^2 \ \Big] \nonumber\\
&&\hspace{-1.5cm}\simeq\
\frac{\alpha_w^4}{98304\pi^3}\ \frac{m_\tau^4}{M_W^4}\
\frac{m_\tau}{\Gamma_\tau}
\frac{m^4_N}{M^4_W}\ (s_L^{\nu_\tau})^2 (s_L^{\nu_\mu})^2
\Big\{ (s_L^{\nu_{e}})^4\nonumber\\
&&\hspace{-1.5cm} +\
2(1-2s^2_w)(s_L^{\nu_e})^2 \sum_i (s_L^{\nu_i})^2
+ 8s^4_w \Big[\sum_i (s_L^{\nu_i})^2\Big]^2\ \Big\}\, ,\\
\label{Btaumumu}
B(\tau^-\rightarrow \mu^- \mu^- \mu^+) & \simeq &
\frac{\alpha_w^4}{24576\pi^3}\ \frac{m_\tau^4}{M_W^4}\
\frac{m_\tau}{\Gamma_\tau} \Big[ \frac{1}{2}|
F_{\rm box}^{\tau \mu\mu\mu }|^2\nonumber\\
&&\hspace{-1.5cm} +\
2(1-2s^2_w){\rm Re} [F_Z^{\tau \mu}F_{\rm box}^{\tau\mu\mu\mu *}]\
+\ 12 s^4_w|F_Z^{\tau \mu}|^2 \Big] \nonumber\\
&&\hspace{-1.5cm}\simeq\
\frac{\alpha_w^4}{98304\pi^3}\ \frac{m_\tau^4}{M_W^4}\
\frac{m_\tau}{\Gamma_\tau}
\frac{m^4_N}{M^4_W}\ (s_L^{\nu_\tau})^2 (s_L^{\nu_\mu})^2
\Big\{ \frac{1}{2}(s_L^{\nu_\mu})^4\nonumber\\
&&\hspace{-1.5cm} +\
2(1-2s^2_w)(s_L^{\nu_\mu})^2 \sum_i (s_L^{\nu_i})^2
+ 12s^4_w \Big[\sum_i (s_L^{\nu_i})^2\Big]^2\ \Big\}\, ,\quad\\
\label{BZtaumu}
B(Z\to \tau^- \mu^+ + \mu^- \tau^+) &=&
\frac{\alpha_w^3}{48\pi^2c_w^3}\frac{M_W}{\Gamma_Z}
|{\cal F}_Z^{\tau\mu}(M^2_Z)|^2 \nonumber\\
&&\hspace{-1.5cm}\simeq\
\frac{\alpha_w^3}{768\pi^2c_w^3}\frac{M_W}{\Gamma_Z}
\frac{m^4_N}{M^4_W} (s_L^{\nu_\mu})^2 (s_L^{\nu_\tau})^2
\Big[ \sum_i (s_L^{\nu_i})^2 \Big]^2\, ,\quad
\end{eqnarray}
where $\Gamma_\tau=2.16\times 10^{-12}$ GeV and $\Gamma_Z = 2.49$ GeV
are respectively the total widths of the $\tau$ lepton and the $Z$
boson known from experiment,\cite{PDG} and ${\cal F}_Z^{\tau\mu}(0) =
F_Z^{\tau\mu}/2$. The complete analytic results of the branching
ratios in Eqs.\ (\ref{Btauemu})--(\ref{BZtaumu}) are presented in
Ref.\cite{IP}
To give an estimate of the size of the FCNC effects, we take the
maximally allowed values $(s^{\nu_\tau}_L)^2=0.035$ and
$(s^{\nu_\mu}_L)^2=0.010$ given by Eq.\ (\ref{snubound}). These
light-heavy neutrino mixings lead to the branching ratios
\begin{eqnarray}
\label{BR}
B(\tau^- \to \mu^- \mu^- \mu^+) &\stackrel{\displaystyle <}{\sim}&
2\times 10^{-6}\, ,\qquad
B(\tau^- \to \mu^- e^- e^+)\ \ \stackrel{\displaystyle <}{\sim}\ \
1\times 10^{-6}\,,
\nonumber\\
B(Z\to \mu\tau) &\stackrel{\displaystyle <}{\sim}& 1.1\times 10^{-6}\ .
\end{eqnarray}
In Eq.\ (\ref{BR}), the upper limits are estimated by using the
heavy-neutrino mass $m_N\simeq 4$ TeV, which results from the
requirement that perturbative unitarity be valid. The theoretical
predictions of the branching ratios must be contrasted with the
present experimental upper limits on these decays\cite{PDG}
\begin{eqnarray}
B(\tau^- \to \mu^- \mu^- \mu^+),\ B(\tau^- \to \mu^- e^- e^+) &<&
1.4\times 10^{-5}\,,\nonumber\\
B(Z\to \mu\tau) &<& 1.3\times 10^{-5}\, ,
\end{eqnarray}
at the 90$\%$ confidence level. Future high-luminosity colliders and
higher-precision experiments are capable of improving the above upper
limits by one order of magnitude and so probe possible new-physics
effects due to heavy Majorana neutrinos.
\begin{figure}
\begin{center}
\begin{picture}(200,150)(0,0)
\SetWidth{0.8}
\ArrowLine(50,80)(80,80)\Text(50,90)[l]{$e^-$}
\Line(80,80)(110,80)\Text(95,80)[]{{\boldmath $\times$}}
\Text(95,68)[]{$N_i$}
\ArrowLine(140,80)(110,80)\Text(125,90)[]{$e^-$}
\Line(140,80)(170,80)\Text(155,80)[]{{\boldmath $\times$}}
\Text(155,92)[]{$N_j$}
\ArrowLine(170,80)(200,80)\Text(200,90)[r]{$e^-$}
\DashArrowArcn(110,80)(30,180,360){5}
\DashArrowArc(140,80)(30,180,270){5}
\DashArrowArc(140,80)(30,270,360){5}
\Photon(140,50)(140,20){3}{4}\Text(145,20)[l]{$\gamma$}
\Text(110,115)[b]{$\chi^-$}
\Text(118,60)[tr]{$\chi^-$}\Text(163,60)[lt]{$\chi^-$}
\end{picture}\\[0.5cm]
\end{center}
\fcaption{Typical two-loop diagram contributing to the EDM
of the electron.}\label{fig:10}
\end{figure}
\newpage
\subsection{Electric dipole moment of the electron}
\noindent
In general, CP-violating new-physics interactions may give rise to a
large contribution to the EDM of the electron. This results in an
interaction in the Lagrangian of the form
\begin{equation}
\label{EDM}
{\cal L}_{d}\ =\ ie\, \Big(\frac{d_e}{e}\Big)\, \bar{e}\, \sigma_{\mu\nu}
\gamma_5\, e\, \partial^\mu A^\nu\, .
\end{equation}
The experimental upper bound on the electron EDM is very strict: $(d_e/e)
< 10^{-26}$ cm.\cite{PDG} This bound is crucial, as heavy
Majorana neutrinos can induce an EDM of the electron at two
loops.\cite{Ng2} A typical diagram is shown in Fig.\ \ref{fig:10}. A
simple estimate of this contribution based on a naive dimensional
analysis for $m_{N_2},\ m_{N_1}\gg M_W$ gives\cite{APRD}
\begin{equation}
\label{EDMaj}
\frac{d_e}{e} \sim (10^{-24}\, \mbox{cm})\, \times\,
{\rm Im}(h_{1e}h^*_{2e})^2\,
\frac{m_{N_1}m_{N_2}(m^2_{N_1}-m^2_{N_2})}{(m^2_{N_1} + m^2_{N_2})^2}\,
\ln\Big(\frac{m_{N_1}}{M_W}\Big)
\, .
\end{equation}
In Eq.\ (\ref{EDMaj}) the factor depending on $m_{N_i}$ is always much
smaller than unity. Clearly, the above EDM limit could be important
for $|h_{li}| \gg 0.1$ and/or ultra-heavy Majorana neutrinos with
$m_{N_i}>10^{11}$ TeV. In this prediction, one should bear in mind
that stability of the Higgs potential under radiative corrections
requires $|h_{li}|={\cal O}(1)$.\cite{HLP} Nevertheless, the EDM
contribution is several orders of magnitude below the experimental
bound for $x_N < 10^{-3}$ and/or $|h_{l1}|,\ |h_{l2}|
\stackrel{\displaystyle <}{\sim} 10^{-2}$. In this context, it is
interesting to notice that leptogenesis models with nearly degenerate
heavy Majorana neutrinos can naturally evade possible EDM constraints.
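This last remark can be made quantitative by expanding the
mass-dependent factor of Eq.\ (\ref{EDMaj}) for nearly degenerate
heavy neutrinos (a short expansion added for illustration), writing
$m_{N_2}=m_{N_1}(1+x_N)$:
\[
\frac{m_{N_1}m_{N_2}(m^2_{N_1}-m^2_{N_2})}{(m^2_{N_1}+m^2_{N_2})^2}\ =\
-\,\frac{x_N}{2}\ +\ {\cal O}(x_N^2)\, ,
\]
so that for mass splittings $x_N \stackrel{\displaystyle <}{\sim}
10^{-3}$ relevant to resonant leptogenesis the EDM prediction is
suppressed by many orders of magnitude below the experimental bound.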
\setcounter{section}{10}
\setcounter{equation}{0}
\section{Conclusions}\label{sec:10}
\noindent
We have reviewed many recent developments that have been taking place
in the scenario of baryogenesis through leptogenesis, and discussed
the implications that heavy Majorana neutrinos may have for laboratory
experiments. In the standard leptogenesis scenario,\cite{FY}
$L$-violating decays of heavy Majorana neutrinos, which are out of
thermal equilibrium, produce an excess in $L$ that is converted into
the observed BAU through $B+L$-violating interactions mediated by
sphalerons. We have paid more attention to different kinds of
mechanisms of CP violation involved in the $N_i$ decays. One has two
generic types: (a) CP violation originates from the interference
between a tree-level graph and the absorptive part of the one-loop
vertex ($\varepsilon'$-type CP violation) and (b) CP violation comes
from the interference between a tree-level graph and the absorptive
part of the one-loop $N_i$ self-energy ($\varepsilon$-type CP
violation).
Recently, there has been renewed interest in $\varepsilon$-type CP
violation in models with mixed heavy Majorana neutrinos. If the
masses of two heavy neutrinos become degenerate, then finite-order
perturbation theory no longer applies, and various methods have
been invoked to cope with this problem in the
literature.\cite{Paschos} Here, we have discussed the whole issue
based on an effective resummation approach to unstable particle
mixing.\cite{APRD} One then finds that $\varepsilon$-type CP violation
is resonantly enhanced if the mass splitting of the heavy Majorana
neutrinos is comparable to their widths (cf.\ Eq.\ (\ref{CPcond})),
and if the parameter $\delta_{CP}$ defined in Eq.\ (\ref{dCP}) has a
value close to 1. These two conditions turn out to be necessary and
sufficient for resonant CP violation of order unity.\cite{APRD} As a
consequence, the scale of leptogenesis may be lowered down to TeV
energies. In fact, E$_6$-motivated scenarios with nearly degenerate
heavy Majorana neutrinos of order 1 TeV and universal Yukawa couplings
can still be responsible for the BAU. This last observation receives
firm support after solving numerically the BE's.\cite{APRD} In this
kinematic range, the $\varepsilon'$-type contributions are extremely
suppressed. Also, finite-temperature effects on masses of heavy
neutrinos and on decay widths do not spoil the above conditions for
resonant CP asymmetries. Finally, constraints due to electron EDM are
still too weak to play a role in leptogenesis.
The fact that the isosinglet-neutrino scale can be lowered to TeV
energies has a number of virtues. If one has to appeal to (local)
supersymmetry in order to maintain the flatness of the inflaton
potential, one then has to worry about the cosmological consequences
of the gravitino during the nucleosynthesis epoch. Since the weakly
interacting gravitinos have masses of at most the order of a few TeV, their
slow decay rate will lead to an overproduction of D and $^3{\rm He}$
unless the number density-to-entropy ratio at the time of reheating
after inflation is less than about $10^{-10}$.\cite{EWK&SW} This leads
to quite low reheating temperatures $T_{\rm RH}
\stackrel{\displaystyle <}{\sim} 10^{10}$ GeV, after which the
radiation-dominated era starts and baryogenesis or leptogenesis can in
principle occur. The latter causes a major problem for GUT-scale
baryogenesis and GUT's as well, especially if $T_{\rm RH} \sim M_{\rm
GUT}$. In this context, it is important to remark that
supersymmetric extensions of models with isosinglet neutrinos in the
multi-TeV range as the ones discussed here can comfortably evade the
known gravitino problem mentioned above. In such scenarios, the heavy
inflatons with a mass of order $10^{12}$ GeV can now decay abundantly
to the relatively lighter heavy Majorana neutrinos yielding
equilibrated (incoherent) thermal distributions for the latter
particles.
Heavy Majorana neutrinos may also induce sizeable FCNC effects such as
$Z\to \tau\mu$ and $\tau\to \mu\mu\mu$ at a level that can be probed
in near-future experiments. Non-decoupling loop effects of high
SU(2)$_L$-breaking masses play a significant role in increasing
drastically the strength of these new-physics
phenomena.\cite{ZPCAP,APhiggs,IP,IP2} However, these heavy Majorana
neutrinos cannot account for the BAU at the same time. Depending on
the model, they can coexist with the heavy Majorana neutrinos
responsible for leptogenesis without destroying baryogenesis. At the
LHC,\cite{PRODpp} the viability of such models can be tested directly
by looking for like-sign dilepton signals. Such signals will then
strongly point towards the scenario of baryogenesis through
leptogenesis as the underlying mechanism for understanding the
baryonic asymmetry in nature.
\nonumsection{Acknowledgements}
\noindent
The author gratefully acknowledges discussions with Bill Bardeen,
Zurab Berezhiani, James Bjorken, Francisco Botella, Wilfried
Buchm\"uller, Stan Brodsky, Darwin Chang, Sacha Davidson, Ara
Ioannisian, Pasha Kabir, Wai-Yee Keung, Emmanuel Paschos, Michael
Peskin, Georg Raffelt, Utpal Sarkar, Mikhail Shaposhnikov, Leo
Stodolsky, Arkady Vainshtein, and Xinmin Zhang. The author also
wishes to thank Jose Bernab\'eu, Amitava Datta, Kai Diener, Zoltan
Gagyi-Palffy, Monoranjan Guchait, Amon Ilakovac, Bernd Kniehl,
J\"urgen K\"orner, Marek Nowakowski, Joannis Papavassiliou, and Karl
Schilcher for collaboration.
\nonumsection{References}
\section{Introduction}
In the context of non-Hermitian time-independent quantum mechanics many
systems are known to possess real spectra in a certain parameter regime that
becomes spontaneously broken when some coupling constants are driven beyond
the exceptional point \cite{Bender:1998ke,Benderrev,Alirev,moiseyev2011non}.
Unlike their optical analogues \cite{Muss,MatMakris,Guo}, where the
spontaneously broken regime is of great interest, in quantum mechanics this
regime is usually discarded on grounds of being nonphysical since it leads
inevitably to infinite growth in energy due to the fact that the energy
eigenvalues emerge as complex conjugate pairs. In \cite{AndTom3} we
demonstrated that the introduction of an explicit time-dependence into a
non-Hermitian Hamiltonian can make the spontaneously broken $\mathcal{PT}
-regime physically meaningful. The reason for this phenomenon is that the
energy operator becomes modified due to an additional term related to the Dyson
operator and hence its expectation values can become real. Here we extend
the previous analysis of the broken $\mathcal{PT}$-regime from a one
dimensional two-level system \cite{AndTom3} to a two-dimensional system with
infinite Hilbert space.
In addition, we show that technically it is simpler to employ
Lewis-Riesenfeld invariants \cite{Lewis69} instead of directly solving the
time-dependent Dyson equation or the time-dependent quasi-Hermiticity relation.
All approaches are of course equivalent, but the invariant method splits the
problem into several more treatable steps. In particular, it can be viewed
as reformulating the nonpseudo-Hermitian relation for the Hamiltonians
involved, i.e. the time-dependent Dyson relation, into a pseudo-Hermitian
relation for the corresponding invariants. The latter quantities are well
studied in the time-independent setting and are far easier to solve as they
do not involve derivatives with respect to time. Loosely speaking the
time-derivative in the time-dependent Dyson relation acting on the Dyson map
has been split up into the two time-derivatives acting on the invariants
ensuring their conservation. Besides this aspect related to the
technicalities associated with the solution procedure we also provide the
first explicitly solved time-dependent system in higher dimensions.
Our manuscript is organized as follows: In section 2 we recall the key
equations that determine the Dyson map and hence the metric operator. In
section 3 we introduce our two-dimensional model. First we demonstrate
how it may be solved in a time-independent setting. Subsequently we
determine the time-dependent Dyson map in two alternative ways, comparing
the direct and the Lewis-Riesenfeld method. In addition, we compute the
analytical solutions to the time-dependent Schr\"{o}dinger equation and use
them to evaluate instantaneous energy expectation values. Our conclusions
are stated in section 4.
\section{Time-dependent Dyson equation versus Lewis-Riesenfeld invariants}
The central object to compute in the study of non-Hermitian Hamiltonian systems
is the metric operator $\rho $ that can be expressed in terms of the Dyson
operator $\eta $ as $\rho =\eta ^{\dagger }\eta $. Unlike in the
time-independent scenario a non-Hermitian Hamiltonian $H(t)\neq H^{\dagger
}(t)$ can no longer be related to a Hermitian counterpart $h(t)=h^{\dagger
}(t)$ in a pseudo-Hermitian way, that is via a similarity transformation,
but instead the two Hamiltonians are related to each other by means of the
time-dependent Dyson relation
\begin{equation}
h(t)=\eta (t)H(t)\eta ^{-1}(t)+i\hbar \partial _{t}\eta (t)\eta ^{-1}(t).
\label{hH}
\end{equation}
When the Hamiltonian $h(t)$ is observable, this relation implies immediately
that the Hamiltonian $H(t)$ is not observable
\cite{CA,time1,time6,fringmoussa} as it is not a self-adjoint operator with
regard to the standard or modified inner product. The Hamiltonians are
understood to be the operators governing the time-evolution of the systems
satisfying the time-dependent Schr\"{o}dinger equations
\begin{equation}
\mathcal{H}(t)\Psi _{\mathcal{H}}(t)=i\hbar \partial _{t}\Psi _{\mathcal{H}}(t),\qquad \text{for }\mathcal{H}=h,H. \label{TS}
\end{equation}
The Hamiltonian is only identical to the observable energy operator in the
Hermitian case, but different in the non-Hermitian setting where it has to
be modified to
\begin{equation}
\tilde{H}(t):=\eta ^{-1}(t)h(t)\eta (t)=H(t)+i\hbar \eta ^{-1}(t)\partial
_{t}\eta (t). \label{Henergy}
\end{equation}
The two wavefunctions in (\ref{TS}) are related to each other by the Dyson
map
\begin{equation}
\Psi _{h}(t)=\eta (t)\Psi _{H}(t). \label{sol}
\end{equation}
Besides the time-dependent Dyson relation also the time-dependent
quasi-Hermiticity relation is then modified, by acquiring an additional
derivative term in the metric operator
\begin{equation}
H^{\dagger }(t)\rho (t)-\rho (t)H(t)=i\hbar \partial _{t}\rho (t).
\label{qH}
\end{equation}
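For completeness, we note how (\ref{qH}) follows from (\ref{hH}); this
one-line check (with $\rho =\eta ^{\dagger }\eta $) is perhaps worth
spelling out. Demanding $h=h^{\dagger }$ in (\ref{hH}) and multiplying
by $\eta ^{\dagger }$ from the left and $\eta $ from the right yields
\[
\eta ^{\dagger }\eta H+i\hbar \,\eta ^{\dagger }\dot{\eta}\ =\
H^{\dagger }\eta ^{\dagger }\eta -i\hbar \,\dot{\eta}^{\dagger }\eta
\quad \Longrightarrow \quad
H^{\dagger }\rho -\rho H\ =\ i\hbar \left( \dot{\eta}^{\dagger }\eta
+\eta ^{\dagger }\dot{\eta}\right) \ =\ i\hbar \,\partial _{t}\rho \, .
\]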
It was demonstrated \cite{fringmoussa,fringmoussa2,AndTom1,AndTom2,AndTom3}
that the equations (\ref{hH}) and (\ref{qH}) can be directly solved
consistently for $\eta (t)$ and $\rho (t)$, respectively. Alternatively, but
completely equivalent, one may also employ the standard Lewis-Riesenfeld
approach \cite{Lewis69} of computing invariants as argued in
\cite{khantoul2017invariant,maamache2017pseudo}. This approach requires one to
compute the two conserved time-dependent invariants $I_{h}(t)$ and
$I_{H}(t)$, i.e. $dI_{h}/dt=dI_{H}/dt=0$, from the evolution equations
\begin{equation}
\partial _{t}I_{\mathcal{H}}(t)=\frac{i}{\hbar }\left[ I_{\mathcal{H}}(t),\mathcal{H}(t)\right] ,\qquad ~~~\ \ \text{for~\ }\mathcal{H}=h=h^{\dagger },H\neq
H^{\dagger }. \label{LR0}
\end{equation}
Using these two equations together with the Dyson relation (\ref{hH}) it is
straightforward to derive that the two invariants are simply related by a
similarity transformation
\begin{equation}
I_{h}(t)=\eta (t)I_{H}(t)\eta ^{-1}(t)\text{.} \label{simhH}
\end{equation}
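The short computation behind this statement (a sketch, with $\hbar =1$
for brevity and $A:=\dot{\eta}\eta ^{-1}$) reads
\[
\partial _{t}\!\left( \eta I_{H}\eta ^{-1}\right) \ =\
\left[ A,\eta I_{H}\eta ^{-1}\right] +i\,\eta \left[ I_{H},H\right]
\eta ^{-1}\ =\ \left[ A,I_{h}\right] +i\left[ I_{h},h-iA\right] \ =\
i\left[ I_{h},h\right] ,
\]
so that $I_{h}$ indeed satisfies (\ref{LR0}) for the Hermitian
Hamiltonian $h$.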
Since the invariant $I_{h}$ is Hermitian, the invariant $I_{H}$ is its
pseudo-Hermitian counterpart. When $I_{h}$ and $I_{H}$ have been
constructed, (\ref{simhH}) is a much easier equation to solve for
$\eta (t)$ than the Dyson relation (\ref{hH}) directly. At this point one has
therefore also obtained the metric operator simply by $\rho =\eta ^{\dagger
}\eta $. Next one may also employ the invariants to construct the
time-dependent eigenstates from the standard equations \cite{Lewis69}
\begin{eqnarray}
~~~I_{\mathcal{H}}(t)\left\vert \phi _{\mathcal{H}}(t)\right\rangle
&=&\Lambda \left\vert \phi _{\mathcal{H}}(t)\right\rangle ,~~~~~~~~~~~~~~\ \
\ \ \ \ \ ~\ ~\left\vert \Psi _{\mathcal{H}}(t)\right\rangle =e^{i\hbar
\alpha (t)}\left\vert \phi _{\mathcal{H}}(t)\right\rangle ,~~~~~ \label{LR1}
\\
\dot{\alpha} &=&\left\langle \phi _{\mathcal{H}}(t)\right\vert i\hbar
\partial _{t}-\mathcal{H}(t)\left\vert \phi _{\mathcal{H}}(t)\right\rangle
,\qquad \dot{\Lambda}=0~ \label{LR2}
\end{eqnarray}
for $\mathcal{H}=h$ and $\mathcal{H}=H$. Below we compare the two approaches
and conclude that even though the approach using invariants is more lengthy,
it dissects the original problem into several easier smaller steps when
compared to solving the Dyson equation directly. Of course both approaches
are equivalent and must lead to the same solutions for $\eta (t)$, as we
also demonstrate.
In what follows we set $\hbar =1$.
\section{2D systems with infinite Hilbert space in the broken $\mathcal{PT}$-regime}
\subsection{Two dimensional time-independent models}
We set up our model by considering at first a $\mathcal{PT}$-symmetric
system that we then slightly modify by going from a model with partially
broken $\mathcal{PT}$-symmetry to one with completely broken $\mathcal{PT}$-symmetry. We commence with one of the simplest options for a
two-dimensional non-Hermitian system by coupling two harmonic oscillators
with a non-Hermitian coupling term in space
\begin{equation}
H_{xy}=\frac{1}{2m}\left( p_{x}^{2}+p_{y}^{2}\right) +\frac{1}{2}m\left(
\Omega _{x}^{2}x^{2}+\Omega _{y}^{2}y^{2}\right) +i\kappa xy,~~~~~~m,\kappa
,\Omega _{x},\Omega _{y}\in \mathbb{R}. \label{Napkin}
\end{equation}
This non-Hermitian Hamiltonian is symmetric with regard to the antilinear
transformations \cite{EW} $\mathcal{PT}_{\pm }:x\rightarrow \pm x$,
$y\rightarrow \mp y$, $p_{x}\rightarrow \mp p_{x}$, $p_{y}\rightarrow \pm
p_{y}$, $i\rightarrow -i$, i.e. $\left[ \mathcal{PT}_{\pm },H_{xy}\right] =0$. Using standard techniques from $\mathcal{PT}$-symmetric/quasi-Hermitian
quantum mechanics \cite{Bender:1998ke,Benderrev,Alirev}, it can be decoupled
easily into two harmonic oscillators
\begin{equation}
h_{xy}=\eta H_{xy}\eta ^{-1}=\frac{1}{2m}\left( p_{x}^{2}+p_{y}^{2}\right) +
\frac{1}{2}m\left( \omega _{x}^{2}x^{2}+\omega _{y}^{2}y^{2}\right) ,
\end{equation}
by a simple rotation using the angular momentum operator $L_{z}=xp_{y}-yp_{x}
$ in the Dyson map $\eta =e^{\theta L_{z}}$ and constraining the parameters
involved as
\begin{equation}
\omega _{x}^{2}=\frac{\Omega _{x}^{2}\cosh ^{2}\theta +\Omega _{y}^{2}\sinh
^{2}\theta }{\cosh 2\theta },~~\omega _{y}^{2}=\frac{\Omega _{x}^{2}\sinh
^{2}\theta +\Omega _{y}^{2}\cosh ^{2}\theta }{\cosh 2\theta },~~\tanh
2\theta =\frac{2\kappa }{m\left( \Omega _{y}^{2}-\Omega _{x}^{2}\right) }.
\label{xx}
\end{equation}
By the last equation in (\ref{xx}) it follows that one has to restrict
$\left\vert \kappa \right\vert \leq m\left( \Omega _{y}^{2}-\Omega
_{x}^{2}\right) /2$ for this transformation to be meaningful. Thus as long
as the Dyson map is well defined, i.e. the constraint holds, the energy
eigenspectra
\begin{equation}
E_{n,m}=\left( n+\frac{1}{2}\right) \omega _{x}+\left( m+\frac{1}{2}\right)
\omega _{y}.
\end{equation}
of $h$ and $H$ are identical and real. The restriction on $\kappa $ is the
same as the one found in \cite{MandalMY,beygi2015}, where the decoupling of
$H$ to $h$ was realized by an explicit coordinate transformation instead of
the Dyson map. In fact, identifying the parameter $k$ in \cite{MandalMY} as
$k=\cosh 2\theta $, and somewhat similarly in \cite{beygi2015}, the
coordinate transformation becomes a rotation realized by the similarity
transformation acting on the coordinates and the momenta, i.e. we obtain
$H\rightarrow h$ with the coordinate transformation
\begin{equation}
v\rightarrow ~~\eta v\eta ^{-1}=\left(
\begin{array}{cc}
\cosh \theta & i\sinh \theta \\
-i\sinh \theta & \cosh \theta
\end{array}
\right) v,~~~~~\text{for }v=\left(
\begin{array}{c}
x \\
y
\end{array}
\right) ,\left(
\begin{array}{c}
p_{x} \\
p_{y}
\end{array}
\right) .
\end{equation}
Such a scenario is mostly well understood and in analogy to the case studied
in \cite{AndTom3}, solving the time-dependent Dyson equation for $\eta (t)$
will allow us to make sense of the regime for $\kappa \rightarrow \kappa (t)$
beyond the exceptional point.
Let us now slightly modify the model above by modifying some of the
constants and by adding a term that also couples the two harmonic oscillator
Hamiltonians in the momenta
\begin{equation}
H_{xyp}=\frac{a}{2}\left( p_{x}^{2}+x^{2}\right) +\frac{b}{2}\left(
p_{y}^{2}+y^{2}\right) +i\frac{\lambda }{2}\left( xy+p_{x}p_{y}\right)
,\qquad a,b,\lambda \in \mathbb{R}. \label{xyp}
\end{equation}
Clearly this Hamiltonian is also symmetric with regard to the same
antilinear symmetry as $H_{xy}$, i.e. we have $\left[ \mathcal{PT}_{\pm
},H_{xyp}\right] =0$. Thus we expect the eigenvalues to be real or to be
grouped in pairs of complex conjugates when the symmetry is broken for the
wavefunctions.
It is convenient to express this Hamiltonian in a more generic algebraic
fashion as
\begin{equation}
H_{K}=aK_{1}+bK_{2}+i\lambda K_{3}, \label{Hk}
\end{equation}
where we defined Lie algebraic generators
\begin{equation}
K_{1}=\frac{1}{2}\left( p_{x}^{2}+x^{2}\right) ,~~K_{2}=\frac{1}{2}\left(
p_{y}^{2}+y^{2}\right) ,~~K_{3}=\frac{1}{2}\left( xy+p_{x}p_{y}\right)
,~~K_{4}=\frac{1}{2}\left( xp_{y}-yp_{x}\right) . \label{om}
\end{equation}
Besides the generators already appearing in the Hamiltonian we added one
more generator, $K_{4}=L_{z}/2$, to ensure the closure of the algebra, i.e.
we have
\begin{equation}
\begin{array}{lll}
\left[ K_{1},K_{2}\right] =0,~ & \left[ K_{1},K_{3}\right] =iK_{4}, & \left[
K_{1},K_{4}\right] =-iK_{3}, \\
\left[ K_{2},K_{3}\right] =-iK_{4},~~ & \left[ K_{2},K_{4}\right] =iK_{3},~~
& \left[ K_{3},K_{4}\right] =i(K_{1}-K_{2})/2
\end{array}
\label{alg}
\end{equation}
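As a sample check of these relations in the representation (\ref{om})
(with $\hbar =1$), consider for instance
\[
\left[ K_{1},K_{3}\right] \ =\ \frac{1}{4}\left( \left[ p_{x}^{2},x\right]
y+\left[ x^{2},p_{x}\right] p_{y}\right) \ =\ \frac{i}{2}\left(
xp_{y}-yp_{x}\right) \ =\ iK_{4}\, ,
\]
using $[p_{x}^{2},x]=-2ip_{x}$ and $[x^{2},p_{x}]=2ix$; the remaining
commutators are verified in the same manner.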
Notice that $K_{i}^{\dagger }=K_{i}$ for $i=1,\ldots ,4$. In what follows we
mostly use the algebraic formulation so that our results also hold for
representations different from (\ref{om}). We report that the Hamiltonian
$H_{xy}$ in (\ref{Napkin}) requires at least a ten dimensional Lie algebra
when demanding $xy$ to be one of the Lie algebraic generators, which is the
reason we consider first the more compactly expressible Hamiltonian
$H_{xyp}$.
Using the same form of the Dyson map $\eta =e^{\theta L_{z}}$ as above,
albeit with $\theta =\func{arctanh}[\lambda /(b-a)]$, this Hamiltonian is
decoupled into
\begin{equation}
h_{K}=\eta H_{K}\eta ^{-1}=\frac{1}{2}(a+b)\left( K_{1}+K_{2}\right) +\frac{1}{2}\sqrt{(a-b)^{2}-\lambda ^{2}}\left( K_{1}-K_{2}\right) ,
\end{equation}
for $\left\vert \lambda \right\vert <\left\vert a-b\right\vert $. So clearly
for $a=b$ we are in the completely broken $\mathcal{PT}$-regime. That choice
is in addition very convenient as it allows for a systematic construction of
the eigenvalue spectrum of $H_{K}(b=a)$. Since the following commutators
vanish $\left[ H_{K}(b=a),K_{1}+K_{2}\right] =$ $\left[ H_{K}(b=a),K_{3}\right] =\left[ K_{1}+K_{2},K_{3}\right] =0$, one simply needs to search for
simultaneous eigenstates of $K_{3}$ and $K_{1}+K_{2}$ to determine the
eigenstates of $H_{K}(b=a)$, due to Schur's lemma. Indeed for the
representation (\ref{om}) we obtain for $H_{K}(b=a)$ the eigenstates
\begin{equation}
\varphi _{n,m}(x,y)=\frac{e^{-\frac{x^{2}}{2}-\frac{y^{2}}{2}}}{2^{n+m}\sqrt{n!m!\pi }}\left[ \dsum\limits_{k=0}^{n}\binom{n}{k}H_{k}(x)H_{n-k}(y)\right]
\left[ \dsum\limits_{l=0}^{m}(-1)^{l}\binom{m}{l}H_{l}(y)H_{m-l}(x)\right] ,
\end{equation}
with corresponding eigenenergies
\begin{equation}
E_{n,m}=E_{m,n}^{\ast }=a(1+n+m)+i\frac{\lambda }{2}(n-m).
\end{equation}
Here $H_{n}(x)$ denotes the $n$-th Hermite polynomial in $x$. The states are
orthonormal with regard to the standard inner product $\left\langle \varphi
_{n,m}\right. \left\vert \varphi _{n^{\prime },m^{\prime }}\right\rangle
=\delta _{n,n^{\prime }}\delta _{m,m^{\prime }}$. The reality of the
subspectrum with $n=m$ is explained by the fact that the $\mathcal{PT}_{\pm }$-symmetry is preserved, i.e. we can verify that $\mathcal{PT}_{\pm }\varphi _{n,n}=\varphi _{n,n}$. However, when $n\neq m$ the $\mathcal{PT}_{\pm }$-symmetry is spontaneously broken and the eigenvalues occur in
complex conjugate pairs.
Hence this Hamiltonian should be discarded as nonphysical in the
time-independent regime, but we shall see that it becomes physically
acceptable when the parameters $a$ and $\lambda $ are taken to be explicitly
time-dependent.
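In fact, the complex eigenvalues above may be viewed as the analytic
continuation of the real spectrum in the unbroken regime (a remark we
add for orientation): for $|\lambda |<|a-b|$ the decoupled Hamiltonian
$h_{K}$ has the real eigenvalues
\[
E_{n,m}\ =\ \frac{1}{2}(a+b)(n+m+1)\ +\ \frac{1}{2}\sqrt{(a-b)^{2}-\lambda
^{2}}\,(n-m)\, ,
\]
and formally setting $a=b$ turns $\sqrt{(a-b)^{2}-\lambda ^{2}}$ into
$\pm i\lambda $, reproducing $E_{n,m}=a(1+n+m)+i\lambda (n-m)/2$.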
\subsection{A solvable 2D time-dependent Hamiltonian in the broken $\mathcal{PT}$-regime}
We solve now the explicitly time-dependent non-Hermitian Hamiltonian
\begin{equation}
H(t)=\frac{a(t)}{2}\left( p_{x}^{2}+p_{y}^{2}+x^{2}+y^{2}\right) +i\frac{\lambda (t)}{2}\left( xy+p_{x}p_{y}\right) ,\qquad a(t),\lambda (t)\in
\mathbb{R}. \label{H}
\end{equation}
According to the above discussion, the instantaneous eigenvalue spectrum of
$H(t)$ belongs to the spontaneously broken $\mathcal{PT}$-regime.
\subsubsection{The time-dependent Dyson equation}
Let us now compute the right hand side of the time-dependent Dyson relation
(\ref{hH}). For that purpose we assume that the Dyson map is an element of
the group associated to the algebra (\ref{alg}) and take it to be of the form
\begin{equation}
\eta (t)=\dprod\nolimits_{i=1}^{4}e^{\gamma _{i}(t)K_{i}},\qquad \gamma
_{i}\in \mathbb{R}. \label{eta}
\end{equation}
As $\eta $ is not a unitary operator by definition, we have taken the
$\gamma _{i}$ to be real to avoid irrelevant phases. Using now (\ref{eta})
and (\ref{H}) in (\ref{hH}), the right hand side will be Hermitian if and
only if
\begin{equation}
\gamma _{1}=\gamma _{2}=q_{1},\quad \dot{\gamma}_{3}=-\lambda \cosh \gamma
_{4},\quad \dot{\gamma}_{4}=\lambda \tanh \gamma _{3}\sinh \gamma _{4},
\label{34}
\end{equation}
for some real constant $q_{1}\in \mathbb{R}$. The Hermitian Hamiltonian
then takes the form
\begin{equation}
h(t)=a(t)\left( K_{1}+K_{2}\right) +\frac{\lambda (t)}{2}\frac{\sinh \gamma
_{4}}{\cosh \gamma _{3}}\left( K_{1}-K_{2}\right) . \label{hher}
\end{equation}
For the representation (\ref{om}) these are simply two decoupled harmonic
oscillators with time-dependent coefficients. The energy operator $\tilde{H}$
as defined in equation (\ref{Henergy}) becomes
\begin{equation}
\tilde{H}(t)=a(t)\left( K_{1}+K_{2}\right) +\frac{\lambda (t)}{4}\sinh
(2\gamma _{4})\left( K_{1}-K_{2}\right) -i\lambda (t)\left( \sinh ^{2}\gamma
_{4}K_{3}-\sinh \gamma _{4}\tanh \gamma _{3}K_{4}\right) .
\end{equation}
The constraining relations (\ref{34}) may be solved directly for $\gamma
_{3} $ and $\gamma _{4}$, but not in a straightforward manner.\ We eliminate
$\lambda $ and $dt$ from the last two equations in (\ref{34}), so that
$d\gamma _{4}=-\tanh \gamma _{3}\tanh \gamma _{4}d\gamma _{3}$, hence
obtaining $\gamma _{4}$ as a function of $\gamma _{3}$
\begin{equation}
\gamma _{4}=\func{arcsinh}\left( \kappa \func{sech}\gamma _{3}\right)
\label{43}
\end{equation}
with integration constant $\kappa $. Defining $\chi (t):=\cosh \gamma _{3}$
we use (\ref{34}) and (\ref{43}) to derive that the central equation that
needs to be satisfied is the Ermakov-Pinney equation \cite{Ermakov,Pinney}
with a dissipative term
\begin{equation}
\ddot{\chi}-\frac{\dot{\lambda}}{\lambda }\dot{\chi}-\lambda ^{2}\chi =\frac{\kappa ^{2}\lambda ^{2}}{\chi ^{3}}. \label{DEP}
\end{equation}
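A sketch of the intermediate steps may be helpful here (we add it for
the reader's convenience): from (\ref{34}) and (\ref{43}) one has
$\dot{\chi}=\dot{\gamma}_{3}\sinh \gamma _{3}=-\lambda \sinh \gamma
_{3}\cosh \gamma _{4}$ together with $\cosh ^{2}\gamma _{4}=1+\kappa
^{2}/\chi ^{2}$, so that
\[
\dot{\chi}^{2}\ =\ \lambda ^{2}\left( \chi ^{2}-1\right) \left( 1+\frac{\kappa ^{2}}{\chi ^{2}}\right) ;
\]
differentiating this relation with respect to $t$ and dividing by
$2\dot{\chi}$ reproduces (\ref{DEP}).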
This equation is ubiquitous in the context of solving time-dependent
systems, even in the Hermitian setting, see e.g.
\cite{leach2008ermakov}. While some solutions to this equation are known, we
demonstrate here that solving this nonlinear differential equation can be
completely bypassed when employing Lewis-Riesenfeld invariants instead and
computing $\eta $ from the pseudo-Hermiticity relation (\ref{simhH}) for the
invariants.
\subsubsection{The time-dependent Dyson map from pseudo-Hermiticity}
It is natural to assume that the invariants $I_{H}$, $I_{h}$ as well as the
Hermitian Hamiltonian $h(t)$ lie in the same algebra as the non-Hermitian
Hamiltonian $H(t)$. Furthermore we note that $I_{h}(t)$ needs to be
Hermitian, so that we make the Ans\"{a}tze
\begin{equation}
I_{H}(t)=\dsum\limits_{i=1}^{4}\alpha _{i}(t)K_{i},~~\ \
~~~I_{h}(t)=\dsum\limits_{i=1}^{4}\beta _{i}(t)K_{i},\quad
~~h(t)=\dsum\limits_{i=1}^{4}b_{i}(t)K_{i}, \label{IhH}
\end{equation}
with~$\alpha _{i}=\alpha _{i}^{r}+i\alpha _{i}^{i}\in \mathbb{C}$,
$b_{i},\beta _{i},\alpha _{i}^{r},\alpha _{i}^{i}\in \mathbb{R}$.
\paragraph{The Lewis-Riesenfeld invariant $I_{H}(t)$:}
Substituting the expressions for $I_{H}(t)$ and $H(t)$ into the equation in
(\ref{LR0}) and reading off the coefficients of the generators $K_{i}$ we
obtain the four constraints
\begin{equation}
\dot{\alpha}_{1}=\frac{i}{2}\lambda \alpha _{4},\quad ~\dot{\alpha}_{2}=-\frac{i}{2}\lambda \alpha _{4},\quad ~\dot{\alpha}_{3}=0,\quad ~\dot{\alpha}_{4}=i\lambda (\alpha _{2}-\alpha _{1}).
\end{equation}
These equations are easily solved by
\begin{equation}
\alpha _{1}=\frac{c_{1}}{2}+c_{3}\cosh \left[ c_{4}-\!\!\dint\limits_{0}^{t}\lambda (s)ds\right] ,~~\alpha _{2}=c_{1}-\alpha _{1},~~\alpha
_{3}=c_{2},~~\alpha _{4}=2ic_{3}\sinh \left[ c_{4}-\!\!\dint\limits_{0}^{t}\lambda (s)ds\right] , \label{alpha}
\end{equation}
with complex integration constants $c_{i}=c_{i}^{r}+ic_{i}^{i}$,
$c_{i}^{r},c_{i}^{i}\in \mathbb{R}$. At this point we have two options, we
may either compute directly the invariant $I_{h}(t)$ for the Hamiltonian
$h(t)$ as given in (\ref{hher}) by using the evolution equation (\ref{LR0})
or the similarity relation (\ref{simhH}) instead.
\paragraph{The Lewis-Riesenfeld invariant $I_{h}(t)$:}
Denoting the coefficients of $K_{1}$ and $K_{2}$ in (\ref{hher}) by $b_{1}(t)
$ and $b_{2}(t)$, respectively, as defined in the expansion for generic $h(t)
$ in (\ref{IhH}), the relation for the invariants (\ref{LR0}) leads to the
constraints
\begin{equation}
\dot{\beta}_{1}=0,\quad ~\dot{\beta}_{2}=0,\quad ~\dot{\beta}_{3}=\beta
_{4}(b_{2}-b_{1}),\quad ~\dot{\beta}_{4}=\beta _{3}(b_{1}-b_{2}).
\end{equation}
These four coupled first order differential equations are easily solved by
\begin{equation}
\beta _{1}=c_{5},\quad \beta _{2}=c_{6},\quad \beta _{3}=c_{7}\cos \left[
c_{8}-\!\!\dint\nolimits_{0}^{t}(b_{1}-b_{2})ds\right] ,\quad \beta
_{4}=-c_{7}\sin \left[ c_{8}-\!\!\dint\nolimits_{0}^{t}(b_{1}-b_{2})ds\right]
. \label{sol1}
\end{equation}
Next we invoke the pseudo-Hermiticity relation for the invariants (\ref{simhH}).
\paragraph{Relating $I_{H}(t)$ and $I_{h}(t)$:}
So far we have treated the Hermitian and non-Hermitian systems separately.
Next we relate them using the Ans\"{a}tze (\ref{eta}) for $\eta (t)$ and
(\ref{IhH}) for the invariants in the expression (\ref{simhH}). We obtain
eight equations by reading off the coefficients and separating the result
into real and imaginary parts. We can solve the resulting equations for the
real functions
\begin{eqnarray}
\beta _{1} &=&\frac{1}{2}\left[ \alpha _{1}^{r}+\alpha _{2}^{r}-\alpha
_{4}^{i}\sinh \gamma _{3}+\alpha _{3}^{i}\sinh \gamma _{4}\cosh \gamma
_{3}+\left( \alpha _{1}^{r}-\alpha _{2}^{r}\right) \cosh \gamma _{3}\cosh
\gamma _{4}\right] , \label{sol2} \\
\beta _{2} &=&\frac{1}{2}\left[ \alpha _{1}^{r}+\alpha _{2}^{r}+\alpha
_{4}^{i}\sinh \gamma _{3}-\alpha _{3}^{i}\sinh \gamma _{4}\cosh \gamma
_{3}-\left( \alpha _{1}^{r}-\alpha _{2}^{r}\right) \cosh \gamma _{3}\cosh
\gamma _{4}\right] , \\
\beta _{3} &=&\left( \alpha _{2}^{i}-\alpha _{1}^{i}\right) \sinh \gamma
_{4}+\alpha _{3}^{r}\cosh \gamma _{4}, \\
\beta _{4} &=&\left[ \left( \alpha _{1}^{i}-\alpha _{2}^{i}\right) \cosh
\gamma _{4}-\alpha _{3}^{r}\sinh \gamma _{4}\right] \sinh \gamma _{3}+\alpha
_{4}^{r}\cosh \gamma _{3}
\end{eqnarray}
with the additional constraints
\begin{eqnarray}
\alpha _{1}^{i}+\alpha _{2}^{i} &=&0,~~~~~~\ \ \ \ \ \ \ \ \ \ \ \ \ \alpha
_{3}^{r}\alpha _{3}^{i}+\alpha _{4}^{r}\alpha _{4}^{i}=2\alpha
_{1}^{i}(\alpha _{2}^{r}-\alpha _{1}^{r}),~~ \label{con} \\
\tanh \gamma _{3} &=&\frac{\alpha _{4}^{i}}{\sqrt{\left( \alpha
_{1}^{r}-\alpha _{2}^{r}\right) ^{2}-(\alpha _{3}^{i})^{2}}},~~~~~~\tanh
\gamma _{4}=\frac{\alpha _{3}^{i}}{\alpha _{2}^{r}-\alpha _{1}^{r}}~~.
\label{con2}
\end{eqnarray}
We also used here $\gamma _{1}=\gamma _{2}$.
Next we compare our solutions in (\ref{alpha}), (\ref{sol1}) and (\ref{sol2})-(\ref{con2}). First we use the expressions for the $\alpha _{i}$ from
(\ref{alpha}) in (\ref{sol2})-(\ref{con2}). The constraints (\ref{con}) imply
that $c_{1}^{i}=0~$and $4c_{3}^{r}c_{3}^{i}=-c_{2}^{r}c_{2}^{i}$ so that the
time-dependent coefficients in the Hermitian invariant $I_{h}$ become
\begin{eqnarray}
\beta _{1} &=&\frac{c_{1}^{r}}{2}\pm \frac{1}{2}\sqrt{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}}, \label{v1} \\
\beta _{2} &=&\frac{c_{1}^{r}}{2}\mp \frac{1}{2}\sqrt{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}}, \label{v2} \\
\beta _{3} &=&\pm \frac{c_{2}^{r}}{2c_{3}^{r}}\frac{\left[ 4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}\right] }{\sqrt{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}\func{sech}^{2}\left[ c_{4}^{r}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] }}, \label{v3} \\
\beta _{4} &=&\pm \frac{c_{2}^{r}c_{2}^{i}}{2c_{3}^{r}}\sqrt{\frac{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}}{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}\func{sech}^{2}\left[ c_{4}^{r}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] }}\tanh \left[ c_{4}^{r}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] , \label{v4}
\end{eqnarray}
with the constraint $2\left\vert c_{3}^{r}\right\vert >\left\vert
c_{2}^{i}\right\vert $. These expressions need to match with those computed
directly in (\ref{sol2}). It is clear how to identify the constants $c_{5}$
and $c_{6}$ in (\ref{sol1}) when comparing to (\ref{v1}) and (\ref{v2}).
Less obvious is the comparison between the $\beta _{3}$ and $\beta _{4}$.
Reading off $b_{1}$ and $b_{2}$ from (\ref{hher}) and using (\ref{con2}), we
compute
\begin{equation}
\dint\nolimits_{0}^{t}(b_{1}-b_{2})ds=\arctan \left[ \frac{c_{2}^{i}}{\sqrt{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}}}\tanh \left[ c_{4}^{r}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] \right] .
\end{equation}
Setting next the constants $c_{8}=0$, $c_{7}=\pm c_{2}^{r}\sqrt{4(c_{3}^{r})^{2}-(c_{2}^{i})^{2}}/(2c_{3}^{r})$ the solution in (\ref{sol1})
matches indeed with (\ref{v3}) and (\ref{v4}).
We can now assemble our expressions for $\eta $ by using the results for
$\gamma _{3}$ and $\gamma _{4}$ from (\ref{con2}) together with the
expressions in (\ref{alpha}), obtaining
\begin{eqnarray}
\gamma _{3} &=&\func{arctanh}\left[ \frac{\tanh \left[ q_{2}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] }{\sqrt{1-q_{3}^{2}\func{sech}^{2}\left[ q_{2}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] }}\right] ~,~~ \label{g34} \\
\gamma _{4} &=&-\func{arccoth}\left[ \frac{1}{q_{3}}\cosh \left[ q_{2}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] \right] ,
\end{eqnarray}
with the identification $q_{2}=c_{4}^{r}$ and $q_{3}=c_{2}^{i}/(2c_{3}^{r})$.
We convince ourselves that the function
\begin{equation}
\chi (t)=\cosh \gamma _{3}=\sqrt{\frac{\cosh ^{2}\left[ q_{2}-\dint\nolimits_{0}^{t}\lambda (s)ds\right] -q_{3}^{2}}{1-q_{3}^{2}}}
\label{solEP}
\end{equation}
computed with $\gamma _{3}$ as given in (\ref{g34}) does indeed satisfy the
dissipative Ermakov-Pinney equation (\ref{DEP}) when identifying the
constants as $\kappa =q_{3}/\sqrt{1-q_{3}^{2}}$. We also express the
Hamiltonian (\ref{hher}) explicitly as
\begin{equation}
h(t)=f_{+}(t)K_{1}+f_{-}(t)K_{2}~~~~\text{with }f_{\pm }(t)=a(t)\pm \frac{q_{3}\sqrt{1-q_{3}^{2}}\,\lambda (t)}{1+\cosh \left[ 2q_{2}-2\dint\nolimits_{0}^{t}\lambda (s)ds\right] -2q_{3}^{2}}, \label{fpm}
\end{equation}
which is evidently Hermitian for $\left\vert q_{3}\right\vert <1$.
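As a consistency check (sketched here; signs depend on the convention
chosen for $\kappa $), one may verify (\ref{fpm}) directly from
(\ref{hher}): with $\sinh \gamma _{4}=\kappa \func{sech}\gamma _{3}$,
$\kappa =\pm q_{3}/\sqrt{1-q_{3}^{2}}$, and $\cosh \gamma _{3}$ as in
(\ref{solEP}),
\[
\frac{\lambda }{2}\,\frac{\sinh \gamma _{4}}{\cosh \gamma _{3}}\ =\
\frac{\lambda \,\kappa }{2\cosh ^{2}\gamma _{3}}\ =\
\pm \,\frac{q_{3}\sqrt{1-q_{3}^{2}}\;\lambda }{1+\cosh \left[
2q_{2}-2\dint\nolimits_{0}^{t}\lambda (s)ds\right] -2q_{3}^{2}}\, .
\]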
\subsubsection{Eigenstates, phases and instantaneous energy expectation
values}
We note that the computation of the Dyson map did not require the knowledge
of any eigenstates, neither when using Lewis-Riesenfeld invariants nor in
the direct approach of solving the time-dependent Dyson relation. This
also means that so far we have not solved the time-dependent Schr\"{o}dinger
equation nor did we use the eigenstate equations (\ref{LR1}) and (\ref{LR2}). Let us therefore carry out the final step and determine all eigenstates,
including relevant phases, and use them to evaluate the energy expectation
values.
The exact solution to the time-dependent Schr\"{o}dinger equation for the
harmonic oscillator with time-dependent mass and frequency has been known
for twenty years \cite{pedrosa1997exact}. Adapting that solution here to our
notation and situation, for the Hamiltonian $\tilde{h}(t)=a(t)K_{1}$,
with $a(t)$ being any real function of $t$, it reads
\begin{equation}
\tilde{\varphi}_{n}(x,t)=\frac{e^{i\alpha _{n}(t)}}{\sqrt{\varkappa (t)}}\exp \left[ \left( \frac{i}{a(t)}\frac{\dot{\varkappa}(t)}{\varkappa (t)}-\frac{1}{\varkappa ^{2}(t)}\right) \frac{x^{2}}{2}\right] H_{n}\left[ \frac{x}{\varkappa (t)}\right] ,~~\ \label{Ped}
\end{equation}
with phase
\begin{equation}
\alpha _{n}(t)=-\left( n+\frac{1}{2}\right) \ \dint\nolimits_{0}^{t}\frac{a(s)}{\varkappa ^{2}(s)}ds,
\end{equation}
and $\varkappa (t)$ being restricted to the dissipative Ermakov-Pinney
equation
\begin{equation}
\ddot{\varkappa}-\frac{\dot{a}}{a}\dot{\varkappa}+a^{2}\varkappa =\frac{a^{2}}{\varkappa ^{3}}. \label{EP2}
\end{equation}
Thus while we could bypass solving this equation in the form of (\ref{DEP})
for the determination of $\eta $ when it involved $\lambda $, it has
re-emerged for the computation of the eigenstates involving $a$ with a
different sign in front of the last term on the left hand side. Using the
wavefunction (\ref{Ped}) we compute here the expectation value for $K_{1}$
and a normalization factor
\begin{eqnarray}
\left\langle \tilde{\varphi}_{n}(x,t)\right\vert K_{1}\left\vert \tilde{
\varphi}_{m}(x,t)\right\rangle &=&2^{n-2}n!(2n+1)\sqrt{\pi }\frac{
a^{2}(1+\varkappa ^{4})+\varkappa ^{2}\dot{\varkappa}^{2}}{a^{2}\varkappa
^{2}}\delta _{n,m}, \label{ex1} \\
\left\langle \tilde{\varphi}_{n}(x,t)\right. \left\vert \tilde{\varphi}
_{n}(x,t)\right\rangle &=&2^{n}n!\sqrt{\pi }:=N.
\end{eqnarray}
Next we notice that the expectation value (\ref{ex1}) does not depend on
time
\begin{equation}
\frac{d}{dt}\left[ \frac{a^{2}(1+\varkappa ^{4})+\varkappa ^{2}\dot{\varkappa
}^{2}}{a^{2}\varkappa ^{2}}\right] =\frac{2\dot{\varkappa}}{a^{2}}\left(
\ddot{\varkappa}-\frac{\dot{a}}{a}\dot{\varkappa}+a^{2}\varkappa -\frac{a^{2}
}{\varkappa ^{3}}\right) =0, \label{EPf}
\end{equation}
by recognizing in (\ref{EPf}) one of the factors as the Ermakov-Pinney
equation in the form (\ref{EP2}). It is clear that this constant will
depend on the explicit solution of (\ref{EP2}). So for definiteness we
compute it by adapting the solution (\ref{solEP}) to account for the
aforementioned different sign
\begin{equation}
\varkappa (t)=\sqrt{\tilde{\kappa}\cos \left[ 2\dint\nolimits_{0}^{t}a(s)ds
\right] +\sqrt{1+\tilde{\kappa}^{2}}}. \label{solk}
\end{equation}
For this solution we calculate
\begin{equation}
\frac{a^{2}(1+\varkappa ^{4})+\varkappa ^{2}\dot{\varkappa}^{2}}{
a^{2}\varkappa ^{2}}=2\sqrt{1+\tilde{\kappa}^{2}}.
\end{equation}
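As an illustrative numerical cross-check (not part of the derivation; the field $a(t)$ and the constant $\tilde{\kappa}$ below are arbitrary sample choices), a few lines of Python confirm that the closed-form solution (\ref{solk}) satisfies (\ref{EP2}) and indeed renders the above ratio constant:
{\small\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 5.0, 200001)
a = 1.0 + 0.3*np.sin(t)                          # sample field a(t)
A = np.concatenate(([0.0],
    np.cumsum(0.5*(a[1:] + a[:-1])*np.diff(t)))) # A(t) = int_0^t a(s) ds
kt = 0.4                                         # sample value of tilde{kappa}
k = np.sqrt(kt*np.cos(2*A) + np.sqrt(1 + kt**2)) # candidate solution (solk)

kd, ad = np.gradient(k, t), np.gradient(a, t)
kdd = np.gradient(kd, t)

res = kdd - (ad/a)*kd + a**2*k - a**2/k**3       # residual of (EP2)
const = (a**2*(1 + k**4) + k**2*kd**2)/(a**2*k**2)

inner = slice(100, -100)                         # drop boundary FD artifacts
print(np.max(np.abs(res[inner])))                # ~ 0 up to discretization error
print(np.max(np.abs(const[inner] - 2*np.sqrt(1 + kt**2))))
\end{verbatim}}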
Thus for the normalized wavefunction $\hat{\varphi}_{n}(x,t)=\tilde{\varphi}
_{n}(x,t)/\sqrt{N}$ involving the solution (\ref{solk}) we find
\begin{equation}
\left\langle \hat{\varphi}_{n}(x,t)\right\vert K_{1}\left\vert \hat{\varphi}
_{m}(x,t)\right\rangle =\left( n+\frac{1}{2}\right) \sqrt{1+\tilde{\kappa}
^{2}}\delta _{n,m}.
\end{equation}
Hence the solution to the time-dependent Schr\"{o}dinger equation for the
Hermitian Hamiltonian $h(t)$ in (\ref{fpm}) is simply
\begin{equation}
\Psi _{h}^{n,m}(x,y,t)=\hat{\varphi}_{n}^{+}(x,t)\hat{\varphi}_{m}^{-}(y,t),
\end{equation}
when replacing $a\rightarrow f^{\pm }$, $\varkappa \rightarrow \varkappa
_{\pm }$, $\tilde{\kappa}\rightarrow \tilde{\kappa}_{\pm }$ and $\alpha
_{n}\rightarrow \alpha _{n}^{\pm }$ in an obvious manner. We have now
assembled all the information needed to compute the instantaneous energy
expectation value
\begin{eqnarray}
E^{n,m}(t) &=&\left\langle \Psi _{h}^{n,m}(t)\right\vert h(t)\left\vert \Psi
_{h}^{n,m}(t)\right\rangle =\left\langle \Psi _{H}^{n,m}(t)\right\vert \rho
(t)\tilde{H}(t)\left\vert \Psi _{H}^{n,m}(t)\right\rangle \label{Eexp} \\
&=&f_{+}(t)\left( n+\frac{1}{2}\right) \sqrt{1+\tilde{\kappa}_{+}^{2}}
+f_{-}(t)\left( m+\frac{1}{2}\right) \sqrt{1+\tilde{\kappa}_{-}^{2}}, \notag
\end{eqnarray}
with constants $\tilde{\kappa}_{\pm }$. It is clear that this expectation value is
real for any given time-dependent fields $a(t)$, $\lambda (t)\in \mathbb{R}$
and constants $\tilde{\kappa}_{\pm }\in \mathbb{R}$, $\left\vert
q_{3}\right\vert <1$. Hence, we have explicitly shown that one can draw the
same conclusion as in the one-dimensional case \cite{AndTom3}, that a
time-independent non-Hermitian Hamiltonian in the spontaneously
broken $\mathcal{PT}$-regime becomes physically meaningful in the
time-dependent setting.
\section{Conclusions}
We have presented the first higher dimensional solution of the
time-dependent Dyson relation (\ref{hH}) relating a non-Hermitian and a
Hermitian Hamiltonian system with infinite dimensional Hilbert space. As for
the one dimensional case studied in \cite{AndTom3}, we have demonstrated
that the time-independent non-Hermitian system in the spontaneously broken
$\mathcal{PT}$-regime becomes physically meaningful when including an
explicit time-dependence into the parameters of the model and allowing the
metric operator also to be time-dependent. The energy operator (\ref{Henergy})
has perfectly well-defined real expectation values (\ref{Eexp}).
Technically we have compared two equivalent solution procedures, solving the
time-dependent Dyson relation directly for the Dyson map or alternatively
computing Lewis-Riesenfeld invariants first and subsequently constructing
the Dyson map from the similarity relation that relates the Hermitian and
non-Hermitian invariants. The latter approach was found to be simpler as the
similarity relation is far easier to solve than the differential version (\ref{hH}).
The price one pays in this approach is that one needs to compute the two
invariants first. However, the differential equations for these quantities
turned out to be easier to solve than (\ref{hH}). In particular, it was possible
to entirely bypass the dissipative Ermakov-Pinney equation in the
computation of $\eta (t)$. Nonetheless, this ubiquitous equation re-emerged
in the evaluation of the eigenfunctions involving different time-dependent
fields and with a changed sign.
\bigskip \noindent \medskip \textbf{Acknowledgments:} TF is supported by a
City, University of London Research Fellowship.
\section{Introduction}
Facility location is concerned with the optimal placement of one or several new facilities/plants to satisfy the customers' demands. In discrete facility location problems, the positions of both the customers and the potential new facilities are part of the input, as well as the travel costs between them. On the other hand, in continuous facility location problems, although the (geographical) coordinates (in a given $d$-dimensional space) of the customers are provided, the information about the potential location of the facilities is unknown, in the sense that the facilities can be located at any place of the given space. Both the discrete and the continuous versions of facility location problems have been widely studied in the literature (see the monographs \cite{DH_2002,NP_2005} and the references therein). Several versions of these facility location problems have been analyzed, by considering different objective functions \cite{RNPF_MMOR00}, by fixing either the number of facilities to be located (as in the $p$-median or $p$-center problems) \cite{H_OR64} or maximum capacities for the facilities (capacitated facility location) \cite{KH_MS63,P_ORP08}, or by assuming uncertainty in the demands of the customers (see \cite{AFS_O11,CNS_O17,CS_2015} for a recent review), amongst many others.
In this paper, we propose a unified framework for facility location problems in which the underlying problem is a discrete facility location problem. However, because of locational imprecision or inaccuracy, the new facilities are allowed to be located not only at the exact positions of the potential facilities, but in certain regions around each of them, the \textit{neighborhoods}. In case the initial placements of the potential facilities are exact, that is, their neighborhoods are singletons (whose single element coincides with the initial placement of the potential facility), the problem becomes the discrete location version of the problem. On the other hand, if the neighborhoods are large enough, the problem turns into the continuous location version of the problem, allowing the facilities to be located in the entire space. Otherwise, different shapes and sizes for the neighborhoods allow one to model how imprecise the locational information provided is. The goal is, apart from the discrete location decisions of the problem (placement of facilities among the given set and customer--plant allocations), to find the optimal location of the open facilities in their neighborhoods. The main difference between this problem and its underlying discrete facility location problem is that in the latter the travel distances between facilities and customers are assumed to be known, while in the neighborhood version of the problem, as in the continuous case, those distances depend on the place where the facility is located within its neighborhood. Hence, in this problem, the matrix of travel costs is not provided, but a distance measure to compute the travel costs between customers and facilities is given. This problem, as far as we know, has not been fully investigated in Location Analysis, although some attempts have been presented in \cite{Cooper_JRS78} and \cite{Juel_OR81}, where sensitivity analyses were performed by allowing the customers to \textit{move} around disc-shaped neighborhoods on the plane. Also, this problem can be seen as a constrained version of the classical multifacility location problem, which has been only partially studied in the literature (see \cite{BPE_EJOR16}). This framework will be called \textit{Facility Location with Neighborhoods}, a terminology borrowed from the neighborhood versions of the Minimum Spanning Tree problem \cite{BFP_ARXIV16,DFHKKLS_TCS15} and the Traveling Salesman problem \cite{B_JA05,DM_JA03}.
The importance of analyzing this family of problems comes from its wide range of applications. It is well known that discrete facility location problems are useful in many real-world applications (see \cite{LG_IJOC12,MNS_CORS06}, amongst many others). However, in many situations, as for instance in the design of telecommunication networks, where a set of servers must be located to supply connection to a set of customers, the exact location of a server may not be exactly specified. In contrast, a region where the decision maker wishes to locate each of the facilities (a corridor, a room, or any other bounded space) can be \textit{easily} given. In such a case, a robust worst-case decision would not reflect reality, since the decision maker does not know the location of the facility not because of a lack of certainty but because it allows locational flexibility in the decision. An optimal design may be obtained if the new facilities are allowed to be located in adequately chosen neighborhoods.
In this paper, we provide suitable mathematical programming formulations for the neighborhood versions of a widely studied family of objective functions in facility location problems: ordered median (OM) functions. In these problems, $p$ facilities are to be located by minimizing a flexible objective function that allows one to model different classical location problems. For instance, OM problems allow modeling location problems in which the customers support the median ($p$-median) or the maximum ($p$-center) travel costs, among many other robust alternatives.
OM problems were introduced in Location Analysis by Puerto and Fern\'andez \cite{PF_JNCA00} and several papers have analyzed this family of objective functions in facility location: discrete problems \cite{KNPR_TOP10,LPP_COR17,Ponce2016}, continuous problems \cite{BEP_COR13,BEP14}, network/tree location problems~\cite{KNP_NETW03,PT_MP05,Tang_etal16}, hub location problems~\cite{PRR_COR11}, stochastic facility location problems~\cite{Yan_etal17}, multiobjective location \cite{Grabis_etal12}, etc.\ (see \cite{PR_chapter2015} for a recent overview of the developments on ordered median location problems). In particular, we analyze the neighborhood version of OM location problems for the so-called monotone case. We study the still general case in which the neighborhoods are second-order cone representable regions. These sets allow one to model as particular cases polyhedral neighborhoods or $\ell_\tau$-norm balls. The distance measure representing travel costs between customers and facilities is assumed to be an $\ell_\nu$-norm based distance. Within this framework we present four different Mixed Integer Second Order Cone Optimization (MISOCO) models.
The current limitations of off-the-shelf solvers for mixed integer nonlinear problems, and the difficulty of solving even the underlying problem (the classical $p$-median problem is NP-hard), make the resolution of the problem under study a hard challenge. For that reason, we also develop two math-heuristic algorithms based on different location-allocation schemes, which are able to solve larger problems.
Our paper is organized in five sections. In Section \ref{sec:1} we introduce the problem and some general properties are stated. Section \ref{sec:formulations} is devoted to provide four different mixed integer non linear programming formulations of the problem. At the end of the section, we run some computational experiments in order to compare the four formulations. In Section \ref{heuristics} the two math-heuristic approaches are described, and the results of some computational experiments are reported. Finally, some conclusions are presented in Section \ref{sec:conclusions}.
\section{DOMP with Neighborhoods}
\label{sec:1}
In this section we introduce the Ordered Median Problem with Neighborhoods (OMPN) in which the underlying discrete facility location problem is the Discrete Ordered $p$-Median Problem (DOMP).
For the sake of presentation, we first describe the DOMP problem. The input data for the problem is:
\begin{itemize}
\item $\mathcal{A}=\{a_1, \ldots, a_n\} \subseteq \mathbb{R}^d$: set of coordinates of the customers. We assume, as usual in the location literature, that the coordinates of the potential facilities coincide with $\mathcal{A}$.
\item $D = \Big(d(a_i,a_j)\Big)_{i,j=1}^n \in \mathbb{R}^{n\times n}$: Travel cost matrix between facilities.
\item $\lambda_1, \ldots, \lambda_n \geq 0$: Ordered median function weights.
\end{itemize}
The goal of DOMP is to select, from the elements of $\mathcal{A}$, a subset of $p$ facilities, $\mathcal{B} \subset \mathcal{A}$ with $|\mathcal{B}|=p$, that minimizes the ordered median objective function:
$$
\displaystyle\sum_{i=1}^n \lambda_i D_{(i)},
$$
where $D_i= \min_{b\in \mathcal{B}} d(a_i,b)$ (the smallest travel cost to supply customer $i$ from the open facilities), and $D_{(i)}$ represents the $i$-th largest element in the set $\{D_1, \ldots, D_n\}$, i.e. $D_{(i)} \in \{D_1, \ldots, D_n\}$ with $D_{(1)} \geq \cdots \geq D_{(n)}$.
The DOMP can be stated as the following optimization problem:
\begin{equation}
\min_{\mathcal{B} \subset \mathcal{A}: |\mathcal{B}|=p} \displaystyle\sum_{i=1}^n \lambda_i D_{(i)}.
\label{domp0}\tag{${\rm DOMP}$}
\end{equation}
We will assume that the $\lambda$-weights verify $\lambda_1 \geq \cdots \geq \lambda_n \geq 0$, dealing with the so-called \textit{convex ordered median problem}. Most of the main well-known objective functions in Locational Analysis are part of this family, as for instance:
\begin{itemize}
\item Median ($\lambda=(1,\ldots, 1)$): $\sum_{i=1}^{n} D_i$.
\item Center ($\lambda=(1,0,\ldots,0)$): $\max_{i=1, \ldots, n} \;D_i$.
\item $K$-Centrum ($\lambda=(1,\stackrel{K}{\ldots},1,0,\ldots,0)$): $\sum_{i=1}^{K} D_{(i)}$.
\item Cent-Dian$_{\alpha}$ ($\lambda=(1,1-\alpha, \cdots, 1-\alpha)$): $\alpha \max_{i=1, \ldots, n} D_{i}+(1-\alpha)\sum_{1\leq i\leq n} D_{i}$, for $0\leq\alpha\leq1$.
\end{itemize}
Ordered median functions are continuous and symmetric (in the sense that they are invariant under permutations). Furthermore, if $\lambda_1 \geq \ldots \geq \lambda_n \geq 0$, ordered median functions are convex, a fact that will be exploited throughout this paper. The interested reader is referred to \cite{PR_chapter2015} for a complete description of the properties of ordered median functions.
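As a minimal illustration (with hypothetical data, not tied to any instance used later), evaluating an ordered median objective amounts to sorting the allocation costs non-increasingly and taking the $\lambda$-weighted sum:
{\small\begin{verbatim}
import numpy as np

def ordered_median(D, lam):
    # sort costs so that D_(1) >= ... >= D_(n), then weight by lambda
    return float(np.dot(lam, np.sort(D)[::-1]))

D = np.array([4.0, 1.0, 7.0, 3.0])
n = len(D)
print(ordered_median(D, np.ones(n)))                        # median: 15
print(ordered_median(D, np.eye(n)[0]))                      # center: 7
print(ordered_median(D, np.r_[np.ones(2), np.zeros(n-2)]))  # 2-centrum: 11
\end{verbatim}}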
A few formulations and exact solution approaches for the DOMP have been developed since the problem was introduced. In particular, Boland et al. \cite{BDNP_COR06} formulated the problem as a (non convex) quadratic problem with quadratic constraints. A suitable three index (pure) binary programming reformulation with $O(n^3)$ variables and $O(n^2)$ linear constraints was provided by linearizing the bilinear terms. A second formulation, reducing to two the number of indices of the variables, was also presented in the same paper by using a different linearization strategy, which allows reducing the number of binary variables to $O(n^2)$. Puerto \cite{P_ORP08}, Marin et al. \cite{MNV_MMOR10}, Marin et al. \cite{MNPV_DAM09} and Labb\'e et al. \cite{LPP_COR17} provided alternative formulations for the problem with two and three indices, which need, in a preprocessing phase, sorting the elements in the matrix $D$ (and removing duplicates). All the above mentioned formulations are valid for general ordered median problems. Concerning the convex case, Ogryczak and Tamir \cite{OT_IPL03} presented a different formulation which exploits the monotonicity of the $\lambda$-weights by applying a $k$-sum representation of the ordered median function (see also the recent paper \cite{PRT_MP16} for further details on the power of this representation in a wide variety of optimization problems). Finally, in Blanco et al. \cite{BEP14} the authors derived a formulation that also avoids using binary variables for sorting the involved distances. Also, a few heuristic approaches are available in the literature for the DOMP (see \cite{DNHM_AOR05,PPG_EJOR14,SKD_EJOR07}).
Observe also that in the DOMP, once the travel costs matrix is provided, the locational coordinates of the customers are not needed, and then, the problem does not depend on the dimension of the space where the customers live.
For the OMPN framework, instead of providing a travel cost matrix between customers, we consider a travel distance measure $d: \mathbb{R}^d\times \mathbb{R}^d \rightarrow \mathbb{R}_+$ induced by a norm $\|\cdot\|$, i.e., $d(a,b) = \|a-b\|$, for $a, b \in \mathcal{A}$.
Also, each potential facility $a\in \mathcal{A}$ is associated with a convex set $\mathcal{N}(a) \subset \mathbb{R}^d$, with $a \in \mathcal{N}(a)$, its \textit{neighborhood}. We denote by $\overline{\mathcal{N}} = \displaystyle\prod_{a\in\mathcal{A}} \mathcal{N}(a)$ the space of neighborhoods. We also consider set-up costs for opening facilities (which may be neighborhood-dependent), denoted by $f(a)$ for each $a\in \mathcal{A}$.
The goals of the Ordered Median Problem with Neighborhoods are:
\begin{itemize}
\item to find the indices of the $p$ facilities to open: $\mathcal{B}=\{b_1, \ldots, b_p\}$, with $b_j \in \mathcal{A}$ for $j=1, \ldots, p$,
\item to locate the facilities into their neighbourhoods: $\bar b_1, \ldots, \bar b_p$ with $\bar b_j \in \mathcal{N}(b_j)$, $j=1, \ldots, p$, and
\item to allocate customers to their closest open facilities $\bar b_1, \ldots, \bar b_p$,
\end{itemize}
by minimizing an ordered median function of the travel distances plus set-up costs.
Observe that the optimization problem to solve for the OMPN is similar to \eqref{domp0}:
\begin{equation}
\min_{\stackrel{\mathcal{B} \subset \mathcal{A}: |\mathcal{B}|=p}{\bar b_j \in \mathcal{N}(b_j)}} C(\mathcal{B}) := \displaystyle\sum_{i=1}^n \lambda_i D_{(i)} + \displaystyle\sum_{b \in \mathcal{B}} f(b)\label{dompn0}\tag{${\rm OMPN}$}
\end{equation}
but now, $D_i = \min_{b \in \mathcal{B}} d(a_i, \bar b)$, i.e. the travel distance from a customer to its closest facility depends on the position of the facilities in their neighborhoods. So both the discrete location (open facilities and allocation scheme) and the continuous location decisions (coordinates of the new facilities) are involved in the problem.
We use the classical notation for the variables in the $p$-median problem:
$$
x_{ij} = \left\{\begin{array}{cl}
1 & \mbox{if client $i$ is allocated to facility $j$ ($i\neq j$) or if facility $j$ is open ($i=j$),}\\
0 & \mbox{otherwise}
\end{array}\right.
$$
for $i, j=1, \ldots, n$.
Note that, using the above family of variables, the set of open facilities and the assignments between customers and the $p$ facilities can be represented by the set $\mathcal{X} = \mathcal{X}_R \cap \{0,1\}^{n\times n}$, where
\begin{eqnarray*}
\mathcal{X}_R = \Big\{x \in [0,1]^{n\times n}: \displaystyle\sum_{j=1}^n x_{ij} =1, \forall i=1, \ldots, n, \displaystyle\sum_{j=1}^n x_{jj}=p, x_{ij} \leq x_{jj}, \forall i, j=1, \ldots, n\Big\}
\end{eqnarray*}
is the so-called \emph{$p$-median polytope}.
Observe also that the above setting easily extends to the case in which the possible connections between demand points and facilities are induced by a graph.
On the other hand, the set of distances will be represented by the following set:
\begin{eqnarray*}
\mathcal{D} = \Big\{(d,\bar a) \in \mathbb{R}^{n\times n}_+ \times \overline{\mathcal{N}}: d_{ij} \geq \|a_i - \bar a_j\|, i, j=1, \ldots, n, i\neq j\Big\},
\end{eqnarray*}
where $d_{ij}$ represents (when one tries to minimize some aggregating function of the travel costs) the distance between the customer located at $a_i$ and the facility located at $\bar a_j$, for all $i, j=1, \ldots, n$.
Note that the set $\mathcal{D}$ can be \emph{easily} adapted to the case in which each customer uses a different travel distance measure (norm), and the structure of $\mathcal{D}$ remains the same.
With the above notation, the general OMPN can be compactly formulated as:
\begin{align}
\min &\displaystyle\sum_{i=1}^n \lambda_i z_{(i)} + \displaystyle\sum_{j=1}^n f_j x_{jj}\label{dompn:0}\\
\mbox{s.t. } & z_i = \displaystyle\sum_{j=1}^n d_{ij} x_{ij}, \forall i=1, \ldots, n, \label{bilinear}\\
& x \in \mathcal{X}, (d,\bar a) \in \mathcal{D}.\label{dompn:D}
\end{align}
where $f_j$ denotes the set-up cost of the facility initially located at $a_j$, $j=1, \ldots, n$, and $z_i$ represents the minimum distance between the customer located at $a_i$ and the open facilities.
Observe that \eqref{dompn:0}--\eqref{dompn:D} is a mixed integer non linear programming problem (MINLP), whose continuous relaxation is neither convex nor concave due to the bilinear constraints \eqref{bilinear} and possibly to the constraints defining $\mathcal{D}$. In case the neighborhoods are convex, the set $\mathcal{D}$ is also convex (because of the convexity of the norm). Hence, if the discrete location variables $x$ were known, the problem (also because of the convexity of the ordered median function) becomes a continuous convex problem. On the other hand, if the distances were known, the problem becomes a DOMP, so several formulations can be applied to solve the problem. In the OMPN, both $\mathcal{X}$ and $\mathcal{D}$ are part of the final decision. Thus, both the difficulties of handling the DOMP and those of the continuous problem are inherited by the OMPN. In particular, since the $p$-median problem (or the $p$-center problem), which is a particular case of OMPN, is known to be NP-hard \cite{KarivHakimi_SIAMAM79}, the OMPN is also NP-hard.
The simplest OMPN problem, apart from the DOMP case (where the neighborhoods can be seen as singletons), is obtained when the set $\mathcal{D}$ is a polyhedron (and then, defined by a set of linear inequalities). Since the geometry of $\mathcal{D}$ depends on the distance measure induced by $\|\cdot\|$ and the shapes of the neighborhoods, $\mathcal{D}$ will be a polyhedron when these two features can be linearly represented. The norms which are polyhedrally representable are called \textit{block} (or polyhedral) norms (see \cite{NP_2005}), and are characterized by the fact that their unit balls are polytopes, i.e., $P=\{z\in \mathbb{R}^d: \|z\|\leq 1\}$ is a bounded polyhedron. On the other hand, since the neighborhoods are assumed to be compact and convex sets, their polyhedral representability is assured if and only if they are polytopes. In those cases, both the set $\mathcal{D}$ and $\mathcal{X}$ are identified with sets of linear inequalities (and integrality constraints in $\mathcal{X}$). Furthermore, as can be checked in \cite{OO2012} or \cite{PR_chapter2015}, the ordered median function can also be modeled by using a set of linear inequalities and equations and by adding a set of $O(n^2)$ binary variables to our model. The above observations are summarized in the following result.
\begin{thm}
Let $\lambda_1 \geq \cdots \geq \lambda_n \geq 0$, $\|\cdot\|$ a block norm and $\mathcal{N}(a)$ a polyhedron, for each $a \in \mathcal{A}$. Then OMPN can be formulated as a
mixed-integer linear programming problem.
\end{thm}
\begin{proof}
The proof follows by noting that constraints of the form $Z \geq \|X-Y\|$, as those that appear in the description of $\mathcal{D}$, can be reformulated as:
$$
Z \geq e^t (X-Y), \; \forall e \in {\rm Ext}(P^*),
$$
where ${\rm Ext}(P^*)$ is the set of extreme points of $P^* = \{v \in \mathbb{R}^d: v^t b_g \leq 1, g=1, \ldots, |{\rm Ext}(P)|\}$, the unit ball of the dual norm, with $b_1, \ldots, b_{|{\rm Ext}(P)|}$ the extreme points of $P$ (see \cite{NP_2005,Ward-Wendell}).
\end{proof}
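For instance, for the $\ell_1$-norm in the plane, the dual norm is the $\ell_\infty$-norm, whose unit ball has extreme points ${\rm Ext}(P^*)=\{(\pm 1,\pm 1)\}$; hence the constraint $Z \geq \|X-Y\|_1$ is equivalent to the four linear inequalities
$$
Z \geq \varepsilon_1 (X_1-Y_1) + \varepsilon_2 (X_2-Y_2), \qquad \varepsilon_1, \varepsilon_2 \in \{-1,+1\}.
$$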
The following example illustrates the new framework under study.
\begin{ex}\label{ex:1}
Let us consider a set of customers/potential facilities with coordinates in the plane $\mathcal{A}=\{(0,5)$, $(1,1)$, $(1,6)$, $(1,4)$, $(5,3)$, $(10,4)$, $(6.5,0)$, $(8,6)\}$, and travel distances measured with the Euclidean norm. The solutions for the $2$-median, the $2$-center and the $2$-$4$-center ($K$-center with $K=4$) are drawn in Figure \ref{fig:ex1_0} (there, stars represent open facilities, customers are identified with dots and lines are the allocation patterns).
\begin{figure}[h]
\input{ex0-1.tex}
\caption{Solutions for $2$-median, $2$-center and $2$-$4$-center for the data in Example \ref{ex:1}.\label{fig:ex1_0}}
\end{figure}
Note that, as expected, the solutions for the three DOMP problems highly depend on the $\lambda$-weights, being the optimal set of open facilities different for the three cases.
Let us now consider, for each demand point, a neighbourhood defined as the Euclidean disk with radii $r \in \{1, 0.6, 1, 0.6, 2.4, 2.4, 0.8, 1.6\}$ (see Figure \ref{fig:ex1_1}).
\begin{figure}[h]
\input{ex0-2.tex}
\caption{Neighbourhoods for the facilities of Example \ref{ex:1}.\label{fig:ex1_1}}
\end{figure}
The new facilities, now, are not restricted to be located exactly at the given coordinates but in a disk around them. We consider the radius of its neighborhood (disk) as a measure of the set-up cost of each facility, that is, $f(a_i)=r_i$. The solutions of the neighbourhood version of the $2$-median, $2$-center and $2$-$4$-center problems are shown in Figure \ref{fig_ex1_2}.
\begin{figure}[h]
\input{ex0-3.tex}
\caption{Solutions for $2$-median, $2$-center and $2$-$4$-center with neighbourhood for the data in Example \ref{ex:1}.\label{fig_ex1_2}}
\end{figure}
\end{ex}
In what follows, we derive some structural properties of the DOMP that are inherited by the OMPN.
For each $i,j =1, \ldots, n$, we denote by $\widehat{D}_{ij} = \max \{\|a_i - \bar a_j\|: \bar a_j \in \mathcal{N}_j\}$ and $\widehat{d}_{ij} = \min \{\|a_i - \bar a_j\|: \bar a_j \in \mathcal{N}_j\}$ upper and lower bounds for the distances between the $i$th customer and the $j$th potential facility, respectively.
\begin{prop}\label{prop:1}
The following properties are satisfied:
\begin{enumerate}
\item There exists an optimal solution of \eqref{dompn0} in which the $p$ smallest travel distances equal $0$.
\item Let $\mathcal{B}\subseteq \mathcal{A}$ be a set of $p$ facilities whose ordered cost satisfies $C(\mathcal{B}) \leq UB$, and suppose that $\min_{j\neq i} \widehat{d}_{ij} > \dfrac{UB}{\displaystyle\sum_{l=1}^m \lambda_l}$ for some $m=2, \ldots, n$. Then the $i$-th customer is sorted strictly before position $m$ in the whole sorted set of optimal distances.
\end{enumerate}
\end{prop}
\begin{proof} $ $
{\it 1.} The result follows from the observation that if the facility $a \in \mathcal{A}$ is an open facility, the travel costs between $a$ and $a$ are zero.
{\it 2.} Assume that the $i$th customer is sorted in position $r \geq m$ in the sorting sequence of distances, i.e., $D_{(1)} \geq \ldots \geq D_{(m)} \geq D_{(r)} = D_i$. Then, we have that:
$$
C(\mathcal{B}) = \displaystyle\sum_{l=1}^n \lambda_l D_{(l)} + \displaystyle\sum_{b\in \mathcal{B}} f(b) \geq \displaystyle\sum_{l=1}^m \lambda_l D_{(l)} \geq \displaystyle\sum_{l=1}^m \lambda_l D_{i} = D_i \left(\displaystyle\sum_{l=1}^m \lambda_l\right) > UB,
$$
which contradicts the hypothesis.
\end{proof}
\section{MINLP Formulations for the OMPN}
\label{sec:formulations}
In this section, we describe different mathematical programming formulations for solving general OMPN. In particular, we extend the formulations presented in \cite{BEP14}, \cite{BDNP_COR06} and \cite{OT_IPL03} to our problem. As mentioned above, the main difference between the DOMP and the OMPN problem is that in the OMPN the distances are not part of the input, but part of the decision. Hence, the formulations for the DOMP based on preprocessing the travel distances matrix (as those proposed in \cite{LPP_COR17},\cite{MNV_MMOR10}, \cite{MNPV_DAM09} or \cite{P_ORP08}) cannot be applied to our framework.
Observe that, in OMPN, an adequate representation of $\mathcal{D}$ is crucial for the development of efficient solution approaches for the problem. We assume that the neighborhoods belong to a family of convex sets that allows us to represent most of the convex shapes which are useful in practice, and that can be efficiently handled by commercial optimization solvers: second order cone (SOC)-representable sets \cite{LVBL_SOC98}. SOC-representable sets are convex sets defined by second-order cone constraints in the form:
\begin{equation}\label{soc}\tag{SOC}
\|A_i\,x-b_i\|_2 \leq c_i^t x + d_i, \quad \forall i=1, \ldots, M, \quad x \in \mathbb{R}^{N},
\end{equation}
where $A_i \in \mathbb{R}^{M_i\times N}$, $b_i \in \mathbb{R}^{M_i}$, $c_i\in \mathbb{R}^{N}$, $d_i \in \mathbb{R}$, for $i=1, \ldots, M$, and $\|\cdot\|_2$ is the Euclidean norm. Most of the state-of-the-art solvers are capable of efficiently solving optimization problems involving SOC constraints by means of quadratic constraints with positive definite matrices, second order cone constraints (in the form $x^t x \leq y^2$, for $y\geq 0$) or rotated second order cone constraints ($x^t x \leq yz$ with $y, z\geq 0$). SOC constraints allow one to represent not only Euclidean balls, but any $\ell_\tau$-norm ball (see \cite{BEP14} for further details on the explicit representation of $\ell_\tau$-norm based distance constraints as a set of SOC constraints for any $\tau\in \mathbb{Q}$ with $\tau\geq 1$). Clearly, any polyhedron is SOC-representable (setting $A$ and $b$ equal to zero), so any intersection of $\ell_\tau$-norm balls and polyhedra is suitable to be represented as a set of second order cone constraints. Hence, both our neighborhoods and the distances involved in our problem will be defined by SOC constraints, being then $\mathcal{D}$ a SOC-representable set.
For the sake of simplicity, and without loss of generality, we assume that the neighborhood of each $a\in \mathcal{A}$ is an $\ell_\tau$-norm ball, i.e. $\mathcal{N}(a)= \{z \in \mathbb{R}^d: \|z-a\|_\tau \leq r_a\}$, for some $r_a \in \mathbb{R}_+$ and $\tau \in \mathbb{Q}$ with $\tau \geq 1$.
Also, we consider that the travel distances are induced by a $\ell_\nu$-norm with $\nu\in \mathbb{Q}$ and $\nu\geq 1$. With these settings, we explicitly describe $\mathcal{D}$ as follows:
$$
\mathcal{D} = \{(d,\bar a) \in \mathbb{R}_+^{n\times n} \times \mathbb{R}^{n\times d}: d_{ij} \geq \|a_i - \bar a_j\|_\nu, r_j \geq \|a_j - \bar a_j\|_\tau, i,j=1, \ldots, n\}
$$
where $r_j$ denotes the radius of the neighborhood $\mathcal{N}(a_j)$, i.e., $r_j=r_{a_j}$.
The following result, whose proof is straightforward from \cite[Theorem 2]{BEP14}, allows us to efficiently represent the set $\mathcal{D}$ when the involved norms are $\ell_\tau$-based norms.
\begin{prop}
Let $\tau= \frac{r_\tau}{s_\tau}\geq 1$ and $\nu=\frac{r_\nu}{s_\nu} \geq 1$ with $r_\tau, s_\tau, r_\nu, s_\nu \in \mathbb{Z}_+$ and $\gcd(r_\tau,s_\tau)=\gcd(r_\nu,s_\nu)=1$. Then, $\mathcal{D}$ is representable as a set of $(n^2 +n)(2d+1)$ linear inequalities and $nd(n\log\;r_\nu + \log\;r_\tau)$ second order cone constraints.
\end{prop}
\subsection{The three index formulation}
The first formulation is based on the one proposed in \cite{BDNP_COR06}, which uses, apart from the $x_{jj}$-variables described above, the following set of sorting/allocation binary variables for the DOMP:
$$
w_{ij}^k = \left\{\begin{array}{cl} 1 & \mbox{if customer $i$ is allocated to facility $j$ and its distance, $\|a_i-\bar a_j\|$,}\\
& \mbox{is sorted in the $k$th position.}\\
0 & \mbox{otherwise}.\end{array}\right.
$$
This formulation reads as follows:
\begin{align}
\min &\displaystyle\sum_{i, j, k=1}^n \lambda_k d_{ij} w_{ij}^{k} + \displaystyle\sum_{j=1}^n f_j x_{jj}\label{domp1}\tag{${\rm OMPN}_{3I}$}\\
\mbox{s.t. } & \displaystyle\sum_{j,k=1}^n w_{ij}^{k}=1, \forall i=1, \ldots,n,\label{domp1:1}\\
& \displaystyle\sum_{i,j=1}^n w_{ij}^{k}=1, \forall k=1, \ldots,n,\label{domp1:2}\\
& \displaystyle\sum_{k=1}^n w_{ij}^k \leq x_{jj}, \forall i, j=1, \ldots, n,\label{domp1:3}\\
& \displaystyle\sum_{j=1}^n x_{jj}= p,\label{domp1:4}\\
& \displaystyle\sum_{i,j=1}^n d_{ij} w_{ij}^{k-1} \geq \displaystyle\sum_{i,j=1}^n d_{ij} w_{ij}^{k}, \forall k=2, \ldots, n,\label{domp1:5}\\
& w_{ij}^{k} \in \{0,1\}, \forall i,j,k,=1, \ldots, n,\nonumber\\
& x_{jj} \in \{0,1\}, \forall j=1, \ldots, n.\nonumber\\
& (d,\bar a) \in \mathcal{D}.\nonumber
\end{align}
The objective function assigns to each sorted distance its adequate weight $\lambda$. Constraints \eqref{domp1:1} (resp. \eqref{domp1:2}) ensure that each demand point (resp. each position) is assigned to a unique facility and a unique position (resp. demand point). Constraints \eqref{domp1:3} assure that allocation is not allowed unless the plant is open, and \eqref{domp1:4} restricts the problem to open exactly $p$ facilities. Constraints \eqref{domp1:5} allow a correct definition of the $w$-variables in which the sorting of the distances is imposed (the $(k-1)$th distance is at least as large as the $k$-th distance).
Although the above formulation is valid for OMPN, both the objective function and the set of constraints \eqref{domp1:5} are quadratic and non-convex. We introduce a new set of variables to account for the non linear terms in the above formulation:
$$
\theta_{ij}^{k} = d_{ij} w_{ij}^k, \quad i, j, k=1, \ldots, n.
$$
Using the $\theta$-variables, the objective function can be reformulated as:
$$
\displaystyle\sum_{i, j, k=1}^n \lambda_k \theta_{ij}^{k} + \displaystyle\sum_{j=1}^n f_j x_{jj}.
$$
The correct definition of the new variables and satisfaction of constraints \eqref{domp1:5} is assured by the following sets of linear constraints:
\begin{align*}
\theta_{ij}^{k} \geq& d_{ij} - \widehat{D}_{ij}(1-w_{ij}^{k}), &\forall i, j, k=1, \ldots, n,\\
\displaystyle\sum_{i, j=1}^n \theta_{ij}^{k-1} \geq& \displaystyle\sum_{i, j=1}^n \theta_{ij}^{k}, &\forall k=2, \ldots, n,
\end{align*}
where the first set of constraints comes from the McCormick linear reformulation \cite{M_MP76} of the bilinear terms defining the $\theta$-variables, and the second is the reformulation of \eqref{domp1:5} with the new variables.
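For completeness, recall that the full McCormick envelope of the product $\theta_{ij}^{k} = d_{ij} w_{ij}^k$ with $w_{ij}^k \in \{0,1\}$ and $0 \leq d_{ij} \leq \widehat{D}_{ij}$ is
$$
\theta_{ij}^{k} \geq 0, \quad \theta_{ij}^{k} \geq d_{ij} - \widehat{D}_{ij}(1-w_{ij}^{k}), \quad \theta_{ij}^{k} \leq \widehat{D}_{ij}\, w_{ij}^{k}, \quad \theta_{ij}^{k} \leq d_{ij};
$$
since the $\theta$-variables appear with nonnegative weights in a minimization, only the lower envelope needs to be kept explicitly here.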
The formulation above has $O(n^3)$ variables and $O(n^2)$ constraints. Proposition \ref{prop:1} allows us to strengthen the formulation \eqref{domp1}. In particular, if $UB$ is a known upper bound of the optimal value of OMPN, then:
$$
w_{ij}^k = 0, \forall i, j, k =1, \ldots, n, \mbox{ such that } \displaystyle\min_{j\neq i} \widehat{d}_{ij} > \dfrac{UB}{\displaystyle\sum_{l=1}^k \lambda_l}.
$$
Also, because of the relationship between the $w$ and $x$ variables we get that
$$
\displaystyle\sum_{k=1}^n w_{jj}^k = x_{jj}, \forall j=1,\ldots, n,
$$
are valid equations for \eqref{domp1}.
\subsection{The $2$-index formulation}
The second formulation, also based on the one presented in \cite{BDNP_COR06}, considers an alternative representation of the sorting variables. It uses two different sets of variables. The first one allows us to sort the distances of supplying each of the customers:
$$
s_{ik}= \left\{\begin{array}{cl} 1 & \mbox{if the distance supported by the $i$th customer is sorted in the $k$th position.}\\
0 & \mbox{otherwise}.\end{array}\right.
$$
while the second represents the sorted (non-increasing) sequence of distances:
$$
\xi_k = \displaystyle\sum_{i=1}^n s_{ik} \displaystyle\sum_{j=1}^n d_{ij} x_{ij}, \quad k=1, \ldots, n.
$$
This representation allows us to simplify the formulation to the following with $O(n^2)$ variables and $O(n^2)$ constraints.
\begin{align}
\min & \displaystyle\sum_{k=1}^n \lambda_k \xi_k + \displaystyle\sum_{j=1}^n f_j x_{jj}\label{domp2}\tag{${\rm OMPN}_{2I}$}\\
\mbox{s.t. } & \xi_{k} \geq \xi_{k+1}, \forall k=1,\ldots, n-1,\label{domp2:1}\\
& \displaystyle\sum_{k=1}^n \xi_k = \displaystyle\sum_{i,j=1}^n d_{ij} x_{ij},\label{domp2:2}\\
& \xi_k \geq d_{ij}x_{ij} - \widehat{D}_{ij}\; \left(1- s_{ik}\right), \forall i, j, k=1, \ldots, n,\label{domp2:3}
\end{align}
\begin{align}
& \displaystyle\sum_{i=1}^n s_{ik}= 1,\forall k=1, \ldots, n,\label{domp2:4}\\
& \displaystyle\sum_{k=1}^n s_{ik}= 1,\forall i=1, \ldots, n,\label{domp2:5}\\
& \xi_{k}\geq 0, \forall k=1, \ldots, n, \nonumber\\
& s_{ik} \in \{0,1\}, \forall i, k=1, \ldots, n,\nonumber\\
& x \in \mathcal{X}, (d,\bar a) \in \mathcal{D}.\nonumber
\end{align}
The correct definition of the $\xi$-variables is assured by constraints \eqref{domp2:1}--\eqref{domp2:3}, while constraints \eqref{domp2:4} and \eqref{domp2:5} allow the adequate modeling of the $s$-variables.
As in \eqref{domp1}, to avoid nonconvex terms in the formulation, the bilinear terms $d_{ij}x_{ij}$ can be linearized by introducing a new variable $\theta_{ij} = d_{ij}x_{ij}$ and replacing \eqref{domp2:2} and \eqref{domp2:3} by:
\begin{align}
\displaystyle\sum_{k=1}^n \xi_k &= \displaystyle\sum_{i,j=1}^n \theta_{ij},\label{domp2:6}\\
\xi_k & \geq \theta_{ij} - \widehat{D}_{ij} \left(1- s_{ik}\right), \forall i, j, k=1, \ldots, n,\label{domp2:7}\\
\theta_{ij} & \geq d_{ij} - \widehat{D}_{ij} \left(1-x_{ij}\right), \forall i, j=1, \ldots, n.\label{domp2:8}
\end{align}
\subsection{The $K$-sum formulation}
\label{formulation:OT}
Ogryczak and Tamir presented in \cite{OT_IPL03} some linear programming formulations for the problem of minimizing the sum of the $K$ largest linear functions (which is a particular case of ordered median function). In the same paper, the approach is extended to the minimization of convex ordered median functions by means of a telescopic sum of $K$-sum functions. In the next formulation, we apply this idea to formulate the OMPN. For the sake of readability, we first formulate the $K$-center problem.
Let $\lambda=(1, \stackrel{K)}{\ldots}, 1, 0, \stackrel{n-K)}{\ldots}, 0)$. The ordered median function associated to this particular choice of $\lambda$ is known as the $K$-center function. With our notation, provided the set of distances $D_1, \ldots, D_n$, the $K$-center problem consists of minimizing $\Theta_K(D) = \sum_{i=1}^K D_{(i)}$. Such an objective function is proved in \cite{OT_IPL03} to be equivalent to the following expression
$$
\Theta_K(D) = \dfrac{1}{n} \left( K \displaystyle\sum_{i=1}^n D_i + \min_{t\in \mathbb{R}} \displaystyle\sum_{i=1}^n (K\;(t-D_i)_+ + (n-K)\;(D_i-t)_+)\right)
$$
where $z_+=\max \{0, z\}$ for $z\in \mathbb{R}$, and the optimal value $t^*$ in the above expression coincides with $D_{(K)}$ (the $K$-th largest distance). Hence, to minimize $\Theta_K(D)$ one may proceed by solving:
\begin{align*}
\min \;\;K\;t + \displaystyle\sum_{i=1}^n z_i\\
\mbox{s.t. } & z_i \geq D_i-t, \forall i=1, \ldots, n,\\
& z_i \geq 0, \forall i=1, \ldots, n,\\
& t \in \mathbb{R}.
\end{align*}
where the variable $z_i$ is identified with $(D_i -t)_+$ in the above formulation, for $i=1, \ldots, n$. Thus, incorporating the constraints that define the distances in our location problem, the $K$-center location problem with neighborhoods can be formulated as:
\begin{align}
\min & \;\; K\; t + \displaystyle\sum_{i=1}^n z_i + \displaystyle\sum_{j=1}^n f_j x_{jj}\label{KC}\tag{${\rm KCN}_{OT}$}\\
\mbox{s.t. } & z_i \geq D_i-t, \forall i=1, \ldots, n,\\
& D_i \geq d_{ij}- \widehat{D}_{ij} (1-x_{ij}), \forall i,j, =1, \ldots, n,\label{kc:1}\\
& z_i, D_i \geq 0, \forall i=1, \ldots, n,\\
& t \in \mathbb{R},\\
& x \in \mathcal{X}, (d,\bar a) \in \mathcal{D}.\nonumber
\end{align}
The above formulation can be extended to general convex ordered median functions. Observe that if $\lambda_1 \geq \cdots \geq \lambda_n \geq \lambda_{n+1}:=0$ one may represent the ordered median function of the distances $D_1,\ldots, D_n$ by using a telescopic sum:
$$
\displaystyle\sum_{i=1}^{n} \lambda_i D_{(i)} =\displaystyle\sum_{k=1}^{n} (\lambda_k-\lambda_{k+1}) \displaystyle\sum_{i=1}^k D_{(i)} =\displaystyle\sum_{k=1}^n \Delta_k \Theta_k(D)
$$
where $\Delta_k=\lambda_k-\lambda_{k+1} \geq 0$ for $k=1, \ldots, n$.
Thus, convex ordered objective functions can be equivalently rewritten as a weighted sum of $K$-sums, being then suitable to be represented as in the $K$-center problem. With such an observation, and introducing new $t$-variables (one for each of the $K$-sums involved) and $z$-variables, in this case with two indices to account not only for the customer ($i$) but also for the $K$-sum representation ($k$), one obtains the following valid formulation for the OMPN:
\begin{align}
\min &\;\displaystyle\sum_{k=1}^{n} \Delta_k (kt_k + \displaystyle\sum_{i=1}^n z_{ik}) +\displaystyle\sum_{j=1}^n f_j x_{jj}\ \label{domp3}\tag{${\rm OMPN}_{OT}$}\\
\mbox{s.t. } &z_{ik} \geq D_{i} - t_k,\forall i, k=1, \ldots, n,\nonumber\\
& D_i \geq d_{ij}- \widehat{D}_{ij} (1-x_{ij}), \forall i,j, =1, \ldots, n,\\
& z_{ik}, D_i \geq 0, \forall i=1, \ldots, n,\\
& t_k \in \mathbb{R}, k=1, \ldots, n,\\
& x \in \mathcal{X}, (d,\bar a) \in \mathcal{D}.\nonumber
\end{align}
Observe that this formulation also has $O(n^2)$ variables and $O(n^2)$ constraints, but, as will be shown in the computational results, it has a better performance than \eqref{domp2} since it intrinsically exploits the special structure of the convex ordered median objective.
\subsection{The BEP formulation}
Finally, we present a formulation based on the one provided in \cite{BEP14} which, as the one in the previous subsection, is only valid for the convex case. The idea behind the formulation comes from the observation that, because $\lambda_1 \geq \cdots \geq \lambda_n \geq 0$, the evaluation of the ordered median function on a set of distances is reached when choosing, among all possible permutations of the indices, $\mathcal{P}_n$, the one that maximizes the weighted sum, that is:
$$
\displaystyle\sum_{i=1}^n \lambda_i D_{(i)} = \max_{\sigma \in \mathcal{P}_n} \displaystyle\sum_{i=1}^n \lambda_i D_{\sigma(i)}.
$$
The permutations of $\{1, \ldots, n\}$ can be represented by using the set of binary variables
$$
p_{ik}=\left\{\begin{array}{cl} 1 & \mbox{if the permutation assigns index $i$ to index $k$},\\
0 & \mbox{otherwise},\end{array}\right.
$$
verifying that $\sum_{i=1}^n p_{ik}=1$ (for all $k=1, \ldots, n$) and $\sum_{k=1}^n p_{ik}=1$ (for all $i=1, \ldots, n$).
Then, using these variables, the ordered median sum of a given set of values $D_1, \ldots, D_n$ is equivalent to:
\begin{eqnarray*}
\displaystyle\sum_{i=1}^n \lambda_i D_{(i)} & = & \max_{p \in \{0,1\}^{n\times n}} \displaystyle\sum_{i,k=1}^n \lambda_k D_i p_{ik}\\
& & \mbox{s.t. } \displaystyle\sum_{i=1}^n p_{ik}= 1, \forall k=1, \ldots, n,\\
& & \qquad \displaystyle\sum_{k=1}^n p_{ik}= 1, \forall i=1, \ldots, n.
\end{eqnarray*}
The optimization problem above is an assignment problem; hence, the total unimodularity of the constraint matrix assures that its optimal value coincides with that of its dual problem, which reads:
\begin{align*}
\min\;\; &\displaystyle\sum_{i=1}^n u_i + \displaystyle\sum_{k=1}^n v_k \\
\mbox{s.t. } & u_i + v_k \geq \lambda_k D_i, \forall i, k=1, \ldots, n,\\
& u, v \in \mathbb{R}^n.
\end{align*}
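This duality can also be checked numerically on toy data (hypothetical values), e.g.\ using \texttt{scipy}:
{\small\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

D = np.array([4.0, 1.0, 7.0])
lam = np.array([1.0, 0.5, 0.25])     # monotone nonnegative lambda-weights
n = len(D)

# variables (u_1..u_n, v_1..v_n); constraints -u_i - v_k <= -lam_k*D_i
A, b = [], []
for i in range(n):
    for k in range(n):
        row = np.zeros(2*n)
        row[i], row[n + k] = -1.0, -1.0
        A.append(row)
        b.append(-lam[k]*D[i])
res = linprog(c=np.ones(2*n), A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(None, None)]*(2*n))
om = float(np.dot(lam, np.sort(D)[::-1]))   # 7 + 0.5*4 + 0.25*1 = 9.25
print(res.fun, om)                          # both equal 9.25
\end{verbatim}}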
Merging the above representation of the ordered median function into the location problem, the OMPN is reformulated as:
\begin{align}
\min & \displaystyle\sum_{i=1}^n u_i + \displaystyle\sum_{k=1}^n v_k + \displaystyle\sum_{j=1}^n f_j x_{jj}\label{domp4}\tag{${\rm OMPN}_{BEP}$}\\
\mbox{s.t. } & u_i + v_k \geq \lambda_k D_i, \forall i,k=1, \ldots, n,\label{domp4:0}\\
& D_i \geq d_{ij} - \widehat{D}_{ij}(1-x_{ij}), \forall i, j=1, \ldots, n,\label{domp4:1}\\
& x \in \mathcal{X},\nonumber\\
& (d,\bar a) \in \mathcal{D}.\nonumber
\end{align}
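To fix ideas, the following is a minimal sketch of \eqref{domp4} for Euclidean neighborhoods and Euclidean travel distances ($\tau=\nu=2$), written with the \texttt{gurobipy} interface; all data are hypothetical, and the sketch is merely illustrative of how the SOC constraints in $\mathcal{D}$ and the linear ordering machinery fit together:
{\small\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

A = np.array([[0., 5.], [1., 1.], [1., 6.], [5., 3.]])  # customers = sites
r = np.array([1.0, 0.6, 1.0, 2.4])                      # neighborhood radii
f, lam = r.copy(), np.array([1.0, 1.0, 1.0, 1.0])       # set-up costs, weights
n, dim, p = A.shape[0], A.shape[1], 2
# exact upper bounds for Euclidean balls: Dhat_ij = ||a_i - a_j|| + r_j
Dhat = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=2) + r[None, :]

m = gp.Model("OMPN_BEP")
x = m.addVars(n, n, vtype=GRB.BINARY, name="x")
ab = m.addVars(n, dim, lb=-GRB.INFINITY, name="abar")   # facility positions
d = m.addVars(n, n, lb=0.0, name="d")
D = m.addVars(n, lb=0.0, name="D")
u = m.addVars(n, lb=-GRB.INFINITY, name="u")
v = m.addVars(n, lb=-GRB.INFINITY, name="v")

m.addConstrs(x.sum(i, "*") == 1 for i in range(n))      # x in X
m.addConstr(gp.quicksum(x[j, j] for j in range(n)) == p)
m.addConstrs(x[i, j] <= x[j, j] for i in range(n) for j in range(n))
for j in range(n):   # abar_j must stay inside the ball N(a_j)  (SOC)
    m.addQConstr(gp.quicksum((A[j, l] - ab[j, l])*(A[j, l] - ab[j, l])
                             for l in range(dim)) <= r[j]*r[j])
for i in range(n):   # d_ij >= ||a_i - abar_j||_2  (SOC)
    for j in range(n):
        if i != j:
            m.addQConstr(gp.quicksum((A[i, l] - ab[j, l])*(A[i, l] - ab[j, l])
                                     for l in range(dim)) <= d[i, j]*d[i, j])
m.addConstrs(d[i, i] == 0 for i in range(n))  # self-allocation is costless
m.addConstrs(D[i] >= d[i, j] - Dhat[i, j]*(1 - x[i, j])
             for i in range(n) for j in range(n))
m.addConstrs(u[i] + v[k] >= lam[k]*D[i] for i in range(n) for k in range(n))
m.setObjective(u.sum() + v.sum()
               + gp.quicksum(f[j]*x[j, j] for j in range(n)), GRB.MINIMIZE)
m.optimize()
\end{verbatim}}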
\subsection{Comparison of Formulations}
\label{sec:compform}
In this section we provide some theoretical results that allow us to compare the tightness of the continuous relaxations of each of the provided formulations. Let us denote by $z^R_{3I}$, $z^R_{2I}$, $z^R_{OT}$ and $z^R_{BEP}$ the optimal values of the continuous relaxations of formulations \eqref{domp1}, \eqref{domp2}, \eqref{domp3} and \eqref{domp4}, respectively.
\begin{propo}
\label{relaxations}
The following relations are verified:
$$
z^R_{3I} \leq z^R_{2I}, \qquad z^R_{3I} \leq z^R_{BEP} = z^R_{OT}.
$$
\end{propo}
\begin{proof}
Denote by $F_{3I}$, $F_{2I}$, $F_{OT}$ and $F_{BEP}$ the feasible regions of \eqref{domp1}, \eqref{domp2}, \eqref{domp3} and \eqref{domp4} obtained when relaxing the integrality conditions of the models.
\begin{itemize}
\item Let us consider the mapping $\pi: \mathbb{R}^n_+\times \mathbb{R}^{n \times n}_+ \times [0,1]^{n\times n} \times \mathcal{X}_R \times \mathcal{D} \rightarrow \mathbb{R}^{n^3}_+ \times [0,1]^{n^3} \times [0,1]^n \times \mathcal{D} $ defined as:
$$
\pi (\xi, \theta, s, x, (d, \bar a)) = ((\xi_k s_{ik} x_{ij})_{i,j,k=1}^n, (s_{ik}x_{ij})_{i,j,k=1}^n, (x_{jj})_{j=1}^n, (d, \bar a))
$$
First, let us check that $\pi(F_{2I}) \subseteq F_{3I}$, which proves the first inequality. Let $(\xi, \theta, s, x, (d, \bar a)) \in F_{2I}$, and define $(\bar \theta, \bar w, \bar x, (d,\bar a))=\pi(\xi, \theta, s, x, (d, \bar a))$, i.e.:
$$
\bar \theta_{ij}^k = \xi_k s_{ik} x_{ij}, \; \bar w_{ij}^k = s_{ik}x_{ij}, \; \bar x_{jj}= x_{jj}, \; \forall i,j,k=1, \ldots, n.
$$
By construction, the constraints \eqref{domp1:1}-\eqref{domp1:4} are verified:
\begin{itemize}
\item $\displaystyle\sum_{j, k=1}^n \bar w_{ij}^k =\displaystyle\sum_{j, k=1}^n s_{ik} x_{ij} = \displaystyle\sum_{j=1}^n x_{ij} \displaystyle\sum_{k=1}^n s_{ik} \stackrel{\eqref{domp2:5}}{=} \displaystyle\sum_{j=1}^n x_{ij} \stackrel{x\in \mathcal{X}_R}{=} 1$.
\item $\displaystyle\sum_{i, j=1}^n \bar w_{ij}^k =\displaystyle\sum_{i, j=1}^n s_{ik} x_{ij} = \displaystyle\sum_{i=1}^n s_{ik} \displaystyle\sum_{j=1}^n x_{ij} \stackrel{x\in \mathcal{X}_R}{=} \displaystyle\sum_{i=1}^n s_{ik} \stackrel{\eqref{domp2:4}}{=} 1$.
\item $\displaystyle\sum_{k=1}^n \bar w_{ij}^k = \displaystyle\sum_{k=1}^n s_{ik}x_{ij} = x_{ij} \displaystyle\sum_{k=1}^n s_{ik} \stackrel{\eqref{domp2:5}}{=} x_{ij} \stackrel{x\in \mathcal{X}_R}{=} x_{jj}$.
\item $\displaystyle\sum_{j=1}^n \bar x_{jj} = \displaystyle\sum_{j=1}^n x_{jj} \stackrel{x\in \mathcal{X}_R}{=} p$.
\item $\bar \theta_{ij}^k = \xi_k s_{ik} x_{ij} \stackrel{\eqref{domp2:7}, \eqref{domp2:8}}{\geq} (d_{ij}- \widehat{D}_{ij}(2-s_{ik}-x_{ij})) \, s_{ik}x_{ij} = d_{ij}- \widehat{D}_{ij}(1- s_{ik}x_{ij}) + (\widehat{D}_{ij} - d_{ij})(1-s_{ik}x_{ij}) + \widehat{D}_{ij}s_{ik} x_{ij} (s_{ik}+x_{ij}) \stackrel{\bar w_{ij}^k = s_{ik}x_{ij}, s_{ik}, x_{ij}\leq 1, d_{ij}\leq \widehat{D}_{ij}}{\geq} d_{ij}- \widehat{D}_{ij}(1-\bar w_{ij}^k)$.
\item $\displaystyle\sum_{i, j=1}^n \bar \theta_{ij}^k = \displaystyle\sum_{i,j=1}^n \xi_k s_{ik} x_{ij} = \xi_k \displaystyle\sum_{i=1}^n s_{ik} \displaystyle\sum_{j=1}^n x_{ij} \stackrel{x \in \mathcal{X}_R}{=} \xi_{k} \displaystyle\sum_{i=1}^n s_{ik} \stackrel{\eqref{domp2:4}}{=} \xi_k \stackrel{\eqref{domp2:1}}{\geq} \xi_{k+1} = \displaystyle\sum_{i, j=1}^n \bar \theta_{ij}^{k+1}$.
\end{itemize}
Then, $\pi (\xi, \theta, s, x, (d, \bar a)) \in F_{3I}$, so $\pi(F_{2I}) \subseteq F_{3I}$, i.e., any solution of the convex relaxation of \eqref{domp2} induces a solution of the convex relaxation of \eqref{domp1}. Furthermore, the objective values of $(\xi, \theta, s, x, (d, \bar a))$ in \eqref{domp2} and $\pi (\xi, \theta, s, x, (d, \bar a))$ in \eqref{domp1} coincide:
\begin{align*}
\displaystyle\sum_{i,j,k=1}^n \lambda_k \bar \theta_{ij}^k + \displaystyle\sum_{j=1}^n f_j\bar x_{jj} &= \displaystyle\sum_{i,j,k=1}^n \lambda_k \xi_k s_{ik} x_{ij} + \displaystyle\sum_{j=1}^n f_j x_{jj} \\
&= \displaystyle\sum_{k=1}^n \lambda_k \xi_k \displaystyle\sum_{i=1}^n s_{ik} \displaystyle\sum_{j=1}^n x_{ij} + \displaystyle\sum_{j=1}^n f_j x_{jj} \\
&\stackrel{x \in \mathcal{X}_R}{=} \displaystyle\sum_{k=1}^n \lambda_k \xi_k \displaystyle\sum_{i=1}^n s_{ik} + \displaystyle\sum_{j=1}^n f_j x_{jj} \\
& \stackrel{\eqref{domp2:4}}{=} \displaystyle\sum_{k=1}^n \lambda_k \xi_k + \displaystyle\sum_{j=1}^n f_j x_{jj}.
\end{align*}
Thus, $z_{2I}^R \geq z_{3I}^R$.
\item Let $(u, v, D, x, (d,\bar a)) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n_+ \times \mathcal{X}_R \times \mathcal{D}$ be the optimal solution of the continuous relaxation of \eqref{domp4}. Let $p_{ik}$ be the optimal dual variables associated with constraints \eqref{domp4:0}. By optimality conditions they must verify:
\begin{align*}
\displaystyle\sum_{i=1}^n p_{ik}=1, \forall k=1, \ldots, n,\\
\displaystyle\sum_{k=1}^n p_{ik}=1, \forall i=1, \ldots, n.\\
\end{align*}
Let us construct the following vector in $\mathbb{R}^n_+ \times \mathbb{R}^{n\times n}_+ \times [0,1]^{n\times n} \times \mathcal{X}_R \times \mathcal{D}$:
$$
\left(\bar \xi, \bar \theta, \bar s, x, (d,\bar a)\right):= \left(\left(\displaystyle\sum_{i=1}^n p_{ik} D_i\right)_{k=1}^n, \left(d_{ij}x_{ij}\right)_{i,j=1}^n, \left(p_{ik}\right)_{i,k=1}^n, x, (d,\bar a)\right).
$$
By construction, $\bar s$ clearly verifies \eqref{domp2:4} and \eqref{domp2:5}. Also, note from the construction of the BEP formulation that for given $D_1, \ldots, D_n$, the problem
\begin{align*}
\min &\displaystyle\sum_{i=1}^n u_i + \displaystyle\sum_{k=1}^n v_k\\
\mbox{s.t. } & u_i+ v_k \geq \lambda_k D_i,\\
& u_i, v_k \in \mathbb{R}, \forall i, k=1, \ldots, n.
\end{align*}
is equivalent to
\begin{align*}
\max &\displaystyle\sum_{i,k=1}^n \lambda_k D_i p_{ik}\\
\mbox{s.t} & \displaystyle\sum_{i=1}^n p_{ik}=1, \forall k=1, \ldots, n,\\
& \displaystyle\sum_{k=1}^n p_{ik}=1, \forall i=1, \ldots, n,\\
& p_{ik} \in \{0,1\}, \forall i,k =1, \ldots, n,
\end{align*}
which is an assignment problem related to the \textit{best} sorting of the variables based on their costs given by $D_1, \ldots, D_n$. Because of the monotonicity and nonnegativity of the $\lambda$-weights, this is equivalent to computing the ordered median sum $\displaystyle\sum_{i=1}^n \lambda_i D_{(i)} = \displaystyle\sum_{i,k=1}^n \lambda_k D_i p_{ik}$ (where $p$ is the corresponding optimal solution of the problem above, with $p_{ik}=1$ indicating that element $i$ is sorted in the $k$th position). Hence, $\bar \xi_k = \displaystyle\sum_{i=1}^n p_{ik} D_i \geq \displaystyle\sum_{i=1}^n p_{i,k+1} D_i = \bar \xi_{k+1}$ (constraint \eqref{domp2:1}). The verification of the remaining constraints is straightforward. Also, the reader can easily check that the objective values of both solutions coincide. Thus, $z^R_{BEP} \geq z^R_{3I}$.
\item Let $(u, v, D, x, (d,\bar a)) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n_+ \times \mathcal{X}_R \times \mathcal{D}$ be the optimal solution of the continuous relaxation of \eqref{domp4}. Let us construct a feasible solution for the continuous relaxation of \eqref{domp3}. Let $(\bar t, \bar z) \in \mathbb{R}^n \times \mathbb{R}^{n\times n}_+$ be the solution to the problem
\begin{align*}
\min & \displaystyle\sum_{k=1}^{n} \Delta_k (kt_k + \displaystyle\sum_{i=1}^n z_{ik})\\
s.t. & z_{ik} \geq D_i - t_k, \forall i, k=1, \ldots, n,\\
& z_{ik} \geq 0, \forall i, k=1, \ldots, n,\\
& t_k \in \mathbb{R}, \forall k=1, \ldots, n.
\end{align*}
By the construction in Subsection \ref{formulation:OT}, the vector $(\bar t, \bar z, D, x, (d,\bar a))$ is a feasible solution to the continuous relaxation of \eqref{domp3} with the same objective value as $(u, v, D, x, (d,\bar a))$ in the continuous relaxation of \eqref{domp4}, so $z_{OT}^R \leq z_{BEP}^R$. Observe that the opposite direction can be derived with a similar reasoning. Thus, $z_{OT}^R = z^R_{BEP}$.
\end{itemize}
\end{proof}
In the above result, no relation between $z^R_{2I}$ and $z^R_{BEP}$ (or $z^R_{OT}$) is stated. One may think that the continuous relaxation of \eqref{domp4} is tighter than that of \eqref{domp2}, because the former exploits the monotonicity of the $\lambda$-weights. However, that is not always true, as illustrated in the following example.
\begin{ex}
Let us consider five points in $\mathbb{R}^2$, $\mathcal{A}=\{(2,92),$ $(33,70),$ $(65,50),$ $(73, 69),$ $(40, 63)\}$ and neighborhoods defined as Euclidean disks with radii $\{2, 1, 0.05, 5, 1\}$. If the distance measure is the Euclidean norm and we choose as ordered median function the one with $\lambda=(1,1,1,1,1)$ and $p=2$, we get that:
$$
z^R_{3I} = 2.8348 < z^R_{OT}=z^R_{BEP} = 24.4140 < z^R_{2I} = 34.2145 < z_{OPT} = 68.4751
$$
where $z_{OPT}$ is the optimal value of the OMPN.
\end{ex}
\subsection{Computational Comparison of Relaxations}
We have run a series of experiments to study the computational performance of formulations \eqref{domp1}, \eqref{domp2}, \eqref{domp3} and \eqref{domp4}, and also to explore the computational limits of the OMPN. We have randomly generated instances of $n$ demand points in $[0,100]^2$ and $[0,100]^3$ with $n$ ranging in $\{5, 6, 7, 8, 9, 10, 20, 30\}$. Five random instances were generated for each number of points. We solved OMPN problems with $p$ (the number of new facilities to be located) ranging in $\{2,3,5\}$ (provided that $p<n$). Euclidean distances were considered between points. We considered neighborhoods defined as discs (for the planar instances) or 3-dimensional Euclidean balls (for the $3$-dimensional instances) with randomly generated radii. The sizes of the neighborhoods were generated under four scenarios:
\begin{description}
\item[Scenario 1.] Radii generated in $[0,5]$.
\item[Scenario 2.] Radii generated in $[5,10]$.
\item[Scenario 3.] Radii generated in $[10,15]$.
\item[Scenario 4.] Radii generated in $[15,20]$.
\end{description}
In Figure \ref{fig:radius} we show, for one of our $8$-point instances, the neighborhoods for each of the four scenarios. Note that Scenario 1 slightly differs from the DOMP while Scenario 4 is closer to the continuous problem. As will be observed from our experiments, problems with larger radii are computationally harder to solve than those with small radii.
\begin{figure}[h]
\input{radius1.tex}
\caption{Shapes of the neighborhoods for the different scenarios.\label{fig:radius}}
\end{figure}
The set-up cost of each facility was assumed to be the radius of its neighborhood. It can be interpreted as the cost of covering the neighborhood (larger as $r$ increases).
The four formulations were coded in C, and solved using Gurobi 7.01 on a Mac OSX El Capitan with an Intel Core i7 processor at 3.3 GHz and 16GB of RAM. A time limit of 1 hour was set in all the experiments.
Also, four different convex ordered median problems were solved for each of the instances:
\begin{description}
\item[$p$-Median (M):] $\lambda=(1, \ldots, 1)$.
\item[$p$-Center (C):] $\lambda=(1,0, \ldots,0)$.
\item[$p$-KCenter (K):] $\lambda=(\overbrace{1,\ldots,1}^{\lfloor\frac{n}{2}\rfloor}, 0,\ldots, 0)$.
\item[$p$-Cent-Dian$_{0.5}$ (D):] $\lambda=(1, 0.5, \ldots, 0.5)$.
\end{description}
For the small instances ($n \leq 10$) we report here only the results obtained for $n=10$ (the interested reader may check the complete results of the experiments for $n=5, \ldots, 10$ at \href{http://bit.ly/resultsDOMPN}{bit.ly/resultsDOMPN}).
First, we solved the continuous relaxations of the four formulations to analyze their integrality gaps. The average results are shown in Table \ref{table:IG}. We report the integrality gaps ($IG=\dfrac{z^*}{z^R}$) for formulations \eqref{domp1}, \eqref{domp2} and \eqref{domp4} (obviating \eqref{domp3}, whose integrality gap coincides with that of \eqref{domp4} by Proposition \ref{relaxations}). The table summarizes the average integrality gaps for each of the scenarios. As remarked above, there is no order relation between $z^R_{2I}$ and $z^R_{BEP}$. We have boldfaced those values for which the average integrality gap of \eqref{domp2} is smaller than the one for \eqref{domp4}. Note that it only happens for a few combinations of $n$, $p$, problem types and scenarios. In particular, it mostly occurs for small values of $p$ and only for Scenario 1. In Table \ref{gana2I} we show the percentage of instances (of all of those with fixed $n$ and $p$) for which $z_{2I}^R \geq z_{BEP}^R$. Over all the instances, this percentage is $9.97\%$, while among the instances of Scenario 1 it is $26.15\%$.
Tables \ref{table:2dtodas} and \ref{table:3dtodas} show the results for the planar and 3-dimensional problems, respectively. For each of the formulations, we provide the average CPU times (\texttt{time}), the number of nodes explored in the branch-and-bound tree (\texttt{\#nodes}) and the deviation with respect to the solution obtained at the end of the root node of the search tree (\texttt{\%gapR}).
\input{tablesnew.tex}
As can be observed from the results, formulations \eqref{domp3} and \eqref{domp4} are much less time consuming than \eqref{domp1} and \eqref{domp2} in all the cases. Also, the bounds obtained after exploring the root node of \eqref{domp3} and \eqref{domp4} are tighter than those of the rest. Consequently, the number of nodes explored to find the optimal solution or to certify optimality is higher for the first two formulations. These results are as expected, since the first two formulations do not exploit the convexity of monotone ordered median problems: the \textit{sorting constraints} in the first two formulations involve binary variables, while the last two formulations need no additional binary variables for this task.
Since \eqref{domp3} and \eqref{domp4} seem to have a similar computational behavior for the small-size instances, we have performed a series of experiments on medium-size instances to compare these two formulations. The results are shown in Table \ref{table:OT-BEP}, where now $n$ ranges in $\{20,30\}$ and $p$ in $\{2,5,10\}$. As can be observed, the performance (in terms of CPU time) of both formulations is similar, but \eqref{domp4} seems to need, on average, less CPU time in most of the problems, and the standard deviations (StDev) of the computing times for \eqref{domp4} are smaller than those for \eqref{domp3}. Furthermore, using \eqref{domp4} we were able to solve all but $2.56\%$ of the instances within the 1 hour time limit, while \eqref{domp3} was not able to solve $11.34\%$ of the problems. Moreover, in all the instances \eqref{domp4} obtained better upper bounds for the optimal value of the problem (the deviation of the best upper bounds obtained with the OT formulation with respect to the best solution obtained with the BEP formulation is shown in column \%DevBest).
{\small \begin{table}[h]
\centering
{\tiny
\begin{tabular}{|c|c|c|cc|cc|cc|c|}\hline
Sc.& $n$ & $p$ & Time$_{BEP}$ & StDev$_{BEP}$ & Time$_{OT}$ & StDev$_{OT}$ & \%NonSolved$_{BEP}$ & \%NonSolved$_{OT}$ & \%DevBest \\\hline
\multirow{6}{*}{3} & \multirow{3}{*}{20} & 2 & 9.73 & 2.71 & 11.36 & 3.58 & 0\% & 0\% & 0\% \\
&&5 & 253.35 & 18.65 & 449.32 & 28.15 & 0\% & 2.56\% & 0.01\% \\
&&10 & 46.97 & 13.85 & 77.52 & 17.47 & 0\% & 0\%& 0\% \\\cline{2-10}
& \multirow{3}{*}{30} &2 & 59.64 & 6.48 & 148.23 & 14.70 & 0\% & 0\% & 0\% \\
&&5 & 2931.44 & 36.11 & 3099.25 & 32.91 & 75\% & 77.5\% & 1.63\% \\
&&10 & 2861.03 & 37.34 & 3070.86 & 34.40 & 77.5\% & 80\% & 3.75\% \\\hline
\multirow{6}{*}{4} & \multirow{3}{*}{20} &2 & 26.45 & 4.40 & 30.03 & 4.95 & 0\% & 0\% & 0\% \\
&&5 & 1865.88 & 41.15 & 1874.01 & 41.33 & 40\% & 40\% & 0.30\% \\
&&10 & 9.51 & 2.44 & 22.13 & 6.17 & 0\% &0\% & 0\% \\\cline{2-10}
& \multirow{3}{*}{30} &2 & 735.58 & 36.69 & 849.51 & 37.10 & 15\% & 15\% & 0.17\% \\
&&5 & 2742.49 & 39.04 & 2836.95 & 36.91 & 75\% & 75\% & 1.15\% \\
&&10 & 2745.59 & 39.12 & 2789.27 & 38.12 & 75\% & 75\% & 3.28\%\\\hline
\end{tabular}}
\caption{Comparison of \eqref{domp3} and \eqref{domp4} for instances with $n=20, 30$.\label{table:OT-BEP}}
\end{table} }
\section{Math-heuristics for the OMPN}
\label{heuristics}
In this section we describe two mathematical-programming location-allocation heuristic approaches for solving the OMPN on larger instances. Some heuristics have been proposed for solving ordered $p$-median problems (see \cite{DNHM_AOR05}). However, most of them are based on the use of ``fast'' procedures to compute the overall cost of opening/closing certain sets of facilities. Note that when the travel cost matrix is provided and a set of open facilities is given, one can easily determine, for each customer, its cheapest facility (or its second cheapest facility), and once all of them are computed, evaluating an ordered median function can also be performed efficiently. In the OMPN, even if the open facilities are known, the allocation costs depend on the final location of the facilities (which depends in turn on the customers allocated to each facility). Hence, the heuristics developed for the DOMP are no longer valid for the OMPN problem. We propose two alternative local-search math-heuristics which allow us to solve larger instances of the problem at smaller computational costs than the exact approaches, at the price of not guaranteeing the optimality of the solution.
The two heuristic procedures that we propose belong to the well-known family of location-allocation procedures. These schemes are based on performing changes over a set of $p$ candidate facilities to be opened, trying to improve the incumbent objective value. To that end we need to compute the overall cost of opening a given set of facilities $J \subseteq \{1, \ldots, n\}$ with $|J|=p$. Observe that if the set of open facilities is known, we have to compute the minimum (ordered weighted) cost of allocating the customers to those facilities. As mentioned above, the computation of such a cost involves both the allocation of customers to open facilities and the position of the open facilities inside their neighborhoods. Although different formulations can be used for this task, we will use the one based on formulation \eqref{domp4}.
\begin{propo}
Let $J \subset N=\{1, \ldots, n\}$ with $|J|=p$. Then, the cost of using $J$ as the set of open facilities of the OMPN problem can be computed by solving the following mixed-integer nonlinear programming problem:
\begin{align}
c(J) := \min& \displaystyle\sum_{i \in N\backslash J} u_i + \displaystyle\sum_{k \in N \backslash J} v_k + \displaystyle\sum_{j \in J} f_j,\label{cost:J}\tag{${\rm ALLOC}(J)$}\\
\mbox{s.t. } & u_i + v_k \geq \lambda_k z_i, \forall i, k \in N \backslash J,\nonumber\\
& z_i \geq d_{ij} - \widehat{D}_{ij} (1-x_{ij}), \forall i \in N \backslash J, j \in J,\nonumber\\
& d_{ij} \geq \|a_i - \bar a_j\|, \forall i \in N \backslash J, j \in J,\nonumber\\
& r_j \geq \|a_j - \bar a_j\|, \forall j \in J,\nonumber\\
& \displaystyle\sum_{j \in J} x_{ij} = 1, \forall i \in N\backslash J,\nonumber\\
& x_{ij} \in \{0,1\}, \forall i \in N \backslash J, j \in J,\nonumber\\
&z_i \geq 0,\forall i \in N \backslash J.\nonumber
\end{align}
\end{propo}
\begin{proof}
The proof easily follows noting that \eqref{cost:J} is nothing but the simplification of \eqref{domp4} when the values of $x_{jj}$ are known and fixed to $x_{jj}=1$ if $j \in J$ and $x_{jj}=0$, otherwise.
\end{proof}
For each $J$, \eqref{cost:J} can be reformulated as a mixed-integer second order cone program with $(n-p)p$ binary variables (instead of the $n^2$ in \eqref{domp4}). Furthermore, a variable fixing strategy can be applied to \eqref{cost:J} in order to reduce the number of binary variables of the problem.
\begin{prop}
Let $J \subseteq N$ with $|J|=p$, $i\in N\backslash J$, $j\in J$, and let $x^*\in\mathcal{X}$ be an optimal allocation solution of \eqref{cost:J}.
\begin{enumerate}
\item\label{1}If $\exists k \in J\backslash\{j\}$ such that $\widehat{D}_{ik} < \widehat{d}_{ij}$, then $x^*_{ij}=0$.
\item If $\min_{k \neq j} \widehat{d}_{ik} > \widehat{D}_{ij}$, then $x^*_{ij}=1$.
\item If $\{j^\prime \in J: \widehat{D}_{ik} \geq \widehat{d}_{ij^\prime}, \forall k \neq j^\prime\}= \{j\}$, then $x^*_{ij}=1$.
\end{enumerate}
\end{prop}
\begin{proof}$ $
\begin{enumerate}
\item Let us assume that $x^*_{ij}=1$. Then, $d_{ij}^* = \|a_i - \bar a^*_j\| \geq \widehat{d}_{ij}$. By hypothesis there exists $k \in J$ ($k \neq j$) with $\widehat{D}_{ik} < \widehat{d}_{ij}$. Hence, $d_{ij}^* > \widehat{D}_{ik} \geq d_{ik}^* = \|a_i - \bar a_k^*\|$, so $d_{ij}^* \neq \min_{j^\prime \in J} \|a_i - \bar a_{j^\prime}^*\|$, contradicting the optimality of the solution.
\item If $x^*_{ij}=0$, then there exists $k \in J$, $k\neq j$, such that $d_{ij}^* \geq d_{ik}^* > \widehat{D}_{ij} \geq d_{ij}^*$, a contradiction. Thus, $x^*_{ij}=1$.
\item If, by applying \ref{1}, all the facilities except $j$ verify $x_{ij^\prime}^*=0$, then the unique choice for allocating $i$ is $j$.
\end{enumerate}
\end{proof}
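For illustration, the following Python sketch applies the three fixing rules to Euclidean ball neighborhoods, for which $\widehat{d}_{ij}=\max\{0,\|a_i-a_j\|-r_j\}$ and $\widehat{D}_{ij}=\|a_i-a_j\|+r_j$; it is a minimal sketch, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

def fix_variables(a, r, J):
    # a: (n, dim) array of points; r: (n,) radii; J: set of open facilities.
    # Returns {(i, j): 0 or 1} with the binaries fixed by the three rules.
    n = len(a)
    dist = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=2)
    dhat = np.maximum(dist - r[None, :], 0.0)  # closest point of the ball
    Dhat = dist + r[None, :]                   # farthest point of the ball
    fixed = {}
    for i in set(range(n)) - J:
        alive = []
        for j in J:
            others = [k for k in J if k != j]
            if any(Dhat[i, k] < dhat[i, j] for k in others):
                fixed[(i, j)] = 0              # rule 1: j is never closest
            else:
                alive.append(j)
                if others and min(dhat[i, k] for k in others) > Dhat[i, j]:
                    fixed[(i, j)] = 1          # rule 2: j is always closest
        if len(alive) == 1:
            fixed[(i, alive[0])] = 1           # rule 3: only j remains
    return fixed
\end{verbatim}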
As we will show in our computational experiments, the above variable-fixing strategies allow us to fix an average of $80\%$ of the binary variables in the test problems.
Using the above-described formulation, we implemented two different heuristic algorithms. Both algorithms move through different feasible solutions in order to improve an initial feasible solution. This initial solution is constructed either by solving the standard DOMP problem with costs based on the distances between centers of the neighborhoods (a convex combination of $\widehat{D}_{ij}$ and $\widehat{d}_{ij}$), or by solving the OMPN problem for simpler neighborhoods (e.g., polyhedral neighborhoods) and polyhedral distances (which may require less computational effort than general $\ell_\tau$-norm based metrics or neighborhoods). Hence, we assume in what follows that an initial solution $x^0 \in \mathcal{X}$ is known.
\subsection{Math-heuristic Algorithm 1}
Given a feasible solution $\bar x \in \mathcal{X}$, the first algorithm searches, for each open facility $j_0 \in J$, for the best replacement by a facility in $N \backslash J$. Two different options are possible here. The first is to construct the new set of open facilities $J^\prime= J \cup \{i\} \backslash \{j_0\}$ for each $i \in N\backslash J$, solve \eqref{cost:J} for each such $J^\prime$ and keep the best possible change for $j_0$. The second option is to solve a single mixed-integer nonlinear programming problem which decides (through the binary variables $\xi_i$) whether the non-opened facility $i$ is interchanged with $j_0$ to obtain the best improvement:
\begin{align}
\min& \displaystyle\sum_{i \in N} u_i + \displaystyle\sum_{k \in N} v_k + \displaystyle\sum_{j\in J \backslash\{j_0\}} f_j + \displaystyle\sum_{i \in N \backslash J} f_i \xi_i,\label{r:j}\tag{${\rm BestRepl}(j)$}\\
\mbox{s.t. } & u_i + v_k \geq \lambda_k z_i, \forall i, k \in N \backslash J,\nonumber\\
& z_i \geq d_{ij} - \widehat{D}_{ij} (1-x_{ij}), \forall i \in \{j_0\} \cup N \backslash J, j \in J\backslash\{j_0\},\label{b:1}\\
& z_i \geq d_{ij} - \widehat{D}_{ij} (2-x_{ij}-\xi_j), \forall i \in \{j_0\} \cup N \backslash J, j \in N \backslash J,\label{b:2}\\
& d_{ij} \geq \|a_i - \bar a_j\|, \forall i \in \{j_0\} \cup N \backslash J, j \in N\backslash\{j_0\},\nonumber\\
& r_j \geq \|a_j - \bar a_j\|, \forall j \in N\backslash\{j_0\},\nonumber\\
& \displaystyle\sum_{j \in N \backslash\{j_0\}} x_{ij} = 1, \forall i \in \{j_0\} \cup N \backslash J,\nonumber\\
& x_{ij} \leq \xi_j, \forall j \in N\backslash J,\label{b:3}\\
& \displaystyle\sum_{i \in N\backslash J} \xi_i = 1, \label{b:4}\\
& x_{ij}, \xi_i \in \{0,1\}, \forall i \in \{j_0\} \cup N \backslash J, j \in N \backslash\{j_0\},\nonumber\\
&z_i \geq 0,\forall i \in N \backslash J.\nonumber
\end{align}
Note that constraints \eqref{b:1} are the linearization of the bilinear terms as in the previous formulations, but omitting the facility that is to be replaced ($j_0$). For the candidates to replace $j_0$, constraints \eqref{b:2} ensure that in case $j$ is chosen for the replacement and a customer $i$ is allocated to $j$, then the travel cost for $i$ is $d_{ij}$; otherwise, the constraint is redundant. With respect to the variables $\xi$ that model the selection of the facility to be swapped with $j_0$, \eqref{b:3} ensures that unchosen facilities cannot serve any customer, and \eqref{b:4} ensures that a single replacement for $j_0$ is chosen. Although \eqref{r:j} is similar to \eqref{domp4}, the number of binary variables in the problem is $(n-p)n$ instead of $n^2$.
In our experiments, we have checked that solving \eqref{r:j} required more CPU time than solving the $n-p$ problems in the form \eqref{cost:J}, although for problems in which $n-p \ll n$ (i.e., when $p$ is large), the \textit{compact} formulation may consume less CPU time than loading and solving $n-p$ problems in the shape of \eqref{cost:J}.
In what follows we describe our math-heuristic procedure, whose pseudocode is shown in Algorithm \ref{alg:heuristic}. Given an initial set of $p$ open facilities, it iterates by interchanging open facilities with other potential facilities, trying to improve the best upper bound. At each iteration an open facility is selected to be replaced and the best replacement is chosen. After checking all the open facilities, if an improvement over the best upper bound is found, the bound and the set of open facilities are updated. The procedure repeats the same scheme until a termination criterion is fulfilled. In our case, two stopping criteria are considered: a maximum number of iterations and a maximum number of iterations without improvement of the solution. In order to reduce the computation times required for solving \eqref{cost:J} or \eqref{r:j}, we also consider a randomized version of the algorithm in which, instead of finding best replacements for all the open facilities, a single randomly chosen open facility is considered in that phase of the approach.
\begin{algorithm}[H]
\SetKwInOut{Input}{Initialization}\SetKwInOut{Output}{output}
\Input{Let $\widehat{J} \subset N$ with $|\widehat{J}|=p$ an initial set of open facilities and $UB=c(\widehat{J})$.}
\While{$it<it_{max}$}{
\For{$j_0\in \widehat{J}$}{
Find the best replacement for $j_0$ (by solving \eqref{cost:J} for $J=\widehat{J} \cup \{i\} \backslash \{j_0\}$, $i \in N \backslash \widehat{J}$, or by solving \eqref{r:j}): $c_{j_0} = c(J)$.
}
\If{$c_{j_0} <UB$}{
Update $UB=c_{j_0}$\\
$\widehat{J}=J$\\
BREAK}
Increase $it$.
}
\caption{Math-Heuristic 1 for solving OMPN.\label{alg:heuristic}}
\end{algorithm}
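A compact Python skeleton of Algorithm \ref{alg:heuristic} is given below. The helper \texttt{alloc\_cost}, which stands for a solver of \eqref{cost:J} (e.g., a call to a mixed-integer second order cone solver), is an assumption of this sketch and is not provided here.
\begin{verbatim}
def math_heuristic_1(N, J0, alloc_cost, it_max=50, stall_max=10):
    # N: set of all sites; J0: initial set of p open facilities.
    J, UB = set(J0), alloc_cost(J0)
    it = stall = 0
    while it < it_max and stall < stall_max:
        improved = False
        for j0 in list(J):
            # Best replacement of j0 by a currently closed facility.
            cost, i_best = min((alloc_cost((J - {j0}) | {i}), i)
                               for i in N - J)
            if cost < UB:
                UB, J = cost, (J - {j0}) | {i_best}
                improved = True
                break                  # restart the scan from the new J
        stall = 0 if improved else stall + 1
        it += 1
    return J, UB
\end{verbatim}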
A crucial point of local search heuristics is the quality of the initial feasible solution (in the $x$-variables). We compute the solution of the DOMP problem but using, as costs between facilities $i$ and $j$, a convex combination of the lower and upper bounds $\widehat{d}_{ij}$ and $\widehat{D}_{ij}$, which provides good results in practice.
\subsection{Math-Heuristic Algorithm 2}
The second heuristic is based on alternating the location and allocation decisions. Initially, a DOMP is solved by fixing $\bar a = a$, and precomputing the distances between the facilities. Once a solution is obtained, the optimal open facilities are kept and given as input to \eqref{cost:J}. Then, the variables $\bar a$ are updated with the obtained solution and the process is repeated until stabilization. In order to escape from local optima, the scheme is applied again but forbidding the use of one of the facilities opened in the first stage. The process iterates until no improvements are found.
The pseudocode for this approach is shown in Algorithm \ref{alg:h2}.
\begin{algorithm}[H]
\SetKwInOut{Input}{Initialization}\SetKwInOut{Output}{output}
\Input{$\bar a = a$}
\While{$|f_1 - f_2| > \varepsilon$}{
$\bullet$ Solve DOMP for $d_{ij} = \| a_i - \bar a_j\|$. Update $J = \{j\in N: x^*_{jj} =1\}$ and its objective value $f_1$.\par
$\bullet$ Solve \eqref{cost:J} and update $\bar a$ and its objective value $f_2$.
}
\For{$j_0 \in J$}{
Initialize $\bar a = a$.
\While{$|f_1 - f_2| > \varepsilon$}{
$\bullet$ Solve DOMP for $d_{ij} = \| a_i - \bar a_j\|$ forbidding opening $j_0$. Update $J = \{j\in N: x^*_{jj} =1\}$ and its objective value $f_1$.\par
$\bullet$ Solve \eqref{cost:J} and update $\bar a$ and its objective value $f_2$.
}
}
\caption{Math-Heuristic 2 for solving OMPN.\label{alg:h2}}
\end{algorithm}
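The alternating scheme of Algorithm \ref{alg:h2} can be sketched in Python as follows; \texttt{solve\_domp} (a DOMP solver for fixed facility positions, with an optional forbidden facility) and \texttt{alloc\_step} (a solver of \eqref{cost:J} returning the cost and the updated positions $\bar a$) are assumed helpers, not part of the paper's code.
\begin{verbatim}
import numpy as np

def math_heuristic_2(a, solve_domp, alloc_step, eps=1e-6, max_rounds=20):
    def alternate(forbidden):
        a_bar = np.array(a, dtype=float)             # start with bar a = a
        f1, f2, J = np.inf, 0.0, None
        for _ in range(max_rounds):
            if abs(f1 - f2) <= eps:
                break
            J, f1 = solve_domp(a, a_bar, forbidden)  # location step
            f2, a_bar = alloc_step(J)                # allocation step
        return J, min(f1, f2)

    best_J, best_val = alternate(None)
    for j0 in list(best_J):              # escape local optima: forbid j0
        J, val = alternate(j0)
        if val < best_val:
            best_J, best_val = J, val
    return best_J, best_val
\end{verbatim}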
\section{Experiments}
In order to test the performance of the math-heuristic approaches, we have run some experiments on a real dataset consisting of the 2-dimensional coordinates (normalized longitude and latitude) of the geographical centers of 49 states of the United States (we exclude Alaska, Hawaii and those outside North America). We considered as neighborhoods Euclidean disks with radii based on the areas of the states. For each state (indexed by $j$), the area (in km$^2$), $A_j$, was obtained and we constructed the radius $r^0_j = \sqrt{\frac{A_j}{\pi}}$. The coordinates and the discs built applying this strategy are drawn in Figure \ref{us}. Then, three different scenarios were considered (a small construction sketch is given below):
\begin{description}
\item[S1]: $r_j = r^0_j$, for $j=1, \ldots, 49$.
\item[S2]: $r_j = 2\times r^0_j$, for $j=1, \ldots, 49$.
\item[S3]: $r_j = 3\times r^0_j$, for $j=1, \ldots, 49$.
\end{description}
The interested reader may download the datasets at \url{http://bit.ly/datasetUS}.
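A minimal Python sketch of this construction (the input file name is hypothetical) is:
\begin{verbatim}
import numpy as np

A = np.loadtxt("state_areas_km2.txt")  # hypothetical file with areas A_j
r0 = np.sqrt(A / np.pi)                # disc with the same area as the state
scenarios = {"S1": r0, "S2": 2 * r0, "S3": 3 * r0}
\end{verbatim}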
\input{usr1.tex}
We implemented the two math-heuristic approaches in Gurobi under the C API, and we compare the running times and the best values obtained with these procedures with those obtained with the exact \eqref{domp4} formulation (with a time limit of 1 hour). We solved the $p$-Median (M), $p$-Center (C), $p$-$25$-center (K) and $p$-Centdian (D) problems with $p \in \{2, 5, 10\}$. The results are reported in Table \ref{res:heuristics}. In that table, the first column indicates the scenario ($1$, $2$ or $3$), the second column (Pr.) shows the problem type and the third column indicates the number of facilities to be opened, $p$. The values of the solutions obtained by using the different approaches as well as their CPU running times (in seconds) are reported:
\begin{itemize}
\item Initial solution obtained by solving the nominal DOMP problem and solving \eqref{cost:J} for the obtained open facilities: \texttt{H0} and \texttt{t0}.
\item Best solution obtained by the math-heuristic approach 1: \texttt{H1} and \texttt{t1}.
\item Best solution obtained by the math-heuristic approach 2: \texttt{H2} and \texttt{t2}.
\item Best solution obtained by exact formulation \eqref{domp4} within the time limit: \texttt{BEP} and \texttt{tBEP}.
\end{itemize}
We also report in the 12th column (\texttt{\%VarFixed}) the average percentage of binary variables fixed in the first heuristic, and the percentage deviations of the obtained solutions with respect to the best solution found with the exact formulation within the time limit: \texttt{G1}, \texttt{G2} and \texttt{G0} for the first heuristic, the second heuristic and the initial solution, respectively.
\begin{landscape}
\begin{table}[h]
{\small \centering
\begin{tabular}{|c|c|c|cccc|cccc|c|ccc|}\hline
\texttt{Sc.} & \texttt{$p$} & \texttt{Pr.} & \texttt{H0} & \texttt{H1} & \texttt{H2} & \texttt{BEP} & \texttt{t0} & \texttt{t1} & \texttt{t2} & \texttt{tBEP} & \texttt{\%VarFixed} & \texttt{G1} & \texttt{G2} & \texttt{G0} \\\hline
\multirow{12}{*}{SC1} & \multirow{4}{*}{2} & M & 395.3482 & 394.8909 & 395.3482 & 394.891 & 2.5 & 31.52 & 13.28 & 32.44 & 89.15\% & 0.00\% & 0.12\% & 0.12\% \\
& & C & 19.105 & 18.0278 & 18.0278 & 18.0278 & 0.08 & 0.28 & 4.2 & 8.18 & 90.41\% & 0.00\% & 0.00\% & 5.64\% \\
& & K & 272.8855 & 270.8245 & 270.7348 & 270.7348 & 0.84 & 12.88 & 33.24 & 22.92 & 89.84\% & 0.03\% & 0.00\% & 0.79\% \\
& & D & 207.2299 & 207.2299 & 207.2299 & 207.2298 & 2.99 & 18.14 & 12.41 & 22.12 & 89.88\% & 0.00\% & 0.00\% & 0.00\% \\\cline{2-15}
& \multirow{4}{*}{5} & M & 222.3974 & 221.607 & 221.5985 & 222.6594 & 2.68 & 56.03 & 160.26 & $>3600$ & 81.54\% & -0.47\% & -0.48\% & -0.12\% \\
& & C & 18.619 & 18.2246 & 16.2491 & 16.2491 & 0.17 & 0.35 & 9.57 & 37.59 & 84.90\% & 10.84\% & 0.00\% & 12.73\% \\
& & K & 167.6711 & 163.564 & 160.1906 & 160.1138 & 3.98 & 24.07 & 239.04 & $>3600$ & 83.05\% & 2.11\% & 0.05\% & 4.51\% \\
& & D & 120.9466 & 120.0572 & 119.8601 & 119.7391 & 2.66 & 67.81 & 116.23 & $>3600$ & 83.02\% & 0.27\% & 0.10\% & 1.00\% \\\cline{2-15}
& \multirow{4}{*}{10} & M & 146.2992 & 144.6164 & 144.2134 & 141.8635 & 8.4 & 145.96 & 245.51 & $>3600$ & 85.33\% & 1.90\% & 1.63\% & 3.03\% \\
& & C & 27.0357 & 22.9692 & 19.9944 & 19.9944 & 0.19 & 3.06 & 15.9 & 55.57 & 84.83\% & 12.95\% & 0.00\% & 26.04\% \\
& & K & 118.7461 & 118.0626 & 117.7095 & 125.0503 & 6.34 & 70.23 & 1760.77 & $>3600$ & 85.80\% & -5.92\% & -6.24\% & -5.31\% \\
& & D & 86.6926 & 85.2866 & 85.7954 & 85.5946 & 10.84 & 94.08 & 357.67 & $>3600$ & 86.32\% & -0.36\% & 0.23\% & 1.27\% \\\hline
\multirow{12}{*}{SC2} & \multirow{4}{*}{2} & M & 399.6586 & 395.1866 & 398.2659 & 395.1789 & 2.58 & 96.14 & 27.78 & 462.06 & 77.53\% & 0.00\% & 0.78\% & 1.12\% \\
& & C & 23.3894 & 21.8305 & 22.1818 & 21.7935 & 0.11 & 0.45 & 6.72 & 10.48 & 85.33\% & 0.17\% & 1.75\% & 6.82\% \\
& & K & 275.9869 & 274.8279 & 274.6485 & 274.2601 & 1.4 & 25.67 & 32.68 & 365.95 & 78.30\% & 0.21\% & 0.14\% & 0.63\% \\
& & D & 211.7095 & 209.7885 & 211.8238 & 209.7885 & 2.59 & 138.32 & 56.4 & 262.4 & 75.94\% & 0.00\% & 0.96\% & 0.91\% \\\cline{2-15}
& \multirow{4}{*}{5} & M & 228.6722 & 221.514 & 226.7708 & 223.3972 & 637.69 & $>3600$ & 689.41 & $>3600$ & 67.96\% & -0.85\% & 1.49\% & 2.31\% \\
& & C & 29.0877 & 23.0926 & 22.0907 & 21.5931 & 0.55 & 1.68 & 15.67 & 46.04 & 71.11\% & 6.49\% & 2.25\% & 25.77\% \\
& & K & 171.2099 & 168.7613 & 167.6214 & 166.7242 & 21.55 & 163.7 & 1025.78 & $>3600$ & 64.07\% & 1.21\% & 0.54\% & 2.62\% \\
& & D & 129.7708 & 125.1129 & 125.4487 & 127.2981 & 266.3 & 600.94 & 641.76 & $>3600$ & 69.92\% & -1.75\% & -1.47\% & 1.91\% \\\cline{2-15}
& \multirow{4}{*}{10} & M & 153.013 & 153.013 & 155.285 & 151.5515 & 3600.1 & $>3600$ & 3208.72 & $>3600$ & 76.33\% & 0.96\% & 2.40\% & 0.96\% \\
& & C & 44.5115 & 35.0981 & 29.8835 & 29.842 & 0.66 & 16.66 & 46.47 & 53.42 & 74.23\% & 14.98\% & 0.14\% & 32.96\% \\
& & K & 132.6658 & 132.6658 & 131.9971 & 142.0924 & $>3600$ & $>3600$ & $>3600$ & $>3600$ & 76.33\% & -7.11\% & -7.65\% & -7.11\% \\
& & D & 100.474 & 100.474 & 97.6098 & 108.2469 & $>3600$ & $>3600$ & 3215.14 & $>3600$ & 76.33\% & -7.74\% & -10.90\% & -7.74\% \\\hline
\multirow{12}{*}{SC3} & \multirow{4}{*}{2} & M & 404.466 & 394.9373 & 398.7388 & 395.7311 & 2.94 & 108.55 & 49.53 & $>3600$ & 62.96\% & -0.20\% & 0.75\% & 2.16\% \\
& & C & 28.0977 & 24.1051 & 24.1924 & 24.1051 & 0.11 & 0.66 & 6.03 & 10.05 & 74.69\% & 0.00\% & 0.36\% & 14.21\% \\
& & K & 280.5643 & 278.0229 & 278.9015 & 278.0228 & 1.42 & 32.54 & 41.81 & 1455.24 & 52.65\% & 0.00\% & 0.32\% & 0.91\% \\
& & D & 216.5167 & 210.8067 & 214.7534 & 210.6248 & 2.97 & 74.29 & 83.65 & 1488.5 & 72.21\% & 0.09\% & 1.92\% & 2.72\% \\\cline{2-15}
& \multirow{4}{*}{5} & M & 232.5595 & 223.4184 & 235.2989 & 240.8416 & 915.52 & $>3600$ & 702.09 & $>3600$ & 60.38\% & -7.80\% & -2.36\% & -3.56\% \\
& & C & 34.7835 & 28.4267 & 27.4501 & 26.5732 & 0.23 & 3.57 & 13.95 & 36.93 & 64.23\% & 6.52\% & 3.19\% & 23.60\% \\
& & K & 183.8955 & 171.2615 & 178.0249 & 173.4423 & 128.12 & 1912.6 & 1310.85 & $>3600$ & 54.76\% & -1.27\% & 2.57\% & 5.68\% \\
& & D & 134.1824 & 133.0089 & 132.521 & 138.3565 & 1069.6 & $>3600$ & 1206.69 & $>3600$ & 57.96\% & -4.02\% & -4.40\% & -3.11\% \\\cline{2-15}
& \multirow{4}{*}{10} & M & 167.5751 & 167.5751 & 171.1057 & 177.7685 & $>3600$ & $>3600$ & 3208.25 & $>3600$ & 70.41\% & -6.08\% & -3.89\% & -6.08\% \\
& & C & 50.7627 & 47.0082 & 39.3866 & 38.5431 & 1.1 & 4.02 & 25.13 & 32.2 & 70.00\% & 18.01\% & 2.14\% & 24.07\% \\
& & K & 144.9262 & 144.9262 & 146.6831 & 161.303 & $>3600$ & $>3600$ & 2336.69 & $>3600$ & 70.41\% & -11.30\% & -9.97\% & -11.30\% \\
& & D & 110.2827 & 110.2827 & 110.5119 & 116.5976 & $>3600$ & $>3600$ & 3214.45 & $>3600$ & 70.41\% & -5.73\% & -5.51\% & -5.73\% \\\hline
\end{tabular}
\caption{Results of Math-Heuristic Approaches and \eqref{domp4} in the US dataset. \label{res:heuristics}}}
\end{table}
\end{landscape}
One can observe from the results that the CPU times needed to run the math-heuristic approaches are much smaller than those needed to solve the OMPN problem with the MINLP formulation. In those cases in which all the approaches were able to solve the problem within the time limit of one hour, the highest deviation with respect to the optimal solutions was $15\%$ for the first heuristic and $3.2\%$ for the second one. In those cases in which the exact approach was not able to certify optimality in one hour, the math-heuristic approaches found a better solution for the problem. In the first heuristic, we apply the variable-fixing strategy each time \eqref{cost:J} is solved. The average percentage of binary variables that are fixed with this strategy is at least $84\%$ for scenario SC1, $75\%$ for SC2 and $52\%$ for SC3. Observe also that the initial solution, based on fixing the open facilities to the solution of the DOMP problem and then computing the location within the neighborhoods and the allocation of the customers according to these positions, is in some cases far from being a close-to-optimal choice, with percentage deviations of $33\%$ in some cases.
Note that the two math-heuristic approaches are still rather time consuming. One must not forget that both proposed approaches are based on solving mixed-integer nonlinear programming problems, which are known to be NP-hard. The advantage of the two approaches is that they provide good-quality solutions in the first iterations, which are competitive with the exact solutions (in terms of gap).
In Figure \ref{fig:solC5} we show the best solutions obtained for the test problem with $p=5$ under the center objective function for scenario SC1. The initial solution for this problem is drawn in Figure \ref{fig:solC05} (the solutions for $p=2, 5, 10$ can be found in \href{http://bit.ly/resultsDOMPN}{bit.ly/resultsDOMPN}). The reader can observe that \textit{small} modifications of the coordinates of the potential facilities (through neighborhoods) may produce different location-allocation solutions.
\input{usr1-solC.tex}
\input{usr1-solC0.tex}
\section{Conclusions}
\label{sec:conclusions}
A unified version of the classical $p$-median problem is introduced in this paper which includes as particular cases the discrete and the continuous $p$-median and $p$-center problems. The problem considers that each facility can be located not only in the exact given position but in a neighborhood around it. Also, ordered median objective functions are modeled for the problem. Several mathematical programming formulations are proposed based on formulations for the discrete ordered median problem obtained from different sources.
Two location-allocation approaches for solving the problem are also presented. Although the optimization problems that need to be solved are still NP-hard, their reduced dimensionality allows us to provide good-quality solutions in more reasonable times.
Several extensions are possible within this new framework. The first is the development of decomposition approaches for solving the OMPN. Lagrangean decomposition (relaxing the ordering constraints) combined with Benders decomposition (to \textit{separate} the discrete and the continuous decisions) may produce exact solutions in better CPU times. On the other hand, although we analyze the ordered $p$-median problem, the results in this paper can be extended to other discrete location problems. For instance, capacitated \cite{P_ORP08} or multiperiod \cite{AFHP_CORS09,NS_chapter2015} location problems can be embedded into the neighborhoods framework. Another interesting related problem, which is left for further research, is the consideration of location-routing problems with neighborhoods. That problem would involve not only the discrete facility location problem with neighborhoods but also the TSP with neighborhoods, so the combination of the methods proposed in this paper with those provided in \cite{GMS_OMS13} may be applicable. Also, the case in which the neighborhood of each facility is the union of convex sets would be an interesting next step within this framework; in particular, it would model the case in which two facilities may belong to the same neighborhood. The extended MINLP formulations for such a problem would become disjunctive MINLPs, for which some techniques are available in the literature. Another extension of the version introduced in this paper is the one in which $k_j$ facilities are allowed to be located in the $j$-th neighborhood to allocate the demand points. In such a case, a nested multifacility $p$-median problem has to be considered, for which more sophisticated strategies should be developed to solve even small-size instances.
\section*{Acknowledgements}
The author was partially supported by project MTM2016-74983-C2-1-R (MINECO, Spain), the research group SEJ-534 (Junta de Andaluc\'ia) and the research project PP2016-PIP06 (Universidad de Granada).
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
\IEEEPARstart{A}{erial} surveys are used to measure gamma-ray emission over large areas, with applications in geological exploration, radiation protection, mapping distributed contamination, and finding radiological sources outside of regulatory control~\cite{moxham_airborne_1960, galbraith_rock_1983, sanderson_aerial_1991, sanada_aerial_2015}.
Commonly used analysis techniques utilize the counts in certain spectral energy windows to measure and map background levels or to detect sources, and these approaches generally trade statistics for specificity to particular radioactive isotopes~\cite{grasty_uranium_1975, grasty_analysis_1985, detwiler_spectral_2015}.
Other techniques have been developed to leverage the information contained across the entire spectrum, thus gaining statistics but potentially losing specificity~\cite{crossley_inversion_1982, minty_airborne_1992, minty_multichannel_1998, aage_new_1999}.
Though potentially more powerful than windowed approaches, these techniques require detailed modeling of the different background components in order to compensate for the loss of specificity.
Here we will review current full-spectrum approaches and present a new approach using Non-negative Matrix Factorization (NMF) that, due to the reformulation of the problem, provides new insights into airborne survey measurements.
\subsection{Aerial gamma-ray background}
The natural backgrounds encountered by airborne systems are a combination of terrestrial sources and non-terrestrial sources (e.g.,~\cite{minty_fundamentals_1997}).
The terrestrial sources are typically a combination of \isot{K}{40}, \isot{U}{238}/\isot{U}{235} decay series isotopes, and \isot{Th}{232} decay series isotopes, collectively denoted KUT backgrounds.
Emission from terrestrial sources can be classified into two types: \textit{direct} emission, consisting of unscattered and scattered photons that are incident on the detector from below, and \textit{skyshine}, consisting of photons that have scattered in the air above the detector that are incident from above~\cite{minty_multichannel_1998}.
The non-terrestrial backgrounds are radon gas and its radioactive progeny, the effects of cosmic rays, and KUT backgrounds from the aircraft itself.
The radon component consists primarily of \isot{Rn}{222} progeny, and the cosmic component contains \(511\)~keV emission from cosmogenic positron production and a cosmogenic power-law continuum~\cite{sandness_accurate_2009, mitchell_skyshine_2009}.
An illustrative study of the full-spectrum background components for a stationary ground-based detector was undertaken in refs.~\cite{sandness_accurate_2009, mitchell_skyshine_2009}.
In that study the authors empirically separate terrestrial from non-terrestrial backgrounds, isolate skyshine from other terrestrial emission, and generate models that correctly reproduce each component.
Fully modeling the background required Monte Carlo simulations with terrestrial emission up to at least \(300\)~m away from the detector and with \(300\)~m of atmosphere overhead to produce the correct amount of skyshine, showing the complexity of the problem for even a static detector on the ground.
Other ground-based studies have shown the influence of skyshine at low energies even for distant sources~\cite{beck_radiation_1968, nason_benchmark_1981}.
\subsection{Background models}
Before going any further, we note that the term \textit{spectrum model} will carry a dual meaning in this work.
In the first sense, a spectrum model is a Monte Carlo physics simulation that reproduces the components of a measured spectrum.
In the second sense, a spectrum model is any mathematical decomposition of a measured spectrum, regardless of physical meaning of the components.
Both models provide a numerical description of the data, with the former providing a physical understanding and the latter providing a mathematical understanding.
An ideal method would include aspects of both kinds of models in one physical and mathematical description.
For a mathematical model of gamma-ray background to be consistent with physics (e.g., the additive nature of photons), the measured spectra should be expressed as a linear sum of spectral components.
For this work, we begin with a measured dataset \(\mathbf{X}\), which is an \(n \times d\) matrix where each of the \(n\)~rows is a measured spectrum with \(d\)~bins.
We consider linear decompositions of \(\mathbf{X}\) of the form
\begin{equation}\label{eq:linear_decomp}
\mathbf{X} \approx \mathbf{\hat{X}} = \mathbf{A} \mathbf{V},
\end{equation}
where each row of the \(n \times d\) matrix \(\mathbf{\hat{X}}\) is a model-estimated spectrum, \(\mathbf{A}\) is an \(n \times k\) matrix of weights, \(\mathbf{V}\) is a \(k \times d\) matrix whose rows are the component spectra, and \(k \leq d\) is the number of components.
In nearly all cases, \fref{eq:linear_decomp} is an ill-posed problem with many potential solutions that depend on the choice of \(k\) and desired constraints on \(\mathbf{A}\) and \(\mathbf{V}\).
Here we will review some of the approaches for finding solutions, along with their advantages and disadvantages.
\subsection{Full Spectrum Analysis}
Full Spectrum Analysis (FSA) of gamma-ray spectra is one approach that has been studied for aerial and borehole detector systems~\cite{crossley_inversion_1982, minty_airborne_1992, minty_multichannel_1998, smith_multi-function_1983, hendriks_full-spectrum_2001}.
FSA aims to express a measured spectrum as a linear combination of component spectra that are predetermined through simulations and calibration measurements.
For example, component spectra for the KUT backgrounds have been developed to determine the isotopic compositions of the ground beneath an aircraft~\cite{hendriks_full-spectrum_2001}.
The \(k\) spectral components make up the rows of \(\mathbf{V}\), and the weights \(\mathbf{A}\) are typically calculated by minimizing the sum of the squared differences between \(\mathbf{X}\) and \(\mathbf{\hat{X}}\). Since least squares can admit non-physical (i.e., negative) solutions to~\fref{eq:linear_decomp}, using non-negative least squares has been suggested to enforce the non-negativity of \(\mathbf{A}\)~\cite{caciolli_new_2012}.
Another desirable property for a spectrum model is to preserve the number of counts in each spectrum, which least squares minimization alone does not enforce, but this constraint can be included in the minimization~\cite{caciolli_new_2012}.
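As a minimal illustration of FSA with non-negative weights, the following Python sketch fits one spectrum with SciPy's NNLS; the final rescaling is a crude way to preserve counts and is a simplification of the constrained minimization proposed in ref.~\cite{caciolli_new_2012}.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fsa_weights(V, x):
    # V: (k, d) known component spectra; x: (d,) measured spectrum.
    a, _ = nnls(V.T, x)                       # min ||V^T a - x||, a >= 0
    a *= x.sum() / max((a @ V).sum(), 1e-12)  # crude count preservation
    return a
\end{verbatim}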
One challenge of FSA is that the decomposition can be poor if the component spectra are not correct, so care must be taken to ensure their accuracy.
The component spectra are usually determined by using a combination of Monte Carlo simulations and calibration measurements, for example by measuring slabs of material with known KUT concentrations underneath wooden boards to simulate direct terrestrial emission (e.g.,~\cite{dickson_utilizing_1981}).
However, such measurements are unable to reproduce the effects of skyshine, which is a major terrestrial contribution to airborne spectra below \(400\)~keV~\cite{minty_multichannel_1998}.
\subsection{Noise-Adjusted Singular Value Decomposition}
Other full-spectrum modeling techniques have focused on using mathematical methods to discover and utilize structure found within measured data without requiring a physical understanding.
One commonly used method is Noise-Adjusted Singular Value Decomposition (NASVD), which was developed to remove statistical noise from aerial gamma-ray measurements by attempting to separate true spectral variations from statistical noise~\cite{hovgaard_new_1997, hovgaard_reducing_1997, minty_improved_1998}.
NASVD improved upon similar work using Principal Component Analysis (PCA)~\cite{dickson_utilizing_1981}.
NASVD has been used to map low levels of contamination~\cite{aage_new_1999} and has recently been applied to the problem of estimating real-time backgrounds for aerial surveys~\cite{kulisek_real-time_2015}.
NASVD decomposes measured spectra into principal components and then keeps only those components which represent true spectral variability.
Although NASVD is typically used to smooth spectra before further processing, isotope-specific signatures in the first few components have led some to comment on the possible associations between NASVD components and KUT and radon backgrounds (e.g.,~\cite{hovgaard_reducing_1997}).
However, unlike FSA, all but the first component contain negative values, so this method permits solutions to~\fref{eq:linear_decomp} that have negative photon fluxes, which are non-physical.
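One common formulation of NASVD scales each spectral channel by the square root of its mean value before truncating the SVD; the NumPy sketch below follows this formulation, though details vary between the cited implementations.
\begin{verbatim}
import numpy as np

def nasvd_smooth(X, k):
    # X: (n, d) array of measured spectra; keep the top k components.
    s = np.sqrt(np.maximum(X.mean(axis=0), 1e-12))
    W = X / s                                  # noise-adjusted spectra
    U, sv, Vt = np.linalg.svd(W, full_matrices=False)
    Wk = (U[:, :k] * sv[:k]) @ Vt[:k, :]       # rank-k reconstruction
    return Wk * s                              # undo the noise adjustment
\end{verbatim}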
\subsection{Non-negative Matrix Factorization}
Recently Non-negative Matrix Factorization (NMF)~\cite{paatero_positive_1994, lee_learning_1999} was introduced as a technique for performing dimensionality reduction of gamma-ray background data~\cite{bilton_non-negative_2019} and for gamma-ray source identification~\cite{koudelka_modular_2016}.
NMF has already been used for spectral deconvolution in other fields of spectroscopy (e.g.,~\cite{sajda_nonnegative_2004, liou_unsupervised_2005, pauca_nonnegative_2006}).
NMF is a method of solving~\fref{eq:linear_decomp} such that both \(\mathbf{A}\) and \(\mathbf{V}\) are constrained to be non-negative.
NMF can be thought of as a bridge between FSA and NASVD --- like NASVD, the NMF components are determined from the measurements themselves, and like FSA, the NMF components are compatible with physics and thus are capable (although not guaranteed) of describing actual background emissions.
Indeed recent analysis of data from a vehicle-borne system has found correlations between NMF-derived background components and features derived from panoramic images of the scene around the vehicle~\cite{bandstra_attribution_2018}.
Other work has shown NMF to have advantages over other methods for spectral anomaly detection and identification~\cite{bilton_non-negative_2019}.
In this work NMF is applied to aerial survey data and the resulting components are interpreted as known background sources, thus connecting a purely mathematical model of the spectra with a physical model.
We start in \Fref{sec:lakemohave} by analyzing a survey containing passes over a land-water interface at low altitude to demonstrate the basic method.
NMF components are compared to a full Monte Carlo model in \Fref{sec:lakemohave_modeling} and \Fref{sec:lakemohave_model_results}, and then compared with NASVD components in \Fref{sec:nasvd}.
More complex datasets are analyzed in \Fref{sec:bayarea}, showing the potentially wide applicability of the method.
Finally, the implications for mapping, anomaly detection, and data fusion using NMF are discussed in \Fref{sec:discuss}.
\section{Analysis of a land-water interface}\label{sec:lakemohave}
The data analyzed here were taken on 14~February~2012 at Lake Mohave on the Nevada-Arizona border.
The detector system was flown on a helicopter and consisted of four Radiation Solutions Inc.~RSX-3 units, totaling twelve \(2 \times 4 \times 16\)-inch NaI(Tl) detectors~\cite{radiation_solutions_inc._airborne_2019}.
Photon events were recorded by the data acquisition system in \(3\)~keV bins from~\(0\) to \(3072\)~keV, and for this analysis an energy cut of \(30-3060\)~keV was used.
Events were rebinned into \(d = 200\)~non-uniform bins with bin widths approximately proportional to the square root of energy in order to approximate the detector energy resolution, decrease the sparsity of the spectral counts, and reduce the computation time needed for NMF\@.
The time range of interest is 12:12:15 to 12:32:05 local time for a total of \(n = 2381\)~samples at a rate of \(2\)~Hz.
During the collection, the aircraft made ten passes over a land-water interface at a nominal altitude of \(100\)~ft above ground level (\Fref{fig:lakemohave_map}).
\fboxsep=0pt
\fboxrule=1pt
\begin{figure}[t!]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\columnwidth]{ams_lakemohave_20120214_030m_0200b_c3/map}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[anchor=south west,inner sep=0] (image) at (0.19,0.13) {\fcolorbox{green}{green}{\includegraphics[width=0.25\columnwidth]{ams_lakemohave_20120214_030m_0200b_c3/map_inset}}};
\end{scope}
\end{tikzpicture}
\caption{The helicopter flight path of the Lake Mohave dataset showing repeated land-water crossings.
The inset shows a magnified view of the shoreline where the land-water crossings were made.
(Map imagery: Google.)\label{fig:lakemohave_map}}
\end{figure}
Because of the repeated passes over the land-water interface, this dataset isolates a special case of background variation often encountered in aerial surveys.
Over water and far from land, the system essentially measures only radon and cosmic backgrounds, plus any aircraft background, while over land it measures a combination of all background sources.
Using this dataset, we are able to investigate the major features of the background and match these features to possible source terms.
\subsection{NMF decomposition method}\label{sec:lakemohave_decomp}
NMF reduces the dimensionality of data by solving~\fref{eq:linear_decomp} subject to the constraint that all of the matrices are non-negative.
Essentially, NMF decomposes each spectrum into a linear combination of a pre-specified (and in our case, small) number of constituent spectra using non-negative coefficients.
In reality, the detection of photon events by a detector should be consistent with this formulation since photons themselves are non-negative and, excepting the effects of detector dead time, detected photon events sum together linearly.
Interpreting the estimate \(\mathbf{\hat{X}} = \mathbf{A} \mathbf{V}\) as the mean of the Poisson-distributed photon event counts \(\mathbf{X}\), the objective function to minimize is the Poisson negative log likelihood
\begin{equation}\label{eq:loss}
\Lambda(\mathbf{X} | \mathbf{\hat{X}}) = \sum_i \sum_j \left(\hat{X}_{ij} - X_{ij} \log(\hat{X}_{ij}) + \log X_{ij}!\right).
\end{equation}
There is no closed form solution to this problem so an iterative approach must be taken.
We use the maximum likelihood update rules given in ref.~\cite{lee_learning_1999} to perform the optimization over both \(\mathbf{A}\) and \(\mathbf{V}\), which will be referred to as \textit{training} the NMF model.
During the training, the rows of \(\mathbf{V}\) are constrained to sum to unity so that the weights \(\mathbf{A}\) will be the integrated number of photon counting events attributed to each component.
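For concreteness, a NumPy sketch of the multiplicative updates of ref.~\cite{lee_learning_1999} for the objective \eqref{eq:loss} is shown below, with the unit-sum constraint on the rows of \(\mathbf{V}\) enforced by rescaling after each iteration. Random initialization is used here for brevity; the initialization actually used is described later in this section.
\begin{verbatim}
import numpy as np

def nmf_poisson(X, k, n_iter=2000, seed=0, eps=1e-12):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = rng.uniform(0.1, 1.0, (n, k))
    V = rng.uniform(0.1, 1.0, (k, d))
    for _ in range(n_iter):
        R = X / np.maximum(A @ V, eps)
        A *= (R @ V.T) / np.maximum(V.sum(axis=1), eps)
        R = X / np.maximum(A @ V, eps)
        V *= (A.T @ R) / np.maximum(A.sum(axis=0)[:, None], eps)
        norm = V.sum(axis=1, keepdims=True)  # keep rows of V unit-sum
        V /= np.maximum(norm, eps)
        A *= norm.T                          # compensate so A @ V is fixed
    return A, V
\end{verbatim}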
Two caveats to NMF are that the decomposition may not be unique and the decomposition may only find a local minimum.
If there exists a \(k \times k\) invertible matrix \(\mathbf{D}\) such that \( \mathbf{V'} \equiv \mathbf{D} \mathbf{V}\) and \( \mathbf{A'} \equiv \mathbf{A} \mathbf{D}^{-1}\) are both non-negative, then \(\mathbf{A'} \mathbf{V'} = \mathbf{A} \mathbf{V}\) is an equivalent decomposition~\cite{paatero_positive_1994}.
(Note that \(\mathbf{D}\) itself need not be non-negative.)
A trivial solution for \(\mathbf{D}\) is a permutation matrix, which underscores the fact that there is no preferred ordering of NMF components, unlike in PCA or SVD\@.
The introduction of additional constraints on \(\mathbf{A}\) and \(\mathbf{V}\) in the form of regularization terms helps to break the symmetry and guide the solutions to have desired properties~\cite{pauca_nonnegative_2006, yu_minimum_2007}, though no constraints are added here.
Since the ordering of components is arbitrary, we chose to sort the components in decreasing order of the variance of their weights, so that component~0 has the highest variability, component~1 has less variability, etc.
This ordering is analogous to the ordering that results from PCA or NASVD\@.
The choice of initialization of \(\mathbf{A}\) and \(\mathbf{V}\) can strongly influence the particular solution that the minimization algorithm finds, since NMF is a non-convex optimization problem and many optimization techniques will only find the nearest local minimum~\cite{lee_algorithms_2001}.
Many random initializations of \(\mathbf{A}\) and \(\mathbf{V}\) were tried at first, and although some physical features (that will be discussed later) were apparent, these components were often not smooth and therefore unlikely to be comparable with the spectra of different background sources.
For the sake of repeatability, and also because this initialization has generally led to smooth spectral components, we performed the following initialization.
We made an ansatz that NMF might at least separate the over-water background from the terrestrial background, and to estimate the over-water background we chose all spectra with total counts less than an arbitrary threshold of~\(1.05\) times the minimum.
The average total counts of this set were used to initialize one column of \(\mathbf{A}\), and the average unit-normalized spectrum was used to initialize the corresponding row of \(\mathbf{V}\).
The residual total counts were split evenly among the remaining \(k - 1\) components, and the corresponding rows of \(\mathbf{V}\) were initialized to the average unit-normalized spectrum of the residual spectra.
This initialization means that the remaining \(k - 1\) components would be initialized identically, which would result in mathematically identical treatment during the optimization process.
To distinguish these \(k - 1\) components from each other, small positive random numbers less than~\(10^{-6}\) were added to the remaining \(k - 1\) components of \(\mathbf{V}\).
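A NumPy sketch of this initialization is given below; the residual spectrum is approximated here from the mean spectrum, and minor details may differ from our implementation.
\begin{verbatim}
import numpy as np

def init_overwater(X, k, thresh=1.05, jitter=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = X.sum(axis=1)
    low = counts <= thresh * counts.min()     # "over-water" spectra
    c_low = counts[low].mean()
    v0 = X[low].mean(axis=0)
    v0 /= v0.sum()                            # over-water component
    resid = np.maximum(X.mean(axis=0) - c_low * v0, 0.0)
    vr = resid / resid.sum()                  # residual spectrum shape
    V = np.vstack([v0] + [vr + rng.uniform(0, jitter, d)
                          for _ in range(k - 1)])
    A = np.column_stack([np.full(n, c_low)] +
                        [(counts - c_low).clip(min=0) / (k - 1)] * (k - 1))
    return A, V
\end{verbatim}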
As mentioned above, NMF differs from SVD in that the number of spectral components \(k\) must be chosen before calculating the decomposition.
NMF models with \(k=1\)~to~\(7\) components were fit to the Lake Mohave dataset, stopping the iterations after the change in \(\Lambda / n\) became less than an arbitrarily chosen threshold of~\(10^{-9}\).
To select the optimal number of components for a particular dataset, the Akaike Information Criterion (AIC)~\cite{akaike_new_1974} was used, as in related work~\cite{bilton_non-negative_2019}.
The AIC balances the likelihood of a model with the tendency to overfit as the number of model parameters is increased.
We find that for the dataset under examination here, three components is the optimal number according to AIC (\Fref{fig:aic}).
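The AIC computation can be sketched as follows. The parameter count used here is one reasonable convention (the unit-sum rows of \(\mathbf{V}\) remove \(k\) degrees of freedom), and the constant \(\log X_{ij}!\) terms are dropped since they do not affect model ranking.
\begin{verbatim}
import numpy as np

def poisson_nll(X, Xhat, eps=1e-12):
    # Negative log likelihood up to terms constant in the model.
    return float(np.sum(Xhat - X * np.log(np.maximum(Xhat, eps))))

def nmf_aic(X, A, V):
    n, d = X.shape
    k = V.shape[0]
    n_params = n * k + k * (d - 1)
    return 2.0 * poisson_nll(X, A @ V) + 2.0 * n_params
\end{verbatim}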
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{model_select_aic_lakemohave}
\caption{Akaike Information Criterion (AIC) for NMF models with different numbers of components for the Lake Mohave dataset.
The model with three components has the best (lowest) AIC\@.
The line connecting the points has been added to guide the eye.\label{fig:aic}}
\end{figure}
\subsection{NMF results}
The initialized set of components for the three-component NMF decomposition and the resulting components after NMF training are shown in the upper pane of \Fref{fig:lakemohave_decomp}, along with the weights in the lower pane.
A comparison between the initialized components and the final components reveals that besides the engineered feature of an over-water component (component~2), the two other components have diverged significantly in shape from each other.
Some physical features are immediately apparent.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/nmf_comp_spectra_init}
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/nmf_comp_spectra}\\
\includegraphics[width=0.85\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/nmf_comp_weights}
\caption{NMF components and weights for a three-component decomposition of the Lake Mohave dataset.
The two top panels show the components initialized according to the procedure in the text (top left; components~0 and~1 are nearly identical) and the final components after optimization (top right).
The bottom panel shows the gross count rate and NMF component weights.
The components in the top plots are the rows of \(\mathbf{V}\) after dividing by bin widths and time, and then multiplying by their average weight to show their scale relative to the mean spectrum and each other.
The component weights are simply the columns of \(\mathbf{A}\).\label{fig:lakemohave_decomp}}
\end{figure*}
Component~0 is characterized by prominent \(1460.8\) and \(2614.5\)~keV lines from the decay of \isot{K}{40} and \isot{Tl}{208} (\isot{Th}{232} series), respectively, and several weaker lines are apparent: \(238.6\), \(338.3\), \(583.2\), \(911.2\), and \(969.0\)~keV, all of which are from the \isot{Th}{232} decay series.
This evidence points to nearby terrestrial emission as a possible source for component~0 events.
Component~1, on the other hand, is characterized by a high rise at the low energy end of the spectrum, peaking below \(100\)~keV, as well as weak line features --- the only lines clearly visible are the \(1460.8\)~keV and \(2614.5\)~keV lines.
These features suggest that component~1 contains distant emission sources with appreciable attenuation and scattering.
The sharp rise in the continuum below \(400\)~keV is also suggestive of skyshine, the phenomenon where terrestrial emission is scattered down from the atmosphere above the detectors~\cite{hovgaard_reducing_1997}.
Component~2 contains the major \isot{Rn}{222}-series lines at \(242.0\), \(295.2\), \(351.9\), \(609.3\), \(1120.3\), and \(1764.5\)~keV as well as the positron annihilation line at \(511\)~keV\@.
It is the dominant component above \(2800\)~keV, where a continuum of photons due to cosmic-ray interactions makes up the majority of the background~\cite{sandness_accurate_2009}.
Thus component~2 seems to contain radon and cosmic emission.
The region around \(1460.8\)~keV displays an artifact from the strong \isot{K}{40} line feature in both of the other components.
The lower pane of \Fref{fig:lakemohave_decomp} shows the fitted weights for the three components during the entire dataset, and \Fref{fig:weights_interface} shows the same but only during the crossings of the land-water interface.
The behavior of the weights at the interface bolsters the previously mentioned spectroscopic interpretation of the NMF components.
As the aircraft approaches land, component~0 rises later and more rapidly than component~1, and component~2 stays relatively constant.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{ams_lakemohave_20120214_030m_0200b_c3/interface_030m}
\caption{NMF component weights during the ten crossings of the land-water interface in the Lake Mohave dataset.
Component~0 rises more rapidly than component~1 when approaching land, while component~2 is approximately constant.\label{fig:weights_interface}}
\end{figure}
\subsection{Monte Carlo modeling of spectral components}\label{sec:lakemohave_modeling}
In order to quantitatively understand the spectral shapes of the NMF components, a Monte Carlo-based model of the detector system was used to generate spectral shapes for the postulated sources of background.
As in ref.~\cite{sandness_accurate_2009}, Monte Carlo simulations that can accurately model backgrounds require a geometry that extends hundreds of meters in all directions.
The authors of ref.~\cite{sandness_accurate_2009} modeled emission from the ground out to a range of \(300\)~m, and required an atmosphere that extended \(300\)~m above in order to accurately model skyshine from the ground.
Because our detector system is elevated above the ground, we extend the ground emission out to \(610\)~m (2000 ft) and allow air scattering up to a height and radius of \(5000\)~m.
Simulations of this scale are slow, so like ref.~\cite{sandness_accurate_2009} we make simplifying assumptions to break the simulation into smaller, more tractable parts.
For each part of the simulation, a response matrix was generated to relate the ``input'' spectrum to the ``output'' spectrum.
Each element \(R_{ij}\) of a response matrix \(\mathbf{R}\) gives the probability that an event in bin \(j\) of the input spectrum will result in an event in bin \(i\) of the output spectrum, so the output spectrum is found using a matrix dot product:
\begin{equation}
\mathbf{x}^{\mathrm{out}} = \mathbf{R} \mathbf{x}^{\mathrm{in}}.
\end{equation}
For simplicity, the input flux distribution for all response matrices, unless otherwise noted, is proportional to the cosine of the normal to the relevant geometry's surface.
For terrestrial emission, we generated a response matrix \(\mathbf{R}_{\mathrm{terr}}\) for photon emission from within a slab of \(1.5\)~m thick soil by tallying the photon emission through the surface.
The soil composition used is the U.S. average from ref.~\cite{mcconn_jr_compendium_2011}.
The flux distribution that emerges is approximately proportional to the cosine of the normal to the slab surface, as assumed in the response matrices and as seen in ref.~\cite{sandness_accurate_2009}.
To simulate the transport of terrestrial photons through the atmosphere, a larger series of simulations were run using the MEGAlib Geant4-based Monte Carlo code~\cite{zoglauer_megalib_2006, agostinelli_geant4simulation_2003}.
These simulations involved monoenergetic gamma rays emitted from a point on a horizontal plane that represented the ground.
The photons were emitted into the upward direction with an angular distribution proportional to the cosine of the angle relative to the ground normal and were transported through dry sea-level air until absorption or they left a cylindrical volume of \(5000\)~m radius and height.
The tracking of photons that struck the ground surface was terminated.
Within the air, tally volumes defined by concentric cylinders were used to produce a series of track-length flux estimator tallies.
The photon energy and azimuth and elevation angles were also tracked in each tally volume, resulting in \(5^{\circ}\)-spaced angular bins and 31~energy bins consisting of 30 equally spaced down-scatter energy bins and a single full-energy bin.
The response due to a planar source is then inferred by leveraging symmetry and assuming that summing tallies at different radii is equivalent to integrating over the areal extent of the ground source.
These summed per-source photon fluxes (responses) were separately calculated at \(100\)~ft above a uniformly emitting ground plane for four geometric contributions to the overall flux: direct responses consisting of the flux from upward-directed photons emitted within (\(\mathbf{R}_{\mathrm{dir,near}}\)) or beyond (\(\mathbf{R}_{\mathrm{dir,dist}}\)) a ``cutoff'' radius \(r_{\mathrm{cutoff}}\) measured along the ground from the tally position, and skyshine responses consisting of the sum of all flux tally contributions from photons traveling downward emitted within (\(\mathbf{R}_{\mathrm{sky,near}}\)) or beyond (\(\mathbf{R}_{\mathrm{sky,dist}}\)) the cutoff radius.
For cosmic and radon backgrounds we simulated a \(3000 \times 3000 \times 1000\)~m rectangular prism of atmosphere and generated a response matrix for uniformly distributed isotropic emission within it and tallied photons that are incident on a \(1000 \times 1000\)~m window centered on the large face.
This atmospheric emission matrix was denoted \(\mathbf{R}_{\mathrm{atm}}\).
Radon and \(511\)~keV emission were propagated through this response matrix, but the cosmic continuum was not since it is an empirical model based upon detector measurements, not the physical production mechanism.
Finally, to simulate the response of the NaI detectors to all incident background sources, a response matrix \(\mathbf{R}_{\mathrm{abs}}\) was generated for photons incident on the large face of a \(2 \times 4 \times 16\)~inch NaI detector that are absorbed by the detector.
The complete response matrix for nearby emission is therefore
\begin{equation}
\mathbf{R}_{\mathrm{near}} = \mathbf{R}_{\mathrm{abs}} (\mathbf{R}_{\mathrm{dir,near}} + \mathbf{R}_{\mathrm{sky,near}}) \mathbf{R}_{\mathrm{terr}},
\end{equation}
and the matrix for distant emission is
\begin{equation}
\mathbf{R}_{\mathrm{dist}} = \mathbf{R}_{\mathrm{abs}} (\mathbf{R}_{\mathrm{dir,dist}} + \mathbf{R}_{\mathrm{sky,dist}}) \mathbf{R}_{\mathrm{terr}}.
\end{equation}
The complete matrix for radon and \(511\)~keV emission is
\begin{equation}
\mathbf{R}_{\mathrm{Rn+511}} = \mathbf{R}_{\mathrm{abs}} \mathbf{R}_{\mathrm{atm}},
\end{equation}
while no response matrix was applied to the cosmic continuum.
\subsection{Results of Monte Carlo model fits}\label{sec:lakemohave_model_results}
To attribute the modeled background components to the NMF components, the following analysis was performed.
First, the NMF components were multiplied by their average weights to give them an absolute scale of counts per second per keV\@.
This scaling was performed so that when fitting near and distant KUT spectra to the components, not only their shapes but also their relative magnitudes could be constrained.
The variance of each element of this scaled \(\mathbf{V}\) matrix was estimated from the second derivatives of \(\Lambda \) with respect to the elements of \(\mathbf{V}\) (i.e., the Fisher information matrix).
The nine different postulated background components were then calculated for a given value of \(r_{\mathrm{cutoff}}\) and cosmic continuum power-law index:
\begin{itemize}
\item Terrestrial \isot{K}{40} (nearby and distant)
\item Terrestrial \isot{U}{238} series (nearby and distant)
\item Terrestrial \isot{Th}{232} series (nearby and distant)
\item Atmospheric \isot{Rn}{222} series
\item Cosmic continuum
\item Cosmic \(511\)~keV emission
\end{itemize}
For all sources except for the cosmic continuum, the input spectra \(\mathbf{x}^{\mathrm{in}}\) consisted of delta functions for each gamma-ray emission line weighted by branching ratio for the isotope or decay series assuming secular equilibrium.
The input spectrum for the cosmic continuum was modeled as a power law with arbitrary normalization.
All background components were then obtained by computing the dot product with the appropriate response matrix from the previous section.
Finally, all of the background components were convolved with a model of the detector energy resolution to produce realistic line widths.
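For concreteness, the following sketch (Python/NumPy, with hypothetical variable names and a simplified fixed-fraction resolution model standing in for the detector resolution model used here) illustrates how one such background component is assembled:
\begin{verbatim}
import numpy as np

def background_component(bin_centers, R, lines, fwhm_frac=0.07):
    # bin_centers: strictly positive energy bin centers (keV);
    # R: a response matrix (e.g., R_near); lines: [(energy, branching
    # ratio), ...] for the isotope or series (secular equilibrium).
    x_in = np.zeros(len(bin_centers))
    for energy, branching_ratio in lines:
        x_in[np.argmin(np.abs(bin_centers - energy))] += branching_ratio
    # Propagate through the response matrix (transport + absorption).
    x_out = R @ x_in
    # Convolve with a Gaussian resolution model (illustrative only).
    out = np.zeros_like(x_out)
    for j, e in enumerate(bin_centers):
        sigma = fwhm_frac * e / 2.355  # FWHM -> standard deviation
        kernel = np.exp(-0.5 * ((bin_centers - e) / sigma) ** 2)
        out += x_out[j] * kernel / kernel.sum()
    return out
\end{verbatim}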
In order to establish which background components contribute most strongly to each NMF component, a \(\chi^2\) minimization was performed to fit a linear combination of all nine background components simultaneously to the three NMF components, for a total of 27~parameters.
To find the best cutoff radius and cosmic power-law index, the \(\chi^2\) minimization was performed for each value of \(r_{\mathrm{cutoff}}\) in a grid from \(50\)~m to \(150\)~m in \(5\)~m increments, and for values of the power-law index between~\(0.00\) and~\(1.20\) in increments of~\(0.05\).
All coefficients were constrained to be non-negative, and the coefficients between the nearby and distant components of each terrestrial type (e.g., nearby and distant terrestrial \isot{K}{40}) were constrained so that their ratio was within \(20\% \) of the ratio for uniform infinite plane emission.
For all fits, the region below \(200\)~keV was excluded because in that region materials surrounding the detectors begin to strongly influence the shape of the background and those materials have not been modeled here.
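A minimal sketch of this fit for a single grid point, using non-negative least squares (the \(20\%\) near/distant ratio constraint is omitted for brevity, so this only approximates the procedure described above):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_one_grid_point(nmf_comps, nmf_sigmas, bkg_comps, mask):
    # nmf_comps, nmf_sigmas: (3, d) scaled NMF components and their
    # uncertainties; bkg_comps: (9, d) modeled background components
    # for one (r_cutoff, power-law index); mask: bins above 200 keV.
    chi2, coeffs = 0.0, []
    for comp, sig in zip(nmf_comps, nmf_sigmas):
        A = (bkg_comps[:, mask] / sig[mask]).T  # weighted design matrix
        b = comp[mask] / sig[mask]
        c, resid = nnls(A, b)  # non-negative coefficients
        chi2 += resid ** 2
        coeffs.append(c)
    return chi2, np.array(coeffs)
\end{verbatim}
The grid search then repeats this fit for every value of \(r_{\mathrm{cutoff}}\) and power-law index and keeps the combination with the smallest total \(\chi^2\).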
After performing these fits with only the \(20\% \) constraint between near and distant terrestrial emission, the best-fit model was consistent with the basic hypothesis about the nature of the NMF components.
Component~0 was dominated by the nearby KUT background components and \isot{Rn}{222}, which closely resembles the nearby \isot{U}{238} background component.
Component~1 was dominated by the distant KUT components and nearby \isot{U}{238}.
Finally, component~2 was dominated by \isot{Rn}{222}, the cosmic continuum, nearby \isot{U}{238}, and cosmogenic \(511\)~keV\@.
The nearby \isot{U}{238} and \isot{Rn}{222} series spectra are similar in shape, which could explain the appearance of nearby \isot{U}{238} in the fit to NMF component~2.
The best value of \(r_{\mathrm{cutoff}}\) was \(85\)~m, while the best cosmic power-law index was~\(0.55\).
The fits were performed again but with the additional constraints of allowing only nearby KUT to fit component~0, distant KUT for component~1, and radon and cosmics for component~2.
The best overall fit was found when \(r_{\mathrm{cutoff}} = 85\)~m and the cosmic power-law index was~\(0.65\).
The best-fit cosmic power-law index was significantly lower than the value of~\(1.3\) measured in ref.~\cite{sandness_accurate_2009}.
Specific details of the different detectors may account for some of the difference; however, such differences are unlikely to account for all of the discrepancy.
The different elevations above sea level, as well as the altitude above ground level, may also affect the power law observed.
Differences in how radon was modeled could also contribute.
The nature of this discrepancy remains unknown.
\Fref{fig:lakemohave_fits} shows the results of these constrained fits.
The skyshine portion from all terrestrial components was summed together and plotted separately to show the portion of the component that is estimated to come from skyshine.
For component~0, the skyshine contribution makes up a maximum of about \(40\%\) of the spectrum below \(100\)~keV, while for component~1 skyshine makes up two-thirds of events below \(100\)~keV and more than half of all events below \(400\)~keV\@.
Component~2 is dominated by radon, with smaller contributions from the cosmic continuum and \(511\)~keV emission.
The sum of all the weighted NMF components, which is approximately the mean measured spectrum, is also shown in \Fref{fig:lakemohave_fits}, together with the sums of all the fitted background components.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/fit_components_chi2_085m_0.65_0}
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/fit_components_chi2_085m_0.65_1}\\
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/fit_components_chi2_085m_0.65_2}
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/fit_components_chi2_085m_0.65_all}
\caption{Monte Carlo background components fit to the three NMF components (top left, top right, and bottom left) and the sum of all three components (bottom right) from the Lake Mohave dataset.
These fits provide strong evidence for the identification of component~0 with nearby emission, component~1 with distant emission, and component~2 with radon and cosmic emission.
The skyshine contributions from the terrestrial sources are shown separately.\label{fig:lakemohave_fits}}
\end{figure*}
The NMF components and the background model fits are in general agreement and provide strong evidence for the identification of component~0 with nearby emission, component~1 with distant emission, and component~2 with radon and cosmic emission.
\subsection{Comparison to NASVD}\label{sec:nasvd}
Finally, we compare the NMF components to components derived using NASVD\@.
To perform NASVD, one scales each element of \(\mathbf{X}\) to have unit variance and then performs singular value decomposition (SVD) on the resulting matrix~\cite{hovgaard_new_1997}.
To estimate the mean of each bin, NASVD assumes that every spectrum has approximately the same shape as the average spectrum, scaled by its gross counts.
Defining~\(\mathbf{s}\) to be the average of the rows of \(\mathbf{X}\) that is then normalized to sum to~\(1\) (i.e., the average unit-normalized spectrum):
\begin{equation}
s_j \equiv \frac{\sum_i X_{ij}}{\sum_i \sum_j X_{ij}},
\end{equation}
and also defining the sum of the counts in each spectrum (i.e., the gross counts) to be~\(\mathbf{c}\):
\begin{equation}
c_i \equiv \sum_j X_{ij},
\end{equation}
NASVD uses \(c_i s_j\) as an approximation to the mean of measurement \(X_{ij}\) and forms the unit-variance matrix \(\mathbf{X'}\):
\begin{equation}\label{eq:xprime}
\mathbf{X'} \equiv \mathrm{diag}(\mathbf{c})^{-1/2}\, \mathbf{X}\, \mathrm{diag}(\mathbf{s})^{-1/2}.
\end{equation}
Then SVD is used to decompose \(\mathbf{X'}\):
\begin{equation}\label{eq:svd}
\mathbf{X'} = \mathbf{U} \mathbf{\Sigma} \mathbf{W}^T,
\end{equation}
where \(\mathbf{U}\) is an \(n \times n\) unitary matrix, \(\mathbf{\Sigma}\) is an \(n \times d\) diagonal matrix of the singular values, and \(\mathbf{W}\) is a \(d \times d\) unitary matrix.
To remove noise, the highest \(k\) singular values are chosen and the rest are culled to form the \(n \times k\) matrix \(\mathbf{\Sigma}_k\) and the \(d \times k\) matrix \(\mathbf{W}_k\).
Now defining
\begin{eqnarray}
\mathbf{A} &\equiv& \mathrm{diag}(\mathbf{c})^{1/2} \mathbf{U} \mathbf{\Sigma}_k \\
\mathbf{V} &\equiv& \mathbf{W}_k^T \mathrm{diag}(\mathbf{s})^{1/2}
\end{eqnarray}
we obtain the NASVD solution to~\fref{eq:linear_decomp}.
A consequence of NASVD is that the first column of \(\mathbf{U} \mathbf{\Sigma}_k\) is~\(\sqrt{\mathbf{c}}\) and the first column of \(\mathbf{W}_k\) is~\(\sqrt{\mathbf{s}}\), so therefore the first column of \(\mathbf{A}\) is~\(\mathbf{c}\) and the first row of \(\mathbf{V}\) is~\(\mathbf{s}\)~\cite{hovgaard_airborne_1998}.
The remaining components are additive perturbations to the mean spectrum in order of decreasing variance, and the row-wise sums of \(\mathbf{V}\) beyond the first component are zero.
(This latter fact is a result of the orthogonality of \(\mathbf{W}_k\).
Letting \(\mathbf{w}_j\) be the \(j\)th column of \(\mathbf{W}_k\), then since the first column of \(\mathbf{W}_k\) is \(\mathbf{w}_0 = \sqrt{\mathbf{s}}\), the sum of the \(j\)th row of \(\mathbf{V}\) is \(\mathbf{w}_j \cdot \sqrt{\mathbf{s}} = \mathbf{w}_j \cdot \mathbf{w}_0 = \delta_{0j}\).)
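For reference, the whole construction takes only a few lines of numerical code; a minimal sketch in Python/NumPy (function name ours):
\begin{verbatim}
import numpy as np

def nasvd(X, k):
    # X: (n, d) array of measured spectra (counts per bin).
    c = X.sum(axis=1)            # gross counts of each spectrum
    s = X.sum(axis=0) / X.sum()  # mean unit-normalized spectrum
    Xp = X / np.sqrt(np.outer(c, s))  # approximately unit variance
    U, sig, Wt = np.linalg.svd(Xp, full_matrices=False)
    A = np.sqrt(c)[:, None] * (U[:, :k] * sig[:k])  # (n, k) weights
    V = Wt[:k] * np.sqrt(s)[None, :]                # (k, d) components
    return A, V
\end{verbatim}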
\Fref{fig:lakemohave_nasvd} shows the results of an NASVD decomposition for the Lake Mohave dataset.
Components~0--2 account for over \(95\%\) of the variance, while components~3 and higher have much smaller variances than the first three and do not show coherent spectral features.
Components~1 and 2 clearly display features that can be associated with spectroscopic phenomena, such as photopeaks, but because they take on negative values, each component cannot be the direct result of background source emission.
For example, increasing the weight of component~1 decreases the relative contribution of radon and cosmics while increasing the contribution of nearby \isot{K}{40} and \isot{Tl}{208} emission.
Likewise, increasing the weight of component~2 increases the contribution from distant emission relative to nearby emission.
As a further comparison to the NMF decomposition, \Fref{fig:weights_interface_nasvd} shows the weights of the first three NASVD components during the ten crossings of the land-water interface and is the equivalent of \Fref{fig:weights_interface}.
At the interface, the weights for components~1 and 2 change sign to account for the rapidly changing shape of the spectrum in that region.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.42\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/nasvd_comp_spectra}\\
\includegraphics[width=0.85\textwidth]{ams_lakemohave_20120214_030m_0200b_c3/nasvd_comp_weights}
\caption{NASVD components for the Lake Mohave data (top) and component weights (bottom).
Spectral components have been multiplied by their maximum weight to show their scale relative to each other.
In the bottom plot, the component~0 weights have been scaled by a factor of~\(0.1\) for ease of comparison to the others.
The first NASVD component is always the average spectral shape and its weights are the gross counts, while the following components represent variations from the average shape.
Components~0--2 account for over \(95\%\) of the variance of the dataset while components~3 and higher largely contain statistical noise.\label{fig:lakemohave_nasvd}}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{ams_lakemohave_20120214_030m_0200b_c3/interface_nasvd_030m}
\caption{Weights of the first three NASVD components during the ten crossings of the land-water interface in the Lake Mohave dataset (cf. the NMF weights in~\Fref{fig:weights_interface}).
The weights for component~0 are the gross counts, while components~1 and 2 change the relative importance of radon and cosmics and distant emission, respectively.
The component~0 weights have been scaled by a factor of~\(0.1\) for ease of comparison to the others.\label{fig:weights_interface_nasvd}}
\end{figure}
In order to quantitatively compare the abilities of NASVD and NMF to represent the same spectra, we first note that the gross counts of both NASVD- and NMF-reconstructed spectra exactly match the gross counts of the measured spectra.
For NASVD, this property follows from the fact that the first column of \(\mathbf{A}\) is the gross counts \(\mathbf{c}\), and the sum of the \(j\)th row of \(\mathbf{V}\) is \(\delta_{0j}\).
For NMF, this property follows from the fact that when maximizing Poisson likelihood, any model that has an effective scale factor will be fit so that the sum of the model matches the measured gross counts.
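To see why the Poisson fit reproduces the gross counts, suppose the fitted model for spectrum \(i\) is \(\mu_j = a f_j\) with a free overall scale \(a > 0\). The Poisson log-likelihood \(\sum_j \left( X_{ij} \ln(a f_j) - a f_j \right)\) is maximized when its derivative with respect to \(a\) vanishes:
\begin{equation*}
\frac{1}{a}\sum_j X_{ij} - \sum_j f_j = 0
\quad\Longrightarrow\quad
\sum_j a f_j = \sum_j X_{ij},
\end{equation*}
so the fitted model matches the measured gross counts exactly.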
Since both methods exactly reconstruct each spectrum's gross counts, it is also important to compare how well they fit the shape of the spectrum.
For this purpose we introduce the Spectral Anomaly Detection (SAD) metric, a normalized squared L2 distance between the \(i\)th measured spectrum and the \(i\)th reconstructed spectrum~\cite{miller_gamma-ray_2018, bilton_non-negative_2019}:
\begin{equation}
\mathrm{SAD}_i \equiv \frac{\sum_j ( X_{ij} - \hat{X}_{ij} )^2}{\sum_j X_{ij}},
\end{equation}
where the denominator normalizes the metric to a mean of unity (if the reconstruction equals the true mean, Poisson fluctuations alone give \(\mathrm{SAD}_i\) an expectation of approximately one).
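In code, with the measured and reconstructed spectra as \((n, d)\) NumPy arrays, the metric is a one-liner (a sketch):
\begin{verbatim}
def sad(X, Xhat):
    # Squared residuals per spectrum, normalized by gross counts.
    return ((X - Xhat) ** 2).sum(axis=1) / X.sum(axis=1)
\end{verbatim}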
We evaluated SAD for NMF and NASVD reconstructions with three components and histogrammed the results in~\Fref{fig:sad}.
The distribution of the difference in the SAD metric between the NASVD and NMF reconstructions is much narrower than the distribution of either SAD metric, indicating that the two methods reconstruct the measured spectra with similar fidelity.
This result is not surprising since both models have the same linear form with similar numbers of free parameters, though NMF has additional (non-negativity) constraints.
Notably, the histogram of SAD metric differences has an average value greater than zero, which indicates that NMF yields slightly better reconstructions than NASVD, at least according to the SAD metric.
The average value of the SAD metric difference becomes negative for this dataset (i.e., favoring NASVD) only once the number of NASVD components used in the reconstruction has been increased to nine.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{ams_lakemohave_20120214_030m_0200b_c3/nasvd_loss_sad_hist}
\caption{Spectral Anomaly Detection (SAD) metric for spectra reconstructed using NASVD and NMF with 3~components.
The red histogram shows the difference in the metric between the two methods.
This difference is small, indicating that they reconstruct the measured spectra with similar fidelity.\label{fig:sad}}
\end{figure}
\section{Application to San Francisco Bay Area surveys}\label{sec:bayarea}
To demonstrate the application of NMF decomposition under more general conditions, two survey datasets from the San Francisco Bay Area (henceforth, Bay Area) were analyzed using the techniques of the previous section.
These datasets were part of a larger survey campaign performed in August~2012~\cite{nstec_remote_sensing_laboratory_aerial_2012}.
The first set was taken at the Port of Oakland on 30~August 2012 and consisted of 5,687 \(1\)-Hz spectra, and the second dataset was collected in Pacifica on 28~August 2012 and consisted of 4,505 \(1\)-Hz spectra.
The data were taken with the same NaI(Tl) detector system as the Lake Mohave data, and the same energy range and binning scheme were used.
Both the data rate (\(1\)~Hz) and the nominal flight altitude above ground level (\(300\)~ft) differed from the Lake Mohave dataset (\(2\)~Hz and \(100\)~ft, respectively).
Both surveys contain land-water interfaces like the Lake Mohave survey, but both also contain roads and man-made structures.
The Pacifica survey is also of special interest because it contains rugged terrain and some geological features with substantially lower potassium content than typical (and therefore weak \isot{K}{40} emission), providing additional spectral variability.
\subsection{Anomaly detection and removal}\label{sec:bayarea_anomalies}
For the Bay Area datasets, the Spectral Comparison Ratio Anomaly Detection (SCRAD) method~\cite{pfund_examination_2007, jarman_comparison_2008, detwiler_spectral_2015} was applied to the data to identify and remove any spectral anomalies that could affect the NMF decomposition.
To use SCRAD one needs to bin each spectrum into a series of predetermined bins, track estimates of the means and covariances of the bins over time using an exponentially weighted moving average (EWMA), and then construct a vector of spectral comparison ratios (SCRs) from the binned counts.
SCRAD relies on the background estimate being accurate (we used an EWMA with a smoothing parameter of~\(0.1\)) and on each bin containing enough counts that its statistics are approximately Gaussian rather than Poisson-limited.
For SCRAD, 13~non-overlapping spectral windows were used with boundaries of \(30\), \(60\), \(80\), \(100\), \(120\), \(140\), \(160\), \(200\), \(250\), \(300\), \(400\), \(700\), \(1200\), and \(3060\)~keV\@.
These windows were chosen to span the entire energy range and so that no bin contained fewer than \(30\)~counts, in order to ensure the Gaussian approximation was statistically valid.
No effort was made to optimize the windows for any particular types of anomalies, nor was nuisance rejection implemented (i.e., N-SCRAD~\cite{pfund_improvements_2016}).
The SCR distance metric \(D_{\mathrm{SCR}}^2\) was used as the test statistic~\cite{jarman_comparison_2008}, and its expected chi-squared distribution with \(12\)~degrees of freedom was used to set a threshold of \(5\sigma\) significance.
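For illustration, the background update and the threshold can be written as follows (a sketch; whether the \(5\sigma\) level corresponds to a one- or two-sided normal tail is our assumption):
\begin{verbatim}
from scipy import stats

def ewma_update(mean, x, lam=0.1):
    # Exponentially weighted moving average background estimate.
    return (1.0 - lam) * mean + lam * x

# 5-sigma two-sided tail probability mapped onto a chi-squared
# threshold with 12 degrees of freedom for the D^2_SCR statistic.
p_tail = 2.0 * stats.norm.sf(5.0)
threshold = stats.chi2.ppf(1.0 - p_tail, df=12)
\end{verbatim}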
Using this metric, three anomalies were found in the Port of Oakland dataset.
These anomalies, at indices \(1964\), \(2304\), and \(4040\), can be seen in \Fref{fig:bayarea_weights} where they are marked with red arrows.
The anomaly at index \(1964\) lasted at least \(6\)~seconds and is of unknown origin.
It consisted of a hard continuum extending beyond \(3\)~MeV.
The anomaly at index \(2304\) consisted of an increase in counts around \(100\)~keV and occurred when the aircraft was near the position of the first anomaly on the following flight line, so it is likely down-scattered photons from the first anomaly.
The third anomaly at index \(4040\) was likely a \isot{Tc}{99\mathrm{m}} source due to elevated counts up to \(140\)~keV\@.
The data associated with these anomalies were excluded from the NMF training and are not shown on the maps in \Fref{fig:bayarea_maps}.
\subsection{Results of NMF decompositions}\label{sec:bayarea_decomp}
NMF was initialized and trained on the Bay Area datasets in the same manner as the Lake Mohave dataset.
NMF models with \(k=1\) to \(4\) components were trained, and AIC was again used to determine the optimal number of components to describe the dataset, resulting in \(k=3\) components for both the Port of Oakland and Pacifica datasets.
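The criterion itself is elementary (a sketch; taking the parameter count for a \(k\)-component model of an \(n \times d\) dataset to be \(k(n+d)\) is our assumption about the convention):
\begin{verbatim}
def aic(log_likelihood, k, n, d):
    # Akaike information criterion: 2 * (free parameters) - 2 * ln L.
    # The model with the smallest AIC is selected.
    return 2.0 * k * (n + d) - 2.0 * log_likelihood
\end{verbatim}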
\Fref{fig:bayarea_comps} shows the resulting NMF components for each dataset, and \Fref{fig:bayarea_weights} shows the weights for the Port of Oakland dataset.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.42\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/nmf_comp_spectra}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.42\textwidth]{ams_bayarea_pacifica_0200b_c3/nmf_comp_spectra}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(74 + (662 / 3060) * 504) / 601}, 0.703) -- ++(0.0, -0.1);
\end{scope}
\end{tikzpicture}
\caption{NMF components for a three-component factorization fitted to the Port of Oakland data (left) and Pacifica (right).
These decompositions have a strong resemblance with each other and with the Lake Mohave decomposition (\Fref{fig:lakemohave_decomp}).
A red arrow marks the presence of the \(662\)~keV line from \isot{Cs}{137} in component~0 from Pacifica.\label{fig:bayarea_comps}}
\end{figure*}
\begin{figure*}[t!]
\centering
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.85\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/nmf_comp_weights}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (1964 / 5678) * 1649) / 1823}, 0.99) -- ++(0.0, -0.07);
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (2304 / 5678) * 1649) / 1823}, 0.89) -- ++(0.0, -0.07);
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (4040 / 5678) * 1649) / 1823}, 0.90) -- ++(0.0, -0.07);
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0] (image) at (0 ,0) {\includegraphics[width=0.85\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/nscrad_scrad}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (1964 / 5678) * 1649) / 1822}, 0.99) -- ++(0.0, -0.07);
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (2304 / 5678) * 1649) / 1822}, 0.50) -- ++(0.0, -0.07);
\draw [line width=1pt, red, opacity=0.7, -stealth]
({(145 + (4040 / 5678) * 1649) / 1822}, 0.45) -- ++(0.0, -0.07);
\end{scope}
\end{tikzpicture}
\caption{The gross count rate and NMF component weights for the Port of Oakland dataset (top).
The bottom plot shows the SCRAD test statistic used to identify the anomalies in the dataset, and the red line indicates a threshold of \(5\sigma\) significance.
The red arrows mark the locations of the anomalies.\label{fig:bayarea_weights}}
\end{figure*}
NMF decompositions are not expected to be identical in all situations since they are fitted to the training data provided.
For example, the Pacifica survey was over rugged terrain with regions of abnormally low \isot{K}{40}, while the Port of Oakland survey was over flat terrain but with man-made materials, which could give rise to sharp changes in KUT concentrations.
The datasets may also have different ambient backgrounds, which would affect any terms that are approximately constant.
In general both NMF decompositions are qualitatively similar to each other and to the Lake Mohave decomposition (\Fref{fig:lakemohave_decomp}), aside from some notable differences.
Component~0 on average makes up a smaller fraction of the total spectrum, which may be due to greater attenuation of nearby emission at the higher altitude.
One subtle difference in component~0 from Pacifica is the indication of a weak \isot{Cs}{137} peak at \(662\)~keV from weapons-testing fallout of the 1950s, perhaps because Pacifica has experienced less surface disturbance than the Port of Oakland and, owing to its higher annual rainfall, greater deposition than Lake Mohave.
In component~1, the \isot{K}{40} peak at \(1460.8\)~keV is less prominent (Oakland) or absent entirely (Pacifica).
Component~2 in both Bay Area datasets now includes a clear \isot{K}{40} peak, which could be due to different aircraft background and the presence of potassium in the salt water of the Pacific Ocean and the San Francisco Bay.
Maps were generated for both datasets by plotting the NMF component weights on a map without any further corrections (\Fref{fig:bayarea_maps}).
As expected from the reasoning developed in~\Fref{sec:lakemohave}, component~0 has better-defined land-water boundaries than component~1, consistent with component~0 representing nearby terrestrial emission and component~1 representing distant terrestrial emission.
Component~2 is flat for the entirety of both surveys, which is consistent with cosmic and radon emission.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.48\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/map_bw_nmf0}
\includegraphics[width=0.48\textwidth]{ams_bayarea_pacifica_0200b_c3/map_bw_nmf0}\\
\includegraphics[width=0.48\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/map_bw_nmf1}
\includegraphics[width=0.48\textwidth]{ams_bayarea_pacifica_0200b_c3/map_bw_nmf1}\\
\includegraphics[width=0.48\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/map_bw_nmf2}
\includegraphics[width=0.48\textwidth]{ams_bayarea_pacifica_0200b_c3/map_bw_nmf2}
\caption{Maps of the weights for each of the three NMF components in the Port of Oakland (left) and Pacifica (right) datasets.
For both NMF decompositions, component~0 (top) has sharper land-water boundaries than component~1 (middle), while component~2 (bottom) remains relatively constant.
(Map imagery: Google.)\label{fig:bayarea_maps}}
\end{figure*}
\subsection{Modeling the source terms}\label{sec:bayarea_modeling}
The NMF components were modeled in a similar way as the Lake Mohave dataset (\Fref{sec:lakemohave_modeling}), with the primary difference being that the response matrices \(\mathbf{R}_{\mathrm{dir,near}}\), \(\mathbf{R}_{\mathrm{sky,near}}\), \(\mathbf{R}_{\mathrm{dir,dist}}\), and \(\mathbf{R}_{\mathrm{sky,dist}}\) were adjusted for the higher altitude using the same suite of simulations.
In an attempt to model a potential \isot{K}{40} background from the aircraft, an extra \isot{K}{40} source was fit to component~2 that consisted of \(\mathbf{R}_{\mathrm{abs}}\) applied to a \isot{K}{40} spectrum.
The best value of \(r_{\mathrm{cutoff}}\) was determined to be \(145\)~m for the Port of Oakland and \(135\)~m for Pacifica (cf. \(r_{\mathrm{cutoff}} = 85\)~m for the Lake Mohave dataset at \(100\)~ft).
The optimal power-law index was found to be~\(1.20\) for both datasets, significantly larger than the Lake Mohave value (\(0.65\)) but closer to the measurement in ref.~\cite{sandness_accurate_2009} (\(1.3\)).
It is unknown why there is such a large discrepancy between the Lake Mohave power-law index and the Bay Area power-law index, but it could point to a deficiency in the modeling of the radon, cosmics, or both.
The results of fits between the Monte Carlo background components and NMF components for the Port of Oakland are shown in \Fref{fig:bayarea_fits}.
These fits were constrained in the same way as the fits to the Lake Mohave components.
Once again, the modeled backgrounds are a qualitatively good fit to the NMF components, providing evidence for the identification of component~0 with nearby emission, component~1 with distant emission, and component~2 with radon and cosmic emission at this altitude.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.4\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/fit_components_chi2_145m_1.20_0}
\includegraphics[width=0.4\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/fit_components_chi2_145m_1.20_1}\\
\includegraphics[width=0.4\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/fit_components_chi2_145m_1.20_2}
\includegraphics[width=0.4\textwidth]{ams_bayarea_oakland_berkeley_port_0200b_c3/fit_components_chi2_145m_1.20_all}
\caption{Monte Carlo background components fit to the three NMF components (top left, top right, and bottom left) and the sum of all three components (bottom right) from the Port of Oakland dataset.
The skyshine contributions from the terrestrial sources are shown separately.\label{fig:bayarea_fits}}
\end{figure*}
As in the Lake Mohave fits, the model for component~0 has an excess continuum below \(100\)~keV, although photons at these energies could easily be attenuated by a small amount of material near the detectors.
Unlike the Lake Mohave fits, the model for component~2 also has an excess below \(100\)~keV due to the larger contribution of the cosmic continuum relative to radon (possibly due to the higher altitude above ground level).
This excess could also be attenuated by material near the detectors, which is not included in this model.
\section{Discussion}\label{sec:discuss}
In this work we have presented the application of NMF to the decomposition of gamma-ray spectra from aerial surveys, with a particular focus on identifying background source terms for the resulting spectral components.
Since NMF uses the data itself to derive the primary spectral components, it has the advantage over FSA of being able to adapt to spectral shapes that have not been included in modeled components.
When compared to NASVD, NMF is able to reconstruct spectra to a similar fidelity while preserving both non-negativity and maximizing Poisson likelihood, and, in at least the cases presented here, the components appear to have a plausible physical origin.
Here we have focused on aerial data that includes water, a feature that isolates the radon and cosmic background components and favors a three-component NMF decomposition.
If the data do not include time over water, it may be harder to separate radon and cosmics from other background.
For these situations, other methods such as altitude spirals, where the aircraft flies repeated patterns in the same location at increasing altitudes, may help separate cosmics and radon from other background.
Also, a validated Monte Carlo model could potentially be used to initialize an NMF decomposition in order to guide it toward physically meaningful components.
A feature of NMF decompositions in the cases presented here is that NMF can approximately separate distant terrestrial emission from other background sources using the data alone.
Evidence for this separation has also been seen in vehicle-borne gamma-ray data~\cite{bandstra_attribution_2018}.
By exploiting this separation, NMF can potentially improve the resolution of aerial survey maps by disentangling distant emission from nearby emission.
In addition, by finding background components with physical origins, NMF could be leveraged in new algorithms for anomaly detection and background estimation.
NMF has already been shown to be competitive with PCA-based methods for spectral anomaly detection, which may be due to its accurate treatment of Poisson statistics and ability to be consistent with physics~\cite{bilton_non-negative_2019}.
The different temporal variability of the components could be exploited by Kalman filters or low-pass filters to find anomalous behavior.
Even in the data presented here from the Port of Oakland (\Fref{fig:bayarea_weights}), the anomalies found by the SCRAD metric can be easily identified as brief spikes in the NMF component~1 count rate.
Since component~1 is believed to arise from distant emission sources, it should not exhibit such high frequency behavior.
Component~2 also increases during the large anomaly at index \(1964\), which is an unusual departure from its relatively constant count rate.
With the ability to decompose the low-energy region of the spectrum where skyshine and down-scatter are most significant, NMF could be used by algorithms intended to identify anomalies below \(400\)~keV\@.
This regime is of special interest when searching for spectral anomalies due to nuclear material, since many important isotopes have gamma-ray lines below \(400\)~keV (e.g.,~\cite{martin-burtart_airborne_2012}); however, that region of the spectrum is notoriously difficult to model.
One drawback of using NMF is that the training can be computationally intensive: the \(k=3\) model took about \(100\)~s running on four \(2.7\)~GHz cores, although the matrix computations can easily be split across any number of available cores.
\Fref{tab:compute} shows the duration of the training time for different NMF models trained on the Lake Mohave dataset, including the time it takes to calculate the weights for a single spectrum after the training has been completed.
The time to calculate the NASVD decomposition is shown for comparison.
The length of training time is set largely by the arbitrary convergence criterion used here, and NMF models in general may not need to meet the same criterion to be useful.
\begin{table}
\centering
\caption{Computation times for NMF on the Lake Mohave dataset.\label{tab:compute}}
\begin{tabular}{ccrr}
Method & Components & Training & Single spectrum \\
\midrule
NASVD & N/A & \(0.44\)~s & N/A \\
NMF & \(k=1\) & \(0.70\)~s & \(2.1\)~ms \\
NMF & \(k=2\) & \(65.81\)~s & \(10.2\)~ms \\
NMF & \(k=3\) & \(107.13\)~s & \(11.7\)~ms \\
NMF & \(k=4\) & \(389.74\)~s & \(14.4\)~ms \\
NMF & \(k=5\) & \(1,347.28\)~s & \(17.6\)~ms \\
\end{tabular}
\end{table}
Even though the training can take a significant length of time, once the NMF model has been trained for a given detector and environment and \(\mathbf{V}\) is fixed, the subsequent calculation of weights for previously unseen spectra is fast.
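A sketch of this weight-only fit, using the standard multiplicative update for the Poisson (Kullback--Leibler) objective with \(\mathbf{V}\) held fixed (function name, initialization, and stopping rule are ours):
\begin{verbatim}
import numpy as np

def nmf_weights(x, V, n_iter=200, eps=1e-12):
    # x: (d,) new spectrum; V: (k, d) fixed components from training.
    a = np.full(V.shape[0],
                x.sum() / (V.shape[0] * V.sum(axis=1).mean()))
    for _ in range(n_iter):
        a *= (V @ (x / (a @ V + eps))) / V.sum(axis=1)
    return a  # non-negative weights: a @ V approximates x
\end{verbatim}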
This application highlights another distinction between FSA, NMF, and NASVD --- typically, FSA is used to decompose spectra that have not yet been measured, while NASVD is used to decompose only previously measured spectra (although see ref.~\cite{kulisek_real-time_2015} for a real-time application of NASVD).
NMF has been presented here retrospectively like NASVD, but both NMF and NASVD components can be fit to future measurements under the assumption that the components derived from the training set are also effective at reducing the dimensionality of future data.
Using a trained NMF model to decompose spectra has been shown to be an effective tool for spectral anomaly detection in a mobile detection system~\cite{bilton_non-negative_2019}.
Finally, NMF is a potentially powerful framework for any analysis involving variable gamma-ray backgrounds since it provides a consistent description for both the mathematical and physical characteristics of the measurements.
As such, NMF provides a natural framework for data fusion, such as incorporating non-radiological contextual information about the environment.
Some work on vehicle backgrounds has already shown evidence for the ability to connect NMF components to non-radiological contextual information (i.e., video images)~\cite{bandstra_attribution_2018}.
As a simple example of data fusion for aerial measurements, having knowledge of the distance to a land-water interface would provide useful information about the behavior of the distant and nearby background components.
This area has much promise for further research.
\section{Acknowledgments}
This work has been supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under IAA HSHQDC-11-X-00380\@. This support does not constitute an express or implied endorsement on the part of the Government.
This work was also performed under the auspices of the US Department of Energy by Lawrence Berkeley National Laboratory under Contract DE-AC02-05CH11231.
The project was funded by the US Department of Energy, National Nuclear Security Administration, Office of Defense Nuclear Nonproliferation Research and Development (DNN R\&D).
We would like to thank Ren Cooper and Joseph Curtis for their helpful comments and Erika Suzuki for proofreading this manuscript.
\bibliographystyle{IEEEtran}
Mean-field games (MFGs) are models for large populations of competing rational agents who seek to optimize an individual objective function. A typical model is the {\em backward-forward MFG}. In one dimension, this game is determined by following the system of partial differential equations (PDEs):
\begin{equation}\label{eq: General MFG}
\begin{cases}
-u_t + H(u_x)= \varepsilon u_{xx} + g(m), \\
m_t-(H'( u_x) m)_x=\varepsilon m_{xx} . \end{cases}
\end{equation}
For convenience, the spatial domain, corresponding to the variable $x$, is the $1$-dimensional torus, $ {\mathbb{T}}$, identified with the interval $[0,1]$. The time domain, corresponding to the variable $t$, is the interval $[0,T]$ for some terminal time, $T>0$. The unknowns in the above system are $u:{\mathbb{T}}\times [0,T]\to {\mathbb{R}}$ and $m:{\mathbb{T}}\times [0,T]\to {\mathbb{R}}$. In this game, each agent seeks to solve an optimal control problem. The function $u(x,t)$ is the value function
for this control problem for an agent located at $x\in {\mathbb{T}}$ at the time $t$.
This control problem is determined by a
Hamiltonian, $H: {\mathbb{R}}\to {\mathbb{R}}$, $H\in C^2$, and a coupling between each agent and the mean field, $m$, given by the function $g: {\mathbb{R}}^+\to
{\mathbb{R}}$, $g\in C^1$.
The first equation in \eqref{eq: General MFG} is a Hamilton-Jacobi equation
and expresses the optimality of the value function, $u$.
For each $t\in [0, T],$ $m$ is a probability density in ${\mathbb{T}}$. The second equation of (\ref{eq: General MFG}), the Fokker-Planck equation, determines the evolution of $m$. The parameter $\varepsilon\geqslant 0$ is the viscosity coefficient in the Fokker-Planck equation; $\varepsilon=0$ corresponds to {\em first-order MFGs} and $\varepsilon>0$ to {\em parabolic MFGs}. The system \eqref{eq: General MFG} is endowed with terminal-initial conditions; the initial value of $m$ is prescribed at $t=0$ and the terminal value of $u$, at $t=T$:
\begin{equation}
\label{itc}
\begin{cases}
u(x,T) = u_T(x) \\
m(x,0) = m_0(x).
\end{cases}
\end{equation}
As a result, \eqref{eq: General MFG}-\eqref{itc} is called the {\em terminal-initial value problem} or the {\em backward-forward MFG}.
Here, we examine a related model, the {\em forward-forward MFG} problem. This model is constructed by the reversal of the time variable in the Hamilton-Jacobi equation in (\ref{eq: General MFG}). Accordingly, the {\em forward-forward MFG system} in $\mathbb{ T}\times
[0,T]$ is determined by
\begin{equation}
\label{ffmfg}
\begin{cases}
u_t + H(u_x)= \varepsilon u_{xx} + g(m)\\
m_t-(H'( u_x) m)_x=\varepsilon m_{xx},
\end{cases}
\end{equation}
together with the {\em initial-initial condition}:
\begin{equation}\label{ini-ini}
\begin{cases}
u(x,0) = u_0(x) \\
m(x,0) = m_0 (x).
\end{cases}
\end{equation}
The forward-forward model was introduced in \cite{DY}
to approximate
stationary MFGs.
The key insight is that the parabolicity in (\ref{ffmfg}) should imply
the long-time convergence to a stationary solution.
In the preceding MFG, a typical Hamiltonian, $H$, is the quadratic Hamiltonian, $H(p)=\frac{p^{2}}{2}$, or for $\gamma>1,$ the power-like Hamiltonian, $H(p)=\frac{1}{\gamma}|p|^\gamma$ or $H(p)=(1+p^2)^{\frac \gamma 2}$. Regarding the coupling nonlinearity, $g$, here, we consider the power-like case, $g(m)=m^\alpha$ for some $\alpha>0$, or the logarithmic case, $g(m)=\ln m$.
Considerable research has focused on proving the existence of solutions for backward-forward MFGs. For example, weak solutions for parabolic problems were considered in \cite{ll2, porretta2}, strong solutions for parabolic problems in \cite{GPim2, GPim1, ll2}, and weak solutions for first-order MFGs in \cite{Cd2, GraCard}. The stationary case was also investigated in detail since it was first considered in \cite{ll1}. For this case, the existence of classical and weak solutions was investigated in \cite{GMit,GP,GPat,GPatVrt}. The uniqueness of solutions is well understood (both for stationary and time-dependent MFGs) via the monotonicity method introduced in \cite{ll1,ll2}.
Monotonicity properties are also fundamental for the existence theory developed in \cite{FG2}. One-dimensional MFGs provide examples and guidance
for the study of higher-dimensional problems and numerical methods
\cite{AFG}. Moreover, these games have an independent interest in problems in networks and graphs \cite{ CaCa2016, camillinet, MR3146865} and congestion \cite{GP, GLP2}.
In contrast with the backward-forward case, our understanding of forward-forward MFGs is limited.
In particular, the existence and the long-time convergence of the forward-forward model have not been addressed, except in a few cases, see \cite{GPff}
and
\cite{llg2}. In
\cite{llg2}, the forward-forward problem was examined in the context of eductive stability of stationary MFGs with a logarithmic coupling. In \cite{GPff}, the existence and regularity
of solutions for the forward-forward, uniformly
parabolic MFGs with subquadratic Hamiltonians were proven. Except for these cases, the question of existence and regularity is open in all other regimes. In the case of forward-forward MFGs without viscosity, these questions are particularly challenging. Moreover, the long-time convergence has not been established even in the parabolic case. Nevertheless, numerical results in \cite{AcCiMa} and \cite{Ci} indicate that convergence holds and that the forward-forward model approximates stationary solutions well.
Besides being an effective tool to approximate stationary problems, forward-forward MFGs can also be regarded as learning games. In backward-forward MFGs, the density of the agents is transported by the (future) optimal trajectories of an optimal control problem. In the forward-forward model, the interpretation of the evolution of the agents is less straightforward. In this model, the density is transported by past optimal trajectories because the corresponding control problem has initial data, not terminal data. Thus,
the actions of the agents are determined by a learning strategy where past densities drive their evolution.
This paper is structured as follows. In Section \ref{cl}, we reformulate
\eqref{eq: General MFG}-\eqref{itc} and (\ref{ffmfg})-(\ref{ini-ini}) as systems of conservation laws. There, we identify new conserved quantities for these problems in the case where $\varepsilon=0$. Conserved quantities are fundamental in analyzing PDEs and in testing and validating numerical methods.
Here, they are used in the long-time convergence analysis.
Next, in Section \ref{wte}, we derive wave-type equations that are equivalent to (\ref{ffmfg})-(\ref{ini-ini}). For example, for the first-order, logarithmic forward-forward model, we obtain the PDE
\[
u_{tt}=(1+u_x^2) u_{xx}.
\]
The preceding equation is
equivalent to an elastodynamics problem.
The corresponding elastodynamics equations have entropy solutions when the stress function is monotone.
Thus, we obtain the existence of solutions for the original MFG.
In addition, using results from \cite{BEJ}, we identify a class of explicit solutions for the logarithmic MFGs. These explicit solutions provide an example where shocks arise in the forward-forward model.
Finally, in Section \ref{pmfg}, we examine forward-forward parabolic MFGs. Here, the entropies
identified in Section \ref{cl} play an essential role in our analysis of the long-time behavior of solutions. Due to the parabolicity, these entropies are dissipated and force the long-time convergence of the solutions of (\ref{ffmfg})-(\ref{ini-ini}).
\section{Systems of conservation laws and first-order MFGs}
\label{cl}
Here, we consider deterministic MFGs; that is, $\varepsilon=0.$ In this case, \eqref{eq:
General MFG} and \eqref{ffmfg} are equivalent to conservation laws, at least for smooth enough solutions. In this preliminary section, we examine these conservation laws and identify conserved quantities. In Section \ref{pmfg}, we use these conserved quantities to establish the long-time convergence of the parabolic forward-forward MFG \eqref{ffmfg}.
Before proceeding, we recall some well-known results on systems of conservation laws in one dimension.
We consider a conservation law of the form \begin{equation}\label{eq: Conservation laws}
U_t + (F(U))_x=0,
\end{equation}
where $U: \mathbb{R}\times \mathbb{T} \longrightarrow \mathbb{R}^2$
is the unknown and $F :\mathbb{R}^2 \longrightarrow \mathbb{R}^2 $ is the
flux function. We say that $(E,Q)$ is an entropy/entropy-flux pair if
\begin{equation}\label{eq : entropy-entropy flux}
(E(U))_t + (Q(U))_x=0
\end{equation}
for any smooth solution of \eqref{eq: Conservation
laws}. We note that (\ref{eq : entropy-entropy flux}) implies that $E(U)$ is
a conserved quantity if the solution $U$ of (\ref{eq: Conservation
laws}) is smooth; that is,
\begin{equation}
\dfrac{d}{dt} \int_{\mathbb{T}} E (U) dx =- \int_{\mathbb{T}} (Q(U))_x dx
=0.
\end{equation}
\subsection{Backward-Forward MFG}\label{sec: bfmfg}
Now, we assume that \eqref{eq:
General MFG} has a smooth enough solution, for example, $u,m\in C^2({\mathbb{T}}\times (0,\infty))\cap C({\mathbb{T}}\times
[0,\infty)).$ We set $v=u_x$ and differentiate the first equation in \eqref{eq:
General MFG}
with respect to $x$. Accordingly, we obtain the following system
\begin{equation}\label{eq: foba mfg system2}
\begin{cases}
v_t+(g(m)-H(v))_x=0,\\
m_t-(mH'(v))_x=0.
\end{cases}
\end{equation}
To investigate the existence of an entropy for (\ref{eq: foba mfg system2}), we look for an entropy/entropy-flux $(E,Q)$ satisfying \eqref{eq : entropy-entropy flux} for $U=(v,m)$.
By expanding \eqref{eq
: entropy-entropy flux}, we get
\begin{equation}\label{eq : fb mfg entropy 01}
\frac{\partial E}{\partial v} v_t+\frac{\partial E}{\partial m} m_t + \frac{\partial Q}{\partial v} v_x+\frac{\partial Q}{\partial m} m_x =0.
\end{equation}
In light of (\ref{eq: foba mfg system2}), (\ref{eq : fb mfg entropy 01}) becomes
\begin{equation}\label{eq : fb mfg entropy 0}
\frac{\partial E}{\partial v} H'(v) v_x- \frac{\partial E}{\partial v} g'(m) m_x + \frac{\partial E}{\partial m} H'(v) m_x +\frac{\partial E}{\partial m} m H''(v)v_x + \frac{\partial Q}{\partial v} v_x+\frac{\partial
Q}{\partial m} m_x =0.
\end{equation}
Thus,
\begin{equation}\label{eq : fb mfg entropy 1}
\frac{\partial Q}{\partial v}= - \frac{\partial E}{\partial v} H'(v)-\frac{\partial E}{\partial m} m H''(v)\qquad\hbox{and}\qquad \frac{\partial Q}{\partial m}=\frac{\partial E}{\partial v} g'(m) -\frac{\partial E}{\partial
m} H'(v).
\end{equation}
Consequently, we obtain the following PDE for $E$
\begin{equation}\label{eq : fb mfg entropy 2}
\frac{\partial }{\partial m}\left( - \frac{\partial E}{\partial v} H'(v)-\frac{\partial E}{\partial m} m H''(v)\right) =
\frac{\partial }{\partial v}\left( \frac{\partial E}{\partial v} g'(m) -\frac{\partial E}{\partial
m} H'(v)\right).
\end{equation}
After elementary computations,
the above equation becomes \begin{equation}\label{eq: cons E condition}
\frac{1}{ H''(v)}
\frac{\partial^2 E}{\partial v^2}+\frac{1}{P''(m)} \frac{\partial^2 E}{\partial m^2}=0,
\end{equation}
where
\begin{equation}
\label{Pofm}
P''(m)=\frac{g'(m)}{m}.
\end{equation}
The preceding equation has the following trivial solutions:
$$E(v,m)=\alpha v+\beta m,\ \alpha,\beta \in {\mathbb{R}}.$$ By inspection, we can verify that the following two expressions solve \eqref{eq: cons E condition}:
\[
E(v,m)=mv \quad \text{and} \quad E(v,m)=H(v)-P(m).
\]
Moreover, if $g$ is increasing, $P$ is a convex function whereas if $g$ is decreasing, $P$ is concave.
Using separation of variables and writing
\[
E=\Phi(v)\Psi(m),
\]
we derive the following conditions
\[
\begin{cases}
\frac{1}{H''(v)}\frac{\Phi''(v)}{\Phi(v)}=\lambda\\
\frac{1}{P''(m)}\frac{\Psi''(m)}{\Psi(m)}=-\lambda.
\end{cases}
\]
The conditions above take a simple form when $g(m)=\dfrac{m^2}{2}$, which corresponds to $P(m)=\frac{m^2}{2}$, and $H(v)=\frac{v^2}{2}$, namely
\[
\begin{cases}
\Phi''(v)=\lambda\Phi(v)\\
\Psi''(m)=-\lambda \Psi(m) .
\end{cases}
\]
Thus, we have solutions of the form $\Phi(v)=e^{\pm \sqrt{\lambda}v}$ and $\Psi(m)=e^{\pm i\sqrt{\lambda}m}$, which have exponential growth or oscillation depending upon the sign of $\lambda$.
In addition to these conservation laws, there are also polynomial conservation laws. For illustration, some of these are shown in Table \ref{T1}. In Table \ref{T2}, we present some conservation laws for the anti-monotone backward-forward MFG with $g(m)=-\frac{m^2}{2}$.
These laws are
straightforward to compute as the determining equations for $E$ are
\[
\frac{\partial^2 E}{\partial m^2}+\frac{\partial^2 E}{\partial v^2}=0
\]
in the monotone case and
\[
\frac{\partial^2 E}{\partial m^2}-\frac{\partial^2 E}{\partial v^2}=0
\]
in the anti-monotone case.
In both cases, these equations have solutions that are homogeneous polynomials in $m$ and $v$. In the monotone case, these conservation laws are the real and imaginary parts of $(m+i v)^k$.
In the anti-monotone case, some of the conservation laws are coercive and, thus, control the $L^p$ norms of $v$\ and $m$ (at least for smooth solutions).
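As an illustration of how the corresponding entropy fluxes are obtained, consider the degree-3 entry $E(v,m)=v^3-3m^2v$ of Table \ref{T1}. With $H(v)=\frac{v^2}{2}$ and $g(m)=\frac{m^2}{2}$, the relations \eqref{eq : fb mfg entropy 1} give
\[
\frac{\partial Q}{\partial v}=-3v^3+9m^2v, \qquad \frac{\partial Q}{\partial m}=9mv^2-3m^3,
\]
and, up to an additive constant, the entropy flux is
\[
Q(v,m)=-\frac{3}{4}v^4+\frac{9}{2}m^2v^2-\frac{3}{4}m^4.
\]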
\begin{table}
\centering
\begin{tabular}{|c|c|}\hline
Degree &$E(v,m)$\\\hline
3&$ v^3-3 m^2 v$ \\\hline
3& $m^3-3 m v^2$ \\\hline
4&$-6 m^2 v^2+m^4+v^4$\\\hline
4&$ m v^3-m^3 v$\\\hline
5&$ -10 m^2 v^3+5 m^4 v+v^5$\\\hline
5&$ -10 m^3 v^2+5 m v^4+m^5$\\\hline
6&$ 15 m^4 v^2-15 m^2 v^4-m^6+v^6$\\\hline
6&$ m^5 v-\frac{10}{3} m^3 v^3+mv^5$\\\hline
\end{tabular}
\smallskip
\caption{Conservation laws for the backward-forward MFG with $H(v)=\frac{v^2}{2}$ and $g(m)=\frac{m^2}{2}$ up to degree 6. }
\label{T1}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|}\hline
Degree &$E(v,m)$\\\hline
3&$3 m^2 v+v^3$\\\hline
3&$3 m v^2+m^3$\\\hline
4&$6 m^2 v^2+m^4+v^4$\\\hline
4&$ m^3 v+m v^3$\\\hline
5& $10 m^2 v^3+5
m^4 v+v^5$\\\hline
5& $10 m^3 v^2+5 m v^4+m^5$\\\hline
6&
$ 15 m^4
v^2+15 m^2 v^4+m^6+v^6$\\\hline
6& $m^5 v+\frac{10}{3}
m^3 v^3+m v^5$\\\hline
\end{tabular}
\smallskip
\caption{Conservation laws for the backward-forward MFG with $H(v)=\frac{v^2}{2}$
and $g(m)=-\frac{m^2}{2}$ up to degree 6. }
\label{T2}
\end{table}
\subsection{Forward-forward MFG}\label{ss: ffmfg}
As previously, we assume that \eqref{ffmfg} has a solution, $u,m\in C^2({\mathbb{T}}\times
(0,\infty))\cap C({\mathbb{T}}\times
[0,\infty))$, and we set $v:=u_x$. We differentiate the first equation in \eqref{ffmfg} with respect to $x$ and obtain the system:
\begin{equation}\label{eq: foff mfg system2}
\begin{cases}
v_t-(g(m)-H(v))_x=0,\\
m_t-(mH'(v))_x=0.
\end{cases}
\end{equation}
We begin by examining the entropies for \eqref{eq: foff mfg system2}; that is, we look for $(E,Q)$ satisfying \eqref{eq : entropy-entropy flux} for $U=(v,m)$. We expand (\ref{eq : entropy-entropy flux}) to get
\begin{equation}\label{XYZ}
\frac{\partial E}{\partial v} v_t+\frac{\partial E}{\partial m} m_t + \frac{\partial Q}{\partial v} v_x+\frac{\partial
Q}{\partial m} m_x =0.
\end{equation}
In light of (\ref{eq: foff mfg system2}), (\ref{XYZ}) becomes
\begin{equation}\label{eq : ff mfg entropy 0}
- \frac{\partial E}{\partial v} H'(v) v_x+ \frac{\partial E}{\partial v} g'(m) m_x +\frac{\partial E}{\partial
m} H'(v) m_x +\frac{\partial E}{\partial m} m H''(v)v_x + \frac{\partial Q}{\partial v} v_x+\frac{\partial
Q}{\partial m} m_x =0.
\end{equation}
Thus,
\begin{equation}\label{eq : ff mfg entropy 1}
\frac{\partial Q}{\partial v}= \frac{\partial E}{\partial v} H'(v)-\frac{\partial E}{\partial m} m H''(v)\qquad\hbox{and}\qquad
\frac{\partial Q}{\partial m}=-\frac{\partial E}{\partial v} g'(m) -\frac{\partial E}{\partial m} H'(v).
\end{equation}
Consequently,
\begin{equation}\label{eq : ff mfg entropy 2}
\frac{\partial }{\partial m}\left( \frac{\partial E}{\partial v} H'(v)-\frac{\partial E}{\partial m} m H''(v)\right) =
\frac{\partial }{\partial v}\left(-\frac{\partial E}{\partial v} g'(m) -\frac{\partial E}{\partial
m} H'(v)\right).
\end{equation}
This last equation simplifies to
\begin{equation}\label{eq: cons E condition 1}
\frac{1}{H''(v)} \frac{\partial^2 E}{\partial v^2}+\frac{2 H'(v)}{H''(v)g'(m)}
\frac{\partial^2 E}{\partial v \partial m}-\frac{m}{g'(m)} \frac{\partial^2 E}{\partial m^2}=0.
\end{equation}
The preceding equation has a trivial family of solutions,
$$E(v,m)=\alpha v+\beta m,\ \alpha,\beta \in {\mathbb{R}}.$$
Moreover, \eqref{eq: cons E condition 1} admits a solution of the form:
$$E(v,m)=H(v)+P(m)$$
with $P(m)$ as in \eqref{Pofm}. Indeed, for this choice, $\frac{\partial^2 E}{\partial v^2}=H''(v)$, $\frac{\partial^2 E}{\partial v\partial m}=0$, and $\frac{\partial^2 E}{\partial m^2}=P''(m)=\frac{g'(m)}{m}$, so the left-hand side of \eqref{eq: cons E condition 1} reduces to $1-1=0$.
In contrast with the backward-forward case, here, if $g$ is increasing, the previous entropy is convex. This observation is crucial for our proof of convergence of the forward-forward mean-field games with viscosity. For illustration, we consider the case $H(v)=\frac {v^2}{2}$.
In Tables \ref{T3} and \ref{T4}, we present some polynomial conservation laws for, respectively, a monotone, $g(m)=\frac{m^2}{2}$, and an anti-monotone, $g(m)=-\frac{m^2}{2}$, quadratic forward-forward MFG. These conservation laws satisfy
\[
\frac{\partial^2 E}{\partial v^2}\pm\frac{2 v}{m}\frac{\partial^2E}{\partial v\partial m}\mp\frac{\partial^2 E}{\partial m^2}=0,
\]
where the upper signs correspond to the monotone case and the lower signs to the anti-monotone case.
\begin{table}
\centering
\begin{tabular}{|c|c|}\hline
Degree &$E(v,m)$\\\hline
3&$v^3-3 m^2 v$ \\\hline
4& $-2 m^2 v^2-\frac{1}{3} m^4+v^4$ \\\hline
4&$m^3 v$\\\hline
5&$ -2 m^2 v^3-3 m^4 v+v^5$\\\hline
6&$\frac{45}{7}
m^4 v^2-\frac{15}{7} m^2 v^4+\frac{3 m^6}{7}+v^6$\\\hline
\end{tabular}
\smallskip
\caption{Conservation laws for the forward-forward MFG with $H(v)=\frac{v^2}{2}$
and $g(m)=\frac{m^2}{2}$ up to degree 6. }
\label{T3}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|}\hline
Degree &$E(v,m)$\\\hline
3&$ 3 m^2 v+v^3$ \\\hline
4& $2 m^2 v^2-\frac{1}{3} m^4+v^4$ \\\hline
4&$m^3 v$\\\hline
5&$ 2 m^2 v^3-3 m^4 v+v^5$\\\hline
6&$\frac{45}{7}
m^4 v^2+\frac{15}{7} m^2 v^4-\frac{3}{7} m^6+v^6$\\\hline
\end{tabular}
\smallskip
\caption{Conservation laws for the forward-forward MFG with $H(v)=\frac{v^2}{2}$
and $g(m)=-\frac{m^2}{2}$ up to degree 6. }
\label{T4}
\end{table}
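These determining equations are straightforward to check symbolically. The following sketch (in Python with sympy; the helper name is ours, and the check is not part of the analysis above) verifies a few entries of Tables \ref{T3} and \ref{T4}:
\begin{verbatim}
import sympy as sp

v, m = sp.symbols('v m')

def is_conserved(E, sign):
    # Determining equation for H(v) = v**2/2 and g(m) = sign*m**2/2;
    # sign = +1 is the monotone case, sign = -1 the anti-monotone case.
    expr = (sp.diff(E, v, 2)
            + sign * (2 * v / m) * sp.diff(E, v, m)
            - sign * sp.diff(E, m, 2))
    return sp.simplify(expr) == 0

# Degree-3 and degree-4 entries of Table 3 (monotone case):
assert is_conserved(v**3 - 3*m**2*v, +1)
assert is_conserved(-2*m**2*v**2 - sp.Rational(1, 3)*m**4 + v**4, +1)
# Degree-3 entry of Table 4 (anti-monotone case):
assert is_conserved(3*m**2*v + v**3, -1)
\end{verbatim}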
\section{Wave-type equations}
\label{wte}
Here, we introduce a class of wave-type equations that are equivalent to forward-forward MFGs. Using these equations, we rewrite the forward-forward MFG as a new system of conservation laws.
For $g(m)=m^\alpha$, this new system depends polynomially on $\alpha$, in contrast with \eqref{eq: foff mfg system2}, where the dependence on $\alpha$ is exponential. This new formulation is of interest for the numerical simulation of forward-forward MFGs with a large value of $\alpha$ and substantially simplifies the computation of conserved quantities.
Subsequently, we consider the logarithmic nonlinearity and, using a result from DiPerna, we prove the existence of a global solution for the forward-forward problem. Moreover, this solution is bounded in $L^\infty$. Finally, also for the logarithmic nonlinearity, we investigate the connection between this new formulation and a class of equations introduced in \cite{BEJ}. In particular, we provide a representation formula for some solutions of the forward-forward MFG and establish the existence of shocks.
\subsection{Wave equations and forward-forward MFGs}We continue our study of forward-forward MFGs by reformulating \eqref{ffmfg} as a scalar nonlinear wave equation.
Here,
we assume that $H,g$ are smooth and $g$ is either strictly increasing or decreasing; that is, $g'\neq 0$. From the first equation in \eqref{ffmfg}, we have that
\begin{equation}\label{eq: density in terms of vfunction}
m=g^{-1}(u_t+H(u_x)).
\end{equation}
We differentiate $\eqref{eq: density in terms of vfunction}$ with respect to $t$ and $x$ to obtain, respectively,
\begin{equation}
\begin{aligned}\label{eq: diff1 density in terms of vfunction}
m_t=(g^{-1})'\left(u_t+H(u_x)\right) (u_{tt}+H'(u_x)u_{xt})
\end{aligned}
\end{equation}
and
\begin{equation}\label{eq: diff2 density in terms of vfunction}
\begin{aligned}
(m H'(u_x))_x&=(g^{-1})'(u_t+H(u_x))(u_{tx}+H'(u_x)u_{xx})H'(u_x)\\
&+g^{-1} (u_t+H(u_x)) H''(u_x)u_{xx}.\\
\end{aligned}
\end{equation}
Next, we combine (\ref{eq: diff1 density in terms of vfunction}) and (\ref{eq: diff2 density in terms of vfunction}) and get
\begin{align*}
m_t-(mH'(u_x))_x&=(g^{-1})'(u_t+H(u_x))(u_{tt}+H'(u_x)u_{xt})\\
&-(g^{-1})'(u_t+H(u_x))(u_{tx}+H'(u_x)u_{xx})H'(u_x)\\
&-g^{-1}(u_t+H(u_x)) H''(u_x)u_{xx}.
\end{align*}
Hence, the second equation in \eqref{eq: foff mfg system2} yields
\begin{align*}
(g^{-1})'(u_t+H(u_x))\left( u_{tt}-(H'(u_x))^2u_{xx}\right) =g^{-1}(u_t+H(u_x)) H''(u_x)u_{xx},
\end{align*}
or, equivalently,
\begin{align}
u_{tt}=\left(
(H'(u_x))^2
+
g'(g^{-1}(u_t+H(u_x)))
g^{-1}(u_t+H(u_x))
H''(u_x)
\right)u_{xx};
\end{align}
that is,
\begin{equation}\label{eq: we foff mfg}
u_{tt}=\left((H'(u_x))^2+mg'(m) H''(u_x)\right)u_{xx}.
\end{equation}
Thus, \eqref{eq: foff mfg system2} is equivalent to the nonlinear second-order equation \eqref{eq: we foff mfg} coupled with \eqref{eq: density in terms of vfunction}. Moreover, if $g$ is increasing, the preceding equation is hyperbolic. In the particular case where $g(m)=
\ln m$, \eqref{eq: we foff mfg} takes the simpler form
\begin{equation}\label{eq: wave equation with H}
u_{tt}=\left((H'(u_x))^2+H''(u_x)\right)u_{xx}.
\end{equation}
In particular, for $H(v)=\frac{v^2}{2}$, \eqref{eq: wave equation with H} reduces to $u_{tt}=\left(1+u_x^2\right)u_{xx}$, as stated in the Introduction.
\subsection{A new system of conservation laws}
Now, we consider the wave equations introduced in the preceding section and reformulate them as a new system of conservation laws.
For that, we set $v=u_x$ and $w=u_t$. Then, \eqref{eq: we foff mfg} is equivalent to
\begin{equation}\label{eq: foff mfg system1}
\begin{cases}
v_t=w_x,\\
w_t=\left((H'(v))^2+g'( g^{-1}(w+H(v))) g^{-1}(w+H(v)) H''(v)\right)v_x.
\end{cases}
\end{equation}
We set \[\phi(v,w)=(H'(v))^2+g'( g^{-1}(w+H(v))) g^{-1}(w+H(v)) H''(v).
\]
Accordingly, \eqref{eq: foff mfg system1} becomes
\begin{equation}\label{eq: foff mfg system1 phi}
\begin{cases}
v_t=w_x,\\
w_t=\phi(v,w)v_x.
\end{cases}
\end{equation}
In the sequel, we choose
\begin{equation} \label{eq : Stored energy}
H(v)=\dfrac{v^2}{2}\qquad\hbox{and}\qquad g(m)=m^{\alpha}.
\end{equation}
Consequently, we have that
$$mg'(m)=\alpha g(m).$$
Therefore, \eqref{eq: foff mfg system1 phi} takes the form
\begin{equation*}
\begin{cases}
v_t=w_x,\\
w_t=\left(v^2+\alpha\left(w+ \dfrac{v^2}{2} \right)\right)v_x.
\end{cases}
\end{equation*}
Next, we search for a conserved quantity, $F(v,w),$ for the preceding system.
Arguing as before, we see that $F$ is conserved
if and only if
\begin{equation}\label{eq: cons F condition}
\frac{\partial^2 F}{\partial v^2}=\frac{\partial}{\partial w}\left(\frac{\partial F}{\partial w} \phi(v,w)\right),
\end{equation}
where $\phi(v,w)=v^2+\alpha\left(w+ \frac{v^2}{2}\right)$.
A particular solution of \eqref{eq: cons F condition} is
\begin{equation}\label{eq: particular conserved quantity 1}
F(v,w)= w+ \dfrac{\alpha}{2}v^2.
\end{equation}
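As a quick sanity check, one can verify \eqref{eq: cons F condition} for this particular $F$ symbolically. The following Python snippet (ours, not part of the original derivation) does so with \texttt{sympy}; both sides equal $\alpha$.
\begin{verbatim}
# Symbolic check that F(v,w) = w + (alpha/2) v^2 satisfies
# F_vv = d/dw ( F_w * phi ) for phi(v,w) = v^2 + alpha*(w + v^2/2).
import sympy as sp

v, w, alpha = sp.symbols('v w alpha', real=True)
phi = v**2 + alpha*(w + v**2/2)
F = w + sp.Rational(1, 2)*alpha*v**2

lhs = sp.diff(F, v, 2)                # F_vv
rhs = sp.diff(sp.diff(F, w)*phi, w)   # d/dw ( F_w * phi )
assert sp.simplify(lhs - rhs) == 0    # both sides equal alpha
\end{verbatim}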
Accordingly, we set
\begin{equation}\label{eq: particular conserved quantity 2}
z(x,t)= w(x,t)+ \dfrac{\alpha}{2}v^2(x,t).
\end{equation}
Thus, we have that
\begin{align*}
z_t &= w_t+ \alpha v v_t\\
&=\left(v^2+\alpha\left(w+ \dfrac{v^2}{2} \right)\right)v_x+\alpha v w_x\\
&= \left(1+\dfrac{\alpha}{2}\right)v^2v_x +\alpha v_x w+\alpha v w_x\\
&=\left( \dfrac{1}{3}+\dfrac{\alpha}{6}\right)(v^3)_x +\alpha (vw)_x\\
&=\left( \left( \dfrac{1}{3}+\dfrac{\alpha}{6}\right)v^3 +\alpha vw\right) _x.
\end{align*}
Hence, we obtain the following equivalent system of conservation laws
\begin{equation}\label{eq: foff mfg system3}
\begin{cases}
z_t=\left( \left( \frac{1}{3}+\frac{\alpha}{6}-\frac{\alpha^2}2\right)v^3 +\alpha v z\right) _x,\\
v_t=\left(z-\dfrac{\alpha}{2}v^2\right)_x.
\end{cases}
\end{equation}
We observe that $\alpha$ is no longer in the exponent of the foregoing equation. Therefore, the growth of the nonlinearity becomes polynomial with a fixed degree for any exponent
$\alpha$. This property is relevant for the numerical analysis and simulation of these games. Moreover, in this formulation, we obtain further polynomial conservation laws for \eqref{eq:
foff mfg system3} shown in Table \ref{T5}.
\begin{table}
\centering
\begin{tabular}{|c|c|}\hline
Degree &$E(z,v)$\\\hline
2&$ v z$ \\\hline
4& $3 \alpha ^2 v ^4-\alpha v ^4-12 \alpha v ^2 z-2 v ^4-12 z^2$ \\\hline
5&$v \left(9 \alpha ^2 v ^4-3 \alpha v ^4-20 \alpha v ^2 z-6 v ^4-60 z ^2\right)$\\\hline
6&$ 6 \alpha ^3 v ^6-2 \alpha ^2 v ^6-4 \alpha v ^6+5 \alpha ^2 v ^4 z-5 \alpha v ^4 z-60 \alpha v ^2 z ^2-10 v ^4 z-20 z ^3$\\\hline
\end{tabular}
\smallskip
\caption{Conservation laws for the modified forward-forward MFG \eqref{eq: foff mfg system3}
up to degree 6. }
\label{T5}
\end{table}
\subsection{Forward-forward MFGs with a logarithmic nonlinearity -- existence of a solution}
Here, we prove the existence of a solution of \eqref{eq: wave equation with H} for a quadratic Hamiltonian. For our proof, we use the ideas in the preceding subsection and rewrite \eqref{eq: wave equation with
H} as a system of conservation laws. The system we consider here is a special case of the ones investigated in \cite{DiPerna83}, in the whole space, and in \cite{DeStTz00}, in the periodic case. More precisely, we examine the system
\begin{equation} \label{eq : elastodynamics}
\begin{cases}
v_t -w_x= 0 \\
w_t - \sigma(v)_x=0
\end{cases}
\end{equation}
with the initial conditions
\begin{equation}
\begin{cases}
v(x,0)=v_0(x)\\
w(x,0)=w_0(x).
\end{cases}
\end{equation}
Here, $\sigma : \mathbb{R}\to \mathbb{R}$ is a $C^2$ function, $\sigma'> 0$, $(v,w)$ is the unknown
and $(x,t)\in {\mathbb{T}}\times[0,T]$. We consider initial data $v_0, w_0\in L^\infty({\mathbb{T}})$. As pointed out in \cite{DeStTz00}, if (\ref{eq
: elastodynamics}) has a $C^1$ solution then there exists $u$ such that
$w=u_t,$ $v=u_x$ and a straightforward computation yields \begin{equation} \label{eq
: Wave elastodynamics}
u_{tt}- (\sigma(u_x))_x=0.
\end{equation}
In addition, for a quadratic Hamiltonian, $H(p)=\frac{p^2}{2}$,
\eqref{eq : Wave elastodynamics} is equivalent to \eqref{eq: wave equation
with H} for
\begin{equation}
\label{sigma}
\sigma(z)=z+\frac{z^3}{3}.
\end{equation}
By proving the existence of a solution to
\eqref{eq: wave equation with H}, we get a solution of the corresponding forward-forward MFG.
In \cite{DiPerna83}, the author considers the viscosity approximation\begin{equation} \label{eq : viscosity elastodynamics}
\begin{cases}
v_t^\varepsilon -w_x^\varepsilon= \varepsilon v^\varepsilon_{xx} \\
w_t^\varepsilon - \sigma(v^\varepsilon)_x=\varepsilon w^\varepsilon_{xx}
\end{cases}
\end{equation}
and proves that, in the limit $\varepsilon\to 0$, $(v^\varepsilon, w^\varepsilon)$ converges to a solution of
\eqref{eq : elastodynamics}.
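Although the discussion below is theoretical, \eqref{eq : elastodynamics} is straightforward to approximate numerically. The following Python sketch (an assumed Lax--Friedrichs discretisation with illustrative grid parameters, not taken from the text) integrates the system with $\sigma(z)=z+z^3/3$ on the torus.
\begin{verbatim}
import numpy as np

def lax_friedrichs(v, w, dx, dt, n_steps):
    # p-system v_t - w_x = 0, w_t - sigma(v)_x = 0, periodic in x
    sigma = lambda z: z + z**3/3
    for _ in range(n_steps):
        vp, vm = np.roll(v, -1), np.roll(v, 1)
        wp, wm = np.roll(w, -1), np.roll(w, 1)
        v, w = (0.5*(vp + vm) + dt/(2*dx)*(wp - wm),
                0.5*(wp + wm) + dt/(2*dx)*(sigma(vp) - sigma(vm)))
    return v, w

N = 400
x = np.linspace(0.0, 1.0, N, endpoint=False)
v0, w0 = 0.5*np.sin(2*np.pi*x), np.zeros(N)
dx = 1.0/N
dt = 0.2*dx   # CFL-type restriction: the wave speeds are +-sqrt(1 + v^2)
v, w = lax_friedrichs(v0, w0, dx, dt, n_steps=2000)
print(float(np.abs(v).max()), float(np.abs(w).max()))
\end{verbatim}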
For the reader's convenience, we reproduce a result from \cite{DeStTz00} that ensures the existence of a solution of (\ref{eq : elastodynamics}) in ${\mathbb{T}}\times[0,T]$.
\begin{theorem}
Let $\sigma$ be given by \eqref{sigma}. Suppose that $v_0, w_0\in L^\infty({\mathbb{T}}).$
Then \eqref{eq : elastodynamics} has a weak solution $v,w\in L^\infty({\mathbb{T}}\times [0,T])$.
\end{theorem}
\begin{proof}
The theorem follows from the results in \cite{DeStTz00}
because $\sigma'>0$ and $\sigma''$ vanishes at a single point.
Furthermore, as shown in \cite{DiPerna83}, because
\begin{equation}
z\sigma''(z)>0 \qquad \forall z\neq 0\label{eq: stress constraints 2}
\end{equation}
and the initial data belong to $L^{\infty}(\mathbb{T})$, the theory of invariant regions developed in \cite{ChCoSm77} ensures that $$\|v\|_{L^{\infty}(\mathbb{T}\times[0,T])}+ \|w\|_{L^{\infty}(\mathbb{T}\times[0,T])}\leqslant C. $$
\end{proof}
\subsection{Logarithmic forward-forward MFGs and Hamilton-Jacobi flows}
We end this section with a brief discussion of the connection between
the logarithmic forward-forward MFG and a class of Hamilton-Jacobi
flows introduced in \cite{BEJ}. As discussed in that reference, we consider the Hamilton-Jacobi equation
\begin{equation}\label{eq : Hamilton Jacobi}
u_t + G(u_x)=0.
\end{equation}
Assuming smoothness in the equation, we differentiate respectively with respect to $x$ and $t$ to obtain:
\begin{equation}\label{eq : Hamilton Jacobi diff x}
u_{tx} + G'(u_x)u_{xx}=0
\end{equation}
and
\begin{equation}\label{eq : Hamilton Jacobi diff t}
u_{tt} + G'(u_x)u_{tx}=0.
\end{equation}
Next, we combine (\ref{eq : Hamilton Jacobi diff x}) and (\ref{eq : Hamilton Jacobi diff t}) to get
\begin{equation}\label{eq : Hamilton Jacobi diff t,x combined}
u_{tt} - [G'(u_x)]^2 u_{xx}=0.
\end{equation}
Finally, we set
\begin{equation}\label{eq : H for HJ technique}
G(p)=\begin{cases}
\frac{1}{2}[p\sqrt{1+p^2}+\operatorname{arcsinh}(p)]\qquad &p\geqslant 0\\
-\frac{1}{2}[p\sqrt{1+p^2}+\operatorname{arcsinh}(p)]\qquad &p<0,
\end{cases}
\end{equation}
so that (\ref{eq : Hamilton Jacobi diff t,x combined}) becomes
\[
u_{tt}-(1+u_x^2)u_{xx}=0.
\]
We observe that $G$ is convex. Thus, we can compute the solution of \eqref{eq : Hamilton Jacobi} by using the Lax-Hopf formula. For that,
we introduce the Legendre transform
\[
G^*(v)=\sup_{p}\, \{pv -G(p)\}
\]
and, according to
the Lax-Hopf formula, we get the following representation for the solution of \eqref{eq : Hamilton Jacobi}
\begin{equation}
\label{efor}
u(x,t)=\inf_{y} \left\{t G^*\left(\frac{x-y}{t}\right)+u(y,0)\right\}.
\end{equation}
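For illustration, \eqref{efor} can be evaluated numerically by discretising the Legendre transform. The following Python sketch (ours; the grids and the initial datum are illustrative choices, not from the text) does this for the function $G$ in \eqref{eq : H for HJ technique}.
\begin{verbatim}
import numpy as np

def G(p):
    s = 0.5*(p*np.sqrt(1 + p**2) + np.arcsinh(p))
    return np.where(p >= 0, s, -s)

# Legendre transform G*(q) = sup_p ( p q - G(p) ), approximated on a grid
p_grid = np.linspace(-20, 20, 4001)
def G_star(q):
    q = np.atleast_1d(q)
    return np.max(q[:, None]*p_grid[None, :] - G(p_grid)[None, :], axis=1)

def u0(y):                      # smooth illustrative initial condition
    return np.cos(2*np.pi*y)

def u(x, t, y_grid=np.linspace(-3.0, 3.0, 2401)):
    # u(x,t) = inf_y { t G*((x-y)/t) + u(y,0) }
    return np.min(t*G_star((x - y_grid)/t) + u0(y_grid))

print(u(0.3, 0.5))
\end{verbatim}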
If $u(x,0)$ is differentiable, so is $u(x,t)$ for $0<t<T^*$, where $T^*$\ is the time of the first shock.
Now, we set
\[
m=e^{H(u_x(x,t))-G(u_x(x,t))}.
\]
Then, for smooth enough solutions, a simple calculation gives
\[
m_t-(m u_x)_x=0.
\]
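This calculation can also be checked symbolically. The following Python snippet (ours, using \texttt{sympy}) verifies it on the branch $u_x\ge 0$, where $G'(u_x)=\sqrt{1+u_x^2}$.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
v = sp.Function('v')(x, t)                                 # v stands for u_x
G = sp.Rational(1, 2)*(v*sp.sqrt(1 + v**2) + sp.asinh(v))  # branch u_x >= 0
m = sp.exp(v**2/2 - G)                                     # m = exp(H(u_x) - G(u_x))

expr = sp.diff(m, t) - sp.diff(m*v, x)
# impose the differentiated Hamilton-Jacobi equation: v_t = -G'(v) v_x
expr = expr.subs(sp.Derivative(v, t), -sp.sqrt(1 + v**2)*sp.Derivative(v, x))
print(sp.simplify(expr))                                   # prints 0
\end{verbatim}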
Thus, we see that $u$ and $m$ solve the forward-forward MFG
\begin{equation}
\label{lmfg}
\begin{cases}
u_t + \frac{u_x^2}{2}= \ln m\\
m_t-( m u_x)_x=0.
\end{cases}
\end{equation}
Finally, because \eqref{eq : Hamilton Jacobi diff t,x combined} depends only on $(G'(u_x))^2$, we can repeat the discussion above for the equation
\[
u_t-G(u_x)=0,
\]
and obtain another explicit solution.
The examples we discuss in this section show that \eqref{lmfg} develops shocks in finite time as the regularity of $u$ is at best the regularity of the solutions of the Hamilton-Jacobi equation \eqref{eq : Hamilton Jacobi}.
Moreover, the convergence results for Hamilton-Jacobi equations (see, for example, \cite{ CGMT,MR2237158, FATH4, MR2396521, MR1457088})\ show that
the function $u$ given by \eqref{efor} converges (up to additive constants) as $t\to \infty$ to a stationary solution of
\[
G(u_x)=\overline{G}.
\]
\section{Parabolic MFGs}
\label{pmfg}
In Section \ref{ss: ffmfg}, we examined the first-order forward-forward MFGs ($\varepsilon=0$) and determined several conserved quantities (entropies). In the
parabolic ($\varepsilon>0$) case, these entropies are dissipated.
Here, we use this dissipation to establish the long-time convergence of solutions.
As before, by differentiating
\eqref{ffmfg} with respect to $x$, we get
\begin{equation}\label{eq: parabolic ffmfg}
\begin{cases}
v_t+(g(m)-H(v))_x=\varepsilon v_{xx},\\
m_t-(mH'(v))_x= \varepsilon m_{xx},
\end{cases}
\end{equation}
where $v=u_x$. We assume that $g$ is $C^1$ and strictly increasing, and that $H$ is $C^2$ and strictly convex; that is, $H''(v)>0$ for all $v\in {\mathbb{R}}$. Additionally, we impose
\begin{equation}\label{eq: fixedmeanvalues}
\int\limits_{{\mathbb{T}}} v(x,0)dx=0, \quad \int\limits_{{\mathbb{T}}} m(x,0)dx=1.
\end{equation}
The foregoing conditions are natural because $v$ is the derivative of a periodic function, $u,$ and $m$ is a probability density.
A straightforward computation yields the following result.
\begin{lem}
Suppose that $v,m \in C^2({\mathbb{T}} \times (0,+\infty))\cap C({\mathbb{T}} \times [0,+\infty))$ solve \eqref{eq: parabolic ffmfg}. Furthermore, let $E(v,m)$ be a $C^2$ entropy for \eqref{eq: foff mfg system2}; that is, $E(v,m)$ satisfies \eqref{eq: cons E condition 1}. Then, \begin{equation}\label{eq: time derivative of entropy}
\frac{d}{dt}\int_{\mathbb{T}} E(v, m) dx= -\varepsilon \int_{\mathbb{T}} (v_x, m_x)^T D^2E(v, m)(v_x, m_x)dx.
\end{equation}
\end{lem}
Now, let $P(m)$ be as in \eqref{Pofm}. Note that $P$ is strictly convex when $g$ is strictly increasing.
\begin{lem}
Let $\varepsilon>0$. Suppose $v,m \in C^2({\mathbb{T}} \times (0,+\infty))\cap C({\mathbb{T}} \times [0,+\infty))$ solve \eqref{eq: parabolic ffmfg} and satisfy \eqref{eq: fixedmeanvalues}. Then, for all $t\geqslant 0$, we have that \begin{equation}\label{eq: fixedmeanvalues t}
\int\limits_{{\mathbb{T}}} v(x,t)dx=0, \quad \int\limits_{{\mathbb{T}}} m(x,t)dx=1.
\end{equation}
Furthermore, if $g$ is increasing, we have that
\begin{align}\label{eq: time derivative of convex solutions}
&\frac{d}{dt}\int_{{\mathbb{T}}} H(v(x,t))+P(m(x,t)) dx\\&\qquad = -\varepsilon \int_{{\mathbb{T}}} H''(v(x,t))v_x^2(x,t)+P''(m(x,t))m_x^2(x,t)dx\leqslant 0.
\end{align}
\end{lem}
\begin{proof}
In Section \ref{ss: ffmfg}, we observed that $E_0(v,m)=v, E_1(v,m)=m,$ and $E_2(v,m):=H(v)+P(m)$ are entropies for \eqref{eq: foff mfg system2}. Hence, we apply \eqref{eq: time derivative of entropy} to $E_0,E_1$ and $E_2$ and obtain \eqref{eq: fixedmeanvalues t} and \eqref{eq: time derivative of convex solutions}.
The inequality in \eqref{eq: time derivative of convex solutions} follows from the convexity of $H$ and $P$.
\end{proof}
\subsection{Poincar\'{e}-type inequality}
To establish the long-time convergence, we need the following Poincar\'{e}-type inequality:
\begin{theorem}\label{thm: poincare}
Let $I\subset {\mathbb{R}}$ be an open interval and $\Phi \in C^2(I)$ be a strictly convex function. Furthermore, let $\Psi \in C^1(I)$ be such that
\begin{equation}\label{eq: psi}
\Psi'(s)=\sqrt{\Phi''(s)},\quad s\in I.
\end{equation}
Then, for every $f:{\mathbb{T}}\to I$, $f \in C^1({\mathbb{T}})$, we have
\begin{equation}\label{ineq: poincare 1}
\int\limits_{{\mathbb{T}}} \Phi(f(x))dx - \Phi\left(\int\limits_{{\mathbb{T}}} f(x)dx\right)\leqslant C_{\Phi}(a,b) \int\limits_{{\mathbb{T}}} \Phi''(f(x))f'(x)^2dx,
\end{equation}
where $a=\min \limits_{{\mathbb{T}}}f$, $b=\max \limits_{{\mathbb{T}}}f,$ and \begin{equation}\label{eq: cab}
C_{\Phi}(a,b)=\frac{\Phi(a)+\Phi(b)-2\Phi\left(\frac{a+b}{2}\right)}{(\Psi(b)-\Psi(a))^2}.
\end{equation}
Moreover, if
\begin{equation}\label{eq: c}
C_{\Phi}=\sup \limits_{a,b \in I} C_{\Phi}(a,b)<\infty,
\end{equation}
then
\begin{equation}\label{ineq: poincare 2}
\int\limits_{{\mathbb{T}}} \Phi(f(x))dx - \Phi\left(\int\limits_{{\mathbb{T}}} f(x)dx\right)\leqslant C_{\Phi} \int\limits_{{\mathbb{T}}} \Phi''(f(x))f'(x)^2dx
\end{equation}
for all $f:{\mathbb{T}}\to I$, $f\in C^1({\mathbb{T}})$.
\end{theorem}
\begin{proof}
Because \eqref{ineq: poincare 2} is an immediate consequence of \eqref{ineq: poincare 1}, we only need to prove the latter inequality. For that, we show that for every $f:{\mathbb{T}}\to I$, $f\in C^1({\mathbb{T}})$, with $a=\min \limits_{{\mathbb{T}}}f$ and $b=\max \limits_{{\mathbb{T}}} f$, we have \begin{equation}\label{ineq: reversejensen}
\int\limits_{{\mathbb{T}}} \Phi(f(x))dx - \Phi\left(\int\limits_{{\mathbb{T}}} f(x)dx\right)\leqslant \Phi(a)+\Phi(b)-2\Phi\left(\frac{a+b}{2}\right)
\end{equation}
and
\begin{equation}\label{ineq: nondegenracy}
\int\limits_{{\mathbb{T}}} \Phi''(f(x))f'(x)^2dx \geqslant (\Psi(b)-\Psi(a))^2.
\end{equation}
If $a=b,$ $f$ is constant and the result is trivial. Thus, we assume $a<b$. Let $A=\int\limits_{{\mathbb{T}}} f(x)dx$. We have that $a\leqslant A\leqslant b$. Furthermore, because $\Phi$ is convex, we have that
\[\Phi(s)\leqslant \frac{b-s}{b-a} \Phi(a)+\frac{s-a}{b-a}\Phi(b)=:L(s),\quad \forall\ s\in[a,b].
\]
Now, we observe that $\Phi(s)-L(s)$ is a convex function that vanishes at $s=a,b$. Accordingly, for $a<s<\frac{a+b}{2}$ there exists $\lambda>\frac 1 2$ such that
\[
\frac{a+b}{2}=\lambda s +(1-\lambda) b.
\]
Therefore,
\begin{align}
\label{mi}
\Phi\left(\frac{a+b}{2}\right)-L\left(\frac{a+b}{2}\right)&\leqslant \lambda (\Phi(s)-L(s))+(1-\lambda) (\Phi(b)-L(b))\\
&=\lambda (\Phi(s)-L(s)).\notag\end{align}
Arguing in a similar way for $\frac{a+b}{2}\leqslant s<b$, we see that \eqref{mi} also holds for some $\lambda>\frac 1 2$.
Consequently, we have
\begin{align*}
L(s)-\Phi(s)&\leqslant 2\left(L\left(\frac{a+b}{2}\right)-\Phi\left(\frac{a+b}{2}\right)\right)\\&=\Phi(a)+\Phi(b)-2\Phi\left(\frac{a+b}{2}\right)
\end{align*}
for all $s\in [a, b]$. Hence, we get
\[
\int\limits_{{\mathbb{T}}} \Phi(f(x))dx \leqslant \int\limits_{{\mathbb{T}}} L(f(x))dx=L\left(\int\limits_{{\mathbb{T}}}f(x)dx\right)=L(A).
\]
Therefore, \[\int\limits_{{\mathbb{T}}} \Phi(f(x))dx - \Phi\left(\int\limits_{{\mathbb{T}}} f(x)dx\right)\leqslant L(A)-\Phi(A)\leqslant \Phi(a)+\Phi(b)-2\Phi\left(\frac{a+b}{2}\right).
\]
Suppose $f(x_0)=a$ and $f(x_1)=b$. Then, we have that
\begin{align*}
\int\limits_{{\mathbb{T}}} \Phi''(f(x))f'(x)^2dx&=\int\limits_{{\mathbb{T}}} \left(\frac{d\Psi(f(x))}{dx}\right)^2dx\geqslant \left(\int\limits_{{\mathbb{T}}} \left|\frac{d\Psi(f(x))}{dx}\right|dx\right)^2\\
&\geqslant \left(\int_{x_0}^{x_1} \left|\frac{d\Psi(f(x))}{dx}\right|dx\right)^2\geqslant \left|\int_{x_0}^{x_1} \frac{d\Psi(f(x))}{dx}dx\right|^2\\
&=(\Psi(b)-\Psi(a))^2.
\end{align*}
\end{proof}
Next, we present some convex functions $\Phi$ for which \eqref{eq: c} holds.
\begin{pro}
Let the pair $(I,\Phi)$, with $\Phi \in C^2(I)$, be one of the following:
\begin{enumerate}
\item $I=(0,\infty),\ \Phi(s)=s^p$, where $p>1$.
\item $I=(0,\infty),\ \Phi(s)=s^p$, where $p<0$.
\item $I=(0,\infty),\ \Phi(s)=-s^p$, where $0<p<1$.
\item $I=(0,\infty),\ \Phi(s)=-\ln s$.
\item $I=(0,\infty),\ \Phi(s)=s\ln s$.
\item $I={\mathbb{R}},\ \Phi(s)=s^{2n}$, where $n\in \mathbb{N}$.
\item $I={\mathbb{R}},\ \Phi(s)=e^{\alpha s}$, where $\alpha \in {\mathbb{R}}$.
\end{enumerate}
Then, $C_{\Phi}$ defined in \eqref{eq: c} is finite. Consequently, \eqref{ineq: poincare 2} holds.
\end{pro}
\begin{proof}
The proof of the preceding result is elementary though tedious, and we omit it here.
\end{proof}
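The following Python snippet (an illustrative numerical spot check, not part of the proofs) verifies \eqref{ineq: poincare 1} for $\Phi(s)=s\ln s$ on $I=(0,\infty)$, where \eqref{eq: psi} gives $\Psi(s)=2\sqrt{s}$, and a sample positive $f\in C^1({\mathbb{T}})$.
\begin{verbatim}
import numpy as np

N = 10_000
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = 2.0 + np.sin(2*np.pi*x) + 0.5*np.cos(6*np.pi*x)  # positive and periodic
fp = np.gradient(f, x[1] - x[0])                     # approximation of f'

Phi = lambda s: s*np.log(s)
Psi = lambda s: 2.0*np.sqrt(s)                       # since Phi''(s) = 1/s
a, b = f.min(), f.max()

lhs = np.mean(Phi(f)) - Phi(np.mean(f))
C_ab = (Phi(a) + Phi(b) - 2*Phi((a + b)/2)) / (Psi(b) - Psi(a))**2
rhs = C_ab * np.mean(fp**2 / f)                      # Phi''(f) f'^2 = f'^2 / f
print(lhs, rhs, lhs <= rhs)                          # expect True
\end{verbatim}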
\subsection{Stability of Jensen's inequality} The proof of the long-time convergence of the solutions of \eqref{eq: parabolic ffmfg} is based on the following stability property of Jensen's inequality:
\begin{theorem}\label{thm: jensenstability}
Let $I\subset {\mathbb{R}}$ be an open interval, not necessarily bounded, and $\Phi \in C(I)$ a strictly convex function. Furthermore, let $A \in I$ and $f_t:{\mathbb{T}}\to I$, $\{f_t\}_{t>0} \subset C({\mathbb{T}})$, be such that,
for all $t\geqslant 0$, \[\int\limits_{{\mathbb{T}}} f_t(x)dx=A
\]
and
\[\lim\limits_{t \to \infty} \int\limits_{{\mathbb{T}}} \Phi(f_t(x))dx-\Phi(A) =0.
\]
Then,\[\lim\limits_{t\to \infty} \int\limits_{{\mathbb{T}}}|f_t(x)-A|dx=0.
\]
\end{theorem}
\begin{remark}
Note that we do not impose uniform $L^{\infty}$ bounds on the family $\{f_t\}_{t>0}$.
\end{remark}
Before proving Theorem \ref{thm: jensenstability}, we need the following technical lemma.
We recall that $\mathcal{L}^1$ denotes the one-dimensional Lebesgue measure.
\begin{lemma}
Let $I\subset {\mathbb{R}}$ be some interval and $\Phi \in C(I)$ a convex function. Then, for every continuous $f:{\mathbb{T}}\to I,$ we have that
\begin{equation}\label{eq: JensenStability}
\int\limits_{{\mathbb{T}}} \Phi(f(x))dx-\Phi\left(\int\limits_{{\mathbb{T}}} f(x) dx\right) \geqslant p \Phi(A_1)+ q \Phi(A_2)-(p+q) \Phi(A)\geqslant 0,
\end{equation}
where
\begin{equation}
\label{eq: jensenstability_suppl}
A=\int\limits_{{\mathbb{T}}} f(x) dx,\quad p=\mathcal{L}^1(\{f< A\}),\quad q=\mathcal{L}^1(\{f\geqslant A\}),
\end{equation}
\begin{equation}
\label{eq: jensenstability_supplB}
A_1=\fint\limits_{f<A} f(x)dx= A- \frac{\gamma(f)}{p},\quad A_2= \fint\limits_{f\geqslant A} f(x)dx= A+ \frac{\gamma(f)}{q},
\end{equation}
and
\begin{equation}
\label{eq: jensenstability_supplC}
\gamma(f)=\int\limits_{{\mathbb{T}}} (f(x)-A)^- dx=\int\limits_{{\mathbb{T}}} (f(x)-A)^+ dx=\frac{1}{2}\int\limits_{{\mathbb{T}}} |f(x)-A| dx.
\end{equation}
\end{lemma}
\begin{proof}
By rearranging \eqref{eq: JensenStability} and observing that $p+q=1$, we get the inequality
\begin{align*}
&\int\limits_{f<A} \Phi(f(x))dx+\int\limits_{f\geqslant A} \Phi(f(x))dx\\
&\geqslant \mathcal{L}^1(\{f(x)< A\}) \Phi\left(\fint\limits_{f<A} f(x)dx\right)+\mathcal{L}^1(\{f(x)\geqslant A\}) \Phi\left(\fint\limits_{f\geqslant A} f(x)dx\right).
\end{align*}
The result follows by observing that the preceding inequality is a consequence of Jensen's inequality.
\end{proof}
Now, we are ready to prove Theorem \ref{thm: jensenstability}.
\begin{proof}[Proof of Theorem \ref{thm: jensenstability}] Let $p_t,q_t,A_1^t,A_2^t$ and $\gamma_t:=\gamma(f_t)$ be as in \eqref{eq: jensenstability_suppl}-\eqref{eq: jensenstability_supplC} for $f=f_t$. From \eqref{eq: JensenStability}, we have that
\[p_t \Phi(A^t_1)+ q_t \Phi(A^t_2)-(p_t+q_t) \Phi(A) \to 0.
\]
By contradiction, we assume that $f_t$ does not converge to the common average value $A$. Then, without loss of generality, we can assume that
there exists $\varepsilon_0>0$
such that \[\gamma_{t_n} \geqslant \varepsilon_0>0
\]
for some sequence $t_n\to\infty$. Consequently,
\[|A^{t_n}_1-A|=\frac{\gamma_{t_n}}{p_{t_n}}\geqslant \varepsilon_0
\]
and
\[|A^{t_n}_2-A|=\frac{\gamma_{t_n}}{q_{t_n}}\geqslant \varepsilon_0.
\]
Because $\Phi$ is strictly convex, we have that
\begin{align*}
k=\inf\limits_{|s-A| \geqslant \varepsilon_0} \frac{\Phi(s)-\Phi(A)-(s-A) \xi}{|s-A|} = \min\limits_{|s-A| = \varepsilon_0} \frac{\Phi(s)-\Phi(A)-(s-A) \xi}{|s-A|} >0
\end{align*}
for any $\xi$ in the subdifferential $\partial^-\Phi(A)$. Therefore,
\begin{align*}
p_{t_n} \Phi(A^{t_n}_1)+ q_{t_n} \Phi(A^{t_n}_2)-(p_{t_n}+q_{t_n}) \Phi(A)&=p_{t_n}(\Phi(A^{t_n}_1)-\Phi(A)-(A^{t_n}_1-A) \xi)\\
&+q_{t_n}(\Phi(A^{t_n}_2)-\Phi(A)-(A^{t_n}_2-A) \xi) \\
&\geqslant k p_{t_n} |A^{t_n}_1-A|+k q_{t_n} |A^{t_n}_2-A|\\
&=2k \gamma_{t_n}\geqslant 2k \varepsilon_0,
\end{align*}
which is a contradiction.
\end{proof}
If we have uniform $L^{\infty}$ bounds, we have the following stronger stability property for Jensen's inequality:
\begin{theorem}\label{thm: jensenstability strong}
Let $I\subset {\mathbb{R}}$ be some interval and $\Phi \in C(I)$ a strictly convex function. Furthermore, let $a<b$ be real numbers and consider a family of functions $f_t:{\mathbb{T}}\to I,$ $\{f_t\}_{t>0}\subset C({\mathbb{T}}),$ such that
\[ a\leqslant f_t(x) \leqslant b,\quad\forall x \in {\mathbb{T}},\quad\forall t>0,
\]
and
\[\lim\limits_{t \to \infty} \int\limits_{{\mathbb{T}}} \Phi(f_t(x))dx-\Phi\left(\int\limits_{{\mathbb{T}}} f_t(x) dx\right) =0.
\]
Then, we have that
\begin{equation}\label{eq: strong stability l1}
\lim\limits_{t\to \infty} \int\limits_{{\mathbb{T}}}|f_t(x)-A_t|dx=0,
\end{equation}
where $A_t=\int\limits_{{\mathbb{T}}} f_t(x)dx$. Consequently,
\begin{equation}\label{eq: strong stability lp}
\lim\limits_{t\to \infty} \int\limits_{{\mathbb{T}}}|f_t(x)-A_t|^pdx=0
\end{equation}
for all $p>1$.
\end{theorem}
\begin{proof} Because $f_t$ is bounded, \eqref{eq: strong stability lp} follows from \eqref{eq: strong stability l1}. Therefore, we only need to prove the latter. Let $p_t,q_t,A_1^t,A_2^t$ and $\gamma_t:=\gamma(f_t)$ be as in \eqref{eq: jensenstability_suppl}-\eqref{eq:
jensenstability_supplC} for $f=f_t$.
By contradiction, we assume that there exists $\varepsilon_0>0$ such that
\[\gamma_{t_n} \geqslant \varepsilon_0>0,
\]
for some $t_n\to \infty$. Accordingly, \[|A^{t_n}_1-A_{t_n}|=\frac{\gamma_{t_n}}{p_{t_n}}\geqslant \varepsilon_0,
\]
and
\[|A^{t_n}_2-A_{t_n}|=\frac{\gamma_{t_n}}{q_{t_n}}\geqslant \varepsilon_0.
\]
We have that $a\leqslant A_t,A^t_1,A^t_2\leqslant b. $ Therefore, by compactness, we can assume that
\[A^{t_n}_1\to A_1,\quad A^{t_n}_2\to A_2,\quad A_{t_n}\to A,\quad p_{t_n}\to p,\quad q_{t_n} \to q,
\]
extracting a subsequence if necessary. Moreover, we have that
\[|A_1-A|,|A_2-A| \geqslant \varepsilon_0>0.
\]
Furthermore, since $\Phi$ is continuous, we have that
\[p_{t_n} \Phi(A^{t_n}_1)+ q_{t_n} \Phi(A^{t_n}_2)-(p_{t_n}+q_{t_n}) \Phi(A_{t_n})\to p \Phi(A_1)+q\Phi(A_2)-(p+q)\Phi(A)=0,
\]
using \eqref{eq: JensenStability}. Note that
\[p_tA^t_1+q_tA^t_2=(p_t+q_t)A_t\]
for all $t>0$. Hence,
\[pA_1+qA_2=(p+q)A.\]
Next, since $\Phi$ is strictly convex, we get that $p=0$ or $q=0$. But then $p_{t_n} \to 0$ or $q_{t_n} \to 0$. Suppose $p_{t_n} \to 0$. Then,
\[\varepsilon_0\leqslant \gamma_{t_n}= \int\limits_{f_{t_n}<A_{t_n}}|f_{t_n}(x)-A_{t_n}|dx\leqslant (b-a) \mathcal{L}^1(\{f_{t_n}<A_{t_n}\})=(b-a)p_{t_n},
\]
which is a contradiction. Similarly, we get a contradiction if $q_{t_n} \to 0$.
\end{proof}
\subsection{Parabolic forward-forward MFGs -- convergence}
Now, we are ready to prove the convergence result for \eqref{eq: parabolic ffmfg}.
\begin{theorem}
Let $H\in C^2({\mathbb{R}})$ be strictly convex and $g\in C^1\left((0,\infty)\right)$ be strictly increasing. Suppose that $C_{H},C_{P}<\infty$ (see \eqref{eq: c}), where $P$ is as in \eqref{Pofm}. Furthermore, let $v,m \in C^2({\mathbb{T}} \times (0,+\infty))\cap C({\mathbb{T}} \times
[0,+\infty)),\ m>0,$ solve \eqref{eq: parabolic ffmfg} and satisfy \eqref{eq: fixedmeanvalues}. Then, we have that
\begin{equation}\label{eq: longtime l1}
\lim \limits_{t\to \infty} \int\limits_{{\mathbb{T}}} |v(x,t)|dx=0,\quad \lim \limits_{t\to \infty} \int\limits_{{\mathbb{T}}} |m(x,t)-1|dx=0.
\end{equation}
Moreover, if \[\sup\limits_{t\geqslant 0} \|v(\cdot,t)\|_{C({\mathbb{T}})} \quad \text{and}\quad \sup\limits_{t\geqslant 0} \|m(\cdot,t)\|_{C({\mathbb{T}})}<\infty,
\]
then, for all $1<p<\infty$,
\begin{equation}\label{eq: longtime lp}
\lim \limits_{t\to \infty} \int\limits_{{\mathbb{T}}} |v(x,t)|^pdx=0\quad\text{and} \quad \lim \limits_{t\to \infty} \int\limits_{{\mathbb{T}}} |m(x,t)-1|^pdx=0.
\end{equation}
\end{theorem}
\begin{proof}
Let $C_0:=\max\{C_H,C_P\}$. Let
\[I(t)=\int\limits_{{\mathbb{T}}} H(v(x,t))+P(m(x,t)) dx-H(0)-P(1).
\]
From \eqref{eq: fixedmeanvalues t},\ \eqref{eq: time derivative of convex solutions}, and \eqref{ineq: poincare 2}, we have that
\begin{align*}
\frac{dI(t)}{dt}&=-\varepsilon \int\limits_{{\mathbb{T}}} H''(v(x,t))v_x^2(x,t)+P''(m(x,t))m_x^2(x,t)dx\\
&\leqslant - \frac{\varepsilon}{C_0} \left(\int\limits_{{\mathbb{T}}} H(v(x,t))dx-H(0)+\int\limits_{{\mathbb{T}}}P(m(x,t)) dx-P(1)\right)\\
&=- \frac\varepsilon{C_0} I(t).
\end{align*}
Therefore, we get
\[I(t)\leqslant e^{-\frac\varepsilon{C_0} t} I(0)\qquad \forall t\geqslant 0,
\]
which yields
\[\lim\limits_{t\to \infty}I(t)=0.
\]
Furthermore, by Jensen's inequality, we have that
\[\int\limits_{{\mathbb{T}}} H(v(x,t))dx-H(0)\leqslant I(t),
\]
and
\[\int\limits_{{\mathbb{T}}} P(m(x,t))dx-P(1)\leqslant I(t).
\]
Therefore, we get
\[\lim\limits_{t\to \infty}\int\limits_{{\mathbb{T}}} H(v(x,t))dx-H(0)=\lim\limits_{t\to \infty}\int\limits_{{\mathbb{T}}} P(m(x,t))dx-P(1)=0,
\]
and we conclude using Theorem \ref{thm: jensenstability}.
\end{proof}
The primary purpose of this work is to study the weak convergence of a family of properly rescaled continuous-time Markov chains on integer compositions \cite{Gnedin97} and the limiting diffusions.
Our results should be compared with the scaling limits of natural up-down Markov chains on branching graphs, which have received substantial focus in the literature \cite{BoroOlsh09, Petrov09, Petrov13}. In this language, our models take place on the branching graph of integer compositions and on its boundary, which was represented in \cite{Gnedin97} as a space of interval partitions. This paper establishes a proper scaling limit connection between discrete models \cite{RogeWink20} and their
continuum analogues \cite{Paper1-1,Paper1-2,IPPAT} in the generality of \cite{ShiWinkel-1}.
We consider a class of ordered Chinese restaurant processes with departures, parametrised by $\alpha\in (0,1)$ and $\theta_1,\theta_2\ge 0$.
This model is a continuous-time Markov chain $(C(t), t\ge 0)$ on vectors of positive integers, describing customer numbers of occupied tables arranged in a row.
At time zero, say that there are $k\in \mathbb{N}=\{1,2,\ldots\} $ occupied tables and for each $i\le k$ the $i$-th occupied table enumerated from left to right has $n_i\in \mathbb{N}$ customers,
then the initial state is $C(0)= (n_1, \ldots , n_k)$.
New customers arrive as time proceeds, either taking a seat at an existing table or starting a new table, according to the following rule, illustrated in Figure~\ref{fig:PCRP}:
\begin{itemize}
\item for each occupied table, say there are $m\in\mathbb{N}$ customers,
a new customer comes to join this table at rate $m- \alpha$;
\item at rate $\theta_1$, a new customer enters to start a new table to the left of the leftmost table;
\item at rate $\theta_2$, a new customer begins a new table to the right of the rightmost table;
\item between each pair of two neighbouring occupied tables, a new customer enters and begins a new table there at rate $\alpha$.
\end{itemize}
We refer to the arrival of a customer as an \emph{up-step}.
Furthermore, each customer leaves at rate $1$ (a \emph{down-step}).
By convention, the chain jumps from the null vector $\emptyset$ to state $(1)$ at rate $\theta:= \theta_1+\theta_2-\alpha$ if $\theta>0$, and $\emptyset$ is an absorbing state if $\theta\le 0$.
At every time $t\ge 0$, let $C(t)$ be the vector of customer numbers at occupied tables,
listed from left to right.
In this way we have defined a continuous-time Markov chain $(C(t), t\ge 0)$.
This process is referred to as a \emph{Poissonised up-down ordered Chinese restaurant process (PCRP) with parameters
$\alpha$, $\theta_1$ and $\theta_2$}, denoted by $\mathrm{PCRP}^{(\alpha)}_{C(0)}(\theta_1,\theta_2)$.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{oCRP.pdf}
\caption{The rates at which new customers arrive in a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$.
}
\label{fig:PCRP}
\end{figure}
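The dynamics above are easy to simulate by competing exponential clocks. The following Python sketch (ours, with illustrative parameter values) samples a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ path.
\begin{verbatim}
import random

def pcrp_step(tables, alpha, theta1, theta2):
    """One jump: return (holding time, next state); tables is the
    left-to-right list of table sizes, [] encodes the null vector."""
    theta = theta1 + theta2 - alpha
    if not tables:
        if theta <= 0:
            return None, tables            # absorbed at the null vector
        return random.expovariate(theta), [1]
    k = len(tables)
    rates = ([m - alpha for m in tables]               # join a table
             + [theta1] + [alpha]*(k - 1) + [theta2]   # open a new table
             + [float(m) for m in tables])             # departures
    dt = random.expovariate(sum(rates))
    i = random.choices(range(len(rates)), weights=rates)[0]
    new = list(tables)
    if i < k:                              # a customer joins table i
        new[i] += 1
    elif i < 2*k + 1:                      # a new table opens in gap i-k
        new.insert(i - k, 1)               # gap 0 = leftmost, gap k = rightmost
    else:                                  # a customer leaves table i-(2k+1)
        j = i - (2*k + 1)
        new[j] -= 1
        if new[j] == 0:
            del new[j]
    return dt, new

def simulate(T, alpha=0.5, theta1=0.3, theta2=0.7, tables=(1,)):
    t, tables = 0.0, list(tables)
    while t < T:
        dt, tables = pcrp_step(tables, alpha, theta1, theta2)
        if dt is None:
            break
        t += dt
    return tables

print(simulate(5.0))
\end{verbatim}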
This family of Markov chains is closely related to the well-known Chinese restaurant processes due to Dubins and Pitman
(see e.g.\@ \cite{CSP}) and their ordered variants studied in \cite{James2006,PitmWink09}.
When $\theta_2= \alpha$, a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \alpha)$ is studied in \cite{RogeWink20}. Notably, our generalisation includes cases $\theta=\theta_1+\theta_2-\alpha\in(-\alpha,0)$, which did not arise in \cite{James2006,PitmWink09,RogeWink20}.
Though we focus on the range $\alpha\in (0,1)$ in this paper, our model is clearly well-defined for $\alpha=0$ and it is straightforward to deal with this case; we include a discussion in Section~\ref{sec:zero}.
To state our first main result, we represent PCRPs in a space of interval partitions.
For $M\ge 0$, an \emph{interval partition} $\beta=\{U_i,i\in I\}$ of $[0,M]$ is a (finite or countably infinite) collection of disjoint open intervals $U_i=(a_i,b_i)\subseteq (0,M)$,
such that the (compact) set of partition points
$G(\beta):= [0,M]\setminus\bigcup_{i\in I}U_i$ has zero Lebesgue measure.
We refer
to the intervals $U\in\beta$ as \em blocks\em, to their lengths ${\rm Leb}(U)$ as their \em masses\em. We similarly refer to $\|\beta\|:=\sum_{U\in\beta}{\rm Leb}(U)$ as the
\em total mass \em of $\beta$.
We denote by $\mathcal{I}_{H}$ the set of all
interval partitions of $[0,M]$ for all $M\ge 0$.
This space is equipped with the metric $d_H$ obtained by applying the Hausdorff metric to the sets of partition points:
for every $\gamma,\gamma'\in \mathcal{I}_{H}$,
\[
d_{H} (\gamma, \gamma')
:= \inf \bigg\{r\ge 0\colon G(\gamma)\subseteq \bigcup_{x\in G(\gamma')} (x-r,x+r),~
G(\gamma')\subseteq \bigcup_{x\in G(\gamma)} (x-r,x+r) \bigg\}.
\]
Although $(\mathcal{I}_H, d_H)$ is not complete, the induced topological space is Polish \cite[Theorem~2.3]{Paper1-0}.
For $c>0$ and $\beta\in \mathcal{I}_H$, we define a \emph{scaling map} by
\[
c \beta:= \{(ca,cb)\colon (a,b)\in \beta\}.
\]
We shall regard a PCRP $(C(t),t\ge 0)$ as a c\`adl\`ag process in $(\mathcal{I}_H, d_H)$, by identifying any vector of positive integers $(n_1, \ldots, n_k)$ with an interval partition of $[0,n_1+\cdots+n_k]$:
\[
(n_1, \ldots, n_k) \quad \longleftrightarrow \quad \{ (s_{i-1},s_{i}), 1\le i\le k \} \quad \text{where}~ s_0=0 ~\text{and}~ s_i=n_1+\cdots+n_i.
\]
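For concreteness, the following Python helper (ours) implements this identification and evaluates $d_H$ between rescaled compositions via the finite sets of partition points.
\begin{verbatim}
import numpy as np

def partition_points(composition, scale=1.0):
    """G(beta) for the interval partition identified with the composition,
    rescaled by `scale`; for a composition this is the finite set of
    endpoints s_0 = 0 < s_1 < ... < s_k."""
    return scale*np.concatenate([[0.0], np.cumsum(composition, dtype=float)])

def d_H(comp1, comp2, scale1=1.0, scale2=1.0):
    g1 = partition_points(comp1, scale1)
    g2 = partition_points(comp2, scale2)
    d12 = np.min(np.abs(g2[None, :] - g1[:, None]), axis=1).max()
    d21 = np.min(np.abs(g1[None, :] - g2[:, None]), axis=1).max()
    return max(d12, d21)

print(d_H((2, 1, 3), (2, 4)))  # points {0,2,3,6} vs {0,2,6}: distance 1.0
\end{verbatim}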
We are now ready to state our main result, which is a limit theorem in distribution
in the space of $\mathcal{I}_H$-valued c\`adl\`ag functions $\mathbb{D}(\mathbb{R}_+, \mathcal{I}_H)$ with $\mathbb{R}_+ := [0,\infty)$, endowed with
the $J_1$-Skorokhod topology (see e.g.\@ \cite{Billingsley} for background).
\begin{theorem}\label{thm:crp-ip} Let $\alpha\in(0,1)$ and $\theta_1,\theta_2\ge 0$.
For $n\in \mathbb{N}$, let $(C^{(n)}(t),\, t\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ starting from $C^{(n)}(0)= \gamma^{(n)}$.
Suppose that the initial interval partitions $ \frac{1}{n} \gamma^{(n)}$ converge in distribution to
$\gamma\in \mathcal{I}_H$ as $n\to \infty$, under $d_H$.
Then there exists an $\mathcal{I}_H$-valued path-continuous Hunt process $(\beta(t), t\ge 0)$ starting from $\beta(0)= \gamma$, such that
\begin{equation}\label{mainthmeq}
\Big(\frac{1}{n} C^{(n)}(2 n t),\, t\ge 0 \Big)
\underset{n\to \infty}{\longrightarrow} (\beta(t),\, t\ge 0) , \quad \text{in distribution in $\mathbb{D}(\mathbb{R}_+,\mathcal{I}_H)$.}
\end{equation}
Moreover, set $\zeta^{(n)} =\inf\{ t\ge 0\colon C^{(n)}(t) =\emptyset \}$ and $\zeta =\inf\{ t\ge 0\colon \beta(t) =\emptyset \}$ to be the respective first hitting times of $\emptyset$. If $\gamma\neq\emptyset$, then \eqref{mainthmeq} holds jointly with $\zeta^{(n)}/2n\to \zeta$, in distribution.
\end{theorem}
We call the limiting diffusion $(\beta(t),\, t\ge 0)$ on $\mathcal{I}_H$ an \emph{$(\alpha,\theta_1,\theta_2)$-self-similar interval partition evolution}, denoted by $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$. These processes are indeed \emph{self-similar with index $1$}, in the language of self-similar Markov processes \cite{Lamperti72}, see also \cite[Chapter~13]{KyprianouBook}: if $(\beta(t), t\ge 0)$ is an
$\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution, then $(c \beta(c^{-1} t),\, t\ge 0)$ is an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution starting from
$c \beta(0)$, for any $c>0$.
While most $\mathrm{SSIP}$-evolutions have been constructed before \cite{Paper1-1,Paper1-2,IPPAT,ShiWinkel-1}, in increasing generality, Theorem \ref{thm:crp-ip} is the first scaling limit result with an $\mathrm{SSIP}$-evolution as its limit. In special cases, this was conjectured in \cite{RogeWink20}. In the following, we state some consequences and further developments. We relate to the literature in more detail in Section \ref{sec:lit}. We refer to Section \ref{sec:application} for applications particularly of the generality of the three-parameter family.
Our interval partition evolutions have multiple connections to \emph{squared Bessel processes}. More precisely, a squared Bessel process $Z=(Z(t),\,t\ge 0)$ starting from $Z(0)=m\ge 0$ and with ``dimension'' parameter $\delta\in \mathbb{R}$ is the unique strong solution of the following equation:
\[
Z(t) = m +\delta t + 2 \int_0^t \sqrt{|Z(s)|} d B(s),
\]
where $(B(t),\,t\ge 0)$ is a standard Brownian motion.
We refer to \cite{GoinYor03} for general properties of squared Bessel processes.
Let $\zeta(Z):= \inf \{ t\ge 0\colon Z(t)=0 \}$ be the first hitting time of zero. To allow $Z$ to re-enter $(0,\infty)$ where possible after hitting $0$,
we define the \emph{lifetime of $Z$} by
\begin{equation}\label{eq:besq-zeta}
\overline{\zeta}(Z):=
\begin{cases}
\infty, & \text{if}~ \delta>0,\\
\zeta(Z), &\text{if}~ \delta\le 0. \\
\end{cases}
\end{equation}
We write ${\tt BESQ}_m(\delta)$ for the law of a squared Bessel process $Z$ with dimension $\delta$ starting from $m$, in the case $\delta\le 0$ absorbed in $0$ at the end of its (finite) lifetime $\overline{\zeta}(Z)$. When $\delta\le 0$, by our convention ${\tt BESQ}_0(\delta)$ is the law of the constant zero process.
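For illustration, such a path can be approximated by an Euler--Maruyama scheme. The following Python sketch (ours, with an assumed discretisation) simulates ${\tt BESQ}_m(\delta)$, absorbing the path at $0$ when $\delta\le 0$.
\begin{verbatim}
import numpy as np

def besq_path(m, delta, T=1.0, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T/n_steps
    z = np.empty(n_steps + 1)
    z[0] = m
    for i in range(n_steps):
        if delta <= 0 and z[i] <= 0.0:
            z[i:] = 0.0               # absorbed at the end of the lifetime
            break
        dB = rng.normal(0.0, np.sqrt(dt))
        z[i + 1] = max(z[i] + delta*dt + 2.0*np.sqrt(abs(z[i]))*dB, 0.0)
    return z

path = besq_path(m=1.0, delta=-0.8)   # e.g. 2*theta = -0.8: hits 0 a.s.
print(path[-1])
\end{verbatim}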
In an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution, informally speaking,
each block evolves as ${\tt BESQ}(-2\alpha)$, independently of other blocks \cite{Paper1-1,Paper1-2}. Meanwhile, there is always immigration of rate $2\alpha$ between ``adjacent blocks'', rate $2\theta_1$ on the left \cite{IPPAT} and rate $2\theta_2$ on the right \cite{ShiWinkel-1}.
Moreover, the total mass process $(\|\beta(t)\|,\,t\ge 0)$ of any $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution $(\beta(t),\,t\ge 0)$ is ${\tt BESQ}_{\|\beta(0)\|}(2\theta)$ with
$\theta:=\theta_1+\theta_2-\alpha$. We discuss this more precisely in Section~\ref{sec:SSIP}.
We refer to $|2\theta|$
as the total \em immigration rate \em if $\theta>0$, and as the total \em emigration rate \em if $\theta<0$.
There are \emph{pseudo-stationary} $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolutions that have fluctuating total mass but stationary interval length proportions, in the sense of the following proposition (cf.\ \cite{Paper1-2}), for a family $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$, $\alpha\in(0,1)$, $\theta_1,\theta_2\ge 0$, of Poisson--Dirichlet interval partitions on the space $\mathcal{I}_{H,1}\subset \mathcal{I}_{H}$ of partitions of the unit interval. This family notably extends the subfamilies of \cite{GnedPitm05,PitmWink09,ShiWinkel-1}, whose ranked sequences of interval lengths in the Kingman simplex
\[
\nabla_{\infty} := \bigg\{ (x_1, x_2, \ldots) \colon x_1\ge x_2\ge \cdots \ge 0,\, \sum_{i\ge 1} x_i = 1 \bigg\}
\]
are members of the two-parameter family ${\tt PD}^{(\alpha)}(\theta)$, $\alpha\in(0,1)$, $\theta\ge 0$ of \em Poisson--Dirichlet distributions\em. Here, we include new cases of interval partitions, for which $\theta\in(-\alpha,0)$, completing the usual range of the two-parameter family of ${\tt PD}^{(\alpha)}(\theta)$ with $\alpha\in(0,1)$ of \cite[Definition~3.3]{CSP}. The further case $\theta_1=\theta_2=0$ is degenerate, with ${\tt PDIP}^{(\alpha)}(0,0)=\delta_{\{(0,1)\}}$.
\begin{proposition}[Pseudo-stationarity]\label{prop:ps-theta1theta2-nokill}
For $\alpha\in(0,1)$ and $\theta_1, \theta_2\ge 0$, consider independently $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}( \theta_1,\theta_2)$ and a ${\tt BESQ}(2 \theta)$-process $(Z(t),\,t\ge 0)$ with any initial distribution and parameter $\theta=\theta_1+\theta_2-\alpha$.
Let $(\beta(t),\, t\ge 0)$ be an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution starting from $\beta(0)=Z(0) \bar\gamma$.
Fix any $t\ge 0$, then $\beta(t)$ has the same distribution as $Z(t) \bar\gamma$.
\end{proposition}
We refer to Definition~\ref{defn:pdip} for a description of the probability distribution $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$ on unit interval partitions. We prove in Proposition~\ref{prop:ocrp-pdip} that $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$ gives the limiting block sizes in their left-to-right order of a new three-parameter family of
\emph{composition structures} in the sense of \cite{Gnedin97}.
For the special case $\theta_2=\alpha$, which was introduced in \cite{GnedPitm05, PitmWink09}, we also write $\mathtt{PDIP}^{(\alpha)}(\theta_1):= \mathtt{PDIP}^{(\alpha)}( \theta_1, \alpha)$ and recall a construction in Definition~\ref{defn:pdip-alphatheta}.
As in the case $\theta_2=\alpha$ studied in \cite{Paper1-2,IPPAT}, we define an associated family of $\mathcal{I}_{H,1}$-valued evolutions via time-change and renormalisation (``de-Poissonisation'').
\begin{definition}[De-Poissonisation and $\mathrm{IP}^{(\alpha)} (\theta_1, \theta_2)$-evolutions]
\label{defn:dePoi}
Consider $\gamma\in \mathcal{I}_{H,1}$,
let $\boldsymbol{\beta}:= (\beta(t),\, t\ge 0)$ be an $\mathrm{SSIP}^{(\alpha)} (\theta_1, \theta_2)$-evolution starting from $\gamma$ and define a time-change function $\tau_{\boldsymbol{\beta}}$ by
\begin{equation}\label{eq:tau-beta}
\tau_{\boldsymbol{\beta}}(u):= \inf \left\{ t\ge 0\colon \int_0^t \|\beta(s)\|^{-1} d s>u \right\}, \qquad u\ge 0.
\end{equation}
Then the process on $\mathcal{I}_{H,1}$ obtained from $\boldsymbol{\beta}$ via the following \emph{de-Poissonisation}
\[
\overline{\beta}(u):= \big\| \beta(\tau_{\boldsymbol{\beta}}(u)) \big\|^{-1} \beta(\tau_{\boldsymbol{\beta}}(u)),\qquad u\ge 0,
\]
is called a \emph{Poisson--Dirichlet $(\alpha,\theta_1,\theta_2)$-interval partition evolution} starting from $\gamma$, abbreviated as $\mathrm{IP}^{(\alpha)} (\theta_1, \theta_2)$-evolution.
\end{definition}
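On a discretised total-mass path, the time change \eqref{eq:tau-beta} is easily approximated. The following Python sketch (ours, with a toy mass path) computes, for each $u$, the grid index of $\tau_{\boldsymbol{\beta}}(u)$.
\begin{verbatim}
import numpy as np

def depoissonise_times(M, dt, u_grid):
    """M[i] ~ ||beta(i*dt)||; return for each u the first grid index i with
    int_0^{i dt} ||beta(s)||^{-1} ds > u, approximating tau(u)/dt."""
    A = np.concatenate([[0.0], np.cumsum(dt/M)])[:-1]
    return np.clip(np.searchsorted(A, u_grid, side='right'), 0, len(M) - 1)

t = np.linspace(0.0, 1.0, 1001)
M = 1.0 + 0.5*np.sin(2*np.pi*t)   # stand-in for ||beta(t)||, bounded away from 0
idx = depoissonise_times(M, t[1] - t[0], np.array([0.1, 0.5, 1.0]))
print(t[idx])                     # approximate values of tau(u)
\end{verbatim}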
\begin{theorem}\label{thm:dP}
Let $\alpha\!\in\!(0,1)$, $\theta_1, \theta_2\!\ge\! 0$.
An $\mathrm{IP}^{(\alpha)} (\theta_1, \theta_2)$-evolution is a path-continuous Hunt process on
$(\mathcal{I}_{H,1},d_{H})$, is continuous in the initial state and has a stationary distribution
$\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$.
\end{theorem}
Define $\mathcal{H}$ to be the commutative unital algebra of functions on $\nabla_{\infty}$ generated by
$q_k(x) = \sum_{i\ge 1} x_i^{k+1}$, $k\ge 1$, and $q_0(x)=1$.
For every $\alpha\in (0,1)$ and $\theta>-\alpha$, define an operator $\mathcal{B}_{\alpha,\theta}\colon \mathcal{H} \to \mathcal{H}$ by
\[
\mathcal{B}_{\alpha, \theta}:= \sum_{i\ge 1} x_i \frac{\partial^2}{\partial x_i^2}
-\sum_{i,j\ge 1} x_i x_j \frac{\partial^2}{\partial x_i \partial x_j}
- \sum_{i\ge 1} (\theta x_i +\alpha ) \frac{\partial}{\partial x_i}.
\]
It has been proved in \cite{Petrov09} that there is a Markov process on $\nabla_{\infty}$
whose (pre-)generator on $\mathcal{H}$ is $\mathcal{B}_{\alpha, \theta}$, which shall be referred to as the
Ethier--Kurtz--Petrov diffusion with parameter $(\alpha, \theta)$, for short ${\tt EKP}(\alpha,\theta)$-diffusion;
moreover, ${\tt PD}^{(\alpha)}(\theta)$ is the unique invariant probability measure
for ${\tt EKP}(\alpha, \theta)$. In \cite{Paper1-3}, the following connection will be established.
\begin{itemize}
\item Let $\alpha\in(0,1)$, $\theta_1 ,\theta_2 \ge 0$ with $\theta_1+\theta_2 >0$.
For an $\mathrm{IP}^{(\alpha)} (\theta_1, \theta_2)$-evolution $(\overline{\beta}(u),\, u\ge 0)$,
list the lengths of intervals of $\overline{\beta}(u)$ in decreasing order in a sequence $W(u)\in \nabla_{\infty}$.
Then the process $(W(u/2), u\ge 0)$ is an ${\tt EKP}(\alpha, \theta)$-diffusion with $\theta:= \theta_1+\theta_2-\alpha>-\alpha$.
\end{itemize}
\subsection{Connections with interval partition evolutions in the literature}\label{sec:lit}
The family of $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolutions
generalises the two-parameter model of \cite{IPPAT}:
\begin{itemize}
\item for $\alpha\in (0,1)$ and $\theta_1> 0$, an $\mathrm{SSIP}^{(\alpha)}( \theta_1, \alpha)$-evolution is an \emph{$(\alpha, \theta_1)$-self-similar interval partition evolution} in the sense of \cite{IPPAT}, which we will also refer to as an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution. The properties of these limiting processes have been proved in \cite{IPPAT}.
\end{itemize}
For this smaller class with $\theta_2 =\alpha$, \cite{RogeWink20} provides a study of the family $\mathrm{PCRP}^{(\alpha)}( \theta_1, \alpha)$ and conjectures the existence of diffusion limits, which is thus confirmed by our Theorem~\ref{thm:crp-ip}.
We conjecture that the convergence in Theorem~\ref{thm:crp-ip} can be extended to the case where $G(\frac{1}{n} \gamma^{(n)})$ converges in distribution, with respect to the Hausdorff metric, to a compact set of positive Lebesgue measure; then the limiting process is a generalized interval partition evolution in the sense of \cite[Section~4]{IPPAT}.
For the two-parameter case ($\theta_2=\alpha$), \cite{RivRiz} obtains the scaling limits of a closely related family of discrete-time up-down ordered Chinese restaurant processes, in which at each time exactly one customer arrives according to probabilities proportional to the up-rates of $\mathrm{PCRP}^{(\alpha)}( \theta_1, \alpha)$, see Definition~\ref{defn:oCRP}, and then one customer leaves uniformly, such that the number of customers remains constant. The method in \cite{RivRiz} is by analysing the generator of the Markov processes, which is quite different from this work, and neither limit theorem implies the other.
It is conjectured that the limits of \cite{RivRiz} are $\mathrm{IP}^{(\alpha)}(\theta_1,\alpha)$-evolutions, and we further conjecture that this extends to the three-parameter setting of Definition~\ref{defn:dePoi}.
An $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution $\boldsymbol{\beta}= (\beta(t),t\ge 0)$ killed at its first hitting time $\zeta(\boldsymbol{\beta})$ of $\emptyset$ has been constructed in \cite{ShiWinkel-1}. We denote this killed process by $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$.
For the $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution itself, there are three different phases, according to the parameter
\[
\theta:=\theta_1+\theta_2-\alpha\ge -\alpha.
\]
\begin{itemize}
\item $\theta\ge 1$: $\zeta(\boldsymbol{\beta})=\infty$ a.s.. In this case, $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolutions have been constructed in \cite{ShiWinkel-1}, including a proof of Proposition \ref{prop:ps-theta1theta2-nokill} that is the key to the proof of Theorem \ref{thm:dP}.
\item $\theta\in (0,1)$: $\zeta(\boldsymbol{\beta})$ is a.s.\ finite with $\zeta(\boldsymbol{\beta})\mbox{$ \ \stackrel{d}{=}$ } \|\beta(0)\|/2G$, where $G\sim \mathtt{Gamma}(1\!-\!\theta, 1)$, the Gamma distribution with shape parameter $1-\theta$ and rate parameter $1$. The construction in \cite{ShiWinkel-1} does not cover this case in full.
We will construct in Section~\ref{sec:rec} an $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution as a \emph{recurrent extension} of $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolutions and study its properties.
\item $\theta\in [-\alpha,0]$:
$\emptyset$ is an absorbing state, and hence an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution coincides with an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution. In \cite{ShiWinkel-1}, we were unable to establish the pseudo-stationarity stated in Proposition~\ref{prop:ps-theta1theta2-nokill} for this case. Our proof of Proposition~\ref{prop:ps-theta1theta2-nokill} relies crucially on the convergence in Theorem~\ref{thm:crp-ip}.
\end{itemize}
Note that the law of $\zeta(\boldsymbol{\beta})$ and the phase transitions can be observed directly from the boundary behaviour at zero of the total mass process ${\tt BESQ}(2\theta)$, see e.g. \cite[Equation (13)]{GoinYor03}.
\subsection{$\mathrm{SSIP}^{(\alpha)} (\theta_1,
\theta_2)$-excursions}
When $\theta=\theta_1\!+\!\theta_2\!-\!\alpha\in(0,1)$, a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ is reflected at $\emptyset$.
When $\theta= \theta_1\!+\!\theta_2\!-\!\alpha \le 0$, a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ is absorbed at $\emptyset$, and
if the initial interval partitions $ \frac{1}{n} \gamma^{(n)}$ converge in distribution to $\emptyset\in \mathcal{I}_H$ as $n\to \infty$ under $d_H$,
then the limiting process in Theorem~\ref{thm:crp-ip} is the constant process that stays in $\emptyset$.
In both cases we refine the discussion and establish the convergence of rescaled PCRP excursions to a non-trivial limit in the following sense.
\begin{theorem}\label{thm:Theta}
Let $\alpha\in(0,1)$, $\theta_1, \theta_2\ge 0$ and suppose that $\theta= \theta_1\!+\!\theta_2\!-\!\alpha \in (-\alpha, 1)$.
Let $(C(t),t\!\ge\! 0)$ be a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ starting from state $(1)$ and denote by $\mathrm{P}^{(n)}$ the law of the process
$\left( C^{(n)}(t):= \frac{1}{n} C (2 n t\wedge\zeta(C)) ,\, t\ge 0 \right)$, where $\zeta(C):=\inf\{t\ge 0\colon C(t)=\emptyset\}$.
Then the following convergence holds vaguely under the Skorokhod topology:
\[
\frac{ \Gamma(1+\theta)}{1-\theta} n^{1-\theta} \mathrm{P}^{(n)} \underset{n\to \infty}{\longrightarrow} \Theta,
\]
where the limit $\Theta$ is a $\sigma$-finite measure on the space of continuous excursions on $\mathcal{I}_{H}$.
\end{theorem}
A description of the limit $\Theta$ is given in Section~\ref{sec:exc}.
We refer to $\Theta$ as the excursion measure of an $\mathrm{SSIP}^{(\alpha)} (\theta_1, \theta_2)$-evolution, which plays a crucial role in the construction of recurrent extensions mentioned above (when $\theta\in(0,1)$), as well as in the study of nested interval partition evolutions (when $\theta\in(-\alpha,0)$) in Section~\ref{sec:nested2}.
\subsection{Further applications}
\label{sec:application}
A remarkable feature of the three-parameter family is that it includes the \emph{emigration} case with $\theta<0$; this cannot happen in the two-parameter case with $\theta_2=\alpha$ where $\theta=\theta_1\ge 0$, but it is naturally included by Petrov \cite{Petrov09} in the unordered setting.
The discrete approximation method developed in this work in particular permits us to understand pseudo-stationarity and $\mathrm{SSIP}$-excursions in the emigration case, which has further interesting applications.
We discuss a few in this paper, listed as follows.
\subsubsection{Measure-valued processes with $\theta\in [-\alpha, 0)$}
In \cite{FVAT}, we introduced a family of Fleming--Viot superprocesses parametrised by $\alpha\in (0,1)$, $\theta\ge 0$, taking values on the space $(\mathcal{M}^a_1,d_{\mathcal{M}})$ of all purely atomic probability measures on $[0,1]$ endowed with the Prokhorov distance. Our construction in \cite{FVAT} is closely related to our construction of interval partition evolutions.
We can now extend this model to the case $\theta\in [-\alpha,0)$ and identify the desired stationary distribution, the two-parameter Pitman--Yor process, here exploiting the connection with an $\mathrm{SSIP}^{(\alpha)}(\theta+\alpha, 0)$-evolution. This is discussed in more detail in Section~\ref{sec:FV}.
\subsubsection{Nested interval partition evolutions}
Let us recall a well-known identity \cite[(5.24)]{CSP} involving the two-parameter family $\mathtt{PD}^{(\alpha)} (\theta)$ and associated fragmentation operators.
For $0\le \bar{\alpha}\le \alpha<1$ and $\bar\theta> 0$, let $(A_i, i\ge 1)$ be a random variable on $\nabla_{\infty}$ with distribution $\mathtt{PD}^{(\bar\alpha)} (\bar\theta)$,
and let $(A'_{i,j}, j\ge 1)$, $i\ge 1$, be an i.i.d.\ sequence of $\mathtt{PD}^{(\alpha)} (-\bar\alpha)$, further independent of $(A_i, i\ge 1)$.
Then the decreasing rearrangement of the collection $A_i A'_{i,j}$, $i,j\ge 1$, counted with multiplicities, has distribution $\mathtt{PD}^{(\alpha)} (\bar\theta)$. In other words, a $\mathtt{PD}^{(\bar\alpha)} (\bar\theta)$ fragmented by $\mathtt{PD}^{(\alpha)} (-\bar\alpha)$ has distribution $\mathtt{PD}^{(\alpha)} (\bar\theta)$. In Section \ref{sec:nested1}, we extend this to the setting of our three-parameter family ${\tt PDIP}^{(\alpha)}(\theta_1,\theta_2)$ of interval partitions.
In Section~\ref{sec:nested2}, we study \emph{nested} interval partition evolutions $(\boldsymbol{\beta}_c, \boldsymbol{\beta}_f)$, such that at every time $t\ge 0$, the endpoints of the intervals in $\beta_f(t)$ form a subset of those in $\beta_c(t)$.
A particular case of our results establishes a dynamical and ordered version of this identity. Informally speaking, for $0< \bar{\alpha}\le \alpha<1$ and $\bar\theta, \theta_1,\theta_2\ge 0$ with $\theta_1+\theta_2-\alpha= -\bar{\alpha}<0$, we can find a coupling of stationary $\mathrm{IP}^{(\bar\alpha)} (\bar\theta)$- and $\mathrm{IP}^{(\alpha)} (\bar\theta)$-evolutions $\overline{\boldsymbol{\beta}}_c$ and $\overline{\boldsymbol{\beta}}_f$, such that
at each time $u\ge 0$, $\overline{\beta}_f(u)\sim \mathtt{PDIP}^{(\alpha)} (\bar\theta)$ can be regarded as fragmenting each interval of $\overline{\beta}_c(u)\sim \mathtt{PDIP}^{(\bar\alpha)} (\bar\theta)$ according to $\mathtt{PDIP}^{(\alpha)} (\theta_1,\theta_2)$.
Finally, Section \ref{sec:nestedPCRP} extends Theorem \ref{thm:crp-ip} to the setting of nested PCRPs and nested interval partitions.
\subsubsection{Connections with random trees}
An initial motivation of this work was from studies of diffusions on a space of continuum trees.
Aldous \cite{Aldous00} introduced a Markov chain on the space of binary trees with $n$ leaves, by removing a leaf uniformly and reattaching it to a random edge. This Markov chain has the uniform distribution as its stationary distribution.
As $n\!\to\! \infty$, Aldous conjectured that the limit of this Markov chain is a diffusion on continuum trees with stationary distribution given by the \emph{Brownian continuum random tree (CRT)}, i.e.\@ the universal scaling limit of random discrete trees with finite vertex degree variance. Among different approaches
\cite{LohrMytnWint18,Paper4} investigating this problem, \cite{Paper4} describes the evolution via a consistent system of spines endowed with lengths and subtree masses, which relies crucially on interval partition evolutions.
This motivates us to construct \emph{stable Aldous diffusions}, with stationary distributions given by \emph{stable L\'evy trees} with parameter $\rho\in (1,2)$ \cite{DuLG05,DuquLeGall02}, which are the infinite variance analogues of the Brownian CRT.
The related Markov chains on discrete trees have been studied in \cite{Soerensen}.
A major obstacle for the study in the continuum is that the approaches in \cite{LohrMytnWint18,Paper4} cannot be obviously extended from binary to multifurcating trees with unbounded degrees.
The current work provides tools towards overcoming this difficulty; with these techniques one could further consider more general classes of \emph{continuum fragmentation trees}, including the \emph{alpha-gamma models} \cite{CFW} or a two-parameter Poisson--Dirichlet family \cite{HPW}.
To demonstrate a connection of our work to continuum trees, recall the spinal decompositions developed in \cite{HPW}.
In a stable L\'evy tree with parameter $\rho\in (1,2)$, there is a natural diffuse ``uniform'' probability measure on the leaves of the tree.
Sample a leaf uniformly at random and consider its path to the root, called the \emph{spine}. To decompose along this spine, we say that
two vertices $x,y$ of the tree are equivalent, if the paths from $x$ and $y$ to the root first meet the spine at the same point. Then equivalence classes are bushes of spinal subtrees rooted at the same spinal branch point; by deleting the branch point on the spine, the subtrees in an equivalence class form smaller connected components.
The collection of equivalence classes is called \emph{the coarse spinal decomposition}, and the collection of all subtrees is called \emph{the fine spinal decomposition}.
With $\bar\alpha= 1-1/\rho$ and $\alpha = 1/\rho$, it is known \cite[Corollary 10]{HPW} that
the decreasing rearrangement of masses of the coarse spinal decomposition has distribution $\mathtt{PD}^{(\bar\alpha)}(\bar{\alpha})$; moreover, the mass sequence of the fine spinal decomposition is a $\mathtt{PD}^{(\alpha)} (\alpha-1)$-fragmentation of the coarse one and has $\mathtt{PD}^{(\alpha)}(\bar{\alpha})$ distribution.
Some variants of our aforementioned nested interval partition evolutions can be used to represent the mass evolution of certain spinal decompositions in a conjectured stable Aldous diffusion.
The order structure provided by the interval partition evolutions also plays a crucial role: at the coarse level, the equivalence classes of spinal bushes are naturally ordered by the distance of the spinal branch points to the root; at the fine level, a total order of the subtrees in the same equivalence class aligns with the \emph{semi-planar structure} introduced in \cite{Soerensen}, which is used to explore the evolutions of sizes of subtrees in a bush at a branch point.
In Section~\ref{sec:trees}, we give a broader example of a Markov chain related to random trees that converges to our nested SSIP-evolutions. The study of stable Aldous diffusions is, however, beyond the scope of the current paper and will be further investigated in future work.
\subsection{Organisation of the paper}
In Section \ref{sec:PDIP} we generalise the two-parameter ordered Chinese Restaurant Processes to a three-parameter model and establish their
connections to interval partitions and composition structures. In Section \ref{sec:crp}, we prove Theorem \ref{thm:crp-ip} in the two-parameter setting, building on \cite{Paper1-1,Paper1-2,IPPAT,RogeWink20}. In Section \ref{sec:pcrp-ssip}, we study the three-parameter setting, both for processes absorbed in $\emptyset$ and for recurrent extensions, which we obtain by constructing excursion measures of ${\rm SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolutions. This section concludes with proofs of all results stated in this introduction. We finally turn to applications in Section \ref{sec:appl}.
\section{Poisson--Dirichlet interval partitions}\label{sec:PDIP}
Throughout the rest of the paper, we fix a parameter $\alpha\in (0,1)$.
In this section we will introduce the three-parameter family of random interval partitions $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$, with $\theta_1,\theta_2\ge 0$, as the limit of a family of discrete-time ordered Chinese restaurant processes (without customers leaving).
\subsection{Ordered Chinese restaurant processes in discrete time}\label{sec:oCRP}
For $n\in \mathbb{N}$, let $[n]:= \{1,2,\ldots, n\}$.
Let $\mathcal{C}:= \{(n_1,n_2,\ldots,n_k)\in\mathbb{N}^k,k\ge 1\}\cup \{\emptyset\}$. We view $\mathcal{C}$ as
a space of \emph{integer compositions}: for any $n\ge 0$, the subset $\mathcal{C}_n:= \{(n_1,\ldots,n_k)\in \mathcal{C}\colon n_1+\cdots+n_k=n\}$ is the set of compositions of $n$.
Recall that we view $\mathcal{C}$ as a subset of the space $\mathcal{I}_H$ of interval partitions introduced in the introduction. We still consider the metric $d_H$, and all operations and functions defined on $\mathcal{I}_{H}$ shall be inherited by $\mathcal{C}$.
We also introduce the \emph{concatenation} of a family of interval partitions $(\beta_a)_{a\in \mathcal{A}}$, indexed by a totally ordered set $(\mathcal{A}, \preceq)$:
\[
\mathop{ \raisebox{-2pt}{\Huge$\star$} } _{a\in \mathcal{A}} \beta_a
:= \left\{
(x+ S_{\beta}(a-),\, y + S_{\beta}(a-)) \colon a\in \mathcal{A},\ (x,y)\in \beta_{a}
\right\}, ~\text{where}~ S_{\beta}(a-):= \sum_{b\prec a} \|\beta_b\|.
\]
When $\mathcal{A} = \{1,2\}$, we denote this by $\beta_1 \star \beta_2$.
Then each composition $(n_1,n_2,\ldots,n_k)\in \mathcal{C}$ is identified with the interval partition
$ \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{i=1}^k \{(0, n_i)\} \in \mathcal{I}_H$.
\begin{definition}[$\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$ in discrete-time]\label{defn:oCRP}
Let $\theta_1, \theta_2\ge 0$.
We start with a customer $1$ sitting at a table and new customers arrive one-by-one.
Suppose that there are already $n$ customers, then the $(n+1)$-st customer is located by the following \emph{$(\alpha,\theta_1,\theta_2)$-seating rule}:
\begin{itemize}
\item If a table has $m$ customers, then the new customer comes to join this table with probability $(m- \alpha)/(n +\theta)$, where $\theta:= \theta_1 + \theta_2 -\alpha$.
\item The new customer may also start a new table:
with probability $\theta_1/(n +\theta)$, the new customer begins a new table to the left of the leftmost table;
with probability $\theta_2/(n +\theta)$, she starts a new table to the right of the rightmost table.
Between each pair of neighbouring occupied tables, a new table is started with probability $\alpha/(n +\theta)$.
\end{itemize}
At each step $n\in \mathbb{N}$, the numbers of customers at the tables (ordered from left to right) form a composition $C(n)\in \mathcal{C}_n$. We will refer to $(C(n), n\in \mathbb{N})$ as an \emph{ordered Chinese restaurant process with parameters $\alpha$, $\theta_1$ and $\theta_2$}, or $\mathrm{oCRP}^{(\alpha)}(\theta_1, \theta_2)$. We also denote the distribution of $C(n)$ by
$\mathtt{oCRP}^{(\alpha)}_n(\theta_1,\theta_2)$.
\end{definition}
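The seating rule is straightforward to simulate. The following minimal sketch (in Python; the helper name \texttt{ocrp} and its interface are ours and purely illustrative) generates $C(n)$ by maintaining the left-to-right table sizes and choosing, at each step, among the $k$ occupied tables and the $k+1$ possible new-table positions with the weights of Definition~\ref{defn:oCRP}.
\begin{verbatim}
import random

def ocrp(n, alpha, theta1, theta2, seed=None):
    """One realisation of C(n) for an oCRP^(alpha)(theta1, theta2):
    the left-to-right table sizes, a composition of n."""
    rng = random.Random(seed)
    tables = [1]                      # customer 1 opens the first table
    theta = theta1 + theta2 - alpha
    for m in range(1, n):             # seat customers 2, ..., n
        # alternating weights: gap, table, gap, table, ..., gap
        weights = [theta1]
        for size in tables:
            weights += [size - alpha, alpha]
        weights[-1] = theta2          # the rightmost gap gets weight theta2
        assert abs(sum(weights) - (m + theta)) < 1e-6
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k % 2 == 1:
            tables[k // 2] += 1       # join an occupied table
        else:
            tables.insert(k // 2, 1)  # open a new table in a gap
    return tuple(tables)
\end{verbatim}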
In the degenerate case $\theta_1=\theta_2=0$, an $\mathrm{oCRP}^{(\alpha)}(0,0)$ is simply a deterministic process $C(n)= (n), n\in \mathbb{N}$.
If we do not distinguish the location of the new tables, but build the partition of $\mathbb{N}$ that has $i$ and $j$ in the same block if the $i$-th and $j$-th customer sit at the same table, then this gives rise to the well-known (unordered) $(\alpha, \theta)$-Chinese restaurant process with $\theta:= \theta_1 + \theta_2 -\alpha$; see e.g.\@ \cite[Chapter~3]{CSP}.
When $\theta_1=\alpha$, this model encompasses the family of composition structures studied in \cite[Section~8]{GnedPitm05} and \cite{PitmWink09}.
In particular, they show that an $\mathrm{oCRP}^{(\alpha)}(\alpha, \theta_2)$ is a
\emph{regenerative composition structure} in the following sense.
\begin{definition}[Composition structures~\cite{GnedPitm05,Gnedin97}]
A Markovian sequence of random compositions $(C(n), n\in \mathbb{N})$ with $C(n)\in \mathcal{C}_n$
is a \emph{composition structure}, if the following property is satisfied:
\begin{itemize}
\item Weak sampling consistency: for each $n\in \mathbb{N}$, if we first distribute $n+1$ identical balls into an ordered series of boxes according to $C(n+1)$ and then remove one ball uniformly at random (deleting an empty box if one is created), then the resulting composition $\widetilde{C}(n)$ has the same distribution as $C(n)$.
\end{itemize}
Moreover, a composition structure is called \emph{regenerative}, if it further satisfies
\begin{itemize}
\item Regeneration:
for every $n \ge m$, conditionally on the first block of
$C(n)$ having size $m$, the remainder is a composition in $\mathcal{C}_{n-m}$ with
the same distribution as $C(n-m)$.
\end{itemize}
For $n \ge m$, let $r(n,m)$ be the probability that the first block of
$C(n)$ has size $m$. Then $(r(n,m),1\le m\le n)$ is called the \emph{decrement matrix} of $(C(n),n\in \mathbb{N})$.
\end{definition}
\begin{lemma}[{\cite[Proposition~6]{PitmWink09}}]\label{lem:rcs}
For $\theta_2\ge 0$, an $\mathrm{oCRP}^{(\alpha)}(\alpha, \theta_2)$ $(C(n), n\in \mathbb{N})$
starting from $(1)\in \mathcal{C}$ is a regenerative composition structure
with decrement matrix
\[
r_{\theta_2}(n,m):= \binom{n}{m} \frac{(n-m)\alpha+ m\theta_2}{n} \frac{\Gamma(m-\alpha)\Gamma(n-m+\theta_2)}{\Gamma(1-\alpha)\Gamma(n+\theta_2)}, \quad 1\le m\le n.
\]
For every $(n_1, n_2, \ldots, n_k)\in \mathcal{C}_n$, we have
\[
\mathbb{P}\big(C(n)= (n_1, n_2, \ldots, n_k)\big) = \prod_{i=1}^{k} r_{\theta_2}(N_{i:k}, n_i),\quad \text{where}~ N_{i:k}:= \sum_{j=i}^{k} n_j.
\]
Moreover, $\frac{1}{n} C(n)$ converges a.s.\@ to a random interval partition $\bar{\gamma}\in \mathcal{I}_H$,
under the metric $d_{H}$, as $n\to \infty$.
\end{lemma}
The limit $\bar{\gamma}$ is called a \emph{regenerative $(\alpha,\theta_2)$ interval partition} in \cite{GnedPitm05} and \cite{PitmWink09}.
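As an illustration of Lemma~\ref{lem:rcs}, the following sketch (ours; it assumes $\theta_2>0$ so that no Gamma function is evaluated at $0$) computes the decrement matrix and builds the exact law of $C(n)$ by repeatedly peeling off the first block, as the regeneration property permits; the resulting masses sum to $1$.
\begin{verbatim}
import math

def r(n, m, alpha, theta2):
    """Decrement matrix r_{theta2}(n, m); assumes theta2 > 0."""
    return (math.comb(n, m) * ((n - m) * alpha + m * theta2) / n
            * math.gamma(m - alpha) * math.gamma(n - m + theta2)
            / (math.gamma(1 - alpha) * math.gamma(n + theta2)))

def law(n, alpha, theta2):
    """Exact law of C(n) for oCRP^(alpha)(alpha, theta2), via regeneration."""
    if n == 0:
        return {(): 1.0}
    out = {}
    for m in range(1, n + 1):                # size of the first block
        for rest, q in law(n - m, alpha, theta2).items():
            out[(m,) + rest] = out.get((m,) + rest, 0.0) \
                               + r(n, m, alpha, theta2) * q
    return out

probs = law(5, 0.5, 1.0)
assert abs(sum(probs.values()) - 1.0) < 1e-9
\end{verbatim}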
For $\beta\in\mathcal{I}_H$, the \emph{left-right reversal} of $\beta$ is
\begin{equation}\label{eq:reversal}
{\rm rev}(\beta):=\{ (\|\beta\|-b, \|\beta\|-a):~ (a,b)\in\beta\}\in \mathcal{I}_H.
\end{equation}
\begin{definition}[$\mathtt{PDIP}^{(\alpha)}( \theta_1)$]\label{defn:pdip-alphatheta}
For $\theta_1\ge 0$, let $\bar{\gamma}$ be a regenerative $(\alpha,\theta_1)$ interval partition. Then the left-right reversal $\mathrm{rev}(\bar{\gamma})$ is called a \emph{Poisson--Dirichlet$(\alpha,\theta_1)$ interval partition}, whose law on $\mathcal{I}_H$ is denoted by $\mathtt{PDIP}^{(\alpha)}( \theta_1)$.
\end{definition}
By left-right reversal, it follows from Lemma~\ref{lem:rcs} that
$\mathtt{PDIP}^{(\alpha)}( \theta_1)$ also describes the limiting proportions of customers at tables in an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\alpha)$.
We record from \cite[Proposition~2.2(iv)]{Paper1-2} a decomposition for future use: with independent $B\sim \mathtt{Beta}(\alpha, 1\!-\!\alpha)$ and $\bar{\gamma}\sim \mathtt{PDIP}^{(\alpha)}(\alpha)$, we have
\begin{equation}\label{eq:pdip:0-alpha}
\{(0, 1-B)\}\star B \bar{\gamma} \sim \mathtt{PDIP}^{(\alpha)}(0).
\end{equation}
To understand the distribution of $(C(n), n\in \mathbb{N})\sim\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$,
let us present a decomposition as follows.
Recall that there is an initial table with one customer at time $1$.
Let us distinguish this initial table from other tables. At time $n\in \mathbb{N}$, we record the size of the initial table by $N_0^{(n)}$, the composition of the table sizes to the left of the initial table by $C_1^{(n)}$, and the composition to the right of the initial table by $C_2^{(n)}$.
Then there is the identity $C(n)= C_1^{(n)} \star \{(0, N_0^{(n)})\} \star C_2^{(n)}$.
Let $(N_1^{(n)}, N_2^{(n)}):= (\|C_1^{(n)}\|, \|C_2^{(n)}\|)$.
Then $(N_1^{(n)}, N_0^{(n)}, N_2^{(n)})$ can be described as a P\'olya urn model with three colours.
More precisely, when the current numbers of balls of the three colours are $(n_1, n_0, n_2)$,
we next add a ball whose colour is chosen according to probabilities proportional to $n_1 + \theta_1$, $n_0 -\alpha$ and $n_2 + \theta_2$.
Starting from the initial state $(0,1,0)$, we get $(N_1^{(n)}, N_0^{(n)}, N_2^{(n)})$ after adding $n\!-\!1$ balls.
In other words, the vector $(N_1^{(n)}, N_0^{(n)}\!-\!1, N_2^{(n)})$ has \emph{Dirichlet-multinomial distribution with
parameters $n\!-\!1$ and $(\theta_1, 1\!-\!\alpha, \theta_2)$}; i.e.\@ for $n_1, n_0, n_2\in \mathbb{N}_0$ with $n_0\ne 0$ and $n_1+n_0+n_2 = n$,
\begin{eqnarray}
p_n(n_1,n_0,n_2)&:=& \mathbb{P}\left( (N_1^{(n)}, N_0^{(n)}, N_2^{(n)}) = (n_1, n_0, n_2)\right) \notag \\
&=& \frac{\Gamma(1-\alpha + \theta_1 +\theta_2) }{\Gamma (1-\alpha)\Gamma (\theta_1) \Gamma(\theta_2)}
\frac{(n-1)!\Gamma(n_0-\alpha)\Gamma(n_1 + \theta_1)\Gamma(n_2 + \theta_2)}{\Gamma(n-\alpha + \theta_1 +\theta_2)(n_0-1)! n_1! n_2!}. \label{eq:dmn}
\end{eqnarray}
Furthermore, conditionally given $(N_1^{(n)},N_0^{(n)},N_2^{(n)})$, the compositions $C_1^{(n)}$ and $C_2^{(n)}$ are independent with distribution
$\mathtt{oCRP}^{(\alpha)}_{N_1^{(n)}}(\theta_1, \alpha)$
and $\mathtt{oCRP}^{(\alpha)}_{N_2^{(n)}}(\alpha,\theta_2)$ respectively, for which there is
an explicit description in Lemma~\ref{lem:rcs}, up to an elementary left-right reversal.
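For $\theta_1,\theta_2>0$ (so that every Gamma value in \eqref{eq:dmn} is finite), the mass function $p_n$ is easy to evaluate and to compare with a direct simulation of the urn; the following sketch (ours) implements both.
\begin{verbatim}
import math, random

def p(n, n1, n0, n2, alpha, theta1, theta2):
    """Mass p_n(n1, n0, n2) from eq. (dmn); assumes theta1, theta2 > 0."""
    g = math.gamma
    return (g(1 - alpha + theta1 + theta2)
            / (g(1 - alpha) * g(theta1) * g(theta2))
            * math.factorial(n - 1) * g(n0 - alpha)
            * g(n1 + theta1) * g(n2 + theta2)
            / (g(n - alpha + theta1 + theta2) * math.factorial(n0 - 1)
               * math.factorial(n1) * math.factorial(n2)))

def urn(n, alpha, theta1, theta2, rng):
    """(N1, N0, N2) after adding n - 1 balls, started from (0, 1, 0)."""
    counts = [0, 1, 0]
    for _ in range(n - 1):
        w = [counts[0] + theta1, counts[1] - alpha, counts[2] + theta2]
        counts[rng.choices([0, 1, 2], weights=w)[0]] += 1
    return tuple(counts)
\end{verbatim}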
\begin{proposition}\label{prop:down}
Let $\theta_1,\theta_2\ge 0$ and let $(C(n), n\in \mathbb{N})$ be an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$.
Then for every $(n_1, n_2, \ldots, n_k)\in \mathcal{C}$ with $n = n_1+n_2+\cdots+n_k$, we have
\[\mathbb{P}\big(C(n)\! =\! (n_1, \ldots, n_k)\big)
=\sum_{i=1}^k\! \bigg(\!
p_n \big(N_{1:i-1}, n_{i}, N_{i+1:k}\big) \prod_{j=1}^{i-1} r_{\theta_1}\!\big(N_{1:j}, n_j\big)
\!\!\prod_{j=i+1}^{k}\!\! r_{\theta_2}\!\big(N_{j:k},n_j\big)\!
\bigg)\!,
\]
where $N_{i:j} =n_i+\cdots+n_j$, $p_n$ is given by \eqref{eq:dmn} and $r_{\theta_1}$, $r_{\theta_2}$ are as in Lemma \ref{lem:rcs}.
Furthermore, $(C(n), n\in \mathbb{N})$ is a composition structure in the sense that it is weakly sampling consistent.
\end{proposition}
\begin{proof}
The distribution of $C(n)$ is an immediate consequence of the decomposition explained above.
To prove the weak sampling consistency, let us consider the decomposition
$C(n+1)=C_1^{(n+1)}\star\{(0,N_0^{(n+1)})\}\star C_2^{(n+1)}$.
By removing one customer uniformly at random (a \emph{down-step}),
we obtain in the obvious way a triple $(\widetilde{C}_1^{(n)},\widetilde{N}_0^{(n)},\widetilde{C}_2^{(n)})$, with
the exception for the case when $N_0^{(n+1)}=1$ and this customer is removed by the down-step:
in the latter situation, to make sure that $\widetilde{N}_0^{(n)}$ is strictly positive, we choose the new marked table to be the nearest to the left with probability proportional to $\|C_1^{(n+1)}\|$,
and the nearest to the right with the remaining probability, proportional to $\|C_2^{(n+1)}\|$, and
we further decompose according to this new middle table to define $\big(\widetilde{C}_1^{(n)},\widetilde{N}_0^{(n)},\widetilde{C}_2^{(n)}\big)$.
Therefore, for $n_1, n_0, n_2\in \mathbb{N}_0$ with $n_0\ne 0$ and $n_1+n_0+n_2 = n$,
the probability of the event $\big\{\big(\|\widetilde{C}_1^{(n)}\|,\widetilde{N}_0^{(n)},\|\widetilde{C}_2^{(n)}\|\big)= (n_1, n_0, n_2) \big\}$ is
\begin{align*}
&p_{n+1}(n_1\!+\!1,n_0,n_2)\frac{n_1\!+\!1}{n\!+\!1} + p_{n\!+\!1}(n_1,n_0,n_2\!+\!1)\frac{n_2\!+\!1}{n\!+\!1}
+ p_{n\!+\!1}(n_1,n_0\!+\!1,n_2)\frac{n_0\!+\!1}{n\!+\!1}\\
&\!\!+\!p_{n+1}(n_1\!\!+\!n_0,1,n_2) \frac{n_1\!\!+\!n_0}{\!n(n\!+\!1)\!} r_{\theta_1}\!(n_1\!\!+\!n_0, n_0)
\!+\!p_{n+1}(n_1,1, n_0\!+\!n_2) \frac{n_0\!+\!n_2}{\!n(n\!+\!1)\!}r_{\theta_2}\!(n_0\!+\!n_2, n_0),
\end{align*}
where the meaning of each term should be clear.
A straightforward calculation shows that the sum equals $p_n(n_1,n_0,n_2)$.
The triple description above shows that, conditionally on $\|C_1^{(n+1)}\|=n_1$, $C_1^{(n+1)}$ has distribution $\mathtt{oCRP}^{(\alpha)}_{n_1}(\theta_1,\alpha)$.
Conditionally on $(\|\widetilde{C}_1^{(n)}\|,\widetilde{N}_0^{(n)},\|\widetilde{C}_2^{(n)}\|)= (n_1, n_0, n_2)$, we still have
that $\widetilde{C}_1^{(n)}\sim \mathtt{oCRP}^{(\alpha)}_{n_1}(\theta_1, \alpha)$.
This can be checked by considering each case:
in the down-step, if the customer is removed from the marked table (with size $\ge 2$) or the right part, then $\widetilde{C}_1^{(n)} = C_1^{(n+1)}$,
which has distribution $\mathtt{oCRP}^{(\alpha)}_{n_1}(\theta_1,\alpha)$;
if the customer is removed from the left part, then this is a consequence of the weak sampling consistency of $C_1^{(n+1)}$ given by Lemma~\ref{lem:rcs};
if the marked table has one customer and she is removed, then
the claim holds because Lemma~\ref{lem:rcs} yields that $C_1^{(n+1)}$ is regenerative.
By similar arguments, we have $\widetilde{C}_2^{(n)}\sim \mathtt{oCRP}^{(\alpha)}_{n_2}(\alpha,\theta_2)$.
Summarising, we find that $(\widetilde{C}_1^{(n)},\widetilde{N}_0^{(n)},\widetilde{C}_2^{(n)})$ has the same distribution
as $(C_1^{(n)},N_0^{(n)},C_2^{(n)})$.
This completes the proof.
\end{proof}
\subsection{The three-parameter family $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$}
Our next aim is to study the asymptotics of an $\mathtt{oCRP}^{(\alpha)}_{n}(\theta_1,\theta_2)$, as $n\rightarrow\infty$.
Recall that Lemma~\ref{lem:rcs} gives the result for the case $\theta_1=\alpha$ with the limit distributions forming a two-parameter family, whose left-right reversals give the
corresponding result for $\theta_2=\alpha$. The latter limits were denoted by $\mathtt{PDIP}^{(\alpha)}(\theta_1)$, $\theta_1\ge 0$.
Let us now introduce a new three-parameter family of random interval partitions, generalising $\mathtt{PDIP}^{(\alpha)}(\theta_1)$.
To this end, we extend the Dirichlet distribution to parameters
$\alpha_1, \ldots ,\alpha_m \ge 0$ with $m\in \mathbb{N}$: suppose that $\alpha_{i_1},\ldots, \alpha_{i_k} >0$
and $\alpha_j =0$ for all other $j\le m$; let $(B_{i_1},\ldots ,B_{i_k})\sim \mathtt{Dir} (\alpha_{i_1}, \ldots, \alpha_{i_k})$ and set $B_j:=0$ for all other $j$.
Then we define $\mathtt{Dir} (\alpha_1, \ldots, \alpha_m)$ to be the law of $(B_1,\ldots,B_m)$.
By convention $\mathtt{Dir} (\alpha)= \delta_1$ for any $\alpha\ge 0$.
\begin{definition}[$\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$]\label{defn:pdip}
For $\theta_1, \theta_2 \ge 0$, let
$(B_1, B_0, B_2)\sim \mathtt{Dir} (\theta_1, 1\!-\!\alpha,\theta_2)$, $\bar{\gamma}_1\sim \mathtt{PDIP}^{(\alpha)} (\theta_1)$, and $\bar{\gamma}_2\sim \mathtt{PDIP}^{(\alpha)} (\theta_2)$, independent of each other.
Let $\bar{\gamma}= B_1 \bar{\gamma}_1 \star \{(0, B_0)\} \star \mathrm{rev} (B_2 \bar{\gamma}_2)$.
Then we call $\bar{\gamma}$ an \emph{$(\alpha, \theta_1, \theta_2)$-Poisson--Dirichlet interval partition} with distribution $\mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$.
\end{definition}
The case $\theta_1=\theta_2 =0$ is degenerate with $\mathtt{PDIP}^{(\alpha)}( 0, 0) = \delta_{\{(0,1)\}}$.
When $\theta_2=\alpha$, we see e.g.\ from \cite[Corollary 8]{PitmWink09} that $\mathtt{PDIP}^{(\alpha)}( \theta_1,\alpha)$ coincides with $\mathtt{PDIP}^{(\alpha)}( \theta_1)$ in Definition~\ref{defn:pdip-alphatheta}.
\begin{lemma}\label{lem:crp-pdip}
Let $(C(n),n\in \mathbb{N})\sim \mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$.
Then $\frac{1}{n} C(n)$ converges a.s.\@ to a random interval partition $\bar{\gamma}\sim \mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$,
under the metric $d_{H}$, as $n\to \infty$.
\end{lemma}
\begin{proof}
We use the triple-description of $C(n)$ in the
proof of Proposition~\ref{prop:down}. Consider independent $(R_1(n), n\in \mathbb{N})\sim \mathrm{oCRP}^{(\alpha)}(\theta_1,\alpha)$, $(R_2(n), n\in \mathbb{N})\sim \mathrm{oCRP}^{(\alpha)}(\alpha,\theta_2)$, and a P\'olya urn model with three colours $(N_1^{(n)}, N_0^{(n)}, N_2^{(n)})$, $n\in \mathbb{N}$.
Then we can write $C(n) = R_1(N_1^{(n)}) \star \{(0,N_0^{(n)} )\}\star R_2(N_2^{(n)})$.
The asymptotics of a P\'olya urn yield that $\frac{1}{n} (N_1^{(n)}, N_0^{(n)}, N_2^{(n)})$ converges a.s.\@ to some $(B_1, B_0, B_2)\sim \mathtt{Dir} (\theta_1, 1-\alpha,\theta_2)$.
By Lemma~\ref{lem:rcs}, there exist $\bar{\gamma}_1\sim \mathtt{PDIP}^{(\alpha)} (\theta_1)$ and $\bar{\gamma}_2\sim \mathtt{PDIP}^{(\alpha)} (\theta_2)$, independent of each other,
such that $\frac{1}{n} R_1 (n) \to \bar{\gamma}_1$
and $\frac{1}{n} R_2(n) \to \mathrm{rev}(\bar{\gamma}_2)$; both convergences hold a.s.\@ as $n\to\infty$.
Therefore, we conclude that
$\frac{1}{n} C(n)$ converges a.s.\@ to
$(B_1 \bar{\gamma}_1) \star \{(0,B_0)\}\star (B_2\mathrm{rev}(\bar{\gamma}_2))$, which
has distribution $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$ by definition.
\end{proof}
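Lemma~\ref{lem:crp-pdip} also suggests a simple way to sample from $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$ approximately: run the seating rule for a large number $n$ of customers and rescale. A sketch, reusing the hypothetical helper \texttt{ocrp} from the sketch below Definition~\ref{defn:oCRP}:
\begin{verbatim}
def approx_pdip(n, alpha, theta1, theta2, seed=None):
    """Approximate PDIP^(alpha)(theta1, theta2) sample as C(n)/n,
    returned as an interval partition of [0, 1] in (left, right) pairs."""
    sizes = ocrp(n, alpha, theta1, theta2, seed)
    blocks, left = [], 0.0
    for s in sizes:
        blocks.append((left, left + s / n))
        left += s / n
    return blocks
\end{verbatim}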
We now give some decompositions of $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$, extending \cite[Corollary 8]{PitmWink09} to the three-parameter case.
With independent $B\sim \mathtt{Beta}( 1\!-\!\alpha\!+\!\theta_1, \theta_2)$, $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, 0)$, and $\bar\beta\sim \mathtt{PDIP}^{(\alpha)}(\alpha,\theta_2)$, it follows readily from Definition~\ref{defn:pdip} that
\begin{equation*}\label{eq:pdip-decomp-bis}
B\bar{\gamma} \star (1-B) \bar{\beta} \sim \mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2).
\end{equation*}
When $\theta_1\ge \alpha$, we also have a different decomposition as follows.
\begin{corollary}\label{cor:pdipdec}
Suppose that $\theta_1\ge \alpha$.
With independent $B'\sim \mathtt{Beta}( \theta_1\!-\!\alpha, \theta_2)$, $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, 0)$, and $\bar\beta\sim \mathtt{PDIP}^{(\alpha)}(\alpha,\theta_2)$, we have
\begin{equation}\label{eq:pdip-decomp}
B'\bar{\gamma} \star (1-B') \bar{\beta} \sim \mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2),
\end{equation}
and $\bar{\gamma}\overset{d}{=}V^\prime\bar{\gamma}_1\star\{(0,1-V^\prime)\}$ for independent
$V^\prime\sim{\tt Beta}(\theta_1,1-\alpha)$ and $\bar{\gamma}_1\sim{\tt PDIP}^{(\alpha)}(\theta_1)$.
\end{corollary}
\begin{proof}
Consider an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$.
We colour the initial table red with probability $(\theta_1-\alpha)/(\theta_1-\alpha+\theta_2)$. If it is not coloured red, then each time a new table is started to the left of the initial table, we flip a biased coin with success probability $1-\alpha/\theta_1$ and colour the new table red at the first success.
In this way, we separate the composition at every step into two parts: the tables to the left of the red table (with the red table included), and everything to the right of the red table.
It is easy to see that the sizes of the two parts follow a P\'olya urn such that the asymptotic proportions follow $\mathtt{Dir} (\theta_1-\alpha,\theta_2)$. Moreover, conditionally on the sizes of the two parts, they are independent
$\mathrm{oCRP}^{(\alpha)}(\theta_1,0)$ and $\mathrm{oCRP}^{(\alpha)}(\alpha,\theta_2)$ respectively.
Now the claim follows from Lemma~\ref{lem:crp-pdip} and Definition \ref{defn:pdip}.
\end{proof}
An ordered version \cite{Pitman97} of \emph{Kingman's paintbox processes} is described as follows. Let $\gamma\in \mathcal{I}_{H,1}$ and $(Z_i, i\in \mathbb{N})$ be i.i.d.\@ uniform random variables on $[0,1]$.
Then customers $i$ and $j$ sit at the same table, if and only if $Z_i$ and $Z_j$ fall in the same block of $\gamma$. Moreover, the tables are ordered by their corresponding intervals.
For any $n\in \mathbb{N}$, the first $n$ variables $(Z_i, i\in [n])$ give rise to a \emph{composition of the set $[n]$}, i.e.\ an ordered family of disjoint subsets of $[n]$:
\begin{equation}\label{eqn:C*}
C_{\gamma}^*(n)= \{B_U(n)\colon B_U(n)\!\ne\! \emptyset, U\!\in\! \gamma\}, ~\text{where}~ B_U(n) := \{j\!\le\! n \colon Z_j \!\in\! (\inf U, \sup U)\}.
\end{equation}
Let $P_{n,\gamma}$ denote the distribution on $\mathcal{C}_n$ of the integer composition of $n$ given by the ordered block sizes of $C^*_{\gamma}(n)$.
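In code, the ordered paintbox is a short routine on top of an interval partition. The sketch below (ours) induces the integer composition of $n$ from i.i.d.\ uniforms as in \eqref{eqn:C*}; for a finite approximation of $\bar\gamma$ (such as the output of \texttt{approx\_pdip} above), the blocks cover $[0,1]$ and the rejection branch never triggers.
\begin{verbatim}
import bisect

def paintbox(blocks, n, rng):
    """Integer composition of n induced by C*_gamma(n) in eq. (eqn:C*).
    blocks: disjoint (left, right) intervals, sorted left to right,
    covering [0, 1] up to a Lebesgue-null set."""
    lefts = [a for a, _ in blocks]
    counts = [0] * len(blocks)
    for _ in range(n):
        while True:                       # redraw on the (null) gap set
            z = rng.random()
            i = bisect.bisect_right(lefts, z) - 1
            if i >= 0 and z < blocks[i][1]:
                counts[i] += 1
                break
    return tuple(c for c in counts if c > 0)
\end{verbatim}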
The following statement shows that the composition structure induced by an ordered CRP is a mixture of \emph{ordered Kingman paintbox processes}.
\begin{proposition}\label{prop:ocrp-pdip}
The probability measure $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$ is the unique probability measure on $\mathcal{I}_{H}$, such that there is the identity
\[
\mathtt{oCRP}_n^{(\alpha)}(\theta_1,\theta_2) (A)
= \int_{\mathcal{I}_H} P_{n,\gamma} (A) ~ \mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)(d \gamma), \qquad \forall n\in \mathbb{N}, \forall A \subseteq \mathcal{C}_n. %
\]
\end{proposition}
\begin{proof}
Since an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$ is a composition structure by Proposition~\ref{prop:down} and since renormalised $\mathtt{oCRP}_n^{(\alpha)}(\theta_1,\theta_2)$ converges weakly to $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$ as $n\to \infty$ by Lemma~\ref{lem:crp-pdip},
the statement follows from \cite[Corollary~12]{Gnedin97}.
\end{proof}
\begin{remark}
If we label the customers by $\mathbb{N}$ in an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$ defined in Definition~\ref{defn:oCRP}, then we also naturally obtain a composition of the set $[n]$ when $n$ customers have arrived. However, it does not have the same law as the $C^*_{\bar{\gamma}}(n)$ obtained from the paintbox in \eqref{eqn:C*} with $\bar{\gamma}\sim\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$, though we know from Proposition~\ref{prop:ocrp-pdip} that their induced integer compositions of $n$ have the same law. Indeed, $C^*_{\bar{\gamma}}(n)$ obtained by the paintbox is \emph{exchangeable} \cite{Gnedin97}, but it is easy to check that an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$ with general parameters is not, the only exception being $\theta_1=\theta_2=\alpha$.
\end{remark}
Recall that ${\tt PD}^{(\alpha)}(\theta)$ denotes the Poisson--Dirichlet distribution on the Kingman simplex~$\nabla_{\infty}$.
\begin{proposition}\label{prop:pdip-pd}
Let $\theta_1 ,\theta_2 \ge 0$ with $\theta_1+\theta_2 >0$.
The ranked interval lengths of a $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$ have ${\tt PD}^{(\alpha)}(\theta)$ distribution on $\nabla_{\infty}$ with $\theta:= \theta_1 + \theta_2 - \alpha$.
\end{proposition}
\begin{proof}
Let $(C(n), n\in\mathbb{N}) \sim \mathrm{oCRP}^{(\alpha)}(\theta_1, \theta_2)$.
Then it follows immediately from
the construction that
the sequence of $C(n)$ ranked in decreasing order is an unordered $(\alpha, \theta)$-Chinese restaurant process with $\theta:= \theta_1 + \theta_2 -\alpha >-\alpha$.
As a consequence of Lemma~\ref{lem:crp-pdip}, the ranked interval lengths of a $\mathtt{PDIP}^{(\alpha)}( \theta_1, \theta_2)$ have the same distribution as the limit of $(\alpha, \theta)$-Chinese restaurant processes, which is
known to be ${\tt PD}^{(\alpha)}(\theta)$.
\end{proof}
\section{Proof of Theorem~\ref{thm:crp-ip} when $\theta_2=\alpha$}\label{sec:crp}
We first recall in Section~\ref{sec:pre} the construction and some basic properties of the two-parameter family of $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolutions from \cite{Paper1-1,Paper1-2, IPPAT},
and then prove that they are the diffusion limits of the corresponding $\mathrm{PCRP}^{(\alpha)}(\theta_1,\alpha)$, in Sections~\ref{sec:cv-0alpha} and \ref{sec:cv-thetaalpha} for $\theta_1=0$ and for $\theta_1\ge 0$ in general, respectively, thus proving Theorem~\ref{thm:crp-ip} for the case $\theta_2=\alpha$.
The proofs rely on a representation of $\mathrm{PCRP}^{(\alpha)}(\theta_1,\alpha)$ by Rogers and Winkel \cite{RogeWink20} that we recall in Section~\ref{sec:PCRP} and an in-depth investigation of a positive-integer-valued Markov chain in Section~\ref{sec:ud-chain}.
\subsection{Preliminaries: $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolutions}\label{sec:pre}
In this section, we recall the \emph{scaffolding-and-spindles} construction and some basic properties of an $(\alpha,\theta_1)$ self-similar interval partition evolution, $\mathrm{SSIP}^{(\alpha)}(\theta_1)$. The material is collected from \cite{Paper1-1, Paper1-2, IPPAT}.
Let $\mathcal{E}$ be the space of non-negative c\`adl\`ag excursions away from zero. Then
for any $f\in \mathcal{E}$, the lifetime $\zeta (f):= \inf\{t>0\colon f(t)=0\}$ coincides with $\sup \{ t\ge 0\colon f(t)>0\}$.
We will present the construction of SSIP-evolutions via the following \emph{skewer} map introduced in \cite{Paper1-1}.
\begin{definition}[Skewer] \label{def:skewer}
Let $N= \sum_{i\in I} \delta(t_i,f_i)$ be a point measure on $\mathbb{R}_+\times \mathcal{E}$ and $X$ a c\`adl\`ag process such that
\[\sum_{ \Delta X(t)> 0} \delta(t, \Delta X(t)) = \sum_{i\in I} \delta(t_i, \zeta(f_i)).\]
The \emph{skewer} of the pair $(N,X)$ at level $y$ is (when well-defined) the interval partition
\begin{equation}
\ensuremath{\normalfont\textsc{skewer}}(y,N,X) :=
\{ (M^y(t-),M^y(t)) \colon M^y(t-)<M^y(t), t\ge 0 \},
\end{equation}
where $M^y(t) = \int_{[0,t]\times\mathcal{E}} f\big( y- X(s-) \big) N(ds,df)$.
Denote the process by
\[
\ensuremath{\overline{\normalfont\textsc{skewer}}}(N,X):= (\ensuremath{\normalfont\textsc{skewer}}(y,N,X), y\ge 0).
\]
\end{definition}
Let $\theta\in (-1,1)$. We know from \cite{GoinYor03} that ${\tt BESQ}( 2\theta)$ has an exit boundary at zero.
Pitman and Yor \cite[Section~3]{PitmYor82} construct a $\sigma$-finite excursion measure $\Lambda^{(2\theta)}_{\mathtt{BESQ}}$ associated with ${\tt BESQ}(2\theta)$ on the space $\mathcal{E}$,
such that
\begin{equation}\label{eq:besq-exc}
\Lambda_{\tt BESQ}^{(2\theta)}(\zeta>y):=\Lambda^{(2\theta)}_{\mathtt{BESQ}} \left\{f\in\mathcal{E}\colon \zeta(f)> y\right\} =\frac{2^{\theta-1}}{\Gamma(2\!-\!\theta)} y^{-1+\theta}, \qquad y>0,
\end{equation}
and under $\Lambda^{(2\theta)}_{\mathtt{BESQ}}$, conditional on $\{ \zeta=y \}$ for $0<y<\infty$,
the excursion is a squared Bessel bridge from $0$ to $0$ of length $y$, see \cite[Section 11.3]{RevuzYor}.
\cite[Section~3]{PitmYor82} offers several other equivalent descriptions of $\Lambda^{(2\theta)}_{\mathtt{BESQ}}$; see also \cite[Section~2.3]{Paper1-1}.
\begin{figure}
\centering
\input{Fig_JCCP_skewer_5b.pdf_t}
\caption{A scaffolding with marks (atom size evolutions as spindle-shapes and allelic types from a colour spectrum coded by $[0,1]$) and the skewer and superskewer (see Definition~\ref{def:superskewer}) at level $y$, not to scale.\label{fig:scaf-marks}}
\end{figure}
For $\alpha\in (0,1)$, let $\mathbf{N}$ be a Poisson random measure on $\mathbb{R}_+\times \mathcal{E}$ with intensity $c_{\alpha}\mathrm{Leb} \otimes \Lambda^{(-2\alpha)}_{\mathtt{BESQ}}$, denoted by $\mathtt{PRM}(c_{\alpha}\mathrm{Leb} \otimes \Lambda^{(-2\alpha)}_{\mathtt{BESQ}})$, where
\begin{equation}\label{eq:nu-Lambda}
c_{\alpha}:=2 \alpha(1\!+\!\alpha)/\Gamma(1\!-\!\alpha).
\end{equation}
Each atom of $\mathbf{N}$, which is an excursion function in $\mathcal{E}$, shall be referred to as a \emph{spindle}, in view of the illustration of $\mathbf{N}$ in Figure~\ref{fig:scaf-marks}.
We pair $\mathbf{N}$ with a \emph{scaffolding function} $\xi_{\mathbf{N}}:=(\xi_{\mathbf{N}}(t), t\ge 0)$ defined by
\begin{equation}\label{eq:scaffolding}
\xi_{\mathbf{N}}(t):= \lim_{z\downarrow 0} \bigg(
\int_{[0,t]\times \{g\in \mathcal{E}\colon \zeta(g)>z\}} \zeta(f) \mathbf{N}(ds,df) - \frac{(1+\alpha)t}{(2z)^{\alpha}\Gamma(1-\alpha)\Gamma(1+\alpha)}
\bigg).
\end{equation}
This is a spectrally positive stable L\'evy process of index $(1+\alpha)$, with L\'evy measure $c_{\alpha}\Lambda^{(-2\alpha)}_{\mathtt{BESQ}} (\zeta \in d y)$ and Laplace exponent
$q\mapsto 2^{-\alpha}q^{1+\alpha} /\Gamma(1+\alpha)$, $q\ge 0$.
For $x>0$, let $\mathbf{f}\sim {\tt BESQ}_x(- 2\alpha)$, independent of $\mathbf{N}$.
Write $\mathtt{Clade}_x(\alpha)$ for the law of a \emph{clade of initial mass $x$}, which is a random point measure on $\mathbb{R}_+\times \mathcal{E}$ defined by
\begin{equation*}\label{eq:clade}
\ensuremath{\normalfont\textsc{clade}} (\mathbf{f}, \mathbf{N}):= \delta(0,\mathbf{f}) + \mathbf{N}\,\big|_{\left(0, T_{-\zeta(\mathbf{f})} (\xi_\mathbf{N}) \right]\times \mathcal{E}},
~\text{where}~ T_{-y} (\xi_\mathbf{N}):= \inf\{t\ge 0 \colon \xi_{\mathbf{N}}(t)= -y\}.
\end{equation*}
\begin{definition}[$\mathrm{SSIP}^{(\alpha)}(0)$-evolution]\label{defn:ip0}
For $\gamma \in \mathcal{I}_H$, let $(\mathbf{N}_U, U\in \gamma)$ be a family of independent clades, with each $\mathbf{N}_U \sim \mathtt{Clade}_{\mathrm{Leb}(U)} (\alpha)$.
An $\mathrm{SSIP}^{(\alpha)}(0)$-evolution starting from $\gamma \in \mathcal{I}_H$ is a process distributed as $\boldsymbol{\beta}= (\beta(y), y\ge 0)$ defined by
\[
\beta(y):= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma} \ensuremath{\normalfont\textsc{skewer}} (y, \mathbf{N}_{U}, \xi(\mathbf{N}_{U})), \qquad y\ge 0.
\]
\end{definition}
We now turn to the case $\theta_1>0$.
Let $\mathbf{N}$ be a $\mathtt{PRM}(c_{\alpha}\mathrm{Leb}\otimes \Lambda^{(-2\alpha)}_{\mathtt{BESQ}})$ and $\mathbf{X}_{\alpha}= \xi_{\mathbf{N}}$ its scaffolding.
Define the \emph{modified scaffolding} process
\begin{equation}\label{eq:X-alphatheta}
\mathbf{X}_{\theta_1}(t) := \mathbf{X}_{\alpha}(t) + \left(1 - \alpha/\theta_1 \right)\ell(t) \quad \text{where} \quad \ell(t) := -\inf_{u\leq t}\mathbf{X}_{\alpha}(u) \quad \text{for }t\ge 0.
\end{equation}
For any $y\ge 0$, let
\[
T^{-y}_{\alpha}:=T_{-y}(\mathbf{X}_\alpha)=\inf\{t\ge 0\colon \mathbf{X}_{\alpha}(t)= -y \} = \inf\{t\ge 0\colon \ell(t)\ge y \}.
\]
Notice that
$\inf_{u\le t} \mathbf{X}_{\theta_1} (u)= -(\alpha/\theta_1) \ell (t)$,
then we have the identity
\begin{equation}\label{eq:Talphatheta}
T^{-y}_{\theta_1}
:=T_{-y}(\mathbf{X}_{\theta_1})= \inf\{t\ge 0\colon \mathbf{X}_{\theta_1}(t)=- y \}
= T^{-(\theta_1/\alpha) y}_{\alpha}.
\end{equation}
For each $j\in \mathbb{N}$, define an interval-partition-valued process
\[
\cev{\beta}_j(y) :=
\ensuremath{\normalfont\textsc{skewer}} (y, \mathbf{N}\big|_{[0,T_{\theta_1}^{-j})}, j+\mathbf{X}_{\theta_1}\big|_{[0,T_{\theta_1}^{-j})}), \qquad y\in [0,j].
\]
For any $z>0$, the shifted process
$(z +\mathbf{X}_{\alpha} (T_{\alpha}^{-z}+t), t\ge 0)$
has the same distribution as $\mathbf{X}_{\alpha}$, by the strong Markov property of $\mathbf{X}_{\alpha}$.
As a consequence, $(-z+\ell(t+T_{\alpha}^{-z}), t\ge 0)$ has the same law as $\ell$.
Combining this with \eqref{eq:Talphatheta}, we deduce that,
for any $k\ge j$, the following two pairs have the same law:
\[
\left({\mathbf{N}}\circ L_{T_{\theta_1}^{j-k}}\Big|_{[0, T_{\theta_1}^{-k}-T_{\theta_1}^{j-k})},
k+{\mathbf{X}_{\theta_1}}\circ L_{T_{\theta_1}^{j-k}}\Big|_{[0,T_{\theta_1}^{-k}-T_{\theta_1}^{j-k})}
\right)
\stackrel{d}{=}
\left({\mathbf{N}}\big|_{[0,T_{\theta_1}^{-j})}, j+{\mathbf{X}_{\theta_1}}\big|_{[0,T_{\theta_1}^{-j})}
\right),
\]
where $L$ stands for the shift operator and we have also used the Poisson property of $\mathbf{N}$.
This leads to
$(\cev{\beta}_j(y),\, y\in[0,j] ) \stackrel{d}{=} (\cev{\beta}_k(y),\, y\in[0,j] )$.
Thus, by Kolmogorov’s extension theorem, there exists a process $(\cev{\beta}(y), y\ge 0)$ such that
\begin{equation}\label{eq:backarrow}
\left(\cev{\beta}(y),\, y\in[0,j]\right) \stackrel{d}{=} \left(\cev{\beta}_j(y),\, y\in[0,j] \right)\quad\mbox{for every $j\in \mathbb{N}$.}
\end{equation}
\begin{definition}[$\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution]~\label{defn:alphatheta}
For $\theta_1>0$, let $(\cev{\beta}(y), y\ge 0)$ be as in \eqref{eq:backarrow} and $(\vecc{\beta}(y), y\ge 0)$
an independent $\mathrm{SSIP}^{(\alpha)}(0)$-evolution starting from $\gamma\in \mathcal{I}_H$.
Then $(\beta(y) = \cev{\beta}(y)\star \vecc{\beta}(y),\, y\ge 0)$
is called an \emph{$\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution} starting from $\gamma$.
\end{definition}
In \cite{IPPAT}, an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution is defined in a slightly different way that more explicitly handles the Poisson random measure of excursions of $\mathbf{X}_\alpha$ above the minimum. Indeed, the passage from $\alpha$ to $\theta_1$ in \cite{IPPAT} is by changing the intensity by a factor of $\theta_1/\alpha$.
The current correspondence can be easily seen to have the same effect.
\begin{proposition}[{\cite[Theorem 1.4 and Proposition 3.4]{IPPAT}}]\label{prop:alphatheta}
For $\theta_1 \!\ge\! 0$, an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution
is a path-continuous Hunt process and its total mass process
is a ${\tt BESQ}(2\theta_1)$.
\end{proposition}
We refer to \cite{IPPAT} for the transition kernel of an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution.
\subsection{Poissonised ordered up-down Chinese restaurant processes}\label{sec:PCRP}
For $\theta >-1$, let $Z:= (Z(t), t\ge 0)$ be a continuous-time Markov chain on $\mathbb{N}_0$, whose non-zero transition rates are
\begin{equation*}
Q_{i,j} (\theta)= \begin{cases}
i +\theta,& i\ge 1,\ j = i+1; \\
i,& i\ge 1,\ j = i-1; \\
\theta\vee 0,& i=0,\ j=1.
\end{cases}
\end{equation*}
In particular, $0$ is an absorbing state when $\theta\le 0$.
For $k\in \mathbb{N}_0$, we define
\begin{equation}
\label{eq:pi}
\pi_k(\theta) \colon \text{ the law of the process } Z \text{ starting from } Z(0)=k.
\end{equation}
Let $\zeta(Z):= \inf\{t> 0 \colon Z(t) = 0\}$ be its first hitting time of zero.
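Since this chain and its rescalings drive all subsequent limit theorems, we record a minimal simulation sketch (ours): a Gillespie-type loop with up-rate $i+\theta$ and down-rate $i$ from state $i\ge 1$, and up-rate $\theta\vee 0$ from $0$. The rescaled laws $\pi^{(n)}_k(\theta)$ in \eqref{eq:pi-n} below correspond to plotting \texttt{states}$/n$ against \texttt{times}$/(2n)$.
\begin{verbatim}
import random

def updown_path(k, theta, t_max, rng):
    """Jump times and states of Z ~ pi_k(theta) on [0, t_max];
    stops early if absorbed at 0 (which happens only for theta <= 0)."""
    t, i = 0.0, k
    times, states = [0.0], [k]
    while t < t_max:
        up = i + theta if i >= 1 else max(theta, 0.0)
        down = float(i)
        if up + down <= 0.0:
            break                          # absorbed at 0
        t += rng.expovariate(up + down)
        i += 1 if rng.random() < up / (up + down) else -1
        times.append(t)
        states.append(i)
    return times, states
\end{verbatim}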
Let $\alpha\in (0,1)$ and $\theta_1,\theta_2\ge 0$.
Recall from the introduction that a Poissonised ordered up-down Chinese restaurant process (PCRP) with parameters $\alpha$, $\theta_1$ and $\theta_2$, starting from $C\in \mathcal{C}$, is denoted by $\mathrm{PCRP}^{(\alpha)}_C (\theta_1, \theta_2)$.
When $\theta_2= \alpha$, the $\mathrm{PCRP}^{(\alpha)} (\theta_1, \alpha)$ was studied in detail by Rogers and Winkel \cite{RogeWink20}.
They develop a representation of a PCRP using scaffolding and spindles, in a similar way to the construction of an $\mathrm{SSIP}^{(\alpha)} (\theta_1)$-evolution.
Their approach draws on connections with \emph{splitting trees} and results
on the latter objects developed in \cite{GeigKers97,Lambert10}.
Let $\mathbf{D}\sim\mathtt{PRM}(\alpha \cdot \mathrm{Leb} \otimes \pi_1(-\alpha))$ and define its scaffolding function by
\begin{equation}\label{eq:scaffolding-D}
J_{\mathbf{D}} (t) :=- t + \int_{[0,t] \times \mathcal{E}} \zeta(f) \mathbf{D} (ds, df) ,\qquad t\ge 0.
\end{equation}
Let $Z\sim \pi_m(-\alpha)$ with $m\in \mathbb{N}$, independent of $\mathbf{D}$.
Then a \emph{discrete clade with initial mass $m$} is a random point measure on $\mathbb{R}_+\times \mathcal{E}$ defined by
\begin{equation}\label{eq:clade-D}
\ensuremath{\normalfont\textsc{clade}}^D (Z, \mathbf{D}):= \delta(0,Z) + \mathbf{D}\,\big|_{\big(0, T_{-\zeta(Z)} (J_\mathbf{D}) \big]\times \mathcal{E}},
\end{equation}
where $T_{-y} (J_\mathbf{D})= \inf\{t\ge 0 \colon J_{\mathbf{D}}(t)= -y\}$. Write $\mathtt{Clade}^D_m(\alpha)$ for the law of $\ensuremath{\normalfont\textsc{clade}}^D (Z, \mathbf{D})$.
\begin{lemma} [{\cite[Theorem 1.2]{RogeWink20}}]\label{lem:pcrp0}
For $\gamma \in \mathcal{C}$, let $(\mathbf{D}_U, U\!\in\! \gamma)$ be an independent family with each $\mathbf{D}_U \!\sim\! \mathtt{Clade}^D_{\mathrm{Leb}(U)} (\alpha)$.
Then the process
$
\big( \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma}\ensuremath{\normalfont\textsc{skewer}}(y, \mathbf{D}_{U}, J_{\mathbf{D}_{U}}), y\ge 0\big)
$
is a $\mathrm{PCRP}^{(\alpha)}_{\gamma}(0, \alpha)$.
\end{lemma}
To construct a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\alpha)$ with $\theta_1>0$, define for $t\ge 0$
\begin{equation}\label{eqnstar}
J_{\theta_1, \mathbf{D}}(t):= J_{\mathbf{D}}(t) + \left(1-\frac{\alpha}{\theta_1}\right) \ell(t), \quad \text{where}~ \ell (t):= -\inf_{u\le t} J_{\mathbf{D}}(u).
\end{equation}
Then $\inf\{t\!\ge\! 0\colon\! J_{\mathbf{D}}(t) \!=\!-z\}\!=\!\inf\{t\!\ge\! 0\colon\! J_{\theta_1,\mathbf{D}}(t)\!=\!-(\alpha/\theta_1)z\}\!=:\!T^{-(\alpha/\theta_1)z}_{\theta_1}$ for $z\ge 0$. Set
\[
\cev{C}_j(y) := \ensuremath{\normalfont\textsc{skewer}}\left(y, \mathbf{D}\big|_{[0,T_{\theta_1}^{-j})}, j+ J_{\theta_1,\mathbf{D}}\big|_{[0,T_{\theta_1}^{-j})}\right), \qquad y\in [0,j],\quad j\in\mathbb{N}.
\]
Then, for any $k>j$, we have
\[
\left( \mathbf{D}\big|_{[T^{j\!-\!k}_{\theta_1}, T^{-k}_{\theta_1})}, k+ J_{\theta_1,\mathbf{D}}\big|_{[T^{j\!-\!k}_{\theta_1}, T^{-k}_{\theta_1})} \right)
\mbox{$ \ \stackrel{d}{=}$ }\left(
\mathbf{D}\big|_{[0,T^{-j}_{\theta_1})}, j+ J_{\theta_1,\mathbf{D}}\big|_{[0,T^{-j}_{\theta_1})}
\right).
\]
As a result,
$(\cev{C}_k(y),\, y\in [0,j])\mbox{$ \ \stackrel{d}{=}$ } (\cev{C}_j(y),\, y\in [0,j])$.
Then by Kolmogorov's extension theorem there exists
a c\`adl\`ag process $(\cev{C}(y),\, y\ge 0)$
such that
\begin{equation}\label{eq:discrbackarrow}
(\cev{C}(y),\, y\in [0,j])\mbox{$ \ \stackrel{d}{=}$ } (\cev{C}_j(y),\, y\in [0,j]) \quad\text{for all}~ j\in \mathbb{N}.
\end{equation}
\begin{theorem}[{\cite[Theorem 2.5]{RogeWink20}}]\label{thm:jccp}
For $\theta_1 > 0$, let $(\cev{C}(y),\, y\ge 0)$ be the process defined in \eqref{eq:discrbackarrow}.
For $\gamma\in \mathcal{C}$,
let $(\vecc{C}(y),\,y\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)} (0,\alpha)$ starting from $\gamma$.
Then the $\mathcal{C}$-valued process
$(C(y):= \cev{C}(y)\star \vecc{C}(y),\,y\ge 0)$
is a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\alpha)$ starting from $\gamma$.
\end{theorem}
\subsection{Study of the up-down chain on positive integers}\label{sec:ud-chain}
For $\theta>-1$ and $n, k\in \mathbb{N}$, define a probability measure:
\begin{equation}\label{eq:pi-n}
\pi_k^{(n)}(\theta) \text{ is the law of the process }
\left( n^{-1} Z(2 n y),\, y\ge 0 \right), ~\text{where}~ Z \sim \pi_{k} (\theta)\text{ as in } \eqref{eq:pi}.
\end{equation}
In preparation of proving Theorem~\ref{thm:crp-ip}, we present the following convergence concerning scaffoldings and spindles.
\begin{proposition}\label{prop:cv-prm}
For $n\in \mathbb{N}$, let $\mathbf{N}^{(n)}$ be a Poisson random measure on $\mathbb{R}_+\times \mathcal{E}$ with intensity
$ \mathrm{Leb}\otimes ( 2 \alpha n^{1+ \alpha}\cdot\pi_1^{(n)} (-\alpha))$,
and define its scaffolding $\xi^{(n)}:= (\xi^{(n)}(t))_{t\ge 0}$, where
\begin{equation}\label{eq:scaffolding-n}
\xi^{(n)}(t):= J_{\mathbf{N}^{(n)}}^{(n)} (t) := - n^{\alpha} t + \int_{[0,t] \times \mathcal{E}} \zeta(f) \mathbf{N}^{(n)} (ds, df) ,\qquad t\ge 0.
\end{equation}
Write
$\ell^{(n)}:= \left( \ell^{(n)}(t):= -\inf_{s\in [0,t]} \xi^{(n)}(s) , t\ge 0\right)$.
Let $\mathbf{N}\sim \mathtt{PRM}( c_{\alpha} \cdot\mathrm{Leb} \otimes \Lambda^{(-2\alpha)}_{\mathtt{BESQ}})$, where $\Lambda^{(-2\alpha)}_{\mathtt{BESQ}}$ is the excursion measure associated with ${\tt BESQ}(- 2\alpha)$
introduced in Section~\ref{sec:pre} and $c_{\alpha}=2 \alpha(1\!+\!\alpha)/\Gamma(1\!-\!\alpha)$ as in \eqref{eq:nu-Lambda}. Define its scaffolding $\xi_{\mathbf{N}}$ as in \eqref{eq:scaffolding},
and $\ell_{\mathbf{N}}= (\ell_{\mathbf{N}}(t)= -\inf_{s\in [0,t]} \xi_{\mathbf{N}}(s), t\ge 0)$.
Then the joint distribution of the triple $(\mathbf{N}^{(n)}, \xi^{(n)}, \ell^{(n)})$ converges to $(\mathbf{N}, \xi_{\mathbf{N}}, \ell_{\mathbf{N}})$ in distribution, under the product of vague and Skorokhod topologies.
\end{proposition}
Note that, for $n\in \mathbb{N}$, we can obtain $\mathbf{N}^{(n)}$ from $\mathbf{D}\sim \mathtt{PRM}(\mathrm{Leb} \otimes \pi_1 (-\alpha))$, such
that for each atom $\delta(s,f)$ of $\mathbf{D}$, $\mathbf{N}^{(n)}$ has an atom $\delta(n^{-(1+\alpha)} s, n^{-1} f (2 n \,\cdot\, ) )$.
This suggests a close relation between $\mathbf{N}^{(n)}$ and a rescaled PCRP, which will be specified later on.
The up-down chain defined in \eqref{eq:pi-n} plays a central role in the proof of Proposition~\ref{prop:cv-prm}. Let us first record a convergence result obtained in \cite[Theorems 1.3 and 1.4]{RogeWink20}.
Similar convergence in a general context of discrete-time Markov chains converging to positive self-similar Markov processes has been established in \cite{BertKort16}.
\begin{lemma}[{\cite[Theorems 1.3 and 1.4]{RogeWink20}}]\label{lem:ud}
Fix $a>0$ and $\theta>-1$. For every $n\in \mathbb{N}$, let $Z^{(n)} \sim \pi_{\lfloor n a\rfloor}^{(n)} (\theta)$.
Then the following convergence holds in the space $\mathbb{D}(\mathbb{R}_+, \mathbb{R}_+)$ of c\`adl\`ag functions endowed with the Skorokhod topology:
\[
Z^{(n)} \underset{n\to \infty}{\longrightarrow} Z\sim {\tt BESQ}_a (2 \theta)\quad \text{in distribution}.
\]
Moreover, if $\theta\in (-1, 0]$, then the convergence holds jointly with the convergence of first hitting times of 0.
\end{lemma}
For our purposes, we study this up-down chain in more depth and obtain the following two
convergence results.
\begin{lemma}\label{lem:ud-bis}
In Lemma~\ref{lem:ud}, the joint convergence of the first hitting times of 0 also holds when $\theta\in (0,1)$.
\end{lemma}
Recall that for $\theta\ge 1$, the first hitting time of 0 by ${\tt BESQ}(2\theta)$ is infinite.
\begin{proof} We adapt the proof of \cite[Theorem 3(i)]{BertKort16}, which establishes such convergence of hitting times in a general context of discrete-time Markov chains
converging to positive self-similar Markov processes. This relies on Lamperti's representation for $Z\sim {\tt BESQ}_a (2 \theta)$
$$Z(t)=\exp(\xi(\sigma(t))),\qquad\mbox{where }\sigma(t)=\inf\left\{s\ge 0\colon\int_0^se^{\xi(r)}dr>t\right\},$$
for a Brownian motion with drift $\xi(t)=\log(a)+2B(t)-2(1-\theta)t$, and corresponding representations
$$Z_n(t)=\exp(\xi_n(\sigma_n(t))),\qquad\mbox{where }\sigma_n(t)=\inf\left\{s\ge 0\colon\int_0^se^{\xi_n(r)}dr>t\right\},$$
for continuous-time Markov chains $\xi_n$, $n\ge 1$, with increment kernels
$$L^n(x,dy)=2ne^x\Big((ne^x+\theta)\delta_{\log(1+1/(ne^x))}(dy)+ne^x\delta_{\log(1-1/(ne^x))}(dy)\Big),\ \ x\ge -\log n.$$
We easily check that for all $x\in\mathbb{R}$
$$\int_{y\in\mathbb{R}}yL^n(x,dy)\rightarrow -2(1-\theta)\quad\mbox{and}\quad\int_{y\in\mathbb{R}}y^2L^n(x,dy)\rightarrow 4,$$
as well as
$$\sup_{\{x\colon|x|\le r\}}\int_{y\in\mathbb{R}}y^21_{\{|y|>\varepsilon\}}L^n(x,dy)=0\quad\mbox{for $n$ sufficiently large.}$$
To apply \cite[Theorem IX.4.21]{JacodShiryaev}, we further note that all convergences are locally uniform, and we extend the increment kernel $\widetilde{L}^n(x,dy):=L^n(x,dy)$, $x\ge\log(2)-\log(n)$, by setting
$$\widetilde{L}^n(x,dy):=L^n(\log(2)-\log(n),dy),\quad x<\log(2)-\log(n),$$
for definiteness. With this extension of the increment kernel, we obtain $\widetilde{\xi}_n\rightarrow\xi$ in distribution on
$\mathbb{D}([0,\infty),\mathbb{R})$. This implies $\xi_n\rightarrow\xi$ in distribution also for the process $\xi_n$ that jumps from $-\log(n)$ to
$-\infty$, provided that we stop the processes at the first time they pass below any fixed negative level.
Turning to extinction times $\tau_n$ of $Z_n$, we use Skorokhod representation to assume that $\xi_n\rightarrow\xi$ almost surely. Then we want to show
that also
$$\tau_n=\int_0^\infty e^{\xi_n(s)}ds\rightarrow\tau=\int_0^\infty e^{\xi(s)}ds\qquad\mbox{in probability.}$$
We first establish some uniform bounds on the extinction times when $Z_n$, $n\ge 1$, are started from
sufficiently small initial states. To achieve this, we consider the generator $\mathcal{L}^n$ of $Z_n$ and note that for $g(x)=x^\beta$, we have
$$\mathcal{L}^ng(x)=2n((nx+\theta)g(x+1/n)+nx g(x-1/n)-(2nx+\theta)g(x))\le -C/x$$
for all $n\ge 1$ and $x\ge K/n$ if and only if
$$\frac{g(1+h)-2g(1)+g(1-h)}{h^2}+\theta\frac{g(1+h)-g(1)}{h}\le -C/2\quad\mbox{for all $h\le 1/K$.}$$
But since $g^{\prime\prime}(1)+\theta g^\prime(1)=\beta(\beta-1+\theta)<0$ for $\beta\in(0,1-\theta)$, the function $g$ is a Foster--Lyapunov function, and \cite[Corollary 2.7]{MensPetr14}, applied with $q=p/2=\beta/(1+\beta)$ and $f(x)=x^{(1+\beta)/2}$, yields
$$\exists C^\prime>0\ \forall n\ge 1,\ \forall x\ge K/n\quad\mathbb{E}_x((\tau_n^{(K)})^q)<C^\prime x^\beta,$$
where $\tau_n^{(K)}=\inf\{t\ge 0\colon Z_n(t)\le K/n\}$. An application of Markov's inequality yields
$\mathbb{P}_x(\tau_n^{(K)}>t)\le C^\prime x^\beta t^{-\beta/(1+\beta)}$. In particular,
$$\forall\varepsilon>0\ \forall t>0\ \exists\eta>0\ \forall n\ge 1\ \forall K+1\le i\le\eta n\quad \mathbb{P}_{i/n}(\tau_n^{(K)}>t/6)\le\varepsilon/8.$$
Furthermore, there is $n_0$ such that for $n\ge n_0$, the probability that $Z_n$ starting from $K/n$ takes more than time $t/6$ to get
from $K/n$ to $0$ is smaller than $\varepsilon/8$. Now choose $R$ large enough so that
$$\mathbb{P}(\exp(\xi(R))<\eta/2)\ge 1-\varepsilon/8\quad\mbox{and}\quad\mathbb{P}\left(\int_R^\infty e^{\xi(s)}ds>t/3\right)\le\varepsilon/4.$$
We can also take $n_1\ge n_0$ large enough so that
$$\mathbb{P}(|\exp(\xi_n(R))-\exp(\xi(R))|<\eta/2)\ge 1-\varepsilon/8\quad\mbox{for all $n\ge n_1$.}$$
Then, considering $\exp(\xi_n(R))$ and applying the Markov property at time $R$,
$$\mathbb{P}(\exp(\xi_n(R))>\eta)<\varepsilon/4\quad\mbox{and}\quad\mathbb{P}\left(\int_R^\infty e^{\xi_n(s)}ds>t/3\right)\le\varepsilon/2,\quad\mbox{for all $n\ge n_1$.}$$
But since $\xi_n\rightarrow\xi$ almost surely, uniformly on compact sets, we already have
$$\int_0^R e^{\xi_n(s)}ds\rightarrow\int_0^Re^{\xi(s)}ds\quad\mbox{almost surely.}$$
Hence, we can find $n_2\ge n_1$ so that for all $n\ge n_2$
$$\mathbb{P}\left(\left|\int_0^R e^{\xi_n(s)}ds-\int_0^Re^{\xi(s)}ds\right|>t/3\right)<\varepsilon/4.$$
We conclude that, for any given $t>0$ and any given $\varepsilon$, we found $n_2\ge 1$ such that for all $n\ge n_2$
$$\mathbb{P}\left(\left|\int_0^\infty e^{\xi_n(s)}ds-\int_0^\infty e^{\xi(s)}ds\right|>t\right)<\varepsilon,$$
as required.
\end{proof}
\begin{proposition}\label{prop:vague}
Let $\theta\in (-1,1)$ and $Z\sim\pi_1(\theta)$. Denote by $\widetilde{\pi}_1^{(n)}(\theta)$ the distribution of
$\big(\frac{1}{n}Z(2nt\wedge\zeta(Z)),\,t\ge 0\big)$.
Then the following convergence holds vaguely
\[
\frac{\Gamma(1\!+\!\theta)}{1\!-\!\theta} n^{1-\theta} \cdot \widetilde{\pi}_1^{(n)}(\theta) \underset{n\to \infty}{\longrightarrow} \Lambda^{(2\theta)}_{\mathtt{BESQ}}
\]
on the space of c\`adl\`ag excursions equipped with the Skorokhod topology.
\end{proposition}
\begin{proof} Denote by $A(f)=\sup|f|$ the supremum of a c\`adl\`ag excursion $f$. In using the term ``vague convergence'' on spaces that
are not locally compact, but in which bounded sets are those bounded away from a point (here the sets $\{A>a\}$, $a>0$, are bounded), we follow
Kallenberg \cite[Section 4.1]{KallenbergRM}. Specifically, it follows from his Lemma 4.1 that it suffices to show for all $a>0$
\begin{enumerate}\item[1.] $\Lambda^{(2\theta)}_{\mathtt{BESQ}}(A=a)=0$,\vspace{0.1cm}
\item[2.] $(\Gamma(1+\theta)/(1-\theta)) n^{1-\theta} \cdot \widetilde{\pi}_1^{(n)}(\theta)(A>a) \underset{n\to \infty}{\longrightarrow} \Lambda^{(2\theta)}_{\mathtt{BESQ}}(A>a)$,
\item[3.] $\widetilde{\pi}_1^{(n)}(\theta)(\,\cdot\,|\,A>a) \underset{n\to \infty}{\longrightarrow} \Lambda^{(2\theta)}_{\mathtt{BESQ}}(\,\cdot\,|\,A>a)$ weakly.
\end{enumerate}
See also \cite[Proposition A2.6.II]{DaleyVereJones1}.
1. is well-known. Indeed, we have chosen to normalise $\Lambda^{(2\theta)}_{\mathtt{BESQ}}$ so that
$\Lambda^{(2\theta)}_{\mathtt{BESQ}}(A>a)=a^{\theta-1}.$
See e.g. \cite[Section~3]{PitmYor82}. Cf. \cite[Lemma 2.8]{Paper1-1}, where a different normalisation was chosen.
2. can be proved using scale functions. Let us compute a scale function $s$ for the birth-death chain with up-rates $i+\theta$ and down-rates
$i$ from state $i\ge 1$. Set $s(0)=0$ and $s(1)=1$. For $s$ to be a scale function, we need
$$(k+\theta)(s(k+1)-s(k))+k(s(k-1)-s(k))=0\qquad\mbox{for all }k\ge 1.$$
Let $d(k)=s(k)-s(k-1)$, $k\ge 1$. Then
$$d(k+1)=\frac{k}{k+\theta}d(k)=\frac{\Gamma(k+1)\Gamma(1+\theta)}{\Gamma(k+1+\theta)}\sim\Gamma(1+\theta)k^{-\theta}\qquad\mbox{as } k\rightarrow\infty,$$
and therefore
$$s(k)=\sum_{i=1}^kd(i)=\sum_{i=1}^k\frac{\Gamma(i+1)\Gamma(1+\theta)}{\Gamma(i+1+\theta)}\sim\frac{\Gamma(1+\theta)}{1-\theta}k^{1-\theta}.$$
Then the scale function applied to the birth-death chain is a martingale. Now let $p(k)$ be the probability of hitting $k$ before absorption in 0,
when starting from 1. Applying the optional stopping theorem at the first hitting time of $\{0,k\}$, we find $p(k)s(k)=1$, and hence
$$\frac{\Gamma(1\!+\!\theta)}{1\!-\!\theta} n^{1-\theta}p(\lceil na\rceil)=\frac{\Gamma(1\!+\!\theta) n^{1- \theta}}{(1-\theta)s(\lceil na\rceil)}\underset{n\to\infty}{\longrightarrow}a^{\theta-1},$$
as required.
3. can be proved by using the First Description of $\Lambda^{(2\theta)}_{\mathtt{BESQ}}$ given in \cite[(3.1)]{PitmYor82}, which states, in particular, that the
excursion under $\Lambda^{(2\theta)}_{\mathtt{BESQ}}(\,\cdot\,|A>a)$ is a concatenation of two independent processes, an $\uparrow$-diffusion
starting from 0 and stopped when reaching $a$ followed by a $0$-diffusion starting from $a$ and run until absorption in 0. In our case, the
$0$-diffusion is ${\tt BESQ}(2\theta)$, while \cite[(3.5)]{PitmYor82} identifies the $\uparrow$-diffusion as ${\tt BESQ}(4-2\theta)$.
Specifically, straightforward Skorokhod topology arguments adjusting the time-changes around the concatenation times, see
\cite[VI.1.15]{JacodShiryaev}, imply that it suffices to show:
\begin{enumerate}\item[(a)] The birth-death chain starting from 1 and conditioned to reach $\lceil na\rceil$ before 0, rescaled,
converges to a ${\tt BESQ}(4\!-\!2\theta)$ stopped at $a$, jointly with the hitting times of $\lceil na\rceil$.
\item[(b)] The birth-death chain starting from $\lceil na\rceil$ run until first hitting 0, rescaled, converges to a ${\tt BESQ}(2\theta)$
starting from $a$ stopped when first hitting $0$, jointly with the hitting times.
\end{enumerate}
(b) was shown in \cite[Theorem 1.3]{RogeWink20}; see also Lemmas \ref{lem:ud} and \ref{lem:ud-bis} above, which complete the convergence of the hitting times. For (a), we adapt that proof. But first we need to identify the conditioned birth-death
process. Note that the holding rates are not affected by the conditioning. An elementary argument based purely on the jump chain shows that
the conditioned jump chain is Markovian, and its transition probabilities are adjusted by factors $s(i\pm 1)/s(i)$ so that the conditioned birth-death
process has up-rates $(i+\theta)s(i+1)/s(i)$ and down-rates $is(i-1)/s(i)$ from state $i\ge1$. Rescaling, our processes are instances of $\mathbb{R}$-valued
pure-jump Markov processes with jump intensity kernels $\widetilde{K}^n(x,dy)=0$ for $x\le 0$ and, for $x>0$,
\[\widetilde{K}^n(x,dy)=2n\!\left(\!\!\left(\lceil nx\rceil+\theta\right)\frac{s(\lceil nx\!+\!1\rceil)}{s(\lceil nx\rceil)}\delta_{1/n}(dy)
+\lceil nx\rceil\frac{s(\lceil nx\!-\!1\rceil)}{s(\lceil nx\rceil)}\delta_{-1/n}(dy)\!\right)\!.
\]
We now check the drift, diffusion and jump criteria of \cite[Theorem IX.4.21]{JacodShiryaev}: for $x>0$
\begin{align*}
\int_{\mathbb{R}}y\widetilde{K}^n(x,dy)
&=2\lceil nx\rceil\frac{s(\lceil nx\!+\!1\rceil)-s(\lceil nx\!-\!1\rceil)}{s(\lceil nx\rceil)}+2\theta\frac{s(\lceil nx\!+\!1\rceil)}{s(\lceil nx\rceil)}\\
&\rightarrow 4-4\theta+2\theta=4-2\theta,\\
\int_{\mathbb{R}}y^2\widetilde{K}^n(x,dy)
&=\frac{2\lceil nx\rceil}{n}\frac{s(\lceil nx\!+\!1\rceil)+s(\lceil nx\!-\!1\rceil)}{s(\lceil nx\rceil)}+\frac{2\theta}{n}\frac{s(\lceil nx\!+\!1\rceil)}{s(\lceil nx\rceil)}\\
&\rightarrow 4x+0=4x,\\
\int_{\mathbb{R}}y^21_{\{|y|\ge\varepsilon\}}\widetilde{K}^n(x,dy)&=0\qquad\mbox{for $n$ sufficiently large,}
\end{align*}
all uniformly in $x\in(0,\infty)$, as required for the limiting $(0,\infty)$-valued ${\tt BESQ}(4-2\theta)$ diffusion with infinitesimal drift $4-2\theta$ and diffusion coefficient $4x$. The convergence of the hitting times of $\lceil na\rceil$, which are the first passage times above level $\lceil na\rceil$, follows from the regularity of the limiting diffusion after the first passage time above level $a$. See e.g.
\cite[Lemma 3.3]{RogeWink20}.
\end{proof}
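The scale-function asymptotics in part 2.\ of the preceding proof are easy to check numerically. The following sketch (ours, purely as a sanity check) evaluates $s(k)$ by the recursion $d(k+1)=\frac{k}{k+\theta}\,d(k)$ and confirms that $\frac{\Gamma(1+\theta)}{1-\theta}\,n^{1-\theta}/s(\lceil na\rceil)$ approaches $a^{\theta-1}$.
\begin{verbatim}
import math

def s(k, theta):
    """s(k) = sum_{i=1}^k Gamma(i+1) Gamma(1+theta) / Gamma(i+1+theta)."""
    d, total = 1.0, 0.0
    for i in range(1, k + 1):
        total += d                 # add d(i)
        d *= i / (i + theta)       # d(i+1) = i / (i + theta) * d(i)
    return total

theta, a = 0.3, 2.0
for n in (10**3, 10**4, 10**5):
    lhs = (math.gamma(1 + theta) / (1 - theta)
           * n ** (1 - theta) / s(math.ceil(n * a), theta))
    print(lhs)                     # approaches a**(theta - 1) = 0.6156...
\end{verbatim}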
\begin{proof}[Proof of Proposition~\ref{prop:cv-prm}]
Proposition~\ref{prop:vague} shows that the intensity measure of the Poisson random measure $\mathbf{N}^{(n)}$
converges vaguely as $n\to \infty$. Then the weak convergence of $\mathbf{N}^{(n)}$, under the vague topology, follows from \cite[Theorem~4.11]{KallenbergRM}.
The weak convergence $\xi^{(n)} \to \xi_{\mathbf{N}}$ has already been proved by Rogers and Winkel \cite[Theorem 1.5]{RogeWink20}.
Therefore, both sequences $(\mathbf{N}^{(n)},n\in \mathbb{N})$ and $(\xi^{(n)}, n\in \mathbb{N})$ are tight (see e.g.\ \cite[VI~3.9]{JacodShiryaev}),
and the latter implies the tightness of $(\ell^{(n)},n\in \mathbb{N})$.
We hence deduce immediately the tightness of the triple-valued sequence $((\mathbf{N}^{(n)}, \xi^{(n)},\ell^{(n)}), n\in \mathbb{N})$.
As a result, we only need to prove that, for any subsequence $(\mathbf{N}^{(n_i)},\xi^{(n_i)},\ell^{(n_i)})$ that
converges in law, the limiting distribution is the same as $(\mathbf{N},\xi_{\mathbf{N}}, \ell_{\mathbf{N}})$.
By Skorokhod representation, we may assume that $(\mathbf{N}^{(n_i)},\xi^{(n_i)},\ell^{(n_i)})$ converges a.s.\@ to $(\mathbf{N},\widetilde{\xi}, \widetilde{\ell})$,
and it remains to prove that $\widetilde{\xi}=\xi_{\mathbf{N}}$ and $\widetilde{\ell}=\ell_{\mathbf{N}}$ a.s..
For any $\varepsilon>0$, since a.s.\@ $\mathbf{N}$ has no spindle of length equal to $\varepsilon$,
the vague convergence of $\mathbf{N}^{(n_i)}$ implies that,
a.s.\@ for any $t\ge 0$, we have the following weak convergence of finite point measures:
\[
\sum_{s\le t} \mathbf{1}\{|\Delta \xi^{(n_i)} (s)|>\varepsilon\} \delta\left(s, \Delta \xi^{(n_i)}(s)\right)
\quad \Longrightarrow \quad
\sum_{s\le t} \mathbf{1}\{|\Delta \xi_{\mathbf{N}} (s)|>\varepsilon\} \delta\left(s, \Delta \xi_{\mathbf{N}} (s)\right).
\]
The subsequence above also converges a.s.\@ to $\sum_{s\le t} \mathbf{1}\{|\Delta \widetilde{\xi} (s)|>\varepsilon\} \delta\big(s, \Delta \widetilde{\xi}(s)\big)$, since $\xi^{(n_i)}\rightarrow\widetilde{\xi}$ in
$\mathbb{D}(\mathbb{R}_+,\mathbb{R})$.
By the L\'evy--It\^o decomposition, this is enough to conclude that
$\widetilde{\xi} = \xi_{\mathbf{N}}$ a.s..
For any $t\ge 0$, since $\xi^{(n_i)}\to \xi_{\mathbf{N}}$ a.s.\@ and $\xi_{\mathbf{N}}$ is a.s.\@ continuous at $t$, we have $(\xi^{(n_i)}(s), s\in [0,t]) \to (\xi_{\mathbf{N}}(s), s\in [0,t])$ in $\mathbb{D}([0,t], \mathbb{R})$ a.s.. Then $\inf_{s\in [0,t]} \xi^{(n_i)}(s) \to \inf_{s\in [0,t]}\xi_{\mathbf{N}}(s)$ a.s., because the running infimum is a continuous functional (w.r.t.\@ the Skorokhod topology).
In other words, $\widetilde{\ell}(t)=\ell_{\mathbf{N}}(t)$ a.s.. By the continuity of the process $\ell_{\mathbf{N}}$, we have
$\widetilde{\ell}=\ell_{\mathbf{N}}$ a.s., completing the proof.
\end{proof}
\begin{lemma}[First passage over a negative level]\label{lem:T-}
Suppose that $(\mathbf{N}^{(n)}, \xi^{(n)})$ as in Proposition~\ref{prop:cv-prm} converges a.s.\@ to $(\mathbf{N}, \xi_{\mathbf{N}})$ as $n\to \infty$. Define
\begin{equation}
T_{-y}^{(n)} := T_{-y} (\xi^{(n)}):= \inf\{t\ge 0 \colon \xi^{(n)}(t)= -y\}, \qquad y>0,
\end{equation}
and similarly $T_{-y}:= T_{-y} (\xi_{\mathbf{N}})$.
Let $(h^{(n)})_{n\in \mathbb{N}}$ be a sequence of positive numbers with $\lim_{n\to \infty} h^{(n)}=h>0$.
Then
$T^{(n)}_{- h^{(n)}} $ converges to $T_{-h}$ a.s..
\end{lemma}
\begin{proof}
Since the process $\xi^{(n)}$ is a spectrally positive L\'evy process with some Laplace exponent
$\Phi^{(n)}$, we know from \cite[Theorem~VII.1]{BertoinLevy} that
$(T^{(n)}_{(-y)+},y\ge 0)$ is a subordinator with Laplace exponent $(\Phi^{(n)})^{-1}$.
On the one hand, the convergence of $\Phi^{(n)}$ leads to $(T^{(n)}_{(-y)+},y\ge 0)\to (T_{(-y)+}, y\ge 0)$ in distribution under the Skorokhod topology.
Since $\xi_{\mathbf{N}}$ is a.s.\ continuous at $T_{-h}$, we have
$T^{(n)}_{-h^{(n)}}\to T_{-h}$ in distribution.
On the other hand, we deduce from the convergence of the process $\xi^{(n)}$ in $\mathbb{D}(\mathbb{R}_+,\mathbb{R})$ that, for any $\varepsilon>0$, a.s.\@ there exists $N_1\in \mathbb{N}$ such that for all $n>N_1$,
\[
|\xi^{(n)}(T_{-h}) - (-h)|<\varepsilon.
\]
We may assume that $|h^{(n)} - h|<\varepsilon$ for all $n\in \mathbb{N}$.
As a result, a.s.\@ for any $n> N_1$ and $y'<h^{(n)}-2\varepsilon$,
we have $T^{(n)}_{-y'}< T_{-h}$.
Hence, by the arbitrariness of $\varepsilon$ and the left-continuity of $T^{(n)}_{-y}$ with respect to $y$, we have
$
\limsup_{n\to \infty} T^{(n)}_{-h^{(n)}}\le T_{-h}$ a.s.. Recalling that $T^{(n)}_{-h^{(n)}}\to T_{-h}$ in distribution, it follows that $T^{(n)}_{-h^{(n)}}\to T_{-h}$ a.s..
\end{proof}
\subsection{The scaling limit of a $\mathrm{PCRP}^{(\alpha)} (0,\alpha)$}\label{sec:cv-0alpha}
\begin{theorem}[Convergence of $\mathrm{PCRP}^{(\alpha)} (0,\alpha)$]
\label{thm:crp-ip-0}
For $n\in \mathbb{N}$, let $(C^{(n)}(y), y\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)}(0,\alpha)$ starting from
${C^{(n)}(0)}\in \mathcal{C}$
and $(\beta(y), y\ge 0)$ be an $\mathrm{SSIP}^{(\alpha)}(0)$-evolution starting from $\beta(0)\in \mathcal{I}_H$.
Suppose that the interval partition $\frac{1}{n} C^{(n)}(0)$ converges in distribution to $\beta(0)$ as $n\to \infty$, under $d_H$.
Then the rescaled process $(\frac{1}{n} C^{(n)}(2 n y), y\ge 0)$ converges in distribution to $(\beta(y), y\ge 0)$ as $n\to \infty$ in the Skorokhod sense and hence, the limit being continuous, locally uniformly.
\end{theorem}
We start with the simplest case.
\begin{lemma}\label{lem:cv-clade}
The statement of Theorem~\ref{thm:crp-ip-0} holds, if $\beta(0)= \{(0,b)\}$ and $C^{(n)} (0) = \{(0,b^{(n)})\}$, where $ \lim_{n\to \infty} n^{-1} b^{(n)}= b> 0$.
\end{lemma}
To prove this lemma, we first give a representation of the rescaled PCRP.
Let $(\mathbf{N}^{(n)},\xi^{(n)})$ be as in Proposition~\ref{prop:cv-prm}.
For each $n\in \mathbb{N}$, we define a random point measure on $\mathbb{R}_+\times \mathcal{E}$ by
\begin{equation*}
\mathbf{N}^{(n)}_{\mathrm{cld}}:= \delta(0,\mathbf{f}^{(n)}) + \mathbf{N}^{(n)}\Big|_{(0, T_{-\zeta(\mathbf{f}^{(n)})} (\xi^{(n)})]\times \mathcal{E}},
\end{equation*}
where $\mathbf{f}^{(n)}\sim \pi_{b^{(n)}}^{(n)}(-\alpha)$, independent of $\mathbf{N}^{(n)}$, and $T_{-y} (\xi^{(n)}):= \inf\{t\ge 0 \colon \xi^{(n)}(t)= -y\}$.
Then we may assume that $\mathbf{N}^{(n)}_{\mathrm{cld}}$ is obtained from $\mathbf{D}^{(n)} \sim \mathtt{Clade}^{D}_{b^{(n)}}(\alpha)$ defined in \eqref{eq:clade-D} such that
for each atom $\delta(s,f)$ of $\mathbf{D}^{(n)}$, $\mathbf{N}^{(n)}_{\mathrm{cld}}$ has an atom $\delta(n^{-(1+\alpha)} s, n^{-1} f (2 n \cdot ) )$.
Let $\xi^{(n)}_{\mathrm{cld}}$ be the scaffolding associated with $\mathbf{N}^{(n)}_{\mathrm{cld}}$ as in \eqref{eq:scaffolding-n}.
As a consequence, we have the identity
\[
\beta^{(n)}(y):= \ensuremath{\normalfont\textsc{skewer}}\left(y,\mathbf{N}^{(n)}_{\mathrm{cld}}, \xi^{(n)}_{\mathrm{cld}}\right)
= \frac{1}{n} \ensuremath{\normalfont\textsc{skewer}}\left(2 n y,\mathbf{D}^{(n)}, J_{\mathbf{D}^{(n)}}\right), \quad y\ge 0,
\]
where $J_{\mathbf{D}^{(n)}}$ is defined as in \eqref{eq:scaffolding-D}.
By Lemma~\ref{lem:pcrp0}, we may assume that $\beta^{(n)}(y) = \frac{1}{n} C^{(n)}(2 n y)$ with $C^{(n)}$
a $\mathrm{PCRP}^{(\alpha)}(0,\alpha)$ starting from $C^{(n)} (0) = \{(0,b^{(n)})\}$.
\begin{proof}[Proof of Lemma~\ref{lem:cv-clade}]
With notation as above, we shall prove that the rescaled process $\boldsymbol{\beta}^{(n)}:=(\beta^{(n)}(y),\,y\ge 0)$ converges to
an $\mathrm{SSIP}^{(\alpha)}(0)$-evolution $\boldsymbol{\beta}:=(\beta(y),\,y\ge 0)$ starting from $\{(0,b)\}$.
By Definition~\ref{defn:ip0} we can write
$\boldsymbol{\beta}= \ensuremath{\overline{\normalfont\textsc{skewer}}}(\mathbf{N}_{\mathrm{cld}}, \xi_{\mathrm{cld}})$, with
$\mathbf{N}_{\mathrm{cld}} = \ensuremath{\normalfont\textsc{clade}} (\mathbf{f}, \mathbf{N})$ and $\xi_{\mathrm{cld}}$ its associated scaffolding,
where $\mathbf{f}\sim {\tt BESQ}_{b} (-2\alpha)$ and $\mathbf{N}$ is a Poisson random measure on $[0,\infty)\times \mathcal{E}$ with intensity
$c_\alpha\mathrm{Leb}\otimes \Lambda^{(-2\alpha)}_{\mathtt{BESQ}}$, independent of $\mathbf{f}$.
Using Proposition~\ref{prop:cv-prm} and Lemma~\ref{lem:ud}, we have
$(\mathbf{N}^{(n)}, \xi^{(n)})\to (\mathbf{N}, \xi)$ and
$(\mathbf{f}^{(n)}, \zeta(\mathbf{f}^{(n)}))\to (\mathbf{f}, \zeta(\mathbf{f}))$ in distribution, independently.
Then it follows from Lemma~\ref{lem:T-} that this convergence also holds jointly with $T_{-\zeta(\mathbf{f}^{(n)})}(\xi^{(n)})\to T_{-\zeta(\mathbf{f})}(\xi)$.
As a consequence, we have $(\mathbf{N}_{\mathrm{cld}}^{(n)}, \xi_{\mathrm{cld}}^{(n)})\to (\mathbf{N}_{\mathrm{cld}}, \xi_{\mathrm{cld}})$ in distribution.
Fix any subsequence $(m_j)_{j\in \mathbb{N}}$.
With notation as above, consider the subsequence of triple-valued processes $(\mathbf{N}_{\mathrm{cld}}^{(m_j)},\xi_{\mathrm{cld}}^{(m_j)}, \|\boldsymbol{\beta}^{(m_j)}\|)_{j\in \mathbb{N}}$.
Each component of the triple is tight by Proposition~\ref{prop:cv-prm} and Lemma~\ref{lem:ud};
hence the sequence of triples is tight.
Therefore, we can extract a further subsequence $\left(\mathbf{N}_{\mathrm{cld}}^{(n_i)},\xi_{\mathrm{cld}}^{(n_i)},\|\boldsymbol{\beta}^{(n_i)}\|\right)_{i\in \mathbb{N}}$ that converges in distribution to a limit process $(\mathbf{N}_{\mathrm{cld}},\xi_{\mathrm{cld}},\widetilde{M})$.
Using Skorokhod representation, we may assume that this convergence holds a.s..
We shall prove that $\boldsymbol{\beta}^{(n_i)}$ converges to $\boldsymbol{\beta}$ a.s., from which the lemma
follows.
We stress that the limit $\widetilde{M}$
has the same law as the total mass process $\|\boldsymbol{\beta}\|$,
but at this stage it is not clear if they are indeed equal.
We will prove that $\widetilde{M}= \|\boldsymbol{\beta}\|$ a.s..
To this end, let us consider the contribution of the spindles with lifetime longer than $\rho>0$.
On the space
${\mathbb{R}_+\times \{f\in\mathcal{E}\colon \zeta(f)>\rho \}}$, $\mathbf{N}_{\mathrm{cld}}$ has a.s.\@ a finite number of atoms, say enumerated in chronological order by $(t_j ,f _j)_{ j\le K}$ with $K\in \mathbb{N}$.
Since $\mathbf{N}_{\mathrm{cld}}$ has no spindle with lifetime exactly equal to $\rho$,
by the a.s.\@ convergence $\mathbf{N}^{(n_i)}_{\mathrm{cld}}\to \mathbf{N}_{\mathrm{cld}}$, we may assume that
each $\mathbf{N}^{(n_i)}_{\mathrm{cld}}$ also has $K$ atoms $(t^{(n_i)}_j ,f^{(n_i)}_j)_{ j\le K}$ on
${\mathbb{R}_+\times \{f\in\mathcal{E}\colon \zeta(f)>\rho \}}$,
and, for every $j\le K$, that
\begin{equation}\label{eq:tjfj}
\lim_{i\to \infty} t^{(n_i)}_j = t_j, \quad \lim_{i\to \infty} \sup_{t\ge 0} \left|f^{(n_i)}_j (t)- f_j(t)\right| =0, \quad
\text{and}~ \lim_{i\to \infty} \zeta(f^{(n_i)}_j) = \zeta (f_j) \quad \text{a.s.}.
\end{equation}
Note that $ \zeta(f^{(n_i)}_j)=\Delta \xi^{(n_i)}_{\mathrm{cld}} (t^{(n_i)}_j) $.
Since $\xi^{(n_i)}_{\mathrm{cld}} \to \xi_{\mathrm{cld}}$ in $\mathbb{D}(\mathbb{R}_+, \mathbb{R})$, we deduce that
\begin{equation}\label{eq:xicld-tj}
\lim_{i\to \infty} \xi^{(n_i)}_{\mathrm{cld}} (t^{(n_i)}_j -)= \xi_{\mathrm{cld}}(t_j-) \quad \text{a.s..}
\end{equation}
By deleting all spindles whose lifetimes are smaller than $\rho$, we obtain from $\boldsymbol{\beta}^{(n_i)}$ an interval partition evolution
\[
\beta^{(n_i)}_{>\rho}(y) := \left\{ \left(M^{(n_i)} _{k-1} (y, \rho), M^{(n_i)} _{k} (y, \rho)\right), 1\le k\le K\right\} , \qquad y\ge 0,
\]
where $M^{(n_i)} _k (y, \rho) = \sum_{j\in [k]} f^{(n_i)}_j \left(y - \xi^{(n_i)}_{\mathrm{cld}} (t^{(n_i)}_j -)\right)$.
Define similarly $M _k (y, \rho)$ and $\beta_{>\rho}(y)$ from $\boldsymbol{\beta}$.
By \eqref{eq:xicld-tj} and \eqref{eq:tjfj}, for all $k\le K$,
\[
\lim_{i\to \infty} \sup_{y\ge 0} \left| M^{(n_i)} _k (y, \rho) - M _k (y, \rho)\right| =0\quad \text{a.s..}
\]
It follows that
\begin{equation}\label{eq:cv>rho}
\lim_{i\to \infty} \sup_{y\ge 0} d_H \left( \beta^{(n_i)}_{>\rho}(y) ,\beta_{>\rho}(y) \right) =0\quad \text{a.s..}
\end{equation}
In particular, for all $y,\rho>0$,
\[
\widetilde{M} (y)= \lim_{i\to \infty}
\|\beta^{(n_i)} (y)\| \ge \lim_{i\to \infty}
\|\beta^{(n_i)}_{>\rho} (y)\| = \|\beta_{>\rho}(y)\|, \quad \text{a.s..}
\]
Then monotone convergence leads to, for all $y>0$,
\[
\widetilde{M} (y)\ge \lim_{\rho\downarrow 0} \|\beta_{>\rho}(y)\| = \|\beta (y)\|, \quad \text{a.s..}
\]
Since both processes are continuous, a.s.\@ the inequality $\widetilde{M}(y)\ge \|\beta(y)\|$ holds for all $y\ge 0$ simultaneously. Moreover, since $\widetilde{M}$ and $\|\boldsymbol{\beta}\|$ also have the same law,
we conclude that $\widetilde{M}$ and $\|\boldsymbol{\beta}\|$ are indistinguishable.
Next, we shall show that $\boldsymbol{\beta}_{> \rho}$ approximates $\boldsymbol{\beta}$ arbitrarily closely as $\rho \downarrow 0$.
Write $M_{\le\rho}:= \|\boldsymbol{\beta}\|-\|\boldsymbol{\beta}_{> \rho}\|$.
For any $\varepsilon>0$, we can find a certain $\rho>0$ such that $\sup_{y\ge 0} M_{\le\rho}(y)<\varepsilon$.
Indeed, suppose by contradiction that this is not true, then
there exist two sequences $\rho_i \downarrow 0$ and $y_i\ge 0$, such that
$M_{\le \rho_i}(y_i)\ge\varepsilon$ for each $i\in \mathbb{N}$.
Since the extinction time of a clade is a.s.\@ finite, we may assume that (by extracting a subsequence)
$\lim_{i\to \infty} y_i = y\ge 0$.
By the continuity of $\boldsymbol{\beta}$, this yields that $\liminf_{i\to \infty} M_{\le\rho_i}(y)\ge\varepsilon$.
This is absurd, since we know by monotone convergence that $\lim_{i\to \infty} M_{\le \rho_i}(y)=0$.
With this specified $\rho$, since $d_{H} (\beta(y), \beta_{>\rho} (y))\le M_{\le \rho} (y)$ for each $y\ge 0$,
we have
\begin{equation}\label{eq:dH>rho}
\sup_{y\ge 0} d_{H} \left(\beta(y), \beta_{>\rho} (y)\right)<\varepsilon.
\end{equation}
Using \eqref{eq:cv>rho} and the uniform convergence $\|\boldsymbol{\beta}^{(n_i)}\| \to \widetilde{M}=\|\boldsymbol{\beta}\|$, we deduce that the process $ M^{(n_i)}_{\le \rho} := \|\boldsymbol{\beta}^{(n_i)}\|-\|\boldsymbol{\beta}^{(n_i)}_{> \rho}\|$ converges to
$M_{\le\rho}$ uniformly. Then, for all $i$ large enough,
we also have
\[
\sup_{y\ge 0}d_{H} \left(\beta^{(n_i)}(y), \beta^{(n_i)}_{>\rho} (y)\right)\le \sup_{y\ge 0} M^{(n_i)}_{\le \rho}(y)<2\varepsilon.
\]
Combining this inequality with \eqref{eq:cv>rho} and \eqref{eq:dH>rho},
we deduce that
\[
\limsup_{i\to \infty} \sup_{y\ge 0} d_{H}\left(\beta^{(n_i)}(y), \beta(y)\right) \le 3 \varepsilon.
\]
As $\varepsilon$ is arbitrary, we conclude that $\boldsymbol{\beta}^{(n_i)}$ converges to $\boldsymbol{\beta}$ a.s.\@ under the uniform topology, completing the proof.
\end{proof}
To extend to a general initial state, let us record the following result that characterises the convergence under $d_H$.
\begin{lemma}[{\cite[Lemma~4.3]{IPPAT}}]\label{lem:dH} Let $\beta,\,\beta_n\in\mathcal{I}_H$, $n\ge 1$. Then $d_H(\beta_n,\beta)\rightarrow 0$ as $n\rightarrow\infty$
if and only if
\begin{equation}\label{crit1}\forall_{(a,b)\in\beta}\ \exists_{n_0\ge 1}\ \forall_{n\ge n_0}\ \exists_{(a_n,b_n)\in\beta_n}\ a_n\rightarrow a\ \mbox{and}\ b_n\rightarrow b
\end{equation}
and
\begin{equation}\label{crit2}\forall_{(n_k)_{k\ge 1}\colon n_k\rightarrow\infty}\ \forall_{(c_k,d_k)\in\beta_{n_k},\,k\ge 1\colon d_k\rightarrow d\in(0,\infty],\,c_k\rightarrow c\neq d}\ (c,d)\in\beta.
\end{equation}
\end{lemma}
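For finite interval partitions whose blocks tile $[0,\|\beta\|]$, the criteria of Lemma~\ref{lem:dH} can be explored numerically. The following minimal sketch assumes (an assumption on our part, as $d_H$ was fixed earlier in the paper) that $d_H(\beta,\gamma)$ is the Hausdorff distance between the complements of the blocks in $[0,\|\beta\|]$ and $[0,\|\gamma\|]$; for finite partitions whose blocks tile these intervals, the complements reduce to the finite sets of block endpoints.
\begin{verbatim}
def complement(beta):
    """Endpoint set of beta; equals the complement of the blocks in
    [0, ||beta||] when the finitely many blocks tile the interval."""
    pts = {0.0}
    for a, b in beta:
        pts.update((a, b))
    return sorted(pts)

def d_H(beta, gamma):
    A, B = complement(beta), complement(gamma)
    gap = lambda x, S: min(abs(x - y) for y in S)
    return max(max(gap(a, B) for a in A), max(gap(b, A) for b in B))

# merging two blocks of mass 1/2 into one block of mass 1:
print(d_H([(0.0, 0.5), (0.5, 1.0)], [(0.0, 1.0)]))   # -> 0.5
\end{verbatim}
For instance, $d_H$ between $\{(0,\frac12),(\frac12,1+\frac1n)\}$ and $\{(0,\frac12),(\frac12,1)\}$ equals $\frac1n$, in accordance with \eqref{crit1} and \eqref{crit2}.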
\begin{proof}[Proof of Theorem~\ref{thm:crp-ip-0}]
For the case $\beta(0)= \emptyset$, by convention $\beta(y) =\emptyset$ for every $y\ge 0$.
Then the claim is a simple consequence of the convergence of the total mass processes.
So we may assume that $\beta(0)\ne \emptyset$. By Definition~\ref{defn:ip0}, we can write
$\boldsymbol{\beta} = \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \beta(0)} \boldsymbol{\beta}_U$,
where each process $\boldsymbol{\beta}_U:=(\beta_U(y), y\ge 0)\sim \mathtt{Clade}^D_{\mathrm{Leb}(U)}(\alpha)$, independent of the others.
For any $\varepsilon>0$, a.s.\@ we can find finitely many intervals, say $U_1, U_2, \ldots, U_k\in \beta(0)$, listed from left to right,
such that
\begin{equation}\label{eq:R_k}
\sup_{y\ge 0} R_k(y) <\varepsilon, ~\text{where}~R_k(y) := \|\beta(y)\| - \sum_{i=1}^k \|\beta_{U_i} (y)\| .
\end{equation}
Indeed, since the process $\|\boldsymbol{\beta}\|\sim {\tt BESQ}_{\|\beta(0)\|}(0)$, $\|\boldsymbol{\beta}_{U_i}\|\sim
{\tt BESQ}_{\mathrm{Leb}(U_i)}(0)$ for each $i\in [k]$, and $R_k$ is independent of the family $\{\boldsymbol{\beta}_{U_i}, i\in [k]\}$,
we deduce from Proposition~\ref{prop:alphatheta} that
$R_k \sim {\tt BESQ}_{r_k}(0)$ with $r_k:= \|\beta(0)\| - \sum_{i=1}^k\mathrm{Leb}(U_i)$.
Hence, choosing $k$ large enough that $r_k$ is sufficiently small, we obtain \eqref{eq:R_k}.
For each $n\in \mathbb{N}$, we similarly assume that $C^{(n)}(y) = \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in C^{(n)}(0)} C^{(n)}_U(y), y\ge 0$,
where each process $(C^{(n)}_U(y), y\ge 0)\sim \mathrm{PCRP}_{\mathrm{Leb}(U)}^{(\alpha)}(0,\alpha)$, independent of the others.
Due to the convergence of the initial state $\frac{1}{n} C^{(n)}(0) \to \beta(0)$ and
by Lemma~\ref{lem:dH} we can find for each $i\le k$ a sequence
$U^{(n)}_i = (a^{(n)}_i, b^{(n)}_i) \in C^{(n)}(0)$, $n\in \mathbb{N}$,
such that $a^{(n)}_i/n\to \inf U_i$ and $b^{(n)}_i/n\to \sup U_i $.
In particular, we have ${\rm Leb}(U^{(n)}_i)/n \to {\rm Leb}(U_i)$ for every $i\le k$.
Then we may assume by Lemma~\ref{lem:cv-clade} that, for all $i\le k$,
\begin{equation}\label{eq:beta_i}
\lim_{n\to \infty} \sup_{y\ge 0}d_{H} \left(\frac{1}{n} C_{U^{(n)}_i}^{(n)}(2 n y), \beta_{U_i}(y) \right) = 0\quad \text{a.s..}
\end{equation}
Moreover, it is easy to see that the total mass of a $\mathrm{PCRP}^{(\alpha)}(0,\alpha)$ is a Markov chain described by $\pi (0)$ in \eqref{eq:pi}.
By independence, the rescaled process
\[
R_k^{(n)}(y) := \frac{1}{n}\Big\|C^{(n)}(2 n y)\Big\| -\frac{1}{n} \sum_{i=1}^k \Big\|C^{(n)}_{U^{(n)}_i} (2 n y)\Big\|,\quad y\ge 0,
\]
has the law of $\pi^{(n)}_{r^{(n)}_k} (0)$ as in \eqref{eq:pi-n}, where
$r^{(n)}_k:= \|C^{(n)}(0) \|- \sum_{i=1}^k {\rm Leb}(U^{(n)}_i)$.
By Lemma~\ref{lem:ud} and Skorokhod representation, we may also assume
$\sup_{y\ge 0} |R_k^{(n)}(y) - R_k(y)|\to 0$ a.s..
An easy estimate shows that
\[
d_{H} \left(\frac{1}{n} C^{(n)} (2 n y), \beta(y) \right)
\le 2 R_k^{(n)}(y) +2 R_k(y) + \sum_{i=1}^k d_{H} \left(\frac{1}{n} C^{(n)}_{U^{(n)}_i} (2 n y), \beta_{U_i}(y)\right).
\]
As a result, combining \eqref{eq:R_k} and \eqref{eq:beta_i}, we have
\[
\limsup_{n\to \infty} \sup_{y\ge 0}d_{H} \left(\frac{1}{n} C^{(n)}(2 n y), \beta(y) \right)
\le 4\varepsilon \quad \text{a.s.}.
\]
By the arbitrariness of $\varepsilon$ we deduce the claim.
\end{proof}
\subsection{The scaling limit of a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\alpha)$}\label{sec:cv-thetaalpha}
\begin{proposition}
[Convergence of a $\mathrm{PCRP}^{(\alpha)}( \theta_1,\alpha)$]
\label{prop:crp-ip-theta}
Let $\theta_1\ge 0$.
For $n\in \mathbb{N}$, let $(C^{(n)}(y), y\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)}(\theta_1, \alpha)$
starting from $C^{(n)}(0)\in \mathcal{C}$.
Suppose that the interval partition $ \frac{1}{n} C^{(n)}(0)$ converges in distribution to
$\beta(0)\in \mathcal{I}_H$ as $n\to \infty$, under $d_H$.
Then the process $(\frac{1}{n} C^{(n)}(2 n y), y\ge 0)$ converges in distribution to an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution starting from $\beta(0)$, as $n\to \infty$, in $\mathbb{D}(\mathbb{R}_+,\mathcal{I}_{H})$ under the Skorokhod topology.
\end{proposition}
\begin{proof}
We only need to prove the case where $\theta_1>0$ and $C^{(n)}(0)=\emptyset$ for every $n\in \mathbb{N}$; combining this special case with Theorem~\ref{thm:crp-ip-0} then leads to the general result.
The arguments are very similar to those in the proof of Lemma~\ref{lem:cv-clade}; we only sketch the strategy here and omit the details.
Fix $j\in \mathbb{N}$. Let $(\mathbf{N}^{(n)},\xi^{(n)},\ell^{(n)})_{n\in \mathbb{N}}$ be the sequence given in
Proposition~\ref{prop:cv-prm}.
For each $n\in \mathbb{N}$, by using Theorem~\ref{thm:jccp}, we may write
\[
\beta^{(n)} (y):=\frac{1}{n} C^{(n)}(2 n y) = \ensuremath{\normalfont\textsc{skewer}}\left(y, \mathbf{N}^{(n)}\big|_{[0,T^{(n)}_{-j\theta_1/\alpha})}, j+ \xi^{(n)}_{\theta_1 }\big|_{[0,T^{(n)}_{-j\theta_1/\alpha})}\right), \quad y\in [0,j],
\]
where
$
\xi^{(n)}_{\theta_1}:= \xi^{(n)} + (1-\alpha/\theta_1) \ell^{(n)}
$
and $T^{(n)}_{-j\theta_1/\alpha}:=T_{-j\theta_1/\alpha}(\xi^{(n)})=T_{-j}(\xi_{\theta_1}^{(n)})$.
By Proposition~\ref{prop:cv-prm} and Skorokhod representation, we may assume that
$(\mathbf{N}^{(n)},\xi^{(n)} ,\xi^{(n)}_{\theta_1})$ converges a.s.\@ to $(\mathbf{N},\mathbf{X}_\alpha,\mathbf{X}_{\theta_1})$.
Then it follows from Lemma~\ref{lem:T-} that $T^{(n)}_{-j\theta_1/\alpha}\to T_{-j\theta_1/\alpha}(\mathbf{X}_\alpha)=T_{-j}(\mathbf{X}_{\theta_1})$, cf. \eqref{eq:Talphatheta}.
Next, in the same way as in the proof of Lemma~\ref{lem:cv-clade},
we consider for any $\rho>0$ the interval partition evolution $\boldsymbol{\beta}^{(n)}_{>\rho}$ associated with the spindles of $\boldsymbol{\beta}^{(n)}$ with lifetime longer than $\rho$.
By proving that for any $\rho>0$,
$(\beta^{(n)}_{>\rho}(y), y\in [0,j]) \to (\beta_{>\rho}(y), y\in [0,j])$ as $n\to \infty$,
and that $\|\boldsymbol{\beta}^{(n)}\|\!-\! \| \boldsymbol{\beta}^{(n)}_{>\rho}\| \to 0$ as $\rho\downarrow 0$ uniformly for all $n\in \mathbb{N}$,
we deduce the convergence of $(\beta^{(n)}(y), y\in [0,j])$.
This leads to the desired statement.
\end{proof}
\section{Convergence of the three-parameter family}\label{sec:pcrp-ssip}
In this section we consider the general three-parameter family $\mathrm{PCRP}^{(\alpha)}(\theta_1, \theta_2)$ with $\theta_1,\theta_2\ge 0$.
In Section~\ref{sec:cvproof} we establish a related convergence result, Theorem~\ref{thm:crp-ip-bis}, for the processes killed upon hitting $\emptyset$, with the limiting diffusion being an $\mathrm{SSIP}_{\!\dagger}$-evolution introduced in \cite{ShiWinkel-1}. Using Theorem~\ref{thm:crp-ip-bis}, we obtain a pseudo-stationary distribution for an $\mathrm{SSIP}_{\!\dagger}$-evolution in Proposition~\ref{prop:ps-theta1theta2}, which enables us to introduce an excursion measure and thereby construct an $\mathrm{SSIP}$-evolution from excursions, for suitable parameters, in Sections~\ref{sec:exc} and~\ref{sec:rec} respectively.
In Section~\ref{sec:results}, we finally complete the proofs of Theorem~\ref{thm:crp-ip} and the other results stated in the \hyperref[sec:intro]{introduction}.
\subsection{Convergence when $\emptyset$ is absorbing}\label{sec:cvproof}
If we choose any table in a $\mathrm{PCRP}^{(\alpha)}(\theta_1, \theta_2)$, then its size evolves as a $\pi(-\alpha)$-process until the first hitting time of zero; before the deletion of this table, the tables to its left form a $\mathrm{PCRP}^{(\alpha)}(\theta_1, \alpha)$ and the tables to its right a $\mathrm{PCRP}^{(\alpha)}(\alpha, \theta_2)$.
This observation suggests making such decompositions and using the convergence results obtained in the previous section.
A similar idea has been used in \cite{ShiWinkel-1} for the construction of an $\mathrm{SSIP}$-evolution with absorption in $\emptyset$, abbreviated as $\mathrm{SSIP}_{\!\dagger}$-evolution, which we shall now recall.
Specifically, define a function $\phi\colon \mathcal{I}_H \to \big(\mathcal{I}_H \times (0,\infty)\times \mathcal{I}_H\big) \cup \{(\emptyset,0,\emptyset)\}$ by setting $\phi(\emptyset):= (\emptyset,0,\emptyset)$ and, for $\beta\ne \emptyset$, \vspace{-0.02cm}
\begin{equation}\label{eq:phi}
\phi(\beta) := \big( \beta \cap (0,\inf U), \mathrm{Leb}(U), \beta \cap (\sup U, \|\beta\|) - \sup U \big), \vspace{-0.02cm}
\end{equation}
where $U$ is the longest interval in $\beta$; we take $U$ to be the leftmost one if this is not unique.
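For concreteness, here is a minimal sketch of $\phi$ (in our own notation, for a \emph{finite} interval partition stored as the left-to-right list of its blocks); Python's \texttt{max} returns the first maximiser, which matches the leftmost-tie convention above.
\begin{verbatim}
def phi(beta):
    """Split beta around its longest block, as in (eq:phi)."""
    if not beta:
        return ([], 0.0, [])
    U = max(beta, key=lambda ab: ab[1] - ab[0])  # leftmost longest block
    i = beta.index(U)
    left = beta[:i]                          # beta restricted to (0, inf U)
    right = [(a - U[1], b - U[1]) for a, b in beta[i + 1:]]  # shifted back
    return (left, U[1] - U[0], right)

print(phi([(0.0, 1.0), (1.0, 4.0), (4.0, 5.0)]))
# -> ([(0.0, 1.0)], 3.0, [(0.0, 1.0)])
\end{verbatim}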
\begin{definition}[$\mathrm{SSIP}$-evolution with absorption in $\emptyset$, Definition 1.3 of \cite{ShiWinkel-1}] \label{defn:ipe}
$\!\!$Consider $\theta_1\ge 0$, $\theta_2 \ge 0$ and $\gamma\in \mathcal{I}_H$.
Set $T_0 := 0$ and $\beta(0):= \gamma$.
For $k\ge 0$, suppose by induction that we have obtained $(\beta(t), t\le T_k)$.
\begin{itemize}
\item If $\beta(T_k)= \emptyset$, then we stop and set
$T_{i}:= T_k$ for every $i\ge k$ and $\beta(t):=\emptyset $ for $t\ge T_k$.
\item If $\beta(T_k)\!\ne\! \emptyset$, denote $(\beta^{(k)}_1,m^{(k)},\beta^{(k)}_2):= \phi(\beta(T_k))$. Conditionally on the history, let $\mathbf{f}^{(k)}\sim {\tt BESQ}_{m^{(k)}}(-2 \alpha)$ and $\boldsymbol{\gamma}^{(k)}_i=(\gamma_i^{(k)}(s),\,s\ge 0)$ an $\mathrm{SSIP}^{(\alpha)}( \theta_i)$-evolution starting from $\beta_i^{(k)}$, $i=1,2$,
with $\mathbf{f}^{(k)}, \boldsymbol{\gamma}_1^{(k)},\boldsymbol{\gamma}_2^{(k)}$ independent. Set $T_{k+1}:= T_k + \zeta(\mathbf{f}^{(k)})$.
We define
\[
\beta(t) := \gamma^{(k)}_1(t\!-\! T_k) \star\left\{\left( 0,\mathbf{f}^{(k)}(t\!-\!T_k)\right)\right\} \star {\rm rev}\big(\gamma^{(k)}_2(t\!-\!T_k)\big) , \qquad t\in (T_k, T_{k+1}].
\]
\end{itemize}
We refer to $(T_k)_{k\ge1}$ as the \emph{renaissance levels} and $T_{\infty}:= \sup_{k\ge 1} T_k \in [0, \infty]$ as the \emph{degeneration level}.
If $T_{\infty}< \infty$, then by convention we set
$\beta(t) :=\emptyset$ for all $t\ge T_{\infty}$.
Then the process $\boldsymbol{\beta}:=(\beta(t), t\ge 0)$ is called an \emph{$\mathrm{SSIP}_{\!\dagger}^{(\alpha)}( \theta_1, \theta_2)$-evolution} starting from $\gamma$.
\end{definition}
Note that $\emptyset$ is an absorbing state of an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}( \theta_1, \theta_2)$-evolution by construction. Let us summarise a few results obtained in \cite[Theorem 1.4, Corollary 3.7]{ShiWinkel-1}.
\begin{theorem}[\cite{ShiWinkel-1}]\label{thm:IvaluedMP}
For $\theta_1,\theta_2\ge 0$, let $(\beta(t),t\ge 0)$ be an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution, with renaissance levels $(T_k,k\ge 0)$ and degeneration level $T_{\infty}$. Set $\theta= \theta_1+\theta_2-\alpha$.
\begin{longlist}
\item (Hunt property) $(\beta(t),t\ge 0)$ is a Hunt process with continuous paths in $(\mathcal{I}_H ,d_{H})$.
\item \label{item:mass} (Total-mass)
$(\|\beta(t)\|,t\ge 0)$ is a ${\tt BESQ}_{\|\beta(0)\|}(2\theta)$ killed at its first hitting time of zero.
\item (Degeneration level) If $\theta\ge 1$ and $\beta(0)\ne \emptyset$, then a.s.\ $T_{\infty}=\infty$ and $\beta(t)\ne \emptyset$ for every $t\ge 0$;
if $\theta<1$, then a.s.\ $T_{\infty}<\infty$ and $\lim_{t\uparrow T_{\infty}} d_H(\beta(t), \emptyset) =0$.
\item (Self-similarity) For any $c>0$, the process $(c \beta(c^{-1} t), t\ge 0)$
is an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution starting from $c \beta(0)$.
\item When $\theta_2=\alpha$, the $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1,\alpha)$-evolution $(\beta(t), t\ge 0)$ is an $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolution killed at its first hitting time of $\emptyset$.
\end{longlist}
\end{theorem}
\begin{theorem}\label{thm:crp-ip-bis}
Let $\theta_1,\theta_2\ge 0$ and $\theta=\theta_1\!+\!\theta_2\!-\!\alpha$.
For $n\in \mathbb{N}$, let $(C^{(n)}(t), t\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ starting from $C^{(n)}(0)= \gamma^{(n)}$ and killed at $\zeta^{(n)} =\inf\{ t\ge 0\colon C^{(n)}(t) =\emptyset \}$. Let $(\beta(t),t\!\ge\! 0)$ be an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1,\theta_2)$-evolution starting from $\gamma$ and
$\zeta \!=\!\inf\{ t\!\ge\! 0\colon \beta(t)\!=\!\emptyset\}$.
Suppose that $ \frac{1}{n} \gamma^{(n)}$ converges in distribution to
$\gamma$ as $n\to \infty$, under $d_H$. Then the following convergence holds in $\mathbb{D}(\mathbb{R}_+,\mathcal{I}_H)$:
\[
\Big(\frac{1}{n} C^{(n)}\big((2 n t)\wedge\zeta^{(n)}\big), t\ge 0 \Big)
\underset{n\to \infty}{\longrightarrow} (\beta(t), t\ge 0) , \quad \text{in distribution}.
\]
Moreover, $\zeta^{(n)}/2n$ converges to $\zeta$ in distribution jointly.\pagebreak[2]
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:crp-ip-bis}]
We shall construct a sequence of $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$ on a sufficiently large probability space by using $\mathrm{PCRP}^{(\alpha)} (\theta_1,\alpha)$, $\mathrm{PCRP}^{(\alpha)} (\alpha,\theta_2)$ and up-down chains of law $\pi_k(-\alpha)$ defined in \eqref{eq:pi}; the idea is similar to Definition~\ref{defn:ipe}.
Then the convergences obtained in Proposition~\ref{prop:crp-ip-theta} and Lemmas~\ref{lem:ud}--\ref{lem:ud-bis} permit us to conclude.
By assumption, $\frac{1}{n} C^{(n)}(0)$ converges in distribution to
$\gamma\in \mathcal{I}_H$ under the metric $d_H$. Setting aside the degenerate case $\gamma=\emptyset$,
we may use Skorokhod representation and Lemma~\ref{lem:dH} to find $\big(C^{(n)}_1(0), m^{(n)}(0),C^{(n)}_2(0)\big)$ for all $n$ sufficiently large, with $m^{(n)}(0)\ge 1$ and $C^{(n)}_1(0),C^{(n)}_2(0)\in \mathcal{C}$, such that $C^{(n)}_1(0)\star \{(0, m^{(n)}(0))\}\star C^{(n)}_2(0)= C^{(n)}(0)$, and that, as $n\to \infty$,
\begin{equation}\label{eq:intial}
\Big(\frac{1}{n} C_1^{(n)}(0), \frac{1}{n} m^{(n)}(0), \frac{1}{n} C_2^{(n)}(0)\Big)\to (\gamma_1, m,\gamma_2):=\phi(\gamma), \quad \text{a.s.,}
\end{equation}
where $\phi$ is the function defined by \eqref{eq:phi}.
For every $n\in \mathbb{N}$, let $\mathbf{f}^{(n,0)}\sim \pi_{m^{(n)}(0)}(-\alpha)$ be as in \eqref{eq:pi}, $\boldsymbol{\gamma}^{(n,0)}_1$ a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \alpha)$ starting from $ C_1^{(n)}(0)$
and $\boldsymbol{\gamma}^{(n,0)}_2$ a $\mathrm{PCRP}^{(\alpha)} (\alpha, \theta_2)$ starting from $ C_2^{(n)}(0)$;
the three processes $\boldsymbol{\gamma}_1^{(n,0)},\mathbf{f}^{(n,0)}$ and $\boldsymbol{\gamma}_2^{(n,0)}$ are independent.
By Proposition~\ref{prop:crp-ip-theta}, Lemma~\ref{lem:ud} and Skorokhod representation, we may assume that a.s.\
\begin{equation}\label{eq:cvT1}
\Big(\frac{1}{n} \gamma_1^{(n,0)}(2n~\cdot), \frac{1}{n}\mathbf{f}^{(n,0)}(2n~\cdot),\frac{\zeta(\mathbf{f}^{(n,0)})}{2n}, \frac{1}{n} \gamma_2^{(n,0)}(2n~\cdot)\Big)\to
\Big(\boldsymbol{\gamma}_1^{(0)},\mathbf{f}^{(0)}, \zeta(\mathbf{f}^{(0)}),\boldsymbol{\gamma}_2^{(0)}\Big).
\end{equation}
The limiting triple process $(\boldsymbol{\gamma}_1^{(0)},\mathbf{f}^{(0)}, \boldsymbol{\gamma}_2^{(0)})$ starting from $(\gamma_1, m,\gamma_2)$ can serve as that in the construction of $\boldsymbol{\beta}$ in Definition~\ref{defn:ipe}.
Write $T_1=\zeta(\mathbf{f}^{(0)})$ and $T_{n,1}= \zeta(\mathbf{f}^{(n,0)})$; then
\[
\frac{1}{n} \gamma_1^{(n,0)}( T_{n,1})\star \frac{1}{n} \gamma_2^{(n,0)}( T_{n,1}) \to \gamma_1^{(0)}(T_{1})\star \gamma_2^{(0)}(T_{1})=:\beta(T_1), \quad \text{a.s..}
\]
With $\phi$ the function defined in \eqref{eq:phi}, set
\[
( C_1^{(n,1)}, m^{(n,1)}, C_2^{(n,1)})
:=\phi\left(\gamma^{(n,0)}_1(T_{n,1})\star \gamma^{(n,0)}_2(T_{n,1})\right).
\]
Since $T_1$ is independent of $(\boldsymbol{\gamma}_1^{(0)},\boldsymbol{\gamma}_2^{(0)})$, $\beta(T_1)$ a.s.\@ has a unique largest block.
By this observation and \eqref{eq:cvT1} we have
$ \frac{1}{n} ( C_1^{(n,1)}, m^{(n,1)}, C_2^{(n,1)}) \to \phi(\beta(T_{1}))$, since $\phi$ is continuous at any interval partition whose longest block is unique.
For each $n\ge 1$, if $( C_1^{(n,1)}, m^{(n,1)}, C_2^{(n,1)}) =(\emptyset, 0,\emptyset)$, then for every $i\ge 1$, we set $T_{n,i}:= T_{n,1}$ and
$
\Big( \boldsymbol{\gamma}^{(n,i)}_1 ,\mathbf{f}^{(n,i)} ,\boldsymbol{\gamma}^{(n,i)}_2\Big):\equiv (\emptyset,0, \emptyset).$
If $( C_1^{(n,1)}, m^{(n,1)}, C_2^{(n,1)}) \ne (\emptyset, 0,\emptyset)$, then conditionally on the history, let $\mathbf{f}^{(n,1)}\sim \pi_{m^{(n,1)}}(-\alpha)$, and consider $\boldsymbol{\boldsymbol{\gamma}}^{(n,1)}_1$, a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\alpha)$ starting from $C_1^{(n,1)}$, and $\boldsymbol{\gamma}^{(n,1)}_2$, a $\mathrm{PCRP}^{(\alpha)} (\alpha,\theta_2)$ starting from $C_2^{(n,1)}$, independent of each other. Set $T_{n,2} = T_{n,1} +\zeta(\mathbf{f}^{(n,1)})$.
Again, by Proposition~\ref{prop:crp-ip-theta}, Lemma~\ref{lem:ud} and Skorokhod representation, we may assume that a similar a.s.\ convergence as in \eqref{eq:cvT1} holds for $(\boldsymbol{\gamma}_1^{(n,1)},\mathbf{f}^{(n,1)},\zeta(\mathbf{f}^{(n,1)}),\boldsymbol{\gamma}_2^{(n,1)})$.
By iterating arguments above, we finally obtain for every $n\ge 1$ a sequence of processes $(\boldsymbol{\gamma}_1^{(n,i)},\mathbf{f}^{(n,i)}, \boldsymbol{\gamma}_2^{(n,i)})_{i\ge 0}$ with renaissance levels $(T_{n,i})_{i\ge 0}$, such that, inductively, for every $k\ge 0$, the following a.s.\@ convergence holds:
\begin{equation}\label{eq:cv-ik}
\Big(\frac{1}{n} \gamma_1^{(n,k)}\!(2n~\cdot), \frac{1}{n}\mathbf{f}^{(n,k)}\!(2n~\cdot), \frac{\zeta(\mathbf{f}^{(n,k)})}{2n}, \frac{1}{n} \gamma_2^{(n,k)}\!(2n~\cdot)\Big)
\to \Big(\boldsymbol{\gamma}_1^{(k)},\mathbf{f}^{(k)},\zeta(\mathbf{f}^{(k)}), \boldsymbol{\gamma}_2^{(k)}\Big).
\end{equation}
Using the limiting processes $\Big(\boldsymbol{\gamma}_1^{(k)},\mathbf{f}^{(k)}, \boldsymbol{\gamma}_2^{(k)}\Big)_{k\ge 0}$, we build according to Definition~\ref{defn:ipe} an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution $\boldsymbol{\beta}=(\beta(t), t\ge 0)$, starting from $\gamma$, with renaissance levels $T_k = \sum_{i=0}^{k-1}\zeta(\mathbf{f}^{(i)} )$ and $T_{\infty}=\lim_{k\to \infty} T_k$.
Then for every $t\ge 0$ and $k\in \mathbb{N}$, on the event $\{T_{k} >t \}$, \eqref{eq:cv-ik} yields the a.s.\@ convergence
$( \frac{1}{n} C^{(n)} (2n s) , s\le t ) \to ( \beta (s) , s\le t )$.
When $\theta\ge 1$, since the event
$\{T_{\infty}=\infty\}=\bigcap_{t\in \mathbb{N}} \bigcup_{k\in \mathbb{N}} \{T_{k} >t \}$ has probability one by Theorem~\ref{thm:IvaluedMP}, the convergence in Theorem~\ref{thm:crp-ip-bis} holds a.s..
We now turn to the case $\theta<1$, where we have by Theorem~\ref{thm:IvaluedMP} that $T_{\infty}<\infty$ a.s.\
and that, for any $\varepsilon >0$, there exists $K\in \mathbb{N}$ such that
\begin{equation}\label{eq:cv-1}
\mathbb{P} \Big(\sup_{t\ge T_K} \|\beta(t)\|>\varepsilon \Big) <\varepsilon\quad\mbox{and}\quad\mathbb{P}\big(T_\infty>T_K+\varepsilon\big)<\varepsilon.
\end{equation}
For each $n\in \mathbb{N}$, consider the concatenation
\begin{equation*}
C^{(n)}\!(t)\!=\! \begin{cases}
\!\gamma^{\!(n,i)}_1\!(t\!-\! T_{n,i})\!\star\! \big\{\!\big(0, \mathbf{f}^{(n,i)}\!(t\!-\!T_{n,i})\big)\!\big\} \!\star\!\gamma^{\!(n,i)}_2\!(t\!-\!T_{n,i}) ,\! &t\!\in\! [T_{n,i}, T_{n,i+1}), i\!\le\! K\!\!-\!1,\\
\!\widetilde{C}^{(n)}(t\!-\!T_{n,K}),& t\ge T_{n,K},
\end{cases}
\end{equation*}
where $\widetilde{C}^{(n)}$ is a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$ starting from $ C^{(n)}(T_{n,K}-)$ and killed at $\emptyset$, independent of the history.
Then $C^{(n)}$ is a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$ starting from $C^{(n)}(0)$.
We shall next prove that its rescaled process converges to $(\beta(t),t\ge 0)$ in probability, which completes the proof.
By Lemmas~\ref{lem:ud}--\ref{lem:ud-bis}, under the locally uniform topology
\[
\Big( \frac{1}{n}\|\widetilde{C}^{(n)} (2n\cdot)\|, \frac{1}{2n} \zeta (\widetilde{C}^{(n)}) \Big)
\underset{n\to\infty}{\to}
\Big(\|\beta(\cdot+ T_K)\|, \zeta\big(\beta(\cdot+ T_K)\big) \Big) \quad \text{in distribution}.
\]
By the convergence \eqref{eq:cv-ik}, there exists $N\in \mathbb{N}$ such that for every $n>N$, we have
\begin{equation}\label{eq:cv-2}
\mathbb{P}\bigg(\sup_{s\in [0, T_K]} d_{H} \Big(\frac{1}{n}C^{(n)} (2ns), \beta(s)\Big) >\varepsilon \bigg)<\varepsilon\quad\mbox{and}\quad
\mathbb{P}\bigg(\Big|\frac{1}{2n}T_{n,K}-T_K\Big|>\varepsilon\bigg)<\varepsilon.
\end{equation}
Furthermore, by the convergence of $\frac{1}{n}\|\widetilde{C}^{(n)}\|$, there exists $\widetilde{N}\in \mathbb{N}$
such that for every $n>\widetilde{N}$,
\begin{equation}\label{eq:cv-3}
\mathbb{P}\bigg( \sup_{s\ge 0}\frac{1}{n}\|\widetilde{C}^{(n)}(s)\|> \varepsilon\bigg) <\varepsilon\quad\mbox{and}\quad
\mathbb{P}\bigg(\Big|\frac{1}{2n}\zeta(\widetilde{C}^{(n)})-\zeta\big(\beta(\cdot+T_K)\big)\Big|>\varepsilon\bigg)<\varepsilon.
\end{equation}
Combining \eqref{eq:cv-1} and \eqref{eq:cv-3}, for every $n>\widetilde{N}$, we have
\[
\mathbb{P} \bigg(\! \sup_{s\in [0, \infty)} d_{H} \Big(\frac{1}{n}\widetilde{C}^{(n)} (2ns), \beta(s+T_K)\Big)>3\varepsilon\bigg)\le 3\varepsilon.
\]
Together with \eqref{eq:cv-2}, this leads to the desired convergence in probability.
\end{proof}
\subsection{Pseudo-stationarity of $\mathrm{SSIP}_{\!\dagger}$-evolutions}
\begin{proposition}[Pseudo-stationary distribution of an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution]\label{prop:ps-theta1theta2}
For $\theta_1, \theta_2\ge 0$ and $\theta:=\theta_1+\theta_2-\alpha$, let $(Z(t),\,t\ge 0)$ be a ${\tt BESQ} (2 \theta)$ killed at zero with $Z(0)>0$, independent of $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}( \theta_1,\theta_2)$.
Let $(\beta(t),\, t\ge 0)$ be an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution starting from $Z(0) \bar\gamma$.
Then, for any fixed $t\ge 0$, $\beta(t)$ has the same distribution as $Z(t) \bar\gamma$.
\end{proposition}
Analogous results for $\mathrm{SSIP}^{(\alpha)}(\theta_1)$-evolutions have been obtained in \cite{Paper1-1,IPPAT}; however, the strategy used in their proofs does not easily apply to our three-parameter model.
We shall use a completely different method, which crucially relies on the discrete approximation by $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ in Theorem~\ref{thm:crp-ip-bis}. It is easy to see that the total mass of a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ evolves according to a Markov chain defined by $\pi(\theta)$ as in \eqref{eq:pi}, with $\theta=\theta_1+\theta_2-\alpha$.
Conversely, given any $C (0)\in \mathcal{C}$ and $Z\sim \pi_{\|C(0)\|} (\theta)$, we can embed a process $(C (t),\, t\ge 0)\sim \mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$, starting from $C (0)$, such that its total-mass evolution
is $Z$. More precisely, in the language of the Chinese restaurant process, at each jump time when $Z$ increases by one, add a customer according to the seating rule in Definition~\ref{defn:oCRP}; and whenever $Z$ decreases by one, perform a down-step, i.e.\@ one uniformly chosen customer leaves. It is easy to check that this process indeed satisfies the definition of $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$ in the introduction.
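The following sketch makes this embedding concrete. The weights in \texttt{up\_step} are our reading of the seating rule of Definition~\ref{defn:oCRP}: a table of size $m$ attracts the new customer with weight $m-\alpha$, each gap between two adjacent tables carries weight $\alpha$, and the leftmost and rightmost gaps carry weights $\theta_1$ and $\theta_2$. These weights should be treated as an assumption here, since the rule is not restated in this section; we also leave aside the behaviour at the empty state, as the processes considered below are killed at $\emptyset$.
\begin{verbatim}
import random

def up_step(tables, alpha, th1, th2):
    """Seat one customer; `tables` lists the table sizes left to right."""
    k = len(tables)
    gaps = [th1] + [alpha] * max(k - 1, 0) + [th2]  # the k+1 gaps (k >= 1)
    moves = [("gap", i) for i in range(len(gaps))] \
          + [("table", i) for i in range(k)]
    weights = gaps + [m - alpha for m in tables]
    kind, i = random.choices(moves, weights=weights)[0]
    if kind == "table":
        tables[i] += 1             # join an occupied table
    else:
        tables.insert(i, 1)        # open a new table in gap i
    return tables

def down_step(tables):
    """One uniformly chosen customer leaves; emptied tables are removed."""
    i = random.choices(range(len(tables)), weights=tables)[0]
    tables[i] -= 1
    if tables[i] == 0:
        del tables[i]
    return tables

def embed_pcrp(tables, jumps, alpha, th1, th2):
    """Drive the PCRP by the +/-1 jumps of a given total-mass path Z."""
    for s in jumps:
        tables = up_step(tables, alpha, th1, th2) if s > 0 \
                 else down_step(tables)
    return tables
\end{verbatim}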
Recall the probability law $\mathtt{oCRP}^{(\alpha)}_{m}(\theta_1,\theta_2)$ from Definition~\ref{defn:oCRP}.
\begin{lemma}[Marginal distribution of a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$]\label{prop:crp-ps}
Consider a $\mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$ $(C(t),\, t\ge 0)$
starting from $C(0)\sim \mathtt{oCRP}^{(\alpha)}_{m}(\theta_1,\theta_2)$ with $m\in \mathbb{N}_0$.
Then, at any time $t \ge 0$, $C(t)$ has a mixture distribution $\mathtt{oCRP}^{(\alpha)}_{\|C(t)\|}(\theta_1,\theta_2)$, where the total number of customers evolves as the process $(\|C(t)\|,\,t\ge 0)\sim \pi_m(\theta)$ with $\theta:= \theta_1+\theta_2- \alpha$.
\end{lemma}
\begin{proof}
Let $Z\sim \pi_{\|C(0)\|} (\theta)$ and consider $(C (t),\, t\ge 0)\sim \mathrm{PCRP}^{(\alpha)} (\theta_1, \theta_2)$, starting from $C(0)$, embedded in $Z$ in the way explained above.
Before the first jump time $J_1$ of $Z$, we have $C(t) = C(0)\sim \mathtt{oCRP}^{(\alpha)}_{m}(\theta_1,\theta_2)$ by assumption.
At time $J_1$, it follows from Proposition~\ref{prop:down} that, given $Z(J_1)$, $C(J_1)$ has conditional distribution $\mathtt{oCRP}^{(\alpha)}_{Z(J_1)}(\theta_1,\theta_2)$.
The proof is completed by induction.
\end{proof}
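As a by-product, the seating sketch above also yields the initial condition of Lemma~\ref{prop:crp-ps}: under our reading of the seating rule, seating $m$ customers one by one samples from $\mathtt{oCRP}^{(\alpha)}_{m}(\theta_1,\theta_2)$, so the lemma is easy to probe by simulation.
\begin{verbatim}
def sample_ocrp(m, alpha, th1, th2):
    """Seat m customers one by one, using up_step from the sketch above."""
    tables = []
    for _ in range(m):
        tables = up_step(tables, alpha, th1, th2)
    return tables

print(sample_ocrp(10, 0.5, 0.5, 0.5))   # e.g. [3, 1, 4, 2]
\end{verbatim}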
\begin{proof}[Proof of Proposition~\ref{prop:ps-theta1theta2}]
For $n\in \mathbb{N}$, consider a process $C^{(n)}\sim \mathrm{PCRP}^{(\alpha)}(\theta_1, \theta_2)$,
starting from $C^{(n)}(0) \sim\mathtt{oCRP}_{\lfloor n Z(0)\rfloor}^{(\alpha)}(\theta_1, \theta_2)$ and killed at $\emptyset$.
It follows from Lemma~\ref{prop:crp-ps} that, for every $t\ge 0$,
$C^{(n)}(t)$ has the mixture distribution $\mathtt{oCRP}^{(\alpha)}_{N^{(n)}(t\wedge\zeta(N^{(n)}))}(\theta_1, \theta_2)$ with
$(N^{(n)}(t),t\ge 0)\sim\pi_{\lfloor n Z(0)\rfloor} (\theta)$.
By Lemma~\ref{lem:crp-pdip},
$ \frac{1}{n} C^{(n)}(0)$ converges in distribution to $Z(0) \bar\gamma$ under $d_H$.
For any fixed $t\ge 0$, it follows from Theorem~\ref{thm:crp-ip-bis} that
$\frac{1}{n} C^{(n)}(2 n t)$ converges in distribution to $\beta(t)$.
Using Lemmas~\ref{lem:ud}--\ref{lem:ud-bis} and \ref{lem:crp-pdip} leads to the desired statement.
\end{proof}
\subsection{SSIP-evolutions}\label{sec:SSIP}
Let $\alpha\in (0,1)$ and $\theta_1,\theta_2\ge 0$.
Recall that the state $\emptyset$ has been defined to be a trap of an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution.
In this section, we will show that, in certain cases depending on the value of $\theta:= \theta_1+\theta_2-\alpha$, we can include $\emptyset$ as an initial state, in such a way that the process leaves $\emptyset$ continuously.
More precisely, consider independent $(Z(t),\, t\ge 0)\sim {\tt BESQ}_0(2 \theta)$
and $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$.
Define for every $t\ge 0$ a probability kernel $K_t$ on $\mathcal{I}_H$: for $\beta_0\in \mathcal{I}_H$ and measurable $A \subseteq \mathcal{I}_H$,
\begin{equation}\label{eq:kernel-ssip}
K_t(\beta_0, A)= \mathbb{P}\big(\beta(t)\in A,\, t< \zeta(\boldsymbol{\beta})\big)
+ \int_0^t \mathbb{P}(Z({t\!-\!r})\bar\gamma\in A) \mathbb{P}(\zeta(\boldsymbol{\beta})\in dr),
\end{equation}
where $\boldsymbol{\beta}=(\beta(t),\,t\ge 0)$ is an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution starting from $\beta_0$, and $\zeta(\boldsymbol{\beta})$ is the first hitting time of $\emptyset$ by $\boldsymbol{\beta}$.
Note that \cite[Corollary XI.(1.4)]{RevuzYor} yields, for fixed $s> 0$, that
\begin{equation}\label{eq:gamma}(Z(t),\, t\ge 0)\sim {\tt BESQ}_0(2 \theta),\quad\theta>0,\qquad\Rightarrow\qquad Z(s)\sim \mathtt{Gamma}(\theta,1/2s).
\end{equation}
When $\beta_0=\emptyset$, we have by convention $\zeta(\boldsymbol{\beta})=0$ and the first term in \eqref{eq:kernel-ssip} vanishes.
\begin{theorem}\label{thm:hunt}
Let $\theta_1,\theta_2\ge 0$.
The family $(K_t,\,t\ge 0)$ defined in \eqref{eq:kernel-ssip} is the transition semigroup of a path-continuous Hunt process on the Polish space $\mathcal{I}_H$.
\end{theorem}
\begin{definition}[$\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolutions]\label{defn:ssip}
For $\theta_1,\theta_2\ge 0$, a path-continuous Markov process with transition semigroup $(K_t,\,t\ge 0)$ is called an \emph{$\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution}.
\end{definition}
\begin{proposition}\label{prop:mass}
For $\theta_1,\theta_2\ge 0$, let $(\beta(t) ,\,t\ge 0)$ be a Markov process with transition semigroup $(K_t,t\!\ge \!0)$.
Then the total mass $(\|\beta(t)\| ,t\!\ge \!0)$ is a ${\tt BESQ}(2\theta)$ with $\theta =\theta_1\!+\!\theta_2\!-\!\alpha$. \pagebreak[2]
\end{proposition}
\begin{proof}
We know from Theorem~\ref{thm:IvaluedMP} that the total mass of an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1,\theta_2)$-evolution evolves according to a ${\tt BESQ}(2\theta)$ killed at zero. Therefore, the description in \eqref{eq:kernel-ssip} implies that $(\|\beta(t)\| ,\,t\ge 0)$ has the semigroup of ${\tt BESQ}(2\theta)$.
\end{proof}
The proof of Theorem~\ref{thm:hunt} is postponed to Section~\ref{sec:rec}. We distinguish three phases:
\begin{itemize}
\item $\theta\in [-\alpha,0]$: by convention, $Z\sim {\tt BESQ}_0(2 \theta)$ is the constant zero process and thus the second term in \eqref{eq:kernel-ssip} vanishes; then $(K_t,\,t\ge 0)$ is just the semigroup of an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution. In this case Theorem~\ref{thm:hunt} is encompassed by Theorem~\ref{thm:IvaluedMP}.
\item $\theta\in (0,1)$: by Theorem~\ref{thm:IvaluedMP}~(\ref{item:mass}) and \cite[Equation~(13)]{GoinYor03} we deduce that $\zeta(\boldsymbol{\beta})$ is a.s.\ finite in \eqref{eq:kernel-ssip}, with $\zeta(\boldsymbol{\beta})\mbox{$ \ \stackrel{d}{=}$ } \|\beta(0)\|/2G$, where $G\sim \mathtt{Gamma}(1-\theta, 1)$. In this case, we will construct an $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution as a \emph{recurrent extension} of $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolutions, by using an excursion measure that will be introduced in Section~\ref{sec:exc}.
\item $\theta\ge 1$: since $\zeta(\boldsymbol{\beta})=\infty$ a.s.\ whenever $\beta(0)\ne\emptyset$, the second term in \eqref{eq:kernel-ssip} vanishes unless $\beta(0)=\emptyset$; in the latter case, $\emptyset$ is an entrance boundary with entrance law
$K_t(\emptyset,\,\cdot\,) =\mathbb{P}(Z(t)\bar\gamma\in \cdot\,)$, by Proposition \ref{prop:ps-theta1theta2}. See also \cite[Proposition 5.11]{ShiWinkel-1},
where this was shown using a different construction and a different formulation of the entrance law, which is seen to be equivalent
to \eqref{eq:kernel-ssip} by writing $\bar{\gamma}=B'\big(V^\prime\bar{\gamma}_1\star\{(0,1\!-\!V^\prime)\}\big) \star (1\!-\!B') \bar{\beta}\sim \mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$ as in Corollary \ref{cor:pdipdec}.\end{itemize}
\subsection{The excursion measure of an SSIP-evolution when $\theta\in (-\alpha,1)$}\label{sec:exc}
In this section, we fix $\theta_1,\theta_2\ge 0$ and suppose that $-\alpha< \theta \!=\! \theta_1 \!+\!\theta_2 \!-\!\alpha \!<\!1$.
We shall construct an $\mathrm{SSIP}^{(\alpha)} (\theta_1, \theta_2)$ excursion measure $\Theta:= \Theta^{(\alpha)} (\theta_1, \theta_2)$, which is a $\sigma$-finite measure on the space $\mathbb{C}([0,\infty), \mathcal{I}_H)$ of
continuous functions in $(\mathcal{I}_H, d_{H})$, endowed with the uniform metric and the Borel $\sigma$-algebra.
Our construction is in line with Pitman and Yor \cite[(3.2)]{PitmYor82} and proceeds by the following steps.
\begin{itemize}
\item
For each $t> 0$, define a measure $N_t$ on $\mathcal{I}_H$ by
\begin{align}\label{eq:entrance}
N_t(A) &:= \mathbb{E}\left[ (Z(t))^{\theta-1} \mathbf{1}_{A} (Z(t) \bar\gamma)\right],\quad \text{ measurable }A\subseteq \mathcal{I}_H\setminus\{\emptyset\}, \\
N_t (\emptyset) &:=\infty,\nonumber
\end{align}
where $Z=(Z(t),\, t\ge 0)\sim {\tt BESQ}_0(4 - 2 \theta)$
and $\bar\gamma\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$ are independent.
As $4-2\theta>2$, the process $Z$ never hits zero.
We have $N_t (\mathcal{I}_H\setminus\{\emptyset\})=t^{\theta-1}/\big(2^{1-\theta}\Gamma(2-\theta)\big)$: indeed, \eqref{eq:gamma} gives $Z(t)\sim \mathtt{Gamma}(2-\theta,1/2t)$, whence $\mathbb{E}\big[(Z(t))^{\theta-1}\big]=(2t)^{\theta-1}/\Gamma(2-\theta)$.
Then $(N_t,\, t> 0)$ is an entrance law for an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)} (\theta_1, \theta_2)$-evolution $(\beta(t),\, t\ge 0)$. Indeed, with notation as above, we have by Proposition~\ref{prop:ps-theta1theta2} that, for every $s\ge 0$, $t>0$ and non-negative measurable $f$,
\begin{align*}
\int \mathbb{E}\left[f(\beta(s)) \mid \beta(0) = \gamma\right]N_t (d\gamma)
&= \mathbb{E}\left[ (Z(t))^{\theta-1} \mathbb{E}_{Z(t)\bar\gamma}\left[f(\beta(s))\right] \right]\\
&=\mathbb{E}\left[ (Z'(0))^{\theta-1}\mathbb{E}_{Z'(0)}\left[f(Z'(s) \bar\gamma)\right] \right],
\end{align*}
where $(Z'(s),\, s\ge 0)$ is a ${\tt BESQ}(2 \theta)$ killed at zero with $Z'(0) = Z(t)$.
Since we know from the duality property of ${\tt BESQ}(2 \theta)$, see e.g.\ \cite[(3.b) and (3.5)]{PitmYor82}, that
\[
(Z'(0))^{\theta-1} \mathbb{E}_{Z'(0)} \big[g(Z'(s))\big]= \mathbb{E}_{Z'(0)} \big[g(\widetilde{Z}(s)) (\widetilde{Z}(s))^{\theta-1}\big] , \qquad \forall s> 0,
\]
where $\widetilde{Z}\sim {\tt BESQ}(4 -2 \theta)$ starting from $Z'(0)$,
it follows from the Markov property that
\begin{align*}
\mathbb{E}\left[\! (Z'(0))^{\theta-1}\mathbb{E}_{Z'(0)}[f(Z'({s}) \bar\gamma)] \right]\!
&= \mathbb{E}\left[ \mathbb{E}_{Z'(0)}\left[(\widetilde{Z}(s))^{\theta-1} f(\widetilde{Z}({s}) \bar\gamma)\right] \right] \\
&= \mathbb{E}\left[ (Z({t+s}))^{\theta-1} f(Z({t+s}) \bar\gamma) \right]
= \int f(\gamma) N_{t+s}(d\gamma).
\end{align*}
We conclude that
\[
\int \mathbb{E}\big[f(\beta(s)) \mid\beta(0) = \gamma\big]N_t (d\gamma) = \int f(\gamma) N_{t+s} (d \gamma), \quad \forall s,t\ge 0.
\]
\item
As a consequence, there exists a unique $\sigma$-finite measure $\Theta$ on $\mathbb{C}((0,\infty), \mathcal{I}_H)$ such that, for all $t> 0$ and every bounded measurable functional $F$, we have the identity
\begin{equation}\label{eq:Theta:entrance}
\Theta [F\circ L_t] = \int \mathbb{E} \left[ F( \beta(s),\, s\ge 0) \mid \beta(0) = \gamma\right] N_t (d \gamma) ,
\end{equation}
where $(\beta(s),\, s\ge 0)$ is an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)} (\theta_1, \theta_2)$-evolution and $L_t$ stands for the shift operator.
See \cite[VI.48]{RogersWilliams} for details.
In particular, for each $t>0$ and measurable $A\subseteq \mathcal{I}_H\setminus\{\emptyset\}$, we have the identity
$\Theta \{(\beta(s),\,s>0)\in\mathbb{C}((0,\infty),\mathcal{I}_H)\colon\beta(t) \in A \} =N_t(A)$. Consequently,
\begin{equation}\label{eq:Theta-zeta}
\Theta(\zeta>t)=\Theta\big\{(\beta(s),s>0)\in\mathbb{C}((0,\infty),\mathcal{I}_H)\colon\beta(t) \ne \emptyset \big\} = t^{\theta-1}/\big(2^{1-\theta}\Gamma(2-\theta)\big).
\end{equation}
\item The image of $\Theta$ by the mapping $(\beta(t),\,t>0)\mapsto(\|\beta(t)\|,\,t>0)$ is equal to the push-forward of $\Lambda_{\mathtt{BESQ}}^{(2 \theta)}$ from $\mathbb{C}([0,\infty),\mathbb{R}_+)$ to $\mathbb{C}((0,\infty),\mathbb{R}_+)$ under the restriction map, where $\Lambda_{\mathtt{BESQ}}^{(2 \theta)}$ is the excursion measure of
${\tt BESQ} (2 \theta)$ as in \eqref{eq:besq-exc}.
In particular, we have for $\Theta$-almost every $(\beta(t),\,t>0)\in\mathbb{C}((0,\infty),\mathcal{I}_H)$
\begin{equation}\label{eq:Theta:tozero}
\limsup_{t\downarrow 0} \|\beta(t)\|=0 \quad \Longrightarrow \quad \lim_{t\downarrow 0} d_H(\beta(t),\emptyset)=0.
\end{equation}
Therefore, we can ``extend'' $\Theta$ to $\mathbb{C}([0,\infty), \mathcal{I}_H)$, by defining $\beta(0)=\emptyset$ for $\Theta$-almost every $(\beta(t),t>0)\in\mathbb{C}((0,\infty),\mathcal{I}_H)$, and we also set
\begin{equation}\label{eq:Theta:zero}
\Theta\big\{\boldsymbol{\beta}\in\mathbb{C}([0,\infty),\mathcal{I}_H)\colon\boldsymbol{\beta}\equiv \emptyset\big\} = 0.
\end{equation}
\end{itemize}
Summarising, we have the following statement.
\begin{proposition}\label{prop:Theta-besq}
Let $\theta_1,\theta_2\ge 0$ and suppose that $-\alpha< \theta \!=\! \theta_1 +\theta_2 -\alpha \!<\!1$.
Then there is a unique $\sigma$-finite measure $\Theta= \Theta^{(\alpha)} (\theta_1, \theta_2)$ on $\mathbb{C}([0,\infty), \mathcal{I}_H)$ that satisfies
\eqref{eq:Theta:entrance} and \eqref{eq:Theta:zero}.
Moreover, the image of $\Theta$ by the mapping $(\beta(t),\,t\ge 0)\mapsto(\|\beta(t)\|,\,t\ge 0)$ is $\Lambda_{\mathtt{BESQ}}^{(2 \theta)}$, the excursion measure of ${\tt BESQ} (2 \theta)$.
\end{proposition}
For the case $\theta_1=\theta_2 =0$, the law $\mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$ coincides with the Dirac mass $\delta_{\{(0,1)\}}$.
As a consequence, the $\mathrm{SSIP}^{(\alpha)} (0,0)$ excursion measure is just the pushforward of
$\Lambda_{\mathtt{BESQ}}^{(2 \theta)}$, by the map $ x \mapsto \{(0,x)\}$ from $[0,\infty)$ to $\mathcal{I}_H$.
When $\theta_1=0$ and $\theta_2=\alpha$, it is easy to check using \cite[Proposition 2.12(i), Lemma 3.5(ii), Corollary 3.9]{IPPAT} that $2\alpha\Theta^{(\alpha)}(0,\alpha)$ is the push-forward via the mapping $\ensuremath{\overline{\normalfont\textsc{skewer}}}$ in Definition~\ref{def:skewer} of the measure
$\nu^{(\alpha)}_{\mathrm{\bot cld}}$ studied in \cite[Section 2.3]{IPPAT}.
\subsection{Recurrent extension when $\theta\in (0,1)$}\label{sec:rec}
Consider the $\mathrm{SSIP}^{(\alpha)} (\theta_1, \theta_2)$ excursion measure $\Theta:= \Theta^{(\alpha)} (\theta_1, \theta_2)$ and suppose that $\theta= \theta_1 +\theta_2-\alpha \in (0,1)$.
It is well-known \cite{Salisbury86} in the theory of Markov processes that excursion measures such as $\Theta$ can be used to construct a recurrent extension of a Markov process. To this end,
let $\mathbf{G}\sim\mathtt{PRM}(\mathrm{Leb}\otimes b_\theta\Theta)$, where $b_\theta=2^{1-\theta}\Gamma(1-\theta)/\Gamma(\theta)$.
For every $s\ge 0$, set $\sigma_s = \int_{[0,s]\times \mathcal{I}_H} \zeta (\boldsymbol{\gamma}) \mathbf{G} (dr, d\boldsymbol{\gamma})$. As the total mass process under $\Theta$ is the $\mathtt{BESQ}(2\theta)$ excursion measure with $\theta\in (0,1)$,
the process $(\sigma_s,\, s\ge 0)$ is distributed as the inverse local time of a $\mathtt{BESQ}(2\theta)$, which is well-known to be a subordinator.
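Let us record the elementary computation behind the last assertion: combining \eqref{eq:Theta-zeta} with the value $b_\theta=2^{1-\theta}\Gamma(1-\theta)/\Gamma(\theta)$ and the identity $\Gamma(2-\theta)=(1-\theta)\Gamma(1-\theta)$, the L\'evy measure of $(\sigma_s,\, s\ge 0)$ is
\[
b_\theta\, \Theta(\zeta\in dt) = \frac{t^{\theta-2}}{\Gamma(\theta)}\, dt, \qquad t>0,
\]
so that $(\sigma_s,\, s\ge 0)$ is a stable subordinator of index $1-\theta$.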
We define
\begin{equation}\label{eq:nonconcat}
\beta(t)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{ points }(s,\boldsymbol{\gamma}_s)\text{ of } \mathbf{G},\, \sigma_{s-}< t\le \sigma_{s}} \gamma_s(t- \sigma_{s-}), \qquad t\ge 0.
\end{equation}
For each $t\ge 0$, at most one term contributes to this ``concatenation'', since $(\sigma_s,\,s\ge 0)$ is increasing.
\begin{proposition}\label{prop:rec-extension}
The process $(\beta(t),\, t\ge 0)$ of \eqref{eq:nonconcat} is a path-continuous Hunt process with transition semigroup $(K_t,\,t\ge 0)$. Its total mass process $(\|\beta(t)\|,\, t\ge 0)$ is a ${\tt BESQ}(2\theta)$.
\end{proposition}
\begin{proof}
We can use \cite[Theorem 4.1]{Salisbury86}, since we have the following properties:
\begin{itemize}
\item an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)} (\theta_1, \theta_2)$-evolution is a Hunt process;
\item $\Theta$ is concentrated on $\{\boldsymbol{\gamma}\in\mathbb{C}([0,\infty),\mathcal{I}_H)\colon 0<\zeta(\boldsymbol{\gamma})<\infty, \gamma(t) = \emptyset\mbox{ for all }t\ge \zeta(\boldsymbol{\gamma}) \}$;
\item for any $a>0$, we have $\Theta\{\boldsymbol{\gamma}\in\mathbb{C}([0,\infty),\mathcal{I}_H)\colon\sup_{t\ge 0} \|\gamma(t)\|\ge a \}<\infty$;
\item $\int (1-e^{-\zeta(\boldsymbol{\gamma})}) b_\theta\Theta (d\boldsymbol{\gamma}) = 1$;
\item \eqref{eq:Theta:entrance} holds;
\item $\Theta$ is infinite and $\Theta\{\boldsymbol{\gamma}\in\mathbb{C}([0,\infty),\mathcal{I}_H)\colon \gamma(0)\ne\emptyset \}=0$.
\end{itemize}
It follows that $(\beta(t),\, t\ge 0)$ is a Borel right Markov process with transition semigroup $(K_t,\, t\ge 0)$. Moreover, the total mass process $(\|\beta(t)\|,\, t\ge 0)$ evolves according to a ${\tt BESQ}(2\theta)$ by Proposition~\ref{prop:Theta-besq}.
In fact, $(\beta(t),\,t\ge 0)$ a.s.\ has continuous paths. Fix any path on the almost sure event that the total mass process $(\|\beta(t)\|, t\ge 0)$ and all excursions $\boldsymbol{\gamma}_s$ are continuous. For any $t\ge 0$, if $\sigma_{s-}<t<\sigma_{s}$ for some $s\ge 0$, then the continuity at $t$ follows from that of $\boldsymbol{\gamma}_s$.
For any other $t$, we have $\beta(t)=\emptyset$ and the continuity at $t$ follows from the continuity of the ${\tt BESQ}(2\theta)$ total mass process. This completes the proof.
\end{proof}
We are now ready to give the proof of Theorem \ref{thm:hunt}, which claims that $(K_t,\,t\ge 0)$ defined in \eqref{eq:kernel-ssip} is the transition semigroup of a path-continuous Hunt process.
\begin{proof}[Proof of Theorem~\ref{thm:hunt}] When $\theta\in (0,1)$, this is proved by Proposition~\ref{prop:rec-extension}.
When $\theta\le 0$, the state $\emptyset$ is absorbing, and an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution coincides with an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution.
For $\theta\ge 1$, the state $\emptyset$ is inaccessible, but an entrance boundary of the $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution, see
also \cite[Proposition 5.11]{ShiWinkel-1}. For these cases, the proof is completed by Theorem~\ref{thm:IvaluedMP}; the only modification needed concerns the process started from $\emptyset$. Specifically, the modified semigroup is still measurable, right-continuity starting from $\emptyset$ follows from the continuity of the total mass process, and this entails the strong Markov property by the usual approximation argument.
\end{proof}
\subsection{Proofs of results in the \hyperref[sec:intro]{introduction}}\label{sec:results}
We will first prove Theorem~\ref{thm:crp-ip}, identifying the limiting diffusion with an $\mathrm{SSIP}$-evolution as defined in Definition~\ref{defn:ssip}. Then we complete the proofs of the other results in the \hyperref[sec:intro]{introduction}.
\begin{lemma}\label{lem:entrance}
Let $\alpha\in(0,1)$, $\theta_1,\theta_2\ge 0$ and $\theta=\theta_1+\theta_2-\alpha$. For $n\in \mathbb{N}$, let $(C^{(n)}(t), t\ge 0)$ be a $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ starting from $C^{(n)}(0)=\gamma^{(n)}$.
If $ \frac{1}{n} \gamma^{(n)}$ converges to $\emptyset$ under $d_H$, then for any $t\ge 0$, \vspace{-0.2cm}
\[
\frac{1}{n} C^{(n)}(2nt) \to Z(t) \bar{\gamma}\quad \text{ in distribution},
\]
where $(Z(t),\,t\ge 0)\sim \mathtt{BESQ}_0(2\theta)$ and $\bar{\gamma}\sim \mathtt{PDIP}^{(\alpha)} (\theta_1,\theta_2)$ are independent. In particular, this limit is
identically equal to $\emptyset$ when $\theta\le 0$.
\end{lemma}
\begin{proof}
We start with the case when $\theta<1$.
Let $\zeta^{(n)}$ be the hitting time of $\emptyset$ for $C^{(n)}$. For any $\varepsilon>0$ and all $n$ large enough, $\zeta^{(n)}$ is stochastically dominated by the hitting time of zero of an up-down chain $Z^{(n)}\sim \pi(\theta)$ starting from $\lfloor n\varepsilon \rfloor$; rescaled by $1/2n$, the latter converges in distribution, by Lemmas~\ref{lem:ud} and \ref{lem:ud-bis}, to $\varepsilon/2G$ with $G$ a Gamma variable. Letting $\varepsilon\to 0$, we conclude that $\zeta^{(n)}/2n\to 0$ in probability as $n\to \infty$.
For any $t>0$ and any bounded continuous function $f$ on $\mathcal{I}_H$, we have
\begin{align}\label{eq:lifesplit}
\mathbb{E}\left[f\left(\frac{1}{n}C^{(n)}(2 n t)\right) \right]
=& \mathbb{E}\left[\mathbf{1}\{\zeta^{(n)} \le 2nt \} f\left(\frac{1}{n}\widetilde{C}^{(n)}\big(2nt - \zeta^{(n)}\big)\right) \right]\\
&+ \mathbb{E}\left[\mathbf{1}\{\zeta^{(n)} > 2nt \} f\left(\frac{1}{n}C^{(n)}(2 n t)\right) \right], \nonumber
\end{align}
where $\widetilde{C}^{(n)}(s)= C^{(n)}(s+\zeta^{(n)})$, $s\ge 0$.
As $n\to \infty$, since $\zeta^{(n)}/2n\to 0$ in probability, the second term tends to zero.
By the strong Markov property and Lemma~\ref{prop:crp-ps}, $\widetilde{C}^{(n)}(s)$ has the mixture distribution $\mathtt{oCRP}_{\|\widetilde{C}^{(n)}(s)\|}^{(\alpha)}(\theta_1,\theta_2)$.
Since $\|C^{(n)}(2nt)\|/n \to Z(t)$ in distribution by Lemma~\ref{lem:ud}, we deduce by Lemma~\ref{lem:crp-pdip} that
the first term tends to
$\mathbb{E}\left[f\left(Z(t)\bar{\gamma}\right) \right]$, as desired.
For $\theta\ge 1$, we have $\theta_1\ge\alpha$ or $\theta_2\ge\alpha$; say $\theta_1\ge\alpha$. We may assume that
$C^{(n)}(t)=C_1^{(n)}(t)\star C_0^{(n)}(t)\star C_2^{(n)}(t)$ for independent $(C_1^{(n)} (t),\,t\ge 0)\sim \mathrm{PCRP}^{(\alpha)} (\theta_1,0)$ starting from $\emptyset$, $(C_0^{(n)} (t),t\!\ge\! 0)\!\sim\! \mathrm{PCRP}^{(\alpha)} (\alpha,0)$ starting from $C^{(n)}(0)$, and $(C_2^{(n)} (t),t\!\ge\! 0)\linebreak \sim\mathrm{PCRP}^{(\alpha)} (\alpha,\theta_2)$ starting from $\emptyset$.
For the middle term $C_0^{(n)}$, the $\theta\!\le\! 0$ case yields that $\frac{1}{n} C_0^{(n)}(2nt) \to \emptyset$ in distribution.
For the other two, applying \eqref{eq:gamma} and Lemmas~\ref{prop:crp-ps}, \ref{lem:ud} and \ref{lem:crp-pdip} yields
$ \frac{1}{n} C_1^{(n)}(2nt) \to Z_1(t) \bar{\gamma}_1$ in distribution, with $Z_1(t)\sim \mathtt{Gamma}(\theta_1\!-\!\alpha,1/2t)$ and $\bar{\gamma}_1\sim \mathtt{PDIP}^{(\alpha)} (\theta_1,0)$, and $ \frac{1}{n} C_2^{(n)}(2nt) \to Z_2(t) \bar{\gamma}_2$ in distribution, with $Z_2(t)\sim \mathtt{Gamma}(\theta_2,1/2t)$ and $\bar{\gamma}_2\sim \mathtt{PDIP}^{(\alpha)} (\alpha,\theta_2)$.
We complete the proof by applying the decomposition \eqref{eq:pdip-decomp}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:crp-ip}]
When $\theta\le 0$, the state $\emptyset$ is absorbing, and an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution coincides with an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)}(\theta_1, \theta_2)$-evolution. For this case
the proof is completed by Theorem~\ref{thm:crp-ip-bis}.
So we shall only consider $\theta>0$ and prove that the limiting diffusion is given by an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-evolution $\boldsymbol{\beta}=(\beta(t),\,t\ge 0)$ with $\zeta(\boldsymbol{\beta})=\inf\{t\ge 0\colon\beta(t)=\emptyset\}$ as defined in Definition~\ref{defn:ssip}.
It suffices to prove the convergence in $\mathbb{D}([0,T], \mathcal{I}_{H})$ for a fixed $T>0$.
The convergence of finite-dimensional distributions follows readily from Theorem~\ref{thm:crp-ip-bis}, Lemma~\ref{lem:entrance} and the description in \eqref{eq:kernel-ssip}.
Specifically, for $\theta\in(0,1)$, we proceed as in the proof of Lemma \ref{lem:entrance} and see the first term in \eqref{eq:lifesplit} converge to $\mathbb{E}[\mathbf{1}\{\zeta(\boldsymbol{\beta})\le t\}f(Z_0(t-\zeta(\boldsymbol{\beta}))\bar{\gamma})]$ where $Z_0\sim{\tt BESQ}_0(2\theta)$ and $\bar{\gamma}\sim{\tt PDIP}^{(\alpha)}(\theta_1,\theta_2)$ are independent and jointly independent of $\boldsymbol{\beta}$, while the second term converges to $\mathbb{E}[\mathbf{1}\{\zeta(\boldsymbol{\beta})>t\}f(\beta(t))]$. For $\theta\ge 1$ and $\beta(0)=\emptyset$, convergence of marginals holds by Lemma \ref{lem:entrance} and \eqref{eq:kernel-ssip}. Theorem \ref{thm:crp-ip-bis} then establishes finite-dimensional convergence, also when $\beta(0)\neq\emptyset$.
It remains to check tightness. Let $\boldsymbol{\beta}^{(n)}=(\beta^{(n)}(t),\,t\ge 0):= \frac{1}{n} C^{(n)}(2n\,\cdot\,)$.
Since we already know from Lemma~\ref{lem:ud} that the sequence of total mass processes $\|\boldsymbol{\beta}^{(n)}\|$, $n\ge 1$, converges in distribution, it is tight.
For $h>0$, define the modulus of continuity by
\[\omega \left(\|\boldsymbol{\beta}^{(n)}\|, h\right) = \sup \left\{ \big|\|\beta^{(n)}(s)\|- \|\beta^{(n)}(t)\|\big|\colon |s-t|\le h\right\}.\]
For any $\varepsilon>0$, the tightness implies that there exists $\Delta'$ such that for any $h\le 2\Delta'$,
\[
\limsup_{n\to \infty} \mathbb{E}\left[\omega \left(\|\boldsymbol{\beta}^{(n)}\|, h\right)\wedge 1\right] <\varepsilon;
\]
this is an elementary consequence of \cite[Proposition VI.3.26]{JacodShiryaev}. See also \cite[Theorem~16.5]{Kallenberg}.
For $0\le i\le \lfloor T/\Delta' \rfloor$, set $t_i = i \Delta'$ and let
$\boldsymbol{\beta}^{(n)}_i$ be the process obtained by shifting $\boldsymbol{\beta}^{(n)}$ to start from time $t_i$, killed at $\emptyset$.
The convergence of the finite-dimensional distributions yields that each $\beta^{(n)}(t_i)$ converges weakly to $\beta(t_i)$.
Since $\beta(t_i)\ne \emptyset$ a.s., by Theorem~\ref{thm:crp-ip-bis} each sequence $\boldsymbol{\beta}^{(n)}_i$ converges in distribution as $n\to \infty$.
So the sequence $(\boldsymbol{\beta}^{(n)}_i, n\in \mathbb{N})$ is tight, as the space $(\mathcal{I}_{H},d_H)$ is Polish.
By tightness there
exists $\Delta_i$ such that for any $h<\Delta_i$,
\[
\limsup_{n\to \infty} \mathbb{E}\left[\omega \left(\boldsymbol{\beta}^{(n)}_i, h\right)\wedge 1\right] < 2^{-i} \varepsilon.
\]
Now let $\Delta = \min (\Delta', \Delta_0, \Delta_1, \ldots, \Delta_{\lfloor T/\Delta' \rfloor} )$.
For any $s\le t\le T$ with $t\!-\!s\le \Delta$, consider $i$ such that $t_i \le s < t_{i+1}$, then
$t\!-\!t_i < \Delta'\! +\! t\!-\!s \le 2 \Delta'$.
If $\zeta(\boldsymbol{\beta}^{(n)}_i) \le t\!-\!t_i$, then $\boldsymbol{\beta}^{(n)}$ touches $\emptyset$ during the time interval $[t_i, t]$ and thus
$\max (\|\beta^{(n)}(s)\|,\|\beta^{(n)}(t)\|) \le \omega \left(\|\boldsymbol{\beta}^{(n)}\|, 2\Delta'\right)$.
Therefore, we have
\[
d_H\left(\beta^{(n)}(s), \beta^{(n)}(t)\right) \le d_H\left(\beta^{(n)}_i(s\!-\!t_i), \beta^{(n)}_i(t\!-\!t_i)\right)+ 2\omega \left(\|\boldsymbol{\beta}^{(n)}\|, 2\Delta'\right).
\]
It follows that for $h<\Delta$,
\[
\mathbb{E}\left[\omega \left(\boldsymbol{\beta}^{(n)}, h\right)\wedge 1\right]\le
2\mathbb{E}\left[\omega \left(\|\boldsymbol{\beta}^{(n)}\|, 2\Delta'\right)\wedge 1\right] +
\sum_{i=0}^{\lfloor T/\Delta' \rfloor}\mathbb{E}\left[\omega \left(\boldsymbol{\beta}^{(n)}_i, \Delta_i\right)\wedge 1\right] .
\]
So we have $\limsup_{n\to \infty}\mathbb{E}\left[\omega \left(\boldsymbol{\beta}^{(n)}, h\right)\wedge 1\right]\le 4\varepsilon$.
This leads to the tightness, e.g.\ via \cite[Theorem 16.10]{Kallenberg}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:ps-theta1theta2-nokill}]
This follows from Proposition~\ref{prop:ps-theta1theta2} and the semigroup description in \eqref{eq:kernel-ssip}.
\end{proof}
\begin{theorem}\label{thm:continst} Let $\alpha\in(0,1)$, $\theta_1,\theta_2\ge 0$ and $\gamma_n\in\mathcal{I}_H$ with $\gamma_n\rightarrow\gamma\in\mathcal{I}_H$. Let
$\boldsymbol{\beta}_n$, $n\ge 1$, and $\boldsymbol{\beta}$ be ${\rm SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolutions starting from $\gamma_n$, $n\ge 1$, and $\gamma$, respectively. Then
$\boldsymbol{\beta}_n\rightarrow\boldsymbol{\beta}$ in distribution in $\mathbb{C}(\mathbb{R}_+,\mathcal{I}_H)$ equipped with the locally uniform
topology.
\end{theorem}
\begin{proof} It follows easily from Lemma \ref{lem:dH} that we may assume that $\gamma_n=\beta_{n,1}^{(0)}\star\{(0,m_n^{(0)})\}\star\beta_{n,2}^{(0)}$ with $m_n^{(0)}\rightarrow m^{(0)}$, $\beta_{n,i}^{(0)}\rightarrow\beta_i^{(0)}$, $i=1,2$, and $\phi(\gamma)=(\beta_1^{(0)},m^{(0)},\beta_2^{(0)})$. We will now couple the constructions in Definition \ref{defn:ipe} and use the notation from there.
Given $(\beta_{n,1}^{(k)},m_n^{(k)},\beta_{n,2}^{(k)})\rightarrow(\beta_1^{(k)},m^{(k)},\beta_2^{(k)})$ a.s., for some $k\ge 0$, the Feller property of
\cite[Theorem 1.8]{IPPAT} allows us to apply \cite[Theorem 19.25]{Kallenberg} and, by Skorokhod representation, we obtain
$\boldsymbol{\gamma}_{n,i}^{(k)}\!\rightarrow\!\boldsymbol{\gamma}_i^{(k)}$ a.s.\ in $\mathbb{C}(\mathbb{R}_+,\mathcal{I}_H)$, $i\!=\!1,2$, as $n\!\rightarrow\!\infty$. For
$\mathbf{f}^{(k)}\!\sim\!{\tt BESQ}_{m^{(k)}}(-2\alpha)$ and \linebreak
$\mathbf{f}^{(k)}_n(s)\!:=\!(m_n^{(k)}\!/m^{(k)})\mathbf{f}^{(k)}((m^{(k)}\!/m_n^{(k)})s)$, $s\!\ge\! 0$, we find
$\mathbf{f}^{(k)}_n\!\!\sim\!{\tt BESQ}_{m^{(k)}_n}(-2\alpha)$. As $n\!\rightarrow\!\infty$, \linebreak
$(\mathbf{f}^{(k)}_n,\zeta(\mathbf{f}^{(k)}_n))\rightarrow(\mathbf{f}^{(k)},\zeta(\mathbf{f}^{(k)}))$ a.s.
Since $\gamma_1^{(k)}(\zeta(\mathbf{f}^{(k)}))\star\gamma_2^{(k)}(\zeta(\mathbf{f}^{(k)}))$ a.s.\ has a unique longest interval,
$\phi\big(\gamma_{n,1}^{(k)}(\zeta(\mathbf{f}^{(k)}_n))\star\gamma_{n,2}^{(k)}(\zeta(\mathbf{f}^{(k)}_n))\big)\rightarrow\phi\big(\gamma_1^{(k)}(\zeta(\mathbf{f}^{(k)}))\star\gamma_2^{(k)}(\zeta(\mathbf{f}^{(k)}))\big)$ a.s.
Inductively, the convergences stated in this proof so far hold a.s.\ for all $k\ge 0$. When $\theta:=\theta_1+\theta_2-\alpha\ge 1$, this gives rise to coupled
$\boldsymbol{\beta}_n$ and $\boldsymbol{\beta}$. When $\theta<1$, arguments as at the end of the proof of Theorem \ref{thm:crp-ip-bis} allow us to prove the convergence until the first hitting time of $\emptyset$ jointly with the convergence of the hitting times. When $\theta\le 0$, we extend the constructions by absorption in $\emptyset$. When
$\theta\in(0,1)$, we extend by the same ${\rm SSIP}^{(\alpha)}(\theta_1,\theta_2)$ starting from $\emptyset$. In each case, we deduce
that $\boldsymbol{\beta}_n\rightarrow\boldsymbol{\beta}$ a.s., locally uniformly.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:dP}]
For an $\mathrm{SSIP}$-evolution, we have established the pseudo-stationarity (Proposition~\ref{prop:ps-theta1theta2-nokill}), self-similarity,
path-continuity, Hunt property (Theorem~\ref{thm:crp-ip}) and the continuity in the initial state (Theorem \ref{thm:continst}).
With these properties in hand, we can easily prove this theorem by following the same arguments as in \cite[proof of Theorem~1.6]{Paper1-2}. Details are left to the reader.
\end{proof}
We now prove Theorem~\ref{thm:Theta}, showing that when $\theta=\theta_1+\theta_2-\alpha\in (-\alpha,1)$, the excursion measure $\Theta:=\Theta^{(\alpha)}(\theta_1,\theta_2)$ of Section \ref{sec:exc} is the limit of rescaled PCRP excursion measures.
Recall that the total mass process of ${\rm PCRP}^{(\alpha)}(\theta_1,\theta_2)$ has distribution $\pi_1(\theta)$.
We have already obtained the convergence of the total mass process from Proposition~\ref{prop:vague}.
\begin{proof}[Proof of Theorem~\ref{thm:Theta}]
Recall that $\zeta(\boldsymbol{\gamma})= \inf\{t>0\colon \gamma(t)=\emptyset \}$ denotes the lifetime of an excursion $\boldsymbol{\gamma}\in \mathbb{D}([0,\infty),\mathcal{I}_H)$. To prove vague convergence, we proceed as in the proof of Proposition~\ref{prop:vague}. In the present setting, we work on the space of measures on $\mathbb{D}([0,\infty),\mathcal{I}_H)$ that are bounded on $\{\zeta>t\}$ for all $t>0$. We denote by $\mathrm{P}^{(n)}$ the distribution of $C^{(n)}$, a killed $\mathrm{PCRP}^{(\alpha)}(\theta_1,\theta_2)$ starting from $(1)$. It suffices to prove that, for fixed $t>0$:
\begin{enumerate}\item[1.] $\Theta(\zeta = t)=0$,\vspace{0.2cm}
\item[2.] $(\Gamma(1+\theta)/(1-\theta)) n^{1-\theta} \cdot \mathrm{P}^{(n)} (\zeta>t) \underset{n\to \infty}{\longrightarrow} \Theta(\zeta > t)$,
\item[3.] $\mathrm{P}^{(n)}(\,\cdot\,|\,\zeta>t) \underset{n\to \infty}{\longrightarrow} \Theta(\,\cdot\,|\,\zeta>t)$ weakly.
\end{enumerate}
1. This follows from \eqref{eq:Theta-zeta}.
2. Since the total-mass process $\|C^{(n)}\|$ is an up-down chain of law $\widetilde{\pi}_1^{(n)}(\theta)$, Proposition~\ref{prop:vague} implies the following weak convergence of finite measures on $(0,\infty)$:
\begin{align}\label{eq:cv-nu}
&\frac{ \Gamma(1+\theta)}{1-\theta} n^{1-\theta} \mathbb{P} \left(\|C^{(n)}(t)\|\in \cdot~;~ \zeta (C^{(n)} )> t\right) \\[-0.1cm]
&\underset{n\to \infty}{\longrightarrow}
\Lambda_{\mathtt{BESQ}}^{(2 \theta)}\big\{f\in\mathbb{C}([0,\infty),[0,\infty))\colon f(t) \in \cdot~;~ \zeta(f)> t\big\}
= N_t \big\{\gamma\in\mathcal{I}_H\colon \|\gamma\|\in \cdot \big\}, \nonumber
\end{align}
where $N_t$ is the entrance law of $\Theta$ given in \eqref{eq:entrance}. This implies the desired convergence. \pagebreak[2]
3. For any $t>0$, given $(\|C^{(n)}(r)\|, r\le 2 n t)$, we know from Lemma~\ref{prop:crp-ps} that the conditional distribution of $C^{(n)}(t) =\frac{1}{n} C(2 n t)$ is
the law of $\frac{1}{n} C_n$, where
$C_n$ is $\mathtt{oCRP}_{m}^{(\alpha)}(\theta_1,\theta_2)$ with $m= \|C(2 n t)\|$.
By Lemma \ref{lem:crp-pdip}, we can strengthen \eqref{eq:cv-nu} to the following weak convergence on $\mathcal{I}_H\setminus \{\emptyset\}$:\vspace{-0.3cm}
\[
\mathbb{P} \left(C^{(n)}(t)\in \cdot \,\middle|\, \zeta (C^{(n)} )> t\right)
\underset{n\to \infty}{\longrightarrow} N_t(\,\cdot \,|\, \mathcal{I}_H\setminus\{\emptyset\}).
\]
Next, by the Markov property of a PCRP and the convergence result Theorem~\ref{thm:crp-ip-bis}, we deduce that, conditionally on $\{ \zeta (C^{(n)} )> t\}$, the process $(C^{(n)}(t+s), s\ge 0)$ converges weakly to an $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution $(\beta(s),\,s\ge 0)$ starting from $\beta(0)\sim N_t(\,\cdot \,|\, \mathcal{I}_H\setminus\{\emptyset\})$. By the description of $\Theta$ in \eqref{eq:Theta:entrance}, this implies the convergence of finite-dimensional distributions for times
$t\le t_1<\cdots<t_k$. For $t>t_1$, this holds under $\mathrm{P}^{(n)}(\,\cdot\,|\,\zeta>t_1)$ and $\Theta(\,\cdot\,|\,\zeta>t_1)$ and can be further conditioned on $\{\zeta>t\}$, by 1.\ and 2.
It remains to prove tightness.
For every $n\ge 1$, let $\tau_n$ be a stopping time with respect to the natural filtration of $C^{(n)}$ and $h_n$ a positive constant. Suppose that the sequence $\tau_n$ is bounded and $h_n\to 0$.
By Aldous's criterion \cite[Theorem 16.11]{Kallenberg}, it suffices to show that for any $\delta>0$,
\begin{equation}\label{eq:Aldous}
\lim_{n\to\infty} \mathbb{P}\left(d_H\left(C^{(n)}(\tau_n\!+\!h_n) , C^{(n)}(\tau_n) \right) >\delta \,\middle|\, \zeta (C^{(n)} )> t\right) =0.
\end{equation}
By the total mass convergence in Proposition~\ref{prop:vague}, for any $\varepsilon>0$ there exists a constant $s>0$ such that \vspace{-0.1cm}
\begin{equation}\label{eq:Aldous-1}
\limsup_{n\to\infty} \mathbb{P}\left(\sup_{r\le 2s} \|C^{(n)}(r)\| >\delta/3 \,\middle|\, \zeta (C^{(n)} )> t\right) \le \varepsilon.
\end{equation}
Moreover, since $(C^{(n)}(s+z), z\ge 0)$ conditionally on $\{ \zeta (C^{(n)} )> s\}$ converges weakly to a continuous process, by \cite[Proposition~VI.3.26]{JacodShiryaev} we have for any $u>s$,
\begin{equation}\label{eq:Aldous-2}
\lim_{n\to\infty} \mathbb{P}\left(\sup_{r\in[s,u]} d_H\left(C^{(n)}(r\!+\!h_n) , C^{(n)}(r) \right) >\delta/3 \,\middle|\, \zeta (C^{(n)} )> t\right) =0.
\end{equation}
Then \eqref{eq:Aldous} follows from \eqref{eq:Aldous-1} and \eqref{eq:Aldous-2}. This completes the proof. \pagebreak
\end{proof}
\subsection{The case $\alpha=0$}\label{sec:zero}
In a PCRP model with $\alpha=0$, the size of each table evolves according to an up-down chain $\pi(0)$ as in \eqref{eq:pi}, and new tables are only started to the left or to the right, but not between existing tables.
We can hence build a $\mathrm{PCRP}^{(0)}(\theta_1,\theta_2)$ starting from $(n_1, \ldots,n_k)\in \mathcal{C}$ by a Poissonian construction. Specifically, consider independent $\mathbf{f}_i\sim \pi_{n_i}(0)$, $i\in [k]$, as size evolutions of the initial tables, $\mathbf{F}_1\sim \mathtt{PRM}(\theta_1\mathrm{Leb}\otimes \pi_1(0))$ whose atoms describe the birth times and size evolutions of new tables added to the left, and $\mathbf{F}_2\sim \mathtt{PRM}(\theta_2\mathrm{Leb}\otimes \pi_1(0))$ for new tables added to the right.
For $t\ge 0$, set
$C_1(t)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{atoms }(s,f)\text{ of }\mathbf{F}_1\downarrow: s\le t} \{(0, f(t-s))\}$, where $\downarrow$ means that the concatenation is from larger $s$ to smaller,
$C_0(t)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{i\in [k]} \{(0,\mathbf{f}_i(t))\}$, and
$C_2(t)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{atoms }(s,f)\text{ of }\mathbf{F}_2:s\le t} \{(0, f(t-s))\}$.
Then $(C(t)= C_1(t)\star C_0(t)\star C_2(t) ,t\ge 0)$ is a $\mathrm{PCRP}^{(0)}(\theta_1,\theta_2)$ starting from $(n_1, \ldots,n_k)$.
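For illustration, the following Python sketch evaluates this Poissonian construction at a fixed time $t$. It is a minimal sketch, not part of the development: it assumes $\theta_1,\theta_2>0$, represents states as lists of table sizes, and takes for $\pi(\theta)$ the up-down chain that steps from $m$ to $m+1$ at rate $m+\theta$ and down to $m-1$ at rate $m$, absorbed at $0$ (the precise rates are those of \eqref{eq:pi}); tables whose size has reached $0$ are discarded.
\begin{verbatim}
import random

def updown_pi(m0, theta, t):
    # Assumed dynamics of pi(theta): from state m, jump to m+1 at rate
    # m + theta and to m-1 at rate m; absorbed at 0.  Returns the state
    # at time t.
    m, s = m0, 0.0
    while m > 0:
        up, down = m + theta, m
        s += random.expovariate(up + down)
        if s > t:
            break
        m += 1 if random.random() * (up + down) < up else -1
    return m

def pcrp0_at(t, init, theta1, theta2):
    # State at time t of a PCRP^(0)(theta1, theta2) started from init:
    # tables born from F_1 (left of the initial tables) and F_2 (right).
    def side(rate):
        out, s = [], random.expovariate(rate)
        while s <= t:
            out.insert(0, updown_pi(1, 0.0, t - s))  # newest table outermost
            s += random.expovariate(rate)
        return [m for m in out if m > 0]
    mid = [m for m in (updown_pi(n, 0.0, t) for n in init) if m > 0]
    return side(theta1) + mid + side(theta2)[::-1]
\end{verbatim}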
\begin{proposition}
The statement of Theorem~\ref{thm:crp-ip} still holds when $\alpha=0$.
\end{proposition}
\begin{proof}
We only prove this for the case when the initial state is $C^{(n)}(0)= \{(0,b^{(n)})\}$ with $\lim_{n\to \infty} b^{(n)}/n = b> 0$.
Then we can extend to a general initial state in the same way as we passed from Lemma~\ref{lem:cv-clade} to Theorem~\ref{thm:crp-ip-0}.
For each $n\ge 1$, we may assume $C^{(n)}$ is associated with $\mathbf{F}_1\sim \mathtt{PRM}(\theta_1\mathrm{Leb}\otimes \pi_1(0))$, $\mathbf{F}_2\sim \mathtt{PRM}(\theta_2\mathrm{Leb}\otimes \pi_1(0))$ and $\mathbf{f}^{(n)}_0\sim \pi_{b^{(n)}}(0)$.
Replacing each atom $\delta(s,f)$ of $\mathbf{F}_1$ and $\mathbf{F}_2$ by $\delta(s/2n, f(2n\cdot)/n)$, we obtain $\mathbf{F}^{(n)}_1\sim \mathtt{PRM}(2n\theta_1\mathrm{Leb}\otimes \pi^{(n)}_1(0))$ and
$\mathbf{F}^{(n)}_2\sim \mathtt{PRM}(2n\theta_2\mathrm{Leb}\otimes \pi^{(n)}_1(0))$. Note that
$(\frac{1}{n}C^{(n)}(2nt),t\ge 0)$ is associated with $(\mathbf{F}^{(n)}_1, \mathbf{F}^{(n)}_2, \mathbf{f}^{(n)}_0(2n\cdot)/n)$.
Since Proposition~\ref{prop:vague} shows that $n\pi^{(n)}_1(0)\to \Lambda_{\mathtt{BESQ}}^{(0)}$,
by \cite[Theorem 4.11]{KallenbergRM}, we deduce that $\mathbf{F}^{(n)}_1$ and $\mathbf{F}^{(n)}_2$ converge in distribution respectively to $\mathbf{F}^{(\infty)}_1\sim \mathtt{PRM}(2\theta_1\mathrm{Leb}\otimes \Lambda_{\mathtt{BESQ}}^{(0)})$ and $\mathbf{F}^{(\infty)}_2\!\sim\! \mathtt{PRM}(2\theta_2\mathrm{Leb}\otimes \Lambda_{\mathtt{BESQ}}^{(0)})$.
By Lemma~\ref{lem:ud}, $\mathbf{f}^{(n)}_0(2n\cdot)/n \!\to\! \mathbf{f}^{(\infty)}\!\sim\! {\tt BESQ}_b(0)$ in distribution.
As a result, we can deduce that $(\frac{1}{n}C^{(n)}(2nt),t\ge 0)$ converges to an $\mathcal{I}_H$-valued process $(\beta(t),t\ge 0)$ defined by
\begin{multline}\label{eq:ssip0}
\beta(t)=
\bigg( \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{atoms }(s,f)\text{ of }\mathbf{F}^{(\infty)}_1\downarrow: s\le t}\!\! \{(0, f(t\!-\!s))\}\bigg)\\
\star \{(0,\mathbf{f}^{(\infty)}(t) )\} \star
\bigg( \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{atoms }(s,f)\text{ of }\mathbf{F}^{(\infty)}_2: s\le t}\!\! \{(0, f(t\!-\!s))\}\bigg).
\end{multline}
A rigorous argument can be made as in the proof of Lemma~\ref{lem:cv-clade}.
\end{proof}
The limiting process in \eqref{eq:ssip0} can be viewed as an $\mathrm{SSIP}^{(0)}(\theta_1,\theta_2)$-evolution, which is closely related to the construction of measure-valued processes in \cite{Shiga1990}. See also \cite[Section 7.1]{FVAT}.
\section{Applications}\label{sec:appl}
\subsection{Measure-valued processes}\label{sec:FV}
In \cite{FVAT}, we introduced a two-parameter family of superprocesses taking values in the space $(\mathcal{M}^a,d_{\mathcal{M}})$ of all purely atomic finite measures on a space of allelic types, say $[0,1]$. Here $d_{\mathcal{M}}$ is the Prokhorov distance.
Our construction is closely related to that of SSIP-evolutions, here extracting from scaffolding and spindles via the following \emph{superskewer} mapping. See Figure~\ref{fig:scaf-marks} on page \pageref{fig:scaf-marks} for an illustration.
\begin{definition}[Superskewer] \label{def:superskewer}
Let $V= \sum_{i\in I} \delta(t_i,f_i,x_i)$ be a point measure on $\mathbb{R}\times \mathcal{E}\times [0,1]$ and $X$ a c\`adl\`ag process such that
\[\sum_{\Delta X(t)> 0} \delta(t, \Delta X(t)) = \sum_{i\in I} \delta(t_i, \zeta(f_i)).\]
The \emph{superskewer} of the pair $(V,X)$ at level $y$ is the atomic measure
\begin{equation}
\ensuremath{\normalfont\textsc{sSkewer}}(y,V,X) := \sum_{i\in I\colon X(t_i-)\le y <X(t_i)} f_i\big( y- X(t_i-) \big) \delta(x_i).
\end{equation}
\end{definition}
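For finitely many atoms, the superskewer can be evaluated directly. The following minimal Python sketch encodes each atom of $V$ by the (hypothetical) inputs $\big(X(t_i-),X(t_i),f_i,x_i\big)$, where $X(t_i)=X(t_i-)+\zeta(f_i)$, and returns the atomic measure as a dictionary from types to masses.
\begin{verbatim}
def superskewer(y, atoms):
    # atoms: finite list of tuples (lo, hi, f, x) with lo = X(t_i-),
    # hi = X(t_i) and hi - lo = zeta(f); f is the spindle path, x the type.
    out = {}
    for lo, hi, f, x in atoms:
        if lo <= y < hi:                    # spindle straddles level y
            out[x] = out.get(x, 0.0) + f(y - lo)
    return out

# Toy example with two deterministic "spindles" crossing level y = 0.5:
measure = superskewer(0.5, [(0.0, 1.0, lambda s: 2 * s * (1 - s), 0.3),
                            (0.2, 0.8, lambda s: s, 0.7)])
\end{verbatim}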
For $\alpha\in (0,1)$, $\theta\ge 0$, recall the scaffolding-and-spindles construction of an $\mathrm{SSIP}^{(\alpha)}(\theta)$-evolution starting from $\gamma\in \mathcal{I}_H$; in particular, for each $U\in \gamma$, there is an initial spindle $\mathbf{f}_U\sim {\tt BESQ}_{\mathrm{Leb}(U)} (-2\alpha)$.
For any collection $x_U\in[0,1]$, $U\in \gamma$, we can construct a self-similar superprocess $\mathrm{SSSP}^{(\alpha)}(\theta)$ starting from $\pi= \sum_{U\in \gamma} \mathrm{Leb}(U) \delta(x_U)$ as follows.
We mark each initial spindle $\mathbf{f}_U$ by the allelic type $x_U$ and all other spindles in the construction by i.i.d.\ uniform random variables on $[0,1]$.
Then we obtain the desired superprocess by repeating the construction of an $\mathrm{SSIP}^{(\alpha)}(\theta)$-evolution in Definitions \ref{defn:ip0}--\ref{defn:alphatheta}, with skewer replaced by superskewer,
and concatenation replaced by addition. We refer to \cite{FVAT} for more details.
We often write $\pi\in \mathcal{M}^a$ in \emph{canonical representation} $\pi = \sum_{i\ge 1} b_i\delta(x_i)$ with
$b_1\ge b_2\ge \cdots$ and $x_i <x_{i+1}$ if $b_i=b_{i+1}$. We write $\|\pi\|:=\pi([0,1])=\sum_{i\ge 1}b_i$ for the \emph{total mass} of $\pi$.
\begin{definition}\label{defn:sssp}
Let $\alpha\in (0,1)$ and $\theta\in [-\alpha,0)$.
We define a process $(\pi(t),\,t\ge 0)$ starting from $\pi(0)\in \mathcal{M}^a$ by the following construction.
\begin{itemize}
\item Set $T_0=0$. For $\pi(0) = \sum_{i\ge 1} b_i\delta(x_i)$ in canonical representation,
consider $\mathbf{x}^{(0)}:= x_1$ and independent
$\mathbf{f}^{(0)}\sim {\tt BESQ}_{b_1}(-2\alpha)$ and
$\boldsymbol{\lambda}^{(0)}\sim \mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$ starting from $\sum_{i\ge 2} b_i\delta(x_i)$.
\item For $k\ge 1$, suppose by induction we have obtained $(\boldsymbol{\lambda}^{(i)},\mathbf{f}^{(i)},\mathbf{x}^{(i)}, T_i)_{0\le i\le k-1}$.
Then we set $T_{k}= T_{k-1} +\zeta(\mathbf{f}^{(k-1)}) $ and
\[
\pi(t) = \lambda^{(k-1)}(t\!-\!T_{k-1})+\mathbf{f}^{(k-1)}(t\!-\!T_{k-1}) \delta (\mathbf{x}^{(k-1)}),\qquad t\in [T_{k-1}, T_{k}].
\]
Write $\pi(T_{k}) = \sum_{i\ge 1} b^{(k)}_i\delta(x^{(k)}_i)$, with $b^{(k)}_1\ge b^{(k)}_2\ge \cdots$, for its canonical representation.
Conditionally on the history, construct independent $\boldsymbol{\lambda}^{(k)}\sim \mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$ starting from $\sum_{i\ge 2} b^{(k)}_i\delta(x^{(k)}_i)$ and $\mathbf{f}^{(k)}\sim {\tt BESQ}_{b^{(k)}_1}(-2\alpha)$. Let $\mathbf{x}^{(k)}= x^{(k)}_1$.
\item Let $T_{\infty}= \lim_{k\to\infty} T_k$ and $\pi(t) = 0$ for $t\ge T_{\infty}$.
\end{itemize}
The process $\boldsymbol{\pi}:=(\pi(t),t\ge 0)$ is called an \emph{$(\alpha,\theta)$ self-similar superprocess}, $\mathrm{SSSP}^{(\alpha)}(\theta)$.
\end{definition}
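Algorithmically, the recursion reads as follows. This is only a sketch: \texttt{run\_sssp} and \texttt{run\_besq} are hypothetical samplers, assumed to return, respectively, the atoms at a given time of an $\mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$ started from a list of (mass, type) pairs, and a ${\tt BESQ}(-2\alpha)$ path from a given mass together with its absorption time.
\begin{verbatim}
def sssp_at(t, pi0, run_sssp, run_besq):
    # pi0: list of (mass, type) pairs; sorting masses decreasingly gives
    # the canonical representation (ties by type are ignored here).
    T, atoms = 0.0, sorted(pi0, key=lambda a: -a[0])
    while atoms:                       # atoms == [] encodes pi = 0
        (b1, x1), rest = atoms[0], atoms[1:]
        f, zeta = run_besq(b1)         # distinguished spindle f^{(k)}
        if T + zeta > t:               # t lies in the current epoch
            return run_sssp(rest, t - T) + [(f(t - T), x1)]
        T += zeta                      # renaissance time T_{k+1}
        atoms = sorted(run_sssp(rest, zeta), key=lambda a: -a[0])
    return []
# Caveat: if t >= T_infinity, a real implementation needs a stopping
# rule, e.g. returning [] once the total mass falls below a tolerance.
\end{verbatim}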
For any $\pi(0)=\sum_{i\ge 1}b_i\delta(x_i)\in\mathcal{M}^a$ consider $\beta(0)=\{(s(i-1),s(i)),\,i\ge 1\}\in\mathcal{I}_H$, where
$s(i)=b_1+\cdots+b_i$, $i\ge 0$. Consider an $\mathrm{SSIP}^{(\alpha)}(\theta\!+\!\alpha, 0)$-evolution starting from $\beta(0)$, built as
in Definition~\ref{defn:ipe}, and use the notation from there.
As illustrated in Figure~\ref{fig:scaf-marks}, we may assume that each interval partition evolution is obtained from the skewer of marked spindles. Therefore, we can couple each $\mathrm{SSIP}^{(\alpha)}(\theta\!+\!\alpha)$-evolution $\boldsymbol{\gamma}_1^{(k)}$ with
a $\boldsymbol{\lambda}_1^{(k)}\sim \mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$, such that the atom sizes of the latter correspond to the interval lengths of the former.
Similarly, each $\mathrm{SSIP}^{(\alpha)}(0)$-evolution $\boldsymbol{\gamma}_2^{(k)}$ corresponds to a $\boldsymbol{\lambda}_2^{(k)}\sim \mathrm{SSSP}^{(\alpha)}(0)$.
Then $\boldsymbol{\lambda}^{(k)}= \boldsymbol{\lambda}_1^{(k)}\!+\! \boldsymbol{\lambda}_2^{(k)}$ is an $\mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$ by definition.
Let $\mathbf{f}^{(k)}$ be the \emph{middle} (marked) spindle in Definition~\ref{defn:ipe}, which is a ${\tt BESQ}(-2\alpha)$, and $\mathbf{x}^{(k)}$ be its type.
In this way, we obtain a sequence $(\boldsymbol{\lambda}^{(k)},\mathbf{f}^{(k)},\mathbf{x}^{(k)})_{k\ge 0}$ and thus $\boldsymbol{\pi}=(\pi(t),\,t\ge 0)\sim\mathrm{SSSP}^{(\alpha)}(\theta)$ as in Definition~\ref{defn:sssp}. It is coupled with $\boldsymbol{\beta}=(\beta(t),\,t\ge 0)\sim\mathrm{SSIP}^{(\alpha)}(\theta\!+\!\alpha, 0)$ as in Definition \ref{defn:ipe}, such that atom sizes and interval lengths are matched, and the renaissance levels $(T_k)_{k\ge 0}$ are exactly the same.
The next theorem extends \cite[Theorem 1.2]{FVAT} to $\theta\in [-\alpha,0)$.
\begin{theorem}
Let $\alpha\!\in\! (0,1)$ and $\theta\!\in\! [-\alpha,0)$. An $\mathrm{SSSP}^{(\alpha)}(\theta)$
is a Hunt process whose total mass evolves as a ${\tt BESQ}(2\theta)$, whose paths are continuous in total variation, and whose finite-dimensional marginals are continuous along sequences of initial states that converge in total variation.
\end{theorem}
\begin{proof} For any $\pi(0)=\sum_{i\ge 1}b_i\delta(x_i)\in\mathcal{M}^a$ consider
$(\pi(t),\,t\ge 0)\sim\mathrm{SSSP}^{(\alpha)}(\theta)$ and $(\beta(t),\,t\ge 0)\sim\mathrm{SSIP}^{(\alpha)}(\theta\!+\!\alpha, 0)$
coupled, as above. By \cite[Theorem 1.4]{ShiWinkel-1}, their (identical) total mass processes are ${\tt BESQ}(2\theta)$.
Moreover, by this coupling and \cite[Corollary 3.7]{ShiWinkel-1},
\begin{equation}\label{eq:sssp-T-infty}
T_{\infty}<\infty, ~\text{and}~\lim_{t\to T_\infty} \|\pi(t)\|=0 \quad \text{a.s.,}
\end{equation}
which implies the path-continuity at $T_{\infty}$. Since an $\mathrm{SSSP}^{(\alpha)}(\theta\!+\!\alpha)$ has continuous paths \cite[Theorem 1.2]{FVAT}, we conclude the path-continuity of $\mathrm{SSSP}^{(\alpha)}(\theta)$ by the construction in Definition~\ref{defn:sssp}, both in the Prokhorov sense and in the stronger total variation sense.
To prove the Hunt property, we adapt the proof of \cite[Theorem 1.4]{ShiWinkel-1} and apply Dynkin's criterion to a richer Markov process that records more information from the construction. Specifically, in the setting of Definition~\ref{defn:sssp}, let
\[
(\lambda(t) ,\mathbf{f}(t),\mathbf{x}(t) ):= \left(\lambda^{(k-1)}(t\!-\!T_{k-1}), \mathbf{f}^{(k-1)}(t\!-\!T_{k-1}), \mathbf{x}^{(k-1)}\right),\quad t\in [T_{k-1}, T_{k}), k\ge 1,
\]
and $(\lambda(t),\mathbf{f}(t),\mathbf{x}(t)) :=(0,0,0)$ for $t\ge T_{\infty}$. We shall refer to this process as a \emph{triple-valued
$\mathrm{SSSP}^{(\alpha)}(\theta)$} with values in $\widetilde{\mathcal{J}}:=(\mathcal{M}^a\times(0,\infty)\times[0,1])\cup\{(0,0,0)\}$.
This process induces the $\mathcal{M}^a$-valued $\mathrm{SSSP}^{(\alpha)}(\theta)$ as $\pi(t)=\lambda(t)+\mathbf{f}(t)\delta(\mathbf{x}(t))$.
Since each $(\boldsymbol{\lambda}^{(k)},\mathbf{f}^{(k)},\mathbf{x}^{(k)})$ is Hunt and is built conditionally on the previous ones according to a probability kernel, the process $(\lambda(t),\mathbf{f}(t), \mathbf{x}(t))$, $t\ge 0$, is a Borel right Markov process by
\cite[Th\'eor\`eme II 3.18]{Bec07}.
To use Dynkin's criterion to deduce that $(\pi(t),t\ge 0)$ is Borel right Markovian, and hence Hunt by path-continuity, we consider any
$(\lambda_1(0),\mathbf{f}_1(0),\mathbf{x}_1(0)), (\lambda_2(0),\mathbf{f}_2(0),\mathbf{x}_2(0))\in\widetilde{\mathcal{J}}$ with
$\lambda_1(0)+\mathbf{f}_1(0)\delta(\mathbf{x}_1(0))=\lambda_2(0)+\mathbf{f}_2(0)\delta(\mathbf{x}_2(0))$. It suffices to couple
triple-valued $\mathrm{SSSP}^{(\alpha)}(\theta)$ from these two initial states whose induced $\mathcal{M}^a$-valued $\mathrm{SSSP}^{(\alpha)}(\theta)$ coincide.
First note that (unless they are equal) the initial states are such that, for $t=0$,
\begin{equation}\label{eq:mvcoupling}\lambda_1(t)=\mu(t)+\mathbf{f}_2(t)\delta(\mathbf{x}_2(t))\quad\mbox{and}\quad
\lambda_2(t)=\mu(t)+\mathbf{f}_1(t)\delta(\mathbf{x}_1(t))
\end{equation}
for some $\mu(t)\in\mathcal{M}^a$. We follow similar arguments as in the proof of \cite[Lemma 3.3]{ShiWinkel-1}, via a quintuple-valued
process $(\mu(t),\mathbf{f}_1(t),\mathbf{x}_1(t),\mathbf{f}_2(t),\mathbf{x}_2(t))$, $0\le t<S_N$, that captures two marked types. Let $S_0:=0$. For $j\ge 0$, suppose we have constructed the process on $[0,S_j]$.
\begin{itemize}
\item Conditionally on the history, consider an ${\rm SSSP}^{(\alpha)}(\theta\!+\!2\alpha)$-evolution
$\boldsymbol{\mu}^{(j)}$ starting from $\mu(S_j)$, and $\mathbf{f}^{(j)}_i\sim{\tt BESQ}_{\mathbf{f}_i(S_j)}(-2\alpha)$, $i=1,2$, independent of each other. Let
$\Delta_j:=\min\{\zeta(\mathbf{f}_1^{(j)}),\zeta(\mathbf{f}_2^{(j)})\}$ and $S_{j+1}:=S_j+\Delta_j$. For $t\in[S_j,S_{j+1})$, define
\[
\left(\mu(t),\mathbf{f}_1(t),\mathbf{x}_1(t),\mathbf{f}_2(t),\mathbf{x}_2(t)\right)
:=\Big(\!\mu^{(j)}(t\!-\!S_j),\mathbf{f}_1^{(j)}(t\!-\!S_j),\mathbf{x}_1(S_j),\mathbf{f}_2^{(j)}(t\!-\!S_j),\mathbf{x}_2(S_j)\!\Big).
\]
\item Say $\Delta_j=\zeta(\mathbf{f}_1^{(j)})$. If $\mathbf{f}_{2}^{(j)}(\Delta_j)$ exceeds the size of the largest atom in $\mu^{(j)}(\Delta_j)$, let $N=j+1$. The construction is complete.
Otherwise, let $(\mathbf{f}_2(S_{j+1}),\mathbf{x}_2(S_{j+1})):= (\mathbf{f}_2^{(j)}(\Delta_j),\mathbf{x}_2(S_j))$ and
decompose $\mu^{(j)}(\Delta_j)= \mu(S_{j+1})+\mathbf{f}_1(S_{j+1})\delta(\mathbf{x}_1(S_{j+1}))$ by identifying its largest atom, giving rise to the five components.
Similar operations apply when $\Delta_j=\zeta(\mathbf{f}_2^{(j)})$.
\end{itemize}
For $t\in[0,S_N)$, define $\lambda_i(t)$, $i=1,2$, by \eqref{eq:mvcoupling}. In general, we may have $N\in\mathbb{N}\cup\{\infty\}$. On the
event $\{N<\infty\}$, we further continue with the same triple-valued ${\rm SSSP}^{(\alpha)}(\theta)$ starting from the terminal value
$(\mu^{(N-1)}(\Delta_{N-1}),\mathbf{f}_i^{(N-1)}(\Delta_{N-1}),\mathbf{x}_i(S_{N-1}))$, where $i\in \{1,2\}$ is the index such that $\mathbf{f}_i^{(N-1)}(\Delta_{N-1})>0$.
By \cite[Corollary~5.11 and remark below]{FVAT} and the
strong Markov property of these processes applied at the stopping times $S_j$, we obtain two coupled triple-valued ${\rm SSSP}^{(\alpha)}(\theta)$, which induce the same $\mathcal{M}^a$-valued ${\rm SSSP}^{(\alpha)}(\theta)$, as required.
Indeed, the construction of these two processes is clearly complete on $\{N<\infty\}$.
On $\{N=\infty\}$, by \eqref{eq:sssp-T-infty} one has $S_\infty<\infty$ and the total mass tends to zero as $t\uparrow S_\infty$, and hence the construction is also finished.
For the continuity in the initial state, suppose that
$\pi_n(0)=\mathbf{f}_n^{(0)}(0)\delta(x_1)+\lambda_n^{(0)}(0)\rightarrow\pi(0)=\mathbf{f}^{(0)}(0)\delta(x_1)+\lambda^{(0)}(0)$ in total variation.
First note that a slight variation of the proof of \cite[Proposition 3.6]{FVAT} allows us to couple $\boldsymbol{\lambda}_n^{(0)}$ and
$\boldsymbol{\lambda}^{(0)}$ so that $\lambda_n^{(0)}(t_n)\rightarrow\lambda^{(0)}(t)$ in total variation a.s., for any fixed sequence $t_n\rightarrow t$.
Also coupling $\mathbf{f}_n^{(0)}$ and $\mathbf{f}^{(0)}$, we can apply this on $\{\zeta(\mathbf{f}^{(0)})>t\}$ to obtain $\pi_n(t)\rightarrow\pi(t)$ for any fixed $t$, and on
$\{\zeta(\mathbf{f}^{(0)})<t\}$ to obtain $\zeta(\mathbf{f}^{(0)}_n)\rightarrow\zeta(\mathbf{f}^{(0)})$ and $\pi_n(\zeta(\mathbf{f}^{(0)}_n))\rightarrow\pi(\zeta(\mathbf{f}^{(0)}))$ in total variation a.s. By induction, this establishes the convergence of one-dimensional marginals on
$\{T_\infty\!>\!t\}$, and trivially on $\{T_\infty\!<\!t\}=\{\pi(t)\!=\!0\}$. A further induction extends this to finite-dimensional marginals.
\end{proof}
For $\alpha\in (0,1)$, $\theta\in [-\alpha,0)$, let $(B_1,B_2,\ldots)\sim \mathtt{PD}^{(\alpha)}( \theta)$ be a Poisson--Dirichlet sequence in the Kingman simplex and $(X_i, i\ge 1)$ i.i.d.\ uniform random variables on $[0,1]$, further independent of $(B_1,B_2,\ldots)$. Define $\mathtt{PDRM}^{(\alpha)}( \theta)$ to be the distribution of the random probability measure $\overline{\pi}:= \sum_{i\ge 1} B_i \delta (X_i)$ on $\mathcal{M}^a_1:= \{\mu\in \mathcal{M}^a\colon \|\mu\|=1 \}$. If $\theta=-\alpha$, then $\overline{\pi}=\delta(X_1)$.
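For $\theta>-\alpha$, an approximate sample of $\overline{\pi}$ can be produced, for instance, via the usual stick-breaking representation of $\mathtt{PD}^{(\alpha)}(\theta)$, with sticks $W_i\sim\mathtt{Beta}(1-\alpha,\theta+i\alpha)$; whether the masses are subsequently ranked is immaterial for the law of the random measure. A minimal sketch, truncated after \texttt{n\_atoms} sticks:
\begin{verbatim}
import random

def sample_pdrm(alpha, theta, n_atoms=1000):
    # Truncated stick-breaking for PDRM(alpha, theta), theta > -alpha;
    # theta = -alpha degenerates to the single atom delta(X_1).
    if theta == -alpha:
        return [(1.0, random.random())]
    atoms, stick = [], 1.0
    for i in range(1, n_atoms + 1):
        w = random.betavariate(1 - alpha, theta + i * alpha)
        atoms.append((stick * w, random.random()))   # (mass, uniform type)
        stick *= 1.0 - w
    return atoms
\end{verbatim}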
\begin{proposition}\label{prop:ps-SSSP}
Let $(Z(t),t\ge 0)$ be a ${\tt BESQ} (2 \theta)$ killed at zero with $Z(0)>0$, independent of $\overline\pi\sim \mathtt{PDRM}^{(\alpha)}( \theta)$.
Let $(\pi(t), t\ge 0)$ be an $\mathrm{SSSP}^{(\alpha)}(\theta)$ starting from $Z(0) \overline\pi$.
Fix any $t\ge 0$; then $\pi(t)$ has the same distribution as $Z(t) \overline\pi$.
\end{proposition}
\begin{proof}
By the coupling between an $\mathrm{SSIP}^{(\alpha)}(\theta\!+\!\alpha,0)$-evolution and an $\mathrm{SSSP}^{(\alpha)}(\theta)$ mentioned above, the claim follows from Proposition~\ref{prop:ps-theta1theta2} and the definition of $\mathtt{PDRM}^{(\alpha)}( \theta)$.
\end{proof}
For $\theta\ge -\alpha$,
let $\boldsymbol{\pi}:= (\pi(t),\, t\ge 0)$ be an $\mathrm{SSSP}^{(\alpha)} (\theta)$ starting from $\mu\in \mathcal{M}^a_{1}$. Define an associated $\mathcal{M}^a_{1}$-valued process via the de-Poissonisation as in Definition~\ref{defn:dePoi}:
\[
\overline{\pi}(u):= \big\| \pi(\tau_{\boldsymbol{\pi}}(u)) \big\|^{-1} \pi(\tau_{\boldsymbol{\pi}}(u)),\qquad u\ge 0,
\]
where $\tau_{\boldsymbol{\pi}}(u):= \inf \big\{ t\ge 0\colon \int_0^t \|\pi(s)\|^{-1} d s>u \big\}$.
The process $(\overline{\pi}(u),u\ge 0)$ on $\mathcal{M}^a_{1}$
is called a \emph{Fleming--Viot $(\alpha,\theta)$-process} starting from $\mu$, denoted by $\mathrm{FV}^{(\alpha)} (\theta)$.
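Numerically, the time change amounts to inverting the additive functional $s\mapsto\int_0^s\|\pi(r)\|^{-1}dr$ along a discretised total mass path. The sketch below takes hypothetical grid inputs and returns, for each $u$, the index at which the de-Poissonised process is read off.
\begin{verbatim}
import numpy as np

def depoissonise(times, masses, u_grid):
    # times, masses: fine discretisation of t -> ||pi(t)||, masses > 0.
    # Returns, for each u in u_grid, an index j with tau(u) ~ times[j],
    # where tau(u) = inf{ t >= 0 : int_0^t ||pi(s)||^{-1} ds > u }.
    dt = np.diff(times, prepend=times[0])    # first increment is 0
    clock = np.cumsum(dt / masses)           # the de-Poissonised clock
    return np.searchsorted(clock, u_grid, side='right')

# The normalised process is then pi(times[j]) / masses[j], for j as above.
\end{verbatim}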
Using Proposition~\ref{prop:ps-SSSP}, we easily deduce the following statement by the same arguments as in the proof of \cite[Theorem~1.7]{FVAT}, extending that theorem to the range $\theta\in [-\alpha,0)$.
\begin{theorem}\label{thm:FV}
Let $\alpha\in(0,1)$ and $\theta\ge -\alpha$.
An $\mathrm{FV}^{(\alpha)} (\theta)$-evolution is a total-variation path-continuous Hunt process on
$(\mathcal{M}^a_{1},d_{\mathcal{M}})$ and has a stationary distribution
$\mathtt{PDRM}^{(\alpha)}( \theta)$.
\end{theorem}
\subsection{Fragmenting interval partitions}\label{sec:nested1}
We define a fragmentation operator for interval partitions, which is associated with the random interval partition $\mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$ defined in Section~\ref{sec:PDIP}. Fragmentation theory has been extensively studied in the literature; see e.g.\ \cite{Bertoin06}.
\begin{definition}[A fragmentation operator]
Let $\alpha\in (0,1)$ and $\theta_1,\theta_2\ge 0$.
We define a Markov transition kernel $\mathrm{Frag}:= \mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)$ on $\mathcal{I}_{H}$ as follows.
Let $(\gamma_i)_{i\in\mathbb{N}}$ be i.i.d.\@ with distribution
$\mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$.
For
$\beta = \{ (a_i,b_i), i\in \mathbb{N}\}\!\in\! \mathcal{I}_H$, with $(a_i,b_i)_{i\in \mathbb{N}}$ enumerated in decreasing order of length, we define $\mathrm{Frag}(\beta, \cdot)$ to be the law of the interval partition obtained from $\beta$ by splitting each $(a_i,b_i)$ according to the interval partition $\gamma_i$, i.e.\@
\[\{ (a_i + (b_i-a_i)l , a_i+ (b_i-a_i) r) \colon i\in \mathbb{N}, (a_i,b_i) \in \beta, (l,r)\in \gamma_i \}.
\]
\end{definition}
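Operationally, a sample from $\mathrm{Frag}(\beta,\,\cdot\,)$ is obtained by rescaling an independent $\mathtt{PDIP}^{(\alpha)}(\theta_1,\theta_2)$ sample into each interval of $\beta$; since the $\gamma_i$ are i.i.d., the order in which the intervals are enumerated does not affect the law. A minimal sketch, with the sampler \texttt{sample\_pdip} left unspecified:
\begin{verbatim}
def frag(beta, sample_pdip):
    # beta: list of disjoint intervals (a, b); sample_pdip() is assumed
    # to return an independent PDIP(alpha; theta1, theta2) partition of
    # [0, 1] as a list of (l, r) pairs.
    out = []
    for a, b in beta:
        out.extend((a + (b - a) * l, a + (b - a) * r)
                   for l, r in sample_pdip())
    return out
\end{verbatim}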
\begin{lemma}\label{lem:cf-pdip}
For $\alpha, \bar\alpha\in (0,1)$ and $\theta_1,\theta_2, \bar\theta_1, \bar\theta_2\ge 0$,
suppose that
\[
\theta_1+\theta_2 +\bar\alpha=\alpha .
\]
Let $\beta_c\sim \mathtt{PDIP}^{(\bar\alpha)} (\bar\theta_1, \bar\theta_2)$ and $\beta_f$ a random interval partition whose regular conditional distribution given $\beta_c$ is $\mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)$.
Then $\beta_f\sim \mathtt{PDIP}^{(\alpha)} (\theta_1\!+\!\bar\theta_1, \theta_2\!+\!\bar\theta_2)$.
\end{lemma}
A similar result for the particular case with parameters $\bar{\alpha}=\bar{\theta}_2=0$, $\theta_1=0$ and $\theta_2=\alpha$ is included in \cite[Theorem~8.3]{GnedPitm05}. Lemma~\ref{lem:cf-pdip} is also analogous to \cite[Theorem~5.23]{CSP} for $\mathtt{PD}(\alpha,\theta)$ on the Kingman simplex.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{nested_oCRP.pdf}
\caption{Clusters are divided by dashed lines. A new customer starts a new table in an existing cluster (solid arrow) or a new cluster (dashed arrow), with probability proportional to the indicated weights.
In the continuous-time model studied in Section~\ref{sec:nestedPCRP}, the weights correspond to the rates at which customers arrive.
}
\label{fig:nested_CRP}
\end{figure}
To prove Lemma~\ref{lem:cf-pdip}, we now consider a pair of
\emph{nested} ordered Chinese restaurant processes $(C_c(n), C_f(n))_{n\in \mathbb{N}}$.
The coarse one $C_c\sim \mathrm{oCRP}^{(\bar{\alpha})} (\bar{\theta}_1, \bar{\theta}_2)$ describes the arrangement of customers in a sequence of ordered \emph{clusters}.
We next obtain a composition of each cluster of customers by further seating them at ordered \emph{tables}, according to the $(\alpha, \theta_1,\theta_2)$-seating rule.
These compositions are concatenated according to the cluster order, forming the fine process $C_f$.
Then, as illustrated in Figure~\ref{fig:nested_CRP}, we can easily check that $C_f\sim \mathrm{oCRP}^{(\alpha)} (\theta_1\!+\!\bar{\theta}_1, \theta_2\!+\!\bar{\theta}_2)$, due to the identity $\theta_1\!+\!\theta_2\!+\!\bar\alpha = \alpha$.
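For concreteness, one step of the seating dynamics may be coded as below. The weights implement the $(\alpha,\theta_1,\theta_2)$-seating rule in its standard form, as indicated in Figure~\ref{fig:nested_CRP}: an occupied table of size $m$ attracts the new customer with weight $m-\alpha$, while a new table is created with weight $\theta_1$ at the far left, $\alpha$ in each gap between adjacent tables, and $\theta_2$ at the far right.
\begin{verbatim}
import random

def seat_new_customer(tables, alpha, theta1, theta2):
    # tables: table sizes in left-to-right order; modified in place.
    if not tables:
        tables.append(1)                 # first customer opens a table
        return tables
    k = len(tables)
    options = [('join', i) for i in range(k)] \
            + [('new', g) for g in range(k + 1)]
    weights = [m - alpha for m in tables] \
            + [theta1 if g == 0 else theta2 if g == k else alpha
               for g in range(k + 1)]
    kind, pos = random.choices(options, weights=weights)[0]
    if kind == 'join':
        tables[pos] += 1
    else:
        tables.insert(pos, 1)            # gap g: new table before index g
    return tables
\end{verbatim}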
Nested (unordered) Chinese restaurant processes have been widely applied in nonparametric Bayesian analysis of the problem of learning topic hierarchies from data, see e.g.\ \cite{Blei10}.
Lemma~\ref{lem:cf-pdip} follows immediately from the following convergence result.
\begin{lemma}[Convergence of nested oCRP]\label{lem:cv-oCRP}
Let $(C_c(n),C_f(n))_{n\in \mathbb{N}}$ be a pair of nested ordered Chinese restaurant processes constructed as above.
Then $(n^{-1} C_c(n), n^{-1} C_f(n))$ converges a.s.\@ to a limit $(\beta_c, \beta_f)$ for the metric $d_H$ as $n\to \infty$; furthermore, we have $\beta_c\sim \mathtt{PDIP}^{(\bar\alpha)}(\bar\theta_1, \bar\theta_2)$, $\beta_f\sim\mathtt{PDIP}^{(\alpha)} (\theta_1\!+\!\bar\theta_1, \theta_2\!+\!\bar\theta_2)$, and the regular conditional distribution of $\beta_f$ given $\beta_c$ is $\mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:crp-pdip}, we immediately deduce that $(n^{-1} C_c(n), n^{-1} C_f(n))$ converges a.s.\@ to a limit $(\beta_c, \beta_f)$. It remains to determine the joint distribution of the limit.
Consider a sequence of i.i.d.\@ $\gamma_i\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$, $i\ge 1$, and an independent $\beta'_c:= \{(a_i,b_i), i\in \mathbb{N}\} \sim \mathtt{PDIP}^{(\bar\alpha)}(\bar\theta_1, \bar\theta_2)$. Let $\beta'_f$ be obtained from $\beta'_c$ by splitting each $(a_i,b_i)$ according to $\gamma_i$; then the regular conditional distribution of $\beta'_f$ given $\beta'_c$ is $\mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)$.
We will show that $(\beta_c, \beta_f)\mbox{$ \ \stackrel{d}{=}$ } (\beta'_c, \beta'_f)$.
To this end, apply the paintbox construction described before Proposition~\ref{prop:ocrp-pdip} to the nested $\beta'_c$ and $\beta'_f$, by using the same sequence of i.i.d.\@ uniform random variables $(Z_j, j\in \mathbb{N})$ on $[0,1]$.
For each $n\in \mathbb{N}$, let $C^*_c(n)$ and $C^*_f(n)$ be the compositions of the set $[n]$ obtained as in \eqref{eqn:C*}, associated with $\beta'_c$ and $\beta'_f$ respectively.
Write $(C'_c(n),C'_f(n))$ for the integer compositions associated with $(C^*_c(n),C^*_f(n))$, then $(n^{-1}C'_c(n),n^{-1}C'_f(n))$ converges a.s.\ to $(\beta'_c,\beta'_f)$ by \cite[Theorem 11]{Gnedin97}.
Note that each $(a_i, b_i)\in \beta'_c$ corresponds to a cluster of customers in $C^*_c(n)$, which are further divided into ordered tables in $C^*_f(n)$. This procedure can be understood as a paintbox construction, independent of other clusters, by using $\gamma_i\sim \mathtt{PDIP}^{(\alpha)}(\theta_1, \theta_2)$ and i.i.d.\@ uniform random variables $\{(Z_j - a_i)/(b_i-a_i)\colon Z_j \in (a_i, b_i), j\in \mathbb{N}\}$ on $[0,1]$.
By Proposition~\ref{prop:ocrp-pdip}, it has the same effect as an $\mathrm{oCRP}^{(\alpha)}(\theta_1,\theta_2)$. For each $n\in \mathbb{N}$, as we readily have $C'_c(n)\mbox{$ \ \stackrel{d}{=}$ } C_c(n)$ by Proposition~\ref{prop:ocrp-pdip}, it follows that $(C'_c(n),C'_f(n))\mbox{$ \ \stackrel{d}{=}$ } (C_c(n),C_f(n))$. As a result, we deduce that the limits also have the same law, i.e.\ $(\beta_c, \beta_f)\mbox{$ \ \stackrel{d}{=}$ } (\beta'_c, \beta'_f)$.
\end{proof}
\subsection{Coarse-fine interval partition evolutions}\label{sec:nested2}
We consider the space
\[
\mathcal{I}^2_{\mathrm{nest}}:=
\big\{(\gamma_c,\gamma_f) \in \mathcal{I}_{H}\times \mathcal{I}_{H} \colon G(\gamma_c)\subseteq G(\gamma_f)
,\| \gamma_c\| = \|\gamma_f \|
\big\}.
\]
In other words, for each element $(\gamma_c,\gamma_f)$ in this space, the interval partition $\gamma_f$ is a refinement of $\gamma_c$ such that each interval $U\in \gamma_c$
is further split into intervals in $\gamma_f$, forming an interval partition $\gamma_f\big|_U$ of $[\inf U, \sup U]$. We also define the shifted interval partition $\gamma_{f}\big|^{\leftarrow}_{U}:= \{ (a,b)\colon (a+\inf U, b+\inf U) \in \gamma_f\big|_U \}$ of $[0,\mathrm{Leb}(U)]$ and note that $\gamma_{f}\big|^{\leftarrow}_U\in\mathcal{I}_H$.
We equip $\mathcal{I}^2_{\mathrm{nest}}$ with the product metric
\[d_{H}^2 ((\gamma_c,\gamma_f), (\gamma'_c,\gamma'_f))= d_{H}(\gamma_c,\gamma'_c)+d_{H}(\gamma_f,\gamma'_f).\]
\begin{lemma}\label{lem:nested-cv}
For each $n\ge 1$, let $(\beta_n, \gamma_n)\in \mathcal{I}^2_{\mathrm{nest}}$.
Suppose that $(\beta_n, \gamma_n)$ converges to $(\beta_{\infty},\gamma_{\infty})$ under the product metric $d^2_H$. Then $(\beta_{\infty}, \gamma_{\infty})\in \mathcal{I}^2_{\mathrm{nest}}$.
\end{lemma}
\begin{proof}
This requires us to prove $G(\beta_{\infty})\subseteq G(\gamma_{\infty})$; the equality of total masses passes to the limit immediately.
As $G(\gamma_{\infty})$ is closed, it is equivalent to show that, for any $x\in G(\beta_{\infty})$,
the distance $d(x, G(\gamma_{\infty}))$ from $x$ to the set $G(\gamma_{\infty})$ is zero.
For any $y_n\in G(\beta_n)\subseteq G(\gamma_{n})$, we have
$
d(x, G(\gamma_{\infty}))\le d(x, y_n)
+ d_H(\gamma_{n} ,\gamma_{\infty}).
$
It follows that
$
d(x, G(\gamma_{\infty})) \le \inf_{y_n\in G(\beta_n)} d(x, y_n) + d_H(\gamma_{n} , \gamma_{\infty})
\le d_H(\beta_{\infty} , \beta_{n}) + d_H(\gamma_{n}, \gamma_{\infty}).
$
As $n\to \infty$, the right-hand side converges to zero.
So we conclude that $d(x, G(\gamma_{\infty}))=0$ for every $x\in G(\beta_{\infty})$,
completing the proof.
\end{proof}
We shall now construct a coarse-fine interval partition evolution in the space $\mathcal{I}^2_{\mathrm{nest}}$.
To this end, let us first extend the scaffolding-and-spindles construction in Section~\ref{sec:pre} to the setting where each spindle is an interval-partition-valued excursion.
Denote by $\mathcal{E}_{\mathcal{I}}$ the space of continuous $\mathcal{I}_H$-valued excursions.
Given a point measure $W$ on $\mathbb{R}_+ \times \mathcal{E}_{\mathcal{I}}$ and a scaffolding function
$X \colon \mathbb{R}_+ \to \mathbb{R}$, we define the following objects, whenever they are well-defined.
The \emph{coarse skewer} of $(W,X)$ at level $y\in \mathbb{R}$ is the interval partition
\begin{equation*}
\ensuremath{\normalfont c\textsc{skewer}}(y,W, X):= \left\{ \big(M_{W,X}^y (t-), M_{W,X}^y (t) \big) \colon
t\ge 0, M_{W,X}^y (t-)< M_{W,X}^y (t)
\right\},
\end{equation*}
where
$
M_{W,X}^y (t):= \int_{[0,t]\times \mathcal{E}_{\mathcal{I}}} \big\|\gamma\big( y -X(s-) \big)\big\| W(ds,d\boldsymbol{\gamma})$, $t\ge 0.
$
Let $\ensuremath{\overline{\normalfont c\textsc{skewer}}}(W,X):= (\ensuremath{\normalfont c\textsc{skewer}}(y,W,X), y\ge 0)$.
The \emph{fine skewer} of $(W,X)$ at level $y\in \mathbb{R}$ is the interval partition
\begin{equation*}
\ensuremath{\normalfont f\textsc{skewer}}(y,W,X):=
\mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{points }(t, \boldsymbol{\gamma}_t) \text{ of } W\colon M_{W,X}^y (t-)< M_{W,X}^y (t)} \gamma_t\big( y -X(t-) \big).
\end{equation*}
Let $\ensuremath{\overline{\normalfont f\textsc{skewer}}}(W,X):= (\ensuremath{\normalfont f\textsc{skewer}}(y,W, X), y\ge 0)$.
Let $\theta_1, \theta_2 \ge 0$ and suppose that $\theta=\theta_1+\theta_2-\alpha\in [-\alpha,0)$; then we have an $\mathrm{SSIP}^{(\alpha)}(\theta_1, \theta_2)$-excursion measure $\Theta:= \Theta^{(\alpha)}(\theta_1, \theta_2)$, defined as in Section~\ref{sec:exc}.
Write $\bar\alpha:= -\theta \in (0,\alpha]$ and let $\mathbf{W}$ be a Poisson random measure on $\mathbb{R}_+ \times \mathcal{E}_{\mathcal{I}}$ with intensity $c_{\bar\alpha}\mathrm{Leb} \otimes \Theta$, where
$c_{\bar\alpha}:=2 \bar\alpha(1+\bar\alpha)/\Gamma(1-\bar\alpha)$.
We pair $\mathbf{W}$ with a (\emph{coarse}) scaffolding $(\xi^{(\bar\alpha)}_{\mathbf{W}}(t), t\ge 0)$ defined by
\begin{equation}\label{eq:scaffoldingW}
\xi^{(\bar\alpha)}_{\mathbf{W}}(t):= \lim_{z\downarrow 0} \left(
\int_{[0,t]\times \{\boldsymbol{\gamma}\in \mathcal{E}_{\mathcal{I}}\colon \zeta(\boldsymbol{\gamma})>z\}} \zeta(\boldsymbol{\gamma}) \mathbf{W}(ds,d\boldsymbol{\gamma}) - \frac{(1+\bar\alpha)t}{(2z)^{\bar\alpha}\Gamma(1-\bar\alpha)\Gamma(1+\bar\alpha)}
\right).
\end{equation}
This is a spectrally positive stable L\'evy process of index $1+\bar\alpha$.
Let $\boldsymbol{\beta}$ be an $\mathrm{SSIP}_{\!\dagger}^{(\alpha)} (\theta_1,\theta_2)$-evolution starting from $\gamma_0\in \mathcal{I}_{H}$ with first hitting time $\zeta(\boldsymbol{\beta})$ of $\emptyset$.
We denote by $\mathbf{Q}_{\gamma_0}^{(\alpha)}(\theta_1, \theta_2)$ the law of the following random point measure on $[0,\infty)\times \mathcal{E}_{\mathcal{I}}$:
\begin{equation}\label{eq:cladeW}
\delta(0,\boldsymbol{\beta}) + \mathbf{W}\big|_{(0, T_{-\zeta(\boldsymbol{\beta})}]\times \mathcal{E}_{\mathcal{I}}},
\quad\text{where}~ T_{-y}:= \inf\big\{t\ge 0 \colon \xi^{(\bar\alpha)}_{\mathbf{W}}(t)= -y\big\}.
\end{equation}
\begin{definition}[Coarse-fine SSIP-evolutions]\label{defn:cfIP-0}
$\!\!\!\!\!\!$ Let $\alpha\!\in\! (0,1)$, $\theta_1,\theta_2\!\ge\! 0$ with $\theta_1\!+\!\theta_2\!<\!\alpha$. Let $\bar\alpha:= \alpha -\theta_1-\theta_2 \in (0,\alpha]$.
For $(\gamma_c,\gamma_f) \in \mathcal{I}^2_{\mathrm{nest}}$, let $\mathbf{W}_U \sim \mathbf{Q}_{\gamma_{f}|^{\leftarrow}_{U}}^{(\alpha)}(\theta_1, \theta_2)$, $U\in \gamma_c$, be an independent family with scaffolding $\xi^{(\bar\alpha)}_{\mathbf{W}_{U}}$ as in \eqref{eq:scaffoldingW}.
Then the pair-valued process
\[
\Big( \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma_c}\ensuremath{\normalfont c\textsc{skewer}}\big(y,\mathbf{W}_{U}, \xi^{(\bar\alpha)}_{\mathbf{W}_{U}}\big),~
\mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma_c}\ensuremath{\normalfont f\textsc{skewer}}\big(y, \mathbf{W}_{U}, \xi^{(\bar\alpha)}_{\mathbf{W}_{U}}\big)\Big), \qquad y\ge 0,
\]
is called a \emph{coarse-fine $(\alpha,\theta_1,\theta_2,0)$-self-similar interval partition evolution}, starting from $(\gamma_c,\gamma_f)$, abbreviated as \emph{$\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} (0)$-evolution}.
\end{definition}
Roughly speaking, it is a random refinement of an $\mathrm{SSIP}^{(\bar\alpha)} (0)$-evolution according to $\mathrm{SSIP}^{(\alpha)} (\theta_1, \theta_2)$-excursions.
To add immigration to this model,
let $\mathbf{W}\sim\mathtt{PRM}(c_{\bar\alpha}\mathrm{Leb}\otimes \Theta)$ and consider its coarse scaffolding $\xi^{(\bar\alpha)}_{\mathbf{W}}$ as in \eqref{eq:scaffoldingW}.
For $\bar\theta> 0$, as in \eqref{eq:X-alphatheta}, define the process
\begin{equation}
\mathbf{X}_{\bar\theta}(t) := \xi^{(\bar\alpha)}_{\mathbf{W}}(t) + \left(1 - \bar\alpha/\bar\theta \right)\ell(t), \quad t\ge 0, \qquad \text{where} \quad \ell(t) := -\inf_{u\leq t}\xi^{(\bar\alpha)}_{\mathbf{W}}(u).
\end{equation}
For each $j\in \mathbb{N}$, set $T^{-j}_{\bar\theta}
:= \inf\{t\ge 0\colon \mathbf{X}_{\bar\theta}(t)= - j \}$ and define nested processes
\begin{align*}
\cev{\beta}_{c,j}(y) &:=
\ensuremath{\normalfont c\textsc{skewer}} \big(y, \mathbf{W}\big|_{[0,T_{\bar\theta}^{-j})}, j+\mathbf{X}_{\bar\theta}\big|_{[0,T_{\bar\theta}^{-j})}\big), \qquad y\in [0,j], \\
\cev{\beta}_{f,j}(y) &:=
\ensuremath{\normalfont f\textsc{skewer}} \big(y, \mathbf{W}\big|_{[0,T_{\bar\theta}^{-j})}, j+\mathbf{X}_{\bar\theta}\big|_{[0,T_{\bar\theta}^{-j})}\big), \qquad y\in [0,j].
\end{align*}
As in Section~\ref{sec:pre}, we find that
$
\big(\!\big(\cev{\beta}_{c,j}(y),\cev{\beta}_{f,j}(y)\big), y\in[0,j] \big) \stackrel{d}{=} \big(\!\big(\cev{\beta}_{c,k}(y),\cev{\beta}_{f,k}(y)\big), y\in[0,j] \big)
$
for all $k\ge j$.
Thus, by Kolmogorov’s extension theorem, there exists a process $\big(\cev{\boldsymbol{\beta}}_c,\cev{\boldsymbol{\beta}}_f\big)$ such that $\big(\!\big(\cev{\beta}_c(y),\cev{\beta}_f(y)\big), y\in[0,j]\big) \stackrel{d}{=} \big(\!\big(\cev{\beta}_{c,j}(y),\cev{\beta}_{f,j}(y)\big), y\in[0,j] \big)$ for every $j\in \mathbb{N}$.
\begin{definition}[Coarse-fine SSIP-evolutions with immigration]\label{defn:cfIP-theta}
Let $\bar\theta,\theta_1,\theta_2\!\ge\! 0$, $\alpha\!\in \!(0,1)$, $\bar\alpha= \alpha \!-\!\theta_1\!-\!\theta_2 \in (0,\alpha]$ and $(\gamma_c,\gamma_f) \in \mathcal{I}^2_{\mathrm{nest}}$. Let $(\cev{\boldsymbol{\beta}}_c,\cev{\boldsymbol{\beta}}_f)$ be defined as above and $(\vecc{\boldsymbol{\beta}}_c,\vecc{\boldsymbol{\beta}}_f)$ an independent $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} ( 0)$-evolution starting from $(\gamma_c,\gamma_f)$.
Then we call $(\cev{\beta}_c(y) \star\vecc{\beta}_c(y),\cev{\beta}_f(y) \star\vecc{\beta}_f(y))$, $y\ge 0$,
a \emph{coarse-fine $(\alpha,\theta_1,\theta_2)$-self-similar interval partition evolution with immigration rate $\bar\theta$}, starting from $(\gamma_c,\gamma_f)$, or a \emph{$\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} ( \bar\theta)$-evolution}.
\end{definition}
By construction, the coarse process of a \emph{$\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} (\bar\theta)$-evolution} is an $\mathrm{SSIP}^{(\bar\alpha)} (\bar\theta)$-evolution.
For the special case $\theta_1=\theta_2 =0$, the fine process coincides with the coarse one.
\begin{remark}
By combining Definition~\ref{defn:cfIP-theta} and Definition~\ref{defn:ipe}, one can further construct a coarse-fine SSIP-evolution with the coarse process being an $\mathrm{SSIP}^{(\bar\alpha)} (\bar{\theta}_1, \bar\theta_2)$-evolution.
\end{remark}
\subsection{Convergence of nested PCRPs}\label{sec:nestedPCRP}
For $\bar\alpha\in (0, 1)$ and $\bar\theta\ge 0$, let $(C_c(t), t\ge 0)$ be a Poissonised Chinese restaurant process $\mathrm{PCRP}^{(\bar\alpha)} (\bar\theta,\bar\alpha)$ as in Section~\ref{sec:PCRP}.
Recall that for each cluster of $C_c$, the mass evolves according to a Markov chain of law $\pi(-\bar\alpha)$ as in \eqref{eq:pi}.
Let $\alpha\in (\bar\alpha, 1)$ and $\theta_1,\theta_2\ge 0$, and suppose that
\[
\theta= \theta_1+\theta_2-\alpha = -\bar{\alpha}<0;
\]
then the total mass evolution of a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\theta_2)$ also has distribution $\pi(-\bar\alpha)$.
Therefore, we can fragment each cluster of $C_c$ into a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\theta_2)$ as follows.
In each \emph{cluster}, customers are further assigned to an ordered sequence of \emph{tables}:
whenever a customer joins this cluster, they choose an existing table or add a new table according to the $(\alpha, \theta_1,\theta_2)$-seating rule;
whenever the cluster size decreases by one, a customer is chosen uniformly at random to leave.
As a result, we embed a $\mathrm{PCRP}^{(\alpha)} (\theta_1,\theta_2)$ into each cluster of $C_c$, independently of the others.
The rates at which customers arrive are illustrated in Figure~\ref{fig:nested_CRP}.
For each time $t\ge 0$, by concatenating the ordered table-size configurations of the clusters, from left to right according to the order of the clusters, we obtain a composition $C_f(t)$ representing the numbers of customers at all tables. Then $C_f(t)$ is a refinement of $C_c(t)$.
One can easily check that $(C_f(t),t\ge 0)$ is a $\mathrm{PCRP}^{(\alpha)} (\theta_1+\bar{\theta},\theta_2+ \bar{\alpha})$. We refer to the
pair $((C_c(t),C_f(t)),\,t\ge 0)$ as a \emph{pair of nested PCRPs}.
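In the same sketchy style as in Section~\ref{sec:nested1}, the two kinds of fine-level transitions within a single cluster can be coded as follows, reusing \texttt{seat\_new\_customer} from there; the arrival and departure times themselves are assumed to be supplied by the coarse dynamics.
\begin{verbatim}
import random

def cluster_arrival(cluster, alpha, theta1, theta2):
    # A customer joins this cluster and is seated at a table by the
    # (alpha, theta1, theta2)-seating rule.
    return seat_new_customer(cluster, alpha, theta1, theta2)

def cluster_departure(cluster):
    # The cluster loses one customer, chosen uniformly among its
    # occupants; empty tables are removed, an empty cluster has died.
    j = random.choices(range(len(cluster)), weights=cluster)[0]
    cluster[j] -= 1
    if cluster[j] == 0:
        del cluster[j]
    return cluster                       # [] signals the cluster has died
\end{verbatim}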
\begin{theorem}[Convergence of nested PCRPs]\label{thm:cv-cfIP1}
For each $n\in \mathbb{N}$, let $(C^{(n)}_c,C^{(n)}_f)$ be a pair of nested $\mathrm{PCRP}$s as defined above, starting from $(\gamma^{(n)}_c, \gamma^{(n)}_{f})\in \mathcal{I}^2_{\mathrm{nest}}$. Suppose that $\frac{1}{n}(\gamma^{(n)}_c, \gamma^{(n)}_{f})$ converges to $(\gamma_c, \gamma_{f})\in \mathcal{I}^2_{\mathrm{nest}}$ under the product metric $d^2_H$.
Then the following convergence holds in distribution in the space of c\`adl\`ag functions with values in $\mathcal{I}_{H}\times \mathcal{I}_{H}$, endowed with the Skorokhod topology:
\[
\left( \frac{1}{n}\Big(C^{(n)}_c(2nt),C^{(n)}_f(2nt)\Big), t\ge 0 \right) \underset{n\to \infty}{\longrightarrow} \left(\Big(\beta_c(t),\beta_f(t)\Big),\, t\ge 0\right),
\]
where the limit $(\boldsymbol{\beta}_c,\boldsymbol{\beta}_f)= \big(\!\big(\beta_c(t),\beta_f(t)\big),\, t\ge 0\big)$
is a $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)}(\bar{\theta})$-evolution starting from $(\gamma_c,\gamma_f)$.
\end{theorem}
\begin{proof}
The arguments are very similar to those in the proof of Theorem~\ref{thm:crp-ip-0} and Proposition~\ref{prop:crp-ip-theta}, with an application of Theorem~\ref{thm:Theta}, which replaces the role of Proposition~\ref{prop:vague}.
Let us sketch the main steps:
\begin{itemize}
\item Let $\mathbf{W}^{(n)}$ be a Poisson random measure of rescaled excursions of $\mathrm{PCRP}^{(\alpha)} (\theta_1,\theta_2)$ with intensity $2\bar{\alpha}n^{1+\bar{\alpha}}\mathrm{P}^{(n)}$, where $\mathrm{P}^{(n)}$ is as in Theorem~\ref{thm:Theta}. Write $\xi^{(n)}$ for the associated scaffolding of $\mathbf{W}^{(n)}$ defined as in \eqref{eq:scaffolding-D} and $M^{(n)}$ for the total mass process of the coarse skewer.
Since by Theorem~\ref{thm:Theta} the intensity measure converges vaguely to the $\mathrm{SSIP}^{(\alpha)} (\theta_1,\theta_2)$-excursion measure $c_{\bar{\alpha}}\Theta$, in analogy with Proposition~\ref{prop:cv-prm}, the sequence
$(\mathbf{W}^{(n)}, \xi^{(n)},M^{(n)})$ can be constructed such that it converges a.s.\ to $(\mathbf{W}, \xi, M)$, where $\xi$ and $M$ are the scaffolding defined as in \eqref{eq:scaffoldingW} and the coarse skewer total mass of $\mathbf{W}\sim{\tt PRM}(c_{\bar{\alpha}}{\rm Leb}\otimes\Theta)$, respectively.
\item Methods similar to those in the proof of Theorem~\ref{thm:crp-ip-0} yield the convergence when $\bar{\theta}=0$. More precisely, using the sequence $\mathbf{W}^{(n)}$ obtained in the previous step, we give a scaffolding-and-spindles construction for each rescaled nested pair $(\frac{1}{n}C^{(n)}_c(2n\,\cdot\,), \frac{1}{n}C^{(n)}_f(2n\,\cdot\,))$, as in the description below Lemma~\ref{lem:cv-clade} and in Section~\ref{sec:PCRP}.
We first study the case when the initial state of the coarse process is a single interval as in Lemma~\ref{lem:cv-clade},
and then extend to any initial state by coupling the large clades and controlling the total mass of the remainder.
\item When $\bar\theta>0$, we proceed as in the proof of Proposition~\ref{prop:crp-ip-theta}: we prove that the modified scaffolding converges and then the skewer process also converges.
\end{itemize}
Summarising, we deduce the convergence of nested PCRPs to the coarse-fine skewer processes, as desired.
\end{proof}
With Theorem~\ref{thm:cv-cfIP1} at hand, we can now identify the fine process via Theorem~\ref{thm:crp-ip}.
\begin{proposition}[Nested SSIP-evolutions]\label{prop:nested}
Let $\alpha\in (0,1)$, $\theta_1,\theta_2,\bar{\theta}\ge 0$ and suppose that $\theta= \theta_1+\theta_2-\alpha<0$.
In a $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} ( \bar\theta)$-evolution, the coarse and fine processes
are $\mathrm{SSIP}^{(\bar\alpha)}(\bar\theta)$- and $\mathrm{SSIP}^{(\alpha)}(\theta_1\!+\! \bar{\theta} , \theta_2\!+\! \bar\alpha)$-evolutions respectively, where $\bar{\alpha} = -\theta$.
\end{proposition}
\begin{proof}
We may assume that this cfSSIP-evolution is the limit of a sequence of nested PCRPs.
Since the coarse processes form a sequence of $\mathrm{PCRP}^{(\bar\alpha)}(\bar\theta,\bar\alpha)$ that converges in its own right, Theorem~\ref{thm:crp-ip} shows that the limit is an $\mathrm{SSIP}^{(\bar\alpha)}(\bar{\theta})$-evolution.
Similarly, since the fine process is the limit of a sequence of $\mathrm{PCRP}^{(\alpha)}(\theta_1\!+\! \bar{\theta} , \theta_2\!+\! \bar\alpha)$, it is an $\mathrm{SSIP}^{(\alpha)}(\theta_1\!+\! \bar{\theta} , \theta_2\!+\! \bar\alpha)$-evolution.
\end{proof}
\begin{proposition}[Pseudo-stationarity]\label{prop:cf-ps-theta1theta2}
Let $\alpha\!\in\! (0,1)$, $\theta_1,\theta_2\!\ge\! 0$ with
$\bar\alpha:= \alpha\!-\!\theta_1\!-\!\theta_2$ $\in (0,\alpha]$, and $\bar\theta\ge 0$.
Let $Z\sim {\tt BESQ} (2 \bar\theta ) $ and $\bar\gamma_c\sim \mathtt{PDIP}^{(\bar\alpha)} (\bar\theta,\bar\alpha)$ be independent, and let $\bar\gamma_f$ have conditional distribution $\mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)(\bar\gamma_c,\,\cdot\,)$ given $\bar\gamma_c$.
Let $((\beta_c(t),\beta_f(t)),\, t\ge 0)$ be a $\mathrm{cfSSIP}^{(\alpha,\theta_1, \theta_2)}(\bar{\theta})$-evolution starting from $ (Z(0)\bar\gamma_c,Z(0)\bar\gamma_f)$.
Then $(\beta_c(t), \beta_f(t))\overset{d}{=}(Z(t)\bar\gamma_c,Z(t)\bar\gamma_f)$ for each $t\ge 0$.
\end{proposition}
\begin{proof}
We may assume this cfSSIP-evolution is the limit of a sequence of nested PCRPs $(C^{(n)}_c,C^{(n)}_f)$, with $(C^{(n)}_c,C^{(n)}_f)$ starting from nested compositions of $[n]$ with distribution as in Lemma~\ref{lem:cv-oCRP}.
By arguments similar to those in Lemma~\ref{prop:crp-ps}, we deduce that, given the total number of customers $m:=\|C^{(n)}_c(t)\|=\|C^{(n)}_f(t)\|$ at time $t\ge 0$, the conditional distribution of $(C^{(n)}_c(t), C^{(n)}_f(t))$ is given by nested $\mathtt{oCRP}_m^{(\bar\alpha)}(\bar\theta, \bar\alpha)$ and $\mathtt{oCRP}_m^{(\alpha)}(\theta_1+\bar{\theta}, \theta_2+\bar{\alpha})$ as described above Lemma~\ref{lem:cv-oCRP}.
The claim then follows from Lemma~\ref{lem:cv-oCRP} and Theorem~\ref{thm:cv-cfIP1}.
\end{proof}
\begin{proposition}[Markov property]\label{prop:nest-Markov}
A $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} ( \bar\theta)$-evolution
is a Markov process on $(\mathcal{I}_{\mathrm{nest}}^2, d_H^2)$ with continuous paths.
\end{proposition}
To prove Proposition~\ref{prop:nest-Markov}, we first give a property of the excursion measure $\Theta^{(\alpha)}(\theta_1,\theta_2)$. For any $\mathcal{I}_{H}$-valued process $\boldsymbol{\gamma}\!=\!(\gamma(y),y\!\ge\!0)$ and $a\!>\!0$, let $H^a(\boldsymbol{\gamma}):= \inf\{y\!\ge\! 0\colon \|\gamma(y)\|\!>\!a \}$.
\begin{lemma}\label{lem:Theta-Ha}
For $a>0$, let $\boldsymbol{\beta}=(\beta(y),\,y\ge 0)\sim\Theta^{(\alpha)}(\theta_1,\theta_2)(\,\cdot\,|\, H^a<\infty)$. Conditionally on $(\beta(r),\, r\le H^a(\boldsymbol{\beta}))$, the process $(\beta(H^a(\boldsymbol{\beta})+z),\, z\ge 0)$ is an $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolution starting from $\beta(H^a(\boldsymbol{\beta}))$.
\end{lemma}
\begin{proof}
For $k\in \mathbb{N}$, let $H^a_k:= 2^{-k}\lceil2^k H^a\rceil\wedge 2^k$. Then $H^a_k$ is a stopping time that a.s.\ only takes a finite number of possible values and eventually decreases to $H^a$.
By \eqref{eq:Theta:entrance}, the desired property is satisfied by each $H^a_k$. Then we deduce the result for $H^a$ by approximation, using the path-continuity and Hunt property of $\mathrm{SSIP}^{(\alpha)}(\theta_1,\theta_2)$-evolutions of Theorem~\ref{thm:hunt}.
\end{proof}
For $(\gamma_c, \gamma_f)\in \mathcal{I}^2_{\mathrm{nest}}$, let $(\mathbf{W}_U, U\in \gamma_c)$ be a family of independent clades, with each $\mathbf{W}_U \sim \mathbf{Q}^{(\alpha)}_{\gamma_{f}|^{\leftarrow}_{U}} (\theta_1,\theta_2)$. Let $\xi_{U}^{(\bar{\alpha})}$ be the scaffolding associated with $\mathbf{W}_U$ as in \eqref{eq:scaffoldingW} and write $\mathrm{len}(\mathbf{W}_U):= \inf \{s\ge 0\colon \xi_{U}^{(\bar{\alpha})} (s)=0\}$ for its length, which is a.s.\@ finite.
Then we define the concatenation of $(\mathbf{W}_U, U\in \gamma_c)$ by\vspace{-0.1cm}
\begin{equation*}\label{eq:concatenation-clade}
\mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma_c} \mathbf{W}_U
:= \sum_{U\in\gamma_c} \int \delta (g(U)\!+\!t, \boldsymbol{\beta}) \mathbf{W}_{U} (dt,d\boldsymbol{\beta}),
\, \text{where}~ g(U)=\!\! \sum_{V\in\gamma_c, \sup V \le \inf U}\!\! \mathrm{len} (\mathbf{W}_{V}).\vspace{-0.1cm}
\end{equation*}
Write $\mathbf{Q}^{(\alpha)}_{(\gamma_c,\gamma_{f})} (\theta_1,\theta_2)$ for the law of $ \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \gamma_c} \mathbf{W}_U$.
We next present a Markov-like property for such point measures of interval partition excursions, analogous to \cite[Proposition 6.6]{Paper1-1}.
\begin{lemma}\label{lem:nest-Markov-like}
For $(\gamma_c, \gamma_f)\!\in\! \mathcal{I}^2_{\mathrm{nest}}$, let $\mathbf{W}\sim \mathbf{Q}^{(\alpha)}_{(\gamma_c,\gamma_{f})} (\theta_1,\theta_2)$ and $\mathbf{X}\!=\! \xi_{\mathbf{W}}^{(\bar{\alpha})}$.
For $y\!\ge\! 0$, set \vspace{-0.1cm}
\[
\mathrm{cutoff}^{\ge y}_{\mathbf{W}}=\!\!\sum_{\text{points }(t,\boldsymbol{\gamma}_t) \text{ of } \mathbf{W}} \!\! \mathbf{1}\{ \mathbf{X}(t-)\ge y\}\delta(\sigma^y(t), \boldsymbol{\gamma}_t) + \mathbf{1}\{y\in (\mathbf{X}(t-), \mathbf{X}(t)) \}\delta(\sigma^y(t), \widehat{\boldsymbol{\gamma}}^y_t), \vspace{-0.1cm}
\]
where $\sigma^y(t)=\mathrm{Leb} \{ u\!\le\! t\colon\mathbf{X}(u)\!>\! y\}$ and
$\widehat{\boldsymbol{\gamma}}^y_t = (\gamma_t(y\!-\!\mathbf{X}(t-)\!+\!z), z\!\ge\! 0)$. Similarly define $\mathrm{cutoff}^{\le y}_{\mathbf{W}}$.
Given
$(\beta_c(y),\beta_f(y))=(\ensuremath{\normalfont c\textsc{skewer}}(y,\mathbf{W}, \mathbf{X}),\ensuremath{\normalfont f\textsc{skewer}}(y,\mathbf{W}, \mathbf{X}))$, $\mathrm{cutoff}^{\ge y}_{\mathbf{W}}$ is conditionally independent of $\mathrm{cutoff}^{\le y}_{\mathbf{W}}$ and has conditional distribution $\mathbf{Q}^{(\alpha)}_{(\beta_c(y),\beta_f(y))} (\theta_1,\theta_2)$. \pagebreak
\end{lemma}
\begin{proof}
Recall that the construction of the nested processes is a modification of the scaffolding-and-spindles construction of the coarse component, with the same scaffolding and the $\Lambda_{\mathtt{BESQ}}^{(-2\bar\alpha)}$-excursions being replaced by the interval-partition excursions under $\Theta$.
In view of this, we can follow the same arguments as in the proof of \cite[Proposition 6.6]{Paper1-1}, with an application of Lemma~\ref{lem:Theta-Ha}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:nest-Markov}]
The path-continuity follows directly from that of an SSIP-evolution.
As in \cite[Corollary 6.7]{Paper1-1}, Lemma~\ref{lem:nest-Markov-like} can be translated to the skewer process under $\mathbf{Q}^{(\alpha)}_{(\gamma_c,\gamma_{f})} (\theta_1,\theta_2)$, thus giving the Markov property for $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} (0)$-evolutions.
When the immigration rate is $\bar{\theta}>0$, we introduce an excursion measure $\Theta_{\mathrm{nest}}$ on the space of continuous $\mathcal{I}_{\mathrm{nest}}^2$-excursions, such that the coarse excursion follows $\Theta^{(\bar{\alpha})}(0,\bar{\alpha})$, and each of its ${\tt BESQ}(-2\bar{\alpha})$-excursions is refined according to a $\Theta^{(\alpha)}(\theta_1,\theta_2)$-excursion.
More precisely, for $y>0$, it has the following properties:
\begin{enumerate}
\item $\Theta_{\mathrm{nest}}(\zeta > y) = \Theta^{(\bar{\alpha})}(0,\bar{\alpha})(\zeta>y) = y^{-1}$.
\item If $(\boldsymbol{\beta}_c,\boldsymbol{\beta}_f)\sim\Theta_{\mathrm{nest}}(\,\cdot\,|\,\zeta > y)$, then $(\beta_c(y),\beta_f(y))
\mbox{$ \ \stackrel{d}{=}$ } \mathtt{Gamma} (1-\bar{\alpha}, 1/(2y)) (\bar\gamma_c,\bar\gamma_f)$, where $\bar\gamma_c\sim \mathtt{PDIP}^{(\bar{\alpha})}(0)$ and the conditional distribution of $\bar\gamma_f$ given $\bar\gamma_c$ is $\mathrm{Frag}^{(\alpha)}(\theta_1,\theta_2)$. Moreover, conditionally on $(\beta_c(y),\beta_f(y))$, the process
$((\beta_c(y+z),\beta_f(y+z)),\, z\ge 0)$ is a
$\mathrm{cfSSIP}^{(\alpha,\theta_1, \theta_2)}(0)$-evolution.
\end{enumerate}
Having obtained the pseudo-stationarity (Proposition~\ref{prop:cf-ps-theta1theta2}) and the Markov property of $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} (0)$-evolutions, the construction of $\Theta_{\mathrm{nest}}$ can be made by a similar approach as in Section~\ref{sec:exc}.
Using $\mathbf{F}\sim \mathtt{PRM} (\bar{\theta}\mathrm{Leb}\otimes \Theta_{\mathrm{nest}})$, by the construction in \cite[Section 3]{IPPAT}, the following process has the same law as a $\mathrm{cfSSIP}^{(\alpha,\theta_1, \theta_2)}(\bar{\theta})$-evolution starting from $(\emptyset,\emptyset)$, for $y\ge 0$,
\[
\beta_{c}(y)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{points }(s,\boldsymbol{\gamma}_c,\boldsymbol{\gamma_f})\text{ of }\mathbf{F}\colon s\in [0,y]\downarrow}\gamma_c(y-s), \quad
\beta_{f}(y)= \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{\text{points }(s,\boldsymbol{\gamma}_c,\boldsymbol{\gamma_f})\text{ of }\mathbf{F}\colon s\in [0,y]\downarrow}\gamma_f(y-s).
\]
The Markov property of $\mathrm{cfSSIP}^{(\alpha,\theta_1,\theta_2)} (\bar{\theta})$-evolutions is now a consequence of this Poissonian construction and the form of $\Theta_{\mathrm{nest}}$; see the proof of \cite[Lemma 3.10]{IPPAT} for details.
\end{proof}
\begin{theorem}\label{thm:nested-ssip}
For any $\theta\ge 0$ and pairwise nested $\gamma_\alpha\in \mathcal{I}_H$, $\alpha\in(0,1)$, there exists a nested family $(\boldsymbol{\beta}_{\alpha}, \alpha\in (0,1))$ of $\mathrm{SSIP}^{(\alpha)}(\theta)$-evolutions, in the following sense:
\begin{enumerate}
\item each $\boldsymbol{\beta}_{\alpha}$ is an $\mathrm{SSIP}^{(\alpha)}(\theta)$-evolution starting from $\gamma_\alpha$;
\item for any $0<\bar\alpha< \alpha<1$, $(\boldsymbol{\beta}_{\bar\alpha}, \boldsymbol{\beta}_{\alpha})$ takes values in $\mathcal{I}^2_{\mathrm{nest}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
For $0<\bar\alpha<\alpha<1$, let $(\boldsymbol{\beta}_c,\boldsymbol{\beta}_f)\!=\!((\beta_c(y),\beta_f(y)),y\!\ge\! 0)$ be a $\mathrm{cfSSIP}^{(\alpha,0,\alpha\!-\!\bar\alpha)} ( \theta)$-evolution starting from $(\gamma_{\bar\alpha},\gamma_\alpha) \in \mathcal{I}^2_{\mathrm{nest}}$.
Then by Proposition~\ref{prop:nested}, the coarse process $\boldsymbol{\beta}_c$
is an $\mathrm{SSIP}^{(\bar\alpha)}(\theta)$-evolution and the fine process $\boldsymbol{\beta}_f$
is an $\mathrm{SSIP}^{(\alpha)}(\theta)$-evolution.
This induces a kernel $\kappa_{\bar\alpha,\alpha}$ from the coarse process to the fine process. Arguing by approximation as in Theorem \ref{thm:cv-cfIP1}, we can prove that $\kappa_{\alpha_1,\alpha_2}\circ\kappa_{\alpha_2,\alpha_3}=\kappa_{\alpha_1,\alpha_3}$ for all $0<\alpha_1<\alpha_2<\alpha_3<1$. More generally, for any finitely many $0<\alpha_1 < \alpha_2 <\cdots < \alpha_n<1$, we can find nested $(\boldsymbol{\beta}_{\alpha_i}, 1\le i\le n)$ that are consistently related by these kernels.
We can thus construct the full family by using Kolmogorov’s extension theorem.
\end{proof}
Let $\mathcal{I}^2_{\mathrm{nest},1}:=
\{ (\gamma_c, \gamma_{f})\!\in\!\mathcal{I}^2_{\mathrm{nest}} \colon \|\gamma_c\|\!=\!\|\gamma_f\|\!=\!1\}$ be the space of nested partitions of $[0,1]$. \pagebreak[2]
\begin{theorem}
For any $\theta\ge 0$ and pairwise nested $\bar\gamma_\alpha\in \mathcal{I}_{H,1}$, $\alpha\in(0,1)$, there exists a family of processes $(\overline{\boldsymbol{\beta}}_{\alpha}, \alpha\in (0,1))$ on $\mathcal{I}_{H,1}$, such that
\begin{enumerate}
\item each $\overline{\boldsymbol{\beta}}_{\alpha}$ is an $\mathrm{IP}^{(\alpha)}(\theta)$-evolution starting from $\bar\gamma_\alpha$;
\item for any $0<\bar\alpha< \alpha<1$, $(\overline{\boldsymbol{\beta}}_{\bar\alpha}, \overline{\boldsymbol{\beta}}_{\alpha})$ takes values in $\mathcal{I}^2_{\mathrm{nest},1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Build a family of SSIP-evolutions $(\boldsymbol{\beta}_{\alpha}, \alpha\in (0,1))$ as in Theorem~\ref{thm:nested-ssip} on the same probability space.
In particular, they have the same total mass process and thus the same de-Poissonisation.
So the de-Poissonised family $(\overline{\boldsymbol{\beta}}_{\alpha}, \alpha\in (0,1))$ is still nested.
\end{proof}
\subsection{An application to alpha-gamma trees}\label{sec:trees}
For $n\ge 1$, let $\mathbb{T}_n$ be the space of all (non-planar) trees that have no degree-2 vertices, a root vertex of degree 1, and exactly $n$ further degree-1 vertices, \emph{leaves}, labelled by $[n] = \{1,\ldots, n\}$.
For $\alpha\in (0,1)$ and $\gamma\in [0,\alpha]$, we construct random trees $T_n$ by using the following \emph{$(\alpha,\gamma)$-growth rule} \cite{CFW}:
$T_1$ and $T_2$ are the unique elements in $\mathbb{T}_1$ and $\mathbb{T}_2$.
Given $T_k$ with $k\ge 2$, assign weight $1\!-\!\alpha$ to each of the $k$ edges adjacent to a leaf, weight $\gamma$ to each of the other edges, and weight $(d\!-\!2)\alpha-\gamma$ to each branch point with degree $d\ge 3$.
To create $T_{k+1}$ from $T_k$, choose an edge or a branch point proportional to the weight, and insert the leaf $k\!+\!1$ to the chosen edge or branch point.
This generalises R\'emy's algorithm \cite{Remy85} of the uniform tree (when $\alpha=\gamma= 1/2$) and Marchal's recursive construction \cite{Mar08} of $\rho$-stable trees with $\rho\in (1,2]$ (when $\alpha =1-1/\rho$ and $\gamma=1-\alpha$).
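Since the growth rule is purely combinatorial, it is easy to simulate directly. The following Python sketch (a minimal illustration with our own tree representation and function names, not code from \cite{CFW}) grows $T_n$ by the $(\alpha,\gamma)$-weights; inserting a leaf into an edge subdivides it by a new branch point.
\begin{verbatim}
import random

def grow_tree(n, alpha, gamma, seed=None):
    """Grow T_n by the (alpha,gamma)-growth rule (toy sketch).
    Vertex 0 is the root (degree 1), positive ints are labelled
    leaves, negative ints are branch points."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}             # T_1: root joined to leaf 1
    next_branch = -1
    for new_leaf in range(2, n + 1):
        leaves = set(range(1, new_leaf))
        choices, weights, seen = [], [], set()
        for u in adj:                  # edges: 1-alpha if leaf-adjacent, else gamma
            for v in adj[u]:
                if (v, u) in seen:
                    continue
                seen.add((u, v))
                w = 1 - alpha if (u in leaves or v in leaves) else gamma
                choices.append(("edge", u, v)); weights.append(w)
        for b in adj:                  # branch points: weight (d-2)alpha - gamma
            d = len(adj[b])
            if d >= 3:
                choices.append(("vertex", b, None))
                weights.append((d - 2) * alpha - gamma)
        kind, u, v = rng.choices(choices, weights=weights)[0]
        if kind == "vertex":           # attach the new leaf to the branch point
            adj[u].add(new_leaf); adj[new_leaf] = {u}
        else:                          # subdivide the chosen edge
            b, next_branch = next_branch, next_branch - 1
            adj[u].discard(v); adj[v].discard(u)
            adj[b] = {u, v, new_leaf}
            adj[u].add(b); adj[v].add(b); adj[new_leaf] = {b}
    return adj

tree = grow_tree(10, alpha=0.75, gamma=0.25, seed=1)
\end{verbatim}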
For each $T_n$, consider its spinal decomposition as discussed in the introduction, the spine being the path connecting the leaf $1$ and the root.
Let $C_c(n)$ be the sizes of bushes at the spinal branch points, ordered from left to right in decreasing order of their distances to the root. Then the $(\alpha,\gamma)$-growth rule implies that
$(C_c(n), n\in \mathbb{N})$ is an $\mathrm{oCRP}^{(\gamma)}(1\!-\!\alpha,\gamma)$.
Similarly to the \emph{semi-planar $(\alpha,\gamma)$-growth trees} in \cite{Soerensen}, we further equip each spinal branch point with a left-to-right ordering of its subtrees, such that the sizes of the subtrees in each bush follow the $(\alpha,0,\alpha\!-\!\gamma)$-seating rule.
By concatenating the sub-tree-configurations of all bushes according to the order of bushes, we obtain the composition $C_f(n)$ of sizes of subtrees.
Then $(C_f(n), n\in \mathbb{N})$ is an $\mathrm{oCRP}^{(\alpha)}(1\!-\!\alpha,\alpha)$ nested to $(C_c(n), n\in \mathbb{N})$, as in Figure~\ref{fig:nested_CRP}.
Let us introduce a continuous-time Markov chain $(\mathbf{T}(s), s\ge 0)$ on $\mathbb{T}=\bigcup_{n\ge 1} \mathbb{T}_n$, the space of labelled rooted trees without degree-2 vertices.
Given $\mathbf{T}(s)$, assign weights to its branch points and edges as in the $(\alpha,\gamma)$-growth model, such that for each branch point or edge, a new leaf arrives and is attached to this position at the rate given by its weight. Moreover, fix the root and the leaf $1$, and delete any other leaf at rate one, together with the edge attached to it; in this operation, if a branch point's degree is reduced to two, we also delete it and merge
the two edges attached to it.
For each $n\ge 1$, consider such a continuous-time up-down Markov chain $(\mathbf{T}^{(n)}(s), s\!\ge\! 0)$ starting from a random tree $T_n$ built by the $(\alpha,\gamma)$-growth rule.
At each time $s\ge 0$, with the spine being the path connecting the leaf $1$ and the root, we similarly obtain a nested pair $(C_c^{(n)}(s), C_f^{(n)}(s))$, representing the sizes of spinal bushes and subtrees respectively. Then it is clear that $\big(C_c^{(n)}(s), s\ge 0\big)$ is a $\mathrm{PCRP}^{(\gamma)}(1\!-\!\alpha)$ and that $\big( C_f^{(n)}(s), s\ge 0\big)$ is a $\mathrm{PCRP}^{(\alpha)}(1\!-\!\alpha,\alpha)$ nested within $C^{(n)}_c$, such that the size evolution of the subtrees in each bush gives a $\mathrm{PCRP}^{(\alpha)}(0, \alpha\!-\!\gamma)$.
\begin{proposition}
For each $n\ge 1$, let $\big((C_c^{(n)}(t), C_f^{(n)}(t)), t\ge 0\big)$ be a pair of nested PCRPs defined as above, associated with a tree-valued process $(\mathbf{T}^{(n)}(s), s\ge 0)$ starting from $T_n$. As $n\to \infty$, $\big(\frac{1}{n}\big(C_c^{(n)}(2nt), C_f^{(n)}(2nt)\big), t\ge 0\big)$ converges in distribution to a $\mathrm{cfSSIP}^{(\alpha,0, \alpha-\gamma)}(1\!-\!\alpha)$-evolution starting from $ (\gamma_c,\gamma_f)$, where $\gamma_c\sim \mathtt{PDIP}^{(\gamma)} (1\!-\!\alpha, \gamma)$ and $\gamma_f \sim \mathrm{Frag}^{(\alpha)}(0,\alpha\!-\!\gamma)(\gamma_c,\,\cdot\,)$.
\end{proposition}
\begin{proof}
The limiting initial distribution was characterised in Lemma~\ref{lem:cv-oCRP}, and the convergence of the rescaled process follows from Theorem~\ref{thm:cv-cfIP1}.
\end{proof}
\section*{Acknowledgements}
QS was partially supported by SNSF grant P2ZHP2\_171955.
\bibliographystyle{imsart-number}
\section{Introduction}
According to Quantum Chromodynamics, the quark-gluon plasma (QGP) phase
refers to matter in which quarks and gluons are believed to be deconfined; this
phase probably takes place at temperatures of the order of 150 to 170 MeV.
In large colliders around the world
(RHIC/BNL, ALICE/CERN, GSI, etc), physicists are trying to
find a QGP signature by looking at
non-central heavy ion collisions.
Possible experiments towards this search are Au-Au collisions at RHIC/BNL
and Pb-Pb collisions at SPS/CERN, where the
hadron abundances and particle ratios are used in order to determine the
temperature and baryonic chemical potential of the possibly present
hadronic matter-QGP phase transition.
In previous papers a statistical model under chemical equilibration was
used to calculate particle yields \cite{munzinger,munzinger2} and in these
works the densities of particles were obtained from free Fermi and Boson gas
approximations, where the interactions among the baryons and mesons were
neglected. More recently, relativistic nuclear models have been tested in the
high temperature regime produced in these heavy ion collisions.
In \cite{nosso1,nosso2} different versions of Walecka-type relativistic models
\cite{sw} were used to calculate the Au-Au collision particle yields at RHIC/BNL
and in \cite{nosso_qmc} the quark-meson-coupling model \cite{qmc,qmc2,qmc3} was used
to calculate the results of this reaction and also Pb-Pb collision particle ratios
at SPS/CERN.
In all cases 18 baryons, pions, kaons, $\rho$'s and $K^*$'s were incorporated
in the calculations and a fit based on the minimum value of the
quadratic deviation was implemented in
order to obtain the temperature and chemical potential for each model,
according to a prescription given in \cite{munzinger}.
For Au-Au collision (RHIC) these numbers lie in the range
$132 < T < 169$ MeV and $30.5 < \mu_B < 62.8$ MeV and for Pb-Pb
collision (SPS),
$99 < T < 156.1$ MeV and $167.5 < \mu_B < 411$ MeV.
On the other hand, the magnetic fields involved in heavy-ion collisions
\cite{kharzeev,kharzeev2,kharzeev3} can reach
intensities even higher than the ones considered in magnetars
\cite{magnetars,magnetars2}. As suggested in \cite{kharzeev,kharzeev2,kharzeev3} and
\cite{eduardo,eduardo2,eduardo3} it is interesting to investigate fields of the order
of $eB=5 -30 m_\pi^2$ (corresponding to $1.7 \times 10^{19} - 10^{20}$ Gauss)
and temperatures varying from $T=120-200$ MeV related to heavy ion collisions.
In fact, the densities related to the chemical potentials obtained within
the relativistic models framework, in all cases, are very low (of the
order of $10^{-3}$ fm$^{-3}$). At these densities the nuclear interactions are
indeed very small and this fact made us reconsider the possibility of free
Fermi and Boson gases, but now under the influence of strong magnetic
fields.
In a recent paper \cite{tuchin}, the author studies the
synchrotron radiation of gluons by fast quarks in strong magnetic
fields produced in heavy ion collisions and shows that a strong
polarization of quarks and leptons with respect to the direction of
the magnetic field is expected. The polarization of quarks seems
to be washed out during the fragmentation but this is not
the case of the leptons. The observation of lepton polarization asymmetry
could be a proof of the existence of the magnetic field,
which may last for $1-2$ fm/c. This slowly varying magnetic field could
leave its signature in the particle yields.
The purpose of the analysis we present in this paper is to check
if the inclusion of strong magnetic fields can improve the fitting
of experimental results. We start from the simplest possible
calculation, assuming that the magnetic field is homogeneous,
constant and time-independent. We are aware that it is not the case,
as shown in \cite{skokov,deng}, where the shape of
the magnetic field presents a special non-trivial pattern.
Moreover, from the calculations performed in these references, one
can see that after averaging over many events one is left with
just one of the components of the magnetic field. Nevertheless, the
event-by-event fluctuation of the position of charged particles can
induce another component of the magnetic field (perpendicular to the
remaining one in the average calculation) and also an
electric field, which is quite strong at low impact
parameters. While the magnetic field remains quite high in peripheral collisions,
the opposite happens with the electric field.
To make our first analysis as simple as possible, we shall restrict
ourselves to data at centralities of the order of 80$\%$, i.e., high values of the impact
parameter $b \simeq 11-13$ fm, where we are more comfortable to
disregard the electric field effects.
In the present paper we briefly revisit the formalism necessary for the
calculation of particle densities subject to magnetic fields and the expressions
used to implement a $\chi^2$ fit to the experimental results.
\section{Formalism}
We model matter as a free gas of baryons and mesons under the influence of a constant magnetic field.
We consider only normal and strange matter, i.e., the baryons and mesons constituted by $u$, $d$ and $s$ quarks:
the baryon octet (spin 1/2 baryons), the baryon decuplet (spin 3/2 baryons), the pseudoscalar meson nonet (spin 0 mesons) and
the vector meson nonet (spin 1 mesons),
which leaves us with a total of $54$ particles ($18$ baryons, $18$ antibaryons and $18$ mesons).
We utilize natural units ($\hbar=c=1$) and define $\epsilon_0=\mu_0=1$. From
the relation
$\alpha=\frac{e^2}{4\pi \epsilon_0 \hbar c}$ we obtain that the electron charge is $e=\sqrt{4\pi\alpha}$,
where $\alpha=\frac{1}{137}$ is the fine structure constant.
The natural units with the electron charge in that form are known as
Heaviside-Lorentz units \cite{jackson}.
In this work, the magnetic field is introduced through minimal coupling, so
the derivatives become
\begin{equation}
\partial_\mu \rightarrow D_\mu=\partial_\mu +i q A_\mu.
\end{equation}
We write the charge as $q=\epsilon_q|q|$, where $\epsilon_q=+(-)$ corresponds
to a particle with positive (negative) charge, and assume the gauge
\begin{equation}
A_\mu=\delta_{\mu 2} x_1 B \quad \rightarrow \quad A_0=0 \quad \text{and} \quad \vec{A}=(0,x_1 B,0),
\end{equation}
so,
\begin{equation}
\vec{\nabla} \cdot \vec{A}=0 \quad \text{and} \quad \vec{\nabla} \times \vec{A}=B\hat{e}_3,
\end{equation}
and the derivatives
\begin{equation}
D_\mu=\partial_\mu -i\epsilon_q|q|Bx_1\delta_{\mu 2}.
\end{equation}
We search for solutions of the fields $\psi$ in the form
\begin{equation}
\psi^{(\epsilon)}_\alpha=
\left\{
\begin{matrix}
C^{(\epsilon)}_\alpha e^{-i\epsilon Et+i \epsilon \vec{p}\cdot \vec{x}} & (q=0)\\
f^{(\epsilon)}_\alpha(x_1)e^{-i\epsilon Et+i \epsilon p_2 x_2+i\epsilon p_3 x_3} & (q\neq0)
\end{matrix}
\right.,
\end{equation}
where $\psi_\alpha$ are the components of the field $\psi$ and $\epsilon=+(-)$ corresponds to the states of positive (negative) energy.
For the spin 1/2 baryons (Dirac field) $\psi$ has $4$ components,
for the spin 3/2 baryons (Rarita-Schwinger field) $\psi_\mu$ has $16$ components,
for the spin 0 mesons (Klein-Gordon field) $\psi$ has just one component,
and for the spin 1 mesons (Proca field) $\psi_\mu$ has $4$ components.
Due to the use of statistical methods to deal with the system under
consideration, we do not need the complete expression for $\psi$, but
just the form of the energy $E$ for each one of the fields and
the degeneracy of the energy levels $\gamma$.
\subsection{Spin 1/2 Baryons}
The baryons with spin 1/2 are described by the Dirac Lagrangian density
\cite{melrose}
\begin{equation}
{\cal L}^D=\bar{\psi}(i\gamma^\mu D_\mu-m)\psi,
\end{equation}
which (after we apply the Euler-Lagrange equation) leads us to the equation of motion
\begin{equation}
(i\gamma^\mu D_\mu - m)\psi=0,
\end{equation}
where $\gamma^\mu$ are the Dirac matrices.
The solution of the equation of motion gives
\begin{equation}
E=\left\{
\begin{matrix}
&\sqrt{\vec{p}^2+m^2} & (q=0) \\
&\sqrt{p_3^2+m^2+2\nu|q|B} & (q\neq0)
\end{matrix}
\right.,
\end{equation}
where $\nu$ runs over the possible Landau levels and the degeneracy of the
energy states is given by:
\begin{equation}
\gamma=\left\{
\begin{matrix}
&2 &(q=0) \\
&2-\delta_{\nu0} &(q\neq0)
\end{matrix}
\right..
\end{equation}
\subsection{Spin 3/2 Baryons}
The baryons with spin 3/2 are described by the Rarita-Schwinger Lagrangian
density \cite{rs,weinberg}
\begin{equation}
{\cal L}^{RS}=-\frac{1}{2}\bar{\psi}_\mu(\epsilon^{\mu\nu\rho\sigma} \gamma_5 \gamma_\nu D_\rho+im\sigma^{\mu\sigma})\psi_\sigma,
\end{equation}
where $\gamma^5=i\gamma^0\gamma^1\gamma^2\gamma^3$ and $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$.
The equation of motion reads
\begin{equation}
(i\gamma^\mu D_\mu - m)\psi_\nu=0
\quad \text{with} \quad
\gamma^\mu\psi_\mu=0 \quad \text{and} \quad D^\mu\psi_\mu=0.
\end{equation}
The solution of the Rarita-Schwinger equation is not trivial and poses
non-causality problems. To obtain the degeneracy of the energy
states, we follow the prescription used in \cite{melrose}, which
is given in detail for the Rarita-Schwinger equation in \cite{ours}.
Observing the equation of motion one can see that each component of $\psi_\mu$ obeys a Dirac-type equation,
so the energy must have the form
\begin{equation}
E=\left\{
\begin{matrix}
&\sqrt{\vec{p}^2+m^2} & (q=0) \\
&\sqrt{p_3^2+m^2+2\nu|q|B} & (q\neq0)
\end{matrix}
\right..
\end{equation}
Besides that, $\psi_\mu$ has $4$ (vector) components, but two of the equations are
constraints, which means that only $2$ components of $\psi_\mu$ are really
independent.
So $\psi_\mu$ has $2$ polarizations, but (because of the Dirac equation solution) each polarization is doubly degenerate.
In the presence of a magnetic field
there is another constraint for the $\nu=0$ and $\nu=1$ energy levels,
which leads to the following degeneracy for the energy states
\begin{equation}
\gamma=\left\{
\begin{matrix}
&4 &(q=0) \\
&4-2\delta_{\nu 0} -\delta_{\nu 1} &(q\neq0)
\end{matrix}
\right..
\end{equation}
\subsection{Spin 0 Mesons}
The mesons with spin 0 are described by the Klein-Gordon Lagrangian density
\cite{greiner}
\begin{equation}
{\cal L}^{KG}=D^\mu \psi^\ast D_\mu \psi - m^2\psi^\ast\psi,
\end{equation}
whose equation of motion is given by
\begin{equation}
(D^\mu D_\mu+m^2)\psi=0,
\end{equation}
with the energy satisfying the relation:
\begin{equation}
E=\left\{
\begin{matrix}
&\sqrt{\vec{p}^2+m^2} & (q=0) \\
&\sqrt{p_3^2+m^2+(2\nu+1)|q|B} & (q\neq0)
\end{matrix}
\right..
\end{equation}
\subsection{Spin 1 Mesons}
The mesons with spin 1 are described by the Proca Lagrangian density
\cite{russo}
\begin{equation}
{\cal L}^{P}=\frac{1}{2}(D^\mu\psi^{\nu \ast}-D^\nu\psi^{\mu \ast})(D_\mu\psi_\nu-D_\nu\psi_\mu)-m^2\psi^{\nu \ast}\psi_\nu.
\end{equation}
The equation of motion is
\begin{equation}
(D^\mu D_\mu+m^2)\psi_\nu=0 \quad \text{with} \quad D_\mu\psi^\mu=0.
\end{equation}
Each component of $\psi_\mu$ obeys a Klein-Gordon-type equation, so that
the energy states are
\begin{equation}
E=\left\{
\begin{matrix}
&\sqrt{\vec{p}^2+m^2} & (q=0) \\
&\sqrt{p_3^2+m^2+(2\nu+1)|q|B} & (q\neq0)
\end{matrix}
\right.,
\end{equation}
Here $\psi_\mu$ has $4$ components, but one of the equations is a constraint equation,
which means that only $3$ components of $\psi_\mu$ are independent.
So each energy state has $3$ polarizations in the case with zero charge
(or without magnetic field).
If the charge is different from zero (and we have the presence of an
external magnetic field) there is an additional constraint for the
$\nu=0$ energy level,
which leads to the following degeneracy for the energy states
\begin{equation}
\gamma=\left\{
\begin{matrix}
&3 &(q=0) \\
&3-\delta_{\nu0} &(q\neq0)
\end{matrix}
\right..
\end{equation}
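For later numerical use, the Landau-level shifts and degeneracies of the four cases can be collected in a pair of helper functions. The following Python sketch summarises them (our own illustration; for the spin-0 case the degeneracy is $1$, which was left implicit above):
\begin{verbatim}
def landau_shift(spin, nu, qB):
    """2 nu |q|B for half-integer spin, (2 nu + 1)|q|B for integer
    spin, as in the energy formulas derived above."""
    return (2 * nu + (0 if spin % 1 else 1)) * qB

def degeneracy(spin, charged, nu=0):
    """Degeneracy gamma of the energy states (nu ignored if neutral)."""
    if spin == 0.5:
        return 2 if not charged else 2 - (nu == 0)
    if spin == 1.5:
        return 4 if not charged else 4 - 2 * (nu == 0) - (nu == 1)
    if spin == 0.0:
        return 1
    if spin == 1.0:
        return 3 if not charged else 3 - (nu == 0)
    raise ValueError(spin)
\end{verbatim}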
\subsection{Thermodynamics}
Using the Grand Canonical formalism we obtain that the particle densities
for the baryons are
\begin{equation}
\rho_b=\left\{
\begin{aligned}
\gamma_b \frac{1}{2\pi^2}\int^\infty_0f(E_b-\mu_b)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0f(E_b-\mu_b)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
for the antibaryons are
\begin{equation}
\rho_{ab}=\left\{
\begin{aligned}
\gamma_b \frac{1}{2\pi^2}\int^\infty_0f(E_b+\mu_b)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0f(E_b+\mu_b)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
and for the mesons are
\begin{equation}
\rho_m=\left\{
\begin{aligned}
\gamma_m \frac{1}{2\pi^2}\int^\infty_0b(E_m-\mu_m)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_m \frac{|q_m|B}{2\pi^2}\int^\infty_0b(E_m-\mu_m)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
with $f(x)=(e^{x/T}+1)^{-1}$ and $b(x)=(e^{x/T}-1)^{-1}$.
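To illustrate how these Landau-level sums are handled numerically, the following Python sketch computes the density of a single charged spin-1/2 species (a toy calculation with hypothetical parameter values; the cutoffs $\nu_{\max}$ and $p_{\max}$ must be increased until the result is stable):
\begin{verbatim}
import numpy as np

def rho_charged_fermion(m, mu, T, qB, nu_max=200,
                        p_max=3000.0, n_p=2000):
    """rho = sum_nu gamma_nu |q|B/(2 pi^2) int_0^inf f(E - mu) dp,
    with E = sqrt(p^2 + m^2 + 2 nu |q|B).  Units: MeV, qB in MeV^2."""
    p = np.linspace(0.0, p_max, n_p)
    rho = 0.0
    for nu in range(nu_max + 1):
        gam = 2 - (nu == 0)              # spin-1/2 degeneracy
        E = np.sqrt(p**2 + m**2 + 2.0 * nu * qB)
        f = 1.0 / (np.exp((E - mu) / T) + 1.0)
        rho += gam * qB / (2.0 * np.pi**2) * np.trapz(f, p)
    return rho                            # in MeV^3

# hypothetical example: protons, T = 150 MeV, mu = 60 MeV, eB = 5 m_pi^2
m_pi = 139.57
rho = rho_charged_fermion(938.27, 60.0, 150.0, 5 * m_pi**2)
print(rho / 197.327**3, "fm^-3")          # hbar*c = 197.327 MeV fm
\end{verbatim}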
The total baryonic particle density is
\begin{equation}
\rho_B=\sum_b(\rho_b-\rho_{ab}),
\end{equation}
and the total mesonic density is
\begin{equation}
\rho_M=\sum_m\rho_m.
\end{equation}
The energy density is given by the sum of the energy densities
of each particle, so
\begin{equation}
\epsilon=\sum_b(\epsilon_b+\epsilon_{ab})+\sum_m\epsilon_m,
\end{equation}
with
\begin{equation}
\epsilon_b=\left\{
\begin{aligned}
\gamma_b \frac{1}{2\pi^2}\int^\infty_0 E_b f(E_b-\mu_b)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0 E_b f(E_b-\mu_b)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
\begin{equation}
\epsilon_{ab}=\left\{
\begin{aligned}
\gamma_b \frac{1}{2\pi^2}\int^\infty_0 E_b f(E_b+\mu_b)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0 E_b f(E_b+\mu_b)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
\begin{equation}
\epsilon_m=\left\{
\begin{aligned}
\gamma_m \frac{1}{2\pi^2}\int^\infty_0 E_m b(E_m-\mu_m)p^2dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_m \frac{|q_m|B}{2\pi^2}\int^\infty_0 E_m b(E_m-\mu_m)dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
in the same way the pressure is given by
\begin{equation}
P=\sum_b (P_b+P_{ab})+\sum_m P_m,
\end{equation}
with
\begin{equation}
P_b=\left\{
\begin{aligned}
\gamma_b \frac{1}{6\pi^2}\int^\infty_0 \frac{1}{E_b}f(E_b-\mu_b)p^4dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0 \frac{1}{E_b} f(E_b-\mu_b)p^2dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
\begin{equation}
P_{ab}=\left\{
\begin{aligned}
\gamma_b \frac{1}{6\pi^2}\int^\infty_0 \frac{1}{E_b}f(E_b+\mu_b)p^4dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_b \frac{|q_b|B}{2\pi^2}\int^\infty_0 \frac{1}{E_b} f(E_b+\mu_b)p^2dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
\begin{equation}
P_m=\left\{
\begin{aligned}
\gamma_m \frac{1}{6\pi^2}\int^\infty_0 \frac{1}{E_m}b(E_m-\mu_m)p^4dp \quad (q=0) \\
\sum_{\nu=0}^\infty \gamma_m \frac{|q_m|B}{2\pi^2}\int^\infty_0 \frac{1}{E_m} b(E_m-\mu_m)p^2dp \quad (q\neq0)
\end{aligned}
\right.,
\end{equation}
the entropy density can be found through
\begin{equation}
s=\frac{1}{T}\left(\epsilon+P-\sum_b \mu_b (\rho_b-\rho_{ab})-\sum_m\mu_m\rho_m\right).
\end{equation}
\subsection{Chemical Potential}
The hadron chemical potential is
\begin{equation}
\mu_h=B_h\;\mu_B+I_{3h}\;\mu_{I_3}+S_h\;\mu_S,
\end{equation}
where $B_h$, $I_{3h}$ and $S_h$, are respectively the baryonic number, the third isospin component and the strangeness of the particle $h$.
The baryonic chemical potential $\mu_B$ is a free parameter of the system (the other is the temperature $T$).
The isospin chemical potential $\mu_{I_3}$ and the strangeness chemical potential $\mu_S$ are determined through their respective conservation laws.
We impose the local conservation of the baryonic number, isospin and strangeness. This imposition leads to the following equations
\begin{equation}
\sum_h B_h\;\rho_h=\frac{N_B}{V},
\quad
\sum_h I_{3h}\;\rho_h=\frac{I_3}{V},
\quad
\sum_h S_{h}\;\rho_h=\frac{S}{V},
\end{equation}
where $N_B$ is the total baryonic number, $I_3$ is the total isospin,
$S$ is the total strangeness of the system and $V$ is
the volume occupied by the system.
The charge conservation is automatically achieved through the other three conservation laws.
The baryonic number of an Au atom is $N_B=(N+Z)=79+118=197$, the isospin is $I_3=(Z-N)/2=-19.5$
and for the deuteron ($d$) we have that $N_B=1+1=2$ and $I_3=0$.
Hence, assuming that the total strangeness of the system is zero, we obtain the following list of conserved quantities (a numerical sketch of how the corresponding constraints can be solved is given after the list):
\begin{itemize}
\item{Au$+$Au Collision, $N_B=394$, $I_3=-39$, $S=0$.}
\item{$d+$Au Collision, $N_B=199$, $I_3=-19.5$, $S=0$.}
\end{itemize}
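For given $T$ and $\mu_B$ the two remaining chemical potentials follow from these constraints by root finding; since $V$ is not known beforehand, it is convenient to impose the ratio $I_3/N_B$ together with strangeness neutrality. A minimal Python sketch (Boltzmann limit at $B=0$ with a severely truncated particle list, purely illustrative, not the full 54-species calculation):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve
from scipy.special import kn

# (mass MeV, degeneracy, B, I3, S) -- toy subset of the 54 species
species = [
    (938.3, 2,  1,  0.5, 0), (939.6, 2,  1, -0.5, 0),   # p, n
    (938.3, 2, -1, -0.5, 0), (939.6, 2, -1,  0.5, 0),   # pbar, nbar
    (1115.7, 2, 1, 0, -1), (1115.7, 2, -1, 0, 1),       # Lambda(bar)
    (139.6, 1, 0,  1, 0), (139.6, 1, 0, -1, 0),         # pi+-
    (493.7, 1, 0,  0.5, 1), (493.7, 1, 0, -0.5, -1),    # K+-
]

def rho(m, g, mu, T):   # Boltzmann: g m^2 T K_2(m/T) e^{mu/T}/(2 pi^2)
    return g * m**2 * T * kn(2, m / T) * np.exp(mu / T) / (2 * np.pi**2)

def constraints(x, T, muB, I3_per_B):
    muI3, muS = x
    dens = [rho(m, g, Bh*muB + I3h*muI3 + Sh*muS, T)
            for (m, g, Bh, I3h, Sh) in species]
    NB = sum(Bh * d for (m, g, Bh, I3h, Sh), d in zip(species, dens))
    I3 = sum(I3h * d for (m, g, Bh, I3h, Sh), d in zip(species, dens))
    S  = sum(Sh * d for (m, g, Bh, I3h, Sh), d in zip(species, dens))
    return [I3 - I3_per_B * NB, S]       # strangeness neutrality

# Au+Au: I3/N_B = -39/394
muI3, muS = fsolve(constraints, [0.0, 0.0], args=(150.0, 50.0, -39/394))
print(muI3, muS)
\end{verbatim}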
At this point it is important to emphasize some of the drawbacks
of our simple calculation. As shown in \cite{kharzeev_ref}, the
magnetic field should depend on the charges of the colliding nuclei
and the number of participants should vary for different centralities.
These constraints were not taken into account directly in our
calculations. All the information we use as input comes from the
experimental particle yields, and the magnetic field is modified until
the best fitting is encountered. The different numbers of
participants are reflected only in the resulting radii.
\section{Results and Discussions}
We have implemented a $\chi^2$ fit in
order to obtain the temperature and chemical potential.
The particle properties (spin, mass, baryonic number, isospin
and strangeness) were taken from the \textit{Particle Data Group} \cite{rpp-2010}.
In tables 1, 2, 3 and 4 we show our results
corresponding to the temperature and chemical potential that
give the minimum value for the quadratic deviation $\chi^2$:
\begin{equation}
\chi^2 = \sum_i \frac{({\cal R}_i^{exp} -{\cal R}_i^{theo})^2}
{\sigma_i^2},
\end{equation}
where ${\cal R}_i^{exp}$ and ${\cal R}_i^{theo}$ are the $i^{th}$ particle
ratio given experimentally and theoretically, and $\sigma_i$
represents the errors in the experimental data points.
To make clear the improvement in the data fitting by the addition of the magnetic field,
we calculate the relative percent deviation $(\Delta_\%)$ with respect
to the experimental values for $B=0$ and the best $B\neq0$ (the bold columns in the tables)
through the equation
\begin{equation}
\Delta_\%=\Bigg|\frac{{\cal R}^{theo}-{\cal R}^{exp}}{{\cal R}^{exp}}\Bigg|\cdot100\%,
\end{equation}
and show these values in parentheses in all the tables.
For the simulations our code deals with $5$ unknowns
($\mu_B$, $\mu_{I3}$, $\mu_S$, $T$, $V$)
and $3$ constraint equations. We run over the values of $\mu_B$ and
$T$ (the free parameters) in order to find the smallest $\chi^2$.
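A schematic version of this scan in Python (the data arrays are placeholders, not the measured values; \texttt{ratios\_model} stands for the full calculation described above and is here replaced by a toy stand-in in which only $\bar{p}/p=e^{-2\mu_B/T}$ is physically meaningful):
\begin{verbatim}
import numpy as np

R_exp = np.array([1.02, 0.95, 0.77, 0.16, 0.16, 0.10])  # placeholders
sigma = np.array([0.05, 0.05, 0.05, 0.02, 0.02, 0.01])

def ratios_model(T, muB, B):
    """Stand-in for the full calculation (solve the constraints,
    compute all densities with the Landau-level formulas, form the
    six ratios).  Only pbar/p = exp(-2 muB/T) is physical here."""
    return np.array([1.0, 0.9, np.exp(-2 * muB / T), 0.15, 0.15, 0.10])

def scan(Ts, muBs, B=0.0):
    best = (np.inf, None, None)
    for T in Ts:
        for muB in muBs:
            chi2 = np.sum((R_exp - ratios_model(T, muB, B))**2 / sigma**2)
            if chi2 < best[0]:
                best = (chi2, T, muB)
    return best                      # (chi2_min, T_best, muB_best)

print(scan(np.arange(100.0, 201.0, 1.0), np.arange(0.0, 101.0, 1.0)))
\end{verbatim}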
Our results are given next.
In tables 1, 2, 3 and 4, $B$ is the magnetic field, $T$ is the temperature, $\mu_B$ is the baryonic chemical potential,
$\chi^2$ is the quadratic deviation, $\mu_{I3}$ is the isospin chemical potential, $\mu_S$ is the strangeness chemical potential,
$R$ is the radius of the ``fire-ball'',
$\rho_B=\sum_b(\rho_b-\rho_{ab})$ is the usual baryonic density,
$\rho_\Delta=\rho_{\Delta^{++}}-\rho_{\bar\Delta^{++}}+\rho_{\Delta^+}-\rho_{\bar\Delta^+}
+\rho_{\Delta^0}-\rho_{\bar\Delta^0}+\rho_{\Delta^-}-\rho_{\bar\Delta^-}$ is the delta baryon density,
$\rho_M=\sum_m\rho_m$ is the meson density,
$\rho_\pi=\rho_{\pi^0}+\rho_{\pi^+}+\rho_{\pi^-}$ is the pion density,
$\epsilon$ is the energy density, $P$ is the pressure, $s$ is the entropy density and
$ndf$ is the number of degrees of freedom.
For $B=0$, $ndf=5$ ($7$ experimental values minus $2$ free parameters, $T$ and $\mu$),
for $B\neq0$, $ndf=4$ ($7$ experimental values minus $3$ free parameters, $T$, $\mu$ and $B$).
$\pi^-/\pi^+$, $K^-/K^+$, $\bar{p}/p$, $K^-/\pi^-$, $K^+/\pi^+$ and $p/\pi^+$ are the theoretical (first $7$ columns)
and experimental (last column) particle ratios \cite{star-2009}. The temperatures and
baryonic chemical potentials obtained from the statistical model in \cite{star-2009} are
also given in the last columns of all tables.
In figs. 1-$a/b$, 2-$a/b$, 3-$a/b$ and 4-$a/b$ we plot the
experimental and theoretical ratios for $B=0$ and the best $B\neq0$.
In figs. 1-$c$, 2-$c$, 3-$c$ and 4-$c$ we show the $\chi^2$ behavior for $B=0$ and for the best $B\neq0$.
In figs. 1-$d$, 2-$d$, 3-$d$ and 4-$d$ we show the $\chi^2$ behavior
for the different magnetic fields.
One can notice that the best fitting is generally obtained for magnetic fields around 6 $m_\pi^2$,
a little higher than what is expected for RHIC collisions (5 $m_\pi^2$).
Our results show that, even for the free Fermi and Boson gas models, a strong magnetic field plays an important role.
The inclusion of the magnetic field improves the data fit up to a field of the order of $B=10^{19}$ G.
For stronger magnetic fields, it becomes worse again.
This behavior is easily observed in tables 1 to 4 and in figs.1-d to 4-d.
It is worth pointing out how the ``fireball'' radius $R$ and the total
density $\rho$ vary with the magnetic field in a systematic way:
$R$ and $\rho$ practically do not change between $B=0$ and $B=10^{18}$ G,
but when the field increases even further, the density increases and the radius decreases.
This behavior is common to all collision cases studied.
This huge jump in the density explains why the ratios get worse for a
magnetic field of the order of $B=10^{20}$ G, for which
the densities are much higher than what is expected in a heavy ion collision.
Our model gives a good description for the particle/antiparticle
ratios, but fails to describe the relation between baryons and mesons.
This occurs because our model produces too many mesons (especially
pions) as shown explicitly in the particle densities. In all collision
types our model presents
a baryon density ($\rho_B$) with more than 30\% of $\Delta$ baryons and
a meson density ($\rho_M$) with more than 60\% of $\pi$ ($\rho_\pi$).
The relative percent deviations in the particle yields show
clearly that some results improve considerably when the magnetic
field is considered, while others remain unaltered or even get
slightly worse. However, our figures also show that the behavior
of the $\chi^2$ changes drastically with the addition of the
magnetic field and that the temperature and chemical potentials
calculated with the statistical model lie within
the $3-\sigma$ confidence ellipse obtained for the best $\chi^2$ in
some cases, but
they are always outside the confidence ellipses obtained with zero
magnetic field.
We would like to comment that when we first started these
calculations, we were not aware of references \cite{skokov,deng} and
we used data obtained for low centralities, i.e., low impact
parameters. In that case, the minimum $\chi^2$ was generally smaller
than the ones shown in this work and we believe this was so because of
the larger error bars accompanying data at low centralities.
Further improvements on the presented calculations are under
investigation, namely, the inclusion of electric fields at low
impact parameters and the variation of both electric and magnetic
fields with the number of participants in the collisions. Moreover,
we are working on the inclusion of the anomalous magnetic moments
and in the description of pion-pion interactions.
We next intend to repeat these calculations for the ALICE/LHC data for
the future Au$+$Au runs with all these improvements, so that our
results become more realistic.
\acknowledgments
This work was partially supported by CNPq, CAPES and FAPESC (Brazil).
We thank very fruitful discussions with Dr. Celso Camargo de Barros and
Dr. Sidney dos Santos Avancini.
\section{Introduction}
``The law that entropy always increases - the second law of thermodynamics -
holds, I think, the supreme position among the laws of nature. If someone
points out that your pet theory of the universe is in disagreement with
Maxwell's equations - then so much the worse for Maxwell's equations. If it
is found to be contradicted by experiments - well, these experimentalists do
bungle things sometimes. But if your theory is found to be against the
second law of thermodynamics I can give you no hope; there is nothing for it
but to collapse in deepest humiliation'' \cite{1}. Following this wise
advice of Eddington, I will not challenge the Second Law in this article.
The demonic devices I consider are by no means malignant to the Second Law and their
purpose is not to oppose it but rather to demonstrate possible subtleties of
the Law's realization in concrete situations.
This philosophy is much in accord with Maxwell's original intention who
introduced ``a very observant and neat-fingered being'', the Demon, as a means
to circumscribe the domain of validity of the Second Law and in particular
to indicate its statistical nature \cite{2,3,4}.
Later, however, it became traditional to consider demons as a threat to the
Second Law that has to be defeated. Modern exorcisms use information theory language and
it is generally accepted that the Landauer's principle \cite{5,6,7} finally
killed the demon.
But, as Maxwell's demon was considered to be killed several times in the
past and every time it rose from the dead, we have every reason to suspect
this last information theory exorcism \cite{3,8,9} and believe the old wisdom
that ``True demons cannot be killed at all'' \cite{10}.
The problem with all exorcisms can be formulated as a dilemma \cite{8}: either
the whole combined system the demon included forms a canonical thermal system
or it does not. In the first case the validity of the Second Law is assumed
from the beginning and therefore the demon is predefined to fail. The exorcism
is sound but not profound, although sometimes it can be enlightening and
delightful.
In the second case it is not evident at all why anti-entropic behaviour
cannot happen. For example, if Hamiltonian dynamics is abandoned, a pressure
demon can readily be constructed \cite{11}.
One needs a new physical postulate with independent justification
to ensure the validity of the Second Law for the combined system. Although
Landauer's principle can be proved in several physical situations \cite{12,13}
its generality is not sufficient to exorcise all conceivable extraordinary
demons. For example, the Landauer's principle, as well as ordinary
Gibbsian thermodynamics, fails in the extreme quantum limit then the
entanglement is essential and the total entropy is not the sum of partial
entropies of the subsystems \cite{14}.
According to Boltzmann, ``The second law of thermodynamics can be proved from
the mechanical theory if one assumes that the present state of the universe,
or at least that part which surrounds us, started to evolve from an improbable
state and is still in a relatively improbable state. This is a reasonable
assumption to make, since it enables us to explain the facts of experience,
and one should not expect to be able to deduce it from anything more
fundamental'' \cite{15}. But how improbable? Roger Penrose estimates
\cite{15,16} that the initial state of the universe was absurdly improbable:
one part in $10^{10^{123}}$. In the face of this number, I think, you would
rather welcome Maxwell's demon than want to exorcise it. Of course, recent
approaches to the Low Entropy Past problem \cite{17} do not involve demons,
but who can guarantee that the initial low entropy state was prepared without
them? In any case something more fundamental is clearly required to explain
the puzzle and avoid sheer impossibility of the world we observe around us.
It seems, therefore, that exorcism is not productive and it is better to
allow the presence of demons. However the notion of demons should be
understood in broader sense: not only creatures which challenge the Second
Law, but also the ones which use the Second Law in a clever and rather
unexpected manner.
The rest of the paper is organized as follows. In the first sections we
consider some classical Maxwell demons. A rather detailed analysis of them
is presented in the framework of the Langevin and Fokker-Planck equations.
I tried to make the exposition as self-contained as possible, assuming that
readers from the mirror matter community are as unfamiliar with some
subtleties of the Fokker-Planck equation, like the It\^{o}-Stratonovich dilemma,
as the author was months ago.
After demonstrating that the demons really work when the temperature
difference between their two thermal baths is enforced, we speculate about
a possibility that this temperature difference can be naturally created
if significant amount of mirror matter is added to one of the thermal
reservoirs.
Mirror matter is a hypothetical form of matter expected to exist if nature
is left-right symmetric in spite of {\bf P} and {\bf CP} violations in weak
interactions. An example from the molecular physics is considered to support
this possibility. We cite some review articles where more detailed and
traditional exposition of the mirror matter idea, as well as relevant
references, can be found.
The conclusion just finishes the paper with the cheerful remark that mirror
matter demons might be technologically very useful. Therefore it is
worthwhile to search mirror matter experimentally. It is remarked that one
experiment which searches invisible decays of orthopositronium in vacuum
is under way.
\section{Smoluchowski's trapdoor}
Maxwell's original demon operates a tiny door between two chambers filled
with gas allowing only fast molecules to pass in one direction, and only slow
ones to pass in opposite direction. Eventually a temperature difference is
generated between the chambers. The demon can be just some purely mechanical
device to avoid complications connected with the demon's intellect and
information processing. If this device is assumed to be subject to
conventional thermodynamics (the sound horn of the dilemma mentioned above)
then the demon can succeed only if it hides entropy somewhere else. A very
clever way to do this was indicated recently \cite{18}.
\begin{figure}[htb]
\begin{center}
\mbox{\epsfig{figure=sand.eps}}
\end{center}
\caption {Sand as Maxwell's Demon \cite{18}.}
\label{sand}
\end{figure}
In Fig.\ref{sand} Maxwell's demon left the door between two chambers
ajar but this shrewd creature had replaced the molecules in the chambers by
sand grains and set the chambers in vertical vibration by mounting them on a
shaker. Usually sand grains distribute equally to both sides. But if the
vibration frequency is lowered below a critical value something remarkable
happens: the symmetry is spontaneously broken and grains settle
preferentially on one side. The sand grains act as a Maxwell's demon! This is
possible because, in contrast to a molecular gas, collisions between grains
of sand are not elastic and grains are simply heated during the collisions,
absorbing the entropy \cite{18}.
One of the oldest mechanical Maxwell's demons is Smoluchowski's trapdoor
\cite{19}. Two chambers filled with gas are connected by an opening that is
covered by a spring-loaded trapdoor. Molecules from one side tend to push the
door open and those on the other side tend to push it closed. Naively one
expects that this asymmetry in the construction of the door turns it into
a one-way valve which creates a pressure difference between the chambers.
Smoluchowski argued that the thermal fluctuations of the door will spoil the
naive picture and prohibit it to act as a one-way valve, although he did not
provide detailed calculations to prove this fact conclusively. It is
difficult to trace the trapdoor demon analytically to see how it fails,
but the corresponding computer simulations can be done and the results show
that Smoluchowski was right \cite{19,20}.
The simulations show that the pressure difference between the chambers is
exactly what is expected at equilibrium after the trapdoor's finite volume
is taken into account. Therefore the trapdoor can not act as a pressure
demon.
But if the trapdoor is cooled to reduce its thermal fluctuations the demon
becomes successful and indeed creates a measurable pressure difference between
the chambers. In simulation program the effective cooling of the door is
maintained by periodically removing energy from the door then it is near the
central partition. As a result the door remains near the closed position
longer than would occur without the cooling and it indeed operates as a
one-way valve \cite{19,20}.
The entropy decrease due to this pressure demon is counterbalanced by the
entropy increase in the refrigerator that cools the door below the ambient gas
temperature. However the demonstration of this fact in computer simulations
involves some subtleties \cite{20}.
\section{Feynman's ratchet and pawl}
Close in spirit to the Smoluchowski's trapdoor is Feynman's ratchet and pawl
mechanism \cite{21}. The device (Fig.\ref{Feyn}) contains a box with some gas
and an axle with vanes in it. At the other end of the axle,
outside the box, there is toothed wheel and a pawl that pushes on the cogwheel
through a spring.
\begin{figure}[htb]
\begin{center}
\mbox{\epsfig{figure=Feyn.eps}}
\end{center}
\caption {Feynman's ratchet and pawl engine \cite{21}.}
\label{Feyn}
\end{figure}
If the paddle wheel is small enough, collisions of gas molecules on the vanes
will lead to Brownian fluctuations of the axle. But the ratchet on the other
end of the axle allows rotation only in one direction and blocks the
counter-rotation. Therefore one expects Feynman's ratchet and pawl engine
to rectify the thermal fluctuations in a similar manner as ratchets
allow windmills to extract useful work from random winds.
At a closer look, however, we realize that the miniature pawl would itself be
subject to thermal fluctuations that invalidate its rectification property.
In isothermal conditions, when the vanes have the same temperature as the
ratchet and pawl, the Second Law ensures that the rate of rotations in a wrong
direction due to the pawl's thermal bounces just compensates the rate of the
favorable rotations so that the wheel does a lot of jiggling but no net
turning. However if the temperatures are different the rate balance no more
holds and the ratchet and pawl engine really begins to rotate
\cite{21,22,23,24}.
Yet there are some subtleties and by way of proof it is better to go into some
details by modeling the ratchet and pawl machine by the following Langevin
equations \cite{25}:
\begin{eqnarray} &&
\lambda\frac{d\theta_1}{dt}=F(\theta_1)-k(\theta_1-\theta_2)+f_1(t),
\nonumber \\ && \lambda\frac{d\theta_2}{dt}=k(\theta_1-\theta_2)+f_2(t),
\label{Langevin} \end{eqnarray}
where $\theta_1$ and $\theta_2$ are the angular positions of the ratchet and
the windmill respectively, $\lambda$ is the friction coefficient assumed to
be the same for both baths,
$F(\theta_1)=-\frac{\partial U}{\partial \theta_1}$ is the torque the ratchet
experiences due to an asymmetric potential $U(\theta_1)$ of its interaction
with the pawl. It is assumed that the ratchet and the windmill are connected
by an elastic spring of rigidity $k$. Finally, $f_1(t)$ and $f_2(t)$ represent
Brownian random torques on the ratchet and on the windmill respectively due to
thermal fluctuations. It is assumed that these random torques constitute two
independent Gaussian white noises
\begin{equation}
<f_i(t)>=0,\;\;\;<f_i(t)\,f_j(t^\prime)>=2k_BT_i\lambda\,\delta_{ij}\,
\delta(t-t^\prime),
\label{wnoise} \end{equation}
$k_B$ being the Boltzmann constant. These Langevin equations correspond to
the overdamped regime (with neglected inertia).
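Since the noises in (\ref{Langevin}) are additive, the system can be integrated by a plain Euler--Maruyama scheme with no interpretation ambiguity. A minimal Python sketch (the asymmetric potential $U(\theta)=U_0(\sin\theta+\frac{1}{4}\sin 2\theta)$ and all parameter values are our own illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, k, kB, U0 = 1.0, 10.0, 1.0, 1.0
T1, T2 = 0.5, 2.0                    # unequal bath temperatures

def F(th):                           # F = -dU/dtheta
    return -U0 * (np.cos(th) + 0.5 * np.cos(2.0 * th))

dt, n_steps = 1e-3, 500_000
s1 = np.sqrt(2.0 * kB * T1 * dt / lam)   # additive-noise amplitudes
s2 = np.sqrt(2.0 * kB * T2 * dt / lam)
th1 = th2 = 0.0
for _ in range(n_steps):
    dW1 = s1 * rng.standard_normal()
    dW2 = s2 * rng.standard_normal()
    th1, th2 = (th1 + (F(th1) - k * (th1 - th2)) / lam * dt + dW1,
                th2 + k * (th1 - th2) / lam * dt + dW2)

Theta = 0.5 * (th1 + th2)            # net rotation variable
print("mean angular velocity:", Theta / (n_steps * dt))
\end{verbatim}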
It is more convenient to introduce another set of angular variables
\begin{equation}
\Theta=\frac{1}{2}(\theta_1+\theta_2),\;\;\; \theta=\frac{1}{2}
(\theta_1-\theta_2).
\label{Theta} \end{equation}
It is the variable $\Theta$ which describes the net rotation of the system
and is, therefore, the only object of our interest, while the variable
$\theta$ describes the relative motion and is irrelevant for our goals. In
terms of these variables, equations (\ref{Langevin}) read
\begin{eqnarray} &&
\lambda\,\dot{\Theta}=\frac{1}{2}F(\Theta+\theta)+
\frac{1}{2}\left [ f_1(t)+f_2(t) \right ],
\nonumber \\ && \lambda\,\dot{\theta}=\frac{1}{2}F(\Theta+\theta)-
2k\theta+\frac{1}{2}\left [ f_1(t)-f_2(t) \right ].
\label{Langevin1} \end{eqnarray}
The relative rotation $\theta$ arises due to the Brownian jiggling of the
ratchet and of the windmill and hence is expected to be very small.
Therefore one can expand
$$F(\Theta+\theta)\approx F(\Theta)+\theta\,F^{\,\prime}(\Theta), \;\;
F^{\,\prime}(\Theta)\equiv \frac{dF}{d\Theta}.$$
Besides, the dynamics of the variable $\theta$ is very rapid and at any time
scale, relevant for the evolution of the slow variable $\Theta$, the fast
variable $\theta$ will be able to relax to a quasi-stationary value given by
setting $\dot{\theta}=0$ in the second equation of (\ref{Langevin1}):
\begin{equation}
\theta\approx\frac{F(\Theta)+f_1(t)-f_2(t)}{4k-F^{\,\prime}(\Theta)}.
\label{stheta} \end{equation}
This allows to eliminate $\theta$ from (\ref{Langevin1}) and arrive at the
following equation for the relevant variable $\Theta$ \cite{25}:
\begin{equation}
\dot{\Theta}=H(\Theta)+g_1(\Theta)f_1(t)+g_2(\Theta)f_2(t),
\label{Ltheta} \end{equation}
where
$$H(\Theta)=\frac{F(\Theta)}{2\lambda}\left [ 1+\frac{F^{\,\prime}(\Theta)}
{4k}\right],\;\;g_1(\Theta)=\frac{1}{2\lambda}\left [ 1+\frac{F^{\,\prime}
(\Theta)}{4k}\right],\;\;g_2(\Theta)=\frac{1}{2\lambda}\left [ 1-\frac{F^{\,
\prime}(\Theta)}{4k}\right],$$
and terms of the second and higher order in $\frac{F^{\,\prime}(\Theta)}
{4k}$ were neglected.
The resulting Langevin equation (\ref{Ltheta}) is a stochastic differential
equation with multiplicative noise and is, therefore, subject to the notorious
It\^{o}-Stratonovich dilemma \cite{26,27}. The problem is that stochastic
integrals one needs to calculate various average values are in general
ill-defined without additional interpretation rules (the white noise is a
singular object after all, like the Dirac's $\delta$-function). The most
commonly used It\^{o} and Stratonovich interpretations of stochastic integrals
lead to different results when multiplicative noise is present.
From the physics side, this ill-definedness may be understood as follows
\cite{27}. According to (\ref{Ltheta}) each $\delta$-pulse in $f_1(t)$ or
$f_2(t)$ gives rise to a pulse in $\dot {\Theta}$ and hence an instantaneous
jump in $\Theta$. Then it is not clear which value of $\Theta$ should be used
in $g_1(\Theta)$ and $g_2(\Theta)$: the value just before the jump, after the
jump, or some average of these two. It\^{o} prescription assumes that the
value before the jump should be used, while Stratonovich prescription
advocates for the mean value between before and after the jump.
Our manipulations, which led to (\ref{Ltheta}) from (\ref{Langevin}), already
assume the Stratonovich interpretation because we had transformed variables
in (\ref{Langevin}) as if (\ref{Langevin}) were ordinary (not stochastic)
differential equations and this is only valid for the Stratonovich
interpretation \cite{27}.
In fact there is a subtle point here. The naive adiabatic elimination of the
fast variable we applied for $\theta$ (by setting $\dot{\theta}=0$) does not
necessarily imply the Stratonovich interpretation \cite{28}. Depending on
the fine details of the physical system and of the limiting process, one may
end with Stratonovich, It\^{o} or even with some other interpretation which is
neither It\^{o} nor Stratonovich \cite{28}. Nevertheless the Stratonovich
interpretation will be assumed in the following, as was done tacitly in
\cite{25}.
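The difference between the two interpretations is easy to exhibit numerically. For $\dot{X}=\sigma X\,\xi(t)$ with white noise $\xi$, the Stratonovich solution is $X_0e^{\sigma W(t)}$, so $\langle X(t)\rangle=X_0e^{\sigma^2t/2}$, while the It\^{o} solution gives $\langle X(t)\rangle=X_0$. The sketch below (our own illustration, unrelated to the ratchet parameters) integrates the same noise realisations with the Euler--Maruyama (It\^{o}) and Heun (Stratonovich) schemes:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma, dt, n_steps, n_paths = 1.0, 1e-3, 1000, 20000
g = lambda x: sigma * x

x_ito = np.ones(n_paths)
x_str = np.ones(n_paths)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x_ito = x_ito + g(x_ito) * dW                      # Euler-Maruyama (Ito)
    x_pred = x_str + g(x_str) * dW                     # Heun predictor
    x_str = x_str + 0.5 * (g(x_str) + g(x_pred)) * dW  # Stratonovich

t = n_steps * dt
print("Ito mean:", x_ito.mean(), "(expect 1)")
print("Strat mean:", x_str.mean(),
      "(expect", np.exp(0.5 * sigma**2 * t), ")")
\end{verbatim}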
Let $P(\Theta,t)$ be the probability density for the stochastic process
$\Theta(t)$. Then
\begin{equation}
P(\Theta,t)=\int\limits_{-\infty}^\infty G(\Theta,t;\,\Theta_0,t_0)\,
P(\Theta_0,t_0)\,d\Theta_0,\;\;t>t_0,
\label{Pdens} \end{equation}
where the Green's function (the conditional probability density over $\Theta$
at time $t$ under the condition that $\Theta=\Theta_0$ at $t=t_0$) satisfies
the initial value equation
\begin{equation}
G(\Theta,t_0;\,\Theta_0,t_0)=\delta(\Theta-\Theta_0).
\label{Icond} \end{equation}
Now consider
\begin{equation}
\frac{\partial P(\Theta,t)}{\partial t}=\lim_{\Delta t\to 0}
\frac{P(\Theta,t+\Delta t)-P(\Theta,t)}{\Delta t}.
\label{dPdt} \end{equation}
According to (\ref{Pdens})
\begin{equation}
P(\Theta,t+\Delta t)=\int\limits_{-\infty}^\infty G(\Theta,t+\Delta t;\,
\Theta_0,t)\,P(\Theta_0,t)\,d\Theta_0.
\label{DtPdens} \end{equation}
As the time interval $\Delta t$ is very short, the function $G(\Theta,t+
\Delta t;\,\Theta_0,t)$ can differ from its initial $\delta$-function value
only slightly by drifting and broadening a little which can be modeled by the
drift coefficient $D^{(1)}(\Theta)$ and the diffusion coefficient
$D^{(2)}(\Theta)$ respectively if we expand \cite{29}
\begin{equation}
G(\Theta,t+\Delta t;\,\Theta_0,t)\approx \delta(\Theta-\Theta_0)+
D^{(1)}(\Theta_0)\,\Delta t \,\delta^{(1)}(\Theta-\Theta_0)+
D^{(2)}(\Theta_0)\,\Delta t \,\delta^{(2)}(\Theta-\Theta_0),
\label{dexpan} \end{equation}
where $\delta^{(n)}(\Theta)$ denotes the $n$-th order derivative of the
$\delta$-function with the basic property
$$\int\limits_{-\infty}^\infty f(\Theta)\,\delta^{(n)}(\Theta-\Theta_0)\,
d\Theta=\frac{(-1)^{\,n}}{n\,!}\,\left . \frac{d^{\,n}f(\Theta)}
{d\Theta^n}\right |_{\Theta=\Theta_0}. $$
Substituting (\ref{dexpan}) into (\ref{DtPdens}) we get
\begin{equation}
P(\Theta,t+\Delta t)\approx P(\Theta,t)+\Delta t\,\frac{\partial}
{\partial \Theta}\left [D^{(1)}(\Theta)\,P(\Theta,t)\right ]+\frac{\Delta t}
{2}\,\frac{\partial^2}{\partial \Theta^2}\left [D^{(2)}(\Theta)\,
P(\Theta,t)\right ]
\label{Pdt} \end{equation}
and therefore (\ref{dPdt}) implies the following Fokker-Planck equation
\begin{equation}
\frac{\partial P(\Theta,t)}{\partial t}=\frac{\partial}{\partial \Theta}
\left [D^{(1)}(\Theta)\,P(\Theta,t)\right ]+\frac{1}{2}\,\frac{\partial^2}
{\partial \Theta^2}\left [D^{(2)}(\Theta)\, P(\Theta,t)\right ].
\label{FP} \end{equation}
The Fokker-Planck equation determines the evolution of the probability density
provided the drift and diffusion coefficient functions are known. These
functions are related to the first two moments the initially localized
density function develops in the short time interval $\Delta t$ because
(\ref{Pdt}) indicates that
\begin{eqnarray} &&
<\Delta \Theta>\,=\int\limits_{-\infty}^\infty (\Theta-\Theta_0)\,
P(\Theta,t+\Delta t)\,d\Theta=-D^{(1)}(\Theta_0)\,\Delta t, \nonumber \\ &&
<(\Delta \Theta)^2>\,=\int\limits_{-\infty}^\infty (\Theta-\Theta_0)^2\,
P(\Theta,t+\Delta t)\,d\Theta=D^{(2)}(\Theta_0)\,\Delta t,
\label{D12} \end{eqnarray}
if $P(\Theta,t)=\delta(\Theta-\Theta_0)$.
On the other hand, these moments can be calculated directly from the Langevin
equation. Integrating (\ref{Ltheta}), we get
$$\Delta \Theta=\int\limits_t^{t+\Delta t} H(\Theta)\,dt^\prime+
\int\limits_t^{t+\Delta t} g_1(\Theta)f_1(t^\prime)\,dt^\prime+
\int\limits_t^{t+\Delta t} g_2(\Theta)f_2(t^\prime)\,dt^\prime.$$
According to the Stratonovich prescription, in the last two stochastic
integrals $\Theta$ should be replaced with
$$\frac{\Theta(t)+\Theta(t+\Delta t)}{2}\approx\Theta(t)+\frac{1}{2}
\dot{\Theta}(t)\,\Delta t=\Theta_0+\frac{\Delta t}{2}\left [ H(\Theta_0)+
g_1(\Theta_0)f_1(t)+g_2(\Theta_0)f_2(t)\right ],$$
where $\Theta_0=\Theta(t)$. But
$$g(\Theta_0+\Delta\Theta)\approx g(\Theta_0)+\Delta \Theta \left .
\frac{\partial g}{\partial \Theta}\right |_{\Theta=\Theta_0}.$$
Therefore we obtain
$$\Delta \Theta\approx H(\Theta_0)\,\Delta t +
\frac{\Delta t}{2}\left [ H(\Theta_0)+g_1(\Theta_0)f_1(t)+g_2(\Theta_0)f_2(t)
\right ]\times $$
\begin{equation}
\left [\frac{\partial g_1}{\partial \Theta}(\Theta_0)
\int\limits_t^{t+\Delta t} f_1(t^\prime)\,dt^\prime+
\frac{\partial g_2}{\partial \Theta}(\Theta_0)\int\limits_t^{t+\Delta t}
f_2(t^\prime)\,dt^\prime \right ]+
g_1(\Theta_0)\int\limits_t^{t+\Delta t} f_1(t^\prime)\,dt^\prime+
g_2(\Theta_0)\int\limits_t^{t+\Delta t} f_2(t^\prime)\,dt^\prime.
\label{Dtheta} \end{equation}
Taking an ensemble average by using (\ref{wnoise}), we get
$$<\Delta \Theta>\,=\left [H(\Theta_0)+k_BT_1\lambda\,g_1(\Theta_0)\,
\frac{\partial g_1}{\partial \Theta}(\Theta_0)+k_BT_2\lambda\,
g_2(\Theta_0)\,\frac{\partial g_2}{\partial \Theta}(\Theta_0)\right ]\,
\Delta t.$$
Averaging the square of (\ref{Dtheta}) gives
$$<(\Delta \Theta)^2>\,=2k_B\lambda\left [T_1\,g_1^2(\Theta_0)+
T_2\,g_2^2(\Theta_0)\right ]\Delta t, $$
because
$$\int\limits_t^{t+\Delta t}dt^\prime
\int\limits_t^{t+\Delta t}dt^{\prime\prime}<f_i(t^\prime)f_j(t^{\prime
\prime})>\,=2k_BT_i\lambda\,\delta_{ij}\,\Delta t.$$
Comparing these expressions with (\ref{D12}), we see that
\begin{equation}
D^{(1)}(\Theta)=-H(\Theta)-k_BT_1\lambda \,g_1(\Theta)\,
\frac{\partial g_1}{\partial \Theta}-
k_BT_2\lambda \,g_2(\Theta)\,\frac{\partial g_2}{\partial \Theta},
\label{D1}\end{equation}
and
\begin{equation}
D^{(2)}(\Theta)=2k_B\lambda\left [T_1\,g_1^2(\Theta)+T_2\,g_2^2(\Theta)
\right ].
\label{D2}\end{equation}
If we substitute (\ref{D1}) and (\ref{D2}) into the Fokker-Planck equation
(\ref{FP}), it takes the form
\begin{equation}
\frac{\partial P(\Theta,t)}{\partial t}+\frac{\partial J(\Theta,t)}
{\partial \Theta}=0,
\label{FPJ} \end{equation}
where the probability current
\begin{equation}
J(\Theta,t)=H(\Theta)P(\Theta,t)-k_BT_1\lambda\,g_1(\Theta)\,
\frac{\partial}{\partial \Theta}\left [ g_1(\Theta)P(\Theta,t)\right ]-
k_BT_2\lambda\,g_2(\Theta)\,\frac{\partial}{\partial \Theta}\left [
g_2(\Theta)P(\Theta,t)\right ].
\label{J} \end{equation}
It can easily be checked that \cite{25}
\begin{equation}
J(\Theta,t)=H(\Theta)P(\Theta,t)-g(\Theta)\,
\frac{\partial}{\partial \Theta}\left [ g(\Theta)P(\Theta,t)\right ],
\label{Jg} \end{equation}
with
\begin{equation}
g(\Theta)=\sqrt{k_BT_1\lambda\,g_1^2(\Theta)+k_BT_2\lambda\,g_2^2
(\Theta)}.
\label{gtheta} \end{equation}
We are interested in a steady-state operation of the engine, $P(\Theta,t)=
P(\Theta)$. Then the Fokker-Planck equation (\ref{FPJ}) and the relation
(\ref{J}) show that the probability current depends neither on time nor
on angular position: $J(\Theta,t)=J_0$ is a constant. From (\ref{Jg}) we get
the following differential equation
\begin{equation}
\frac{dP(\Theta)}{d\Theta}+\frac{1}{g(\Theta)}\frac{dg(\Theta)}{d\Theta}\,
P(\Theta)-\frac{H(\Theta)}{g^2(\Theta)}\,P(\Theta)=-\frac{J_0}{g^2(\Theta)}.
\label{DEP} \end{equation}
The equation (\ref{DEP}) is a linear differential equation and can be solved
in a standard way. The solution is \cite{30}
\begin{equation}
P(\Theta)=\left [A_0-J_0\int\limits_0^\Theta \frac{e^{-B(\Theta^\prime)}}
{g(\Theta^\prime)}\,d\Theta^\prime \right ] \frac{e^{B(\Theta)}}{g(\Theta)},
\label{P} \end{equation}
where $A_0$ is some constant and
$$B(\Theta)=\int\limits_0^\Theta \frac{H(\Theta^\prime)}{g^2(\Theta^\prime)}
\,d\Theta^\prime .$$
The periodic boundary conditions $P(0)=P(2\pi),\, g(0)=g(2\pi)$ imply that
the constants $J_0$ and $A_0$ are interconnected:
$$A_0=J_0\,\frac{e^\beta}{e^\beta-1}\int\limits_0^{2\pi}\frac{e^{-B(\Theta)}}
{g(\Theta)}\,d\Theta,$$
where
\begin{equation}
\beta=B(2\pi)=\int\limits_0^{2\pi} \frac{H(\Theta)}{g^2(\Theta)}
\,d\Theta .
\label{beta} \end{equation}
After a little algebra, we find that
\begin{equation}
P(\Theta)=J_0\,\frac{e^{B(\Theta)}}{g(\Theta)}\left [ \; \int\limits_\Theta^
{2\pi} \frac{e^{-B(\Theta^\prime)}}{g(\Theta^\prime)}\,d\Theta^\prime+\frac{1}
{e^\beta-1}\int\limits_0^{2\pi}\frac{e^{-B(\Theta^\prime)}}{g(\Theta^\prime)}
\,d\Theta^\prime \right ].
\label{Ptheta} \end{equation}
The normalization condition
$$\int\limits_0^{2\pi}P(\Theta)\,d\Theta=1$$
then determines the probability current $J_0$ to be
\begin{equation}
J_0=\frac{e^\beta-1}{I},
\label{J0} \end{equation}
where
$$I=(e^\beta-1)\int\limits_0^{2\pi}\frac{e^{B(\Theta)}}{g(\Theta)}
\,d\Theta \int\limits_\Theta^{2\pi}\frac{e^{-B(\Theta^\prime)}}{g(\Theta^
\prime)}\,d\Theta^\prime+ \int\limits_0^{2\pi}\frac{e^{B(\Theta)}}{g(\Theta)}
\,d\Theta \int\limits_0^{2\pi}\frac{e^{-B(\Theta^\prime)}}{g(\Theta^\prime)}
\,d\Theta^\prime.$$
The angular velocity $\omega(\Theta)$ can be determined from the relation
$$J(\Theta)=P(\Theta)\,\omega(\Theta).$$
Its average (the net angular velocity of the ratchet and pawl engine) equals
\begin{equation}
\omega=\int\limits_0^{2\pi}\omega(\Theta)\,P(\Theta)\,d\Theta=
\int\limits_0^{2\pi} J(\Theta)\,d\Theta=2\pi\,J_0.
\label{omega} \end{equation}
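The net velocity is straightforward to evaluate numerically from (\ref{J0})
and (\ref{omega}). The following sketch uses purely illustrative, hypothetical
$2\pi$-periodic profiles for the drift $H(\Theta)$ and the noise amplitude
$g(\Theta)$ (the physical expressions follow from the model above; NumPy and
SciPy are assumed):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hypothetical 2*pi-periodic profiles, for illustration only
H = lambda th: 0.1*np.sin(th)                        # drift
g = lambda th: np.sqrt(1.0 + 0.3*np.cos(th + 1.0))   # noise amplitude

B = lambda th: quad(lambda s: H(s)/g(s)**2, 0.0, th)[0]
beta = B(2*np.pi)

outer = lambda th: np.exp(B(th))/g(th)    # e^{B}/g
inner = lambda th: np.exp(-B(th))/g(th)   # e^{-B}/g

I1 = quad(lambda th: outer(th)*quad(inner, th, 2*np.pi)[0],
          0.0, 2*np.pi)[0]
I2 = quad(outer, 0.0, 2*np.pi)[0]*quad(inner, 0.0, 2*np.pi)[0]
I  = (np.exp(beta) - 1.0)*I1 + I2

J0 = (np.exp(beta) - 1.0)/I
print("beta =", beta, "  omega = 2*pi*J0 =", 2*np.pi*J0)
\end{verbatim}
Nested adaptive quadrature is used for the double integral in $I$; for a
sketch this is adequate, though precomputing $B(\Theta)$ on a grid would be
faster.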
As we see from (\ref{omega}) and (\ref{J0}), there is no net angular velocity
if $\beta=0$. From (\ref{beta}) we find, to leading order,
\begin{equation}
\beta\approx 2\int\limits_0^{2\pi}\frac{F(\Theta)}{k_BT_1}\left [1+
\frac{F^\prime(\Theta)}{4k}\right ]\,\frac{1}{1+\frac{T_2}{T_1}+\left (
1-\frac{T_2}{T_1}\right )\frac{F^\prime(\Theta)}{2k}}\,d\Theta.
\label{betaT} \end{equation}
Note that
$$\int\limits_0^{2\pi}F(\Theta)\,d\Theta=-\left [U(2\pi)-U(0)\right ]=0
\;\;\;\mbox{and}\;\;\;
\int\limits_0^{2\pi}F(\Theta)F^\prime(\Theta)\,d\Theta=\frac{1}{2}\left [
F^2(2\pi)-F^2(0)\right ]=0,$$
because of the periodic boundary conditions. Therefore, if $T_1=T_2$, then
$\beta=0$ and there is no net angular velocity, as demanded by the Second
Law. A less trivial fact, which follows from (\ref{betaT}), is that for an
absolutely rigid axle, $k\to\infty$, the engine does not work either.
\section{Brillouin's demon}
Although Feynman's ratchet and pawl gadget is fascinating, its experimental
realization is doubtful because it does not seem feasible to arrange the
necessary temperature gradient at the nanoscale without generating violent
convective currents in the ambient material \cite{23}. In this respect its
electrical counterpart, the Brillouin diode demon, is more promising.
The Brillouin diode engine \cite{31,32} consists of a diode in parallel with
a resistor and a capacitor. Thermal noise in the resistor makes it an AC
voltage source. Naively, one expects the diode to rectify this AC voltage and
the capacitor to become self-charged. However, the Second Law prohibits such
a demon device from operating unless the diode and the resistor are kept at
different temperatures -- in complete analogy with the ratchet and pawl
engine.
Both the Feynman ratchet and pawl gadget \cite{22,23} and the diode engine
\cite{33} are very inefficient thermal engines, with efficiencies far below
the Carnot value. The reason is that they operate irreversibly, due to an
unavoidable heat exchange between the two thermal reservoirs with which the
engine is simultaneously in contact. In the ratchet and pawl case, it is the
mechanical coupling between the vanes and the ratchet which induces, via
fluctuations, a heat transfer between the reservoirs, even for negligible
thermal conductivity of the axle \cite{22}.
It was shown in \cite{34} that the heat exchange between the reservoirs of
the diode engine is significantly reduced if the resistor in the circuit is
replaced by a second diode connected in the opposite direction, as shown in
Fig.~\ref{RCD}. Let us analyze this system, treating the diodes simply as
nonlinear resistors \cite{34}.
\begin{figure}[htb]
\begin{center}
\mbox{\epsfig{figure=RCD.eps,height=6cm}}
\end{center}
\caption {Brillouin's diode engine \cite{34}.}
\label{RCD}
\end{figure}
If $u$ is the voltage across the capacitor and $i_1,i_2$ are the currents
through the two nonlinear resistors, then
\begin{equation}
i_1R_1(u)+u=v_1(t)\;\;\; \mbox{and} \;\;\; i_2R_2(u)+u=v_2(t),
\label{i12} \end{equation}
where $v_1(t)$ and $v_2(t)$ are Nyquist stochastic electromotive forces
\cite{35}, due to thermal agitation in the corresponding resistors,
satisfying \cite{36}
\begin{equation}
<v_i(t)>\,=0,\;\;\;<v_i(t)\,v_j(t^\prime)>\,=2k_BT_iR_i\,\delta_{ij}\,
\delta(t-t^\prime).
\label{Nuquist} \end{equation}
But $i_1+i_2=\dot{q}$, where $q=Cu$ is the charge of the capacitor with
capacitance $C$. Therefore equations (\ref{i12}) imply the following Langevin
equation
\begin{equation}
\dot{u}=-\frac{u}{R(u)\,C}+\frac{v_1(t)}{R_1(u)\,C}+\frac{v_2(t)}{R_2(u)\,C},
\label{Ldiode} \end{equation}
where
$$R(u)=\frac{R_1(u)\,R_2(u)}{R_1(u)+R_2(u)}.$$
Using the Stratonovich prescription, the drift and diffusion coefficient
functions can be calculated from (\ref{Ldiode}) in the manner described in
the previous section for the equation (\ref{Ltheta}). The results are
\begin{equation}
D^{(1)}(u)=\frac{u}{R(u)\,C}+\frac{k_BT_1R_1^\prime(u)}{R_1^2(u)\,C^2}+
\frac{k_BT_2R_2^\prime(u)}{R_2^2(u)\,C^2},\;\;\;
D^{(2)}(u)=\frac{2k_B}{C^2}\left [\frac{T_1}{R_1(u)}+\frac{T_2}{R_2(u)}
\right ],
\label{D12diode}\end{equation}
where the prime denotes differentiation with respect to $u$.
The Fokker-Planck equation that follows,
$$\frac{\partial P(u,t)}{\partial t}+\frac{\partial J(u,t)}{\partial u}=0,$$
has the probability current
\begin{equation}
J(u,t)=-\frac{uP(u,t)}{R(u)\,C}-\frac{k_B}{C^2}\left (\frac{T_1}{R_1(u)}+
\frac{T_2}{R_2(u)}\right )\,\frac{\partial P(u,t)}{\partial u}
\label{FPdiode} \end{equation}
in agreement with \cite{33,34}.
Steady-state operation (the stationary solution of (\ref{FPdiode})) now
corresponds to vanishing probability current, because evidently
$J(\pm\infty,t)=0$. Then (\ref{FPdiode}) yields a simple homogeneous linear
differential equation for $P(u)$ with the solution
\begin{equation}
P(u)=P_0\,\exp{\left \{-\frac{C}{k_B}\int\limits_0^u\frac{R_1(u)+R_2(u)}
{T_1R_2(u)+T_2R_1(u)}\,u\,du\right \} }.
\label{Pu} \end{equation}
If $T_1=T_2=T$, (\ref{Pu}) reduces to the Boltzmann distribution
$$P(u)=P_0\,\exp{\left \{-\frac{Cu^2}{2k_BT}\right \}}$$
as should be expected for the capacitor's energy in thermal equilibrium.
The Boltzmann distribution is symmetric in $u$ and therefore $<u>\,=0$.
Not surprisingly, the Second Law wins in the isothermal situation irrespective
of the current-voltage characteristics of the diodes, and no self-charging of
the capacitor takes place.
If the temperatures are different, however, the distribution (\ref{Pu}) is no
longer symmetric (note that for identical diodes $R_2(u)=R_1(-u)$). Therefore,
if the Brillouin demon is fed with an external energy input that maintains
the necessary temperature difference, it will operate successfully.
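This asymmetry, and the resulting self-charging, are easy to exhibit
numerically. Below is a minimal sketch in dimensionless units with $C/k_B=1$,
for identical diodes with the step-function resistances introduced later in
this section; all parameter values are illustrative, and NumPy/SciPy are
assumed:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Dimensionless illustration: C/k_B = 1, identical diodes with
# step-function resistances R1(u) = R+ for u>0, R- for u<0,
# and R2(u) = R1(-u).
Rp, Rm = 1.0, 10.0     # forward and backward resistances
T1, T2 = 2.0, 1.0      # temperatures of the two resistors

def a(u):              # coefficient in the exponent of P(u)
    R1 = Rp if u > 0 else Rm
    R2 = Rm if u > 0 else Rp
    return (R1 + R2)/(T1*R2 + T2*R1)

P = lambda u: np.exp(-0.5*a(u)*u*u)   # unnormalized P(u), eq. (Pu)
Z  = quad(P, -np.inf, 0)[0] + quad(P, 0, np.inf)[0]
m1 = (quad(lambda u: u*P(u), -np.inf, 0)[0]
      + quad(lambda u: u*P(u), 0, np.inf)[0])/Z
print("<u> =", m1)     # nonzero only when T1 != T2
\end{verbatim}
Setting $T_1=T_2$ in this sketch gives $<u>\,=0$ to numerical accuracy, as it
must.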
Let us now consider the heat exchange between the two thermal baths
\cite{36,37}. During a small time period $\Delta t$, the electromotive force
$v_1$ performs a work $\int i_1v_1dt$ and therefore this amount of energy is
taken from the first thermal bath at temperature $T_1$. But a part of this
energy, namely $\int i_1^2 R_1dt$, is returned back to the bath as the
nonlinear resistor dissipates the Joule heat into the bath. Therefore, the
net energy taken from the first bath equals
$$\Delta Q_1=\int\limits_t^{t+\Delta t}i_1(v_1-i_1R_1)\,dt^\prime=
\int\limits_t^{t+\Delta t}i_1u\,dt^\prime=\int\limits_t^{t+\Delta t}
\frac{v_1-u}{R_1}\,u\,dt^\prime\approx-\frac{u^2}{R_1}\Delta t+
\int\limits_t^{t+\Delta t}\frac{u}{R_1}\,v_1(t^\prime)\,dt^\prime.$$
In the remaining stochastic integral, $\frac{u}{R_1}$ has to be replaced,
according to the Stratonovich prescription, with its value at the middle of
the time interval, at time $t+\frac{\Delta t}{2}$, which approximately equals
$$\frac{u}{R_1}+\frac{\Delta t}{2}\,\frac{d}{du}\left (\frac{u}{R_1}\right )
\dot {u}=\frac{u}{R_1}+\frac{\Delta t}{2}\,\frac{d}{du}\left (\frac{u}{R_1}
\right )\left (-\frac{u}{R\,C}+\frac{v_1(t)}{R_1\,C}+\frac{v_2(t)}{R_2\,C}
\right ), $$
where now $u$ is evaluated at time $t$. Therefore, taking an ensemble average,
we get
$$<\Delta Q_1>\,=\left [ -\frac{u^2}{R_1}+\frac{k_BT_1}{C}\,\frac{d}{du}
\left (\frac{u}{R_1}\right )\,\right ]\Delta t.$$
Now we average $<\Delta Q_1>/\Delta t$ over the voltage distribution $P(u)$
and get the heat absorbed from the first reservoir at temperature $T_1$ per
unit time
$$\dot{Q}_1=\int\limits_{-\infty}^\infty \left [ -\frac{u^2}{R_1}+
\frac{k_BT_1}{C}\,\frac{d}{du}\left (\frac{u}{R_1}\right )\,\right ]P(u)\,du
=-\int\limits_{-\infty}^\infty \left [\frac{k_BT_1}{CR_1(u)}\,\frac{\partial
P(u)}{\partial u}+\frac{u}{R_1(u)}P(u)\right ] u\,du $$
in agreement with \cite{33,34}. The last step here follows from
$P(\pm\infty)=0$ when integration by parts is applied.
Note that
$$\dot{Q}_1+\dot{Q}_2=-\int\limits_{-\infty}^\infty \left [ \frac{u}{R(u)}P(u)
+\frac{k_B}{C}\left (\frac{T_1}{R_1(u)}+\frac{T_2}{R_2(u)}\right )
\frac{\partial P(u)}{\partial u}\right ] u\,du=\int\limits_{-\infty}^\infty
J(u)\,u\,du=0$$
when $J=0$. In other words, the heat dissipated into the second reservoir at
temperature $T_2$ per unit time equals the heat absorbed from the first
reservoir per unit time. Therefore $\dot{Q}\equiv\dot{Q}_1$ is just the heat
flux from the first thermal bath to the second, and
\begin{equation}
\dot{Q}=\int\limits_{-\infty}^\infty \left [ -\frac{u^2}{R_1(u)}+
\frac{k_BT_1}{C}\,\frac{d}{du}\left (\frac{u}{R_1(u)}\right )\,
\right ]P(u)\,du.
\label{Qdot} \end{equation}
To get some impression of the magnitude of this flux, we approximate the
current-voltage characteristics of the diodes by a step function
$$R_1(u)=R_2(-u)=R_+\,\theta(u)+R_-\,\theta(-u)=\left \{ \begin{tabular}{cc}
$R_+$, if $u>0$, \\ $R_-$, if $u<0$. \end{tabular}
\right . $$
Then
$$\frac{d}{du}\left (\frac{u}{R_1(u)}\right )=\frac{d}{du}\left [\frac{u}
{R_+}\theta(u)+\frac{u}{R_-}\theta(-u)\right ]=\frac{\theta(u)}{R_+}+
\frac{\theta(-u)}{R_-}+\left (\frac{1}{R_+}-\frac{1}{R_-}\right )
u\,\delta(u).$$
But $u\,\delta(u)=0$, and a straightforward calculation gives a heat flux
which is linear in the temperature difference \cite{34}
\begin{equation}
\dot{Q}=\frac{k_B(T_1-T_2)}{C(R_++R_-)}.
\label{hflux} \end{equation}
For ideal diodes with infinite backward resistance, $\dot{Q}=0$ and there is
no heat exchange between the thermal reservoirs at zero load. Therefore one
expects the efficiency of the ideal diode engine to tend to the Carnot
efficiency $1-T_2/T_1$ as the external load tends to zero. This was
indeed demonstrated in \cite{34}.
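For orientation, a rough numerical estimate (with illustrative values, not
taken from \cite{34}): for $C=1\,$pF, $R_++R_-=1\,\mathrm{M}\Omega$ and
$T_1-T_2=100\,$K, eq.~(\ref{hflux}) gives
$$\dot{Q}=\frac{1.38\times 10^{-23}\,\mathrm{J/K}\times 100\,\mathrm{K}}
{10^{-12}\,\mathrm{F}\times 10^{6}\,\Omega}\approx 1.4\times
10^{-15}\,\mathrm{W},$$
indeed a very small parasitic heat leak.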
\section{Mirror World and how it can assist demons}
The mirror world was introduced in 1966 by Kobzarev, Okun, and Pomeranchuk
\cite{38} (a first-hand historical account can be found in \cite{39}),
although the basic idea dates back to Lee and Yang's 1956 paper \cite{40},
and it was subsequently rediscovered in the modern context of renormalizable
gauge theories in \cite{42}.
The idea behind mirror matter is simple and can be explained as follows
\cite{43}. The naive parity operator {\bf P}, representing space
inversion, interchanges left and right. But in the case of internal symmetries,
when there are several equivalent left-handed states and several equivalent
right-handed states, it is not a priori obvious which right-handed state should
correspond to a given left-handed state. All operators of the type {\bf SP},
where {\bf S} is an internal symmetry operator, are equivalent. If we can find
some internal symmetry {\bf M} for which {\bf MP} remains unbroken in the
real world, then we can say that the world is left-right symmetric overall,
because {\bf MP} is as good a parity operator as {\bf P} itself.
What remains is to find an appropriate internal symmetry {\bf M}. Not every
choice of {\bf M} leads to a mirror world in the sense of a decoupled hidden
sector. For example, the most economical first try is the charge conjugation
operator {\bf C} in the role of {\bf M} \cite{44,45,46}. In this case the
mirror world coincides with the world of antiparticles. But {\bf CP} is not
conserved and therefore such a world is not left-right symmetric.
Then the most natural and secure way to enforce left-right symmetry is to
double our Standard Model world by introducing a mirror twin for every
elementary particle in it and arranging right-handed mirror weak
interactions, so that every {\bf P}-asymmetry in the ordinary world is
accompanied by the opposite {\bf P}-asymmetry in the hidden mirror sector.
This new mirror sector must necessarily be hidden; that is, only very weak
interactions between mirror and ordinary particles may be allowed, otherwise
this scenario comes into immediate conflict with phenomenology \cite{38}.
If the parity symmetry of our world is indeed restored at the expense of the
hidden mirror sector, many interesting phenomenological and astrophysical
consequences follow, which have been discussed many times in the literature.
It would be boring to repeat all this here; therefore I cite only some review
articles \cite{39,47,48,49,50} where relevant references can be found.
Instead of following the well-known trails, I choose a new pathway to the
mirror world, with boron trifluoride ($BF_3$) as our rather exotic guide.
Boron trifluoride is a highly toxic, colorless, nonflammable gas with a
pungent odor, used heavily in the semiconductor industry. What is relevant
for us here, however, is the shape of its planar molecule. Three fluorine
atoms sit at the corners of an equilateral triangle with the boron atom in
the center. This shape is obviously parity invariant, with parity identified
with the reflection in the $y$-axis of the $x-y$ plane. But the world is
quantum mechanical after all, and what is obvious from the classical point of
view often ceases to be obvious in the quantum world. So we need a quantum
theory of shapes \cite{51,51a}.
Let us consider rotations of the boron trifluoride molecule. The translational
motion is ignored, as it does not lead to anything non-trivial. It will
therefore be assumed that the center of the molecule, with the boron atom at
it, is fixed at the origin. Then any configuration of the fluorine atoms can
be obtained from a standard configuration, with one of the fluorine atoms on
the positive $y$-axis, by some rotation
$$R(\phi)=\left (\begin{tabular}{cc} $\cos{\phi}$ & $\sin{\phi}$ \\
$-\sin{\phi}$ & $\cos{\phi}$ \end{tabular} \right ). $$
But rotations by $\phi=\frac{2\pi}{3}\,n$, with $n$ an integer, transform the
molecule into itself because of the symmetry of the equilateral triangle.
Therefore the configuration space for the rotational motion of the boron
trifluoride molecule is the coset space
$$Q=SO(2)/Z_3,$$
where $Z_3$ is the cyclic subgroup of $SO(2)$ generated by $R(2\pi/3)$.
Topologically $SO(2)$ is the same as the unit circle $S^1$ and is thus
infinitely connected, because loops with different winding numbers around the
circle belong to different homotopy classes. The configuration space $Q$
is obtained from $S^1$ by identifying points related by rotations with
$\phi=\frac{2\pi}{3}\,n$, $n$ an integer. Therefore $Q$ is also infinitely
connected. The multiple connectedness brings a new flavour to quantization
and makes it not quite trivial \cite{52,53}. For our goals, the convenient
approach is the one presented in \cite{54,55} for quantum mechanics on the
circle $S^1$.
Naively one expects the free dynamics of any quantum planar rotator, such as
the boron trifluoride molecule, to be defined by the Hamiltonian
\begin{equation}
\hat H=\frac{\hat L^2}{2I},
\label{H} \end{equation}
where ($\hbar=1$)
\begin{equation}
\hat L=-i\frac{\partial}{\partial \phi}
\label{Lcanon} \end{equation}
is the angular momentum operator satisfying the canonical commutation
relation
\begin{equation}
[\hat \phi,\hat L]=i.
\label{phiL} \end{equation}
However, there are many pitfalls in using the commutation relation
(\ref{phiL}) in the context of the quantum-mechanical description of an angle
variable \cite{56}. The root of the problem lies in the fact that $\hat \phi$
is not a good position operator for the configuration space $Q$, as it is
multi-valued. Classically every point of $Q$ is uniquely determined by
the complex number $q=\exp{(-3i\phi)}$. Therefore one can expect that the
unitary operator
\begin{equation}
\hat q=\exp{(-3i\hat\phi)},
\label{qpos} \end{equation}
is more suitable as the position operator on $Q$ \cite{57}. From (\ref{phiL})
we expect the commutation relations
\begin{equation}
[\hat L,\hat q]=-3\,\hat q,\;\;\; [\hat L,\hat q^+]=3\,\hat q^+,
\label{Lqqs} \end{equation}
which we assume to hold, although the self-adjoint angular momentum
operator does not necessarily have to be of the naive form (\ref{Lcanon}).
The representation of the algebra (\ref{Lqqs}) is simple to construct
\cite{54,55}. The operator $\hat L$, being self-adjoint, has an eigenvector
$|\alpha>$ with a real eigenvalue $3\alpha$:
$$\hat L\,|\alpha>\,=3\alpha\,|\alpha>,\;\;\; <\alpha|\alpha>\,=1.$$
The commutation relations (\ref{Lqqs}) show that $\hat q$ and $\hat q^+=
\hat q^{-1}$ act as lowering and raising operators because
$$\hat L\,\hat q\,|\alpha>\,=\left ([\hat L,\hat q]+\hat q\hat L\right )
|\alpha>\,=3(\alpha-1)\hat q\,|\alpha>\;\;\mbox{and}\;\;\hat L\,\hat q^+\,
|\alpha>\,=3(\alpha+1)\hat q^+\,|\alpha>.$$
Therefore we can consider the states
$$|n+\alpha>\,=\left (\hat q^+\right )^n|\alpha>, \;\;n=0,\pm 1,\pm 2,\ldots$$
as spanning the Hilbert space ${\cal H}_{(\alpha)}$ where the fundamental
operators $\hat L$ and $\hat q$ are realized because, as follows from the
self-adjointness of $\hat L$ and the unitarity of $\hat q$, the set of state
vectors $|n+\alpha>$ forms a complete orthonormal system
$$<n+\alpha|m+\alpha>\,=\delta_{nm},\;\;\sum\limits_{n=-\infty}^\infty
|n+\alpha><n+\alpha|=1.$$
The angular momentum operator is diagonal in this basis,
$\hat L\,|n+\alpha>\,=3(n+\alpha)\,|n+\alpha>$,
and so is the Hamiltonian (\ref{H}). The energy eigenvalues are
\begin{equation}
E_n=\frac{9(n+\alpha)^2}{2I}.
\label{En} \end{equation}
For each $\alpha$, there is a vacuum state corresponding to $n=0$, and these
vacuum states are in general different, like the $\theta$-vacua in QCD. More
precisely, ${\cal H}_{(\alpha)}$ and ${\cal H}_{(\beta)}$ are unitarily
equivalent representation spaces of the algebra (\ref{Lqqs}) if and only if
the difference between $\alpha$ and $\beta$ is an integer \cite{54,55}.
Therefore, in contrast to the canonical commutation relations, the algebra
(\ref{Lqqs}) has infinitely many inequivalent unitary representations,
parameterized by a continuous parameter $\alpha$ from the interval
$0\le\alpha<1$.
The spectrum (\ref{En}) is doubly degenerate for $\alpha=0$, because in this
case $E_n=E_{-n}$, as well as for $\alpha=1/2$, because then $E_n=E_{-(n+1)}$.
For other values of $\alpha$, there is no degeneracy. This degeneracy
reflects invariance under the parity transformation \cite{55}.
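The degeneracy pattern is easy to tabulate explicitly; the following few
lines of code (with the moment of inertia $I=1$, purely for illustration)
print the lowest levels (\ref{En}) for several values of $\alpha$:
\begin{verbatim}
# Energy levels E_n = 9(n+alpha)^2/(2I), eq. (En), with I = 1.
# Double degeneracy occurs for alpha = 0 and alpha = 1/2 only.
I = 1.0
for alpha in (0.0, 0.25, 0.5):
    E = sorted(9.0*(n + alpha)**2/(2.0*I) for n in range(-3, 4))
    print("alpha =", alpha, ":", [round(e, 3) for e in E])
\end{verbatim}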
As we have already mentioned, geometrically the parity transformation is the
reflection in the $y$-axis, that is, an inversion of $S^1$ around a diameter.
Classically the parity transformation moves the point specified by the angle
$\phi$ to the one specified by the angle $-\phi$, if we measure the angular
coordinate $\phi$ from the axis fixed under the parity operation. Therefore it
is natural for the quantum mechanical unitary parity operator $\hat P$ on $Q$
to satisfy
\begin{equation}
\hat P^+\,\hat q\,\hat P=\hat q^+,\;\;\hat P^+\,\hat L\,\hat P=-\hat L.
\label{Pql} \end{equation}
Such a parity operator is an automorphism of the fundamental algebra
(\ref{Lqqs}), but it cannot always be realized in the Hilbert space
${\cal H}_{(\alpha)}$. Indeed,
$$\hat L\,\hat P\,|n+\alpha>\,=-\hat P\,\hat L\,|n+\alpha>\,=-3(n+\alpha)
\hat P\,|n+\alpha>$$
shows that $\hat P\,|n+\alpha>$ does not in general lie in ${\cal H}_
{(\alpha)}$, unless $\alpha=0$ or $\alpha=1/2$. Otherwise
$\hat P\,|n+\alpha>\,\in {\cal H}_{(1-\alpha)}$. Therefore only for
$\alpha=0$ or $\alpha=1/2$ is parity a good symmetry, and the other
realizations of quantum mechanics on $Q$ break parity invariance. In the
latter cases, restoring parity invariance requires doubling the
Hilbert space by considering ${\cal H}_{(\alpha)}\oplus{\cal H}_{(1-\alpha)}$.
It is rather strange that most realizations of the quantum shape of the
boron trifluoride molecule violate parity, although classically the molecule
is reflection symmetric. After all, no microscopic source of parity violation
was assumed in the molecular dynamics. How then does parity violation emerge
in the shape? This happens because we concentrated on the slow nuclear degrees
of freedom and completely neglected the fast electronic motion. A general
argument is given in \cite{51,51a} that in a complete theory, with fast degrees
of freedom included, there is no parity violation. For example, in molecular
physics the coupling between rotational modes and the electronic wave
functions leads to transitions between wave functions related by parity,
with time scales much longer than typical rotational time scales. As a result,
parity invariance is restored, but nearly degenerate states of opposite
parity, the parity doublets, appear in the molecular spectra.
To get an intuitive understanding of why different quantum theories are
possible on $Q$ and why most of them violate parity, it is instructive to find
the explicit form of the angular momentum operator in each $\alpha$-realization
\cite{54,55}.
Let
$$|\theta>\,=\sum\limits_{n=-\infty}^\infty a_n\,|n+\alpha>$$
be an eigenvector of the position operator $\hat q^+$:
$$\hat q^+\,|\theta>\,=e^{\,3i\theta}\,|\theta>.$$
Then we get the recursion relation
$$a_n=e^{-3i\theta}\,a_{n-1}$$
and, therefore, up to normalization
$$|\theta>\,=e^{i\,\omega(\theta)}\sum\limits_{n=-\infty}^\infty e^
{-3in\theta}\,|n+\alpha>,$$
where $\omega(\theta)=\omega(\theta+2\pi/3)$ is an arbitrary phase. But then
$$e^{-i\lambda\hat L}\,|\theta>\,=e^{-3i\lambda\alpha}\,e^{i\,\omega(\theta)}
\,e^{-i\omega(\theta+\lambda)}\,|\theta+\lambda>.$$
Let $|\psi>$ be an arbitrary state vector. Differentiating the equality
$$<\theta|e^{i\lambda\hat L}|\psi>=\left (<\psi|e^{-i\lambda\hat L}|\theta>
\right )^+=e^{3i\lambda\alpha}\,e^{-i\omega(\theta)}\,e^{i\,\omega(\theta+
\lambda)}<\theta+\lambda|\psi>$$
with respect to $\lambda$ and taking $\lambda=0$ in the result multiplied by
$-i$, we get
$$<\theta|\hat L|\psi>=\left [-i\frac{\partial}{\partial\theta}+3\alpha+
\frac{\partial\omega}{\partial\theta}\right ]\psi(\theta),$$
where $\psi(\theta)=<\theta|\psi>$ is the wave function. Therefore, in the
$q$-representation, where $\hat q$ is diagonal, the angular momentum operator
is
\begin{equation}
\hat L=-i\frac{\partial}{\partial\theta}+A(\theta),
\label{alphaL} \end{equation}
with
\begin{equation}
A(\theta)=3\alpha+\frac{\partial\omega(\theta)}{\partial\theta}
\label{gaugeA} \end{equation}
playing the role of a gauge field.
As we see, the $\alpha$-quantum theory on $Q$ is analogous to the quantum
theory on a unit circle with the vector potential (\ref{gaugeA}) along the
circle. Then we have the magnetic flux
$$\int \vec{B}\cdot d\vec{S}=\oint \vec{A}\cdot d\vec{l}=
\int\limits_0^{2\pi}A(\theta)\,d\theta=6\pi\alpha$$
piercing the circle perpendicularly to the $x-y$ plane (the periodic phase
$\omega(\theta)$ contributes nothing to the loop integral). Classically a
charged particle on the circle does not feel the magnetic flux if the magnetic
field on the circle is zero, but quantum mechanically it does -- the
Aharonov-Bohm effect. Therefore it is not surprising that we have many
different quantum theories, nor is it surprising that parity is violated
\cite{58}. It is also clear that in a more complete theory, including the
sources of the magnetic flux, parity is not violated \cite{58}.
Now back to demons. Suppose parity invariance is indeed restored in the
manner advocated by mirror matter proponents. Then some non-gravitational
interactions between the ordinary and mirror sectors are not excluded. One
interesting possibility is the photon-mirror photon kinetic mixing interaction
$${\cal{L}}=\frac{\epsilon}{2}F^{\mu\nu}F^{\,\prime}_{\mu\nu},$$
where $F^{\mu\nu}$ and $F^{\,\prime}_{\mu\nu}$ are the field strength tensors
of electromagnetism and mirror electromagnetism respectively. As a result,
ordinary and mirror charged particles interact electromagnetically, with the
interaction strength controlled by the mixing parameter $\epsilon$. A number
of observed anomalies can be explained from the mirror matter perspective if
\begin{equation}
\epsilon\sim 5\cdot 10^{-9}
\label{epsilon} \end{equation}
(see \cite{47} and references therein).
Remarkably, this tiny electromagnetic interaction is nevertheless sufficient
for thermal equilibration to occur in a mixture of ordinary and mirror matter
at Earth-like temperatures \cite{59}, and to provide a force strong enough to
oppose the force of gravity, so that a mirror matter fragment can remain on
the Earth's surface instead of falling toward its center \cite{59a}. The
demons considered above can clearly benefit from this fact. What is necessary
is to add a significant amount of mirror matter to the thermal reservoir we
want to operate at the colder temperature. The mirror matter will draw in
heat from the surrounding ordinary component of the thermal bath and radiate
it away as mirror photons. Thereby an effective cooling will take place, and
the necessary temperature difference will be created between the thermal
baths (assuming the other bath does not contain a mirror component), even if
initially the temperatures were the same.
Equation (\ref{hflux}) indicates that, at least for some demons, the heat
exchange between the two thermal reservoirs of the demon can be made very low.
Consequently, the mirror matter walls of the colder reservoir should radiate a
very small flux of mirror electromagnetic energy at dynamical equilibrium and
hence must be very cold. In fact, it appears that for a significant amount of
mirror matter the corresponding reservoir would be cooled to near absolute
zero, where the approximations of reference \cite{59} break down. Therefore we
refrain from any detailed calculations.
\section{Conclusion}
As frequently stressed by Landau \cite{39}, we expect the world of elementary
particles to be mirror symmetric because space itself is mirror
symmetric. But weak interactions turned out to be left-handed, and the heroic
efforts of Landau and others \cite{44,45,46} to save left-right symmetry
by {\bf CP} also failed. Therefore we are left with the strange fact that
nature is left-right asymmetric. But the example from molecular physics
considered above suggests the possibility that the observed asymmetry
might be only apparent, the result of overlooking some fast degrees of
freedom hidden, presumably, at the Planck scale. Then in the
more complete theory parity symmetry will be restored, but parity
doublets will appear at the level of the universe, in the form of the mirror
world \cite{38,42}.
If nature is indeed organized in this manner, the fascinating Maxwell demons
considered in the previous sections can be made operative by just adding mirror
matter to one of their thermal reservoirs. Mirror matter demons could
extract heat from one thermal reservoir, for example from the world ocean,
and transform it very effectively (in the case of the Brillouin demon) into
useful work, thereby solving the global energy problems of mankind!
All this ``sounds too good to be true'' \cite{60}. But the principal question
is whether mirror matter exists. If it does indeed exist, and if the
photon-mirror photon mixing is not negligibly small, I do not see how the
mirror demons can fail.
As for the photon-mirror photon mixing, the proposition that its magnitude is
of the order of (\ref{epsilon}) is experimentally falsifiable in the near
future, because such mixing leads to orthopositronium to mirror
orthopositronium oscillations and, as a result, to invisible decays of
orthopositronium in vacuum, with intensities accessible to the experiment
\cite{61} which is under way.
Maybe the putative prospect of using mirror matter demons will appear a
little less exotic if we recall that, in a sense, we are all made of demons.
I mean that Brownian ratchet engines, which operate extremely effectively
and reliably, can be found in every biological cell \cite{62,63,64}. Therefore
I would not be much surprised if someday nanotechnology finds mirror
matter useful, provided, of course, that it exists at all.
I credit the idea that mirror matter can find applications in heat engines
to Saibal Mitra \cite{60}. However, it was not his notes that served as the
inspiration for this investigation. The paper emerged from J.~D.~Norton's
advice to be a little more critical about Landauer's exorcism of Maxwell's
demon, in response to my rather careless claim in \cite{65} that Landauer's
principle had killed the demon. ``O King, most high and wise Lord; How
incomprehensible are thy judgments, and inscrutable thy ways!''
\section*{Acknowledgments}
The author is indebted to S.~Mitra for his help with the manuscript.
Comments and suggestions from L.~B.~Okun and R.~Foot are acknowledged with
gratitude. Special thanks to L.~B.~Okun for constantly encouraging me to
improve the paper. ``Have no fear of perfection -- you'll never reach it''
-- Salvador Dali. I also have to finish without reaching it.
The work is supported in part by grants Sci.School-905.2006.2 and
RFBR 06-02-16192-a.
Quantum mechanics is extremely efficient for the prediction of experimental
results. In contrast, the interpretation of the quantum formalism has been
the subject of continuous debate since the very beginning of the
theory\cite{WZurek}, and it lasts until today\cite{Mittelstaedt},
\cite{d'Espagnat}. Is there a real problem? Long ago Feynman believed that a
problem existed when he stated: ``Nobody understands quantum
mechanics''\cite{Feynman}, and many people still agree with this statement.
The difficulty is that neither the quantum formalism alone nor the different
interpretations proposed offer a clear intuitive picture of the quantum
world. I think that a realistic interpretation and a physical model of
quantum mechanics are needed. This opinion is not new; it was supported by
Einstein, Podolsky and Rosen\cite{EPR}. Indeed their celebrated 1935 article
begins: ``Any serious consideration
of a physical theory must take into account the distinction between the
objective reality, which is independent of any theory, and the physical
concepts with which the theory operates. These concepts are intended to
correspond with \emph{the objective reality}, and by means of these concepts
\emph{we picture this reality to ourselves}''. (My emphasis.) In summary, I
believe that any physical theory should contain two ingredients: a
\emph{physical model} and a \emph{calculational tool}, the latter including
the formalism and the rules for the connection with experiments. The
calculational tool is essential because it is required for the comparison of
the theory with experiments. Of course that comparison is the test of the
validity of the theory. But I am convinced that the physical model is also
necessary in order to give satisfaction to the human being, who wants to
reach a picture of the world. For instance, a clear model should say whether
an electron is a wave (extended) or a particle (localized). I do not think
that saying it is neither, or that it is both, is a clear answer. Furthermore,
the existence of a physical model might open the possibility for new
developments and applications of the theory, and therefore it is not a mere
question of taste.
Finding a model of the quantum world is not easy, as is proved by the
failure to find one during almost a century. I will not present a complete
model here, but I will revisit stochastic electrodynamics, a theory
developed (slowly) during the last 50 years which offers a clear physical
model for some typical quantum features. I will summarize the most important
results and report new ones which enlarge the scope of the theory. I believe
that, even if stochastic electrodynamics cannot be taken as an alternative
to quantum mechanics, it gives hints for the goal of reaching a complete
physical picture of the quantum world.
\section{Stochastic electrodynamics}
Stochastic electrodynamics (SED) is a theory that assumes that the vacuum
electromagnetic zeropoint field (ZPF) is a real radiation, and it studies
systems of electrically charged particles immersed in that radiation. The
theory uses (classical) Newtonian dynamics and Maxwell electromagnetic
theory. The numerical results obtained for the ground state of the harmonic
oscillator in SED agree with the properties derived from nonrelativistic
quantum mechanics. This agreement has been the starting point and the
stimulation for SED, and it has led some people to propose the theory as a
possible alternative, or reinterpretation, of quantum mechanics. For them
the quantum effects would be produced by the electromagnetic noise (or ZPF)
combined with classical dynamics. The theory may be traced back to the work
of Walter Nernst in 1916, who extended to the electromagnetic field the
zeropoint fluctuations of oscillators assumed by Planck in his second
radiation theory of 1912. The hypothesis was forgotten, due to the progress
of quantum theory after Bohr's model, and it was rediscovered several
times many years later (e.g. by Braffort et al. in 1954 and by Marshall in
1963). A good review of the work done until 1995 is the book by L. de la
Pe\~{n}a and A. M. Cetto\cite{dice}.
There are quantum phenomena which may be fully explained within SED, like the
Casimir effect and the Unruh-Davis effect. The former may be seen as due to
the modification of the normal modes of the ZPF by the presence of two
parallel metallic plates (or other macroscopic objects), which leads to a
change in the vacuum energy via the assignment of $\frac{1}{2}h\nu $ to
every radiation mode. An equivalent picture is that the change of the normal
modes leaves the radiation pressure of the ZPF on the two sides of a
plate no longer balanced. The Unruh-Davis effect derives from the fact
that the ZPF spectrum (eq.$\left( \ref{1.3}\right) $ below), although
invariant under Lorentz transformations, is modified in accelerated frames
(or gravitational fields) in such a way that it appears in those frames as a
black body (Planck) spectrum at a finite temperature. In these two cases the
calculation within SED provides an intuitive interpretation which is lacking
in the standard quantum treatment. For details see\cite{dice} and references
therein.
There are also phenomena where a quantum calculation is possible but a
calculation within SED is not possible (or is difficult), and
nevertheless there are simple models where the SED calculation is easy and
the results agree with the quantum ones (and with experiments). Typically
those models involve linear systems. A particular case is the diamagnetism of
a bound charged particle, which in SED appears because the charged particle
is never at rest but performs a random motion, induced by the ZPF, which is
modified by the magnetic field, producing an effective magnetic moment. SED
also offers a clear picture of the van der Waals forces between two distant
oscillating dipoles, which provides an example of entanglement that I will
review below (see also \cite{Luis}). Finally there are experiments in cavity
quantum electrodynamics which look mind-boggling but may be intuitively
understood within SED\cite{Humba2}. For instance an atom in a cavity does
not decay if the modes having the frequency of the emitted radiation are not
possible inside the cavity. In the quantum treatment the intriguing question
is how the atom ``knows'' in advance that it should not decay in these
conditions. In SED the explanation is trivial: spontaneous decay is actually
stimulated by appropriate modes of the ZPF, and the modes required for the
stimulation do not exist inside the cavity.
In contrast, there are phenomena where the SED calculation does not agree with
the quantum one, or where the derivations made actually contain quantum
assumptions. In my opinion this is the case for the equilibrium between
radiation and matter which, in spite of several claims by different authors,
does not lead to Planck's law in a correct SED calculation\cite{Blanco}.
Of course there are many more experiments which may be accurately
calculated within quantum mechanics, or quantum field theory, but which nobody
has ever attempted to interpret within SED. In particular this happens
whenever the particles involved do not possess electric charge. Thus there
are good reasons why SED is not currently taken by the community as a serious
alternative to (or reinterpretation of) quantum theory.
My opinion is that SED agrees with quantum mechanics only for either some
nonrelativistic calculations of linear systems involving charged particles
(e.g. the oscillator) or the electromagnetic radiation interacting with
macroscopic bodies (e.g. the Casimir and Unruh-Davis effects). Thus SED
might be seen as a step in the correct direction towards a physical
model of quantum mechanics, but it lacks some fundamental ingredients.
Amongst those are the vacuum fluctuations of fields other than the
electromagnetic one, in particular metric fluctuations and, possibly, the
existence of a real sea of particles and antiparticles of different kinds.
\section{Quantum mechanics as a stochastic theory}
\subsection{Quantum vs. classical probabilities}
It is currently assumed that quantum probabilities are different from
classical, ordinary life probabilities. The latter derive from ignorance,
maybe unavoidable, about the truth of some assertion. For instance we attach
a probability 1/2 to the appearance of heads in a coin toss, but we assume
that the result is compatible with a deterministic (although possibly chaotic
and therefore not predictable) evolution. The current wisdom is that quantum
probabilities are quite different, that they derive from a lack of strict
causality of the natural laws. That is, people assume that different effects
may follow from the same cause. This is usually called the
\emph{fundamental, or essential, probabilistic} character of the physical
laws.
Einstein disliked that assumption and strongly criticized it, as shown by
his celebrated sentence ``\emph{God does not play dice}''\cite{Einstein}. I
understand Einstein's opinion very well. For him the rational
understanding of nature was a kind of religion. The looser (stricter) the
natural laws are, the smaller (greater) our rational understanding of nature
may be. Assuming weak causality is like accepting a light science. However,
there are people happy with the absence of determinism implied by the
breakdown of strict causality. For instance some people claim that the
quantum lack of determinism may explain human freedom. In any case this
question lies outside the scope of this paper and I will not comment on it
further.
In my opinion \emph{quantum mechanics is a stochastic theory}. There are
strictly causal laws in nature, but there is also a universal noise which
permeates everything and prevents any practical determinism. Strict
causality combined with stochasticity (randomness) is in practice
indistinguishable from essential probability, and the former is to me more
palatable. Actually the belief in essentially probabilistic laws derives
from an excessive esteem of the completeness of quantum mechanics. Indeed the
ensemble interpretation\cite{EPR}, \cite{Ballentine} offers a natural
explanation for the probabilities.
The purpose of this paper is to show that there are quantum phenomena which
may be intuitively understood as due to the existence of a universal noise
filling the whole space. The existence of such noise is likely not the only
difference between quantum and classical physics. Consequently we should not
expect that all quantum phenomena may be explained as originating in the
noise.
\subsection{Arguments for the existence of a universal noise}
If quantum mechanics is a stochastic theory, noise must play a fundamental
role. Thus the theory should have similarities with classical statistical
mechanics. The main difference is that in statistical mechanics randomness is
assumed to be accidental, but in quantum mechanics it is not so, as said
above. Also, in statistical mechanics it is assumed that there is no
uncertainty in the state of minimal energy, whilst in quantum mechanics that
state has fluctuations. In other words, \emph{in quantum physics
randomness is due to an unavoidable noise} present in all states, including
the ground one. That noise is called the ``zeropoint field (ZPF)'' or the
``quantum vacuum fluctuations'' in quantum field theory. The lack of
acceptance of a real universal noise has caused many of the difficulties in
the intuitive understanding of quantum physics, as I will explain in the
following.
In our very complex universe (having more than $10^{80}$ particles) the
existence of \emph{a large amount of noise is quite natural}. This may be
seen with a simple argument. Let us consider a classical hydrogen atom
consisting of a proton (at rest to a good approximation) and an electron
moving around it. In many books on quantum physics the need for quantization
is justified with the claim that a classical atom cannot be stable because
the electron would radiate, losing energy and finally falling towards the
proton. The argument would be fine if there were a unique atom in space,
but if there are many atoms it is natural to assume that the radiation of
one atom will eventually arrive at other atoms. Thus every atom will
sometimes emit radiation and at other times absorb it, possibly reaching a
dynamical stationary state with fluctuating energy. I shall not elaborate
this example further, my only purpose at the moment being to convince the
reader that the existence of a universal noise is more plausible than the
belief that systems may exist in complete isolation. In my view the former
assumption is one of the bases of quantum physics, the latter being the
cornerstone of classical physics.
The existence of an unavoidable noise gives rise to two characteristic
traits of quantum physics. Firstly, the universal presence of noise implies
that quantum theory should be probabilistic but, in contrast with
classical statistical mechanics, where the fluctuations cease in the ground
state, in quantum physics they are present even in that state. People used
to associating fluctuations with temperature have difficulty accepting that
they may exist at zero temperature. Historically this has been the reason
for the current belief that quantum probabilities are different from
classical probabilities.
Secondly, quantum physics presents a kind of ``\emph{wholeness}'', which
looks quite strange from the viewpoint of classical physics, where the
concept of an isolated system is crucial. The fact is that the ZPF at
different points may be correlated, giving rise to \emph{correlations} which
might be interpreted as wholeness. Below I will provide arguments suggesting
that \emph{entanglement} may be just a correlation of quantum fluctuations at
different points.
In the following I shall study only one kind of noise, the electromagnetic
background radiation, but we should assume that there are also other random
components, like a fluctuating metric\cite{metric}. The electromagnetic
radiation considered here is not the well known cosmic microwave radiation.
More properly, I assume that the radiation existing in space consists of two
parts, one thermal, the cosmic microwave background, and the other the
non-thermal zeropoint field (ZPF). The latter is well known as a
contribution to the ``quantum vacuum'', and is frequently named quantum noise.
However, I will not share the current wisdom of considering it ``virtual''
(a not well defined concept anyway) but take it to be as real as the thermal
part. To the objection that the ZPF is not detected, the reply is that many
quantum phenomena are consequences of it, and thus they are proofs of its
real existence. On the other hand, the quantum noise has been directly
detected in some cases\cite{Humba}.
\subsection{Spectrum of the quantum noise}
What are the \emph{characteristics of the universal noise}? Firstly we need
the \emph{spectrum}, which may be defined as the energy density, $\rho
\left( \nu \right) ,$ per unit frequency interval, $d\nu $. It is interesting
that the spectrum is fully fixed, except for a constant, by the condition of
relativistic (Lorentz) invariance, leading to
\begin{equation}
\rho \left( \nu \right) d\nu =const.\times \nu ^{3}d\nu . \label{1.1}
\end{equation}
The proof is not difficult\cite{dice} but I omit it, giving instead an
argument which may be traced back to Wien's work in 1894. An advantage of
that derivation of eq.$\left( \ref{1.1}\right) $ is that it discriminates
clearly the thermal noise from the ZPF. Combining thermodynamics with
Maxwell's electromagnetic theory, Wien derived the displacement law, which
states that the spectrum of the black body at a temperature $T$ should be of
the form
\begin{equation}
\rho \left( \nu ,T\right) =\nu ^{3}f\left( \nu /T\right) . \label{1.2}
\end{equation}
Lorentz invariance is implicit in the use of electromagnetic theory. Now if
there is a ZPF present even at zero Kelvin, the function $f\left( \nu
/T\right) $ should have a finite (not zero) limit for $T\rightarrow 0,$
which leads to eq.$\left( \ref{1.1}\right) .$ It is obvious that the
constant involved in that expression should play a fundamental role in
quantum physics. It must be fixed by appeal to the experiments and the
result is that eq.$\left( \ref{1.1}\right) ,$ written in terms of the
angular frequency, $\omega =2\pi \nu ,$ becomes
\begin{equation}
\rho \left( \omega \right) d\omega =\frac{4\pi }{c^{3}}h\nu ^{3}d\nu =
\frac{\hbar }{2\pi ^{2}c^{3}}\,\omega ^{3}d\omega ,\qquad
\hbar \equiv h/2\pi . \label{1.3}
\end{equation}
Thus Planck's constant, $h,$ appears here with a transparent meaning,
namely it fixes the scale of the universal noise or \emph{quantum noise}
(but remember, I consider it \emph{a real fluctuating field}). Notice that
the spectrum eq.$\left( \ref{1.3}\right) $ is ultraviolet divergent, a fact
which has been taken as an argument against the noise being real. Actually
that spectrum is a consequence of the Lorentz invariance of special
relativity, but we should expect that at high frequencies the ZPF spectrum
is cut off, for instance by the creation of particles or by gravitational
(general relativistic) effects.
The spectrum eq.$\left( \ref{1.3}\right) $ corresponds to an \emph{energy}
$\frac{1}{2}h\nu $ \emph{per normal mode} of the radiation. Up to here I
have considered the electromagnetic field, but it is plausible that a
similar noise exists for all fields. Indeed all of them should be in a
dynamical equilibrium because they may interact, exchanging energy. The
interaction will be stronger when the excitations of the fields happen to
have the same frequency, which plausibly leads to the same
energy $\frac{1}{2}h\nu $ per normal mode for every field. Also it is
natural that the quantum noise, as a \emph{stochastic field, is Gaussian}.
This assumption combined with the spectrum fully determines the properties
of the noise.
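As a consistency check, the familiar counting of electromagnetic modes,
$\frac{8\pi \nu ^{2}}{c^{3}}\,d\nu $ per unit volume, combined with the
energy $\frac{1}{2}h\nu $ per mode, reproduces precisely the spectrum fixed
by Lorentz invariance:
$$\rho \left( \nu \right) d\nu =\frac{8\pi \nu ^{2}}{c^{3}}\cdot
\frac{1}{2}h\nu \,d\nu =\frac{4\pi }{c^{3}}h\nu ^{3}d\nu ,$$
in agreement with eq.$\left( \ref{1.3}\right) .$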
\emph{In summary, the fundamental assumption of the physical model behind
quantum theory, supported in this paper, is the existence of a (real)
universal noise, even at zero Kelvin, consisting of Gaussian fluctuations of
all force fields existing in nature, with an average energy }$\frac{1}{2}h\nu
$\emph{ for every normal mode (except at very high frequencies).}
There are two features of quantum theory which have been considered dramatic
differences with respect to classical noise, namely the assertion that
\emph{quantum noise is non-dissipative} and the \emph{Heisenberg uncertainty
relations}. I will show that both may be understood as consequences of the
specific spectrum, eq.$\left( \ref{1.3}\right) ,$ of the quantum noise.
Before doing that, it is convenient to make a formal study of linear systems
interacting with the electromagnetic ZPF, which is done in the following.
\section{Linear systems interacting with the zeropoint field}
\subsection{The harmonic oscillator in equilibrium with the ZPF}
For the sake of clarity I recall in this subsection the well known treatment
of the oscillator in SED\cite{S74},\cite{dice}. In the next subsection I
shall present a novel approach which is closer to the quantum one.
If a charged particle moves in one dimension within a potential well and at
the same time it is immersed in electromagnetic noise, it will arrive at a
dynamical equilibrium between absorption and emission of radiation. In order
to study the equilibrium I shall write the differential equation for the
one-dimensional motion of the particle in the non-relativistic
approximation. The passage to more dimensions\cite{Blanco2} is
straightforward. Neglecting magnetic effects and the dependence of the field
on the position coordinate, as is appropriate in a non-relativistic
treatment, the differential equation of motion of the particle is linear,
namely
\begin{equation}
m\stackrel{..}{x}=-m\omega _{0}^{2}x+\frac{2e^{2}}{3c^{3}}\stackrel{...}{x}
+eE\left( t\right) , \label{ode}
\end{equation}
where $m(e)$ is the particle mass (charge) and $E\left( t\right) $ is the $x$
component of the electric field of the radiation (the zeropoint field). The
second term on the right-hand side is the damping force due to the emission
of radiation, and the third one is the force due to the random field.
Eq.$\left( \ref{ode}\right) $ may be solved by Fourier transform, which
gives
\begin{equation}
m(\omega _{0}^{2}-\omega ^{2}+i\tau \omega ^{3})\widetilde{x}\left( \omega
\right) =e\widetilde{E}\left( \omega \right) , \label{Fourier}
\end{equation}
where
\begin{equation}
\tau =\frac{2e^{2}}{3mc^{3}},\qquad \tau \omega _{0}=\frac{2}{3}
\frac{e^{2}}{\hbar c}\,\frac{\hbar \omega _{0}}{mc^{2}}\ll 1. \label{gamma}
\end{equation}
We see that $\tau \omega _{0}$ is the product of two small numbers, the fine
structure constant, $\alpha \equiv e^{2}/\hbar c,$ and the ratio
$v^{2}/c^{2}\simeq \hbar \omega _{0}/mc^{2}.$ The spectrum of a stationary
stochastic process is proportional to the square modulus of its Fourier
transform, so that the spectrum of $x\left( t\right) $ in terms of the
spectrum of $E\left( t\right) $ is
\begin{equation}
S_{x}\left( \omega \right) =\frac{3c^{3}\tau }{2m\left[ \left( \omega
_{0}^{2}-\omega ^{2}\right) ^{2}+\tau ^{2}\omega ^{6}\right] }S_{E}\left(
\omega \right) .\smallskip \label{spectrum}
\end{equation}
Here I define the spectrum so that
\begin{equation}
\left\langle x^{2}\right\rangle =\int_{0}^{\infty }S_{x}\left( \omega
\right) d\omega ,\left\langle v^{2}\right\rangle =\int_{0}^{\infty }\omega
^{2}S_{x}\left( \omega \right) d\omega ,\left\langle E^{2}\right\rangle
=\int_{0}^{\infty }S_{E}\left( \omega \right) d\omega ,\smallskip
\label{mean}
\end{equation}
are the mean square coordinate of the oscillator, the mean square velocity
and the mean square value of the $x$ component of the electric field,
respectively. With this definition the spectrum, $S_{E}\left( \omega \right)
,$ of the latter is $4\pi /3$ times the density $\rho \left( \omega \right)
, $ eq.$\left( \ref{1.3}\right) ,$ that is
\begin{equation}
S_{ZPF}\left( \omega \right) =\frac{2\hbar }{3\pi c^{3}}\,\omega ^{3}.
\label{Espectrum}
\end{equation}
The spectrum of the velocity is $\omega ^{2}$ times the spectrum of the
coordinate, because the time derivative leads to a multiplication by
$\omega $ in the Fourier transform. (It must be pointed out that in our
treatment of the oscillator an ergodic hypothesis is implicit, namely that
ensemble averages are equal to time averages for the stationary stochastic
processes involved.) Hence, taking eq.$\left( \ref{spectrum}\right) $ into
account, we get
\begin{equation}
S_{x}\left( \omega \right) =\frac{\hbar \tau \omega ^{3}}{\pi m\left[ \left(
\omega _{0}^{2}-\omega ^{2}\right) ^{2}+\tau ^{2}\omega ^{6}\right] }.
\label{oscilspectrum}
\end{equation}
The integral of $S_{x}\left( \omega \right) $ is involved, but becomes
trivial in the limit $\tau \rightarrow 0$ where we may approximate $\omega
\simeq \omega _{0}$ except in the difference $\omega -\omega _{0}$. This
leads to
\begin{equation}
\left\langle x^{2}\right\rangle =\int_{0}^{\infty }S_{x}\left( \omega
\right) d\omega =\frac{\hbar }{2m\omega _{0}}\Rightarrow \frac{1}{2}m\omega
_{0}^{2}\left\langle x^{2}\right\rangle =\frac{1}{4}\hbar \omega _{0}.
\label{2.3}
\end{equation}
A similar procedure may be used to get the mean square velocity, by
performing the integral of the velocity spectrum. That integral is actually
divergent; for the moment we assume that there is some frequency cut-off
$\omega _{c}$ (but see below for a discussion of this point). In the limit
$\tau \rightarrow 0$ the result is independent of the cut-off and we get
\begin{equation}
\left\langle v^{2}\right\rangle =\int_{0}^{\omega _{c}}\omega
^{2}S_{x}\left( \omega \right) d\omega =\frac{\hbar \omega _{0}}{2m}
\Rightarrow \frac{1}{2}m\left\langle v^{2}\right\rangle =
\frac{1}{4}\hbar \omega _{0}. \label{2.4}
\end{equation}
Adding eqs.$\left( \ref{2.3}\right) $ and $\left( \ref{2.4}\right) $ gives
the total mean energy, namely
\begin{equation}
\left\langle U\right\rangle =\left\langle \frac{1}{2}m\omega _{0}^{2}x^{2}+
\frac{1}{2}mv^{2}\right\rangle =\frac{1}{2}\hbar \omega _{0}. \label{energy}
\end{equation}
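These results are easy to check numerically from the spectrum
(\ref{oscilspectrum}). A minimal sketch in natural units
$\hbar=m=\omega_0=1$, with an illustrative small value of $\tau$
(NumPy/SciPy assumed):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbar, m, w0, tau = 1.0, 1.0, 1.0, 1e-3   # natural units, tau*w0 << 1

Sx = lambda w: hbar*tau*w**3/(np.pi*m*((w0**2 - w**2)**2
                                       + tau**2*w**6))

# split the range so quad resolves the narrow resonance at w = w0
x2 = (quad(Sx, 0, 2*w0, points=[w0], limit=500)[0]
      + quad(Sx, 2*w0, np.inf)[0])
p2 = (m*w0**2)**2*(quad(lambda w: Sx(w)/w**2, 0, 2*w0,
                        points=[w0], limit=500)[0]
                   + quad(lambda w: Sx(w)/w**2, 2*w0, np.inf)[0])

print("potential energy:", 0.5*m*w0**2*x2)   # -> 0.25 = hbar*w0/4
print("kinetic   energy:", 0.5*p2/m)         # -> 0.25 = hbar*w0/4
\end{verbatim}
The kinetic part is evaluated here through the canonical momentum spectrum,
$\omega^{-2}S_x$, which is convergent, in line with the discussion below.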
The problem of the divergence of the integral eq.$\left( \ref{2.4}\right) $
may be solved by putting an upper cut-off on the frequencies, as said above,
but a better method is to use the canonical momentum, $p$, and define the
energy $U$ from it, that is (in one dimension)
\begin{equation}
p\equiv mv+\frac{e}{c}A,\qquad U\equiv \frac{p^{2}}{2m}+\frac{1}{2}m\omega
_{0}^{2}x^{2}. \label{momentum}
\end{equation}
Now we take into account that the vector potential $A$ contains two parts,
one coming from the random field and the other from the particle
self-field, the latter producing the radiation damping. These two parts give
rise to the last two terms of eq.$\left( \ref{ode}\right) .$ Taking this
relation into account, it is straightforward to get the spectrum of the
canonical momentum, that is
\begin{equation}
\frac{d}{dt}p=-m\omega _{0}^{2}x\Rightarrow S_{p}\left( \omega \right) =
\frac{m^{2}\omega _{0}^{4}}{\omega ^{2}}S_{x}\left( \omega \right) =
\frac{\hbar m\tau \omega _{0}^{4}\omega }{\pi \left[ \left( \omega
_{0}^{2}-\omega ^{2}\right) ^{2}+\tau ^{2}\omega ^{6}\right] }.
\label{canonmomentum}
\end{equation}
Hence we get
\[
\left\langle p^{2}\right\rangle =m^{2}\omega _{0}^{4}\int_{0}^{\infty
}\omega ^{-2}S_{x}\left( \omega \right) d\omega =\frac{m\hbar \omega _{0}}{2}
\Rightarrow \frac{\left\langle p^{2}\right\rangle }{2m}=
\frac{1}{4}\hbar \omega _{0},
\]
in the limit $\tau \rightarrow 0.$ We see that in that limit the kinetic
energy defined from the canonical momentum agrees with the one defined from
the velocity, eq.$\left( \ref{2.4}\right) .$ However the agreement is no
longer true for finite $\tau .$ Furthermore, the energy defined from the
velocity is divergent (a cut-off was needed), whilst the one derived from
the canonical momentum is finite.
In order to fully define the stationary state of the oscillator immersed in
the ZPF it is necessary to get the probability distribution of the energy,
not just the mean value. This is achieved by taking into account that the
assumed Gaussian character of the ZPF implies that the distributions of
positions and velocities of the oscillator are also Gaussian. This fixes
completely the probability distribution of the positions to be
\begin{equation}
W\left( x\right) dx=\sqrt{\frac{m\omega _{0}}{\pi \hbar }}\exp \left[
-\frac{m\omega _{0}x^{2}}{\hbar }\right] dx, \label{Wx}
\end{equation}
which is normalized and consistent with eq.$\left( \ref{2.3}\right) .$
Similarly, the distribution of velocities is
\begin{equation}
W\left( v\right) dv=\sqrt{\frac{m}{\pi \hbar \omega _{0}}}\exp \left[
-\frac{mv^{2}}{\hbar \omega _{0}}\right] dv, \label{Wv}
\end{equation}
which is also normalized and consistent with eq.$\left( \ref{2.4}\right) .$
It also follows that the distribution of energies, $U$, is exponential, that
is
\begin{equation}
W\left( U\right) dU=\frac{2}{\hbar \omega _{0}}\exp \left(
-\frac{2U}{\hbar \omega _{0}}\right) dU,\qquad U\geqslant 0. \label{WE}
\end{equation}
The latter is a consequence of the fact that for small $\tau ,$ eq.$\left(
\ref{gamma}\right) ,$ the motion of the oscillator is almost classical, so
that the mean kinetic energy equals the mean potential energy. Then the
total mean energy is twice the mean potential energy, whence eq.$\left( \ref
{Wx}\right) $ leads to eq.$\left( \ref{WE}\right) .$
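The chain of reasoning from the Gaussian marginals to the exponential energy
law is easily verified by sampling (a sketch in natural units
$\hbar=m=\omega_0=1$; NumPy assumed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 100000
x = rng.normal(0.0, np.sqrt(0.5), N)  # <x^2> = hbar/(2 m w0) = 1/2
v = rng.normal(0.0, np.sqrt(0.5), N)  # <v^2> = hbar w0/(2 m) = 1/2
U = 0.5*x**2 + 0.5*v**2

print("<U> =", U.mean())              # -> 0.5 = hbar*w0/2
# eq. (WE) predicts P(U > u) = exp(-2u/(hbar*w0)) = exp(-2u)
print((U > 1.0).mean(), "vs", np.exp(-2.0))
\end{verbatim}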
Calculating in eqs.$\left( \ref{2.3}\right) $ to $\left( \ref{energy}\right)
$ the corrections due to the finite value of the parameter $\tau ,$
eq.$\left( \ref{gamma}\right) ,$ is straightforward although
lengthy\cite{S74},\cite{dice} and it will not be reproduced here. A relevant
point is that the correction is not analytical in $\tau $ (or in the fine
structure constant $\alpha ),$ but the leading term agrees with the radiative
corrections of quantum electrodynamics (Lamb shift). An advantage of the SED
calculation is that the radiative corrections (to the nonrelativistic
treatment) may be obtained exactly, whilst in quantum electrodynamics the
required perturbative techniques allow only an expansion in powers of
$\tau $ (or $\alpha ),$ once an ultraviolet cut-off is introduced. In any
case the radiative corrections depend on the high frequency region of
integrals like eq.$\left( \ref{oscilspectrum}\right) ,$ where the
non-relativistic approximation breaks down. Therefore the calculation has a
purely academic interest.
In summary, in SED the calculation in the limit $\tau \rightarrow 0$, eqs.$\left( \ref{2.3}\right) $ and $\left( \ref{2.4}\right) ,$ corresponds to the quantum mechanical oscillator whilst the corrections, which are functions of $\tau \omega _{0},$ correspond to the radiative corrections of quantum electrodynamics.
\subsection{Use of the ``commutator'' of stationary stochastic processes}
Here I will get again the energy of the stationary state of the harmonic oscillator in SED, using a new method closer to the quantum one. I start by defining the ``commutator of two stationary stochastic processes'' as follows. Given two stationary stochastic processes, $x(t)$ and $y(t),$ I shall define the \emph{commutator }of these processes, $\left[ x\left( t\right) ,y\left( t^{\prime }\right) \right] ,$ as $2i$ times the Hilbert transform of the \emph{crosscorrelation}, $\left\langle x\left( t\right) y\left( t^{\prime }\right) \right\rangle .$ If $g(u)$ is the Hilbert transform of $f(t),$ the pair is defined by
\[
g(u)=\frac{1}{\pi }P\int_{-\infty }^{\infty }f(t)\frac{1}{u-t}dt,\qquad f\left( t\right) =\frac{1}{\pi }P\int_{-\infty }^{\infty }g\left( u\right) \frac{1}{u-t}du,
\]
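A minimal discrete check of this pair can be done in Python (scipy's \texttt{hilbert} returns the analytic signal $f+iH[f]$); for $f(t)=\cos t$ the transform defined above gives $g=\sin t$:
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

# 100 exact periods of cos(t); the FFT-based Hilbert transform is then clean.
t = np.linspace(0.0, 200*np.pi, 2**16, endpoint=False)
f = np.cos(t)
g = np.imag(hilbert(f))                # H[f] for the sampled signal
print(np.max(np.abs(g - np.sin(t))))   # ~ 1e-12: g matches sin(t)
\end{verbatim}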
where P means principal part. For stationary processes both the commutator and the crosscorrelation are functions of the difference of times, $t^{\prime }-t$, the former odd and the latter even. Thus
\[
\left[ x\left( t\right) ,y\left( t^{\prime }\right) \right] =-\left[ y\left(
t^{\prime }\right) ,x\left( t\right) \right] .
\]
In the particular case $x\left( t\right) =y\left( t\right) $ both the
selfcorrelation and the commutator may be related to the spectrum, $S_{x}\left( \omega \right) ,$ as follows
\begin{eqnarray}
\int_{0}^{\infty }S_{x}\left( \omega \right) \exp \left[ i\omega \left(
t^{\prime }-t\right) \right] d\omega &=&\left\langle x\left( t\right)
x\left( t^{\prime }\right) \right\rangle +\frac{1}{2}\left[ x\left( t\right)
,x\left( t^{\prime }\right) \right] , \nonumber \\
\left\langle x\left( t\right) x\left( t^{\prime }\right) \right\rangle
&=&\int_{0}^{\infty }S_{x}\left( \omega \right) \cos \left[ \omega \left(
t^{\prime }-t\right) \right] d\omega , \nonumber \\
\left[ x\left( t\right) ,x\left( t^{\prime }\right) \right]
&=&2i\int_{0}^{\infty }S_{x}\left( \omega \right) \sin \left[ \omega \left(
t^{\prime }-t\right) \right] d\omega . \label{comm}
\end{eqnarray}
The real part, $\left\langle x\left( t\right) x\left( t^{\prime }\right) \right\rangle ,$ is the selfcorrelation function and $2i$ times the imaginary part, $\left[ x\left( t\right) ,x\left( t^{\prime }\right) \right] ,$ is the commutator. The factor 2 in the definition of the commutator is chosen in order to agree with the quantum definition. It is easy to see that the commutator, like the correlation, is a linear functional of the stochastic processes. That is, if $x(t),y(t)$ and $z(t)$ are stationary stochastic processes and $a,b$ complex numbers, the commutator has the property
\[
\left[ ax(t)+by(t),z(t^{\prime })\right] =a\left[ x(t),z(t^{\prime })\right]
+b\left[ y(t),z(t^{\prime })\right] .
\]
The use of the commutator, rather than the correlation, may be convenient when the relevant spectra are odd in the frequency, like the so-called $1/f$ noise or the ZPF. This is because the quantity involved in the latter integral of eq.$\left( \ref{comm}\right) $ is an even function of $\omega ,$ whilst the quantity in the former integral is odd. The integral of an even function has the advantage that it may be extended to the interval $\left( -\infty ,\infty \right) ,$ which allows an integration in the complex plane. This point is illustrated in the following treatment of the oscillator in SED.
In order to study the harmonic oscillator I shall start by getting the commutator $\left[ x\left( t\right) ,x\left( t^{\prime }\right) \right] ,$ taking eqs.$\left( \ref{oscilspectrum}\right) $ and $\left( \ref{comm}\right) $ into account. As the commutator (of a stationary process) depends only on the difference of times, I shall replace $\left\{ t,t^{\prime }\right\} $ by $\left\{ 0,t\right\} $ without loss of generality. I obtain
\[
\left[ x\left( 0\right) ,x\left( t\right) \right] =2i\hbar \int_{0}^{\infty }\frac{\tau \omega ^{3}\sin \left[ \omega t\right] d\omega }{\pi m\left[ \left( \omega _{0}^{2}-\omega ^{2}\right) ^{2}+\tau ^{2}\omega ^{6}\right] }=\hbar \int_{-\infty }^{\infty }\frac{\tau \omega ^{3}\exp \left[ i\omega t\right] d\omega }{\pi m\left[ \left( \omega _{0}^{2}-\omega ^{2}\right) ^{2}+\tau ^{2}\omega ^{6}\right] },
\]
where I have extended the integral to the full real line. The latter
integral may be performed via the method of residues. For $t>0$ we shall
take into account the two simple poles in the upper half plane of the
complex variable $\omega $, that is
\[
\omega \simeq \pm \omega _{0}+\frac{1}{2}i\tau \omega _{0}^{2}+O\left( \tau
^{2}\omega _{0}^{3}\right) .
\]
For $t<0$ we shall use the two poles in the lower half plane. The result may be written, to order $O\left( \tau \omega _{0}\right) ,$
\begin{equation}
\left[ x\left( 0\right) ,x\left( t\right) \right] =\frac{i\hbar }{m\omega _{0}}\left\{ \sin \left[ \omega _{0}t\right] +\tau \omega _{0}\frac{t}{\left| t\right| }\cos \left[ \omega _{0}t\right] \right\} \exp \left( -\frac{1}{2}\tau \omega _{0}^{2}\left| t\right| \right) . \label{xx}
\end{equation}
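The residue evaluation may be verified by direct quadrature; a Python sketch (illustrative only, units $\hbar =m=\omega _{0}=1$, writing $\left[ x\left( 0\right) ,x\left( t\right) \right] =iC(t)$ for $t>0$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

tau, t = 1e-2, 3.0

def f(w):
    # integrand of C(t)/2 = Int S_x(w) sin(w t) dw
    return tau*w**3*np.sin(w*t)/(np.pi*((1 - w**2)**2 + tau**2*w**6))

a, _ = quad(f, 0.0, 2.0, points=[1.0], limit=500)
b, _ = quad(f, 2.0, 100.0, limit=500)   # tail beyond 100 is negligible
direct = 2*(a + b)
residue = (np.sin(t) + tau*np.cos(t))*np.exp(-0.5*tau*t)   # from eq. (xx)
print(direct, residue)                  # agree to O(tau*omega0)
\end{verbatim}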
Similarly, taking eqs.$\left( \ref{canonmomentum}\right) $ and $\left( \ref{comm}\right) $ into account I may obtain the commutator of the canonical momentum, that is
\begin{equation}
\left[ p\left( 0\right) ,p\left( t\right) \right] =i\hbar m\omega _{0}\sin \left( \omega _{0}t\right) \exp \left( -\frac{1}{2}\tau \omega _{0}^{2}\left| t\right| \right) . \label{pp}
\end{equation}
In the limit $\tau \rightarrow 0$ the derivative of eq.$\left( \ref{xx}\right) $ with respect to $t$ gives
\begin{equation}
m\left[ x\left( 0\right) ,\stackrel{\cdot }{x}\left( t\right) \right] =\left[ x\left( 0\right) ,p\left( t\right) \right] =i\hbar \cos \left( \omega _{0}t\right) , \label{xp}
\end{equation}
which is the standard commutator of quantum mechanics. (We have taken into account that $p=m\stackrel{\cdot }{x}$ in the limit $\tau \rightarrow 0$.) \emph{This suggests that the commutation rules of quantum mechanics are a disguised form of taking into account the peculiar stochasticity of the theory}. That is, the stochasticity derived from a noise with spectrum eq.$\left( \ref{Espectrum}\right) .$
The correlations involving $x$ and $p$ may be obtained via the Hilbert
transforms of eqs.$\left( \ref{xx}\right) $ to $\left( \ref{xp}\right) ,$
respectively. To first order in $\tau $ we get
\begin{eqnarray}
\left\langle x\left( t\right) x\left( t^{\prime }\right) \right\rangle &=&\frac{\hbar }{2m\omega _{0}}\cos \left[ \omega _{0}\left( t-t^{\prime }\right) \right] \exp \left( -\tau \omega _{0}\left| t-t^{\prime }\right| \right) , \nonumber \\
\left\langle p\left( t\right) p\left( t^{\prime }\right) \right\rangle &=&\frac{1}{2}\hbar m\omega _{0}\cos \left[ \omega _{0}\left( t-t^{\prime }\right) \right] \exp \left( -\tau \omega _{0}\left| t-t^{\prime }\right| \right) , \nonumber \\
\left\langle x\left( t\right) p\left( t^{\prime }\right) \right\rangle &=&\frac{1}{2}\hbar \sin \left[ \omega _{0}\left( t-t^{\prime }\right) \right] \exp \left( -\tau \omega _{0}\left| t-t^{\prime }\right| \right) . \label{corx}
\end{eqnarray}
Hence it is possible to reproduce the results eqs.$\left( \ref{2.3}\right) $
to $\left( \ref{energy}\right) .$
\subsection{Comparison with quantum mechanics}
The probability distributions, eqs.$\left( \ref{Wx}\right) $ and $\left( \ref
{Wv}\right) ,$ of positions and velocities obtained in the limit $\tau
\rightarrow 0$ agree with the quantum predictions. In contrast in quantum
mechanics the energy of the oscillator in the ground state is assumed to be
sharp, therefore in disagreement with eq.$\left( \ref{WE}\right) .$ Thus it
seems as if SED contradicts the quantum predictions for the harmonic
oscillator. The conclusion is not so straightforward because there are
subtleties which I will explain in the following.
Firstly I mention that the conflict between the quantum prediction and eq.$\left( \ref{WE}\right) $ is an example of the general argument used by John von Neumann\cite{von Neumann} in his celebrated theorem stating that hidden variables theories are impossible. The 1932 theorem of von Neumann practically stopped any research in hidden variables theories until Bell's article of 1966\cite{BellRMP}. J. von Neumann starts with the assumption that any linear relation between quantum observables should correspond to a similar relation between the possible (dispersion free) values in a hypothetical hidden variables theory. In our case the energy $U$ is a linear combination of $v^{2}$ and $x^{2}.$ Now as the energy predicted by quantum mechanics, $U=\hbar \omega _{0}/2,$ is sharp, any pair of values of $v^{2}$ and $x^{2}$ in the hidden variables theory should fulfil
\begin{equation}
m(v^{2}+\omega _{0}^{2}x^{2})=\hbar \omega _{0}, \label{linear}
\end{equation}
which is not compatible with the distributions eqs.$\left( \ref{Wx}\right) $ and $\left( \ref{Wv}\right) $ (for instance the possible value $v^{2}=2\hbar \omega _{0}/m$ is incompatible with eq.$\left( \ref{linear}\right) $, if we assume $x^{2}\geq 0).$ Bell's rebuttal to von Neumann was to point out that the contradiction only arises when two of the quantum observables do not commute, and in this case the measurement of the three observables should be made in, at least, two different experiments. Thus a contextual hidden variables theory is possible, that is a theory where it is assumed that the value obtained in the measurement depends on both the state of the observed system and the experimental context.
In our case the apparent contradiction between eq.$\left( \ref{WE}\right) $ and the quantum prediction of a sharp energy disappears if we take into account how the energy of a state is defined \emph{operationally} (i.e., how it may be measured). In our model the ground state corresponds to a dynamical equilibrium between the system (the oscillator) and the ZPF. Checking whether a dynamical equilibrium exists requires a long time, ideally an infinite time. If we define the energy of the oscillator as the average over an infinite time, it would be obviously sharp. In fact the probability distribution of the average energies over a time interval $\Delta t$ will have a smaller dispersion the greater $\Delta t$ is, and it will be dispersion free in the limit $\Delta t\rightarrow \infty .$ Thus it is natural to assume that the ground state energy as defined by quantum mechanics corresponds to measurements made over infinitely long times. This fits very well with the energy-time uncertainty relation
\begin{equation}
\Delta E\Delta t\geq \hbar /2, \label{energytime}
\end{equation}
which shows that the measured energy does possess a dispersion $\Delta E$ if the measurement involves a finite time $\Delta t$. In summary, the energy of the ground state of any system is sharp because it is (implicitly) defined as an average over an infinite time.
In order to make the argument quantitative, let us assume that a measurement of the energy is made lasting a time $T$, and let us call $U_{T}$ the value obtained. We should identify that value with the average of the energy during a time interval $T$, that is
\begin{equation}
U_{T}=\frac{1}{2T}\int_{0}^{T}\left[ m\omega _{_{0}}^{2}x\left( t\right)
^{2}+\frac{1}{m}p\left( t\right) ^{2}\right] dt. \label{VT}
\end{equation}
The interesting quantity is the fluctuation of that energy, that is
\[
\Delta U_{T}=\sqrt{\left\langle U_{T}^{2}\right\rangle -\left\langle U_{T}\right\rangle ^{2}},\qquad \left\langle U_{T}\right\rangle =\frac{1}{2}m\omega _{0}^{2}\left\langle x\left( t\right) ^{2}\right\rangle +\frac{1}{2m}\left\langle p\left( t\right) ^{2}\right\rangle .
\]
Taking eq.$\left( \ref{VT}\right) $ into account we get
\[
\left\langle U_{T}^{2}\right\rangle =\frac{1}{4T^{2}}\int_{0}^{T}dt\int_{0}^{T}\left[ m^{2}\omega _{0}^{4}\left\langle x\left( t\right) ^{2}x\left( t^{\prime }\right) ^{2}\right\rangle +\omega _{0}^{2}\left\langle x\left( t\right) ^{2}p\left( t^{\prime }\right) ^{2}\right\rangle +\omega _{0}^{2}\left\langle p\left( t\right) ^{2}x\left( t^{\prime }\right) ^{2}\right\rangle +\frac{1}{m^{2}}\left\langle p\left( t\right) ^{2}p\left( t^{\prime }\right) ^{2}\right\rangle \right] dt^{\prime }.
\]
Now, the stochastic process $x(t)$ being Gaussian, we have
\[
\left\langle x\left( t\right) ^{2}x\left( t^{\prime }\right)
^{2}\right\rangle =\left\langle x\left( t\right) ^{2}\right\rangle
\left\langle x\left( t^{\prime }\right) ^{2}\right\rangle +2\left\langle
x\left( t\right) x\left( t^{\prime }\right) \right\rangle ^{2},
\]
and similarly for the other two correlations. Hence, taking into account
that the correlations depend only on the difference of times, we have
\[
\Delta U_{T}=\sqrt{\frac{1}{2T}\int_{0}^{T}\left[ m^{2}\omega
_{_{0}}^{4}\left\langle x\left( 0\right) x\left( t\right) \right\rangle
^{2}+2\omega _{_{0}}^{2}\left\langle x\left( 0\right) p\left( t\right)
\right\rangle ^{2}+\frac{1}{m^{2}}\left\langle p\left( 0\right) p\left(
t\right) \right\rangle ^{2}\right] dt}.
\]
Inserting here the correlations eqs.$\left( \ref{corx}\right) $ we get
\[
\Delta U_{T}=\hbar \frac{1-\exp \left( -T\omega _{0}^{2}\tau \right) }{2T\omega _{0}\tau }\simeq \frac{\hbar }{2T\omega _{0}\tau }\gg \frac{\hbar }{2T},
\]
the latter approximate equality being valid for $T\omega _{0}^{2}\tau \gg 1$, which fits in with the quantum prediction eq.$\left( \ref{energytime}\right) .$
For very short times the energy fluctuation, $\Delta U_{T},$ reaches the limit $\frac{1}{2}\hbar \omega _{0},$ again compatible with eq.$\left( \ref{energytime}\right) $ and with the prediction, eq.$\left( \ref{WE}\right) ,$ of both quantum mechanics and SED.
It is like magic how the quantum formalism leads to results, plausible within a realistic model like SED, via a rather surprising path. In fact the state vector of the ground state of any system is an eigenstate of the Hamiltonian, which implies a nil dispersion of the state energy, but the uncertainty relation leads in practice to some uncertainty in any actual measurement. In SED the ground state of a physical system corresponds to a dynamical equilibrium and the instantaneous energy is a badly defined concept; thus the distribution eq.$\left( \ref{WE}\right) $ just derives from the (classical) definition of total energy in terms of positions and momenta, but it does not possess any operational (measurable) meaning.
Up to here I have found the stationary solution of the linear eq.$\left( \ref{ode}\right) ;$ a more general solution is obtained combining it with the general solution of the homogeneous equation, that is
\[
\stackrel{..}{x}+\omega _{0}^{2}x+\tau \omega _{0}^{2}\stackrel{.}{x}=0\Rightarrow x\simeq A\cos \left( \omega _{0}t+\phi \right) \exp \left( -\frac{1}{2}\tau \omega _{0}^{2}t\right) ,
\]
where I have approximated $\stackrel{...}{x}\simeq -\omega _{0}^{2}\stackrel{.}{x}$ and have neglected a small shift, of order $\tau ,$ of the frequency $\omega _{0}.$ Hence, taking eq.$\left( \ref{Wx}\right) $ into account, we see that the solution of eq.$\left( \ref{ode}\right) $ leads to a time dependent probability distribution of positions, namely
\begin{equation}
W\left( x,t\right) \simeq \sqrt{\frac{m\omega _{0}}{\pi \hbar }}\exp \left[ -\frac{m\omega _{0}}{\hbar }\left[ x-A\cos \left( \omega _{0}t+\phi \right) \exp \left( -\frac{1}{2}\tau \omega _{0}^{2}t\right) \right] ^{2}\right] , \label{general}
\end{equation}
which contains two integration constants, $A$ and $\phi .$ It must be stressed that this expression for the probability density results from eq.$\left( \ref{ode}\right) $ and the ZPF spectrum eq.$\left( \ref{Espectrum}\right) $ with the approximation of putting $\tau \rightarrow 0$ except in the exponential decay. Also it is obvious that eq.$\left( \ref{general}\right) $ is not the most general solution; for instance we might find the fundamental solution corresponding to an initial condition
\[
W\left( x,0\right) =\delta \left( x\right) ,
\]
but it is not of much interest. It may be seen that when $\tau =0$ the probability density eq.$\left( \ref{general}\right) $ fully agrees with the density associated to the coherent states of quantum mechanics, whilst the expression for finite $\tau $ contains the most relevant contribution of the radiative corrections of quantum electrodynamics to these states.
\subsection{The free particle in SED. Is quantum noise dissipative?}
Here I will study the free charged particle immersed in ZPF and, for the sake of clarity, I will compare it with the free particle in classical blackbody radiation (Rayleigh-Jeans law). The spectrum of the particle's coordinate may be got taking into account that the ZPF (the Rayleigh-Jeans law) corresponds to $\frac{1}{2}\hbar \omega _{0}$ ($kT$) per normal mode of the radiation. Thus the replacement $\hbar \omega _{0}\rightarrow 2kT$ leads from eq.$\left( \ref{oscilspectrum}\right) $ to the spectrum corresponding to the oscillator immersed in classical thermal radiation at a temperature $T$. Hence the free particle spectrum is obtained by putting $\omega _{0}=0,$ that is
\begin{equation}
S_{x}\left( \omega \right) =\frac{2kT\tau }{\pi m\omega ^{2}\left( 1+\tau ^{2}\omega ^{2}\right) }. \label{1.5}
\end{equation}
Now from the spectrum it is easy to get the position dispersion, $\Delta x,$
as a function of time, $\Delta t.$ It is\cite{dice}
\begin{equation}
\Delta x^{2}=\left\langle \left[ x(t+\Delta t)-x(t)\right] ^{2}\right\rangle
=2\left\langle x\left( t\right) ^{2}\right\rangle -2\left\langle
x(t)x(t+\Delta t)\right\rangle =2\int_{0}^{\infty }S_{x}\left( \omega
\right) \left[ 1-\cos (\omega \Delta t)\right] d\omega . \label{1.5a}
\end{equation}
Similarly we may get the velocity dispersion substituting $\omega
^{2}S_{x}\left( \omega \right) $ for $S_{x}\left( \omega \right) $ in the
integral eq.$\left( \ref{1.5a}\right) $. The integrals are convergent and
trivial, but I shall give only the results for large $\Delta t$ which are
most interesting, that is
\begin{equation}
\Delta x^{2}\simeq \frac{2\tau kT}{m}\Delta t,\qquad \Delta v^{2}\simeq \frac{kT}{m}. \label{1.6a}
\end{equation}
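The diffusive growth may be checked numerically; a Python sketch (illustrative only, units $m=kT=1$ and an arbitrary small $\tau $):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m, kT, tau, dt = 1.0, 1.0, 1e-3, 10.0

def S_th(w):
    # free-particle spectrum in thermal radiation, eq. (1.5)
    return 2*kT*tau/(m*np.pi*w**2*(1 + tau**2*w**2))

# Delta x^2 = 2 Int S(w) (1 - cos(w dt)) dw, eq. (1.5a); tail beyond 100 negligible
val, _ = quad(lambda w: S_th(w)*(1 - np.cos(w*dt)), 0.0, 100.0, limit=500)
print(2*val, 2*tau*kT*dt/m)   # both ~ 0.02, eq. (1.6a)
\end{verbatim}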
We see that the velocity dispersion of a free particle becomes, after some time, a constant corresponding to the kinetic energy $kT/2$ (this is the equipartition of the energy of classical statistical mechanics). On the other hand the proportionality between the mean square displacement and the time is typical of Brownian motion. For our purposes of comparison with the free particle in ZPF, the relevant result is that as time elapses the particle loses any memory of the initial position and velocity. In addition there is an important fluctuation in velocity which implies energy dissipation (by radiation), a well known fact for an accelerated charged particle. All these features may be summarized saying that ``classical thermal noise is dissipative''.
In sharp contrast the quantum noise, ZPF, with spectrum eq.$\left( \ref{1.3}\right) ,$ may be said to be non-dissipative. The reason is that a random radiation whose spectrum is Lorentz invariant cannot change the velocity of a particle on the average, although it may increase the velocity dispersion. The quantum noise spectrum may be got from eq.$\left( \ref{oscilspectrum}\right) $ putting $\omega _{0}=0$, that is
\begin{equation}
S_{x}\left( \omega \right) =\frac{\hbar \tau }{\pi m\left[ \omega +\tau ^{2}\omega ^{3}\right] }. \label{1.7}
\end{equation}
Then a calculation similar to those leading to eqs.$\left( \ref{1.6a}\right) $ gives
\begin{equation}
\Delta x^{2}=\left\langle \left[ x(t+\Delta t)-x(t)\right] ^{2}\right\rangle =\frac{2\hbar \tau }{\pi m}\int_{0}^{\infty }\left( \omega +\tau ^{2}\omega ^{3}\right) ^{-1}\left[ 1-\cos (\omega \Delta t)\right] d\omega . \label{1.8}
\end{equation}
The integral is convergent but the exact expression is involved. The most relevant results may be summarized as follows. The position dispersion, $\Delta x,$ increases rapidly for small $\Delta t$ but more slowly for large $\Delta t,$ that is
\begin{equation}
\Delta x^{2}\sim \frac{2\hbar \tau }{\pi m}\left[ C+\ln \left( \frac{\Delta t}{\tau }\right) \right] ,\qquad \Delta t\gg \tau , \label{1.9}
\end{equation}
where $C=0.577\ldots $ is the Euler constant. The (canonical) momentum has no dispersion, as may be seen from eq.$\left( \ref{canonmomentum}\right) $ which gives a nil spectrum when we put $\omega _{0}=0.$ This agrees with the quantum prediction that the momentum of a free particle is a constant. A similar calculation for the velocity dispersion gives an ultraviolet divergent integral, which I might make convergent by introducing a cut-off frequency $\omega _{c}.$ Thus we would get
\begin{equation}
\Delta v^{2}=\frac{2\hbar \tau }{\pi m}\int_{0}^{\omega _{c}}\frac{\omega d\omega }{1+\tau ^{2}\omega ^{2}}\left[ 1-\cos (\omega \Delta t)\right] \sim \frac{2\hbar }{\pi m\tau }\ln \left( \omega _{c}\tau \right) ,\qquad \Delta t\gg \tau .
\label{1.10}
\end{equation}
This is independent of $\Delta t$ but greater than the velocity of light
squared, which means that our calculation is nonsense. A correct calculation
would require a relativistic theory and it will not be made here because
probably SED is not valid in that domain.
The results obtained for the free particle in SED give some hints for an intuitive picture of five quantum phenomena for which the standard formalism does not offer any physical model. They are: 1) the electron looks like an extended object, 2) the zitterbewegung of Dirac's particles, 3) the spin, 4) the fact that quantum noise does not produce loss of memory, 5) quantum noise looks nondissipative.
\emph{Extended electron and zitterbewegung. }Eqs.$\left( \ref{1.9}\right) $ and $\left( \ref{1.10}\right) $ offer a model of the electron immersed in ZPF as consisting of a point charged particle which performs a rapid random motion with relativistic velocity, but (almost) remains within a region of size
\begin{equation}
\Delta x\sim \sqrt{\frac{4\hbar \tau }{3\pi m}}\sim \sqrt{\frac{\hbar e^{2}}{m^{2}c^{3}}}\sim 0.1\frac{\hbar }{mc}, \label{extended}
\end{equation}
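Explicitly, using $\tau =2e^{2}/3mc^{3}$ and the fine structure constant $\alpha =e^{2}/\hbar c\simeq 1/137$, the estimate reads (a rough order-of-magnitude check)
\[
\frac{4\hbar \tau }{3\pi m}=\frac{8\hbar e^{2}}{9\pi m^{2}c^{3}}=\frac{8\alpha }{9\pi }\left( \frac{\hbar }{mc}\right) ^{2}\Rightarrow \Delta x\sim \sqrt{\frac{8\alpha }{9\pi }}\,\frac{\hbar }{mc}\simeq 0.05\,\frac{\hbar }{mc},
\]
of the order quoted in eq.$\left( \ref{extended}\right) .$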
that is intermediate between the classical radius and the Compton wavelength. Thus the particle looks like an extended object with rapid internal motions, which we might tentatively relate to zitterbewegung. In addition there is a rather slow (logarithmic, see eq.$\left( \ref{1.9}\right) $) diffusion of the whole object.
\emph{Spin. }If the particle is in a region with a homogeneous magnetic field there will be some coupling due to the random motion of the particle. A nonrelativistic calculation would give a very bad approximation, as said above, and it will not be attempted here. However it is plausible that the particle behaves as having intrinsic angular momentum and magnetic moment. Intrinsic means that it does not depend on whether the particle is free or bound. Indeed the results eqs.$\left( \ref{1.9}\right) $ and $\left( \ref{1.10}\right) $ are caused by the high frequency part of the spectrum of the quantum noise, which acts similarly in both the oscillator and the free particle. In fact a comparison of the free particle spectrum eq.$\left( \ref{1.7}\right) $ with the oscillator spectrum eq.$\left( \ref{oscilspectrum}\right) $ shows that the high frequency parts are identical. On the other hand, in the oscillator there is an additional contribution due to frequencies of order $\omega _{0},$ which would correspond to the orbital angular momentum and the associated magnetic moment. These arguments suggest that the intrinsic angular momentum and magnetic moment correspond to the \emph{spin} of the electron.
\emph{No loss of memory. }As said above the electron appears as an extended object which experiences a very slow diffusion in coordinate space given by eq.$\left( \ref{1.9}\right) $. In sharp contrast with typical (Brownian motion) diffusion like eq.$\left( \ref{1.6a}\right) ,$ where the memory of the initial velocity is lost, the probability density of the particle immersed in quantum noise conserves the same mean velocity for ever, as a consequence of the Lorentz invariance of the noise. Also the increase of the velocity dispersion is very slow (logarithmic).
\emph{Lack of dissipation. }The particle strongly absorbs energy from the ZPF radiation and also emits it, so that we might say that the dissipation is large. However the dissipative motion remains within the small region associated to the (apparently) extended particle. But in the motion over distances greater than that size almost no additional absorption or emission of radiation takes place, so that the quantum (ZPF) noise looks nondissipative.
The slow loss of memory and the (apparent) lack of dissipation, in sharp
contrast with the effects of thermal (white) noise, have led to the extended
belief that the quantum noise is not real but virtual.
\section{Other effects of the quantum noise}
In this section some phenomena will be discussed which may be plausibly
attributed to the quantum noise. Arguments will be presented which allow an
understanding, at least qualitative, of these phenomena.
\subsection{Heisenberg uncertainty relations}
The results eq.$\left( \ref{2.4}\right) $ may be roughly valid for the
particle in any potential well, provided that we substitute some effective
frequency, $\omega _{ef},$ for $\omega _{0}.$ It is possible to get a
relation independent of the frequency by taking into account that the mean
square velocity may be related to the mean square displacement via
\begin{equation}
\left\langle v_{x}^{2}\right\rangle \approx \omega ^{2}\left\langle \left( x-\left\langle x\right\rangle \right) ^{2}\right\rangle \equiv \omega ^{2}\Delta x^{2}, \label{2.4a}
\end{equation}
where $\omega /2\pi $ is the inverse of the period of the motion. The frequency $\omega _{ef}$ is plausibly similar to the one appearing in eq.$\left( \ref{2.3}\right) $ (both are equal for a harmonic oscillator). If we identify them we get
\begin{equation}
\Delta x^{2}\Delta p_{x}^{2}\simeq \frac{\hbar ^{2}}{4},\qquad \hbar \equiv \frac{h}{2\pi },\qquad \Delta p_{x}^{2}\equiv m^{2}\left\langle v_{x}^{2}\right\rangle . \label{2.5}
\end{equation}
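Explicitly, combining eq.$\left( \ref{2.4}\right) $ (with $\omega _{ef}\rightarrow \omega $) and eq.$\left( \ref{2.4a}\right) $, the one-line check is
\[
\Delta p_{x}^{2}=m^{2}\left\langle v_{x}^{2}\right\rangle \simeq \frac{1}{2}\hbar m\omega ,\qquad \Delta x^{2}\simeq \frac{\left\langle v_{x}^{2}\right\rangle }{\omega ^{2}}\simeq \frac{\hbar }{2m\omega }\Rightarrow \Delta x^{2}\Delta p_{x}^{2}\simeq \frac{\hbar ^{2}}{4}.
\]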
Of course our argument has been made for a charged particle, but we may
assume that other fields would contribute to the random motion of any
particle, e. g. a fluctuating metric, producing a similar effect.
This provides an intuitive interpretation of the Heisenberg uncertainty relation as follows.\textit{\ Due to the quantum noise, with the peculiar spectrum eq.}$\left( \ref{2.1}\right) ,$\textit{\ it is impossible to localize a particle in a region of size }$\Delta x$\textit{\ without the particle having a random motion with typical momentum dispersion }$\Delta p.$ Thus the uncertainty relation appears as a practical limit to the localization of particles in phase space, rather than a fundamental principle of ``uncertainty'' as in quantum mechanics. However in practice the difference is less relevant than it may appear. For instance, as all measuring devices are immersed in the universal noise, the interaction of a device with a microscopic system has a random character which necessarily leads to a ``disturbance induced by the measurement''. This fact may explain the ``Heisenberg microscope'' and other effects associated to the uncertainty relations.
As is well known the Heisenberg relations allow estimating the size and energy of the ground state of any quantum system. Thus these properties may be interpreted intuitively as due to the fact that all systems are immersed in the universal quantum noise, a noise consisting of electromagnetic radiation but also of radiation of the other force fields present in nature, in particular metric fluctuations.
\subsection{Entanglement}
As is well known the concept of entanglement was introduced by
Schr\"{o}dinger in his celebrated ``cat paradox'' paper\cite{Schrodinger}.
There he claimed that entanglement is the characteristic trait of quantum
mechanics. Entanglement is a quantum property of systems with several
degrees of freedom, which appears when the total state vector cannot be
written as a product of vectors associated to one degree of freedom each. In
formal terms a typical entangled state fulfils
\begin{equation}
\mid \psi \left( 1,2\right) \rangle =\sum c_{mn}\mid \psi _{m}\left(
1\right) \rangle \mid \psi _{n}\left( 2\right) \rangle , \label{entangled}
\end{equation}
where $1$ and $2$ correspond to two different degrees of freedom, usually
belonging to different subsystems. The essential condition is that the state
eq.$\left( \ref{entangled}\right) $ cannot be written as a product, that is
the sum cannot be reduced to just one term. Entanglement appears as a
specifically quantum form of correlation, mysterious because nobody has been
able to define it in common words, without recourse to the abstract quantum
formalism. Here I propose that entanglement is just a correlation which
involves the quantum noise.
In recent times entanglement has been the subject of intense study, and a resource for many applications, especially in the domain of quantum information. In this case the relevant entanglement usually involves spin or polarization. Nevertheless entanglement is quite common in nonrelativistic quantum mechanics. Indeed most wave functions of many-particle systems present entanglement. Here I will illustrate, with a simple example, how the entanglement may be understood as a correlation induced by the quantum noise acting on two different places.
I shall study the entanglement which appears in the London theory of the van
der Waals forces. For the sake of clarity I will consider a very simple
example, namely two one-dimensional oscillating electric dipoles. Each
dipole consists of a positively charged particle at rest and an oscillating
particle (which I will name electron) with mass $m$ and charge $e$. I shall
work the model with the techniques of stochastic electrodynamics (SED).
I start by deriving the equations of motion from the Hamiltonian
\begin{equation}
H=\frac{p_{1}^{2}}{2m}+\frac{1}{2}m\omega _{0}^{2}x_{1}^{2}+\frac{p_{2}^{2}}{2m}+\frac{1}{2}m\omega _{0}^{2}x_{2}^{2}-Kx_{1}x_{2}, \label{dip}
\end{equation}
where $x_{1}$ ($x_{2}$) is the position of the electron of the first (second) dipole with respect to the equilibrium position. The positive parameter $K$ depends on the distance between the dipoles, but the dependence is irrelevant for our purposes. (For a more complete study of this problem within SED see Ref.\cite{dice}.) The quantum calculation is simple using perturbation theory. It is convenient to work in the coordinate representation, that is in terms of wavefunctions rather than state vectors. The wavefunction correct to first order in perturbation theory is
\begin{equation}
\psi \left( x_{1},x_{2}\right) =\left( 1+\frac{K^{2}}{4m^{2}\omega _{0}^{4}}\right) ^{-1/2}\left[ \psi _{0}\left( x_{1}\right) \psi _{0}\left( x_{2}\right) +\frac{K}{2m\omega _{0}^{2}}\psi _{1}\left( x_{1}\right) \psi _{1}\left( x_{2}\right) \right] , \label{1.2a}
\end{equation}
where $\psi _{0}$ ($\psi _{1}$) is the ground (first excited) wavefunction of the oscillator. The interaction energy to lowest order is
\begin{equation}
E_{int}=-\frac{K^{2}\hbar }{2m^{2}\omega _{0}^{3}}. \label{1.2f}
\end{equation}
The joint probability density of the coordinates $x_{1},x_{2}$ is given by the modulus squared of eq.$\left( \ref{1.2a}\right) $, that is
\begin{equation}
\rho \left( x_{1},x_{2}\right) =\frac{m\omega _{0}}{\pi \hbar }\left( 1+\frac{K^{2}}{4m^{2}\omega _{0}^{4}}\right) ^{-1}\left( 1+\frac{Kx_{1}x_{2}}{\hbar \omega _{0}}\right) ^{2}\exp \left[ -\frac{m\omega _{0}}{\hbar }\left( x_{1}^{2}+x_{2}^{2}\right) \right] . \label{1.2d}
\end{equation}
We see that entanglement is a special form of correlation. Indeed eq.$\left( \ref{1.2d}\right) $ shows that the probability is larger when the quantities $x_{1}$ and $x_{2}$ are both positive or both negative, and it is smaller when they have opposite signs. However in quantum mechanics the correlation appears as somewhat mysterious because no explanation is given for the randomness. Furthermore the orthodox (Copenhagen) quantum interpretation forbids the obvious picture that the electrons possess a random motion which is correlated due to the interaction. Indeed we should not speak about the probability that \emph{an electron is} in the region $x_{1}>0$ and the other one \emph{is} in the region $x_{2}>0$. We are compelled to interpret eq.$\left( \ref{1.2d}\right) $ saying something like ``if we performed a measurement of the simultaneous positions of the electrons \emph{we would get} that one of them is in the region $x_{1}>0$ and the other one is in the region $x_{2}>0$''. (Simultaneous measurements are possible because the observables commute.)
In sharp contrast the interpretation offered by SED is transparent: the random motion of the electrons is induced by the ZPF, and the correlation is produced by the interaction. The SED calculation is as follows. The differential equations of motion may be obtained from eq.$\left( \ref{dip}\right) $. I shall write them including the forces due to the random ZPF and the radiation reaction, see eq.$\left( \ref{ode}\right) ,$ that is
\begin{eqnarray}
m\stackrel{..}{x}_{1} &=&-m\omega _{0}^{2}x_{1}-Kx_{2}+\frac{2e^{2}}{3c^{3}}\stackrel{...}{x}_{1}+eE_{1}\left( t\right) , \nonumber \\
m\stackrel{..}{x}_{2} &=&-m\omega _{0}^{2}x_{2}-Kx_{1}+\frac{2e^{2}}{3c^{3}}\stackrel{...}{x}_{2}+eE_{2}\left( t\right) . \label{ode2}
\end{eqnarray}
The approximation of neglecting the $x$ dependence of the field, $E(\mathbf{x},t)$, is not good if the dipoles are at a long distance (on the other hand the Hamiltonian eq.$\left( \ref{dip}\right) $ is not valid for short distances). However we may neglect the $x$ dependence within each dipole and simplify the notation writing $E_{1}\left( t\right) $ for $E\left( \mathbf{x}_{1},t\right) $. Furthermore, as we assume that the distance between dipoles is large, we shall take the stochastic processes $E_{1}\left( t\right) $ and $E_{2}\left( t\right) $ as uncorrelated.
The coupled eqs.$\left( \ref{ode2}\right) $ may be uncoupled writing new equations which are the sum and the difference of the former, and introducing the new functions
\[
x_{+}\left( t\right) =\frac{1}{\sqrt{2}}\left[ x_{1}\left( t\right) +x_{2}\left( t\right) \right] ,\qquad x_{-}\left( t\right) =\frac{1}{\sqrt{2}}\left[ x_{1}\left( t\right) -x_{2}\left( t\right) \right] ,
\]
and similar definitions for $E_{+}\left( t\right) $ and $E_{-}\left( t\right) .$ We get
\begin{eqnarray}
m\stackrel{..}{x}_{+} &=&-(m\omega _{0}^{2}-K)x_{+}+\frac{2e^{2}}{3c^{3}}\stackrel{...}{x}_{+}+eE_{+}\left( t\right) , \nonumber \\
m\stackrel{..}{x}_{-} &=&-(m\omega _{0}^{2}+K)x_{-}+\frac{2e^{2}}{3c^{3}}\stackrel{...}{x}_{-}+eE_{-}\left( t\right) , \label{ode3}
\end{eqnarray}
where the stochastic processes $E_{+}\left( t\right) $ and $E_{-}\left( t\right) $ are statistically independent. With the method used to derive eqs.$\left( \ref{2.3}\right) $ and $\left( \ref{2.4}\right) $ we get
\begin{equation}
\left\langle x_{\pm }^{2}\right\rangle =\frac{\hbar }{2m\sqrt{\omega _{0}^{2}\mp K/m}},\qquad \left\langle v_{\pm }^{2}\right\rangle =\frac{\hbar \sqrt{\omega _{0}^{2}\mp K/m}}{2m}. \label{xv}
\end{equation}
The Hamiltonian eq.$\left( \ref{dip}\right) $ may be written in terms of $x_{+}\left( t\right) $, $x_{-}\left( t\right) $ leading to
\[
H=\frac{p_{+}^{2}}{2m}+\frac{1}{2}m\omega _{0}^{2}x_{+}^{2}+\frac{p_{-}^{2}}{2m}+\frac{1}{2}m\omega _{0}^{2}x_{-}^{2}-\frac{1}{2}K\left( x_{+}^{2}-x_{-}^{2}\right) .
\]
Hence, defining $p_{\pm }=mv_{\pm },$ it is easy to get the total energy, $\left\langle H\right\rangle ,$ taking eqs.$\left( \ref{xv}\right) $ into account. The result is
\[
\left\langle H\right\rangle =\frac{\hbar }{2}\left( \sqrt{\omega _{0}^{2}-K/m}+\sqrt{\omega _{0}^{2}+K/m}\right) =\hbar \omega _{0}-\frac{K^{2}\hbar }{2m^{2}\omega _{0}^{3}}+O\left( K^{3}\right) ,
\]
in agreement with the quantum result eq.$\left( \ref{1.2f}\right) .$ The joint probability distribution of positions is Gaussian and factorizes because eqs.$\left( \ref{ode3}\right) $ are decoupled. That is
\[
\rho \left( x_{+},x_{-}\right) dx_{+}dx_{-}=\rho _{+}\left( x_{+}\right)
\rho _{-}\left( x_{-}\right) dx_{+}dx_{-}.
\]
The density $\rho _{\pm }$ should be normalized and such that it leads to eq.$\left( \ref{xv}\right) ,$ whence we get
\[
\rho _{\pm }\left( x_{\pm }\right) =\sqrt{\frac{m}{\pi \hbar }}\left( \omega _{0}^{2}\mp K/m\right) ^{1/4}\exp \left[ -\frac{m}{\hbar }\sqrt{\omega _{0}^{2}\mp K/m}\,x_{\pm }^{2}\right] .
\]
This gives agreement with the quantum prediction, eq.$\left( \ref{1.2d}\right) ,$ to leading order in $K.$
\subsection{Bose-Einstein condensation}
In the equations of motion $\left( \ref{ode2}\right) $ I have assumed that the field components, $E_{1}\left( t\right) $ and $E_{2}\left( t\right) ,$ acting upon the two particles are uncorrelated. That is a good approximation if the particles are at a distance which is large in comparison with the wavelengths, say $\lambda \simeq c/\omega _{0}$, corresponding to the typical frequencies involved. However if the distance is of that order or smaller, the field components will be correlated, which would cause a much stronger correlation between the particles' motions. In quantum-mechanical language this behaviour would correspond to the particles being ``in the same quantum state'', which is the typical feature of Bose-Einstein condensation. It is not worthwhile to pursue this idea further because I do not understand, within the picture offered by SED, the difference between bosons and fermions, which is essential for the phenomenon.
\subsection{Do oscillators possess discrete energy states? Planck's law
revisited.}
As is well known Planck discovered his celebrated law in October 1900 and presented an interpretation in terms of discrete energy states of the material oscillators on December 14th. The latter date is considered the birthday of quantum theory, as Sommerfeld put it. Here I shall show that another interpretation is possible without any energy discontinuity in the oscillators.
I shall calculate within SED the behaviour of the harmonic oscillator immersed in Planck radiation at a finite temperature, including the ZPF (which I am assuming real in this paper). The spectrum of that radiation is
\begin{equation}
S_{Planck}\left( \omega \right) =\frac{4\omega ^{2}}{3\pi c^{3}}\left[ \frac{1}{2}\hbar \omega +\frac{\hbar \omega }{\exp \left( \hbar \omega /kT\right) -1}\right] =\frac{2\hbar \omega ^{3}}{3\pi c^{3}}\coth \left( \frac{\hbar \omega }{2kT}\right) . \label{Planck}
\end{equation}
In the former expression the first term corresponds to the ZPF and the second one to the thermal part. With the same techniques used in previous sections for the derivation of eqs.$\left( \ref{2.3}\right) $ and $\left( \ref{2.4}\right) $ from eq.$\left( \ref{Espectrum}\right) ,$ we get from the Planck spectrum eq.$\left( \ref{Planck}\right) $ the following mean energy of the oscillator, in the limit $\tau \rightarrow 0$
\begin{equation}
\left\langle E\right\rangle \simeq \frac{1}{2}\hbar \omega _{0}\coth \left( \frac{\hbar \omega _{0}}{2kT}\right) . \label{P0}
\end{equation}
This relation was actually obtained by Planck and led him to the energy
quanta, an interpretation which is still standard. Essentially the argument
rests upon the assumption that the mean energy at a temperature $T$ should
be obtained in terms of the Boltzmann factor as follows
\begin{equation}
\left\langle E\right\rangle =\left[ \sum_{n=0}^{\infty }\exp \left( -\frac{E_{n}}{kT}\right) \right] ^{-1}\sum_{n=0}^{\infty }E_{n}\exp \left( -\frac{E_{n}}{kT}\right) . \label{P1}
\end{equation}
As is well known a set of energies fulfilling this and eq.$\left( \ref{P0}\right) $ is
\begin{equation}
E_{n}=\left( n+\frac{1}{2}\right) \hbar \omega _{0}, \label{P2}
\end{equation}
and there is no other solution valid for all $T$.
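The consistency of eqs.$\left( \ref{P1}\right) $ and $\left( \ref{P2}\right) $ with eq.$\left( \ref{P0}\right) $ is easily checked numerically; a small Python sketch (units $\hbar \omega _{0}=1$):
\begin{verbatim}
import numpy as np

# Boltzmann average over E_n = n + 1/2 versus (1/2) coth(1/(2 kT)).
for kT in [0.2, 1.0, 5.0]:
    E = np.arange(5000) + 0.5
    w = np.exp(-E/kT)
    boltzmann = np.sum(E*w)/np.sum(w)   # eq. (P1) with eq. (P2)
    planck = 0.5/np.tanh(0.5/kT)        # eq. (P0)
    print(kT, boltzmann, planck)        # the two columns coincide
\end{verbatim}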
The problem is that, if we accept eqs.$\left( \ref{P1}\right) $ and $\left( \ref{P2}\right) $, it seems impossible to get an intuitive picture of the oscillator. In fact all excited states of the quantum oscillator, obtained solving the Schr\"{o}dinger equation, present nodes, that is, positions which are forbidden for the oscillating particle. For instance the wavefunction and the probability density of the first excited state are
\[
\psi _{1}\left( x\right) =\left( \frac{m\omega _{0}}{\pi \hbar }\right) ^{1/4}\sqrt{\frac{2m\omega _{0}}{\hbar }}\,x\exp \left[ -\frac{m\omega _{0}}{2\hbar }x^{2}\right] \Rightarrow \rho _{1}\left( x\right) =\frac{2m\omega _{0}}{\hbar }\sqrt{\frac{m\omega _{0}}{\pi \hbar }}\,x^{2}\exp \left[ -\frac{m\omega _{0}}{\hbar }x^{2}\right] ,
\]
that is, zero probability of the particle being at the point $x=0$. It is extremely difficult to understand intuitively how a random motion may be such that the particle can be both at the right and at the left of $x=0$ but never at that point.
In my opinion the quantum excited states of the oscillator do not correspond to real physical states but to mathematical constructs useful for calculations. The case is similar to the solution of the diffusion equation, for instance for the cooling of a plate with boundaries $x=-L$ and $x=L$. Assuming that the temperature, $T$, does not depend on the coordinates $y$ and $z$, the distribution is ruled by the diffusion equation
\[
\frac{\partial T\left( x,t\right) }{\partial t}=\sigma \frac{\partial
^{2}T\left( x,t\right) }{\partial x^{2}},
\]
where $t$ is the time and $\sigma $ a constant related to the conductivity
of the medium. The solution may be easily found via a Fourier series
expansion. For instance if the initial and boundary conditions are
\[
T\left( x,0\right) =T_{0},T\left( \pm L,t\right) =T_{L}<T_{0},
\]
the result is
\[
T\left( x,t\right) =T_{L}+\sum_{n}\frac{4\left( T_{0}-T_{L}\right) }{\left( 2n+1\right) \pi }\left( -1\right) ^{n}\cos \left[ \frac{\left( 2n+1\right) \pi x}{2L}\right] \exp \left[ -\frac{\left( 2n+1\right) ^{2}\pi ^{2}\sigma t}{4L^{2}}\right] .
\]
The point is that the functions $\cos \left[ \left( 2n+1\right) \pi
x/(2L)\right] $ do not correspond to actual temperature distributions, they
are auxiliary mathematical functions. I propose that the same is true for
the solutions of the stationary Schr\"{o}dinger equation.
The interpretation of the oscillator at a finite temperature according to SED is different and quite intuitive. As in all other random fields which I have considered up to now, it is plausible to assume that the Planck radiation is a Gaussian stochastic field. Then, the relation between $x(t)$ and $E(t)$ being linear, the stochastic process $x(t)$ should be also Gaussian. Now as the energy is a quadratic function of the coordinates and velocities, the probability distribution of the oscillator energies will be an exponential function. In order to agree with the mean energy eq.$\left( \ref{P0}\right) $ that probability distribution should be
\begin{equation}
\rho \left( E\right) dE=\frac{2}{\hbar \omega _{0}}\tanh \left( \frac{\hbar \omega _{0}}{2kT}\right) \exp \left\{ -2E/\left[ \hbar \omega _{0}\coth \left( \frac{\hbar \omega _{0}}{2kT}\right) \right] \right\} dE. \label{rooscil}
\end{equation}
(I have written $dE$ rather than $dxdp,$ which is the standard volume
element in phase space, but they are equivalent modulo a factor $m$.)
Probably Planck was aware that his law eq.$\left( \ref{Planck}\right) $ was incompatible with either the use of Boltzmann's factor or the Gaussian character of the random thermal radiation. He chose to reject the latter and accept the former. I am compelled to propose the opposite in order to get a simple and clear physical model, which is the purpose of this article. That is, I must assume that thermal equilibrium corresponds to the continuous probability distribution of energies eq.$\left( \ref{rooscil}\right) $ rather than the discrete (quantum) distribution eq.$\left( \ref{P2}\right) .$
However the contradiction between the quantum and the SED distributions is less obvious than it appears if we take into account that the definition of energy is different, as was commented above when I studied the oscillator immersed in ZPF. We may assume that no contradiction with the experiments arises from the SED interpretation.
Up to here I have considered the harmonic oscillator. However in nonlinear systems the existence of some relatively long-lived excited states cannot be excluded. After all SED is not valid for nonlinear systems, as said above. Furthermore we might assume that the said long-lived excited states correspond to a resonance between one of the harmonics of the particle's motion and one mode of the ZPF. That is, we might write, instead of eq.$\left( \ref{2.4}\right) ,$ the following one
\[
\frac{1}{2}m\left\langle v^{2}\right\rangle \approx \frac{1}{4}\hbar n\omega _{0}=\frac{hn}{4T},
\]
$T$ being the period of the motion. This relation may be rewritten,
substituting a time average for the ensemble average $\left\langle
v^{2}\right\rangle ,$ as follows
\[
\int_{0}^{T}p\stackrel{\cdot }{x}dt=\frac{1}{2}nh,
\]
which agrees, except for a factor $2$, with the Bohr-Sommerfeld quantization
rule.
\subsection{Wave behaviour of particles. L. de Broglie waves}
I assume that in nature there are actual particles, like electrons, neutrons or atoms, and actual fields (waves) like the electromagnetic radiation. Thus there is no problem in understanding the localized detection of particles or the interference of waves, but there are difficulties in getting a picture of the wave behaviour of particles or the corpuscular behaviour of waves. Here some hints will be provided for a possible understanding of the particle aspects of light and the wave behaviour of particles like electrons or atoms.
The detection of individual photons in a photographic plate is due to the atomic nature of the plate. In this case saying that radiation consists of particles because it gives rise to individual blackened grains is like saying that wind is corpuscular because the number of trees falling in the forest is an integer. Of course in both cases, the photo and the forest, there is a random element. It is obvious for the wind but, as explained above, there is also a random element in the radiation: the quantum noise.
The detection process in a photon counter may be explained as follows.
Inside the detector there are systems, e. g. molecules, in a metastable
state. The arriving radiation, with a random element due to the quantum
(vacuum) noise, has from time to time sufficient intensity to stimulate the
decay of the metastable system and this gives rise to a photocount. However
the noise alone, being fluctuating, may eventually produce counts in the
absence of any signal, which are called dark counts. (Dark counts are
usually attributed to thermal fluctuations, but I claim that quantum
fluctuations may produce them also.) The counter behaves like an alarm
system. If it has low sensitivity it may not detect some relevant
perturbation, but if it is too sensitive it may be activated by accident.
The same is likely true for photon counters. This leads me to conjecture that it is not possible to manufacture detectors with 100\% efficiency but no dark counts, and that this trade-off is the origin of the so-called efficiency loophole in the optical tests of Bell's inequalities.
The hypothesis that the wave behaviour of particles derives from the existence of the quantum noise is attractive. The noise has a wave character; it consists of the ZPF or metric fluctuations. Thus we might assume that, in the interference of electrons or atoms, it is the case that the said waves interfere and some of them couple strongly to the particle, guiding it to the screen where the interference pattern appears.
Historically the idea that any particle has an associated wave was put forward by L. de Broglie. In modern quantum mechanics it is assumed that particles sometimes have a wave behaviour, but without attempting to give any clear picture which might explain that behaviour. In de Broglie's work there was a physical model: any particle has an associated wave which acts on it and is also influenced by it. This picture is usually understood as if every particle possesses one wave, an understanding reinforced by the quantitative relation between the particle's momentum, \textbf{p}, and the wavevector, \textbf{k,} of the wave proposed by de Broglie, that is
\begin{equation}
\mathbf{p}=\hbar \mathbf{k}\Rightarrow \lambda =\frac{h}{\left| \mathbf{p}\right| },
\label{Broglie}
\end{equation}
$\lambda $ being de Broglie's wavelength.
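For a feeling of the magnitudes, a two-line numerical example (Python, SI constants; the illustrative case of a $1$ eV electron):
\begin{verbatim}
# de Broglie wavelength, eq. (Broglie), lambda = h/p with p = sqrt(2 m E).
h, m_e, eV = 6.62607e-34, 9.10938e-31, 1.60218e-19
p = (2*m_e*1.0*eV)**0.5
print(h/p)   # ~ 1.23e-9 m, i.e. about 1.2 nm
\end{verbatim}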
In my opinion the picture of a unique wave associated to every particle is untenable. For instance, what happens when one particle is created or annihilated? Or how may the (extended) wave follow the (localized) particle? It is more plausible to assume that there is some background of waves in space able to interact with particles. This leads me again to the idea that the waves are, actually, those of the quantum noise. The problem is then to explain why the overwhelming interaction of the particle occurs with just one mode of the radiation, that is the one given by eq.$\left( \ref{Broglie}\right) .$ In the original presentation of de Broglie he had recourse to Lorentz transformations, but his argument was rather loose. He began with the assumption that a particle of mass $m$ is associated to a periodic motion with frequency
\begin{equation}
\omega =mc^{2}/\hbar . \label{Broglie1}
\end{equation}
Actually eq.$\left( \ref{Broglie1}\right) $ is not too strange in our treatment of the free particle in SED, where a frequency roughly like that one appears in eq.$\left( \ref{extended}\right) .$ If this is the frequency in the rest frame of the particle, the frequency seen in another frame moving with velocity $v$ should be
\[
\omega +\Delta \omega =\omega \sqrt{\frac{c+v}{c-v}}\Rightarrow \Delta \omega \simeq \omega \frac{v}{c}=\frac{mcv}{\hbar }.
\]
It is true that $c/\Delta \omega $ agrees with the wavelength $\lambda $ of eq.$\left( \ref{Broglie}\right) ,$ but this fact does not provide a clear picture of how a moving particle interacts mainly with the modes of the ZPF radiation fulfilling eq.$\left( \ref{Broglie}\right) .$
\section{Conclusions}
An intuitive picture of the quantum world would be useful, and it is possible. The starting point for that picture is to assume that quantum mechanics is a stochastic theory and that many typically quantum phenomena are due to a universal noise in the form of \emph{real} vacuum fluctuations of all the fields present in nature. We should distinguish between actual particles (e.g. electrons, neutrons, atoms) and actual fields (e.g. electromagnetic). Elaboration upon these two ingredients provides hints for reaching an intuitive picture of many allegedly purely quantum phenomena, like the Heisenberg uncertainty relations or entanglement.
\section{Introduction}
Collaborative human activities are grounded in {\em social and moral
norms}, which humans consciously and subconsciously use to guide and
constrain their behavior: they undergird human societies by prescribing what is obligatory,
permitted, prohibited, and optional \cite{brown1991human}. In doing so, they enable
effective collaboration and prevent emotional and physical harm.
Consider a restaurant kitchen where cooks and assistants perform tasks
such as passing knives and cutting vegetables. When handing over a
knife to the chef, assistants do so in a way that does not look like
they are about to stab the chef. Not only will they orient the knife
in the right way, but they should take care not to approach the chef
menacingly and without prior warning, while the chef has their back to
them. The underlying normative principle could be roughly stated as a
rule: ``If you need to hand over a potentially dangerous object with a
sharp blade, do not point it with the blade at the other person, but
rather grasp it carefully by the blade and hand it over with the bland
side or handle facing the other person''. The tacit understanding
among the kitchen staff is that everyone will abide by this
principle, thus enabling safe exchanges of knives and other
potentially dangerous objects. Failing to follow the rule will likely
result in blame and reprimand from the chef, which then has to be
addressed either by apologizing or by offering an explanation as to
why the rule violation was justified \cite{malle2014moral}.
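For illustration only, such a principle could be encoded as a simple guard predicate over a perceived action; the following Python sketch is hypothetical (its names and fields are not drawn from DIARC or any cited system):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Handover:
    object_has_blade: bool
    blade_points_at_receiver: bool
    receiver_warned: bool
    receiver_facing_giver: bool

def violates_handover_norm(a: Handover) -> bool:
    """Forbidden: presenting a bladed object blade-first, or approaching
    an unaware receiver from behind without prior warning."""
    if not a.object_has_blade:
        return False
    return a.blade_points_at_receiver or (
        not a.receiver_warned and not a.receiver_facing_giver)

print(violates_handover_norm(Handover(True, True, True, True)))    # True
print(violates_handover_norm(Handover(True, False, True, False)))  # False
\end{verbatim}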
Clearly, social and moral norms play an important functional role in
the human cognitive architecture: they are at work in perception to
detect morally charged contexts and norm violations, they are employed
during decision-making and behavior execution, and they are referred
to in communications about normative behavior and norm violations. In
other words, normative processing is deeply integrated into the human
cognitive system and affects virtually every aspect of the
architecture (from perception, to reasoning, to action, to
communication). Hence, this type of norm-based processing is also
critical for robots in many human-robot interaction scenarios (e.g.,
when helping elderly and disabled persons in assisted living
facilities, or assisting humans in assembly tasks in factories or even
the space station). Human beings expect their interactants, including
intelligent robots, to follow social and moral norms, and
disappointing those expectations will lead to impoverished
interactions at best, but can lead to emotional and physical harm in
the worst cases.
In this position paper, we will briefly describe how several
components in an integrated cognitive architecture can be used to
implement processes that are required for normative human-robot
interactions, especially in collaborative tasks where actions and
situations could potentially be perceived as threatening and thus require
a change in the course of action to mitigate the perceived threats. We
will focus on {\em affordance-based reasoning} to infer complex
affordance relationships between an agent and objects in the
environment, and {\em analogical reasoning} to decide the
appropriateness of the action plan by comparing against other past
situations.
\section{Background and Related Work}
Many in the HRI field have recognized that ethics will have to inform competent robot behavior in the social sphere. Various ethical theories and combinations thereof have been proposed, most commonly weighing utilitarian approaches against deontic frameworks (e.g., obligated or forbidden actions) \cite{abney2012robotics}. Recent work in cognitive science on human moral reasoning has yielded insights into intricate relationships between moral norms, emotions, theory of mind, and blame \cite{monroe2012morality}, \cite{greene2002and}. While autonomous social robots need not, indeed should not, recapitulate models or features of embodied human cognition for the sake of sheer similarity (e.g., reproducing ``aggression'' that clouds moral principle), it is clear that to interact competently in social space such systems will have to incorporate adept perspective taking and reason giving for their actions \cite{scheutz2015towards}. Moreover, in their dealings with people robots will also be dealing with inanimate objects (from tools to keepsakes), which can be especially charged morally when involved in collaborative tasks or other social interactions: HRI cannot ignore object affordances in its scenarios and designs for moral performance.
In terms of cognitive architecture, calls for robust moral reasoning have been building in scope and force as roles for AI and robotics, from self-driving cars and autonomous weapons systems to domestic and healthcare roles, have accelerated entry into the social sphere \cite{bello2013build}. Relatively little work on cognitive architecture has directly tackled social and moral norms, though there are some initial modeling efforts to meet that challenge \cite{sun2013moral}. MoralDM, for example, as part of the Companions architecture, bases moral decision-making on analogies with cultural narratives
\cite{dehghani2008integrated} or generalizations of stories \cite{blass2015moral} to determine the appropriate action. Recognizing how thoroughly moral norms will shape expectations and evaluations of social robots, we situate moral reasoning as an integral feature of the DIARC architecture \cite{scheutzetal07autrobot}.
\section{Components for Normative HRI}
Various architectural capabilities are required in cognitive robotic architectures for robots to become morally competent \cite{malle2014moral}. Here we focus on three key functionalities: (1) affordance inference, (2) analogical reasoning, and (3) action selection. For example, when handing over a knife, robots must pass the knife in a socially acceptable way (affordance inference), while evaluating whether the situation as a whole is socially appropriate compared with similar situations (analogical reasoning), and mitigating perceived threats by choosing alternative actions (action selection).
In prior work we have developed a computational representation and framework to reason about affordances more generally (i.e., physical, functional, aesthetic, social and ethical affordances) \cite{Sarathy2015}. We have also implemented a structure-mapping engine for analogical reasoning and have the ability to compare and score
situations for similarity in structure. Here we supplement this work with an action selection engine to reason about social and moral perceptions and select
mitigating actions.
We propose implementing these key functionalities by means of components integrated into the existing robotic DIARC architecture, which comprises components for perceptual and visual processing, navigation, action planning, and natural language processing. DIARC has been used extensively for human-robot interaction in natural language \cite{scheutzetal07autrobot}. Next, we will discuss each of these functionalities along with the architectural components needed to enact them (Figure 1).
\begin{figure}[tb!]
\centering
\includegraphics[scale=2.6]{moralcontextresolution2.png}
\caption{High-level Schematic of DIARC showing the three relevant components together with some of their connections.}
\label{figurelabel}
\vspace{-3mm}
\end{figure}
\subsection{Goal Manager (GM)}
The Goal Manager (GM) is responsible for accepting a goal, assembling an action script to satisfy this goal, and managing the execution of the script. The GM performs these functions in conjunction with the Affordance Inference Component and the Analogical Reasoning Component, which we will discuss in more detail below.
\subsection{Cognitive Affordance Inference}
We have developed a formal rules-based logical representational format
and inference algorithm for cognitive affordances, in which an
object's affordance ($A$) and its perceived features ($F$) depend on
the context ($C$). The perceived features ($F$) include color, shape, texture, relational information, and general
information obtained from the vision (or other sensory systems)
pipeline that an agent may interpret.
The context is representative of the agent's beliefs, goals,
desires, and intentions, along with certain other abstract constructs
in the agent's narrative situation.
The object's affordance ($A$) represents the types of action possibilities that
might be available to the robot at any given moment in time.
We use Dempster-Shafer (DS) theory for inferring affordance ($A$) from
object features ($F$) in contexts ($C$) \cite{Shafer1976}. DS theory is an uncertainty
processing framework often interpreted as a generalization of the
Bayesian framework.
Our cognitive affordance model also consists of a
set of affordance rules ($R$) of the form $r := f \land c \implies_{[\alpha,\beta]} a$ with $f\in F$, $c\in C$, $a\in A$, $r\in R$,
$[\alpha,\beta]\subseteq [0,1]$. Here, the confidence interval
$[\alpha,\beta]$ is intended to capture the uncertainty associated
with the affordance rule $r$ such that if $\alpha=\beta=1$ the rule is
logically true, while $\alpha=0$ and $\beta=1$ assign maximum
uncertainty to the rule. Rules can then be applied for a given
feature percept $f$ in a given context $c$ to obtain the implied
affordance $a$ under uncertainty about $f$, $c$, and the extent to
which they imply the presence of $a$.
These types of rules are very versatile, and we can employ DS-theoretic modus ponens to make uncertain deductive and abductive inferences. We have started to integrate this functionality into the DIARC architecture by means of a special affordance inference component in conjunction with the existing visual perception components, which allows us to incorporate cognitive affordance inference into the visual perception pipeline.
\subsection{Analogical Reasoning}
We use analogical reasoning to identify applicable actions that are
consistent with the surrounding context. The process proceeds as follows.
Given an encoding of the situation we make a series of analogical comparisons with other
situations. We use the Structure Mapping Engine (SME) \cite{falkenhainer1989structure}
to perform each comparison. The other situations are stored in memory and originate from prior experiences, instruction, observation, and demonstration. Each successful
analogical comparison yields a similarity score and a set of
candidate inferences. Comparing the similarity scores of each
comparison indicates which situations are most
analogous to the current situation. The candidate inferences
represent information known in the other situation that
structurally fits with the new situation. Since there is no semantic
verification of this information, a follow-on step is necessary to
check each candidate inference and determine whether it can be true in
the current situation. Included amongst the candidate inferences
may be the action for the robot to take or the perceived intent of the
action in a given context.
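As a rough illustration, the retrieval-and-verification loop can be
sketched as follows; here \texttt{compare} stands in for a single SME
comparison returning a similarity score and candidate inferences, and
\texttt{verify} for the follow-on semantic check (both names are
placeholders, not part of SME's actual interface):
\begin{verbatim}
def most_analogous(current, memory, compare, verify, top_n=3):
    # `compare(base, current)` wraps one SME comparison and returns
    # (similarity_score, candidate_inferences); `verify` checks a
    # candidate inference semantically in the current situation.
    results = []
    for base in memory:
        score, inferences = compare(base, current)
        results.append((score, inferences, base))
    if not results:
        return None, 0.0, []
    results.sort(key=lambda t: t[0], reverse=True)
    top = results[:top_n]   # the most analogous scenarios (up to three)
    best_score, inferences, best_base = top[0]
    # Candidate inferences only fit structurally; each one must
    # still be verified against the current situation.
    valid = [inf for inf in inferences if verify(inf, current)]
    return best_base, best_score, valid
\end{verbatim}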
\section{Proof-of-Concept Example}
DIARC aims for ``natural, human-like'' human-robot interaction through which a robot can deliver goal-oriented, socially appropriate behavior from exchanges in ordinary settings through natural language. To illustrate how the three components discussed above can contribute to that effort within DIARC, let us consider a robotic agent who is helping human beings at home in their kitchen. One of them asks the robot, ``Can you bring me something to cut a tomato?'' The speech and natural language systems within DIARC can parse and interpret this instruction, submitting a goal to the GM (e.g.,
$possess(human, cutwith(tomato))$). The GM will resolve this goal into a high-level action script with three sequenced parts: $find$, $pick up$, and $bring$.
\textbf{Find Object.}
Once the GM has resolved the larger goal into a hierarchical action script, each step of the action script is then further resolved into more primitive actions. The step of ``Find Object'' is resolved by turning to the Affordance Inference Component in the architecture. The Affordance Inference Component interacts with a set of affordance rules (which include physical, functional, and social
rules) stored in memory, where each rule associates an affordance
with certain perceptual features and certain contextual elements. In the kitchen-helper example, consider rule $r^{1}$ with an uncertainty interval $[0.8,1]$:
\begin{flushleft}
$r^1_{[0.8, 1]} := hasSharpEdge(O) \land domain(X,kitchen) \implies$ \\
$cutWith(X,O)$\\
\end{flushleft}
The Affordance Inference Component receives contextual information (e.g., it is in the kitchen working as a helper) from a Belief component (tasked with resolving agent beliefs and intentions) and from the GM. It also interacts with the robot's vision component and directs a visual search to look for objects in the environment that satisfy the $hasSharpEdge(O)$ perceptual predicate.
The Affordance Inference Component then applies perceptual and contextual information
(along with accompanying uncertainty masses, $m$) to determine the affordance implied by the rule, as follows:
\begin{flushleft}
$r^1_{[0.8, 1]} (m_r = 0.8) := $\\
$hasSharpEdge(\mathit{knife}) (m_f = 0.95)\land $\\
$domain(\mathit{self},kitchen) (m_c = 1.0)\implies$ \\
\line(1,0){200}\\
$cutWith(\mathit{self},\mathit{knife}) (m_a = (m_f \otimes m_c) \odot m_r = 0.76)$
\end{flushleft}
\noindent where the $\otimes$ is the DS-theoretic uncertain logic AND
operator and the $\odot$ is the DS-theoretic uncertain logic modus
ponens operator. The uncertainty interval for the inferred affordance can then be
computed as $[0.76, 1]$. The Affordance Inference Component will then perform this analysis for
each of the other rules in the set to determine uncertainty intervals
for all the implied affordances.
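To make the interval arithmetic concrete, here is a minimal sketch of the
computation above; it assumes that both $\otimes$ and $\odot$ reduce to
component-wise multiplication of the interval bounds, which reproduces the
numbers of this example (the general DS-theoretic operators are more
involved):
\begin{verbatim}
def ds_and(a, b):
    # Uncertain-logic AND on confidence intervals [alpha, beta],
    # taken here as component-wise multiplication.
    return (a[0] * b[0], a[1] * b[1])

def ds_mp(premise, rule):
    # DS-theoretic modus ponens: combine the premise interval
    # with the rule's confidence interval.
    return (premise[0] * rule[0], premise[1] * rule[1])

f = (0.95, 1.0)   # hasSharpEdge(knife)
c = (1.00, 1.0)   # domain(self, kitchen)
r = (0.80, 1.0)   # rule r^1
print(ds_mp(ds_and(f, c), r))   # -> (0.76, 1.0)
\end{verbatim}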
Once the Affordance Inference Component has found a suitable object with the required affordances, it will have completed the ``Find Object'' action step and the GM will then advance the action script to the next step, ``Pick up the Object.''
\textbf{Pick up Object.}
To perform this action, the GM must generate an action sequence to move near the object and then determine appropriate grasp affordances for the object in conjunction with the Affordance Inference Component.
The Affordance Inference Component is capable of handling not only functional affordances, as described above with respect to finding objects to cut with, but also more complex social and aesthetic affordances. For example, to properly hand over a knife, it is preferable to grasp the knife by its blade and orient its handle towards the receiver. But it is acceptable to grasp the handle if the blade is dirty or being used. The Affordance Inference Component takes these considerations into account using rules of the form described above and infers appropriate grasp affordances in the current context.
Consider the situation where the knife is dirty and the Affordance Inference Component determines that the knife is graspable by the handle. The GM selects this as a suitable grasp affordance and initiates an action sequence to execute the ``Pick up Object" action.
Once the robot has picked up the object, it will have completed the ``Pick up Object'' action step and the GM will then advance the action script to the next step, ``Bring Object to Human.''
\textbf{Bring Object to Human.}
The action of bringing an object to a human is decomposed into a simple action script that has the robot
translocating itself from its current location to the location of the human and then handing the object to
the human (handing over the object will itself be decomposed into more primitive actions).
Given this action script, we check that the behavior of the robot is morally and socially acceptable.
These checks are made before each action is executed (including the actions described above), but for
simplicity we discuss these checks only here.
We focus on verifying that the action would be perceived as a morally acceptable behavior. This is done
by drawing analogical comparisons with known situations and checking that similar situations are not
morally unacceptable. If the action script to be executed next may be objectionable, then the robot tries
to modify the script and rechecks that the new script is acceptable. Algorithm~\ref{alg:moral} describes
this process.
\begin{algorithm}
\caption{Moral Perception Acceptability algorithm}\label{alg:moral}
\begin{algorithmic}[1]
\Procedure{CheckMoralPercept}{$s$}
\State $m \gets$ similarScenarios($s$)
\If{acceptable($m[0]$)}
\State \Return $s$
\Else
\While{modifiable($s$)}
\State $t \gets$ nextModifiedActionScript($s$)
\State $u \gets$ \Call{CheckMoralPercept}{$t$}
\If{$u \neq$ error}
\State \Return $u$
\EndIf
\EndWhile
\State \Return error
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
The check starts with the current scenario $s$, which includes the action script to be
executed and information about the agents and objects involved. Given this
description of the scenario, we find a set of \emph{similarScenarios} that are analogous
to the current one (line 2). To compile this set of scenarios, we use SME to perform
analogical comparisons of the current scenario with other known
scenarios and return the most similar ones (up to three). Each scenario with
which the current one is compared may describe a normative action that is
taken or an action that is impermissible in the given context. Assuming
the robot does not know of any scenario that is a \emph{literal similarity} --
such as approaching a person with a knife in a kitchen -- we rely on
analogous scenarios -- ones that are similar in structure but may differ in
content. Consider the three scenarios described in Fig.~\ref{fig:scen}.
Initially, the similarity scores of the three scenarios are 0.4465, 0.4285, and 0.2910, respectively.
The scores estimate the quality of the analogy but do not lie on any particular scale.
If the most similar scenario is morally acceptable then the current scenario does
not resemble any moral violations and the robot may proceed with executing the script (line 3).
However, our most similar scenario, the BBS, represents a moral violation.
In this case, the algorithm proceeds by considering modifications to the action script (lines 6--7),
and then repeating the moral check on each updated scenario (line 8).
Once the action script is modified to alert the human before moving towards her,
the HSS becomes the most analogous, and this scenario does not have any moral violations.
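In Python, the control flow of Algorithm~\ref{alg:moral} can be sketched as
follows; the three helper callables are placeholders for the architectural
components described above:
\begin{verbatim}
def check_moral_percept(s, similar_scenarios, acceptable,
                        modifications):
    # `similar_scenarios(s)` returns known scenarios ordered by
    # analogical similarity; `acceptable` judges a scenario;
    # `modifications(s)` yields modified action scripts.
    matches = similar_scenarios(s)
    if matches and acceptable(matches[0]):
        return s
    for t in modifications(s):   # nextModifiedActionScript
        result = check_moral_percept(t, similar_scenarios,
                                     acceptable, modifications)
        if result is not None:   # first acceptable script wins
            return result
    return None                  # 'error' in the pseudocode
\end{verbatim}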
\begin{figure}[htpb]
\centering
\begin{mdframed}[backgroundcolor=gray!20]
\textbf{{\small Baseball Bat Scenario (BBS)}}
{\small The approaching agent surprises the approached agent
from behind and strikes them with a baseball bat. Here the approaching
agent is holding a bat, which might possess the affordances of a
weapon, and they do not provide any warning or notice to the
other. Moreover, the approaching agent is causing harm, resulting in a
morally-negative outcome.\par}
\noindent \textbf{{\small Flowers Scenario (FS)}}
{\small One agent surprises another agent with a bouquet of
flowers. Like the Baseball Bat scenario, here too the approaching
agent surprises the approached agent from behind, without
warning. However, unlike the Baseball Bat Scenario, here the
approaching agent is holding a bouquet of flowers, which does not
possess the affordances of a weapon. Finally, the approaching agent is
not causing harm, and in fact is cheering up the other, thereby
resulting in a morally-positive outcome.\par}
\noindent \textbf{{\small Hot Saucepan Scenario (HSS)}}
{\small One agent holding a hot saucepan warns nearby agents
while passing by behind them. Like the Flowers Scenario, here too the
outcome is a morally-positive one and the approaching agent is not
intending to cause harm. Like the Baseball Bat Scenario, here too, the
approaching agent is holding an object (hot saucepan) that possesses
weapon affordances. However, unlike both the prior scenarios, in this
scenario, the approaching agent provides a verbal warning to the
approached agent.\par}
\end{mdframed}
\caption{Analogous Scenarios}
\label{fig:scen}
\end{figure}
\section{Discussion}
Being able to recognize morally and socially charged situations is an
important skill for robots in human-robot
collaborations. As research in robotic cognition
progresses and robots are endowed with more advanced action
capabilities, it will become ever more important to monitor robotic
actions, discern their moral and social implications, and verify that these actions remain within societal norms. This is
especially true as robotic systems make their way into everyday
lives. Take, for example, self-driving cars. As these systems develop
the ability to monitor roads and navigate them safely, it will also be important
that they conduct themselves within the social and moral expectations of
other drivers on the road. This means looking at their own driving from
others' perspectives and considering whether their actions will result in
morally-positive outcomes.
Our long-term goal is to endow robots with moral competence. Here we
took a step in this direction by proposing promising mechanisms in an
integrated architecture for reasoning about the social and moral
propriety of situations. Yet, many challenges remain to be addressed,
including computational complexity, episodic memory management, data
representations, as well as more advanced affordance-based and
analogical reasoning techniques.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{intro}
Random damage may crucially change
the structure and function of a network and may even completely
destroy it. The description of the destruction of the complex
network architectures due to damage is a challenging direction in
the multidisciplinary science of networks. Remarkably, much
attention has been attracted to the hierarchical organization of
various real-world networks (the Internet, the WWW, cellular
networks, etc.) and to extracting and indexing their highly
interconnected substructures --- $k$-cores, cliques, and others
\cite{aabv05,k05,wa05,dpv05,dgm05}. The question is: how does
random damage change and destroy these substructures, in
particular, the $k$-cores?
The $k$-{\em core} of a network is its largest subgraph whose
vertices have degree at least $k$. In other words, each of the
vertices in the $k$-core has at least $k$ nearest neighbors within
this subgraph. The $k$-core of a graph may be obtained in the
following way (the $k$-core algorithm or ``the pruning
rule''). Remove from the graph all vertices of degree less than
$k$. Some of the remaining vertices may then be left with fewer than $k$
edges. Then prune these vertices, and so on until no further
pruning is possible. The result, if it exists, is the $k$-core.
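As a minimal sketch, the pruning rule can be implemented with a queue in
time linear in the number of edges; the graph is given as a dictionary
mapping each vertex to its set of neighbors:
\begin{verbatim}
from collections import deque

def k_core(adj, k):
    # Iteratively prune vertices of degree less than k.
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    queue.append(u)
    # The k-core is what survives the pruning (may be empty).
    return {v: {u for u in nbrs if u not in removed}
            for v, nbrs in adj.items() if v not in removed}
\end{verbatim}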
The notion of the $k$-core \cite{b84,s83} is a natural
generalization of the giant connected component in the ordinary
percolation \cite {ajb00,cnsw00,nsw01,cah00,cah02,dms01}. The
$k$-core percolation implies the breakdown of
the
giant $k$-core at
a threshold concentration of vertices or edges removed at random.
In physics, the $k$-core percolation (bootstrap percolation) on
a random Bethe lattice was introduced in Ref.~\cite{clr79} for
describing some magnetic materials. The $k$-core percolation on a
random Bethe lattice was used as a basic model of the quadrupolar
ordering in solid $(o-$H$_{2})_{x}(p-$H$_{2})_{1-x}$ mixtures
\cite{apm87}, the rigidity percolation \cite{mdl97}, the jamming
in sphere packing \cite{slc04}, glassy dynamics \cite{sbt05}, etc.
An exact threshold of the emergence of the $k$-core in some basic
random networks was found in Refs.~\cite{psw96,fr04}. Recently, a
general exact solution of the $k$-core problem for damaged and
undamaged uncorrelated networks with arbitrary degree
distributions was obtained in Ref.~\cite{dgm05}.
These investigations revealed that the $k$-core percolation
features an unusual phase transition which differs strongly
from the ordinary percolation. The latter emerges through a
continuous phase transition occurring at a critical concentration
$p_{c}$ (percolation threshold) of the vertex occupation
probability $p$ \cite {ajb00,cnsw00,nsw01,cah00,cah02,dms01}. (A
vertex is occupied with the probability $p$ and is removed with
the complementary probability $Q=1-p$.) At concentrations
$p>p_{c}$ the giant connected component of a network occupies a
finite fraction $M$ of the total number of vertices $N$ in the
network. At $p\rightarrow p_{c}$, $M$ tends to zero, $M\propto
(p-p_{c})^{\beta }$. The standard mean-field exponent $\beta =1$
takes place in networks with a rapidly decaying degree
distribution.
In networks with a scale-free degree distribution $q^{-\gamma }$,
exponent $\beta$ deviates from the mean-field value at
$2<\gamma <4$ \cite{cah02}. In these networks, $p_c=0$ at $\gamma \leq 3$.
The $k$-core percolation for $k\geqslant 3$ demonstrates another
behavior. When $p\rightarrow p_{c}(k)$, the relative size $ M_{k}$
of the giant $k$-core tends to a finite value $M_{k}(p_{c}(k))$,
and at $ p<p_{c}(k)$ the $k$-core is absent.
Note that the
critical concentration $p_{c}(k)$ depends on $k$. In this
respect, the $k$-core percolation looks like a first-order phase
transition with a jump of $M_{k}$ at the critical point. However,
for a first-order phase transition, one would expect that $M_{k}$
is an analytical function of $p$ at $p>p_{c}(k)$. Contrary to
these expectations, $M_{k}$ shows a singular behavior \cite
{clr79,slc04,hs05,dgm05}: $M_{k}(p)-M_{k}(p_{c}(k))\propto
[p-p_{c}(k)]^{1/2}$. Recently, a similar phase transition was
observed by numerical simulations of the random packing of
frictionless particles (jamming transition) \cite{jamming}.
In the case of the ordinary percolation, the
critical phenomena are related to the divergence of the mean size
of finite clusters (finite connected components). But what is the
origin of critical phenomena for the $k$-core percolation? First
important steps in resolving this question have been made in
Refs.~\cite{slc04,hs05} where the important role of a so-called
``corona'' of the $k$-core was noted. The {\em corona} is a subset
of vertices in the $k$-core which have exactly $k$ nearest
neighbors in the $k$-core.
In the present paper we develop the qualitative and exact
theories of the $k$-core percolation on complex
networks. Our consideration is based on an exact solution of the
$k$-core percolation on uncorrelated networks with arbitrary
degree distributions. Specifically, we use the configuration model
--- the maximally random graphs with a given degree distribution
\cite{bbk72}. In the large network limit, in any finite
neighborhood of a vertex, these graphs have a tree-like local
structure, i.e., without loops; see, e.g.,
Refs.~\cite{dms03,bm05}. Note that in tree-like networks, finite
$k$-cores are absent, and so we discuss only the giant $k$-core.
In Sec.~\ref{damage} we present a qualitative picture and
demonstrate that the critical behavior at the $k$-core percolation
threshold is related to the critical behavior of the corona of the
$k$-core. In Sec.~\ref{basic} we overview an exact solution
describing the $k$-core organization. In Sec.~\ref{edges} we
discuss the statistics of edges in a network with the $k$-core and
the meaning of the order parameter for the $k$-core percolation.
The critical behavior of the order parameter is considered in
Sec.~\ref{point}. Using generating functions, in Sec.~\ref{gf} we
show that the $k$-core percolation threshold is at the
same time the percolation threshold for corona clusters.
At this point the mean size of corona
clusters diverges. The distribution of corona clusters over sizes
is found in Sec.~\ref{distrib}. Specific correlations between
vertices in the $k$-core are discussed in Sec.~\ref{correlation}.
It is demonstrated that the mean intervertex distance in corona
clusters plays the role of the correlation length. In
Sec.~\ref{longrange} we derive exact equations which describe the
evolution of the degree distribution in the $k$-core with
increasing concentration of randomly removed vertices. It is
demonstrated that the removal of even one vertex may result in
vast damage to the $k$-core. The ``diameter'' of the damaged
region tends to infinity at the $k$-core percolation threshold. In
Sec.~\ref{mapping} we propose an exact mapping of the $k$-core
percolation to a model of cooperative relaxation. This
model undergoes
critical relaxation with a divergent rate
at some critical moment of the evolution.
\section{Random damaging the $k$-core} \label{damage}
In this section we
qualitatively describe
the $k$-core
percolation in an uncorrelated random network for $k\geq 3$. We
assume that a vertex in the original network is occupied with a
probability $p$. Let $M_{k}$ be the probability that a randomly
chosen vertex in a given network belongs to the $k$-core. The
$k$-core in itself is a random network which consists of vertices
with degrees at least $k$. Corona vertices, i.e., vertices of
degree $k$, are distributed randomly in the $k$-core. They occupy
a finite part of the $k$-core. We shall show in Sec.~\ref{gf} that
at $p>p_{c}(k)$, the corona consists of only finite clusters, so
that its relative size in the $k$-core is sufficiently small. Any
vertex in the $k$-core may have a link with a corona vertex
belonging to a finite corona cluster.
\begin{figure}
\begin{center}
\scalebox{0.34}{\includegraphics[angle=270]{clusters-prime.eps}}
\end{center}
\caption{(a) A part of the $k{=}3$-core with a finite corona
cluster which consists of vertices with exactly 3 edges. This
cluster is shown as a grey region. Corona vertices are represented
by open dots. In a tree-like network only one corona cluster may
connect two vertices, for example, vertices $i$ and $j$ on this
figure. Removal of vertex $i$ results in pruning all vertices
which belong to the corona cluster. As a result, vertex $j$ loses
one neighbor (this is the neighboring corona vertex). The degree
of vertex $j$ decreases by 1. (b) In a
network
with
loops, two or more corona clusters may connect together a pair of
vertices in the $k$-core. (c) In networks with nonzero clustering,
a vertex in the $k$-core may be attached to a corona cluster by
two or more edges. In the cases (b) and (c) a removal of the
vertex $i$ results in pruning the corona vertices and, in turn,
the degree of the vertex $j$ is decreased by 2.
}
\label{clusters}
\end{figure}
Let us study the change of the $k$-core size when removing
vertices at random from the original network. Let the occupation
probability $p$ be diminished by a value $\Delta p$. This
corresponds to a random removal of $N\Delta p$ vertices. We
denote the corresponding decrease in the size of the $k$-core by
$N\Delta M_{k}=NM_{k}(p)-NM_{k}(p-\Delta p)$. This change is the
quantity to be found. Firstly, there is a trivial contribution to
$N\Delta M_{k}$ due to removed vertices which themselves belonged to the
$k$-core:
\begin{equation}
N\delta M_{k}\equiv N\Delta p\partial M_{k}/\partial p=\Delta pNM_{k}/p
.
\label{pdMk}
\end{equation}
This can be seen from Eqs.~(\ref{Mnk}) and (\ref{k-core}).
Secondly, after removing a vertex together with its edges from the
$k$-core we must prune all other vertices which will occur with
degrees less than $k$. In fact, the removal of a single vertex $i$
from the $k$-core results in the removal of all the corona
clusters attached to vertex $i$. Note that several corona
clusters may be attached to a vertex with degree $n>k$ in the
$k$-core. The removal of the corona clusters happens due to ``the
domino principle''. Indeed, after removing vertex $i$, its nearest
neighboring corona vertex loses one link with the $k$-core. This
vertex must be pruned from the $k$-core, because now it has only
$k-1$ links with the $k$-core. Due to this removal, each of the second
nearest neighbors of vertex $i$ in the corona clusters also loses
one link with the $k$-core and also must be pruned, and so on
until all vertices in the corona clusters have been pruned one by
one. This process is explained in Fig.~\ref{clusters}(a), where a
part of the $k{=}3$-core with a corona cluster is represented. Let
$N_{\text{crn}}$ be the mean total size of all corona clusters
attached to a vertex in the $k$-core \cite{remark2}. Then the
second contribution to $N\Delta M_{k}$ is $N\delta
M_{k}N_{\text{crn}}$ which is the number of the deleted vertices
in the $k$-core multiplied by $N_{\text{crn}}$.
What happens to the other vertices remaining in the $k$-core after
the removal of vertex $i$ together with the corona clusters attached
to $i$? If there are no loops, all nearest neighbors of the deleted
vertex $i$ and of the pruned corona vertices remain in the
$k$-core. Their degrees decrease by 1 since each of these vertices
loses one link with the $k$-core: $n\rightarrow n-1\geq k$. On the
other hand, in networks with loops, due to the pruning, a vertex
may lose more than one connection to the $k$-core. Such situations
are represented in Figs.~\ref{clusters}(b) and \ref{clusters}(c).
Thus, in a tree-like network,
\begin{equation}
N\Delta M_{k}=N\delta M_{k}+N\delta M_{k}N_{\text{crn}}
.
\label{deltaMk}
\end{equation}
In differential form, this equation reads:
\begin{equation}
\frac{d\ln M_{k}}{d\ln p}=1+N_{\text{crn}}
.
\label{dM/dp}
\end{equation}
We will show in Sec.~\ref{gf} that corona clusters percolate
exactly at the $k$-core percolation threshold $p_{c}(k)$ and that
$N_{\text{crn}}$ diverges as $[p-p_{c}(k)]^{-1/2}$. Consequently,
according to Eq.~(\ref{dM/dp}), $M_{k}$ demonstrates a critical
singularity:
\begin{equation}
M_{k}(p)-M_{k}(p_{c}(k))\propto [p-p_{c}(k)]^{1/2} . \label{Mpc}
\end{equation}%
In Sec.~\ref{gf} we will also show that Eq.~(\ref{dM/dp}) is exact
for uncorrelated networks with an arbitrary degree distribution
(the configuration model) in the limit $N\rightarrow \infty $.
\section{Basic equations} \label{basic}
In this section we
develop
an exact formalism for calculating
various characteristics of the $k$-core. This approach is based on our
paper \cite{dgm05}.
We consider an uncorrelated network with a given degree
distribution $P(q)$ --- the so-called configuration model. We
assume that a vertex in the network is occupied with the
probability $p$. In this tree-like network, the giant $k$-core
coincides with the infinite $(k{-}1)$-ary subtree. By definition,
an $m$-ary tree is a tree in which all vertices have branching at
least $m$. The introduction of the $(k{-}1)$-ary tree notion
allows one to rigorously define the order parameter in the $k$-core
problem for tree-like networks (see below).
Let $R$ be the probability that a given end of an edge of a
network is not the root of an infinite $(k{-}1)$-ary subtree.
Then, the probability $M_{k}(n)$ that a vertex chosen at random
has exactly $n\geqslant k$ neighbors in the $k$-core is given by
the equation:
\begin{equation}
M_{k}(n)=p\sum\limits_{q\geqslant
n}^{{}}P(q)C_{n}^{q}R^{q-n}(1-R)^{n}
.
\label{Mnk}
\end{equation}%
Here, $P(q)$ is the probability that a randomly chosen vertex in
the original undamaged network has degree $q$. $C_{n}^{q}R^{q-n}(1-R)^{n}$ is
the probability that a vertex with degree $q$ has $q-n$ neighbors
which are not roots of infinite $(k{-}1)$-ary subtrees
and $n$ neighbors which
are roots of infinite $(k{-}1)$-ary subtrees.
The combinatorial multiplier
$C_{n}^{q}=q!/[(q-n)!\,n!]$ gives the number of ways one can choose
$n$ neighbors from $q$ neighbors. A vertex belongs to the $k$-core
if at least $k$ of its neighbors are roots of infinite $(k{-}1)$-ary
subtrees. So the probability $M_{k}$ that a vertex belongs to the
$k$-core is equal to
\begin{equation}
M_{k}=\sum\limits_{n\geqslant k}^{{}}M_{k}(n). \label{k-core}
\end{equation}%
The schematic Fig.~\ref{f1} explains Eqs.~(\ref{Mnk}) and (\ref{k-core}).
Note that for the ordinary percolation we must set $k=1$ in this
equation. The number of vertices in the $k$-core is equal to
$NM_{k}$.
\begin{figure}
\epsfxsize=50mm \centerline{\epsffile{k-percolation-fig1.eps}}
\caption{ (a) Schematic representation of the order parameter.
$1-R$ is the probability that a given end of a randomly chosen
edge in an undamaged network is a root of an infinite
$(k{-}1)$-ary subtree. $R$ is the probability that a given end of
an edge is not a root of an infinite $(k{-}1)$-ary subtree. (b)
Schematic view of vertex configurations contributing to $M_k$
which is the probability that a vertex is in the $k$-core [see
Eqs.~(\protect\ref{Mnk}) and (\protect\ref{k-core})]. A vertex in
a tree-like network belongs to the $k$-core if at least $k$ of its
nearest neighbors are the roots of infinite $(k{-}1)$-ary
subtrees.
The symbol $\forall$ shows that the number of
nearest neighbors which are not roots of infinite $(k{-}1)$-ary
subtrees is arbitrary.
}
\label{f1}
\end{figure}
The probability $R$ plays the role of the order parameter in our
problem. Thanks to the use of $(k{-}1)$-ary trees, $R$ is defined in
such a way that it is independent of whether the second end of the edge
belongs to the $k$-core or not.
An end of an edge is not a root of an infinite $(k{-}1)$-ary
subtree if at most $k{-}2$ of its children branches are roots of
infinite $(k{-}1)$-ary subtrees. This leads to the following exact
equation for $R$ \cite{dgm05}:
\begin{equation}
\!R=1{-}p{+}p\sum_{n=0}^{k-2}\left[ \,\sum_{i=n}^{\infty }\frac{(i{+}1)P(i{+}%
1)}{z_{1}}\,C_{n}^{i}R^{i-n}(1{-}R)^{n}\right] \!\!{.}\!\!\! \label{R}
\end{equation}%
Let us explain this equation. (i) The first term, $1{-}p\equiv Q$,
is the probability that the end of the edge is unoccupied. (ii)
$C_{n}^{i}R^{i-n}(1-R)^{n}$ is the probability that if a given end
of the edge has $i$ children (i.e., edges other than the starting
edge), then exactly $n$ of them are roots of infinite
$(k{-}1)$-ary subtrees. $(i+1)P(i+1)/z_{1}$ is the probability
that a randomly chosen edge leads to a vertex with branching $i$.
$z_{1}=\sum\nolimits_{q}qP(q)$ is the mean number of the nearest
neighbors of a vertex in the graph. Thus, in the square brackets,
we present the probability that a given end of the edge has
exactly $n$ children which are roots of infinite $(k{-}1)$-ary
subtrees. (iii) Finally, we take into account that $n$ must be at
most $k-2$. In an alternative form, Eq.~(\ref{R}) may be written as
follows,
\begin{equation}
\!1-R=p\sum_{n=k-1}^{\infty}\left[ \,\sum_{i=n}^{\infty }\frac{(i{+}1)P(i{+}%
1)}{z_{1}}\,C_{n}^{i}R^{i-n}(1{-}R)^{n}\right]
\!\!{.}\!\!\!
\label{Ralt}
\end{equation}
This equation shows that a given end of an edge is a root of an
infinite $(k{-}1)$-ary tree with the probability $1-R$ if it has at
least $k-1$ children (we sum over $n\geq k-1$) which are also
roots of infinite $(k{-}1)$-ary subtrees. Equations~(\ref{R})
and (\ref{Ralt}) are graphically represented in Fig.~\ref{f2}.
\begin{figure}
\epsfxsize=82mm \centerline{\epsffile{k-percolation-fig2.eps}}
\caption{
(a) and (b) are graphic representations of
Eqs.~(\protect\ref{R}) and (\protect\ref{Ralt}), respectively. In
(a) the open circle with a dashed boundary represents an
unoccupied vertex. Other notations are explained in the caption to
Fig.~\ref{f1}.
}
\label{f2}
\end{figure}
Introducing a function
\begin{equation}
\Phi _{k}(R)\!\!=\sum_{n=0}^{k-2}\sum_{i=n}^{\infty
}\frac{(i{+}1)P(i{+}1)}{z_{1}}\,C_{n}^{i}R^{i-n}(1{-}R)^{n},
\label{F1}
\end{equation}
we rewrite Eq.~(\ref{R}) in a concise form:
\begin{equation}
R=1-p+p\Phi _{k}(R).
\label{R2}
\end{equation}%
If this equation has only the trivial solution $R\!=\!1$, there is
no giant $k$-core. The emergence of a nontrivial solution
corresponds to the birth of the giant $k$-core. The $k$-core is
described by the lowest nontrivial solution $R\!<1$.
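As an illustration, for the Erd\H os-R\'{e}nyi graph with the Poisson degree
distribution $P(q)=e^{-z_{1}}z_{1}^{q}/q!$, the branching probability
$(i{+}1)P(i{+}1)/z_{1}$ is again Poissonian with mean $z_{1}$, and
Eq.~(\ref{F1}) takes the closed form
\[
\Phi _{k}(R)=\sum_{n=0}^{k-2}e^{-z_{1}(1-R)}\,\frac{[z_{1}(1-R)]^{n}}{n!}\,,
\]
so that Eq.~(\ref{R2}) is easily iterated numerically.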
The structure of the $k$-core is essentially determined by its
degree distribution $P_{k}(n)$:
\begin{equation}
P_{k}(n)\equiv \frac{M_{k}(n)}{M_{k}}.
\label{Pkq}
\end{equation}
$P_{k}(n)$ is the probability to find a vertex with degree $n$ in
the $k$-core. Note that the $k$-core of an uncorrelated network in
itself is an uncorrelated graph, and so it is completely described
by its degree distribution $P_{k}(n)$ \cite{remark3}. We will
extensively use this circumstance. The corona occupies a fraction
$P_{k}(k)$ of the $k$-core. Therefore, the total number of
vertices in the corona is equal to $NM_{k}P_{k}(k)$. The mean
degree of vertices in the $k$-core is
\begin{equation}
z_{1k}=\sum_{n\geq k}P_{k}(n)n.
\label{z1k}
\end{equation}
Comparing Eqs.~(\ref{z1k}) and (\ref{Ralt}), we get an important
relationship between $z_{1k}$, $M_{k}$ and $1-R$:
\begin{equation}
z_{1k}M_{k}=z_{1}(1-R)^{2}.
\label{z1k2}
\end{equation}
Below, in Sec.~\ref{edges}, we will discuss its meaning.
In the general case the analytical solution of Eq.~(\ref{R}) is
unknown. But it can be obtained numerically. By using this
solution, one can calculate all basic characteristics of the
$k$-core structure of an arbitrary uncorrelated network
\cite{dgm05}.
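For instance, for the Erd\H os-R\'{e}nyi case a simple fixed-point
iteration suffices; the sketch below uses the Poisson closed form of
$\Phi _{k}$ given above, converges to the lowest solution of
Eq.~(\ref{R2}) when started from $R=0$, and yields $M_{k}$ from
Eq.~(\ref{Mnk}), reproducing curves like those in Fig.~\ref{crn-size}:
\begin{verbatim}
import math

def phi(R, z1, k):
    # Poisson closed form of Phi_k(R) for Erdos-Renyi graphs.
    x = z1 * (1.0 - R)
    return sum(math.exp(-x) * x**n / math.factorial(n)
               for n in range(k - 1))

def solve_R(p, z1, k, tol=1e-12, iters=100000):
    # Starting from R = 0, the monotone iteration converges
    # to the lowest fixed point of R = 1 - p + p*Phi_k(R).
    R = 0.0
    for _ in range(iters):
        R_new = 1.0 - p + p * phi(R, z1, k)
        if abs(R_new - R) < tol:
            break
        R = R_new
    return R

def M_k(p, z1, k):
    # Eq. (Mnk) summed over n >= k; for a Poisson degree
    # distribution this is a Poisson tail with mean z1*(1-R).
    R = solve_R(p, z1, k)
    if R > 1.0 - 1e-9:
        return 0.0   # only the trivial solution: no giant k-core
    x = z1 * (1.0 - R)
    head = sum(math.exp(-x) * x**n / math.factorial(n)
               for n in range(k))
    return p * (1.0 - head)
\end{verbatim}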
\begin{figure}
\begin{center}
\scalebox{0.35}{\includegraphics[angle=270]{crn-size.eps}}
\end{center}
\caption{ Dependence of the sizes of the $k$-core and the corona,
$M_{k}$ and $M_{k}(k)$, respectively, on the occupation
probability $p$ in the Erd\H os-R\'{e}nyi graphs with the mean
degree $z_1=10$. Solid lines show $M_{k}$ versus $p$, and dashed
lines show $M_{k}(k)$ versus $p$ at $k=3,5,$ and 6. Notice that both
$M_{k}$ and $M_{k}(k)$ have a square root singularity at the
$k$-core percolation thresholds, but in the curves $M_{k=5,6}(k)$
this singular addition is practically unnoticeable. Notice the
non-monotonic dependence of $M_{k}(k)$. The inset shows the fraction
$P_{k}(k)$ of the corona vertices in the $k$-core versus $p$. }
\label{crn-size}
\end{figure}
Some results of the numerical solution of Eq.~(\ref{R}) for the
Erd\H os-R\'{e}nyi graph with $z_1=10$ are represented in
Fig.~\ref{crn-size}. More results may be found in
Ref.~\cite{dgm05}. This figure displays the dependences of the
sizes of the $k$-core, the corona and the fraction of the corona
vertices in the $k$-core on the occupation probability $p$ for
several values of $k$. One can see that far from the critical
point $p_{c}(k)$ the size of the corona is small in comparison to
the size of the $k$-core. However, close to $p_{c}(k)$ the corona
occupies a noticeable fraction of the $k$-core.
Let us consider a network with a scale-free degree distribution,
$P(q)\propto q^{-\gamma}$. The case $2<\gamma \leqslant 3$ is
realized in most important real-world networks. With $\gamma $ in
this range, the mean number of second nearest neighbors of a vertex
in a network, $z_{2}$, diverges as $N\rightarrow \infty $. Solving
analytically Eq.~(\ref{R2}), we find that the size of the
$k$-core decreases with increasing $k$ as follows~\cite{dgm05}:
\begin{equation}
M_{k}=p^{2/(3-\gamma )}(q_{0}/k)^{(\gamma -1)/(3-\gamma )},
\label{k-size}
\end{equation}
where $q_{0}$ is the minimum degree in the initial (undamaged)
network. Vertices which belong to the $k$-core, but do not belong
to the $(k{+}1)$-core, form the $k$-shell of size
$S_{k}=M_{k}-M_{k+1}$. Using Eq.~(\ref{k-size}), at $k\gg 1$ we
find:
\begin{equation}
S_{k}\propto (q_{0}/k)^{2/(3-\gamma )}.
\label{shell-size}
\end{equation}
The asymptotic behavior given by Eqs.~(\ref{k-size}) and
(\ref{shell-size}) agrees well with an empirical analysis of the
$k$-core architecture of the Internet on the Autonomous Systems
level \cite{aabv05,k05}.
\section{Statistics of edges and the order parameter} \label{edges}
Let us consider edges in an uncorrelated network with the
$k$-core.
We start with the case $p=1$.
There are three types of edges: (i) edges which connect two
vertices in the $k$-core, (ii) edges connecting two vertices which
do not belong to the $k$-core, and (iii) edges connecting
a vertex in the $k$-core with a vertex which does not belong
to the $k$-core. These types of connections in a network are
schematically shown in Fig.~\ref{nedges}. Let $L_{k}$, $L_{0}$ and
$L_{0k}$ be the total numbers of edges of these three types in the
network, respectively. The sum of these numbers gives the total
number $L=Nz_{1}/2$ of edges in the initial network:
\begin{equation}
L_{0}+L_{k}+L_{0k}=L
.
\label{Lt}
\end{equation}
The ratios $L_{k}/L$, $L_{0}/L$ and $L_{0k}/L$ are probabilities
that a randomly chosen edge in the initial network is of the type
(i), (ii) or (iii), respectively. Because $L_{k}=Nz_{1k}M_{k}/2$,
we can rewrite Eq.~(\ref{z1k2}) in a form:
\begin{equation}
\frac {L_{k}}{L}=(1-R)^{2}.
\label{z1k3}
\end{equation}
This equation has a simple explanation.
It
shows that the
probability to find an edge which connects two vertices in the
$k$-core is equal to the probability that both its ends are
roots of infinite $(k{-}1)$-ary subtrees, that is, $(1-R)^2$
[see Fig.~\ref{f6}(a)]. On the other hand, Eq.~(\ref{z1k3})
explains the meaning of the order parameter $R$ via the
relationship with the measurable parameters: $1-R=\sqrt{L_{k}/L}$.
One should note that Eq.~(\ref{z1k3}) is also
valid at $p<1$
since
it follows from the exact equation~(\ref{z1k2}). In this case,
$L_{k}$ must be replaced
by the number $L_{k}(p)$ of edges in a damaged $k$-core while $L$
remains
the total number of edges in the initial network.
\begin{figure}
\epsfxsize=34mm
\centerline{\epsffile{nedges.eps}}
\caption{
Schematic
representation of the three types of edges in a network (large
circle) with the $k$-core (grey central region): (i) edges between
vertices in the $k$-core (links between two black dots), (ii)
edges between vertices which do not belong to the $k$-core (links
between two open dots), and (iii) edges between vertices in the
$k$-core and vertices which do not belong to the $k$-core (links
between black and open dots).
}
\label{nedges}
\end{figure}
\begin{figure}[b]
\epsfxsize=82mm
\centerline{\epsffile{k-percolation-fig6.eps}}
\caption{
Schematic representations of the probabilities that an
edge connects together vertices of the $k$-core or that it
connects vertices outside the $k$-core, figures (a) and (b),
respectively.
}
\label{f6}
\end{figure}
Let us find the probability $L_{0}/L$ that an edge chosen at
random in the network connects two vertices which do not belong to
the $k$-core. We stress that $L_{0}/L$ is not equal to $R^2$ as
one could naively expect, but
is
larger,
see Fig.~\ref{f6}(b).
Indeed, in addition to configurations where both the ends of an
edge are not the roots of infinite $(k{-}1)$-ary trees --- the
$R^2$ contribution --- one must take into account extra
configurations. In these additional configurations, one end of the
edge is not the root of an infinite $(k{-}1)$-ary tree, but the
second end has exactly $k-1$ children which are roots of infinite
$(k{-}1)$-ary trees.
This second vertex therefore does not belong to the
$k$-core, as required.
Thus we have
\begin{equation}
L_{0}/L = R^2+2R\sum\limits_{q\geqslant
k}^{{}}\frac{qP(q)}{z_{1}}C_{k-1}^{q-1}R^{q-k}(1-R)^{k-1}.
\label{L01}
\end{equation}
Comparing the sum in Eq.~(\ref{L01})
and the probability $M_{k}(k)$ given by Eq.~(\ref{Mnk}) at $n=k$,
we get
\begin{equation}
L_{0}/L=R^2+2R\frac{kM_{k}(k)}{z_{1}(1-R)}.
\label{L0}
\end{equation}
Equations (\ref{Lt}), (\ref{z1k3}) and (\ref{L0}) establish
nontrivial relationships between independently measurable network
parameters: $L$, $L_{k}$, $L_{0}$, $L_{0k}$, and $M_{k}(k)$. These
relations may be used as a criterion of the validity of the tree
ansatz for various networks.
Let us now touch upon the case $p<1$.
After randomly removing vertices from an uncorrelated network, we again get an uncorrelated network.
Therefore, at $p<1$, we may still use the same formulas~(\ref{z1k3}), (\ref{L01}), and (\ref{L0}) but with substituted characteristics of the damaged network --- the number of edges, the mean degree, etc.~\cite{remark5}.
\section{$k$-core percolation threshold} \label{point}
When decreasing the occupation probability $p$, the $k$-core
decreases in size and disappears at a critical concentration
$p_{c}(k)$. According to Ref.~\cite{dgm05}, the critical
concentration $p_{c}(k)$ is determined by the following equation:
\begin{equation}
p_{c}(k)\Phi _{k}^{\prime }(R_{c})=1.
\label{cp1}
\end{equation}
Here, $R_{c}$ is a critical value of the order parameter $R$ at
the birth point of the $k$-core. At $p<p_{c}(k)$, Eq. (\ref{R2})
has only the trivial solution $R=1$, and the giant $k$-core does
not exist. The derivative $\Phi _{k}^{\prime }(R)\equiv d\Phi
_{k}(R)/dR$ is determined by the following equation:
\begin{eqnarray}
p\Phi _{k}^{\prime }(R) &= &p \sum\limits_{q\geqslant k}^{{}}\frac{qP(q)}{%
z_{1}}C_{k-2}^{q-1}(q+1-k)R^{q-k}(1-R)^{k-2}
\notag
\\[5pt]
&=&k(k-1)P_{k}(k)/z_{1k}
.
\label{F2}
\end{eqnarray}
Using this equation, the condition (\ref{cp1}) for the $k$-core
percolation threshold may be rewritten in the form:
\begin{equation}
k(k-1)P_{k}(k)/z_{1k}=1.
\label{cp2}
\end{equation}
Let us consider the behavior of $R$ near the phase transition in
an uncorrelated complex network with a finite mean number $z_{2}$
of the second neighbors of a vertex,
$z_{2}=\sum\nolimits_{q}q(q-1)P(q)$. At $p$ near $p_{c}(k)$, i.e.,
at $0<p-p_{c}(k)\ll 1$, Eq.~(\ref{R}) has a nontrivial solution:
\begin{equation}
R_{c}-R\propto \lbrack p-p_{c}(k)]^{1/2}.
\label{expR}
\end{equation}
This demonstrates that $R$ has a jump at $p=p_{c}(k)$ [from
$R=R_{c}$ to $R=1$] as at an ordinary first-order phase transition
and a singular behavior as at a continuous phase transition
\cite{clr79}. The derivative $dR/dp$ diverges,
\begin{equation}
\frac{dR}{dp}=-\frac{1-R}{p[1-p\Phi _{k}^{\prime }(R)]}\propto
-[p-p_{c}(k)]^{-1/2}
,
\label{dR}
\end{equation}
since at $p\rightarrow p_{c}(k)+0$ we have
\begin{equation}
1-p\Phi _{k}^{\prime }(R)\propto [p-p_{c}(k)]^{1/2}
.
\label{expFi}
\end{equation}
This singularity suggests intriguing critical phenomena near the
threshold of the $k$-core percolation.
In contrast, in networks with infinite $z_{2}$, instead of the
hybrid phase transition, the $k$-core percolation becomes an
infinite order phase transition \cite{dgm05}, similarly to the
ordinary percolation in this situation \cite {cnsw00}.\ In this
case the entire $k$-core organization of a network is extremely
robust against random damage.
\section{Generating functions for corona clusters} \label{gf}
Using the approach of Refs. \cite{cnsw00,nsw01}, we introduce the
generating function $H_{1k}(x)$ of the probability that an end of a
randomly chosen edge in the $k$-core belongs to a finite corona
cluster of a given size:
\begin{equation}
H_{1k}(x)=1-\frac{kP_{k}(k)}{z_{1k}}+x\frac{kP_{k}(k)}{z_{1k}}%
[H_{1k}(x)]^{k-1}
.
\label{H1}
\end{equation}
Here $kP_{k}(k)/z_{1k}$ is the probability that an end of an edge
chosen at random in the $k$-core belongs to the corona. In turn,
$1-kP_{k}(k)/z_{1k}$ is the complementary probability that the end
of the edge does not belong to the corona. We have $H_{1k}(1)=1$.
We introduce the generating function $H_{0k}(x)$ for the total size of the
corona clusters attached to a vertex in the $k$-core:
\begin{equation}
H_{0k}(x)=\sum\limits_{q}P_{k}(q)[H_{1k}(x)]^{q}
.
\label{H0}
\end{equation}
Using this function, one can calculate the mean total size
$N_{\text{crn}}$ of the corona clusters attached to a vertex
randomly chosen in the $k$-core:
\begin{equation}
N_{\text{crn}}=\left.\frac{dH_{0k}(x)}{dx}\right\vert _{x=1}
.
\label{Ncrn1}
\end{equation}
Differentiating
Eqs.~(\ref{H1}) and (\ref{H0}) with respect to $x$ gives
\begin{equation}
N_{\text{crn}}=\frac{kP_{k}(k)}{1-k(k-1)P_{k}(k)z_{1k}^{-1}}
.
\label{Ncrn2}
\end{equation}
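Explicitly, denoting $b=kP_{k}(k)/z_{1k}$ and differentiating
Eq.~(\ref{H1}) at $x=1$, where $H_{1k}(1)=1$, one finds
\[
H_{1k}^{\prime }(1)=b+b(k-1)H_{1k}^{\prime }(1),
\qquad \text{i.e.,}\qquad
H_{1k}^{\prime }(1)=\frac{b}{1-(k-1)b}\,,
\]
and $N_{\text{crn}}=H_{0k}^{\prime }(1)=z_{1k}H_{1k}^{\prime }(1)$, which is
Eq.~(\ref{Ncrn2}).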
Inserting Eqs.~(\ref{F2}) and (\ref{expFi}) into
Eq.~(\ref{Ncrn2}), we find that $N_{\text{crn}}$ diverges at the
critical point,
\begin{equation}
N_{\text{crn}}\propto \lbrack p-p_{c}(k)]^{-1/2}
.
\label{Ncrn3}
\end{equation}
This means that at $p=p_{c}(k)$ the corona is at its ``percolation
threshold''. Note however that the $k$-core and its
corona are absent at $p<p_{c}(k)$, so that a giant
connected corona never appears, in contrast to the ordinary
percolation. Equation (\ref{cp2}) resembles the condition
$p z_{2}/z_{1}=1$ of the emergence of the giant connected component
in the ordinary percolation.
The exact derivation of Eq.~(\ref{dM/dp}) is based on the
following steps. We differentiate Eq.~(\ref{k-core}) for $M_{k}$
over $p$:
\begin{equation}
\frac{dM_{k}}{dp}=\frac{M_{k}}{p}-kP_{k}(k)\frac{M_{k}}{(1-R)}\frac{dR}{dp}
.
\label{dM/dp2}
\end{equation}
Inserting Eqs.~(\ref{F2}), (\ref{dR}) and (\ref{Ncrn2}) into
Eq.~(\ref{dM/dp2}), we get Eq.~(\ref{dM/dp}).
\section{Size distribution of corona clusters} \label{distrib}
Let $\mathfrak{N}_{\text{crn}}(s)$ be the number of corona
clusters of
size $s$ in the $k$-core. Because the total
number of vertices in the corona clusters is equal to
$NM_{k}P_{k}(k)$, we obtain the following condition:
\begin{equation}
\sum\limits_{s=1}^{s_{\max}}s\mathfrak{N}_{\text{crn}}(s)=NM_{k}P_{k}(k)
,
\label{Ns1}
\end{equation}
where $s_{\max}$ is the size of the largest corona cluster. We
introduce a function
\begin{equation}
\Pi _{k}(s)\equiv \frac{s\mathfrak{N}_{\text{crn}}(s)}{NM_{k}P_{k}(k)
},
\label{Ps0}
\end{equation}
which is the probability that a randomly chosen corona vertex
belongs to a corona cluster of
size $s$. The function $\Pi
_{k}(s)$ is related to the generating function
$G_{\text{crn}}(x)$:
\begin{equation}
\Pi _{k}(s)=\frac{1}{s!}\left.
\frac{d^{s}G_{\text{crn}}(x)}{dx^{s}}\right\vert _{x=0}
,
\label{PS1}
\end{equation}
where $G_{\text{crn}}(x)=x[H_{1k}(x)]^k$. There is a simple
relationship between $N_{\text{crn}}$ and the mean size
$s_{\text{crn}}$ of a corona cluster to which a randomly chosen
corona vertex belongs:
\begin{equation}
s_{\text{crn}}\equiv\sum\limits_{s=1}^{s_{\max}}s\Pi
_{k}(s)=1+\frac{k}{z_{1k}}N_{\text{crn}}
.
\label{crnM}
\end{equation}
At $s\gg 1$ the probability $\Pi _{k}(s)$ has the usual asymptotic
form \cite{nsw01}:
\begin{equation}
\Pi _{k}(s)\approx Cs^{-\alpha }e^{-s/s^{\ast }}
.
\label{PS2}
\end{equation}
Here $C$ is a constant. Exponent $\alpha$ and the parameter
$s^{\ast }=1/\ln \left\vert x^{\ast }\right\vert $ are determined
by the
type and the position $x^{\ast }$ of the singularity of the
function $H_{0k}(x)$, nearest to $x=1$. Solving Eq.~(\ref{H1}) and
inserting the obtained solution into Eq.~(\ref{H0}), we find that
at $p\rightarrow p_{c}(k)+0$ the generating functions $H_{1k}(x)$
and $G_{\text{crn}}(x)$ have a square-root singularity:
\begin{equation}
G_{\text{crn}}(x)\propto H_{1k}(x)\propto (1-x)^{1/2}
.
\label{sH01}
\end{equation}
In the case $k=3$, Eq.~(\ref{H1}) is solved exactly,
\begin{equation}
H_{1k=3}(x)=\frac{1-(1-x/x^{\ast })^{1/2}}{2ax}
,
\label{sH3}
\end{equation}
where $a=3P_{3}(3)/z_{1,k=3}$ and $x^{\ast }=1/[4a(1-a)]$. At the
critical point, Eq. (\ref{cp2}), we have $2a=1$ and, therefore,
$x^{\ast }=1$. At $p$ near $p_{c}(k)$, the parameter $s^{\ast }$
diverges,
\begin{equation}
s^{\ast }\approx 1/(1-2a)^{2} \propto 1/[p-p_{c}(k)] .
\label{star}
\end{equation}
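Explicitly, since $4a(1-a)=1-(1-2a)^{2}$, the singularity is located at
$x^{\ast }=1/[1-(1-2a)^{2}]$, so that $\ln x^{\ast }\approx (1-2a)^{2}$ for
$a$ close to $1/2$, and $s^{\ast }=1/\ln x^{\ast }\approx (1-2a)^{-2}$.
Because $2a=p\Phi _{k}^{\prime }(R)$ for $k=3$ [see Eq.~(\ref{F2})],
Eq.~(\ref{expFi}) gives $1-2a\propto \lbrack p-p_{c}(k)]^{1/2}$, which leads
to Eq.~(\ref{star}).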
The singularity
(\ref{sH01}) results in the standard
mean-field exponent $\alpha =3/2$. At the critical point
$p=p_{c}(k)$, the distribution function is
\begin{equation}
\Pi _{k}(s)\propto s^{-3/2}
.
\label{e39}
\end{equation}
It gives $\mathfrak{N}_{\text{crn}}(s) \propto \Pi _{k}(s)/s \sim s^{-5/2}$.
In scale-free networks with a degree distribution $P(q) \sim
q^{-\gamma}$, this is valid for any $\gamma>3$. In contrast, in
the ordinary percolation on scale-free uncorrelated networks,
exponent $\alpha $ differs from the standard mean-field value
$3/2$ if $2<\gamma <4$ \cite{cah02,cha03}.
Let us estimate the
size $s_{\max }$ of the largest corona
cluster. We use the condition that there is only one corona
cluster with size $s\geq s_{\max }$:
\begin{equation}
\int\limits_{s_{\max }}^{\infty }\mathfrak{N}_{\text{crn}}(s)ds=NM_{k}P_{k}(k)%
\int\limits_{s_{\max }}^{\infty }\Pi _{k}(s)s^{-1}ds=1
.
\label{smax}
\end{equation}
Using asymptotics (\ref{PS2}), at $p>p_{c}(k)$ we obtain
\begin{equation}
s_{\max }\propto \ln N
.
\label{e41}
\end{equation}
At the critical point $p=p_{c}(k)$, using the distribution
function $\Pi _{k}(s)\propto s^{-3/2}$, we get
\begin{eqnarray}
s_{\max} & \propto & N^{2/3}
,
\label{e42}
\\[5pt]
s_{\text{crn}} & \propto & \sqrt{s_{\max}} \propto N^{1/3}
.
\label{critCRN}
\end{eqnarray}
Equation (\ref{e42}) coincides with a result for the maximum size
of a connected component at the birth point of the giant connected
component in classical random graphs \cite{bollobasbook} and in
uncorrelated networks where the first three moments of a degree
distribution converge \cite{cha03}. However, relation~(\ref{e42})
essentially differs from that for the maximum size of a connected
component if the third moment of the degree distribution diverges
in the infinite network limit.
\begin{figure}
\epsfxsize=48mm
\centerline{\epsffile{chain.eps}} \caption{Diagrammatic
representation of the mean number
$\mathcal{P_{\ell}}(n_{0},n_{1},\ldots,n_{\ell})$ of ways to reach
a vertex which is at
distance $\ell$ from a vertex $i=0$ in
the $k$-core. The path goes through vertices
$m=1,2,\ldots,\ell{-}1$ with degrees $n_{m}$ in the $k$-core.}
\label{chain}
\end{figure}
\section{The correlation length} \label{correlation}
In this section we consider correlations between vertices in the
$k$-core with $k\geq 3$. Let us choose at random a vertex $i$ in
the $k$-core. We aim to find the mean number $\mathcal{P_{\ell}}$
of vertices in the $k$-core which may be reached from $i$
following a path of length $\ell $.
In the configuration model the giant $k$-core is a unique and
simply connected subgraph. Therefore, all $\ell -1$ vertices on a
path connecting $i$ and $j$ must also belong to the $k$-core.
$\mathcal{P_{\ell}}$ is given by the following
relation:
\begin{eqnarray}
&& \mathcal{P_{\ell}}(n_{0},n_{1},\ldots, n_{\ell }) \nonumber
\\[5pt]
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! =P_{k}(n_{0})n_{0} \prod\limits_{m=1}^{\ell
-1}\Bigg[\frac{P_{k}(n_{m})n_{m}(n_{m}-1)}{z_{1k}}\Bigg] \frac{P_{k}(n_{\ell
})n_{\ell }}{z_{1k}}
.
\label{Ppath}
\end{eqnarray}
A diagrammatic representation of $\mathcal{P_{\ell}}$ is shown in
Fig.~\ref{chain}. Here $n_{m}$ is the degree of vertex $m$, where
$m=0,1,\ldots ,\ell $, on a path connecting $i$ and $j$ in the
$k$-core. For
the sake of
convenience we set $i\equiv 0$ and $j\equiv \ell $.
Here $P_{k}(n_{0})$ is the probability that $i$ has the degree
$n_{0}$ in the $k$-core. The multiplier $n_{0}$ gives the number
of ways to reach vertex $1$ from $i{=}0$ following along any of
its $n_{0}$ edges. In the brackets, $P_{k}(n_{m})n_{m}/z_{1k}$ is
the probability that an edge outgoing from a vertex $m-1$ leads to
a vertex $m$ of degree $n_{m}$. The multiplier $n_{m}-1$ gives the
number of ways to leave this vertex.
Finally, $P_{k}(n_{\ell })n_{\ell }/z_{1k}$ is the probability
that the final edge on the path leads to the destination vertex
$\ell $ of degree $n_{\ell }$.
Now it is easy to find the number $N_{\text{crn}}(\ell )$ of
corona vertices which are at a distance $\ell$ from a randomly
chosen vertex in the $k$-core and belong to corona clusters
attached to this vertex. We set $n_{1}=n_{2}=\ldots =n_{\ell }=k$ and
sum over degree $n_{0}$ of the starting vertex $0$ in
Eq.~(\ref{Ppath}). Using Eq.~(\ref{z1k}) gives
\begin{eqnarray}
N_{\text{crn}}(\ell )&&=\sum\limits_{n_{0}\geq k}
\mathcal{P_{\ell}}(n_{0},n_{1}=k,...n_{\ell}=k) \notag
\\[5pt]
&&=kP_{k}(k)e^{-(\ell -1)/\lambda }
.
\label{Ncrn4}
\end{eqnarray}
Here we have introduced the correlation length
\begin{equation}
\lambda \equiv -\,\frac{1}{\ln [p\Phi _{k}^{\prime }(R)]} =
-\,\frac{1}{\ln[k(k-1)P_{k}(k)/z_{1k}]}
.
\label{lambda}
\end{equation}%
In accordance with Eq.~(\ref{expFi}), at $p\rightarrow p_{c}(k)+0$
the parameter $\lambda$ diverges,
\begin{equation}
\lambda \propto [p-p_{c}(k)]^{-1/2} . \label{lambda2}
\end{equation}
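As a simple illustration (ours, not part of the original analysis), the following Python snippet evaluates the correlation length of Eq.~(\ref{lambda}) for a toy $k$-core degree distribution; the distribution used is a hypothetical placeholder, since in a real calculation $P_{k}(n)$ follows from the solution of the model.
\begin{verbatim}
import numpy as np

# Sketch of Eq. (lambda): lambda = -1/ln[k(k-1)P_k(k)/z_1k],
# with z_1k = sum_n n P_k(n). The toy P_k(n) below is a placeholder.
def correlation_length(k, degrees, Pk):
    z1k = np.sum(degrees * Pk)          # mean vertex degree in the k-core
    arg = k * (k - 1) * Pk[0] / z1k     # Pk[0] = P_k(k), the corona weight
    assert 0.0 < arg < 1.0              # subcritical corona, p > p_c(k)
    return -1.0 / np.log(arg)

k = 3
degrees = np.arange(k, 60)
Pk = np.exp(-0.8 * (degrees - k))
Pk /= Pk.sum()                          # normalize the toy distribution
print(correlation_length(k, degrees, Pk))  # grows as arg -> 1
\end{verbatim}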
Summing $N_{\text{crn}}(\ell)$ over $\ell$, we reproduce
Eq.~(\ref{Ncrn2}) for $N_{\text{crn}}$:
\begin{equation}
N_{\text{crn}}=\sum_{\ell =1}^{\infty }N_{\text{crn}}(\ell
)=\frac{kP_{k}(k)}{1-\exp [-1/\lambda ]}
.
\label{Ncrn5}
\end{equation}
Let us determine the mean intervertex distance $r_{\text{crn}}(k)$
between vertices in corona clusters. We use the quantity
$\mathcal{P_{\ell}}(n_{0},n_{1},\ldots,n_{\ell})$ and set
$n_{0}=n_{1}=\ldots=n_{\ell}=k$. We find
\begin{eqnarray}
r_{\text{crn}}(k) &\equiv &\frac{\sum_{\ell =1}^{\infty } \ell
\mathcal{P_{\ell}}(k,k,\ldots, k)}{\sum_{\ell =1}^{\infty
}\mathcal{P_{\ell}}(k,k,\ldots, k)} \notag
\\[5pt]
&=&\frac{1}{1-\exp [-1/\lambda ]} . \label{Radius}
\end{eqnarray}
At $p$ close to $p_{c}(k)$, the correlation length $\lambda \gg 1$
and therefore $r_{\text{crn}}(k)\approx \lambda $.
\section{Nonlocal effects in the $k$-core percolation} \label{longrange}
In Sec.~\ref{damage} we have shown that the removal of a vertex from
the $k$-core leads to the pruning of the corona clusters attached to that
vertex. In this section we will demonstrate that the removal of even
one vertex from the $k$-core, $k\geq 3$, influences the degrees of
vertices in a vast region of the $k$-core around this vertex.
Moreover, the size of this damaged region diverges at the critical
point.
Let $N\Delta p$ vertices be removed at random from the initial
network.
As a result, the total number of vertices with degree $n$ in the
$k$-core is changed,
\begin{equation}
N\Delta M_{k}(n)=NM_{k}(p,n)-NM_{k}(p-\Delta p,n). \label{DMkn1}
\end{equation}
Let us find $N\Delta M_{k}(n)$. With probability $M_{k}(n)$, a
removed vertex may have degree $n$ in the $k$-core. Therefore,
there is a trivial contribution to $N\Delta M_{k}(n)$:
\begin{equation}
N\delta M_{k}(n)=N\Delta p\partial M_{k}(n)/\partial p=N\Delta pM_{k}(n)/p
.
\label{Mn1}
\end{equation}
Removal of a vertex $i$ in the $k$-core may influence the
degree of a vertex $j$ which is at a distance $\ell$ from $i$. If
$j$ is a nearest neighbor of $i$, then the degree $n$ of vertex $j$ will
be decreased by 1. If $\ell >1$, then the probability of this
effect is determined by the probability that $j$ and $i$ are
connected by a chain of corona vertices. If vertex $i$ is removed,
then all vertices of a corona cluster attached to $i$ also must be
pruned from the $k$-core due to the domino principle. As a result,
vertex $j$ loses one neighbor in the $k$-core. Let $V(\ell ,n)$ be
the mean number of vertices of degree $n$ which are connected to a
randomly chosen vertex $i$ in the $k$-core by a chain of corona
vertices of length $\ell $. The random removal of $N\delta M_{k}$
vertices results in a decrease of $M_{k}(n)$ by the quantity
\begin{equation}
N\delta M^{(2)}(n)=N\delta M_{k}\sum _{\ell =1}^{\infty }V(\ell
,n)
.
\label{Mn2}
\end{equation}
At the same time, vertices with degree $n+1$ may also lose one edge
within the $k$-core. After the pruning, they have $n$ edges within
the $k$-core. This effect increases $M_{k}(n)$ by the quantity
\begin{equation}
N\delta M^{(3)}(n)=-N\delta M_{k}\sum _{\ell =1}^{\infty }V(\ell
,n+1)
.
\label{Mn3}
\end{equation}
Note that only in networks with loops may vertices in the $k$-core change their degree by 2 during the pruning.
Thus, in a tree-like network there
are only three contributions to $N\Delta M_{k}(n)$:
\begin{eqnarray}
N\Delta M_{k}(n) &=&N\delta M_{k}(n)+N\delta M_{k}\sum _{\ell
=1}^{\infty
}V(\ell ,n)
\notag
\\[5pt]
&&-N\delta M_{k}\sum _{\ell =1}^{\infty }V(\ell ,n+1)
.
\label{deltaMkn}
\end{eqnarray}
$V(\ell ,n)$ is given by the probability $\mathcal{P_{\ell }}$,
Eq.~(\ref{Ppath}):
\begin{eqnarray}
V(\ell ,n)&=&\sum\limits_{n_{0}\geq
k}\mathcal{P_{\ell}}(n_{0},n_{1}=k,\ldots,n_{\ell -1}=k, n_{\ell
}=n) \nonumber
\\[5pt]
&=&nP_{k}(n)e^{-(\ell -1)/\lambda } . \label{Vln}
\end{eqnarray}
Inserting this result into Eq.~(\ref{deltaMkn}), in the limit
$\Delta p\rightarrow 0$, we get the main result of the present
section:
\begin{equation}
\frac{dM_{k}(n)}{d\ln p}=M_{k}(n)+rnM_{k}(n)-r(n+1)M_{k}(n+1)
,
\label{dM/dp3}
\end{equation}
where
\begin{equation}
r=\frac{1}{1-\exp [-1/\lambda ]}=\left[
1-\frac{k(k-1)M_{k}(k)}{\sum\nolimits_{n=k}^{q_{\text{cut}}}nM_{k}(n)}\right]
^{-1}.
\label{rs}
\end{equation}
The parameter $r$ determines the mean size of a region in the
$k$-core which is damaged by a removal of one vertex chosen at
random. One should stress that $r$ depends on the entire degree
distribution in the $k$-core: $r=r\{M_k(n)\}$. At $p$ close to
$p_{c}(k)$, we have $r\propto \lambda$. Therefore, at
$p\rightarrow p_{c}(k)$, this size diverges. Interestingly, the
parameter $r$ is equal to the mean intervertex distance
$r_{\text{crn}}$ in corona clusters given by
expression~(\ref{Radius}).
In Eq.~(\ref{dM/dp3}), the index $n$ can take the values
$n=k,k+1,\ldots,q_{\text{cut}}$. The cutoff $q_{\text{cut}}$ of
the network's degree distribution $P(q)$ depends on details of a
specific network and its size $N$.
Although we derived Eq.~(\ref{dM/dp3}) by using heuristic
arguments, this equation is exact for uncorrelated random graphs
in the limit $N\rightarrow \infty$. Equation~(\ref{dM/dp3}) may be
derived rigorously by differentiating Eq.~(\ref{Mnk}) with respect to $p$ and
using Eq.~(\ref{dR}).
The set of Eqs.~(\ref{dM/dp3}) with $n$ from $k$ to
$q_{\text{cut}}$ is a complete set of nonlinear equations which
determine $M_{k}(n)$ as a function of $p$. The nonlinearity is due
to the functional dependence of $r$ on $M_{k}(n)$.
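To illustrate how this set of equations can be handled in practice, the following Python sketch (ours) integrates Eqs.~(\ref{dM/dp3}), recomputing $r$ from Eq.~(\ref{rs}) at every step; the initial distribution used below is a toy placeholder rather than the exact $M_{k}(n,p{=}1)$ of Eq.~(\ref{Mnk}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k, q_cut = 3, 60
n = np.arange(k, q_cut + 1)

def r_of(M):
    # Eq. (rs): r = [1 - k(k-1)M_k(k)/sum_n n M_k(n)]^(-1)
    return 1.0 / (1.0 - k * (k - 1) * M[0] / np.dot(n, M))

def rhs(lnp, M):
    r = r_of(M)
    M_up = np.append(M[1:], 0.0)                # M_k(n+1), zero above q_cut
    return M + r * n * M - r * (n + 1) * M_up   # dM_k(n)/d ln p

M0 = np.exp(-0.3 * (n - k))
M0 *= 0.8 / M0.sum()                            # toy initial k-core occupancy
sol = solve_ivp(rhs, (0.0, np.log(0.9)), M0)    # evolve from p = 1 to p = 0.9
print(sol.y[:, -1].sum())                       # total k-core fraction at p = 0.9
\end{verbatim}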
Summing over $n$ from $k$ to $q_{\text{cut}}$ on the left and right hand
sides of Eq.~(\ref{dM/dp3}), we obtain Eq.~(\ref{dM/dp}). If we
know $M_{k}(n)$ for an initial network, i.e., at $p=1$, then we
can use Eq.~(\ref{dM/dp3}) and find the evolution of $M_{k}(n,p)$
with decreasing $p$. Inserting Eqs.~(\ref{Pkq}) and (\ref{z1k})
into (\ref{z1k2}), we can determine the order parameter $R$ as a
function of $p$,
\begin{equation}
R=1-\Bigg[z_{1}^{-1}\sum\limits_{n=k}^{q_{\text{cut}}}nM_{k}(n,p)\Bigg]^{1/2}
.
\label{R3}
\end{equation}
Alternatively, we could find the order parameter $R(p)$, solving
Eq.~(\ref{R}), and afterwards obtain $M_{k}(n)$ from
Eq.~(\ref{Mnk}).
\begin{figure}
\epsfxsize=80mm
\centerline{\epsffile{relaxation.eps}}
\caption{
Schematic picture
of a relaxation process described by Eq.~(\protect\ref{dM/dt}). An
initial distribution $M_{k}(n,t=0)$ over states with
$n=q_{\text{cut}},q_{\text{cut}}-1,\ldots,k$ relaxes into the
final state $\{M_{k}(n)=0 \}$ due to transitions of vertices from
a state with degree $n+1$ to a state with degree $n$. Here
$q_{\text{cut}}$ is the maximum degree.
}
\label{relaxation}
\end{figure}
\section{Mapping to a cooperative relaxation model} \label{mapping}
Let us consider the $k$-core percolation as an evolutionary
process. At time $t=0 $ we have an initial uncorrelated network
with the $k$-core. During a time interval $\Delta t$ we remove at
random a fraction $\Delta p/p=\Delta t$ of occupied vertices from
the network. This means that the occupation probability $p$
decreases in time as $p=e^{-t}$. With this substitution,
Eq.~(\ref{dM/dp3}) takes the form
\begin{equation}
\frac{dM_{k}(n,t)}{dt}=-M_{k}(n,t)-rnM_{k}(n,t)+r(n+1)M_{k}(n+1,t)
.
\label{dM/dt}
\end{equation}
This rate equation describes the relaxation of an initial
distribution $\{M_{k}(n,t=0)\}$ to the final state with the
destroyed $k$-core, i.e., $\{M_{k}(n)=0\}$, due to the chain of
transitions of vertices from states of degree $n+1$ to states of
degree lower by one: $n+1 \to n$, see Fig.~\ref{relaxation}. Note
that we consider only relaxation in states with $n \geq k$,
assuming that vertices of degree less than $k$ are pruned
instantly \cite{remark4}. The parameter $r$ plays the role of the
characteristic scale of the relaxation rate. This relaxation is a
cooperative process due to the functional dependence of $r$ on
$M_{k}(n,t)$, see Eq.~(\ref{rs}). At time
$t_{c}(k)=\ln[1/p_{c}(k)]$ this model undergoes a dynamical phase
transition. Using Eq.~(\ref{Mpc}), we find that the total number
$M_{k}(t)$ of vertices in the $k$-core has a singular time
dependence near $t_{c}(k)$:
\begin{equation}
M_{k}(t)-M_{k}(t_{c}(k))\propto \lbrack p(t)-p_{c}(k)]^{\nu
}\propto [t_{c}(k)-t]^{\nu }
.
\label{Mtc}
\end{equation}
The critical exponent $\nu =1/2$ is valid for $k\geqslant 3$.
Inser\-ting Eq.~(\ref{lambda2}) into Eq.~(\ref{rs}), we find that
the relaxation rate diverges at the critical time $t_{c}(k)$,
\begin{equation}
r\propto \lbrack p(t)-p_{c}(k)]^{-1/2}\propto [t_{c}(k)-t]^{-1/2}
.
\end{equation}
Note that in accordance with the results obtained in
Sec.~\ref{damage}, the characteristic scale $r$ of the relaxation
rate also determines the mean size of the region in the $k$-core
pruned away due to the deletion of a vertex. In turn, $r$ is
approximately equal to the correlation length $\lambda$; i.e., the
larger the correlation length, the larger the relaxation rate.
This is in contrast to the usual critical slowing down of
the order parameter relaxation for continuous phase transitions.
In the latter case, the larger the correlation length, the
smaller the relaxation rate, $r\approx \lambda ^{-z}$, where
$z$ is a dynamical critical exponent.
\section{Conclusions}\label{conclusion}
In this paper we have explained the nature of the $k$-core
percolation transition in uncorrelated networks.
To
obtain our results, we used heuristic arguments and developed
an exact theory. Let us list the main features
of the quite unusual $k$-core percolation transition: (i) the
discontinuous (jump-like) emergence of the $k$-core, (ii) the critical singularity in the
phase with the $k$-core, (iii) the absence of any critical effects
--- strong correlations, divergent ``susceptibilities'', etc. ---
on the ``normal phase'' side. We had to reveal the meaning of the
order parameter in this problem, explain the nature of the
jump, find the origin of the singularity, identify a
``physical'' quantity that diverges at the critical point, and
point out the long-range correlations specific to this transition.
We have shown that the order parameter in this problem is simply expressed in terms of the relative number of edges in the $k$-core, see relation~(\ref{Lt}). The tree ansatz has allowed us to find the $k$-core order parameter and other $k$-core characteristics of various uncorrelated networks.
We have found that the unique properties of the $k$-core percolation transition are essentially determined by the corona subset of the $k$-core, that is, by vertices with exactly $k$ connections to the $k$-core. These are the ``weakest'' vertices in the $k$-core.
The critical correlations in the $k$-core are due to the correlations in the system of the corona clusters.
In the ``$k$-core phase'', the corona clusters are finite, but
their sizes and long-range correlations grow as the network
approaches the $k$-core percolation threshold. The mean size of a
corona cluster to which a randomly chosen corona vertex belongs
diverges at the $k$-core percolation threshold. This quantity
plays the role of a critical susceptibility in this problem. So,
the $k$-core percolation threshold coincides with the percolation
threshold for corona clusters, and the $k$-core phase is the
``normal'' phase for the corona. The dramatic difference from the
ordinary percolation is that the corona disappears on the other
side of the threshold, and so critical fluctuations in the phase
without the $k$-core are absent.
To understand the nature of this transition, we have studied the process of the destruction of the $k$-core due to the random deletion of vertices.
The deletion of a vertex in the $k$-core results in the clipping out of the entire adjacent corona clusters from the $k$-core due to the domino principle. This effect is enormously enhanced when corona clusters become large --- near the $k$-core percolation threshold. At the threshold, the removal of a tiny fraction of vertices results in the complete collapse of the corona and the $k$-core.
In this respect, the $k$-core percolation problem can be mapped to a model of cooperative relaxation, which undergoes critical relaxation with a divergent rate at some critical moment.
To conclude, let us indicate a possible application --- a social
network model,
where social links connect individuals. Each vertex (individual) in our model may
be in one of a few states --- distinct beliefs, opinions, religions,
ideologies, diseases, etc. We assume that each vertex takes a
specific state if at least $k$ of its neighbors are in this state. Is
it possible that in this social net a giant, say, religious group
will emerge? The answer is yes if the network has the giant
$k$-core. A giant community of individuals being in the same state
forms the $k$-core of this
network.
We believe that our results are applicable to a variety of complex cooperative systems of this kind.
\begin{acknowledgments}
This work was partially supported by projects POCTI: FAT/46241,
MAT/46176, FIS/61665, and BIA-BCM/62662, and DYSONET. The authors
thank J.G.~Oliveira for help in numerical calculations.
\end{acknowledgments}
\section{Introduction}
The theoretical investigation of Ruderman\cite{Ruder} leads to the conclusion that matter may be anisotropic at densities of order $10^{15}\;{\rm gm\;cm^{-3}}$. The impact of anisotropy on stellar configurations may be found in the pioneering works of Bowers and Liang\cite{BL} and Herrera and Santos\cite{HS}. Anisotropy may occur due to the existence of type 3A superfluids\cite{Ruder,BL,KW} or phase transitions\cite{Soko}. Cosenza \textit{et al.}\cite{CHEW} developed a procedure to obtain anisotropic solutions from isotropic solutions of Einstein's field equations. Tikekar and Thomas\cite{TT} found exact solutions of Einstein's field equations for an anisotropic fluid sphere on pseudo-spheroidal spacetime. The key feature of their model is the high variation of density from the centre to the boundary of the stellar configuration. A class of exact anisotropic solutions on spherically symmetric spacetime has been obtained by Mak and Harko\cite{MH}. Karmakar \textit{et al.}\cite{KMSM} analysed the role of pressure anisotropy in the Vaidya--Tikekar\cite{VT} model. Paul \textit{et al.}\cite{PCKT} developed an anisotropic stellar model for strange stars. A core-envelope model describing superdense stars with an anisotropic fluid distribution has been obtained by Thomas and Ratanpal\cite{TR}, Thomas \textit{et al.}\cite{TRV}, and Tikekar and Thomas\cite{TT1}. Hence the study of anisotropic fluid distributions is important in the general theory of relativity.\\\\
\noindent The study of the Einstein--Maxwell system has been carried out by several authors. Patel and Koppar\cite{PK} obtained a charged analogue of the Vaidya--Tikekar\cite{VT} solution. Analytic models of quark stars were studied by Komathiraj and Maharaj\cite{KM}, who found a class of solutions of the Einstein--Maxwell system. Charged anisotropic matter with a linear equation of state has been extensively studied by Thirukkanesh and Maharaj\cite{TM}.\\\\
\noindent Hence, both anisotropy and electromagnetic fields are important in relativistic astrophysics. In this paper a charged anisotropic model of a stellar configuration is studied on the background of paraboloidal spacetime. Section~2 presents the field equations for a charged static stellar configuration on paraboloidal spacetime and obtains their solution. Section~3 describes the physical plausibility conditions, and Section~4 contains the discussion.
\section{Field Equations and Solution}
The interior of the stellar configuration is described by the static spherically symmetric paraboloidal spacetime metric,
\begin{equation}\label{ISpaceTime1}
ds^{2}=e^{\nu(r)}dt^{2}-\left(1+\frac{r^{2}}{R^{2}} \right)dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right),
\end{equation}
with the energy-momentum tensor for anisotropic charged fluid,
\begin{equation}\label{EMTensor}
T_{ij}=diag\left(\rho+E^{2},\;p_{r}-E^{2},\;p_{t}+E^{2},\;p_{t}+E^{2} \right),
\end{equation}
where $\rho$ is the energy density, $p_{r}$ is the radial pressure, $p_{t}$ is the tangential pressure and $E$ is the electric field intensity. These quantities are measured relative to the comoving fluid velocity $u^{i}=e^{-\nu}\delta_{0}^{i}$. For the spacetime metric (\ref{ISpaceTime1}) and energy-momentum tensor (\ref{EMTensor}), the Einstein-Maxwell system takes the form,
\begin{equation}\label{rho1}
\rho+E^{2}=\frac{3+\frac{r^{2}}{R^{2}}}{R^{2}\left(1+\frac{r^{2}}{R^{2}} \right)^{2}},
\end{equation}
\begin{equation}\label{pr1}
p_{r}-E^{2}=\frac{\nu'}{r\left(1+\frac{r^{2}}{R^{2}}\right)}-\frac{1}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)},
\end{equation}
\begin{equation}\label{pt1}
p_{t}+E^{2}=\frac{1}{1+\frac{r^{2}}{R^{2}}}\left[\frac{\nu''}{2}+\frac{\nu'^{2}}{4}+\frac{\nu'}{2r}\right]-\frac{\nu' r}{2R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{2}}-\frac{1}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{2}},
\end{equation}
\begin{equation}\label{Sigma1}
\sigma=\frac{\left(r^{2}E\right)'}{r^{2}\sqrt{1+\frac{r^{2}}{R^{2}}}},
\end{equation}
where $\sigma$ is the proper charge density and a prime denotes differentiation with respect to $r$. In the field equations (\ref{rho1})--(\ref{Sigma1}), the velocity of light $c$ is taken as $1$, and $\frac{8\pi G}{c^{4}}=1$.\\\\
The anisotropy parameter $\Delta$ is defined as
\begin{equation}\label{Delta1}
\Delta=p_{t}-p_{r}.
\end{equation}
To solve the system (\ref{rho1})--(\ref{Sigma1}), the radial pressure is assumed to be of the form
\begin{equation}\label{pr2}
p_{r}=\frac{p_{0}\left(1-\frac{r^{2}}{R^{2}}\right)}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{2}},
\end{equation}
where $p_{0}>0$ is a model parameter and $\frac{p_{0}}{R^{2}}$ is the central pressure. The radial pressure (\ref{pr2}) vanishes at $r=R$, which identifies $r=R$ as the boundary (radius) of the star.\\\\
This form of the radial pressure was prescribed by Sharma and Ratanpal\cite{SR} to describe an anisotropic stellar model admitting a quadratic equation of state on paraboloidal spacetime. Equations (\ref{pr2}) and (\ref{pr1}) give
\begin{equation}\label{NuDash1}
\nu'=\frac{p_{0}r\left(1-\frac{r^{2}}{R^{2}}\right)}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)}+\frac{r}{R^{2}}-E^{2}r\left(1+\frac{r^{2}}{R^{2}}\right).
\end{equation}
We assume electric field intensity of the form,
\begin{equation}\label{E1}
E^{2}=\frac{k\frac{r^{2}}{R^{2}}}{R^{2}\left(1+\frac{r^{2}}{R^{2}} \right)^{2}},
\end{equation}
where $k\geq 0$ is a model parameter; from equation (\ref{E1}) it is clear that $E$ vanishes at the centre and increases monotonically towards the boundary of the star. Equations (\ref{NuDash1}) and (\ref{E1}) lead to
\begin{equation}\label{NuDash2}
\nu'=\frac{\left(2p_{0}+k\right)r}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)}+\left(1-p_{0}-k\right)\frac{r}{R^{2}},
\end{equation}
and hence,
\begin{equation}\label{Nu}
\nu=\log\left[C\left(1+\frac{r^{2}}{R^{2}}\right)^{\left(\frac{2p_{0}+k}{2}\right)}\right]+\left(\frac{1-p_{0}-k}{2}\right)\frac{r^{2}}{R^{2}},
\end{equation}
where $C$ is a constant of integration. Therefore the spacetime metric (\ref{ISpaceTime1}) can be written as
\begin{equation}\label{ISpaceTime2}
ds^{2}=C\left(1+\frac{r^{2}}{R^{2}}\right)^{\left(\frac{2p_{0}+k}{2} \right)}e^{\left(\frac{1-p_{0}-k}{2}\right)\frac{r^{2}}{R^{2}}}dt^{2}-\left(1+\frac{r^{2}}{R^{2}} \right)dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2} \right).
\end{equation}
The spacetime metric (\ref{ISpaceTime2}) should match continuously with the exterior Reissner--Nordstr\"{o}m spacetime metric
\begin{equation}\label{ESpaceTime}
ds^{2}=\left(1-\frac{2m}{r}+\frac{Q^{2}}{r^{2}}\right)dt^{2}-\left(1-\frac{2m}{r}+\frac{Q^{2}}{r^{2}}\right)^{-1}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta\, d\phi^{2}\right),
\end{equation}
at the boundary of the star $r=R$, where $p_{r}(r=R)=0$. These matching conditions give
\begin{equation}\label{M}
M=\frac{k+2R^{2}}{8R},
\end{equation}
and
\begin{equation}\label{C}
C=\frac{e^{\left(\frac{p_{0}+k-1}{2}\right)}}{\left(2p_{0}+2+k\right)/2},
\end{equation}
where $M$ is the mass enclosed by the spherical body of radius $R$; hence the electric field intensity parameter $k$ directly affects the mass of the star.
Equations (\ref{pr1}), (\ref{pt1}), (\ref{Delta1}), and (\ref{NuDash2}) give the anisotropy parameter $\Delta$ as
\begin{equation}\label{Delta2}
\Delta=\frac{\frac{r^{2}}{R^{2}}\left[X_{1}+Y_{1}\frac{r^{2}}{R^{2}}+Z_{1}\frac{r^{4}}{R^{4}}\right]}{4R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{3}},
\end{equation}
which vanishes at $r=0$, where $X_{1}=p_{0}^{2}-8p_{0}-12k+3$, $Y_{1}=-2p_{0}^{2}-2p_{0}k+2p_{0}-8k+4$, $Z_{1}=1+p_{0}^{2}+k^{2}-2p_{0}-2k+2p_{0}k$. Equations (\ref{rho1}) and (\ref{E1}) gives,
\begin{equation}\label{rho2}
\rho=\frac{3+\left(1-k\right)\frac{r^{2}}{R^{2}}}{R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{2}},
\end{equation}
and from equations (\ref{Delta1}), (\ref{pr2}), and (\ref{Delta2}) we get the expression for $p_{t}$ as
\begin{equation}\label{pt2}
p_{t}=\frac{4p_{0}+X_{1}\frac{r^{2}}{R^{2}}+\left(Y_{1}-4p_{0}\right)\frac{r^{4}}{R^{4}}+Z_{1}\frac{r^{6}}{R^{6}}}{4R^{2}\left(1+\frac{r^{2}}{R^{2}}\right)^{3}}.
\end{equation}
Hence equations (\ref{rho2}), (\ref{pr2}), (\ref{pt2}), (\ref{E1}), and (\ref{Delta2}) describe the matter density, radial pressure, tangential pressure, electric field intensity, and measure of anisotropy, respectively.
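For completeness, these closed-form expressions can be checked symbolically. The following sketch (ours, not taken from any published code) builds $p_{t}=p_{r}+\Delta$ and reproduces the boundary value used in the next section:
\begin{verbatim}
import sympy as sp

# x = r^2/R^2; units as in the text (c = 1, 8*pi*G/c^4 = 1)
x, R, k, p0 = sp.symbols('x R k p0', positive=True)
X1 = p0**2 - 8*p0 - 12*k + 3
Y1 = -2*p0**2 - 2*p0*k + 2*p0 - 8*k + 4
Z1 = 1 + p0**2 + k**2 - 2*p0 - 2*k + 2*p0*k

rho   = (3 + (1 - k)*x) / (R**2*(1 + x)**2)             # Eq. (rho2)
pr    = p0*(1 - x) / (R**2*(1 + x)**2)                  # Eq. (pr2)
Delta = x*(X1 + Y1*x + Z1*x**2) / (4*R**2*(1 + x)**3)   # Eq. (Delta2)
pt    = sp.simplify(pr + Delta)                         # Eq. (Delta1)

# boundary value: equals (k^2 - 22k + 8 - 8 p0)/(32 R^2) ...
print(sp.simplify(pt.subs(x, 1)))
# ... and vanishes for p0 = (k^2 - 22k + 8)/8, cf. the next section
print(sp.simplify(pt.subs(x, 1).subs(p0, (k**2 - 22*k + 8)/8)))
\end{verbatim}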
\section{Physical Plausibility Conditions}
Following Delgaty and Lake\cite{Delgaty}, we impose the following conditions on the system to make the model physically acceptable.
\begin{itemize}
\item [(i)] $\rho(r),~p_{r}(r),~p_{t}(r) \geq 0 $ for $ 0 \leq r \leq R$.
\item [(ii)] $\rho-p_{r}-2p_{t} \geq 0$ for $ 0 \leq r \leq R$.
\item [(iii)] $\frac{d\rho}{dr},~\frac{dp_{r}}{dr},~\frac{dp_{t}}{dr} < 0$ for $0 \leq r \leq R$.
\item [(iv)] $0 \leq \frac{dp_{r}}{d\rho} \leq 1$; $0 \leq \frac{dp_{t}}{d\rho} \leq 1$, for $0 \leq r \leq R$.
\end{itemize}
From equation (\ref{rho2}), $\rho(r=0)=\frac{3}{R^{2}}>0$ and $\rho(r=R)=\frac{4-k}{4R^{2}}$,
therefore, $\rho>0$ for $0\leq r\leq R$ if $k\leq4$, i.e.
\begin{equation}\label{k1}
0\leq k\leq 4.
\end{equation}
From equation (\ref{pr2}), $p_{r}(r=0)=\frac{p_{0}}{R^{2}}>0$ since $p_{0}>0$, and $p_{r}(r=R)=0$. Hence $p_{r}\geq 0$ for $0\leq r\leq R$. It is also required that $p_{t}\geq 0$ for $0\leq r\leq R$; further, to get simple bounds on $p_{0}$ and $k$, we assume $p_{r}=p_{t}$ at $r=R$.\\\\
From (\ref{pt2}), $p_{t}(r=0)=\frac{p_{0}}{R^{2}}>0$ since $p_{0}>0$, and $p_{t}(r=R)=\frac{k^{2}-22k+8-8p_{0}}{32R^{2}}$. At $r=R$, $p_{t}=p_{r}=0$ if
\begin{equation}\label{p0}
p_{0}=\frac{k^{2}-22k+8}{8},
\end{equation}
but $k$ should be chosen such that $p_{0}$ is positive, which restricts the value of $k$ to
\begin{equation}\label{k2}
k<0.3699.
\end{equation}
Hence,
\begin{equation}\label{k3}
0\leq k<0.3699,\;\;\;\;\;p_{0}=\frac{k^{2}-22k+8}{8}
\end{equation}
which is the condition for the positivity of $p_{t}$.\\\\
Hence condition (i) is satisfied throughout the star. For the values of $k$ and $p_{0}$ specified in (\ref{k3}), it has been verified numerically that condition (ii), i.e., the energy condition, is satisfied throughout the star. From equation (\ref{rho2}),
\begin{equation}\label{drhodr}
\frac{d\rho}{dr}=-\frac{2r}{R^{4}}\frac{\left[(5+k)+(1-k)\frac{r^{2}}{R^{2}}\right]}{\left(1+\frac{r^{2}}{R^{2}} \right)^{3}},
\end{equation}
from equation (\ref{drhodr}), $\left(\frac{d\rho}{dr}\right)(r=0)=0$ and $\left(\frac{d\rho}{dr}\right)(r=R)=-\frac{3}{2R^{3}}<0$. Hence $\rho$ is decreasing throughout the star. From equation (\ref{pr2}),
\begin{equation}\label{dprdr}
\frac{dp_{r}}{dr}=\frac{-2p_{0}r\left(3-\frac{r^{2}}{R^{2}}\right)}{R^{4}\left(1+\frac{r^{2}}{R^{2}}\right)^{3}}.
\end{equation}
Now, $\left(\frac{dp_{r}}{dr}\right)(r=0)=0$ and $\left(\frac{dp_{r}}{dr}\right)(r=R)=\frac{-p_{0}}{2R^{3}}<0$ as $p_{0}>0$. Hence $p_{r}$ is decreasing throughout the star. From equation (\ref{pt2}),
\begin{equation}\label{dptdr}
\frac{dp_{t}}{dr}=\frac{r\left[X_{2}+Y_{2}\frac{r^{2}}{R^{2}}+Z_{2}\frac{r^{4}}{R^{4}}\right]}{2R^{4}\left(1+\frac{r^{2}}{R^{2}}\right)^{4}},
\end{equation}
where $X_{2}=p_{0}^{2}-20p_{0}-12k+3$, $Y_{2}=-6p_{0}^{2}+12p_{0}-4p_{0}k+8k+2$, and $Z_{2}=5p_{0}^{2}-4p_{0}+8p_{0}k+3k^{2}+2k-1$. Now $\left(\frac{dp_{t}}{dr}\right)(r=0)=0$ and $\left(\frac{dp_{t}}{dr}\right)(r=R)=\frac{-12p_{0}+3k^{2}-2k+4p_{0}k+4}{32R^{3}}$. Substituting $p_{0}$ from equation (\ref{p0}) in $\left(\frac{dp_{t}}{dr}\right)(r=R)$, we find $\left(\frac{dp_{t}}{dr}\right)(r=R)<0$ if $4k^{3}-76k^{2}+280k-64<0$. This further restricts the value of $k$ to $k<0.2446$.\\\\
Hence, if
\begin{equation}\label{k4}
0\leq k<0.2446,\;\;\;\;\;p_{0}=\frac{k^{2}-22k+8}{8},
\end{equation}
then $\rho$, $p_{r}$, and $p_{t}$ are decreasing in the radially outward direction for $0\leq r\leq R$. From equations (\ref{drhodr}) and (\ref{dprdr}) we have
\begin{equation}\label{dprdrho}
\frac{dp_{r}}{d\rho}=\frac{p_{0}\left(3-\frac{r^{2}}{R^{2}}\right)}{\left[(5+k)+(1-k)\frac{r^{2}}{R^{2}}\right]}.
\end{equation}
At the centre of the star, $\left(\frac{dp_{r}}{d\rho}\right)(r=0)<1$ if $k<24.8810$, which is consistent with condition (\ref{k4}), and at the boundary of the star, $\left(\frac{dp_{r}}{d\rho}\right)(r=R)<1$ if $k<22.7047$, which is also consistent with condition (\ref{k4}). From equations (\ref{drhodr}) and (\ref{dptdr}) we have
\begin{equation}\label{dptdrho}
\frac{dp_{t}}{d\rho}=\frac{-\left[X_{2}+Y_{2}\frac{r^{2}}{R^{2}}+Z_{2}\frac{r^{4}}{R^{4}}\right]}{4\left(1+\frac{r^{2}}{R^{2}}\right)\left[(5+k)+(1-k)\frac{r^{2}}{R^{2}}\right]}.
\end{equation}
Now, $\left(\frac{dp_{t}}{d\rho}\right)(r=0)<1$ if $k<19.4283$, which is consistent with condition (\ref{k4}) and $\left(\frac{dp_{t}}{d\rho}\right)(r=R)<1$ if $k<6.6371$, which is also consistent with condition (\ref{k4}). Hence for $0\leq k<0.2446$ and $p_{0}=\frac{k^{2}-22k+8}{8}$, all the physical plausibility conditions are satisfied.
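The numerical bounds on $k$ quoted above are easy to reproduce; a minimal check (ours) is:
\begin{verbatim}
import numpy as np

# p0 > 0 requires k^2 - 22k + 8 > 0; smaller root of the quadratic:
print((22 - np.sqrt(22**2 - 4*8)) / 2)                        # ~0.36991

# dp_t/dr(r=R) < 0 requires 4k^3 - 76k^2 + 280k - 64 < 0;
# smallest real root of the cubic:
print(np.sort(np.roots([4.0, -76.0, 280.0, -64.0]).real)[0])  # ~0.24456
\end{verbatim}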
\section{Discussion}
Certain aspects of a charged relativistic star on paraboloidal spacetime have been discussed. It is observed that all the physical plausibility conditions are satisfied for $0\leq k<0.2446$ and $p_{0}=\frac{k^{2}-22k+8}{8}$. The plots of $\rho$ (charged, uncharged), $p_{r}$ \& $p_{t}$ (charged), $p_{r}$ \& $p_{t}$ (uncharged), anisotropy $\Delta$ (charged, uncharged), $\rho-p_{r}-2p_{t}$ (charged, uncharged), $\frac{dp_{r}}{d\rho}$ \& $\frac{dp_{t}}{d\rho}$ (charged), and $\frac{dp_{r}}{d\rho}$ \& $\frac{dp_{t}}{d\rho}$ (uncharged) against $\frac{r^{2}}{R^{2}}$ for $R=10$, $k=0.2$, and taking $G=c^{2}=1$, are shown in Figures 1 to 7. It is observed that the energy condition is satisfied throughout the star. When $k=0$, the value of $p_{0}$ is $1$, and the model reduces to the Sharma \& Ratanpal\cite{SR} model. Hence the model described here is the charged generalization of the particular case $p_{0}=1$ of the uncharged Sharma \& Ratanpal\cite{SR} model.
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{rho.eps}\\
\caption{Variation of density ($\rho$) (charged \& uncharged) against $\frac{r^{2}}{R^{2}}$.}\label{rho}
\end{center}
\end{figure}
\pagebreak
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{pressureC.eps}\\
\caption{Variation of radial pressure ($p_{r}$) \& tangential pressure ($p_{t}$) (charged) against $\frac{r^{2}}{R^{2}}$.}\label{pressureC}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{pressureU.eps}\\
\caption{Variation of radial pressure ($p_{r}$) \& tangential pressure ($p_{t}$) (uncharged) against $\frac{r^{2}}{R^{2}}$.}\label{pressureU}
\end{center}
\end{figure}
\pagebreak
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{anisotropy.eps}\\
\caption{Variation of anisotropy ($\Delta$) (charged \& uncharged) against $\frac{r^{2}}{R^{2}}$.}\label{anisotropy}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{energycondition.eps}\\
\caption{Variation of the energy condition ($\rho-p_{r}-2p_{t}$) (charged \& uncharged) against $\frac{r^{2}}{R^{2}}$.}\label{energycondition}
\end{center}
\end{figure}
\pagebreak
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{soundspeedC.eps}\\
\caption{Variation of $\frac{dp_{r}}{d\rho}$ \& $\frac{dp_{t}}{d\rho}$ (charged) against $\frac{r^{2}}{R^{2}}$.}\label{soundspeedC}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{soundspeedU.eps}\\
\caption{Variation of $\frac{dp_{r}}{d\rho}$ \& $\frac{dp_{t}}{d\rho}$ (uncharged) against $\frac{r^{2}}{R^{2}}$.}\label{soundspeedU}
\end{center}
\end{figure}
\pagebreak
\subsection{Numerical methods}
\subsubsection{Exact dynamics of the periodically modulated lattice}
The system, composed of $N_c$ cells of spatial size $\lambda=2\pi$ is discretized with $N_p$ points per cell; the total basis size is thus $N_t=N_c N_p$.
We have used both a spatial $\ket{x}$ and momentum $\ket{p}$ representation.
The corresponding grids are centered around $x=0$ and $p=0$ with respective size-step:
\begin{gather}
\delta x = \frac{\lambda}{N_p} \qqtext{and} \delta p = \frac{2\pi}{\lambda}\frac{\heff}{N_c}.
\end{gather}
For the whole of the study, we took $N_p = 32$ after checking that this discretization was fine enough to faithfully represent the dynamics of the system: in particular, the total size in the $p$ direction, $N_p \heff$, should be larger than the extension of the chaotic sea in momentum space.
The time propagation of a given state $\ket{\psi}$ is achieved with a symmetrized split-step method:
\begin{gather}
\ket{\psi(t+\delta t)}=U_P F U_X F^{-1} U_P \ket{\psi(t)},
\end{gather}
with
\begin{gather}
U_X = \sum_x \exp(-i\frac{V(x,t) \delta t}{\hbar}) \ketbra{x}{x}, ~U_P = \sum_p \exp(-i\frac{p^2 \delta t}{4\hbar})\ketbra{p}{p}\\
F=\frac{1}{\sqrt{N}} \sum_{x,p} \exp(-i \frac{x p}{\hbar}) \ketbra{p}{x} \qqtext{(using FFT).}
\end{gather}
The time step $\delta t=4\pi/1000$ was chosen after consistency tests.
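For concreteness, a minimal Python sketch of this symmetrized split-step scheme is given below. The parameter values are illustrative placeholders (not those of the actual study), and the bookkeeping of FFT index ordering versus the centered grids is deliberately glossed over.
\begin{verbatim}
import numpy as np

Nc, Np = 8, 32
N = Nc * Np
lam = 2.0 * np.pi
heff, gamma, eps = 0.2, 0.25, 0.1     # illustrative values only
dt = 4.0 * np.pi / 1000.0

x = (np.arange(N) - N // 2) * lam / Np                  # position grid
p = 2.0 * np.pi * heff * np.fft.fftfreq(N, d=lam / Np)  # momentum grid

def step(psi_p, t):
    """One step U_P F U_X F^{-1} U_P, psi in the p representation."""
    UP = np.exp(-1j * p**2 * dt / (4.0 * heff))
    psi_p = UP * psi_p
    psi_x = np.fft.ifft(psi_p)                          # F^{-1}: p -> x
    V = -gamma * (1.0 + eps * np.cos(t)) * np.cos(x)
    psi_x = np.exp(-1j * V * dt / heff) * psi_x         # U_X
    return UP * np.fft.fft(psi_x)                       # F, then U_P
\end{verbatim}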
\subsubsection{Construction of the effective Hamiltonian}
The determination of the Floquet-Bloch band is equivalent to the determination of the quasi-energy spectrum of the following Hamiltonian
\begin{gather}
H_\beta(x,t)=\frac{(p-\heff\beta)^2}{2} - \gamma(1+\varepsilon \cos t) \cos x,
\end{gather}
on a single cell $N_c=1$ (with $N_p=32$, see above), with the quasi-momentum $\beta$ taking the discrete values ${\beta_m=2\pi m/(N_c \lambda)}$, $m=0,\dots, N_c-1$.
Thus, for a system size $N_c$, we repeat $N_c$ times the following procedure (for each value of $\beta_m$):
\begin{itemize}
\item First, we build the matrix (in the $x$ representation) of the Floquet operator from the propagation of $N_p$ $\delta$-function states $\ket{x}$.
To do so, we use the previous split-step method over two periods of modulation $T=4\pi$ (this choice was made to be consistent with \cite{Arnal2020}, but is of no importance here).
\item Second, we diagonalize the Floquet operator and look for the eigenstate having the largest overlap with a Gaussian state centered on the regular island.
This eigenstate is associated with a complex eigenvalue $\alpha_\beta$ that gives the effective energy:
\begin{equation}
\varepsilon_\text{eff}^\text{reg}(\beta) = - \frac{i\heff}{T} \log \alpha_\beta.
\end{equation}
\item Once we have obtained the $N_c$ values of $\varepsilon_\text{eff}^\text{reg}(\beta_m)$, we build explicitly the effective tight-binding Hamiltonian $H_\text{eff}$, whose coupling elements $t_n^\text{eff}\equiv\mel{(m+n)_\text{reg}}{H_\text{eff}}{m_\text{reg}}$ are computed from the discrete Fourier transform (see the numerical sketch after this list):
\begin{equation}
t_n^\text{eff}=\frac{1}{N} \sum_{\beta_m} \varepsilon_\text{eff}^\text{reg}(\beta_m) \exp(i \beta_m \lambda n).
\end{equation}
\end{itemize}
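A compact numerical sketch of this last step is given below; the sampled band is a placeholder, since the actual $\varepsilon_\text{eff}^\text{reg}(\beta_m)$ comes from the Floquet diagonalization described above. With $\beta_m\lambda=2\pi m/N_c$, the prescribed sum is exactly an inverse discrete Fourier transform.
\begin{verbatim}
import numpy as np

Nc, lam = 64, 2.0 * np.pi
beta = 2.0 * np.pi * np.arange(Nc) / (Nc * lam)
eps_eff = np.cos(beta * lam)        # placeholder for eps_eff_reg(beta_m)
t_eff = np.fft.ifft(eps_eff)        # t_n = (1/Nc) sum_m eps_m e^{2i pi mn/Nc}
print(t_eff[:4].round(6))           # here only t_1 = 1/2 (nearest neighbor)
\end{verbatim}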
\subsubsection{Dynamic evolution under the effective Hamiltonian}
The effective Hamiltonian is a tight-binding model of $N_c$ sites $\ket{n}$, with $n=0,\dots N_c-1$.
The wavefunction $\ket{\psi}$ is propagated over two periods with effective evolution propagator:
\begin{gather}
\ket{\psi(t+T)}=U_\text{eff} \ket{\psi(t)} \qqtext{with} U_\text{eff}=\exp(-i\frac{H_\text{eff} T}{\heff})\; ,
\end{gather}
obtained using a Padé approximation.
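Schematically, and continuing the previous sketch (the circulant form assumes periodic boundary conditions; \texttt{scipy}'s \texttt{expm} implements a Pad\'e scaling-and-squaring approximation):
\begin{verbatim}
import numpy as np
from scipy.linalg import circulant, expm

T, heff = 4.0 * np.pi, 0.2
H_eff = circulant(t_eff)       # <(m+n)|H_eff|m> = t_eff[n] (mod Nc)
U_eff = expm(-1j * H_eff * T / heff)
psi = np.zeros(len(t_eff), complex); psi[0] = 1.0
psi = U_eff @ psi              # |psi(t+T)> = U_eff |psi(t)>
\end{verbatim}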
\subsubsection{Construction of the regular Wannier-states}
The Wannier states of the unperturbed lattice (with $\varepsilon=0$) provide an approximation of the regular modes $\ket{n_\text{reg}}$ of the modulated lattice discussed in the letter.
To construct them, we thus use a procedure similar to that used for the determination of the effective energy band, but using the unmodulated lattice (with $\varepsilon=0$):
For each value of $\beta_m=2\pi m/(N_c \lambda)$, $m=0,\dots N_c-1$, we diagonalize the evolution operator over two periods $T=4 \pi$ and look for the eigenstate having the largest overlap with a Gaussian state centered on the regular island.
The $p$ representation of this eigenstate gives the coefficients of the Wannier state on the partial (incomplete) grid $p=\heff \beta + n \delta p$ of size $N_p$.
After repeating $N_c$ times this procedure, we obtain the full $p$ representation of the Wannier state (on the complete momentum basis of size $N_p N_c$).
\subsubsection{Miscellaneous}
The classical dynamics is simulated using a RK4 Runge-Kutta algorithm.
Husimi phase-space representations are computed using the procedure described e.g. in \cite{Terraneo2005}.
\subsection{Derivation of the hopping law for large system sizes}
To derive the hopping law Eq.~(3), we first decompose the effective Bloch band as a regular part $\varepsilon_0$ and a sum over all resonance terms:
\begin{gather}
\label{eq:sum_ebeta}
\varepsilon^\text{eff}_\text{reg}(\beta)=\varepsilon_0(\beta)+\sum_\text{resonances} \varepsilon_\asymp(\beta-\beta_0,W,\alpha)\,,
\end{gather}
where each resonance is characterized by three parameters: $\beta_0$ the position of the resonance, $W$ the coupling intensity between the chaotic and the regular state and $\alpha$ the slope of the energy of the involved chaotic state with $\beta$.
Each resonance can be described by a two-level Hamiltonian of an avoided crossing at $\beta=0$:
\begin{gather}
\mqty(\varepsilon_\text{reg}(\beta) & W\\ W & \varepsilon_\text{ch}(\beta))
\end{gather}
with $\varepsilon_\text{reg}(\beta)=0$ (since it is taken into account by $\varepsilon_0$ in Eq.~\eqref{eq:sum_ebeta}) and $\varepsilon_\text{ch}(\beta)=\alpha\beta$. The corresponding eigenstates $\vert \beta_\pm\rangle$ and eigenenergies $\varepsilon_\pm(\beta)$ follow:
\begin{gather}
\label{eq:shiftenergy}
\varepsilon_\pm(\beta)= \frac{\varepsilon_\text{reg}+\varepsilon_\text{ch}}{2} \pm \sqrt{\Delta^2 + W^2}\qqtext{and} \ket{\beta_\pm}=\begin{cases}
\cos \theta \ket{\beta_\text{reg}} + \sin \theta \ket{\beta_\text{ch}}\\
-\sin \theta \ket{\beta_\text{reg}} + \cos \theta \ket{\beta_\text{ch}}
\end{cases},
\end{gather}
with $\Delta=(\varepsilon_\text{reg}-\varepsilon_\text{ch})/2$ and $\theta \in [0,\pi/2]$ satisfying $\tan 2\theta=|W|/\Delta$.
The prescription for the effective spectrum construction is to select the energy associated with the eigenstate having the largest projection on the regular subspace. We thus get:
\begin{gather}
\varepsilon_\asymp(\beta,W,\alpha) = \frac{\alpha}{2}\qty(\beta-\sgn{\beta} \sqrt{\beta^2+\qty(\frac{2|W|}{\alpha})^2}).
\end{gather}
Taking the Fourier transform, we then have
\begin{gather}
t_n = t_n^0+\sum_\text{resonances} t_n^\asymp(\beta_0,W,\alpha) \qqtext{with}
t_n^\asymp(\beta_0,W,\alpha)=\frac{\lambda}{2\pi}\int_{-\pi/\lambda}^{\pi/\lambda} \varepsilon_\asymp(\beta-\beta_0,W,\alpha) \e{-i n\beta \lambda} \dd{\beta}.
\end{gather}
We now assume that $\varepsilon_\asymp(\beta-\beta_0,W,\alpha)$ is peaked around $\beta_0$ and that $\beta_0$ is sufficiently far from the edge of the Brillouin zone, so that
\begin{gather}
t_n^\asymp(\beta_0,W,\alpha)\approx \e{in\beta_0 \lambda} \frac{\lambda}{2\pi}\int_{-\pi/\lambda}^{\pi/\lambda} \varepsilon_\asymp(\beta,W,\alpha) \e{-i n\beta \lambda} \dd{\beta}.
\end{gather}
The latter expression can be evaluated for large $n$ values.
Introducing $x=\beta \lambda$ and $\eta=2 \lambda|W|/\alpha=\lambda \Delta\beta/2$, it reads
\begin{align}
t_n^\asymp=&\frac{\e{in\beta_0 \lambda} \alpha}{4\pi \lambda} \times \underbrace{\int_{-\pi}^{\pi} \qty(x-\sgn{x}\sqrt{x^2+\eta^2})\e{-inx} \dd{x}}_{I^*}.
\end{align}
We split the integral $I$ (taking the complex conjugate) into two parts; the first part gives
\begin{gather}
\int_{-\pi}^{\pi} x\e{inx} \dd{x} = \frac{2i\pi}{n} (-1)^{n+1}.
\end{gather}
The second part can be rewritten as
\begin{gather}
\int_{0}^{\pi} \sgn{x}\sqrt{x^2+\eta^2}\e{inx} \dd{x} - \text{c.c.},
\end{gather}
we then deform the contour of integration $0\rightarrow \pi$ to a complex circuit $0 \rightarrow iT \rightarrow iT+\pi \rightarrow \pi$ with $T$ some large real number. Using Watson's formula, the first part gives (setting $x=iy$)
\begin{gather}
i\int_{0}^{T} \sqrt{\eta^2-y^2}\e{-ny} \dd{y}\sim \frac{i |\eta|}{n}.
\end{gather}
The second part is negligible for $T$ large enough (setting $x=y+iT$):
\begin{gather}
\e{-nT} \int_0^\pi \sqrt{(y+iT)^2+\eta^2} \e{-iny} \dd{y} \rightarrow 0.
\end{gather}
Using Watson's formula and assuming $\Delta \beta \ll \frac{2\pi}{\lambda}$ so that $(\eta/\lambda)^2 \ll 1$, the third part (setting $x=\pi + iy$) gives:
\begin{align}
i(-1)^{n+1}\int_0^T \sqrt{(\pi+iy)^2+\eta^2} &\e{-ny} \dd{y} \sim \frac{i\pi}{n}(-1)^{n+1}.
\end{align}
Putting all terms together (taking care of complex conjugation) we end up with
\begin{gather}
t_n^\asymp\approx \frac{\e{in\beta_0 \lambda} \alpha}{4\pi \lambda} \qty(\frac{2i\pi}{n}(-1)^{n+1}-\frac{2i \eta}{n}-\frac{2i\pi}{n}(-1)^{n+1})^* = \frac{\e{in\beta_0 \lambda} \alpha}{4\pi \lambda} \times \frac{i 4 \lambda |W|}{|\alpha|} =\frac{i}{\pi n}\sgn\alpha|W|\e{i n\beta_0\lambda}.
\end{gather}
We finally assume that $t_n^0$ is negligible for large $n$ values (because it decays exponentially), so that
\begin{gather}
t_n \approx \frac{i}{\pi n}\sum_{\text{resonances}} \sgn\alpha|W|\e{i n\beta_0\lambda}.
\end{gather}
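This asymptotic law is easily checked numerically. In the following sketch (ours, with arbitrary illustrative parameters), the rescaled modulus $\pi n\,|t_n^\asymp|/|W|$ for a single resonance indeed approaches unity at large $n$:
\begin{verbatim}
import numpy as np

lam, W, alpha, beta0 = 2.0 * np.pi, 1.0e-3, 0.05, 0.11   # illustrative
beta = np.linspace(-np.pi / lam, np.pi / lam, 200001)
d = beta - beta0
eps = 0.5 * alpha * (d - np.sign(d) * np.sqrt(d**2 + (2.0 * W / alpha)**2))
for nn in (20, 50, 100):
    tn = lam / (2.0 * np.pi) * np.trapz(
        eps * np.exp(-1j * nn * beta * lam), beta)
    print(nn, np.abs(tn) * np.pi * nn / W)               # -> 1 as nn grows
\end{verbatim}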
\bibliographystyle{apsrev4-1}
\section{Introduction}
\label{secintro}
Finding the galaxies that have the highest star formation rates (SFRs)
at high redshifts has been a difficult problem.
Such galaxies cannot be easily picked out in rest-frame
ultraviolet (UV) or optical samples due to their very
large and highly variable extinctions
(e.g., Bouwens et al.\ 2009; Reddy et al.\ 2012),
and though they can be found in far-infrared (FIR) or submillimeter
selected galaxy samples, the poor resolution of single-dish submillimeter
telescopes makes their interpretation
complex. In particular, recent follow-up surveys
with submillimeter interferometers have shown that at least
some of the brightest submillimeter galaxies (SMGs) are blends of
fainter sources with lower SFRs (e.g., Wang et al.\ 2011;
Smol\v{c}i\'{c} et al.\ 2012;
Barger et al.\ 2012; Hodge et al.\ 2013b). In fact, Karim et al.\ (2013)
suggested that almost all bright ($>9$~mJy) SMGs are blends
and that there is a natural upper limit of $\sim 1000~M_\sun$ yr$^{-1}$
on the SFRs of galaxies.
One thing all SMG studies agree on is that there is a large
fraction of cosmic star formation hidden by dust
(e.g., Barger et al.\ 2000, 2012; Lagache et al.\ 2005;
Chapman et al.\ 2005; Wang et al.\ 2006;
Serjeant et al.\ 2008; Wardlow et al.\ 2011; Casey et al.\ 2013), most of
which is occurring in the most massively star-forming galaxies in the universe.
Thus, the construction of a complete picture
of galaxy evolution requires a full understanding of galaxies
at both optical and FIR/submillimeter wavelengths. However, in order to
develop this understanding of the dusty universe, we need large,
uniformly selected samples with well determined star-forming
properties.
In this paper, we work towards this goal using a combination
of powerful new data on the heavily studied Great Observatories Origins Deep
Survey-North (GOODS-N; Giavalisco et al.\ 2004)/{\em Chandra\/} Deep Field-North
(CDF-N; Alexander et al.\ 2003) field.
We begin with a new, uniformly selected sample of 850~$\mu$m galaxies
observed with the SCUBA-2 camera (Holland et al.\ 2013)
on the 15~m James Clerk Maxwell Telescope (JCMT).
Then, using a combination of extremely deep Karl G. Jansky Very Large
Array (VLA) 1.4~GHz observations
(F. Owen 2014, in preparation; hereafter, Owen14) and
high-resolution Submillimeter Array (SMA; Ho et al.\ 2004) 860~$\mu$m
observations, we analyze the fraction of SMGs that are single sources
and estimate the SFR distribution function for the most massively
star-forming galaxies in the universe.
The principal underpinning of this work is the
new submillimeter imaging capability provided by SCUBA-2.
SCUBA-2 covers 16 times the area of SCUBA (Holland et al.\ 1999)
and has a mapping speed that is considerably faster than SCUBA,
which means that large samples of SMGs
can be obtained in only a few nights of observations. Previous
samples of SMGs in the GOODS-N/CDF-N field were based on mosaics
of SCUBA fields that only partially covered
the area and that had widely varying sensitivities. SCUBA-2 enables
the development of uniform, deep SMG samples over the entire field.
In order to construct the SFR distribution function,
we need to determine how many of the SCUBA-2 selected
sources are multiples, where the observed flux is a blend of two or
more individual galaxies. High spatial resolution
follow-up of bright SMGs at the same wavelength is now possible
with the SMA or the Atacama Large Millimeter/submillimeter Array (ALMA),
but the level of multiplicity is still somewhat controversial,
particularly for the brightest SMGs (see Chen et al.\ 2013 for a full discussion).
As an illustration of the different results that have been obtained,
Karim et al.\ (2013) found that all of the brightest ($>12$~mJy) sources in their
LABOCA Extended {\em Chandra\/}
Deep Field-South Submillimeter Survey (LESS)
that they successfully detected with their targeted ALMA observations
were composed of emission from multiple fainter SMGs, each with
870~$\mu$m fluxes of $\lesssim9$~mJy.
(Note that of the 88 ``best" ALMA maps used in their analysis, 19 contained no
$>3.5\sigma$ detections.)
They also did not find any ALMA sources with fluxes $>9$~mJy.
In contrast, Barger et al.\ (2012) confirmed with the SMA three single sources in the
GOODS-N field with fluxes $>9$~mJy, two of which had fluxes $\gtrsim12$~mJy.
The differences may be partly explainable as a purely observational blending effect
due to the different beam sizes of the single-dish submillimeter telescopes
used to construct the SMG samples ($14''$ for SCUBA versus $19.2''$ for LABOCA).
However, this emphasizes the importance
of determining the multiplicity level for the specific sample that is being used.
In this paper, we approach the multiplicity issue in two ways.
Our first approach is the most direct: submillimeter interferometric
imaging of the SCUBA-2 selected SMGs.
Such follow-up observations can localize the submillimeter emission
extremely accurately and allow the determination of whether the SMG is
a single source or a blend
(e.g., Iono et al.\ 2006; Wang et al.\ 2007, 2011;
Younger et al.\ 2008a,b; Cowie et al.\ 2009; Hatsukade et al.\ 2010;
Knudsen et al.\ 2010; Chen et al.\ 2011; Barger et al.\ 2012; Karim et al.\ 2013;
Hodge et al.\ 2013b). We have used the SMA to measure the properties
of a very large fraction of the SMGs in the SCUBA-2 sample, including many of
the brightest ones.
Our second approach is to use 1.4~GHz observations to identify the
counterparts to the SMGs. Historically, this approach has been less than
ideal, because it introduced a strong bias against high-redshift SMGs due to the
positive $K$-correction of the radio synchrotron emission and the negative
$K$-correction of the submillimeter thermal dust emission.
However, with the upgraded VLA, 1.4~GHz images (Owen14)
are now deep enough to find counterparts to nearly all of the SCUBA
(Barger et al.\ 2012) or SCUBA-2 (this paper) sources, removing the radio bias.
Thus, for most of the SCUBA-2 galaxies without SMA data, we can
identify single radio sources as their counterparts.
It is also possible to identify
high SFR galaxies directly in the radio, though this is complicated
by the fact that many high radio power sources are AGNs rather than star formers.
Radio galaxies pick out high-mass galaxies, as can be seen from
the ``$K-z$ relation'', a well-known tight
correlation between the $K$-band magnitudes of
radio host galaxies and their redshifts that was discovered by Lilly \& Longair (1984)
using the bright 3CRR survey (Laing et al.\ 1983) and
confirmed with lower power radio surveys
(e.g., Eales et al.\ 1997 using the 6CE;
Jarvis et al.\ 2001 using the 6C*;
Lacy et al.\ 2000 using the 7C-III).
At an order of magnitude even fainter than the above surveys,
De Breuck et al.\ (2002) found that their $S_{\rm 1.4~GHz}>10$~mJy
sample of ultra-steep-spectrum radio sources
also followed the $K-z$ relation
and traced the bright $K$-band envelope of field galaxies out to
$z\lesssim1$ before becoming $\gtrsim2$~mag brighter at higher redshifts.
This led them to conclude that the radio galaxies were pinpointing the most
massive systems at all redshifts.
Clearly, the $K-z$ relation has important implications for the formation of
massive galaxies.
Willott et al.\ (2003; hereafter, Willott03) combined the 7C Redshift Survey
(7CRS) with the 3CRR, 6CE, and 6C* samples
and found that the shape of the $K-z$ relation
is closely approximated by the magnitude-redshift relation for an
elliptical galaxy that formed at $z=10$.
Rocca-Volmerange et al.\ (2004) modeled the $K-z$
relation using magnitudes computed for a fixed baryonic
mass galaxy with an elliptical (fast conversion of gas to
stars) star formation history starting at high redshift ($z=10$).
In their interpretation, the most powerful radio sources are
in galaxies with baryonic masses of $\sim10^{12}~M_\sun$,
while lower radio power sources are in galaxies with slightly smaller
baryonic masses. Rocca-Volmerange et al.'s adopted elliptical star
formation timescale for this model (1~Gyr) would imply SFRs
$\sim1000~M_\sun$~yr$^{-1}$ in all luminous radio galaxies
at $z\gtrsim4$.
The primary science goals of the present paper are to estimate for the
most massively star-forming galaxies in the universe
(1) the highest SFRs, (2) the distribution of SFRs, and (3) the contribution
to the universal star formation history and how that contribution
compares to the contribution from extinction-corrected UV selected samples.
The structure of the paper is as follows. In Section~\ref{secdata}, we discuss
the GOODS-N/CDF-N imaging data sets that we use,
including SCUBA-2, radio, SMA, $K_s$, and X-ray.
We construct both a $>4\sigma$ SCUBA-2 850~$\mu$m
catalog and an SMA 860~$\mu$m catalog
using all available data. We identify radio counterparts
to the SMGs and give their $K_s$ magnitudes, as well as their spectroscopic
and photometric redshifts, where available. We measure millimetric redshifts
from the radio to submillimeter flux ratios assuming an Arp~220 spectral
energy distribution (SED).
In Section~\ref{seczdist}, we show the redshift distribution of the radio sample.
In Section~\ref{seckz}, we present the $K-z$ relation for our sample and
use it to estimate additional redshifts.
In Section~\ref{secradiopower}, we focus on the high radio power sources in the
sample and how the SMGs are drawn from this population.
We also describe our conversions of the 1.4~GHz powers and
submillimeter fluxes into SFRs.
In Section~\ref{secsfh}, we use our SCUBA-2 sample to determine the
SFR distribution function at $z=1.5-6$ and to construct the star formation history,
which we compare with the history determined from extinction-corrected UV samples.
In Section~\ref{secdisc}, we discuss the implications of our results, and
in Section~\ref{secsum}, we summarize the paper.
We adopt the AB magnitude system for the optical and NIR
photometry, and we assume the Wilkinson Microwave
Anisotropy Probe cosmology of $H_0=70.5$~km~s$^{-1}$~Mpc$^{-1}$,
$\Omega_{\rm M}=0.27$, and $\Omega_\Lambda=0.73$
(Larson et al.\ 2011) throughout.
\section{Data}
\label{secdata}
\subsection{SCUBA-2 Imaging}
\label{secscuba2obs}
We obtained 25.4~hr of observations on the CDF-N with SCUBA-2 on the
JCMT during observing runs in
2012 and 2013. The data were obtained using a mixture of scanning
modes and under a variety of weather conditions.
Using the CV Daisy scanning mode
(detailed information about the SCUBA-2 scan patterns can be found in
Holland et al.\ 2013),
we obtained a 2.2~hr observation
in band 1 weather (225~GHz opacity $<0.05$) and a 16.5~hr observation
in band 2 weather (225~GHz opacity $\sim0.05-0.08$).
We also obtained a 6.7~hr observation in band 2 weather
using the pong-900 scanning mode. While SCUBA-2 observes
at both 450~$\mu$m and 850~$\mu$m simultaneously,
there are too few sources directly detected at 450~$\mu$m in our data
to be interesting. Thus, we only use the 850~$\mu$m data in our subsequent
analysis.
In terms of the two scanning modes,
CV Daisy is optimal for going deep on small areas
($<4'$ radius), while pong-900 provides a uniform rms over a large area ($<12'$ radius).
In Figure~\ref{sma_detected}, we compare the CV Daisy field (dark green shading
indicates the area with the highest sensitivity, and light green shading indicates the area
where the rms noise is less than 4 times the central sensitivity) with the pong-900 field
(yellow shading). To illustrate the size of the SCUBA-2 images, the black rectangle shows
the GOODS-N {\em HST\/} ACS coverage.
\begin{inlinefigure}
\vskip 0.5cm
\centerline{\includegraphics[width=3.2in,angle=180]{fig1.pdf}}
\caption{
GOODS-N/CDF-N
SCUBA-2 field obtained with the CV Daisy scanning mode (dark green --- area
with the highest sensitivity; light green --- area where the rms noise is $<4\times$
the central sensitivity) and the pong-900 scanning mode (yellow shading).
The black rectangle shows the GOODS-N {\em HST\/} ACS field.
The red circles ($24''$ radius to show the area where the rms noise is
$<3\times$ the central sensitivity) denote the 28 SMA fields in the region
(see Section~\ref{secsma}); note that some of the SMA fields overlap.
The blue diamonds mark the $>4\sigma$ SMA detections in the SMA fields.
\label{sma_detected}
}
\end{inlinefigure}
We took darks and flat fields at
the beginning and end of each scan. We did Skydips at least twice per night in order
to calculate the opacity factors.
We reduced the data using the Dynamic Iterative
Map-Maker (DIMM) in the SMURF
package from the STARLINK software developed by the Joint Astronomy Centre.
We calibrated the fluxes using the standard Flux Conversion Factor for 850\,$\mu$m
of 537\,Jy\,pW$^{-1}$. The relative calibration accuracy is expected to be
stable and good to 5\% at 850\,$\mu$m (Dempsey et al.\ 2013).
For more details on the GOODS-N/CDF-N SCUBA-2 data reduction and flux
calibration, we refer the reader to Chen et al.\ (2013).
We combined the pong-900 map with only the portion of the CV Daisy map contained within
the dark and light green shaded regions of Figure~\ref{sma_detected} to avoid damaging the
sensitivity of the pong-900 map further out where the Daisy coverage is sparse.
Nearly all of the SMGs are expected to be compact relative to the
beam size of the JCMT at 850~$\mu$m. In order to increase the
detectability of these point sources, we applied a matched-filter
to our maps, which is a maximum likelihood estimator of the source strength
(e.g., Serjeant et al.\ 2003a). The point spread function (PSF) for the matched-filter
algorithm should ideally be a Gaussian normalized
to a peak of unity with full-width half-maximum (FWHM) equal to the JCMT beam size
($14''$ at 850\,$\mu$m). However, the map produced from DIMM
typically has low spatial frequency structures that
need to be subtracted off before source extraction can be done.
Thus, before running the matched-filter, we convolved the map with a broad
Gaussian ($30''$ FWHM) normalized to a sum of unity, and we subtracted this
convolved map from the original map. The resulting PSF is a Gaussian with a
broader Gaussian subtracted off, giving a Mexican hat-like wavelet.
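For illustration only (this is not the reduction pipeline; the pixel scale and the signal/noise map arrays are placeholders), the two filtering steps can be sketched in Python as follows:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

pix = 2.0                                 # arcsec per pixel (assumed)

def gauss(fwhm, size=65):
    s = fwhm / pix / 2.3548
    y, x = np.mgrid[-(size//2):size//2+1, -(size//2):size//2+1]
    return np.exp(-(x**2 + y**2) / (2 * s**2))   # unit peak

def matched_filter(smap, nmap):
    broad = gauss(30.0); broad /= broad.sum()    # unit-sum broad Gaussian
    smap = smap - fftconvolve(smap, broad, mode='same')
    psf = gauss(14.0)                            # JCMT beam ...
    psf -= fftconvolve(psf, broad, mode='same')  # ... minus its broad smooth
    w = 1.0 / nmap**2                            # inverse-variance weights
    num = fftconvolve(smap * w, psf, mode='same')
    den = fftconvolve(w, psf**2, mode='same')
    return num / den, 1.0 / np.sqrt(den)         # ML flux map and its rms
\end{verbatim}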
We show the area versus rms noise in the matched-filter
850~$\mu$m image in Figure~\ref{area}.
\begin{inlinefigure}
\vskip 0.5cm
\centerline{\includegraphics[width=3.2in,angle=180]{fig2.pdf}}
\caption{
Area vs. rms noise of the combined GOODS-N/CDF-N SCUBA-2 pong-900
and CV Daisy 850~$\mu$m maps.
\label{area}
}
\end{inlinefigure}
We first extracted sources having a peak
S/N greater than 3.0. We did this by finding the maximum pixel in the matched-filter
image. We then used the position and flux of the source at that peak
to subtract a PSF that we centered at that
position and scaled to match the flux. We iterated this process of identifying
and removing sources until there were no remaining peak S/N values greater than 3.0.
We treated the $>3\sigma$ peaks as the preliminary catalog. In forming the
final catalog, we kept every $>4\sigma$ source in the preliminary catalog.
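The iterative extraction itself can be sketched as follows (again illustrative only; \texttt{psf\_model} is a hypothetical helper returning the unit-peak effective PSF centered on a given pixel):
\begin{verbatim}
import numpy as np

def extract(flux, rms, psf_model, thresh=3.0):
    flux = flux.copy()
    catalog = []
    while True:
        snr = flux / rms
        y0, x0 = np.unravel_index(np.argmax(snr), snr.shape)
        if snr[y0, x0] <= thresh:
            break
        f = flux[y0, x0]
        catalog.append((x0, y0, f, snr[y0, x0]))
        flux -= f * psf_model(flux.shape, x0, y0)  # CLEAN-like removal
    return catalog      # keep entries with S/N > 4 for the final catalog
\end{verbatim}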
In Figure~\ref{scuba2}, we mark on the SCUBA-2 map the
850~$\mu$m $>4\sigma$ catalog sources (large circles).
Hereafter, we refer to this as our SCUBA-2 sample.
In Table~1, we present the sample in tabular form.
In Column~1, we give our CDFN name for
the source. Where a GOODS~850 number from Wang et al.\ (2004)
or a GN number from Pope et al.\ (2005) exists, we give that name in parentheses.
In Columns~2 and 3, we give the J2000 right ascensions and declinations
from the SCUBA-2 data.
We have ordered the catalog by decreasing 850~$\mu$m flux, which we give in
Column~4. In Columns 5 and 6, we list the 850~$\mu$m $1\sigma$ errors and signal-to-noise
ratios, respectively. For the sources detected at 860~$\mu$m with the SMA, in Column~7,
we give the SMA fluxes with the $1\sigma$ errors in parentheses.
In Column~8, we give the 1.4~GHz
fluxes with the $1\sigma$ errors in parentheses, and in Columns~9 and 10,
we give the J2000 right ascensions and declinations from the radio data.
Where more than one radio source lies within the SCUBA-2 beam, and where
there are no SMA observations to determine the true radio counterpart, we list
both radio sources. We list the $K_s$ magnitudes in Column~11, and we give
the spectroscopic (including CO), photometric, and millimetric redshifts in
Columns~$12-14$.
\begin{inlinefigure}
\centerline{\includegraphics[width=3.5in]{fig3.pdf}}
\vskip -1.35cm
\caption{
The 850~$\mu$m $>4\sigma$ catalog sources marked on the SCUBA-2 map
(large circles). The portion of the map shown is $16.7'$ on a side.
The SMA detections (see Section~\ref{secsma}) are shown with smaller circles.
Sources that were single detections in the SCUBA survey of
Wang et al.\ (2004) but then were observed to
split into multiples in the SMA data are labeled with the Wang
et al.\ name. (Note that the GOODS~850-15 multiple is very close; see
Figure~14 in Barger et al.\ 2012.)
In some cases (GOODS~850-13 and GOODS~850-16),
the new SCUBA-2 data also separate the sources. (Note, however, that two
of the three SMA sources making up GOODS~850-13 lie below the SCUBA-2
detection threshold.)
\label{scuba2}
}
\end{inlinefigure}
\subsection{Radio Imaging}
\label{secradioobs}
Owen14 constructed a catalog of 1.4~GHz sources detected in the VLA image of the
GOODS-N/CDF-N field. The image covers a $40'$ diameter region with an effective resolution of
$1\farcs8$. The absolute radio positions are known to $0\farcs1-0\farcs2$ rms.
The highest sensitivity region is about $9'$ in radius, producing a relatively uniform radio map
with an rms of 2.3~$\mu$Jy. We refer to this region as the full field in the rest of the
paper. There are 894 distinct radio sources in this region, excluding sources that appear
to be parts of other sources.
\subsection{SMA Imaging}
\label{secsma}
There are 28 fields within $\pm9\farcm5$ of the GOODS-N center that have been
observed with the SMA. Most of these observations were targeted on SCUBA
850~$\mu$m sources or, more recently, SCUBA-2 850~$\mu$m sources, primarily
by our own group (24 of the 28 fields).
Twelve of these were presented in Wang et al.\ (2011)
and Barger et al.\ (2012), while Chen et al.\ (2013) used the full sample to analyze
the effects of multiplicity on the number counts.
We show all 28 fields (some of which are overlapped) in Figure~\ref{sma_detected}
as the red circles. Taken together, they cover $\sim14$~arcmin$^2$.
There are 16 images not already analyzed in Wang et al.\ (2011) and
Barger et al.\ (2012). For one of these images (GN20),
we simply adopted the SMA 890~$\mu$m
flux and error presented in Iono et al.\ (2006).
For the others, we calibrated and inspected the SMA 860~$\mu$m data using the
IDL-based Caltech package MIR modified for the SMA, as in our previous work.
Considering only the regions in each image where the noise was less than four times the
minimum noise, we searched all of the SMA images (except for the one containing GN20)
for sources detected above the $4\sigma$ threshold. Including GN20,
there are 29 such sources, which we mark in Figure~\ref{sma_detected}
with blue diamonds. These sources were all the intended targets of the SMA observations.
Apart from the multiples, we found no serendipitous sources in the fields.
Chen et al.\ (2013) compared the SMA and SCUBA-2 fluxes for the sources and found that most
agree statistically (see their Figure~10).
We compare the SMA (small circles)
and SCUBA-2 (large circles) detections in Figure~\ref{scuba2}.
All of the SCUBA-2 sources observed with the SMA
were detected.
We find that only three of the SCUBA-2 sources, CDFN16 (GOODS~850-11), CDFN37
(GOODS~850-15), and CDFN15,
have multiple counterparts in the SMA images. Two previous SCUBA
sources (GOODS~850-13 and GOODS~850-16) that were blends of SMA sources
are separated in the SCUBA-2 data into individual sources. (Note, however, that two
of the three SMA sources making up GOODS~850-13 lie below the SCUBA-2 detection threshold.)
\begin{inlinefigure}
\centerline{\includegraphics[width=4.5in,angle=180]{fig4.pdf}}
\vskip -0.5cm
\caption{SMA image of CDFN8 with a measured
860~$\mu$m flux of $11.5\pm0.7$~mJy from the SMA and an 850~$\mu$m
flux of $9.5\pm2.3$~mJy from SCUBA-2.
The large circle shows the
SCUBA-2 beam centered at the SCUBA-2 position. The small circles
show the positions of 1.4~GHz sources in the field. The submillimeter
source is a single unresolved source at the position of a 1.4~GHz source
with a flux of $34.2\pm2.9$~$\mu$Jy. None of the other three radio
sources in the field has a significant submillimeter flux.
\label{sma_example}
}
\end{inlinefigure}
With our accurate SMA positions, we can unambiguously determine the radio
counterparts to the SMGs. Indeed, we find radio counterparts above the
1.4~GHz threshold of $\sim 11.5~\mu$Jy ($5\sigma$) for all of the SMA sources,
except CDFN15a and CDFN15b.
We show one example in Figure~\ref{sma_example}. There are four radio
sources in the SMA field (small circles), but only one of them is the clear
counterpart to the SMA source. The other three have no significant submillimeter flux.
We tested the astrometric accuracy of the SMA observations
relative to the Owen14 1.4~GHz sample and found
no significant systematic offset.
The dispersion in the positional offsets is $0\farcs5$.
In Table~2, we summarize the properties of the 29 SMA sources.
In Column~1, we give the name from the literature or, for new sources,
the name from our SCUBA-2 catalog (Table~1).
In Columns~2 and 3, we give the J2000 right ascensions and declinations
for each source as measured from the SMA data.
In Column~4, we list the SMA $860~\mu$m fluxes and $1\sigma$ errors.
In Column~5, we give the references for the SMA data.
In Column~6, we list the 1.4~GHz fluxes and $1\sigma$ errors from Owen14.
These are peak fluxes when the peak flux equals the extended
flux and extended fluxes otherwise.
In Column~7, we give the spectroscopic redshifts as found in the literature.
In Column~8, we give the references for those spectroscopic measurements.
\subsection{Near-Infrared Imaging}
\label{secnirobs}
Wang et al.\ (2010) constructed a $K_s$ catalog of the GOODS-N/CDF-N field, which they publicly
released along with the extremely deep Canada-France-Hawaii Telescope (CFHT) $K_s$
image from which the catalog was extracted. In the GOODS-N region, the image has a
$1\sigma$ depth of $0.12~\mu$Jy.
We measured $3''$ diameter aperture $K_s$ magnitudes corrected to total magnitudes
at the positions of the radio sources using the $K_s$ image. For sources
brighter than $K_s=19$, we used an isophotal magnitude computed using an
aperture corresponding to 1$\%$ of the central surface brightness
in the galaxy.
\subsection{X-ray Imaging}
\label{secxrobs}
Alexander et al.\ (2003) presented the 2~Ms X-ray image of the
CDF-N, which they aligned with the Richards (2000)
radio image. Near the aim point, the X-ray data reach limiting fluxes of
$f_{\rm 2-8~keV}\approx 1.4\times 10^{-16}$ and
$f_{\rm 0.5-2~keV}\approx 1.5\times 10^{-17}$~erg~cm$^{-2}$~s$^{-1}$.
We assume a conservative $L_X>10^{42}$~erg~s$^{-1}$ as the threshold for a source
to be classified as an X-ray active galactic nucleus (AGN) on energetic grounds
(Zezas et al.\ 1998; Moran et al.\ 1999),
and we assume $L_X>10^{44}$~erg~s$^{-1}$ as the threshold for a source to be
classified as an X-ray quasar.
\subsection{Spectroscopic Redshifts}
\label{secspecoobs}
Many redshifts have been obtained for galaxies in the GOODS-N/CDF-N field
using either the Low-Resolution Imaging Spectrograph (LRIS; Oke et al.\ 1995)
on the Keck~I 10~m telescope
or the large-format DEep Imaging Multi-Object Spectrograph (DEIMOS; Faber et al.\ 2003)
on the Keck II 10~m telescope. These include large magnitude-selected samples
(Cohen et al.\ 2000; Cowie et al.\ 2004b; Wirth et al.\ 2004;
Barger et al.\ 2008; Cooper et al.\ 2011), as well as targeted
samples looking for interesting galaxy populations
(Reddy et al.\ 2006; Chapman et al.\ 2003, 2004a, 2005; Swinbank et al.\ 2004;
Treu et al.\ 2005; Barger et al.\ 2002, 2003, 2005, 2007; Trouille et al.\ 2008).
There are also a small number of CO redshifts that have been measured for SMGs
in the region (Daddi et al.\ 2009a,b; Bothwell et al.\ 2010; Walter et al.\ 2012).
We targeted new radio sources detected in the Owen14 catalog
during DEIMOS runs in 2012 and 2013.
We used the 600~line~mm$^{-1}$ grating, giving a resolution of 3.5~\AA\ and a wavelength
coverage of 5300~\AA. The spectra were centered at an average wavelength of 7200~\AA,
although the exact wavelength range for each spectrum depends on the slit position.
Each $\sim 1$~hr exposure was broken into three subsets, with the objects stepped along
the slit by $1\farcs5$ in each direction. Unidentified sources were repeatedly re-observed,
giving maximum exposure times of up to 7~hr. We reduced the spectra in the same way
as with previous LRIS spectra (Cowie et al.\ 1996). We only used spectra that could be
identified confidently based on multiple emission and/or absorption lines. We identified
a number of spectra using the doublet structure of the [OII] 3727~\AA\ line, which is
resolved in the spectra.
We also searched the infrared grism spectra obtained by P.I.~B.~Weiner
({\em HST\/} Proposal ID \#11600)
using the G141 grism on the WFC3 camera on {\em HST}. We extracted the
spectra of 5709 galaxies with F140W $< 24.5$ and identified
607 redshifts, mostly using the [OIII]$\lambda\lambda$4959,5007 doublet and
H$\beta$ or H$\alpha$. The galaxies primarily lie in the redshift
interval $z=0.8-2.3$. A full catalog will be given in future work.
Of the identified sources, 107 are also radio sources; however, only
2 had not previously been identified from the optical spectra.
Of the 894 distinct radio sources in the full field, 556 (62\%)
have secure spectroscopic redshifts either from the literature or from our
targeted spectroscopy of the sample.
In the GOODS-N region (here defined as
the region that is well covered by the {\em HST\/} ACS observations
of Giavalisco et al.\ 2004),
367 (67\%) of 543 radio sources have spectroscopic
redshifts. These spectroscopic identifications primarily
come from UV/optical or NIR spectroscopy, but they also contain
the small number of sources with CO spectroscopic redshifts.
\subsection{Photometric Redshifts}
\label{secphotobs}
Photometric redshifts can extend the spectroscopically identified sample
and provide a check on the spectroscopic redshifts.
Berta et al.\ (2011) compiled a multiwavelength catalog of sources
in the GOODS-N field and computed photometric
redshifts using the EAZY code of Brammer et al.\ (2008).
In comparing the spectroscopic and photometric redshifts,
we only consider the spectroscopically identified radio sources
in the spectroscopically well-covered area of the GOODS-N field
having a photometric redshift quality flag $Q_{z}<2$.
We also eliminate photometric redshifts $z\le0.06$, which are invariably
misidentifications of blue, higher redshift galaxies, and we restrict to galaxies
with {\em HST\/} ACS F850LP~$<25$ (Giavalisco et al.\ 2004).
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig5.pdf}}
\caption{Comparison of spectroscopic redshifts with photometric redshifts
from Berta et al.\ (2011). Only spectroscopically identified
radio sources lying in the spectroscopically well-covered area of the GOODS-N
field and having F850LP~$<25$, a photometric redshift quality flag of $Q_{z}<2$,
and $z>0.06$ are shown. Of the 303 objects, 13 have serious discrepancies.
These are marked with the red squares.
\label{photspec}}
\end{inlinefigure}
We show a comparison of the photometric and spectroscopic redshifts
in Figure~\ref{photspec}. Only 13 (red squares)
of the 303 spectroscopic redshifts have
strong disagreements with the corresponding photometric redshifts.
This number is a strong function of the quality flag.
For $Q_{z}<1$, we find 5 strongly discrepant sources out of
243 radio sources, while for $Q_{z}<3$, this rises to 19 out of 323.
Thus, we adopt $Q_{z}<2$ to maximize the number of included
sources while not allowing too high an error rate. We inspected
all 13 discrepant sources individually to confirm both the
galaxy identification and the spectroscopic redshift measurement. We
concluded in all cases that the spectroscopic identification
was reasonable. In some cases the photometric redshift may
have been contaminated by blending of two distinct galaxies,
while in other cases strong emission lines in the spectrum
may have perturbed the photometric redshift estimate. However,
there were some cases where we could not find an obvious explanation
for the discrepancy.
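For concreteness, the following minimal Python sketch shows how such an outlier rate can be computed; the $|\Delta z|/(1+z)>0.15$ criterion and the input data are illustrative assumptions, not necessarily the criterion used to flag the discrepant sources.
\begin{verbatim}
import numpy as np

def outlier_fraction(z_spec, z_phot, thresh=0.15):
    # Fraction of "serious" discrepancies; the |dz|/(1+z) > 0.15
    # cut is an illustrative choice, not necessarily the one used
    # to flag the discrepant sources above.
    z_spec, z_phot = np.asarray(z_spec), np.asarray(z_phot)
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    return np.mean(dz > thresh)

# Illustrative data, not the real catalog:
print(outlier_fraction([0.5, 1.2, 2.0], [0.52, 1.15, 3.1]))
\end{verbatim}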
Rafferty et al.\ (2011) determined photometric redshifts over the full
field. Since these are based on more limited photometric information,
we do not use them in our subsequent analysis. However, we include
these in the photometric redshift column of Table~1 (marked with
an (R)) where no photometric redshift is available from Berta et
al.\ (2011).
\subsection{Millimetric Redshifts}
\label{secmillobs}
In Barger et al.\ (2012), we plotted the 1.4~GHz to 860~$\mu$m flux ratio
versus $1+z$ for the 16 SMGs in our SMA sample with spectroscopic
redshifts. We found that the Barger et al.\ (2000) Arp~220-based model
agreed reasonably well with a power law fit over the observed spectroscopic
redshift range. We therefore adopt this relation
(Equation~5 of Barger et al.\ 2000) to measure millimetric redshifts
(see also Carilli \& Yun 1999) for the SMGs in the SMA sample of Table~2.
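For readers wishing to reproduce such estimates, the minimal Python sketch below inverts a flux-ratio--redshift relation numerically. The power-law coefficients are placeholders standing in for the Arp~220 based relation (Equation~5 of Barger et al.\ 2000) and should be replaced by the published values; a radio upper limit then yields a lower limit on the millimetric redshift.
\begin{verbatim}
from scipy.optimize import brentq

# Placeholder coefficients; substitute the published Arp 220
# based relation (Equation 5 of Barger et al. 2000) here.
A0, BETA = 1.0, -3.8

def model_ratio(z):
    # Model 1.4 GHz to 860 micron flux ratio vs. redshift
    # (monotonically decreasing in z).
    return A0 * (1.0 + z)**BETA

def z_milli(f_radio_uJy, f_smm_mJy):
    # Invert the ratio-redshift relation for one source.
    ratio = f_radio_uJy / (f_smm_mJy * 1.0e3)  # both in uJy
    return brentq(lambda z: model_ratio(z) - ratio, 0.01, 12.0)

print(z_milli(34.2, 11.5))  # illustrative source
print(z_milli(10.0, 5.0))   # radio upper limit -> lower bound on z
\end{verbatim}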
In Figure~\ref{figzmilli}, we compare these
millimetric redshifts with the spectroscopic redshifts for the sources in
the SMA sample with spectroscopic redshifts
(see Table~2; black squares). The agreement is very good.
For the SMGs without spectroscopic redshifts (blue diamonds), we
use the millimetric redshifts on both axes. We mark X-ray AGNs with red
small squares.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig6.pdf}}
\caption{
Millimetric redshifts estimated from the 1.4~GHz to 860~$\mu$m
flux ratio using the Barger et al.\ (2000) Arp~220-based model vs.
spectroscopic redshift.
The SMGs in the SMA sample with spectroscopic redshifts are denoted
by black squares.
Those without spectroscopic redshifts are
denoted by blue solid diamonds and are plotted at their millimetric
redshifts on both axes. X-ray AGNs are marked with red small squares.
\label{figzmilli}
}
\end{inlinefigure}
We searched a $3''$ radius around each SMA position to find
X-ray counterparts in the 2~Ms
{\em Chandra\/} catalog of Alexander et al.\ (2003).
Sources that contain AGNs are expected to follow the same relation as
the non-AGNs, since both star-forming galaxies and radio-quiet AGNs obey
the same tight empirical correlation between non-thermal radio
emission and thermal dust emission (e.g., Helou et al.\ 1985; Condon 1992).
This is seen to be true from
Figure~\ref{figzmilli}, where we mark the X-ray AGNs with red squares. Just
one of the five X-ray AGNs has only a millimetric redshift, which may be because
AGNs are easier to identify spectroscopically.
In Figure~\ref{fighist}(a), we show the spectroscopic (gray shading) and millimetric
(blue) redshift distributions for the SMA sample in histogram form.
The millimetric redshifts predominantly fill in the $z\sim2.5-4$ range.
CDFN15a and CDFN15b do not have radio counterparts, and are not
shown in Figure~\ref{fighist}(a), but the lower limit on the millimetric redshift based on the upper
limit on the radio flux would place them at high redshift ($z>5$).
Unfortunately, it would be hard to model the selection effects for the SMA sample,
given the diverse reasons for the observations.
All but eight of the sources in the SCUBA-2 sample have either SMA identifications
or single radio sources in the SCUBA-2 beam. We show the spectroscopic
(gray shading) and millimetric (blue) redshift distributions in
Figure~\ref{fighist}(b). The spectroscopic redshifts
range from $z=1$ to just above $z=5$, while the millimetric redshifts again
predominantly fill in the $z\sim2.5-4$ range. As in the SMA figure, the four
sources with only radio upper limits, which are not shown in the figure,
are likely to lie at higher redshifts.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig7a.pdf}}
\vskip 0.2cm
\centerline{\includegraphics[width=3.2in,angle=0]{fig7b.pdf}}
\caption{
Histograms of the spectroscopic (gray shading) and millimetric (blue) redshifts
for (a) the SMA sample and (b) the SCUBA-2 sample.
In (a) the 27 sources with radio counterparts in Table~2 are shown.
The 2 omitted sources without counterparts (CDFN15a and CDFN15b)
are predicted to lie at $z>5$ by the millimetric estimate.
In (b), only sources with SMA identifications
or single radio counterparts in the SCUBA-2 beam are shown (i.e., all but
8 of the 49 SCUBA-2 sources in Table~1).
\label{fighist}
}
\end{inlinefigure}
\section{Redshift Distribution of the Radio Sample}
\label{seczdist}
In Figure~\ref{radio_hist}(a), we show how the spectroscopic completeness
(gray shading)
is very high for radio sources in the GOODS-N region (black histogram)
with bright optical or NIR counterparts, and how the
number of additional sources with secure photometric redshifts
(green shading) is small. In the GOODS-N field,
we have 367 spectroscopically identified radio sources, including
four with CO redshifts (cyan shading).
The photometric redshifts add only a further 30.
The spectroscopic plus photometric redshift identifications
are highly complete to $K_{s}\sim21$, and nearly all of the identified sources have $K_{s}<22.5$.
In Figure~\ref{radio_hist}(b), we show similar results for
the radio sources in the full field (black histogram).
We denote the spectroscopically identified
sources with gray shading and the spectroscopically unidentified
sources having clear submillimeter counterparts with blue shading.
We can estimate millimetric redshifts for the latter sources, and
CO redshifts may also be obtainable.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig8a.pdf}}
\vskip 0.2cm
\centerline{\includegraphics[width=3.2in,angle=0]{fig8b.pdf}}
\caption{
Histogram of the $K_s$ magnitudes for the radio sources.
The step size is 0.2 mags. Sources with
$K_s$ fainter than 25.7 are all put into the faintest
bin. In (a), we show the sources in the GOODS-N
region ({\em gray shading\/} --- spectroscopic
redshifts; {\em cyan shading\/} --- CO redshifts;
{\em green shading\/} --- additional sources with
photometric redshifts; {\em no shading\/} --- sources without
redshifts). In (b), we show the sources in the full field
({\em gray shading\/} --- spectroscopic
redshifts; {\em cyan shading\/} --- CO redshifts;
{\em blue shading\/} --- additional sources having clear submillimeter
counterparts for which we can measure millimetric redshifts;
{\em no shading\/} --- sources without redshifts).
\label{radio_hist}
}
\end{inlinefigure}
We see that there is an extended tail of radio sources with faint
NIR counterparts ($K_{s}>22.5$). This is the magnitude regime where
many of the sources with millimetric redshifts lie and where all four
of the sources with CO redshifts lie (all with $z\gtrsim4$).
In each figure, we lumped the
sources that are undetected in the $K_s$ image into the faintest bin. These
sources are also extremely optically faint: nearly all of the ones that lie
in the GOODS-N region are undetected in the {\em HST\/} ACS images.
Unfortunately, such sources, which may lie at high redshifts, cannot be
identified with either optical/NIR spectroscopy or photometry.
\section{The $K-\lowercase{z}$ Relation}
\label{seckz}
Because of the potential for estimating redshifts for radio galaxies in the
NIR-faint (likely high-redshift) tail using the $K-z$ relation,
we now turn to examining that relation for our faint radio sample.
In Figure~\ref{radio_kmg}(a), we show $K_s$ magnitude versus
spectroscopic (black squares), CO (cyan squares), or photometric
(green circles) redshift for
the radio sources in the GOODS-N region.
By comparing with a $K_s$-selected field
sample with spectroscopic redshifts in the GOODS-N region (Barger et al.\ 2008;
purple contours show the surface density), we see that our radio sources
are nearly all located in the most $K_s$ luminous galaxies at all redshifts.
Remarkably, the $K-z$ relation from Willott03 (red line;
approximately converted from their $K$(Vega) to our $K_s$(AB) using a
1.9~mag offset) accurately traces the $K_s$ luminous envelope of our sample
over a wide range in $K_s$ magnitude and redshift, indicating that some
faint radio sources lie in the same galaxy mass hosts as powerful radio
sources.
In Figure~\ref{radio_kmg}(b), we show $K_s$ magnitude versus
spectroscopic (black squares), CO (cyan squares), or millimetric (blue diamonds)
redshift for the radio sources in the full field. The millimetric redshifts fill in
the higher redshift part of the plot.
We again show the Willott03 relation (red line), as well as the
same relation with the $K_s$ magnitude made fainter by 2.5~mag (black line).
Nearly all of the radio sources lie
in the band between the red and black lines.
At low redshifts (out to $z\sim1$), this could be
a consequence of the rapid evolution in the maximum
SFRs with increasing redshift, such that the radio
selection is always dominated by the highest redshift and highest
SFR galaxies (e.g., Condon 1989; Barger et al.\ 2007).
Eales et al.\ (1997) and Willott03 showed for the most powerful radio samples
that the $K-z$ relation has a weak dependence on radio flux selection,
with lower radio flux sources having fainter $K$-band magnitudes for the same
redshift. More recently, Simpson et al.\ (2012) studied the $K-z$ relation for the
$S_{\rm 1.4~GHz} > 100~\mu$Jy sample in the Subaru/{\em XMM-Newton\/} Deep Field,
which is about a factor of 1000 fainter than the faintest radio survey limit of Willott03,
and found that at $z\gtrsim1$, the sources
were systematically fainter than the literature $K-z$ relations
(Willott03; Brookes et al.\ 2008; Bryant et al.\ 2009).
Our $S_{\rm 1.4~GHz} > 11.5~\mu$Jy sample shows a clear dependence of the $K-z$
relation on the radio flux selection. We illustrate this in Figure~\ref{radio_kmg_byflux},
where we show $K_s$ magnitude versus spectroscopic or photometric (black squares),
CO (cyan squares), or millimetric (blue solid diamonds) redshift in four radio flux intervals for the
radio sources in the GOODS-N field. We mark X-ray AGNs with red squares and
SCUBA-2 sources with spectroscopic, photometric, or CO redshifts with blue
large open diamonds. In each radio interval, we adopt a $K-z$ relation
having the Willott03 shape (see their Equation~2),
\begin{equation}
K_s = \Delta K_s + 4.53 \log_{10} z - 0.31 (\log_{10} z)^{2} \,.
\label{ks_eqn}
\end{equation}
We obtain least-squares fits to the data by adjusting the $K_s$ offset
($\Delta K_s$). In each panel, we show our fits in black and the Willott03
relation ($\Delta K_s = 17.37$) in red.
Remarkably, the observed dependence on radio flux appears
to hold for all sources, whether they are star formation dominated or AGN
dominated. However, by $z\gtrsim3$, the SCUBA-2 sources (blue solid or open
diamonds) appear fainter than expected, as was also observed
by Dunlop (2002) and Serjeant et al.\ (2003b) for SCUBA sources.
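These fits are simple to reproduce: with the shape of Equation~\ref{ks_eqn} held fixed, the least-squares value of $\Delta K_s$ is just the mean residual of the data about the shape. A minimal Python sketch, with illustrative data in place of the catalog, is:
\begin{verbatim}
import numpy as np

def willott_shape(z):
    # The Willott03 K-z shape with zero offset (Equation above).
    lz = np.log10(z)
    return 4.53 * lz - 0.31 * lz**2

def fit_delta_ks(z, ks):
    # With the shape fixed, the least-squares Delta K_s is the
    # mean residual of the data about the shape.
    return np.mean(np.asarray(ks) - willott_shape(np.asarray(z)))

# Illustrative data for one flux interval, not the real catalog:
z  = [0.5, 1.2, 2.0, 3.1]
ks = [20.1, 21.9, 22.6, 23.3]
print(fit_delta_ks(z, ks))
\end{verbatim}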
In Figure~\ref{kz_offset}, we plot the $\Delta K_s$ values found from our
fits versus the mean radio fluxes for each interval. These data points are
well fit by the functional form
\begin{equation}
\Delta K_s=21.73-0.84 \log_{10} f_{1.4~{\rm GHz}} \,,
\label{ks_off}
\end{equation}
which we show in the figure with the red line.
Here $f_{1.4~{\rm GHz}}$ is in $\mu$Jy.
\vskip 1.0cm
\begin{inlinefigure}
\centerline{\hskip 0.30cm \includegraphics[width=3.3in,angle=0]{fig9a.pdf}}
\vskip 0.2cm
\centerline{\includegraphics[width=3.2in,angle=0]{fig9b.pdf}}
\caption{
$K_s$ magnitude vs. redshift for the radio sources.
Sources undetected in $K_s$ are shown at a nominal magnitude of 25.2. In (a),
we show the radio sources in the GOODS-N region
({\em black squares\/} --- spectroscopic redshifts; {\em cyan squares\/} --- CO redshifts;
{\em green circles\/} --- photometric redshifts).
The purple contours show the surface density of a $K_s$-selected
field sample with spectroscopic redshifts in the GOODS-N region
from Barger et al.\ (2008).
The contours rise by multiplicative factors of 2 from the lowest
contour with 1/40th of the peak surface density.
In (b), we show the radio sources in the full field
({\em black squares\/} --- spectroscopic redshifts;
{\em cyan squares\/} --- CO redshifts;
{\em blue diamonds\/} --- millimetric redshifts).
The red line (also in (a)) shows the $K-z$ relation from Willott03.
The black line shows the same relation with the $K_s$ magnitude
made fainter by 2.5~mag.
\label{radio_kmg}
}
\end{inlinefigure}
This relation also extrapolates approximately
to fit the highest radio flux samples of previous work.
An exact comparison is difficult, however, because those
samples are chosen at different radio frequencies, and we
cannot precisely convert the NIR photometry.
Willott03 found a 0.55~mag difference in the mean $\Delta K$
for the 3CRR and 7C samples. Since the 7C sample is 20
times fainter, that would correspond
to a slope of 0.42 in Equation~\ref{ks_off}, suggesting that
brighter radio sources may follow a shallower relation.
From Equation~\ref{ks_off}, we can see that the observed
$K_s$-band flux depends on the observed 1.4~GHz flux to the power
of 0.34. This is a very weak dependence, which
means that for a large range in radio flux, there is only a small
range in host galaxy $K_s$-band flux.
Thus, for our sample, with $f_{1.4~\rm{GHz}}$ ranging from
11.5 to 5276~$\mu$Jy, the range in $f_{K_s}$ is only 7.8. The corresponding
range in $K_s$ is 2.2~mag, consistent with the range
seen in Figure~\ref{radio_kmg}(a).
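The arithmetic in this paragraph can be checked directly:
\begin{verbatim}
import numpy as np

slope = 0.84                    # mag per dex, from Equation above
power = slope / 2.5             # f_Ks ~ f_1.4GHz^0.34
radio_range = 5276.0 / 11.5     # full flux range of the sample
fks_range = radio_range**power  # ~7.8
print(power, fks_range, 2.5 * np.log10(fks_range))  # ~2.2 mag
\end{verbatim}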
We may use the dependence of the $\Delta K_s$ values
on $f_{1.4~{\rm GHz}}$ to tighten the relation
between the $K_s$ magnitude and the redshift for the radio sources. We
define a corrected $K_s$ magnitude, $K_{corr}$, as
\begin{equation}
K_{corr}\equiv K_s + 0.84 \log_{10} (f_{1.4~{\rm GHz}}/100~\mu {\rm Jy})
\label{k_corr}
\end{equation}
to move all of the sources to the track followed by 100~$\mu$Jy sources.
In Figure~\ref{radio_kmg_correct}, we plot $K_{corr}$ versus redshift.
We made a fourth order polynomial fit to the data in Figure~\ref{radio_kmg_correct},
\begin{eqnarray}
K_{corr}&=&19.88 +3.20 \log_{10} z + 1.13(\log_{10} z)^{2} \nonumber \\
&+& 2.79(\log_{10} z)^{3}+2.58(\log_{10} z)^{4} \,.
\label{kcorr_relation}
\end{eqnarray}
Above $K_{s}=22$, our spectroscopic and photometric identifications in the GOODS-N
are extremely incomplete, with the spectroscopic identifications likely biased towards
star-forming galaxies and AGNs, so the upturn of the fitted relation at the faint end may not be representative
of the full radio population. However, we can use Equation~\ref{kcorr_relation}
to obtain rough redshift estimates for the radio sources. We show these in
Figure~\ref{figkz} plotted versus spectroscopic, photometric, millimetric, or CO
redshift for the radio sources in the GOODS-N field with such information.
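A minimal Python sketch of this redshift estimate, which inverts Equation~\ref{kcorr_relation} numerically over the range where the polynomial is monotonic, is given below; the example source is illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def k_corr(ks, f14_uJy):
    # Shift a source onto the 100 uJy track (Equation above).
    return ks + 0.84 * np.log10(f14_uJy / 100.0)

def kcorr_model(z):
    # The fourth-order polynomial fit in log10(z).
    x = np.log10(z)
    return 19.88 + 3.20*x + 1.13*x**2 + 2.79*x**3 + 2.58*x**4

def z_from_kcorr(kc):
    # Invert over 0.3 < z < 8, where the polynomial is monotonic.
    return brentq(lambda z: kcorr_model(z) - kc, 0.3, 8.0)

print(z_from_kcorr(k_corr(22.8, 30.0)))  # illustrative source
\end{verbatim}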
\begin{inlinefigure}
\centerline{\includegraphics[width=2.6in,angle=0]{fig10a.pdf}}
\centerline{\includegraphics[width=2.6in,angle=0]{fig10b.pdf}}
\centerline{\includegraphics[width=2.6in,angle=0]{fig10c.pdf}}
\centerline{\includegraphics[width=2.6in,angle=0]{fig10d.pdf}}
\caption{
$K_s$ magnitude vs. redshift in four 1.4~GHz flux intervals for radio
sources in the GOODS-N: (a) $<25~\mu$Jy,
(b) $25-50~\mu$Jy, (c) $50-200~\mu$Jy, and (d) $>200~\mu$Jy.
Sources undetected in $K_s$ are shown at a
nominal magnitude of 25.2. We show
the radio sources in the full field ({\em black squares\/} ---
spectroscopic or photometric redshifts; {\em cyan squares\/} --- CO redshifts;
{\em blue solid diamonds\/} --- millimetric redshifts).
X-ray AGNs are marked with red squares.
Sources with spectroscopic, photometric, or CO redshifts
and SCUBA-2 detections are shown
surrounded by blue large open diamonds.
The red line shows the $K-z$ relation from Willott03.
The black line shows the best-fit model using the shape
of the Willott03 $K-z$ relation, but allowing $\Delta K_s$
in Equation~\ref{ks_eqn} to vary.
\label{radio_kmg_byflux}
}
\end{inlinefigure}
\begin{inlinefigure}
\centerline{\includegraphics[width=3.5in,angle=0]{fig11.pdf}}
\caption{
Black squares show the $\Delta K_s$ values
(20.72, 20.37, 20.12, 19.57)
determined in the radio flux intervals of Figure~\ref{radio_kmg_byflux}
vs. the mean radio fluxes in the intervals (18.1, 34.7,
87.6, and 308.9~$\mu$Jy). The red line shows the least squares
fit of $\Delta K_s$ vs. $\log_{10}f_{1.4~{\rm GHz}}$
(Equation~\ref{ks_off}).
\label{kz_offset}
}
\end{inlinefigure}
\vskip 1.0cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig12.pdf}}
\caption{
$K_s$ magnitude corrected to $K_{corr}$ using Equation~\ref{k_corr}
for the radio sources in the GOODS-N vs. redshift
({\em black squares\/} --- spectroscopic or
photometric redshifts; {\em cyan squares\/} --- CO redshifts;
{\em blue solid diamonds\/} --- millimetric redshifts).
X-ray AGNs are marked with red squares.
Sources with spectroscopic, CO, or photometric redshifts and
SCUBA-2 detections are shown surrounded by blue large open diamonds.
The curve from Equations~\ref{ks_eqn} and \ref{ks_off} for
$f_{\rm 1.4~GHz}=100~\mu$Jy is shown in red.
The fourth order polynomial fit to the data from
Equation~\ref{kcorr_relation} is shown in black.
\label{radio_kmg_correct}
}
\end{inlinefigure}
\vskip 0.5cm
\begin{inlinefigure}
\includegraphics[width=3.2in,angle=0]{fig13.pdf}
\caption{
Redshift estimated from the $K-z$ relation (Equations~\ref{k_corr} and \ref{kcorr_relation})
vs. spectroscopic (black squares), CO (cyan squares),
photometric (green circles), or millimetric
(blue diamonds) redshift
for the radio sources in the GOODS-N field with such information.
\label{figkz}
}
\end{inlinefigure}
\section{Radio Power and Submillimeter Flux}
\label{secradiopower}
Locally, most radio sources more powerful than
$P_{1.4~{\rm GHz}}=10^{30}$~erg~s$^{-1}$~Hz$^{-1}$
are found to be associated with AGN activity
(e.g., Condon 1989; Sadler et al.\ 2002; Best \& Heckman 2012).
However, as we move to higher redshifts
where dusty galaxies with high SFRs become more common
(e.g., Cowie et al.\ 2004a), it is possible
that some fraction of the
$P_{1.4~{\rm GHz}}>10^{30}$~erg~s$^{-1}$~Hz$^{-1}$
radio sources are dominated by
star formation. Indeed, as we discussed in the introduction,
all $z\gtrsim4$ radio galaxies may have high SFRs
($\gtrsim1000~M_{\sun}$~yr$^{-1}$) based on
modeling of the $K-z$ relation (Rocca-Volmerange et al.\ 2004),
and such SFRs have been observed
(e.g., Dunlop et al.\ 1994; Ivison et al.\ 1998; Archibald et al.\ 2001; Reuland et al.\ 2004).
Thus, our radio sample should provide a powerful way of finding such sources.
However, separating the contributions to the radio emission
from star formation and AGN activity is not straightforward.
We compute the rest-frame radio powers for the radio sources in the
full field assuming
$S_\nu\propto \nu^{-\alpha}$ and a radio spectral index of $\alpha=0.8$
(Condon 1992; Ibar et al.\ 2010) using
\begin{equation}
P_{1.4~{\rm GHz}}=4\pi {d_L}^2 S_{1.4~{\rm GHz}} 10^{-29}
(1+z)^{\alpha - 1}~{\rm erg~s^{-1}~Hz^{-1}} \,.
\label{eqradio}
\end{equation}
Here $d_L$ is the luminosity distance in cm, and $S_{\rm 1.4~GHz}$
is the 1.4~GHz flux density in $\mu$Jy.
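As a worked example, Equation~\ref{eqradio} can be evaluated as follows; the cosmological parameters here are a standard assumption and should be replaced by those adopted in this paper.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)  # assumed cosmology

def radio_power(S_uJy, z, alpha=0.8):
    # Rest-frame 1.4 GHz power in erg/s/Hz for S_nu ~ nu^-alpha,
    # with S in microJy (1 uJy = 1e-29 erg/s/cm^2/Hz).
    d_L = cosmo.luminosity_distance(z).to('cm').value
    return (4.0 * np.pi * d_L**2 * S_uJy * 1.0e-29
            * (1.0 + z)**(alpha - 1.0))

print(radio_power(34.2, 2.0))  # ~8e30 erg/s/Hz
\end{verbatim}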
In Figure~\ref{radio}, we show these radio powers versus redshift
for the sources with spectroscopic
(including CO; black squares) or photometric (green circles)
redshifts. For the remaining sources (red plus signs), we use
the redshifts estimated from the $K_s-z$ relation
(Equations~\ref{k_corr} and \ref{kcorr_relation}).
We also plot the radio catalog limit of 11.5~$\mu$Jy ($5\sigma$) (blue dotted curve)
and the radio powers of
a luminous infrared galaxy (LIRG; $L_{\rm FIR}>10^{11}~L_\odot$)
(dashed horizontal line) and an ultraluminous infrared
galaxy (ULIRG; $L_{\rm FIR}>10^{12}~L_\odot$) (solid horizontal line),
which we calculated by assuming that the FIR-radio correlation holds.
At $z\gtrsim3$, our radio observations are only sensitive to star-forming
galaxies brighter than the ULIRG limit.
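For reference, the sketch below converts the LIRG and ULIRG luminosity thresholds into radio powers, assuming the FIR-radio correlation with the $\langle q\rangle=2.51$ normalization that we adopt below and taking the solar luminosity in cgs units.
\begin{verbatim}
L_SUN = 3.826e33   # erg/s

def p_radio_from_lfir(L_fir_Lsun, q=2.51):
    # Radio power implied by the FIR-radio correlation; q is the
    # normalization adopted later in this section.
    return L_fir_Lsun * L_SUN / (3.75e12 * 10.0**q)

print(p_radio_from_lfir(1.0e11))  # LIRG line,  ~3e29 erg/s/Hz
print(p_radio_from_lfir(1.0e12))  # ULIRG line, ~3e30 erg/s/Hz
\end{verbatim}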
\vskip 0.5cm
\begin{inlinefigure}
\includegraphics[width=3.2in,angle=0]{fig14.pdf}
\caption{
Radio power vs. redshift for the radio sources in the full field with spectroscopic
or CO redshifts (black squares) or photometric redshifts (green circles).
The remaining sources are shown at the redshifts that would be predicted from
the $K-z$ relation (red plus signs; Equations~\ref{k_corr} and \ref{kcorr_relation}).
The blue dotted curve shows the radio power corresponding to the 1.4~GHz
catalog limit of 11.5~$\mu$Jy ($5\sigma$).
The dashed and solid horizontal lines show
the radio powers that correspond to the definitions of a LIRG
and a ULIRG, respectively, which we calculated by assuming that the galaxies
are star formers and that the FIR-radio correlation holds.
\label{radio}
}
\end{inlinefigure}
\subsection{High Radio Power}
\label{subsechighradio}
Our primary interest in this paper is to determine whether there is a
turn-down in the SFR distribution function at high redshifts.
Thus, we now turn our attention
to the high radio power sources in our sample, which we define as having
$P_{\rm 1.4~GHz}\ge10^{31}$~erg~s$^{-1}$~Hz$^{-1}$.
In Figures~\ref{radio_double}(a) and \ref{radio_double}(b), we show enlargements
of Figure~\ref{radio} to focus on the high radio power sources with secure redshifts.
We have 51 such sources (50 with spectroscopic or CO redshifts, and 1 with a photometric redshift),
39 of which are at $z>1.5$.
Most of the spectroscopically unidentified sources are faint in $K_s$ and
hence are likely to
lie at high redshifts based on the $K-z$ relation (see Figure~\ref{radio}).
For example, if we include our $K-z$ estimated redshifts, then the ratio of
secure to total $z>1.5$ sources would be 39/226, which would mean we have
secure identifications for $\lesssim1/5$ of the high-redshift, high radio
power sources.
In Figure~\ref{radio_double}(a), we distinguish sources where the X-ray data show
them to be X-ray AGNs (red squares) or X-ray quasars (red large squares).
We use green squares to distinguish sources
that are likely to be elliptical galaxies,
as determined by their having rest-frame EW([OII]$\lambda3727)<10$~\AA.
The latter distinction can only be made from the
optical spectra for galaxies at $z<1.5$.
We show with the blue dotted curve the radio limit of 11.5~$\mu$Jy ($5\sigma$).
In Figure~\ref{radio_double}(b), we mark radio
sources with 850~$\mu$m counterparts detected
above the $4\sigma$ level (blue circles). (If there is no SMA
measurement, then we only mark the source if there is a single radio
counterpart within the SCUBA-2 beam.)
We show with the blue dashed curve the $850~\mu$m
limit of 4~mJy ($4\sigma$) for the higher sensitivity region of the SCUBA-2 map
(see Figure~\ref{area}) converted
to a radio power assuming an Arp~220 SED.
Thus, over most of our SCUBA-2 map we would not expect to be able to detect
a source having a radio power much less than our high radio power
source definition.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig15a.pdf}}
\centerline{\includegraphics[width=3.2in,angle=0]{fig15b.pdf}}
\caption{
Radio power vs. redshift for the
$P_{\rm 1.4~GHz}>10^{30}$~erg~s$^{-1}$~Hz$^{-1}$
radio sources in the full field with spectroscopic, CO, or
photometric redshifts (black squares).
The solid horizontal line shows our high radio power dividing
line of $P_{\rm 1.4~GHz}=10^{31}$~erg~s$^{-1}$~Hz$^{-1}$.
(a) Green squares show radio sources with measured rest-frame
EW([OII]$\lambda3727)<10$~\AA; only $z<1.5$ sources can be classified
as elliptical galaxies this way. X-ray AGNs are marked with red squares,
and X-ray quasars are marked with red large squares.
The blue dotted curve shows the radio power corresponding to the 1.4~GHz
catalog limit of 11.5~$\mu$Jy ($5\sigma$).
(b) Blue circles show single radio sources with
$>4\sigma$ 850~$\mu$m counterparts.
The blue dashed curve shows the 850~$\mu$m
limit of 4~mJy (4$\sigma$) for the higher sensitivity region
of the SCUBA-2 map (see Figure~\ref{area})
converted to a radio power assuming an Arp~220 SED.
The right-hand $y$-axes show the SFRs that would
correspond to the radio powers if the sources are powered by star formation
(see Section~\ref{subsfr}).
\label{radio_double}
}
\end{inlinefigure}
Nearly all of the high radio power sources at $z<1.5$ are X-ray quasars or
elliptical galaxies (see Figure~\ref{radio_double}(a)), both of which are likely to be
AGN powered (Condon 1989; Sadler et al.\ 1989; Condon et al.\ 2013 and references
therein). Of the two remaining sources, one is a SCUBA-2 source, and the
other is likely to be AGN powered, since it would be easily detectable with
SCUBA-2 if it were dominated by star formation
(see Figure~\ref{radio_double}(b)).
In contrast, at high redshifts ($z>1.5$), a substantial fraction of the high
radio power sources are detected at $>4\sigma$ in the submillimeter data.
In Section~\ref{subsfr}, we use the (albeit limited) available data to show
that high radio power sources detected with SCUBA-2 are primarily extended
and dominated by star formation rather than spatially
compact and dominated by AGN activity.
Thus, our detection at $850~\mu$m of 15 of the 41 high radio power sources
at $z>1.5$ suggests that 37\% are star formers.
However, there are strong selection effects in the spectroscopic
identifications of the radio sources at high redshifts, both in the targeting
(e.g., by investing long integration times on the submillimeter detected galaxies
through multiple masks or by obtaining CO redshifts) and in the
ease of making the identifications (e.g., by seeing strong emission line features).
Indeed, since ``red and dead'' galaxies would be hard to identify spectroscopically
at high redshifts and hence do not appear on Figure~\ref{radio_double},
we may expect that our star-forming fraction is overestimated.
We test the impact of our spectroscopic incompleteness on our estimated
star-forming fraction using the $K_s>21$ high radio power sources.
At $K_s\le21$, the combined spectroscopic and photometric redshifts provide an
essentially complete identification of the radio sample in the GOODS-N region
(see Figure~\ref{radio_hist}(a)).
It is only above this magnitude where the identifications become substantially
incomplete.
In Figure~\ref{kmg_flux}, we show $850~\mu$m signal-to-noise
ratio versus $K_s$ magnitude for the $K_s>21$ high radio power sample in
the region of the SCUBA-2 image where the rms 850~$\mu$m noise
is $<1$~mJy.
Based on the $K-z$ relation, the unidentified $K_s>21$
sources are estimated to lie at high redshifts.
Consistent with this are the high redshifts of the sources with spectroscopic,
CO, or photometric identifications, which we mark on the plot using
red ($z=1-2$), pink ($z=2-4$), and cyan squares ($z>4$).
Thus, we can roughly compare with our
previous estimate made for the secure $z>1.5$ sources.
We find that 29 of the 179 (16\%) sources in the figure are detected above
the $3\sigma$ level at $850~\mu$m, and 23 of 179 (13\%) are detected
above the $4\sigma$ level.
This result is insensitive to choosing a fainter $K_s$ magnitude threshold.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig16.pdf}}
\caption{
850~$\mu$m signal-to-noise ratio vs. $K_s$ magnitude for
the $K_s>21$ radio sources in the region of the SCUBA-2 image
where the rms 850~$\mu$m noise is less than 1~mJy.
The red horizontal lines mark the 3$\sigma$ positive and negative noise levels.
The red (pink) squares show galaxies with spectroscopic
or photometric redshifts $z=1-2$ ($z=2-4$).
The cyan squares show galaxies with CO redshifts (these are all at $z>4$;
see Figure~\ref{radio_kmg}).
\label{kmg_flux}
}
\end{inlinefigure}
\subsection{Radio Power Based Star Formation Rates}
If, as we shall argue in Section~\ref{subsfr}, the submillimeter detected
radio galaxies are primarily star formation dominated, then we can calculate the
SFRs for the individual galaxies from their radio powers. However, in order to do so,
we need to assume that the FIR-radio correlation is roughly invariant
to $z\sim6$. Barger et al.\ (2012) showed this to be true at $z=2-4.2$ for luminous
SMGs, but here we assume that the FIR-radio correlation extends to
our highest sensitivity $850~\mu$m flux threshold of 2~mJy ($4\sigma$), which
is about a factor of 2 lower than the fluxes of the 5 SMGs in the Barger et al.\ (2012)
clean SMA sample (see their Section~5) with well-determined SEDs and measured FIR
luminosities (hereafter, the 5 SMGs).
We convert radio power to SFR using the FIR-radio correlation
(Helou et al.\ 1985; Condon et al.\ 1991), parameterized by the quantity $q$,
\begin{equation}
q = \log \left(\frac{L_{\rm FIR(8-1000~\mu {\rm m})}/(3.75\times 10^{12}~{\rm Hz})}{\rm erg~s^{-1}~Hz^{-1}} \right) - \log \left(\frac{P_{\rm 1.4~GHz}}{\rm erg~s^{-1}~Hz^{-1}} \right) \,,
\end{equation}
and the Kennicutt (1998) relation between $L_{{\rm FIR}(8-1000~\mu {\rm m})}$
and SFR. This gives
\begin{equation}
\log {\rm SFR} (M_\odot~{\rm yr}^{-1}) = \log {P_{\rm 1.4~GHz}~({\rm erg~s^{-1}~Hz^{-1}})} - A \,.
\label{sfr_power}
\end{equation}
To determine the normalization constant,
$A$, we use the $\langle q\rangle=2.51$ value obtained by Barger et al.\ (2012)
from the 5 SMGs. We find $A=28.25$, a value which
is almost identical to that determined by Bell (2003) ($A=28.26$)
and which implies SFRs about a factor of two lower than the
Milky Way based determination of Condon (1992).
Barger et al.\ (2012) followed Cowie et al.\ (2011) in using an intermediate
normalization of $A=28.1$, which yields SFRs a factor of 1.4 higher than the present normalization.
However, here, in order to be consistent with our submillimeter determinations of the
SFRs, we stay with $A=28.25$.
The factor of two range between the Bell and Condon determinations is probably a
reasonable measure of the systematic uncertainty in the SFR-radio power relation.
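The normalization is easy to reproduce; the Kennicutt (1998) coefficient, ${\rm SFR}=4.5\times10^{-44}\,L_{\rm FIR}$ in cgs units, is the only additional input:
\begin{verbatim}
import numpy as np

q_mean = 2.51    # <q> from the 5 SMGs (Barger et al. 2012)
kenn = 4.5e-44   # Kennicutt (1998): SFR per erg/s of L_FIR

# L_FIR = 10^q * (3.75e12 Hz) * P, so SFR = kenn * 10^q * 3.75e12 * P
A = -np.log10(kenn * 10.0**q_mean * 3.75e12)
print(A)         # ~28.26, matching A = 28.25 to within rounding
\end{verbatim}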
The SFRs that we obtain from Equation~\ref{sfr_power} are for a $-1.35$
power-law Salpeter (1955) initial mass function (IMF) extending
from $0.1-100~M_\odot$.
(This assumption is built into our adopted value of the normalization constant $A$
through our use of the Kennicutt (1998) relation, which is calculated for that IMF.)
The Salpeter IMF only differs significantly from the current
best IMFs of Kroupa (2001) and Chabrier (2003) below $1~M_\odot$.
One can roughly convert the Salpeter IMF SFRs
into Chabrier IMF SFRs by dividing by 1.39 and the Salpeter IMF SFRs
into Kroupa IMF SFRs by dividing by 1.31.
On the right-hand $y$-axes of Figure~\ref{radio_double},
we show the SFR scale corresponding to the radio power scale for
the radio star formers.
Our high radio power source definition corresponds to a
SFR of $800~M_{\sun} ~{\rm yr}^{-1}$ for a star formation
dominated radio source. The submillimeter detected sources are seen to
have SFRs up to $\sim6000~M_{\sun} ~{\rm yr}^{-1}$.
At very high redshifts, the relation between SFR and radio power,
and presumably also the FIR-radio correlation,
must begin to break down, particularly for less luminous galaxies,
because inverse Compton cooling of the relativistic electrons off the
cosmic microwave background (CMB), which increases rapidly
with increasing redshift, will begin to dominate over synchrotron
losses (e.g., Condon 1992). This will decrease the radio power
for a given SFR. The cross-over point will occur when the energy
density in the CMB becomes comparable to the magnetic field energy
density in the galaxy.
We emphasize that such additional sources of cooling would cause us to
underestimate the SFRs based on the observed radio power.
However, for the ULIRGs of the present
sample, where the magnetic field and relativistic energy density
are expected to be extremely high, this breakdown of the FIR-radio
correlation may not occur over the $z<6$ redshift range that
we are considering.
\subsection{Submillimeter Flux Based Star Formation Rates}
For sources with spectroscopic or photometric redshifts, we can compute the
SFR directly from the observed frame $850~\mu$m flux using the
Kennicutt (1998) relation between $L_{{\rm FIR}(8-1000~\mu {\rm m})}$
luminosity and SFR, if we assume a spectral shape, such as Arp~220.
As is well known, this relation is almost redshift independent
for sources above $z=1.5$ (Blain \& Longair 1993; see blue dashed curve
in Figure~\ref{radio_double}(b)). For an Arp~220 shape obtained from
the fits of Klaas et al.\ (1997) over the $8-1000~\mu$m range, the relation is
${\rm SFR}_{850~\mu{\rm m}} = 180 \times S_{850~\mu{\rm m}}$, where
${\rm SFR}_{850~\mu{\rm m}}$ is in $M_{\sun}$~yr$^{-1}$, and
$S_{850~\mu{\rm m}}$ is in mJy.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig17.pdf}}
\caption{SEDs for the 9 isolated SMGs with spectroscopic or CO redshifts
between $z=1.5$ and 5 and substantial coverage from the {\em Herschel\/} satellite.
We show rest-frame $L_{\nu}$ divided by observed-frame SMA flux.
The SEDs are very similar for
seven of the sources, while the remaining two lie higher.
The sources shown are CDFN3 (black
squares), CDFN11 (red squares), CDFN13 (blue squares), CDFN14 (green squares),
CDFN16 (black diamonds), CDFN22 (red diamonds), CDFN27 (blue diamonds),
CDFN29 (green diamonds), and CDFN37 (black triangles). The two solid
curves show the combined gray body and power law fits to the SEDs
of CDFN3 (black) and CDFN29 (green).
\label{show_fir_850}
}
\end{inlinefigure}
We may directly measure the conversion for the 9 SMGs
that are spatially isolated based on the SMA and 24~$\mu$m
images, have spectroscopic redshifts greater than 1.5, and are covered
by the {\em Herschel\/} data. (The 5 SMGs
of Barger et al.\ (2012) are a subset of this sample.) We first compiled
the fluxes in the 24, 70, 100, 160, 250, 350, 500, 860, and 1100~$\mu$m
bands from Magnelli et al.\ (2012, 2013), Barger et al.\ (2012),
and Perera et al.\ (2008).
In Figure~\ref{show_fir_850}, we show the rest-frame $L_{\nu}$
divided by the observed-frame SMA flux for these 9 sources.
Hereafter, we will say 850~$\mu$m everywhere instead of alternating
between 850~$\mu$m (SCUBA-2) and 860~$\mu$m (SMA), since, within
the uncertainties, the differences are not important.
For each source we fitted both a gray body and, below a rest-frame
wavelength of 50~$\mu$m, a power law (see, e.g., Casey 2012
for a discussion of the fitting procedures). We show two sample fits
in Figure~\ref{show_fir_850}. In Figure~\ref{sma_conversion}, we
show the $L_{{\rm FIR}(8-1000~\mu{\rm m})}$ to observed-frame
850~$\mu$m flux ratios that we determined from the fits.
Converting these luminosities to SFRs using the Kennicutt (1998)
formula, we find a mean conversion at $z>1.5$ of
\begin{equation}
{\rm SFR}_{850~\mu{\rm m}} = 200\times S_{850~\mu{\rm m}} \,,
\label{sfr_850}
\end{equation}
with a multiplicative range over the individual values of just over 2 in
each direction about the mean. (Seven of the sources have similar conversions,
while two of the sources have higher conversions.) We adopt this value,
which is quite close to that inferred from the Arp~220 shape, as our conversion factor.
We will assume that the multiplicative range of 2 is the systematic error in the
individual SFRs determined from the submillimeter fluxes based on the
variations in their SEDs.
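The conversion follows directly from the mean luminosity-to-flux ratio of Figure~\ref{sma_conversion} and the Kennicutt (1998) coefficient:
\begin{verbatim}
conv = 4.44e45      # erg/s of L_FIR per mJy of observed 850 um flux
kenn = 4.5e-44      # Kennicutt (1998): SFR per erg/s of L_FIR
print(kenn * conv)  # ~200 M_sun/yr per mJy, as in Equation above
\end{verbatim}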
\vskip 0.5cm
\vskip 0.1cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig18.pdf}}
\caption{
Ratio of measured $L_{{\rm FIR}(8-1000~\mu {\rm m})}$
to observed-frame SMA flux vs. observed-frame SMA flux
for the 9 isolated SMGs with
spectroscopic or CO redshifts above 1.5 and substantial
coverage from the {\em Herschel\/} satellite. The solid line
shows the mean conversion of $4.44\times10^{45}$~erg~s$^{-1}$~mJy$^{-1}$.
\label{sma_conversion}
}
\end{inlinefigure}
We also computed the mean fluxes for the isolated sample
based on direct measurements in the GOODS-{\em Herschel\/}
images (DR1 release; Elbaz et al.\ 2011).
We measured the fluxes using a matched filter equal to the
point spread function of the image centered at the radio position.
We compared the SFR
conversion for the fainter sources at 850$~\mu$m ($2-5$~mJy)
with that for the brighter ($>5$~mJy) sources.
For both samples, we computed the mean luminosity normalized
to the observed-frame 850~$\mu$m flux at the mean
rest-frame wavelength of the sample.
We only included sources
lying within regions of the 100~$\mu$m image where the
exposure time was greater than 25\% of the maximum
exposure time. We computed the background
correction and the 68$\%$ confidence limits by constructing
100 equivalent samples with the same redshift distribution
but randomized positions.
In Figure~\ref{sed_flux}(a), we show the results for the spectroscopic
sample only, and in Figure~\ref{sed_flux}(b), we show the results for
a much larger sample that uses millimetric redshifts when
we do not have spectroscopic redshifts. The $2-5$~mJy sample
conversion is 2\% higher than the $>5$~mJy sample conversion
if we use the spectroscopic
sample, and it is 25\% lower than the $>5$~mJy sample
conversion if we use the spectroscopic plus millimetric redshift
sample.
However, in both cases, the SEDs are consistent within
the errors. (Note that because the noise is due to confusion,
the errors are correlated between bands.) We therefore conclude
that the SFR conversion is not strongly dependent on flux
over the observed flux range. A similar test shows no evolution
in the SFR conversion as a function of redshift.
\vskip 0.5cm
\vskip 0.1cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig19a.pdf}}
\centerline{\includegraphics[width=3.2in,angle=0]{fig19b.pdf}}
\caption{Mean value of $L_\nu$ divided by observed-frame 850~$\mu$m flux
in each of the {\em Herschel\/} bands from 100 to 500~$\mu$m and in the
SCUBA-2 850~$\mu$m band vs. the mean rest-frame wavelength of the
sample. The blue diamonds (red squares) show the values for isolated sources
with radio source identifications and 850$~\mu$m fluxes between 2 and 5~mJy
($>5$~mJy). (a) Sources with spectroscopic redshifts.
(b) Sources with spectroscopic redshifts or with millimetric
redshifts when there is no spectroscopic identification. The latter includes
12 sources in the high flux range and 16 sources in the low flux range.
The two solid curves show the combined gray body and power law fits to the
SEDs of CDFN3 (black) and CDFN29 (green) from Figure~\ref{show_fir_850},
which give the range of the individual fits.
\label{sed_flux}
}
\end{inlinefigure}
We can only obtain independent estimates of the SFRs from the radio power
and the submillimeter flux where we have spectroscopic, CO, or photometric
redshifts. Where we only have millimetric redshifts, the SFRs
obtained from the radio power will agree with those from the
submillimeter fluxes by construction, since we are using a consistent
assumption about the FIR SED.
In Figure~\ref{sfr_comparison}, we compare the SFRs derived from the
two methods for sources with spectroscopic, CO, or photometric redshifts $z>1.5$.
The SFRs derived are in good agreement, though the submillimeter derived SFRs are about
$5\%$ higher, on average, than the radio derived SFRs. This is well
within the uncertainties in the absolute calibration of the SFRs, and we conclude
that either method will produce similar results.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig20.pdf}}
\caption{
Comparison of the SFRs derived from the radio power with those
derived from the 850~$\mu$m flux for the sources with
spectroscopic, photometric, or CO redshifts $z>1.5$.
The submillimeter derived
SFRs are about 5$\%$ higher, on average, than the radio derived SFRs.
\label{sfr_comparison}
}
\end{inlinefigure}
\subsection{Star Formers in the High Radio Power Population}
\label{subsfr}
At high radio powers, the submillimeter detected sources are clearly quite
distinct from other radio sources.
In Figure~\ref{radio_flux}, we show $850~\mu$m flux versus
radio power for the radio sources with spectroscopic,
photometric, or CO redshifts $z>1$ in the region of
the SCUBA-2 field where the rms 850~$\mu$m noise
is less than 1.5~mJy (black squares).
We mark radio sources with 850~$\mu$m counterparts
detected above the $4\sigma$ level with blue solid circles.
(If there is no SMA observation, then we only mark the
source if there is a single radio counterpart within the SCUBA-2 beam.)
These SMGs begin
to enter at a radio power of $\approx 5\times10^{30}$~erg~s$^{-1}$~Hz$^{-1}$,
as would be expected for sources with an Arp~220 SED obeying the
local FIR-radio correlation (cyan curve), given our $4\sigma$ $850~\mu$m
flux limit of 2~mJy. (This radio power corresponds to a
SFR of $\sim 400~M_{\sun}$~yr$^{-1}$, or an $850~\mu$m
flux of 2.5~mJy, assuming the sources are powered by star formation.)
We mark X-ray AGNs with red squares and X-ray quasars with red large
squares. None of the submillimeter detected sources are X-ray quasars.
Above this radio power, we see a bifurcation, with some sources being undetected
in the submillimeter, even at very high radio luminosities,
while others follow the FIR-radio track of the cyan curve. We shall refer
to the two tracks as submillimeter-bright and submillimeter-blank
radio sources.
Based on the small number of sources with high-resolution
radio observations in the field, the submillimeter-bright sources
appear to be predominantly extended and star formation dominated.
The three submillimeter-bright sources in our radio sample with high-resolution
1.4~GHz observations from either the Multi-Element Radio Linked Interferometer
Network (MERLIN)+VLA (Chapman et al.\ 2004b)
or the Very Long Baseline Interferometer (Momjian et al.\ 2010)
have all been confirmed as being extended
(CDFN9/GOODS~850-3/GN6, CDFN7/GOODS~850-36,
and CDFN3/GN20).
(Note that CDFN3/GN20 lies outside the area shown in Figure~\ref{radio_flux},
but it lies smoothly on the submillimeter-bright track.)
Two submillimeter-blank sources in our radio sample at $z>1$ were
classified as AGNs by Guidetti et al.\ (2013) using high-resolution
5~GHz observations with e-MERLIN combined with existing 1.4~GHz
MERLIN+VLA observations obtained by Muxlow et al.\ (2005).
We show these enclosed in black large squares in Figure~\ref{radio_flux}.
(A further 3 of the submillimeter-blank sources shown
in Figure~\ref{radio_flux} were observed by Guidetti et al.\ but were
not clearly classified.)
We shall assume in the following that the submillimeter-bright
sources are star formation dominated, though the number of sources
used to come to this conclusion is small.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig21.pdf}}
\caption{
850~$\mu$m flux vs. radio power for the radio sources with
spectroscopic, photometric, or CO redshifts $z>1$
in the region of the SCUBA-2 image where the
rms 850~$\mu$m error is $<1.5$~mJy (black squares).
X-ray AGNs are marked with red squares, and
X-ray quasars are marked with red large squares.
Green squares show radio sources with measured rest-frame
EW([OII]$\lambda3727)<10$~\AA; only $z<1.5$ sources can be classified
as elliptical galaxies this way.
Blue solid circles show single radio sources with well-determined
$>4\sigma$ 850~$\mu$m counterparts. The sources enclosed in
black large squares are sources classified as AGNs by Guidetti et al.\ (2013).
Error bars for all the symbols are $\pm1\sigma$.
The cyan curve shows the submillimeter flux expected for an Arp~220 SED
based on Equations~\ref{sfr_power} and \ref{sfr_850}.
The top axis (right-hand axis) shows the SFR that would correspond to the
radio power (submillimeter flux), if the source is powered by star formation.
\label{radio_flux}
}
\end{inlinefigure}
\section{Star Formation Rate Distribution Function}
\label{secsfh}
A major goal of this paper is to search for evidence of a
turn-down in the SFR distribution function, which would indicate a
characteristic maximum SFR in galaxies. Here, we
use the SCUBA-2 sample of Table~1 to explore
the shape of the SFR distribution function at high redshifts.
Of the 49 SCUBA-2 sources in Table~1, 24 have SMA
observations that directly determine the radio counterparts. Three of these
SCUBA-2 sources have multiple SMA/radio counterparts, giving
a total of 27 SMA detected sources. These correspond to all but two of
the sources in the SMA sample of Table~2; i.e.,
GOODS~850-13a and GOODS~850-13c are not included in the
SCUBA-2 selection, because they lie below the detection threshold.
There are a further 18 SCUBA-2 sources for which there is only a single
radio source within the SCUBA-2 beam, which we take to be
the counterpart. The remaining 7 SCUBA-2 sources either
have multiple radio sources within the beam (this is the case for three sources)
or no radio counterpart (this is the case for four sources, including
the single SCUBA-2 source/SMA pair CDFN15a and CDFN15b
where both SMA counterparts are undetected in the radio).
Some of the sources in the latter category could be spurious if they are close to the
$4\sigma$ threshold, but those that are real, as is clearly the case for
CDFN15, are the most plausible extremely high-redshift galaxy candidates.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig22a.pdf}}
\vskip 0.2cm
\centerline{\includegraphics[width=3.2in,angle=0]{fig22b.pdf}}
\caption{
(a) Radio power vs. redshift for the SCUBA-2 sample
with single radio counterparts at $z>1$ (black squares - spectroscopic,
photometric, or CO redshifts; blue diamonds - millimetric redshifts),
as well as the five without radio counterparts (green right-pointing
arrows; we computed the minimum millimetric redshifts for these by
assuming a 1.4~GHz flux of $10~\mu$Jy).
X-ray AGNs are marked with red squares.
None of the sources are X-ray quasars.
The right-hand axis shows the SFRs calculated from the radio powers
using Equation~\ref{sfr_power},
assuming the sources are powered by star formation.
(b) 850~$\mu$m flux vs. redshift for the same sample and using the
same symbols as in (a). In this panel,
the right-hand axis shows the SFRs calculated from the submillimeter
fluxes using Equation~\ref{sfr_850},
assuming the sources are powered by star formation.
This axis is only valid for sources at $z>1.5$.
\label{rad_power}
}
\end{inlinefigure}
In the following, we restrict our analysis to the SCUBA-2 SMGs with SMA/radio
detections or single radio counterparts, giving a total sample of 45 galaxies.
(Note, however, that with some reasonable assumptions, we also present
results that include the five sources without radio counterparts.)
Where possible, we use the spectroscopic, photometric, or CO redshifts.
As summarized in Table~1, 19 of the 45 sources have such redshifts,
14 of which lie at $z>1.5$. For the remaining 26 sources, we use
the millimetric redshifts from Table~1, 22 of which lie at $z>1.5$.
In Figure~\ref{rad_power}(a), we show radio power (left-hand $y$-axis) and
the SFR calculated from the radio power using Equation~\ref{sfr_power}
(right-hand $y$-axis) versus redshift for the SMGs at $z>1$.
In Figure~\ref{rad_power}(b), we show submillimeter flux (left-hand $y$-axis)
and the SFR calculated from the submillimeter flux using Equation~\ref{sfr_850}
(right-hand $y$-axis) versus redshift for the same sample.
We denote sources with spectroscopic, photometric, or CO redshifts
with black squares, and we denote sources with millimetric redshifts with blue
diamonds. We mark X-ray AGNs with red squares. None of the sources
are X-ray quasars. We show the five sources without radio
counterparts as green right-pointing arrows. We computed the minimum millimetric
redshifts for these by assuming a 1.4~GHz flux of 10~$\mu$Jy.
In both panels, the SFRs range from $400~M_{\sun}$~yr$^{-1}$ to
$\sim6000~M_{\sun}$~yr$^{-1}$. For homogeneity, we decided to calculate
the SFRs from the submillimeter fluxes in our subsequent analysis, but our results
are not significantly changed if we instead compute the SFRs from the radio
powers.
For each source, we determined the area over which a $4\sigma$ detection
would have been made in the SCUBA-2 image.
We then used this to determine the accessible volume in the redshift interval
$z_1$ to $z_2$. Since the conversion from $850~\mu$m flux to SFR
is nearly redshift invariant, this is just the comoving volume
between $z_1$ and $z_2$ that corresponds to the area for that source.
We then formed the number of sources per unit volume per unit $\log$ SFR in the
redshift interval by summing the inverse volumes and dividing by the bin width.
We used bins stepped by 0.5 in $\log$ SFR.
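This is the standard $1/V_{\rm max}$ construction. For concreteness, a minimal
sketch in Python is given below; the input arrays (each source's SFR and its
accessible comoving volume from the SCUBA-2 sensitivity map) and the bin limits
are illustrative assumptions, not values fixed by the text.
\begin{verbatim}
import numpy as np

def sfr_distribution(sfr, vmax, bin_width=0.5, log_min=2.5, log_max=4.0):
    """1/Vmax estimate of the number density per unit log SFR.

    sfr  : SFRs in Msun/yr
    vmax : accessible comoving volumes in Mpc^3, i.e. the volume between
           z1 and z2 corresponding to the area over which each source
           would still be a 4-sigma detection
    """
    sfr, vmax = np.asarray(sfr, float), np.asarray(vmax, float)
    edges = np.arange(log_min, log_max + bin_width, bin_width)
    logsfr = np.log10(sfr)
    phi = np.zeros(len(edges) - 1)
    for k in range(len(phi)):
        in_bin = (logsfr >= edges[k]) & (logsfr < edges[k + 1])
        phi[k] = np.sum(1.0 / vmax[in_bin]) / bin_width
    return 0.5 * (edges[:-1] + edges[1:]), phi   # bin centres, Mpc^-3 dex^-1
\end{verbatim}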
In Figure~\ref{star_formers}, we show the number density of sources per unit
comoving volume per unit
$\log$ SFR versus $\log$~SFR for the $z=1.5-6$ SCUBA-2 sources with SMA
detections or single radio counterparts (black squares). Here and subsequently,
we only use the SMGs with SFRs $>500~M_\sun$~yr$^{-1}$
corresponding to $850~\mu$m fluxes $\gtrsim3$~mJy,
where we have substantial area coverage (see Figure~\ref{area}; this
only eliminates two SMGs).
The green diamonds show the same
but assuming that the five SCUBA-2 sources without radio
counterparts also lie in this redshift interval. Because there is no redshift
dependence in the SFR conversion (see Equation~\ref{sfr_850}),
the submillimeter fluxes of these sources place them in the appropriate
SFR bin. We have not included the three SCUBA-2 sources that have multiple radio
sources within the SCUBA-2 beam, but if they also are at
$z=1.5-6$, then they contain just under 10\% of the total
submillimeter flux, or, equivalently, of the total SFR.
Thus, the overall normalization should not be
increased by more than this amount with their inclusion.
The red solid line shows the shape that would be required
to produce the same amount of star formation
in each logarithmic SFR interval. The two lowest
SFR bins fall on this relation; however, above
$\log {\rm SFR} \sim 3.3$, the measured volume
density begins to drop below this relation. This drop is highly statistically
significant, since a constant amount of star formation
in each logarithmic SFR interval would imply that we would have
23 objects above $\log$ SFR\,$\sim 3.3$ in the field, whereas we see
only four. Over the range of the two lowest data points
($500-2000~M_{\sun}$~yr$^{-1}$), the total SFR density is
$0.016~M_{\sun}$~yr$^{-1}$~Mpc$^{-3}$, while the contribution
from sources with SFRs above $2000~M_{\sun}$~yr$^{-1}$
is only $0.004~M_{\sun}$~yr$^{-1}$~Mpc$^{-3}$. Thus,
we appear to have a characteristic maximum SFR of
$\sim2000~M_{\sun}$~yr$^{-1}$.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig23.pdf}}
\caption{
Number density per unit comoving volume
per unit $\log$ SFR vs. $\log$~SFR for the $>4\sigma$ SCUBA-2
sources at $z=1.5-6$ with SFRs $>500~M_\odot$~yr$^{-1}$.
Black squares show the sources with SMA detections or single radio counterparts.
The error bars are 68$\%$ confidence ranges based on the number of
sources in each bin.
The green diamonds show the results if the five SMGs
without radio counterparts are also assumed to lie in this redshift interval.
The red solid line shows the shape of the SFR distribution
function that would produce equal amounts of star formation in each
$\log$ SFR interval.
\label{star_formers}
}
\end{inlinefigure}
It is unlikely that this result could be affected by gravitational lensing
of the submillimeter/1.4~GHz sources. While the bright-end sources
in ultra-wide-field surveys are dominated by lensed sources (Negrello
et al.\ 2010), there is only a low probability of seeing a significantly lensed
source in a field of the present size (e.g., Takahashi et al.\ 2011).
We searched around the
brightest SMGs for neighboring bright foreground galaxies
that could be plausible lensers and found only two. One
of these is HDF850.1 (Hughes et al.\ 1998), also known as GOODS~850-1,
which has a nearby elliptical galaxy
at $z=1.224$ (Barger et al.\ 2008). Walter et al.\ (2012), using their
new redshift and position for the SMG from the IRAM Plateau de Bure
Interferometer, derived only a modest possible amplification factor of $\sim1.4$.
We can compare the contributions that we found from the very massively
star-forming galaxies in the SCUBA-2 sample to the contributions
from rest-frame UV selected samples.
In Figure~\ref{star_formers_plus}, we plot volume density
versus $\log$~SFR for the SCUBA-2 galaxies from Figure~\ref{star_formers}
and for Lyman break galaxies (LBGs) from
the extinction-corrected UV luminosity functions of van der Burg et al.\ (2010)
(red triangles for $z=4.8$, green diamonds for $z=3.8$,
and blue squares for $z=3.1$) and Reddy \& Steidel (2009) (blue curve
for $z\sim3$ and cyan curve for $z\sim2$). We converted their luminosity
functions to the units of Figure~\ref{star_formers_plus} using the Kennicutt (1998)
conversion of 1600~\AA\ luminosity to SFR for a Salpeter IMF.
van der Burg et al.\ (2010) adopted luminosity-dependent
dust correction factors from Bouwens et al.\ (2009).
Reddy \& Steidel (2009) also used luminosity-dependent dust corrections,
but theirs were significantly smaller.
Indeed, van der Burg et al.\ directly compared their
extinction-corrected SFR densities with those of Reddy \& Steidel in their Figure~14
and found them to be quite different, illustrating the level of uncertainty in the
extinction corrections.
While the distribution of SMG SFRs appears to extend smoothly
from the distribution of LBG SFRs, the LBG SFRs determined from the
extinction-corrected UV selected samples are not as high as those of the
SMGs but instead cut off at $\sim300~M_\odot$~yr$^{-1}$.
Thus, either the SMGs are completely omitted from the UV selected samples, or
the extinction corrections applied to some UV sources are substantially
underestimated (see discussion in Bouwens et al.\ 2009).
Even if catastrophically wrong extinction corrections are applied to some
UV sources, causing lower SFRs to be assigned to sources that
genuinely have high SFRs, the UV distributions in Figure~\ref{star_formers_plus}
would remain the same. The reason is that the volume density of SMGs is
much smaller than that of LBGs, which means the number of sources that
would be affected would be too small to make a difference.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig24.pdf}}
\caption{
Number density per unit comoving volume
per unit $\log$ SFR vs. $\log$~SFR for the $>4\sigma$ SCUBA-2
sources at $z=1.5-6$ with SFRs $>500~M_\odot$~yr$^{-1}$.
Black squares show the sources with SMA detections or single radio counterparts.
The error bars are 68$\%$ confidence ranges based on the number of
sources in each bin.
The green diamonds show the results if the five SMGs
without radio counterparts are also assumed to lie in this redshift interval.
For comparison, the small symbols and curves show extinction-corrected UV
results from van der Burg et al.\ (2010)
(red triangles - $z=4.8$; green diamonds - $z=3.8$; blue squares - $z=3.1$)
and Reddy \& Steidel (2009) (blue curve - $z\sim3$; cyan curve - $z\sim2$),
assuming the Kennicutt (1998) conversion of UV luminosity to SFR for a
Salpeter IMF.
\label{star_formers_plus}
}
\end{inlinefigure}
Since the brightest submillimeter fluxes of the LBGs are only $\sim0.2-0.3$~mJy
based on stacking analyses
(e.g., Peacock et al.\ 2000; Chapman et al.\ 2000; Webb et al.\ 2003),
the SMG SFRs do not overlap with the LBG SFRs at the present submillimeter
sensitivities, which are set by the blank-field confusion limit.
Thus, there is a gap between the two populations where the SFR distribution
function is poorly determined.
In Figure~\ref{sfr_history}, we plot the SFR density per unit
comoving volume for the SCUBA-2 sample with SFRs $>500~M_\sun$~yr$^{-1}$
(black squares) versus redshift and compare it with the compilation by
Hopkins \& Beacom (2006; black solid curve)
for extinction-corrected UV selected samples over the same redshift range.
We compare these results with the SFR density history
of Barger et al.\ (2012)---who used a substantially smaller
SCUBA selected and SMA confirmed sample in the GOODS-N---after
reducing their points by a factor of 1.4 (blue open squares)
to adjust them to the SFR calibration of the present paper.
Note that Casey et al.\ (2013) constructed the most recent SFR density history
using both 450~$\mu$m and 850~$\mu$m selected
SCUBA-2 samples in the COSMOS field, which they compared with
Barger et al.\ (2012) at 850~$\mu$m, Chapman et al.\ (2005) at 850~$\mu$m
using SCUBA, Wardlow et al.\ (2011) at 870~$\mu$m using LABOCA,
Roseboom et al.\ (2012) at 1.2~mm using MAMBO,
and Casey et al.\ (2012a,b) at $250-500~\mu$m using {\em Herschel}-SPIRE
(see their Figure~14).
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.2in,angle=0]{fig25.pdf}}
\caption{
SFR density per unit comoving volume vs. redshift for the SCUBA-2
sample with SFRs $>500~M_\sun$~yr$^{-1}$.
The black squares show the computations at $z=1.5-2.5$, $2.5-3.5$, $3.5-4.5$,
and $4.5-5.5$ and are plotted at the mean redshift of each bin.
Sources without radio counterparts are placed
at their minimum millimetric redshifts, and we have renormalized
the points by a multiplicative factor of 1.1 to allow for the sources
with multiple radio counterparts. The error bars
are 68$\%$ confidence ranges based on the number of sources in the bin.
The black solid curve shows the SFR density history computed by
Hopkins \& Beacom (2006) based on
UV selected samples, and the red dashed curve shows this
multiplied by 0.16 to match roughly the SFR density history of
the current sample. The blue open squares show
the SFR density history computed by Barger et al.\ (2012) based
on the smaller SCUBA sample in the GOODS-N field. We have
reduced these points by a factor of 1.4 to correspond to the present
SFR calibration.
\label{sfr_history}
}
\end{inlinefigure}
With our relatively large sample, we see a smoother evolution than
we saw in Barger et al.\ (2012), and one which closely matches the shape
of the Hopkins \& Beacom (2006) SFR density history. The massive
star-forming galaxies studied in this paper contain about 16\% of the SFR
density seen in the UV selected population
(the red dashed curve shows a scaled down version of Hopkins \& Beacom),
though the systematic uncertainties in the SFR determinations and in the UV
extinction corrections could easily change this by multiplicative factors of two.
As we saw from Figure~\ref{star_formers_plus}, the contributions are
essentially disjoint, and the SMG contributions should be added to the
extinction-corrected UV contributions.
The fraction of the total star formation in galaxies with SFRs $>500~M_\sun$~yr$^{-1}$
is substantially higher than what was found by Rodighiero et al.\ (2011)
using a {\em Herschel\/}-PACS selection, which suggests that the longer
wavelength samples of the present paper are more effective
at finding these galaxies. Indeed, only 13 of the 27 SMA detected sources
lying in the GOODS-{\em Herschel\/} region
(which have exact positions and confirmed 850~$\mu$m fluxes)
are detected above the $4\sigma$ threshold in the PACS
100~$\mu$m image.
The combined extinction-corrected UV and submillimeter data
in Figure~\ref{sfr_history} show that the shape of the SFR density history
does not change much over the redshift range $z=1.5-5.5$, though there is
about a factor of 3 increase in the absolute normalization at $z=2$
relative to that at $z=5$.
\section{Discussion}
\label{secdisc}
In this discussion, our goal is to fit together the various pieces of information
that we have presented in this paper to form a comprehensive picture.
From Figure~\ref{star_formers}, we saw a turn-down in the SFR
distribution function for SFRs above $2000~M_\sun$~yr$^{-1}$.
However, even $2000~M_\sun$~yr$^{-1}$ is an extraordinarily high
SFR, and we aim to show that this rate cannot be sustained for any long
period ($\gg 10^8$~yr) in individual galaxies
without producing too many ultra-massive galaxies overall.
Under the simple assumption that all the
SCUBA-2 galaxies have a fixed period of star formation, $\tau$, that does
not depend on luminosity or redshift, each
galaxy forms a mass $M=\tau~\times$~SFR. We obtain the number density
by integrating the SFR distribution function (Figure~\ref{star_formers}) over
cosmic time and
dividing by $\tau$. To make the integration,
we assume that the shape of the SFR distribution function is fixed with redshift,
but we allow the normalization to vary with redshift in order to match the
shape of the red dashed curve in Figure~\ref{sfr_history}.
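A rough numerical transcription of this integration is sketched below; the
callables \texttt{phi\_sfr} (the fixed shape) and \texttt{norm} (the
normalization history traced by the red dashed curve), as well as the
integration limits and the cosmology, are placeholders introduced here for
illustration only.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def predicted_mass_function(log_m, phi_sfr, norm, tau, z_obs=2.75, z_max=6.0):
    """Predicted number density per unit log M at z_obs.

    log_m           : array of log10 stellar masses (Msun)
    phi_sfr(logSFR) : redshift-independent shape of the SFR distribution
                      function (Mpc^-3 dex^-1), a vectorized callable
    norm(z)         : dimensionless normalization history, vectorized
    tau             : duration of each star-forming episode in yr
    Each episode builds a mass M = tau * SFR, so galaxies of mass M come
    from episodes with logSFR = log10(M) - log10(tau).
    """
    zgrid = np.linspace(z_obs, z_max, 200)
    tgrid = cosmo.age(zgrid).to_value('yr')   # cosmic time at each z
    log_sfr = np.atleast_1d(log_m) - np.log10(tau)
    rate = phi_sfr(log_sfr)[:, None] * norm(zgrid)[None, :] / tau
    return -np.trapz(rate, tgrid, axis=1)     # minus sign: t falls as z rises
\end{verbatim}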
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.4in,angle=0]{fig26.pdf}}
\caption{
Mass density per unit comoving volume per unit $\log$~M vs.
$\log$~M for $z=2.5-3$, computed from the SFR distribution function
of Figure~\ref{star_formers} assuming a fixed shape with redshift
but allowing the normalization to vary.
The open diamonds (black squares) show the values calculated
if each SFR episode lasts for $5\times10^{8}$~yr
($10^{8}$~yr). The red curve shows the mass distribution function
of Ilbert et al.\ (2013) in this redshift interval corrected to a Salpeter IMF.
\label{mass_den}
}
\end{inlinefigure}
In Figure~\ref{mass_den}, we plot the mass distribution function at
$z=2.5-3$ that we predict using two values of $\tau$,
$10^8$~yr (squares) and $5\times10^8$~yr (diamonds).
We compare these with the mass distribution function determined by
Ilbert et al.\ (2013) using the UltraVISTA DR1 data release for the same
redshift interval, after correcting theirs to a Salpeter IMF (red curve).
Star-forming lifetimes longer than $10^8$~yr greatly over-predict the
high-mass end of the mass distribution. This is also consistent
with the measured gas reservoirs in the SMGs, which are only
large enough to sustain such high SFRs for a limited length of time.
For example, for GN20, Hodge et al.\ (2012, 2013a) give a CO based mass of
$1.3\times10^{11}~M_\sun$ and a dynamical
mass of $5.4\times10^{11}~M_\sun$, which could only
sustain GN20's high SFR for a short period.
Thus, there are many generations of high SFR galaxies contributing
to the SFR distribution function through the redshift
interval that are continuously switching on, forming a
massive galaxy, and then switching off. Correspondingly, there
will be a wide range of ages in the most massive galaxies at any redshift.
However, at lower redshifts (later cosmic times), the distribution of ages
will be centered around a more mature age.
The $K-z$ relation shows that nearly
all high-redshift radio sources lie in very massive galaxies.
The most powerful radio sources lie in galaxies with stellar
masses $>10^{12}~M_\sun$, which must have formed at
high redshifts ($z\gg5$) (Willott et al.\ 2003; Rocca-Volmerange et al.\ 2004).
However, the dependence of mass on radio power is extremely weak.
We find that even sources $1000-10000$ times less
powerful in the radio than those considered by Willott et al.\ (2003) must lie
in galaxies with masses in excess of $10^{11}~M_\sun$.
Such galaxies are moderately rare at all redshifts and lie on the exponential
tail of the mass distribution function (see Figure~\ref{mass_den}).
However, approaching this from the other direction, we also find that a substantial
fraction of high-mass galaxies contain powerful radio sources. We
show the mass versus redshift relation for galaxies in the GOODS-N
region in Figure~\ref{radio_mass}.
We calculated the masses following Cowie \& Barger (2008) and using
a Salpeter IMF. We mark the galaxies with spectroscopic redshifts
with black squares and the galaxies with photometric
redshifts with green circles. We enclose the galaxies with high radio powers
($P_{\rm 1.4~GHz}\ge10^{31}$~erg~s$^{-1}$~Hz$^{-1}$) in red diamonds.
\vskip 0.5cm
\begin{inlinefigure}
\centerline{\includegraphics[width=3.4in,angle=180]{fig27.pdf}}
\caption{
Mass vs. redshift in the GOODS-N region. Black
squares denote spectroscopic redshifts, and green circles denote
photometric redshifts. The red diamonds mark high radio power
sources ($P_{\rm 1.4~GHz}\ge10^{31}$~erg~s$^{-1}$~Hz$^{-1}$).
The red dashed line shows the $10^{11}~M_\sun$ limit.
\label{radio_mass}
}
\end{inlinefigure}
As expected from the $K-z$ relation,
nearly all the high radio power sources have galaxy masses $>10^{11}~M_\sun$.
In the other direction, above $z=2$, $34\%\pm11\%$ of the galaxies satisfying
this mass limit contain high radio power sources. Thus,
the integrated lifetime of the powerful radio period
must be long. At $z=2$, the age of the universe is $3.2\times10^{9}$~yr,
and in order to have one-third of the massive galaxies be powerful
radio sources, this phase (or phases) must have a lifetime in excess
of $10^{9}$~yr.
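Written as a simple duty-cycle estimate, with $f$ the fraction of massive
galaxies hosting powerful radio sources,
$$\tau_{\rm radio}\ \gtrsim\ f\, t_{\rm univ}(z=2)\ \approx\ \tfrac{1}{3}\times3.2\times10^{9}~{\rm yr}\ \approx\ 1.1\times10^{9}~{\rm yr}.$$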
The ratio of the star-forming timescale ($10^{8}$~yr or less)
to that of the radio-powerful AGN period(s)
(in excess of $10^{9}$~yr)
implies that, at least at $z\sim2$, $\sim10\%$ of the
high radio power sources will be star formers.
This is roughly consistent with the fraction that we determined in
Section~\ref{subsechighradio} ($13-16$\%).
Thus, after the $>10^{11}~M_\odot$ galaxies form in the short initial starburst,
they spend a substantial fraction of their subsequent lifetime as high radio
power AGNs.
\section{Summary}
\label{secsum}
In this paper, we presented an integrated SCUBA-2, SMA, and 1.4~GHz study of
a 400~arcmin$^2$ area surrounding the GOODS-N field. Using the SCUBA-2
data, we constructed an 850~$\mu$m catalog of 49 sources to 2~mJy ($4\sigma$).
We looked for counterparts to these sources in the ultradeep (11.5~$\mu$Jy
at $5\sigma$) radio data. In cases where there were multiple radio counterparts,
we often were able to use new and existing SMA data to determine the correct
counterparts (the correct radio counterparts to only three SMGs remain uncertain).
Only five SMGs have no
radio counterparts, making them extremely high redshift galaxy candidates.
We compiled extensive spectroscopic redshifts for the radio sources in the
field, either from our own observations or from the literature. In GOODS-N proper, the redshift
identifications are highly complete to $K_s=21$ after including a small number
of photometric redshifts.
For the SMGs without spectroscopic, CO, or photometric redshifts,
we used an Arp~220 based model from Barger et al.\ (2000) to
measure millimetric redshifts from the 1.4~GHz to 860~$\mu$m flux ratios.
The millimetric redshifts predominantly fill in the $z\sim2.5-4$ range.
We found an extended tail of radio sources
with faint optical/NIR counterparts, the faintest of which are undetected even in the
{\em HST\/} ACS images. These sources are not identifiable with optical/NIR
spectroscopy or photometry and may lie at high redshifts. Indeed, we found that
there is a strong correlation between $K_s$ magnitude and redshift in the radio
sample (the $K-z$ relation), making it possible to use the $K_s$ magnitudes as a
crude redshift estimator for the radio sources.
We computed rest-frame radio power for the radio sources with spectroscopic, CO, or
photometric redshifts. At $z\gtrsim3$, even these ultradeep observations are only
sensitive to sources brighter than the ULIRG limit calculated assuming the FIR-radio
correlation.
We are particularly interested in the high radio power
($P_{\rm 1.4~GHz}\ge10^{31}$~erg~s$^{-1}$~Hz$^{-1}$)
sources at high redshifts, as those at $z<1.5$ mostly appear to be AGN powered.
At $z>1.5$, a substantial fraction (37\%) of the spectroscopically identified
high radio power sources are detected in
the submillimeter, suggesting that they are massive star formers. However, it is difficult to
determine the true fraction of high radio power star formers, because there are strong
selection effects in the spectroscopic identifications of the radio sources at high redshifts.
Based on the $K-z$ relation, the unidentified radio sources at $K_s>21$ should
lie at high redshifts. Using the 850~$\mu$m signal-to-noise ratio for the high radio
power sources at these magnitudes, we found a likely star-forming fraction of $13-16$\%.
We computed SFRs for the individual sources from the 1.4~GHz power, assuming
that the FIR-radio correlation is roughly invariant to $z\sim6$ for SMGs down to
our 850~$\mu$m flux threshold of 2~mJy, and from the submillimeter fluxes,
using an SFR conversion computed from the average SEDs of isolated galaxies
in the sample. (The SFR conversion is quite close to what would be computed from Arp~220.)
We found that the SFRs derived from the two methods are in good agreement for
the sources with spectroscopic, CO, or photometric redshifts $z>1.5$, though the
submillimeter derived SFRs are about 5\% higher, on average, than the radio
derived SFRs. This is well within the uncertainties in the absolute calibration of
the SFRs.
We found that at high radio powers, the submillimeter detected sources are quite
distinct from the other radio sources. A small number of these had
high-resolution radio observations that showed them to be predominantly extended or
star formation dominated, so we assumed that they were all star formation
dominated.
We found that the SFRs of the SMGs ranged from $400~M_\odot$~yr$^{-1}$ to
$\sim6000~M_\odot$~yr$^{-1}$. We constructed the SFR distribution function
for the SMGs at $z>1.5$ with
SFRs~$>500~M_\odot$~yr$^{-1}$ and found a characteristic maximum SFR of
$\sim2000~M_\odot$~yr$^{-1}$. It should be emphasized that while we only
have spectroscopic redshifts for about 40$\%$ of the sources, and the
remaining sources have relatively uncertain redshifts primarily based on the
radio-to-submillimeter flux ratios, the results are very insensitive to even fairly
large errors in the redshifts. Because the conversion from 850~$\mu$m flux to SFR
depends extremely weakly on redshift at $z>1.5$, the result would only change if an
SMG was moved outside this redshift interval by the redshift uncertainty.
We compared our submm results with extinction-corrected
UV selected samples and saw that the LBGs do not have SFRs as high as those of the SMGs
but instead cut off at $\sim300~M_\odot$~yr$^{-1}$. Thus, the two samples are
essentially disjoint.
We constructed the SFR density history for the SMG sample and compared it with
the extinction-corrected UV selected SFR density history
compilation of Hopkins \& Beacom (2006) over the
redshift range $z=1-6$. The shapes closely match, with the SMG SFR density
history being about 16\% of the extinction-corrected UV selected SFR density history.
However, since the samples are disjoint, the SMG contributions and the
extinction-corrected UV contributions should be added for a fuller accounting
of the overall star formation history.
Finally, we discussed how the above information could be put together to form a
comprehensive picture. We concluded that nearly all high radio power
sources have galaxy masses $>10^{11}~M_\odot$ and that in order to avoid
over-predicting the high galaxy mass end, there must be many generations
of high SFR galaxies that are continuously switching on, forming a massive galaxy in
a period of $<10^8$~yr, and then switching off. However, the powerful radio
period lasts much longer ($>10^9$~yr), making the high radio power sources without
submillimeter counterparts the most common type of
high radio power source, even at high redshift.
\acknowledgements
We thank the anonymous referee for a thoughtful report.
We gratefully acknowledge support from
the University of Wisconsin Research Committee with funds
granted by the Wisconsin Alumni Research Foundation and the
David and Lucile Packard Foundation (A.~J.~B.),
NSF grants AST-1313150 (A.~J.~B.), AST-0709356 (L.~L.~C., C.-C.~C.),
and AST-1313309 (L.~L.~C.), and
National Science Council of Taiwan grant
102-2119-M-001-007-MY3 (W.-H.~W.).
C.~M.~C. was generously supported by a Hubble Fellowship provided
by Space Telescope Science Institute, grant HST-HF-51268.01-A.
We acknowledge the cultural significance that the summit of
Mauna Kea has to the indigenous Hawaiian community.
\section{Introduction}
An undirected graph $G=(V,E)$ is an {\em{interval graph}} if it is the intersection graph of a family of intervals on the real line in which each vertex is assigned an interval and two vertices are adjacent if and only if their corresponding intervals intersect. The study of interval graphs was spearheaded by Benzer \cite{B} in the course of his studies of the topology of the fine structure of genes. Since then, interval graphs and their various generalizations have been studied thoroughly. Advances in the field of molecular biology, and genetics in particular, created the need for new models. In \cite{Z}, Zhang introduced another generalization of interval graphs called probe interval graphs, in an attempt to aid a problem called cosmid contig mapping, a particular component of the physical mapping of DNA. A {\em{probe interval graph}} is an undirected graph $G=(V,E)$ in which the set of vertices $V$ can be partitioned into two subsets $P$ and $N$ (called probes and nonprobes respectively) and there is an interval (on the real line) corresponding to each vertex such that vertices are adjacent if and only if their corresponding intervals intersect and at least one of the vertices belongs to $P$. Several research works are continuing on this topic and on some of its special classes \cite{BL,CK,GL,JS,MWZ,LS}. In fact, Golumbic and Trenk have devoted an entire chapter to probe interval graphs in their recent book \cite{GT}. Moreover, motivated by the definition of probe interval graphs, the concept of probe graph classes has been introduced in general. Given a class of graphs $\mathscr{G}$, a graph $G$ is a {\em probe graph of} $\mathscr{G}$ if its vertices can be partitioned into a set $P$ of probes and an independent set $N$ of nonprobes such that $G$ can be extended to a graph of $\mathscr{G}$ by adding edges between certain nonprobes. In this way, many more probe graph classes have been defined and widely investigated, e.g., probe split graphs, probe chordal graphs, probe tolerance graphs, probe threshold graphs and others \cite{BGL,CKKLP,RB,RLB}.
In all these studies, nothing has been said about the nature of adjacency matrices of probe interval graphs until now. In this paper we present three characterizations of the adjacency matrix of a probe interval graph. The first one is in this section and the two others are in section 3. In section 2, we describe an easy method of obtaining an interval representation of an interval bipartite graph from its adjacency matrix. Moreover, we prove that if we add a loop at every probe vertex of a probe interval graph, then the Ferrers dimension of the corresponding symmetric bipartite graph is at most $3$.
We first note that any interval graph $G=(V,E)$ is a probe interval graph with probes $P$ and nonprobes $N$, where $N$ is any independent set (possibly singleton) of $G$ and $P=V\smallsetminus N$. Certainly the converse is false, as $C_4$ is a probe interval graph which is not an interval graph. So probe interval graphs generalize the class of interval graphs. Further generalizations lead to the following concepts. An undirected graph $G=(V,E)$ is an {\em{interval split graph}} \cite{Br} if the set of vertices $V$ are partitioned into two subsets $U_1$ and $U_2$ such that the subgraph induced by $U_1$ is an interval graph and $U_2$ is an independent set. Every probe interval graph is an interval split graph, as $N$ is an independent set and the subgraph induced by $P$ is an interval graph. Again interval bipartite graphs (cf.~\S 2) are generalized to interval $k$-graphs. An undirected graph with a proper coloring by $k$ colors is an {\em{interval $k$-graph}} \cite{Br} if each vertex corresponds to an interval on the real line so that two vertices are adjacent if and only if their corresponding intervals intersect and they are of different colors. Brown \cite{Br} showed that any $k$-chromatic probe interval graph is an interval $k$-graph. Also, since interval $k$-graphs are weakly chordal\footnote{An undirected graph $G$ is {\em weakly chordal} if neither $G$ nor its complement $\bar{G}$ contains an induced cycle of length greater than $4$.} and hence perfect,\footnote{An undirected graph $G$ is {\em perfect} if for every induced subgraph $H$ of $G$, the chromatic number of $H$ is equal to its maximal clique size.} we have that probe interval graphs are also weakly chordal and perfect. While comparing two graphs discussed earlier, Brown \cite{Br} made the comment:
``there are interval split graphs which are not interval $k$-graphs (e.g., $C_5$ or any cycle of length greater than or equal to $5$). The converse is not known -- but has not received much attention.'' The following example shows that there are interval $k$-graphs which are not interval split graphs.
\begin{exmp} {\em Consider the graph $G=K_{2,2,2}$, which is an interval $3$-graph. But it is not an interval split graph, since it has only $3$ maximal independent sets, namely, $\set{a,d},\set{b,c}$ and $\set{x,y}$. For each such choice, the other $4$ vertices induce the subgraph $C_4$, which is not an interval graph (and a smaller independent set leaves even more vertices, still inducing a $C_4$).}
\begin{figure}[h]
\begin{center}
\includegraphics*[scale=0.4]{abs6.eps}
\end{center}
\end{figure}
\end{exmp}
Now the class of probe interval graphs lies in the intersection of the class of interval split graphs and the class of interval $k$-graphs but there are examples (Brown presented one such in \cite{Br}) which are both interval split graphs and interval $k$-graphs but not probe interval graphs. Thus the following is an interesting open problem to study.
\begin{prob}
Characterize the class of graphs which are both interval split graphs and interval $k$-graphs.
\end{prob}
Regarding forbidden subgraph characterizations, Brown \cite{Br} showed that interval $k$-graphs and hence probe interval graphs are ATE-free\footnote{An {\em asteroidal triple of edges} (ATE) in an undirected graph $G$ is a set of three edges such that for any two there exists a path in $G$ that avoids the neighborhood of the third edge.}, while Sheng \cite{LS} characterized cycle free probe interval graphs in terms of six forbidden subgraphs (trees). Among the other characterizations, Brown \cite{Br} and Zhang \cite{Z} generalized the well known \cite{GH,G} result that an undirected graph is an interval graph if and only if its maximal cliques are consecutively ordered\footnote{A set of distinct induced subgraphs $\mathcal{G}=\set{G_1,G_2,\ldots,G_t}$ of a graph $G=(V,E)$ is {\em consecutively ordered} when for each $v\in V$, if $i<j<l$ and $v\in G_i\cap G_l$, then $v\in G_j$.}. Brown \cite{Br} proved that if $G=(V,E)$ is an undirected graph with an independent set $N\subseteq V$, then $G$ is a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$ if and only if $G$ has an edge cover of quasi-cliques\footnote{A {\em quasi-clique} $Q$ in a probe interval graph $G=(P,N,E)$ is a set of vertices with all vertices of $Q\cap P$ are adjacent to each other and any vertex of $Q\cap N$ is adjacent to all vertices of $Q\cap P$.} that can be consecutively ordered.
In the following we shall present the first characterization of adjacency matrices of probe interval graphs (cf. Observation \ref{o:char1}), which is simple and immediate. Let $G=(V,E)$ be a simple undirected graph. If we replace\footnote{This replacement is equivalent to adding a loop at each vertex of $G$.} all principal diagonal elements\footnote{which are $0$ in the adjacency matrix of $G$.} of the adjacency matrix of $G$ by $1$, then this matrix is known as the {\em \label{'augmented'} augmented adjacency matrix} of $G$. Let $M$ be a symmetric $(0,1)$ matrix with $1$'s in the principal diagonal. Then $M$ is said to satisfy the {\em quasi-linear ones property} if the $1$'s are consecutive to the right of and below the principal diagonal. It is known \cite{MR} that $G$ is an interval graph if and only if the rows and columns of the augmented adjacency matrix of $G$ can be suitably permuted (using the same permutation for rows and columns) so that it satisfies the quasi-linear ones property.
\begin{defn}
{\em Let $M$ be a symmetric $(0,1)$-matrix with $1$'s in the principal diagonal. Suppose $M$ contains a principal submatrix\footnote{We call a square submatrix $N$ of $M$ {\em{principal}} if the principal diagonal elements of $N$ are also principal diagonal elements of $M$.} $N$ which is an identity matrix. Denote all the zeros of $N$ by $X$. Then $M$ is said to satisfy the {\em{quasi-x-linear ones property}} if every $0$ to the right of the principal diagonal has only $0$'s and $X$'s to its right and every $0$ below the principal diagonal has only $0$'s and $X$'s below it.}
\end{defn}
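As a quick operational check (our sketch; we encode the matrix as strings over $\set{0,1,X}$ and assume the $X$ marking, i.e., the choice of the independent set $N$, is given), the property can be verified directly:
\begin{verbatim}
def quasi_x_linear(M):
    # M: square symmetric matrix as strings over {'0','1','X'}, with '1'
    # on the principal diagonal; the 'X' entries mark the zeros of the
    # principal identity submatrix N.
    n = len(M)
    for i in range(n):
        for j in range(i + 1, n):
            # a plain 0 right of the diagonal: only 0/X may follow it
            if M[i][j] == '0' and '1' in M[i][j + 1:]:
                return False
            # a plain 0 below the diagonal: only 0/X may lie below it
            if M[j][i] == '0' and any(M[k][i] == '1' for k in range(j + 1, n)):
                return False
    return True

# A 4-cycle with probes in the first and last positions
# (cf. the C_4 example given after the observation below):
print(quasi_x_linear(["1110", "11X1", "1X11", "0111"]))   # True
\end{verbatim}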
\vspace{1em} Now from the definition of a probe interval graph $G$ it follows that the graph obtained by adding edges to $G$ between the pairs of nonprobes whose intervals intersect is an interval graph with the same assignment of intervals to the vertices as in $G$. Conversely, let $G=(V,E)$ be an interval graph and $N\subseteq V$. Then the graph obtained by removing all the edges between any two vertices belonging to $N$ from $G$ is a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$. Thus for an undirected graph $G=(V,E)$ with an independent set $N$, if adding edges between some vertices of $N$ makes it an interval graph, then the graph $G$ must be a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$. This simple observation leads to the following characterization of probe interval graphs:
\vspace{2em}\begin{obs}\label{o:char1}
Let $G=(V,E)$ be an undirected graph with an independent set $N\subseteq V$. Let $A(G)$ be the augmented adjacency matrix of $G$. Then $G$ is a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$ if and only if rows and columns of $A(G)$ can be suitably permuted (using the same permutation for rows and columns) in such a way that it satisfies the quasi-x-linear ones property.
\end{obs}
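To illustrate the observation, consider again the $4$-cycle $C_4$ with vertices $p_1,n_1,p_2,n_2$ (in cyclic order), probes $P=\set{p_1,p_2}$ and nonprobes $N=\set{n_1,n_2}$. Ordering the vertices as $p_1,n_1,n_2,p_2$, the augmented adjacency matrix becomes
$$\begin{array}{c|cccc}
 & p_1 & n_1 & n_2 & p_2\\
\hline
p_1 & 1 & 1 & 1 & 0\\
n_1 & 1 & 1 & X & 1\\
n_2 & 1 & X & 1 & 1\\
p_2 & 0 & 1 & 1 & 1
\end{array}$$
which satisfies the quasi-x-linear ones property: the principal submatrix on $N$ is an identity matrix with its zeros marked $X$, and the only remaining $0$'s, at the positions $p_1p_2$ and $p_2p_1$, have nothing to their right and below them respectively. Replacing the $X$'s by $1$ (i.e., adding the edge $n_1n_2$) yields a matrix with the quasi-linear ones property, so $C_4$ is a probe interval graph with probes $\set{p_1,p_2}$, as noted earlier.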
\vspace{-1.5em} \section{Interval representations of interval bipartite graphs}
\vspace{-1.25em} An {\em{interval bipartite graph}} (in short, {\em interval bigraph}) is a bipartite graph $B=(X,Y,E)$ with bipartition $(X,Y)$, representable by assigning each vertex $v\in X\cup Y$ an interval $I_v$ (on the real line) so that two vertices $x\in X$ and $y\in Y$ are adjacent if and only if $I_x\cap I_y\neq\emptyset$ \cite{HKM}. Since $X$ and $Y$ are independent sets in $B$, here we only consider the submatrix of the adjacency matrix of $B$ consisting of the rows corresponding to one partite set and the columns corresponding to the other. This submatrix is known as the {\em{biadjacency matrix}} of $B$. A bipartite graph $B$ is an interval bigraph if and only if the rows and columns of the biadjacency matrix of $B$ can be (independently) permuted so that each $0$ can be replaced by $R$ or $C$ in such a way that every $R$ has only $R$'s to its right and every $C$ has only $C$'s below it. Such a partition of zeros in the biadjacency matrix of $B$ is called an {\em{R-C partition}} of it \cite{SDRW}. Again a $(0,1)$-matrix $A$ has the {\em generalized linear ones property} if it has a stair partition\footnote{A {\em stair partition} of a matrix is a partition of its positions into two sets $(L,U)$ by a polygonal path from the upper left to the lower right, such that the set $L$ [$U$] is closed under leftward or downward [respectively, rightward or upward] movement \cite{SDW}.} $(L,U)$ such that the $1$'s in $U$ are consecutive and appear leftmost in each row, and the $1$'s in $L$ are consecutive and appear topmost in each column. For the biadjacency matrix $A$ of a bipartite graph $B$ this property is equivalent to having an R-C partition, i.e., $B$ is an interval bigraph if and only if the rows and columns of $A$ can be (independently) permuted so that the resulting matrix has the generalized linear ones property \cite{SDW}. Now there are many methods \cite{Mu,SDRW,W} of obtaining an interval representation of an interval bigraph when the R-C partition of its biadjacency matrix is given. We present here another one for further use.
\begin{defn}
{\em Let $B$ be an interval bigraph with the biadjacency matrix $A$, which is in R-C partition form. We insert some rows and columns in $A$, each of which has all entries $X$ except the principal diagonal element, which is $1$, so that $A$ is enhanced to a square matrix in which each $R$ is to the right of the principal diagonal and each $C$ is below the principal diagonal. Now replace each $X$ to the right of an $R$ by $R$, each $X$ below a $C$ by $C$, and the rest by $1$. This matrix, say, $\tilde{A}$ is called a {\em diagonalized} form of $A$ and the above process of obtaining $\tilde{A}$ from $A$ will be called a {\em{diagonalization}} of $A$. We denote the bigraph whose biadjacency matrix is $\tilde{A}$ by $\tilde{B}$}\footnote{Note that $\tilde{B}$ is also an interval bigraph, as $\tilde{A}$ is still in R-C partition form and $B$ is an induced subgraph of $\tilde{B}$.}.
\end{defn}
An easy method of diagonalization is as follows. In the stair partition of $A$, if a step, parallel to the rows [columns], lies along $k$ columns [respectively, rows], then insert $k$ rows [respectively, columns] (as described previously) just above [respectively, after] the step. Accordingly we get a diagonalized matrix $\tilde{A}$ whose number of rows (as well as columns) is the sum of the numbers of rows and columns of $A$. For practical purposes the number of insertions of rows and columns can be reduced, as the following example shows.
\begin{exmp}\label{exmp:diag}
{\em Consider the following biadjacency matrix $A$ of an interval bigraph:}
\vspace{-1.5em}
$$\begin{array}{c|ccccc}
\textrm{\small Vertices} & x_1 & x_2 & x_3 & x_4 & x_5\\
\hline
y_1 & 1 & 1 & 1 & 0 & 0 \\
y_2 & 1 & 0 & 0 & 1 & 0 \\
y_3 & 0 & 0 & 0 & 1 & 0
\end{array}\hspace{1in} \begin{array}{c|ccccc}
\textrm{\small Vertices} & x_1 & x_2 & x_3 & x_4 & x_5\\
\hline
y_1 & 1 & 1 & 1 & R & \multicolumn{1}{c@{\,\vline}}{R}\\
\cline{2-4}
y_2 & 1 & C & \multicolumn{1}{c@{\,\vline}}{C} & 1 & \multicolumn{1}{c@{\,\vline}}{R}\\
\cline{5-5}
y_3 & C & C & C & \multicolumn{1}{c@{\,\vline}}{1} & \multicolumn{1}{c@{\,\vline}}{R}\\
\cline{2-6}
\end{array}$$
{\em A diagonalization of $A$ is given by}
$$\begin{array}{c|cccccc}
\textrm{\small v} & x_1 & x_2 & x_3 & x_4 & x_6 & x_5\\
\hline
y_1 & 1 & 1 & 1 & R & X & R \\
y_4 & X & 1 & X & X & X & X \\
y_5 & X & X & 1 & X & X & X \\
y_2 & 1 & C & C & 1 & X & R \\
y_3 & C & C & C & 1 & 1 & R \\
y_6 & X & X & X & X & X & 1
\end{array}\hspace{1in}
\begin{array}{c|cccccc}
\textrm{\small v} & x_1 & x_2 & x_3 & x_4 & x_6 & x_5\\
\hline
y_1 & \textcolor{darkmagenta}{1} & 1 & 1 & R & R & R \\
y_4 & 1 & \textcolor{darkmagenta}{1} & 1 & 1 & 1 & 1 \\
y_5 & 1 & 1 & \textcolor{darkmagenta}{1} & 1 & 1 & 1 \\
y_2 & 1 & C & C & \textcolor{darkmagenta}{1} & 1 & R \\
y_3 & C & C & C & 1 & \textcolor{darkmagenta}{1} & R \\
y_6 & C & C & C & 1 & 1 & \textcolor{darkmagenta}{1}
\end{array}$$
\end{exmp}
\vspace{-1em}Now we present an algorithm to obtain an interval representation of an interval bigraph $B$.
\begin{algo} \label{alg:big1}
{\em {\small\bf Input:}\ Diagonalized matrix $\tilde{A}$ (of order $n\times n$ (say)), where $A$ is the biadjacency matrix (in R-C partition form) of an interval bigraph $B$.\\
{\small\bf Step I:}\ For each $i=1\textrm{ to }n$, define $a_i=i$ and $b_i=r$, where in the $i^{\textrm{th}}$ row the last $1$ appears in the $r^{\textrm{th}}$ column on or after the principal diagonal of $\tilde{A}$.\\
{\small\bf Step II:}\ For each $j=1\textrm{ to }n$, define $c_j=j$ and $d_j=s$, where in the $j^{\textrm{th}}$ column the last $1$ appears in the $s^{\textrm{th}}$ row on or after the principal diagonal \mbox{of $\tilde{A}$}.\\
{\small\bf Output:}\ The closed intervals $[a_i,b_i]$ and $[c_j,d_j]$, which are corresponding to the $i^{\textrm{th}}$ row and $j^{\textrm{th}}$ column of $\tilde{A}$ respectively.}
\end{algo}
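For concreteness, the following is a direct transcription of Algorithm \ref{alg:big1} in Python (a sketch under the assumption that $\tilde{A}$ is given as a list of strings over the alphabet $\set{\texttt{1},\texttt{R},\texttt{C}}$; the output intervals are $1$-based, as in the algorithm). Applied to the diagonalized matrix of Example \ref{exmp:diag}, it reproduces the intervals tabulated below.
\begin{verbatim}
def interval_representation(A):
    # A: diagonalized biadjacency matrix; A[i][j] is '1', 'R' or 'C'.
    # Diagonal entries are all '1', so the max() below is never empty.
    n = len(A)
    rows = [(i + 1, max(j for j in range(i, n) if A[i][j] == '1') + 1)
            for i in range(n)]                                  # Step I
    cols = [(j + 1, max(i for i in range(j, n) if A[i][j] == '1') + 1)
            for j in range(n)]                                  # Step II
    return rows, cols

# Rows y1,y4,y5,y2,y3,y6 and columns x1,x2,x3,x4,x6,x5 of the worked example:
A = ["111RRR", "111111", "111111", "1CC11R", "CCC11R", "CCC111"]
print(interval_representation(A))
# rows    -> [(1,3), (2,6), (3,6), (4,5), (5,5), (6,6)]
# columns -> [(1,4), (2,3), (3,3), (4,6), (5,6), (6,6)]
\end{verbatim}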
Using the above algorithm in the case of the interval bigraph considered in Example \ref{exmp:diag}, we have
$$\begin{array}{c|cccccc|c}
\textrm{\small Vertices} & x_1 & x_2 & x_3 & x_4 & x_6 & x_5 & \textrm{\small Intervals}\\
\hline
y_1 & \textcolor{darkmagenta}{1} & 1 & 1 & R & R & R & [1,3]\\
y_4 & 1 & \textcolor{darkmagenta}{1} & 1 & 1 & 1 & 1 & [2,6]\\
y_5 & 1 & 1 & \textcolor{darkmagenta}{1} & 1 & 1 & 1 & [3,6]\\
y_2 & 1 & C & C & \textcolor{darkmagenta}{1} & 1 & R & [4,5]\\
y_3 & C & C & C & 1 & \textcolor{darkmagenta}{1} & R & [5,5]\\
y_6 & C & C & C & 1 & 1 & \textcolor{darkmagenta}{1} & [6,6]\\
\hline
\textrm{\small Intervals} & [1,4] & [2,3] & [3,3] & [4,6] & [5,6] & [6,6] &
\end{array}$$
Finally removing newly inserted rows and columns we get
$$\begin{array}{c|ccccc|c}
\textrm{\small Vertices} & x_1 & x_2 & x_3 & x_4 & x_5 & \textrm{\small Intervals}\\
\hline
y_1 & 1 & 1 & 1 & 0 & 0 & [1,3]\\
y_2 & 1 & 0 & 0 & 1 & 0 & [4,5]\\
y_3 & 0 & 0 & 0 & 1 & 0 & [5,5]\\
\hline
\textrm{\small Intervals} & [1,4] & [2,3] & [3,3] & [4,6] & [6,6] &
\end{array}$$
\begin{prop}
Algorithm \ref{alg:big1} provides an interval representation of an interval bigraph $B$.
\end{prop}
\vspace{-1.5em}\begin{pf*}{Proof.}
Let $B$ be an interval bigraph whose biadjacency matrix $A$ is in R-C partition form. Let us denote the vertex corresponding to the $i^\textrm{th}$ row [$j^\textrm{th}$ column] of $\tilde{A}$ by $u_i$ [respectively, $v_j$]. Now suppose $u_iv_j=1$\footnote{For convenience, an entry of a matrix corresponding to, say, the vertex $u_i$ in the row and the vertex $v_j$ in the column will be denoted by, simply, $u_iv_j$.}. If $i\leqslant j$, then by Algorithm \ref{alg:big1}, $c_j=j\leqslant b_i$ and so $a_i=i\leqslant j\leqslant b_i$. Thus $[a_i,b_i]\cap [c_j,d_j]$ contains $j$ and hence it is nonempty. Again if $i>j$, then by Algorithm \ref{alg:big1}, $a_i=i\leqslant d_j$. So $c_j=j<i=a_i\leqslant d_j$ which implies $[a_i,b_i]\cap [c_j,d_j]\neq\emptyset$ as it contains $i$. Next let $u_iv_j=R$. Since $\tilde{A}$ is diagonalized, $i<j$. But then by Algorithm \ref{alg:big1}, $b_i<j$ and so $a_i\leqslant b_i<j=c_j\leqslant d_j$, i.e., $[a_i,b_i]\cap [c_j,d_j]=\emptyset$. Similarly, if $u_iv_j=C$, then $i>j$ and by Algorithm \ref{alg:big1}, it follows that $c_j=j\leqslant d_j<i=a_i\leqslant b_i$, i.e., $[a_i,b_i]\cap [c_j,d_j]=\emptyset$. Therefore Algorithm \ref{alg:big1} provides an interval representation of $\tilde{B}$ and hence of $B$, as $B$ is an induced subgraph of $\tilde{B}$. \hfill $\qed$
\end{pf*}
\section{Probe interval graphs}
Let $G=(V,E)$ be an undirected graph with an independent set $N\subseteq V$. Let $P=V\smallsetminus N$. We construct a bipartite graph $B=(U_1,U_2,E_1)$ with partite sets $U_1=P$ and $U_2=V$, where two vertices $p\in U_1$ and $v\in U_2$ are adjacent in $B$ if and only if either $p=v$ (in $G$) or $pv\in E$ (i.e., $p$ and $v$ are adjacent in $G$). That is, $B$ is a bipartite graph whose biadjacency matrix is the submatrix $P\times V$ of the augmented adjacency matrix (cf.~page \pageref{'augmented'}) of $G$ consisting of all the columns, but only the rows corresponding to all the vertices of $P$. Henceforth we refer to this graph as $B=(P,V,E_1)$.
We note that if $G=(V,E)$ is a probe interval graph with probes $P$ and nonprobes $N$, then the bipartite graph $B=(P,V,E_1)$ is necessarily an interval bigraph by the same assignment of intervals to the vertices as in $G$. But the following example shows that the above necessary condition is not sufficient.
\begin{exmp}\label{exmp:at}
{\em Consider the following graph, say, $G$. $G$ is not\footnote{Note that $G$ is a probe interval graph with probes $\set{a,c,d}$ and nonprobes $\set{b,e,f}$.} a probe interval graph with probes $P=\set{a,b,c,d}$ and nonprobes $N=\set{e,f}$, as neither $G$ nor the graph $G+ef$ (the graph obtained by adding the edge $ef$ to $G$) is an interval graph.}
\vspace{1em}
\begin{figure}[h]
\begin{center}
\includegraphics*[scale=0.45]{abs7.eps}
\end{center}
\end{figure}
\vspace{1em}{\em But the biadjacency matrix of the bipartite graph $B=(P,V,E_1)$ has an R-C partition showing that $B$ is an interval bigraph.}
$$\begin{array}{c|cccccc}
\textrm{{\small vertices}} & a & b & c & d & e & f \\
\hline
a & 1 & 1 & R & R & R & R \\
b & 1 & 1 & 1 & 1 & R & R \\
c & C & 1 & 1 & 1 & 1 & R \\
d & C & 1 & 1 & 1 & C & 1
\end{array}$$
\end{exmp}
\begin{thm}\label{t:char1}
An undirected graph $G=(V,E)$ with an independent set $N\subseteq V$ is a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$ if and only if
\begin{enumerate}
\item[{\bf (1)}] the bigraph $B=(P,V,E_1)$ is an interval bigraph and
\item[{\bf (2)}] there exists an R-C partition of the biadjacency matrix of $B$ which does not contain the following submatrix for any $p,q\in P$ and $n\in N$:\\[-1em]
\begin{equation}\label{eq:cfg}
\begin{array}{cc|cccccc}
\multicolumn{3}{c}{} & p && q && n\\ \cline{3-8}
p &&& 1 && 1 && R\\
q &&& 1 && 1 && C
\end{array}
\end{equation}
\end{enumerate}
\end{thm}
\begin{pf*}{Proof.}
Let $G=(V,E)$ be a probe interval graph with probes $P$ and nonprobes $N$. Then, as we observed earlier, the bipartite graph $B=(P,V,E_1)$ is an interval bigraph with the same assignment of intervals to all the vertices as in $G$.\footnote{Note that in the interval bigraph $B$, the same interval is assigned to every probe vertex $p$, both as a member of $P$ and as a member of $V$.} So its biadjacency matrix has an R-C partition by arranging vertices in the non-decreasing order of left end points of the intervals corresponding to them. Suppose for some $p,q\in P$ and some $n\in N$, there is a submatrix of the form (\ref{eq:cfg}). Let the intervals corresponding to $p,q$ and $n$ be $[a,b],[c,d]$ and $[l,r]$ respectively. Since $pn=R$, we have $a\leqslant b<l\leqslant r$ and since $qn=C$, we get that $l\leqslant r<c\leqslant d$ which imply $a\leqslant b<c\leqslant d$. Then it follows that the intervals $[a,b]$ and $[c,d]$ are disjoint. But this contradicts the fact that $pq=qp=1$. Thus the conditions are necessary.
Conversely, let $G=(V,E)$ be an undirected graph with an independent set $N$ and $P=V\smallsetminus N$ such that $B=(P,V,E_1)$ is an interval bigraph and its biadjacency matrix, say, $M$ has an R-C partition which does not contain any submatrix of the form (\ref{eq:cfg}). We first show that it is possible to rearrange the columns of $M$ in such a way that the sequence of vertices of $P$ in the columns is the same as that in the rows, and the new matrix still has an R-C partition that does not contain a submatrix of the form (\ref{eq:cfg}).
Suppose in $M$, $p,q\in P$ appear in the following manner:
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & q && p \\ \cline{3-6}
p &&& && 1 \\
q &&& 1 &&
\end{array}$$
Since $M$ is in R-C partition form, $pq$ cannot be $R$ or $C$. So we must have
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & q && p \\ \cline{3-6}
p &&& 1 && 1 \\
q &&& 1 && 1
\end{array}$$
Now if the column of $p$ does not contain $R$, then no column of $M$ to the left of that of $p$ can contain $R$ either. Thus the column of $q$ can be placed just to the right of that of $p$, and the new matrix thus formed remains in R-C partition form. Also, since we did not change any $R$, $C$ or $1$ of the matrix $M$, the new matrix also does not contain a submatrix of the form (\ref{eq:cfg}). Again, if all rows of $M$ for which the column of $p$ contains $R$ also have $R$ in the column of $q$, then we have $R$ in all the columns in between them. Thus, in this case also, shifting the column of $q$ just to the right of that of $p$ will not disturb the R-C partition of the matrix and will not introduce any submatrix of the form (\ref{eq:cfg}).
So suppose there exists $r\in P$ such that $rp=R$ and $rq=1$. Now since $rp=R$ and $rr=1$, the column of $r$ appears to the left of that of $p$ in $M$. Also, since $pr=0$ (as $rp=0$) and $pp=1$, we have $pr=C$. But then we have
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & r && p \\ \cline{3-6}
r &&& 1 && R \\
p &&& C && 1 \\
q &&& 1 && 1
\end{array}$$
as $qr=rq=1$. Since this configuration is not allowed in an R-C partition, this case is not possible. Thus we can have the biadjacency matrix of $B$ in our desired form. Let us denote the matrix of this form by $M_1$ and the probe vertex corresponding to the $i^\textrm{th}$ row of $M_1$ by $p_i$.
Now, in the interval bigraph $B=(P,V,E_1)$, let the interval corresponding to each $p_i\in P$ be $[a_i^\prime,b_i^\prime]$ and that corresponding to each $p_i\in V$ be $[a_i^{\prime\prime},b_i^{\prime\prime}]$. Further, we assume that the interval representation of $B$ is obtained from $M_1$ by Algorithm \ref{alg:big1}. By Theorem 1 in \cite{SSW}, assigning the interval $[a_i^\prime + a_i^{\prime\prime},b_i^\prime + b_i^{\prime\prime}]\ =\ [a_i,b_i]$ (say) to each $p_i$, both as a member of $P$ and as a member of $V$, yields an interval representation for the submatrix $P\times P$.
We replace the interval assignment of each $n_j\in N$ by $[l_j,r_j]$, where
\begin{equation}\label{eq:rj}
r_j=\left\{%
\begin{array}{ll}
\min \Set{a_i}{p_in_j=C} - 1, & \textrm{ if there is a $C$ in the column of $n_j$}\\
\infty, & \textrm{ if there is no $C$ in the column of $n_j$\footnotemark .}
\end{array}\right .
\end{equation}
\footnotetext{Here the symbol $\infty$ stands for a sufficiently large positive integer which is greater than all the right end points assigned here.}
\begin{equation}\label{eq:lj}
l_j=\left\{%
\begin{array}{ll}
\max \Set{b_i}{p_in_j=R} + 1, & \textrm{ if there is an $R$ in the column of $n_j$}\\
0, & \textrm{ if there is no $R$ in the column of $n_j$.}
\end{array}\right .
\end{equation}
Let $n_k\in N$. First we ensure that $l_k\leqslant r_k$ indeed.
If the column of $n_k$ does not contain $R$ [$C$], then $l_k=0$ [respectively, $r_k=\infty$] and in either case $l_k\leqslant r_k$.
Next suppose the column of $n_k$ contains both $R$ and $C$. Let $p_in_k=R$ and $p_jn_k=C$. Since any $0$ below a $C$ in an R-C partition is $C$, we have $i<j$. Also due to our hypothesis we must have the following configuration:
$$\begin{array}{cc|cccccc}
\multicolumn{3}{c}{} & p_i && p_j && n_k\\ \cline{3-8}
p_i &&& 1 && 0 && R\\
p_j &&& 0 && 1 && C
\end{array}$$
Further since $p_jp_j=1$, we have $p_ip_j=R$ and $p_jp_i=C$. But then $b_i^\prime < a_j^{\prime\prime}$ and $b_i^{\prime\prime} < a_j^\prime$ and so $b_i=b_i^\prime + b_i^{\prime\prime} < a_j^\prime + a_j^{\prime\prime} = a_j$, which is true for any $i,j$ for which $p_in_k=R$ and $p_jn_k=C$. Thus
$$\max \Set{b_i}{p_in_k=R}\ <\ \min \Set{a_j}{p_jn_k=C}.$$
Then by (\ref{eq:rj}) and (\ref{eq:lj}), we have $l_k<r_k$, as required.
Now we show that the new interval assignments agree with the given matrix $M_1$, i.e., for any $p_i\in P$ and $n_j\in N$, if $p_in_j=0$, then the intervals $[a_i,b_i]$ and $[l_j,r_j]$ do not intersect and if $p_in_j=1$, then the intervals $[a_i,b_i]$ and $[l_j,r_j]$ must intersect. That $[l_j,r_j]$ is disjoint from $[a_i,b_i]$ when $p_in_j=0$ (i.e., $R$ or $C$) is clear from the construction of (\ref{eq:rj}) and (\ref{eq:lj}).
Next suppose $p_kn_j=1$ for some $p_k\in P$ and $n_j\in N$. We show that $l_j\leqslant b_k$ and $a_k\leqslant r_j$. If there is no $R$ in the column of $n_j$, then $l_j=0<b_k$. Suppose $p_in_j=R$ for some probe $p_i\in P$. Since $p_kn_j=1$, $b_i^\prime < b_k^\prime$. Also, if there is no $C$ in the column of $p_k$, then $b_k^{\prime\prime}$ is greater than or equal to all right end points of vertices in the column of the matrix $M_1$ and hence $b_i^{\prime\prime}\leqslant b_k^{\prime\prime}$. If there is a $C$ in the column of $p_k$, then it is below $p_kp_k$ (which is $1$). Suppose $p_tp_k=C$ for some $t>k$. Then $p_kp_t=R$ as $p_tp_t=1$ and $t>k$. So the column of $p_t$ appears to the right of that of $n_j$ as $p_kn_j=1$. But then $p_ip_t=R$ as $p_in_j=R$. Also, since $p_ip_i=1$, the column of $p_i$ appears to the left of that of $n_j$ and hence also to the left of the column of $p_t$. Then $p_tp_i=C$ as $p_tp_t=1$. So we have
\begin{equation}\label{eq:kit}
p_tp_i=C\ \textrm{ whenever }\ p_tp_k=C.
\end{equation}
Now if $i<k$, then it follows from (\ref{eq:kit}) that $b_i^{\prime\prime}\leqslant b_k^{\prime\prime}$. If $i>k$, then $p_kp_i=1$ as $p_kn_j=1$ and $p_ip_i=1$. So $p_ip_k=1$. Then (\ref{eq:kit}) again implies $b_i^{\prime\prime}\leqslant b_k^{\prime\prime}$. Therefore $b_i=b_i^\prime + b_i^{\prime\prime} < b_k^\prime + b_k^{\prime\prime} =b_k$. Hence $\max \Set{b_i}{p_in_j=R} < b_k$ and so $l_j\leqslant b_k$, as required.
\begin{center}
$\begin{array}{cc|cccccccc}
\multicolumn{3}{c}{} & p_i && p_k && n_j && p_t\\ \cline{3-10}
p_i &&& 1 && && R && R\\
p_k &&& && 1 && 1 && R\\
p_t &&& C && C && && 1\\
\end{array}$ \hspace{1in}
$\begin{array}{cc|cccccccc}
\multicolumn{3}{c}{} & p_k && p_i && n_j && p_t\\ \cline{3-10}
p_k &&& 1 && 1 && 1 && R\\
p_i &&& 1 && 1 && R && R\\
p_t &&& C && C && && 1\\
\end{array}$
\end{center}
Again if there is no $C$ in the column of $n_j$, then $r_j=\infty >a_k$. Suppose $p_in_j=C$ for some $p_i\in P$. Then $i>k$ as $p_kn_j=1$. Then $a_k^\prime < a_i^\prime$. Also since probe vertices appear in the same sequence in the columns of $M_1$ as in the rows of it, we have $a_k^{\prime\prime} \leqslant a_i^{\prime\prime}$. Thus $a_k=a_k^\prime +a_k^{\prime\prime} < a_i^\prime +a_i^{\prime\prime}=a_i$. This implies $a_k<\min\Set{a_i}{p_in_j=C}$ and hence $a_k\leqslant r_j$. \hfill $\qed$
\end{pf*}
Now we proceed to another characterization of the adjacency matrix of a probe interval graph. Let $B=(X,Y,E)$ be a bipartite graph. For each $x\in X$, let $n(x)=\Set{y\in Y}{xy\in E}$ be the set of neighbors of $x$. A {\em Ferrers bigraph} \cite{R} is a bipartite graph $B=(X,Y,E)$ in which the sets of neighbors of vertices of $X$ are linearly ordered by set inclusion,\footnote{The similar condition for vertices of $Y$ is equivalent to this one, i.e., from this it follows that the sets of neighbors of vertices of $Y$ are also linearly ordered by set inclusion.} i.e., there is a linear ordering of the vertices of $X=\set{x_1,x_2,\ldots ,x_n}$ (say) such that $n(x_i)\subseteq n(x_j)$ for all $i\leqslant j$. Another equivalent condition \cite{R} for a bipartite graph $B$ to be a Ferrers bigraph is that the biadjacency matrix of $B$ does not contain any $2\times2$ permutation matrix:
$$\left(%
\begin{array}{cc}
1 & 0\\[-0.25em]
0 & 1
\end{array}
\right) \hspace{0.5in}
\textrm{ or } \hspace{0.5in}\left(%
\begin{array}{cc}
0 & 1\\[-0.25em]
1 & 0
\end{array}
\right).$$
It is well known that every bipartite graph is an intersection of a finite number of Ferrers bigraphs and the minimum such number is called its {\em Ferrers dimension}. The bipartite graphs of Ferrers dimension at most 2 were characterized by Cogis~\cite{C}. He called every $2\times2$ permutation matrix in a binary matrix a {\em couple} and defined an undirected graph $H(B)$, the graph {\em associated to a bipartite graph} $B$ as follows. The vertices of $H(B)$ correspond to the positions with entry $0$ in the biadjacency matrix, say, $A$ of $B$ and two such vertices are adjacent in $H(B)$ if and only if the corresponding $0$'s form a couple in the matrix $A$. Cogis proved that a bipartite graph $B$ is of Ferrers dimension at most 2 if and only if $H(B)$ is bipartite. In particular, a bipartite graph is an interval bigraph if and only if it is the intersection of two Ferrers bigraphs whose union is complete \cite{SDRW}. Moreover, when $B$ is an interval bigraph, any R-C partition of its biadjacency matrix provides a proper $2$-coloring (by colors $R$ and $C$) of vertices of $H(B)$. Thus it is important to note that, in this case, no two $R$'s [$C$'s] are in the same couple in the biadjacency matrix of $B$.
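As an algorithmic aside (our sketch, not taken from \cite{C}), Cogis' criterion can be tested directly from a $(0,1)$ biadjacency matrix: build $H(B)$ on the zero positions and attempt a $2$-coloring by breadth-first search. The quadratic scan over pairs of zeros is adequate for small matrices.
\begin{verbatim}
from collections import deque

def ferrers_dimension_at_most_2(A):
    # A: 0/1 biadjacency matrix (list of lists).  Returns (True, colour)
    # if the couple graph H(B) is bipartite, i.e. B has Ferrers dimension
    # at most 2 (Cogis); otherwise (False, None).
    zeros = [(i, j) for i, row in enumerate(A)
                    for j, v in enumerate(row) if v == 0]
    def coupled(p, q):          # do two zeros form a 2x2 permutation matrix?
        (i, j), (k, l) = p, q
        return i != k and j != l and A[i][l] == 1 and A[k][j] == 1
    colour = {}
    for s in zeros:             # BFS over each component of H(B)
        if s in colour:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in zeros:
                if coupled(u, v):
                    if v not in colour:
                        colour[v] = 1 - colour[u]
                        queue.append(v)
                    elif colour[v] == colour[u]:
                        return False, None    # odd cycle in H(B)
    return True, colour
\end{verbatim}
Note that this checks only that the Ferrers dimension is at most $2$; for $B$ to be an interval bigraph, the union of the two Ferrers bigraphs must in addition be complete, as recalled above.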
Let $G=(V,E)$ be an undirected graph having $N\ (\subseteq V)$ as an independent set of vertices. Let $B_1$ be the bipartite graph whose biadjacency matrix is the augmented adjacency matrix (cf.~page \pageref{'augmented'}) of $G$. Now from the graph $H(B_1)$ delete the vertices corresponding to $0$'s in the submatrix $N\times N$ of the biadjacency matrix of $B_1$. Call it $H_1(B_1)$, the {\em reduced associated graph} of $B_1$.
\begin{thm}\label{t:char2}
Let $G=(V,E)$ be an undirected graph with an independent set $N\subseteq V$ and $B_1$ be the bipartite graph whose biadjacency matrix is the augmented adjacency matrix of $G$. Then $G$ is a probe interval graph with probes $P=V\smallsetminus N$ and nonprobes $N$ if and only if
\begin{enumerate}
\item[{\bf (1)}] the bigraph $B=(P,V,E_1)$ is an interval bigraph and
\item[{\bf (2)}] the graph $H_1(B_1)$ is a bipartite graph and there is a bipartition of $H_1(B_1)$ that yields an R-C partition of $B$.
\end{enumerate}
\end{thm}
\begin{pf*}{Proof.}
Let $G$ be a probe interval graph with probes $P$ and nonprobes $N$. Then by Theorem \ref{t:char1}, $B=(P,V,E_1)$ is an interval bigraph and there exists an R-C partition of $B$ which does not contain any submatrix of the form (\ref{eq:cfg}). We note that the graph $H_1(B_1)$ contains the graph $H(B)$, and the only vertices of $H_1(B_1)$ which are not in $H(B)$ are the zeros of the biadjacency matrix of $B_1$ at the positions $np$ for some $n\in N$ and $p\in P$ (i.e., the zeros of the submatrix $N\times P$). Now since $B$ is an interval bigraph, $H(B)$ is bipartite. Moreover, the above R-C partition provides a proper $2$-coloring of the vertices of $H(B)$ (by colors $R$ and $C$). Let us extend this coloring of the vertices of $H(B)$ to the vertices of $H_1(B_1)$ as follows:\\[-1em]
\begin{equation}\label{eq:np}
np=\left\{%
\begin{array}{l}
R, \qquad \textrm{ if } pn=C\\
C, \qquad \textrm{ if } pn=R.
\end{array}\right .
\end{equation}
Now if this assignment of colors provides a proper $2$-coloring of the vertices of $H_1(B_1)$, then we have nothing to prove. If not, then there exist couples of the forms:
$$\left(%
\begin{array}{cc}
1 & R\\[-0.25em]
R & 1
\end{array}
\right) \hspace{0.5in}
\textrm{ or } \hspace{0.5in}\left(%
\begin{array}{cc}
1 & C\\[-0.25em]
C & 1
\end{array}
\right)$$
in the biadjacency matrix of $B_1$, where none of the zeros ($R$ or $C$) belongs to the submatrix $N\times N$. Also, since the vertices of $H(B)$ are properly $2$-colored (by $R$ or $C$), at least one of the two rows of these couples must correspond to a nonprobe (i.e., these couples cannot lie fully in the submatrix $P\times V$). So the following three cases may arise for couples of the first type (containing $R$'s):
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & p && q \\ \cline{3-6}
m &&& 1 && R \\
n &&& R && 1
\end{array}\hspace{0.5in}
\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & p && q \\ \cline{3-6}
r &&& 1 && R \\
n &&& R && 1
\end{array}\hspace{0.5in}
\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & p && n \\ \cline{3-6}
q &&& 1 && R \\
n &&& R && 1
\end{array}$$
where $p,q,r\in P$ and $m,n\in N$. The first case implies the existence of
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & m && n \\ \cline{3-6}
p &&& 1 && C \\
q &&& C && 1
\end{array}$$
in the biadjacency matrix, say, $M$ of $B$, which is not possible. The second one again forces the following in $M$:
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & r && n \\ \cline{3-6}
p &&& 1 && C \\
q &&& X && 1
\end{array}$$
where $X=R$ or $C$. Clearly $X\neq C$, so suppose $X=R$. But then we have $qr=R=rq$ and consequently the couple
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & q && r \\ \cline{3-6}
q &&& 1 && R \\
r &&& R && 1
\end{array}$$
in $M$, which is a contradiction. So finally we consider the last one. In this case we get the submatrix
$$\begin{array}{cc|cccccc}
\multicolumn{3}{c}{} & q && p && n \\ \cline{3-8}
q &&& 1 && 1 && R \\
p &&& 1 && 1 && C
\end{array}$$
in $M$, which is of the form (\ref{eq:cfg}) and so is forbidden, as we mentioned at the beginning of the proof. The proof for the couples of the other type (containing $C$'s) is similar and hence omitted. Therefore $H_1(B_1)$ is bipartite and there is a bipartition of it which yields an R-C partition of $B$.
Conversely, let the conditions (1) and (2) be satisfied. So the bigraph $B=(P,V,E_1)$ is an interval bigraph, the graph $H_1(B_1)$ is bipartite and there is a bipartition of $H_1(B_1)$ which gives an R-C partition of $B$. We show that such an R-C partition of $B$ cannot contain any submatrix of the form (\ref{eq:cfg}). Then it will follow that $G$ is a probe interval graph by Theorem \ref{t:char1}.
Now if the R-C partition of $B$ has a submatrix of the form (\ref{eq:cfg}), then we have the following submatrix:
$$\begin{array}{cc|cccccccc}
\multicolumn{3}{c}{} & p &&& q &&& n \\ \cline{3-10}
p &&& 1 &&& 1 &&& R \\
q &&& 1 &&& 1 &&& C \\
n &&& X &&& Y &&& 1
\end{array}$$
in the biadjacency matrix of $B_1$, where $X,Y\in\set{R,C}$.\footnote{We denote all the vertices of one partite set of $H_1(B_1)$ by $R$ and those of the other by $C$, so that this yields the R-C partition of $B$.} But $X$ can be neither $R$ nor $C$, as we have the following couples in the above submatrix:
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & p && n \\ \cline{3-6}
p &&& 1 && R \\
n &&& X && 1
\end{array}\qquad \textrm{ and }\qquad \begin{array}{cc|cccc}
\multicolumn{3}{c}{} & q && n \\ \cline{3-6}
q &&& 1 && C \\
n &&& X && 1
\end{array}$$
This contradiction proves that the above R-C partition cannot contain any submatrix of the form (\ref{eq:cfg}), as required.\hfill $\qed$
\end{pf*}
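As a usage note (again ours and purely illustrative), the necessary part of condition {\bf (2)} of Theorem \ref{t:char2} can be tested with the routines of the previous sketch. We assume here that the augmented adjacency matrix is the adjacency matrix of $G$ with every diagonal entry set to $1$ and that the probe vertices are indexed first; whether some bipartition of $H_1(B_1)$ actually yields an R-C partition of $B$, as well as condition {\bf (1)}, must still be checked separately.
\begin{verbatim}
def reduced_associated_graph(A_aug, num_probes):
    # delete the vertices of H(B_1) lying in the N x N submatrix
    adj = associated_graph(A_aug)
    keep = {z for z in adj if z[0] < num_probes or z[1] < num_probes}
    return {z: [w for w in adj[z] if w in keep] for z in keep}

def condition_two_necessary(A_aug, num_probes):
    # H_1(B_1) must be bipartite for G to be a probe interval graph
    return two_coloring(reduced_associated_graph(A_aug, num_probes)) is not None
\end{verbatim}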
Finally, in the following we show that if we add a loop at every probe vertex of a probe interval graph, then the Ferrers dimension of the corresponding symmetric bipartite graph is at most $3$. We know that an interval bigraph is of Ferrers dimension at most $2$, but the converse is not true. Below we show that, for the bipartite graph corresponding to the augmented adjacency matrix of an undirected graph, having Ferrers dimension at most $2$ is nevertheless sufficient (as well as necessary) for the graph to be an interval graph.
\begin{prop}
An undirected graph $G=(V,E)$ is an interval graph if and only if the Ferrers dimension of the corresponding bipartite graph $B$, whose biadjacency matrix is the augmented adjacency matrix of $G$, is at most $2$.
\end{prop}
\begin{pf*}{Proof.}
From the quasi-linear ones property of the augmented adjacency matrix of an interval graph it is clear that the $0$'s in the upper triangle and those in the lower triangle form two Ferrers bigraphs whose union is $\overline{G}$, the complement of $G$. This proves the direct part.
Conversely, Cogis \cite{C} proved that a bipartite graph $B_1$ is of Ferrers dimension at most $2$ if and only if its associated graph $H(B_1)$ is bipartite. In fact he proved that if $H(B_1)$ has nontrivial components $H_1, H_2,\ldots ,H_k$ and has $I=\set{I_1,I_2,\ldots ,I_m}$ as its set of isolated vertices, then there is a $2$-coloring $(R_i,C_i)$ of each $H_i\ (i=1,2,\ldots ,k)$ so that $R=R_1\cup R_2\cup\cdots\cup R_k\cup I$ and $C=C_1\cup C_2\cup\cdots\cup C_k\cup I$ are two Ferrers bigraphs whose union is the bipartite complement of $B_1$. Clearly, if there is no isolated vertex, i.e., $I=\emptyset$, then this complement is decomposed into disjoint Ferrers bigraphs.
Let $F_1$ and $F_2$ be two Ferrers bigraphs whose union is $\overline{G}$, i.e., $\overline{G}=F_1\cup F_2$. Let $A$ be the augmented adjacency matrix of $G$ and so the biadjacency matrix of $B$. Let $uv=0$. Then $vu=0$ and the couple
$$\begin{array}{cc|cccc}
\multicolumn{3}{c}{} & u && v\\ \cline{3-6}
u &&& 1 && 0 \\
v &&& 0 && 1 \\
\end{array}$$
in the matrix $A$ shows that the two $0$'s at positions $uv$ and $vu$ are adjacent in $H(B)$, and hence they lie in different Ferrers bigraphs; say $uv\in F_1$, so that $vu\in F_2$. Thus every $0$ in the matrix $A$ belongs to a non-trivial component of $H(B)$, which implies that $H(B)$ has no isolated vertex. Hence, as noted earlier, the two Ferrers bigraphs $F_1$ and $F_2$ are disjoint. So $B$ is an interval bigraph and consequently, by Theorem 1 of \cite{SSW}, $G$ is an interval graph.\hfill $\qed$
\end{pf*}
Let $G$ be a probe interval graph. Let us add a loop at every probe vertex of $G$ and denote the graph thus obtained by $\widehat{G}$.
\begin{cor}
Let $G=(V,E)$ be a probe interval graph. Then the Ferrers dimension of the bipartite graph, whose biadjacency matrix is the adjacency matrix of $\widehat{G}$, is at most $3$.
\end{cor}
\begin{pf*}{Proof.}
Let $G$ be a probe interval graph with probes $P$ and nonprobes $N$. Let $G_1=(V,E_1)$ be the interval graph with the same assignment of intervals to all the vertices as in $G$. Let $B$ be the bipartite graph whose biadjacency matrix is the adjacency matrix of $\widehat{G}$\footnote{Note that, in the adjacency matrix of $\widehat{G}$, $pp=1$ for each probe vertex $p$ of $G$.} and $B_1$ be the bipartite graph whose biadjacency matrix is the augmented adjacency matrix of $G_1$. Then by the above proposition, $B_1$ is of Ferrers dimension at most $2$ and so $B_1=F_1\cap F_2$ for some Ferrers bigraphs $F_1$ and $F_2$ such that $F_1\cup F_2$ is complete. Also the bipartite graph whose biadjacency matrix is the following matrix is a Ferrers bigraph, say, $F_3$.
$$\begin{array}{cc|ccc|ccc|c}
\multicolumn{3}{c}{} & P & \multicolumn{2}{c}{} &N& \multicolumn{2}{c}{}\\ \cline{3-8}
P &&& \mathbf{1} &&& \mathbf{1} &&\\ \cline{3-8}
N &&& \mathbf{1} &&& \mathbf{0} &&\\ \cline{3-8}
\end{array}$$
Thus we have $B=F_1\cap F_2\cap F_3$, as required.\hfill $\qed$
\end{pf*}
\begin{ack}
The authors are grateful to the learned referees for their meticulous reading and valuable suggestions which have definitely improved the paper.
\end{ack}
\section{Introduction}
It is a well-known fact (see, e.g., \cite{Kuchment}) that the spectrum of
periodic elliptic self-adjoint differential operators has band structure,
i.e., it is a locally finite union of compact intervals called \textit{bands}. In general the bands may overlap; when they do not, we have a \textit{gap} in the spectrum, that is, a bounded open interval having empty intersection with the spectrum but with endpoints belonging to it.
The presence of gaps in the spectrum is not guaranteed. For instance, the spectrum of the Laplace operator in $L^2(\mathbb{R}^n)$ has no gaps: $\sigma(-\Delta_{\mathbb{R}^n})=[0,\infty)$. Therefore an interesting question arises: to construct examples of periodic operators with non-void spectral gaps. This question is motivated by various applications, since the presence of gaps is important for the description of wave processes governed by the differential operators
under consideration: if the wave frequency belongs to a gap, then the corresponding wave cannot propagate in the medium. This feature is the main requirement for so-called photonic crystals, which are materials with a periodic dielectric structure that have been extensively investigated in recent years.
The problem of the existence of spectral gaps for various periodic operators has been actively studied since the mid-1990s. We refer to the overview \cite{HempelPost}, where one can find many examples and references on this topic.
In recent years many works have appeared in which the problem of opening spectral gaps for operators posed in unbounded domains with a waveguide geometry (strips, tubes, graph-like domains, etc.) is studied; see, e.g., \cite{BNR,Borisov2,Cardone1,Cardone2,EP,Nazarov1+,Nazarov2,Pankrashkin,Yoshi}.
The study of physical processes (e.g., quantum particle motion) in such domains is of great physical and mathematical interest because of the rapid progress in microelectronics during the last decade. We refer to the recent monograph \cite{EK} concerning spectral properties of quantum waveguides.
The simplest way to open up a gap is either to perturb a straight cylinder by a periodic nucleation of small voids (or by making other ``small'' perturbations) \cite{BNR,Nazarov2} or to consider a waveguide consisting of an array of identical compact domains connected by narrow ``bridges'' \cite{Nazarov1+,Pankrashkin}. In the first case one has small gaps separating large bands; in the second case one gets large gaps and small bands.
In the current paper we present another example of a Neumann waveguide with a gap in the spectrum; the geometry of this waveguide differs essentially from previously studied examples. We are motivated by our recent work \cite{CarKhrab}, where the spectrum of a Neumann problem was studied in a \textit{bounded} domain perturbed by many identical protuberances, each of them consisting of two parts, a ``room'' and a ``passage'' (in the simplest case, the ``room'' is a small square and the ``passage'' is a narrow rectangle connecting the ``room'' with the main domain). Peculiar spectral properties of domains perturbed in this way were observed for the first time by R.~Courant and D.~Hilbert \cite{CH}. Domains with ``room-and-passage''-like geometry are widely used to construct examples illustrating various phenomena in the theory of Sobolev spaces and in spectral theory (see, for example, \cite{Fraenkel,HSS}).
Our goal is to show that by perturbing a straight strip with a periodic array of ``room-and-passage'' protuberances one may open a spectral gap. Namely, we consider a strip of width $L>0$ and perturb it by
a family of small identical protuberances, ${\varepsilon}$-periodically distributed along the strip axis. Here ${\varepsilon}>0$ is a small parameter. Each protuberance has ``room-and-passage'' geometry. We denote the obtained domain by $\Omega^\varepsilon$ {(see Figure \ref{figure})}. In $\Omega^\varepsilon$ we consider the operator $\mathcal{A}^\varepsilon=-{\rho^\varepsilon}\Delta_{\Omega^\varepsilon}$, where $\Delta_{\Omega^\varepsilon}$ is the Neumann Laplacian in $L^2(\Omega^\varepsilon)$. The weight $\rho^\varepsilon$ is equal to $1$ everywhere except the union of the ``rooms'', where it is equal to the constant $\varrho^\varepsilon>0$.
\begin{figure}[t]
\begin{picture}(300,85)
\scalebox{0.45}{\includegraphics{waveguide.jpg}}
\put(-40, 20){$\Omega$}
\put(-145, 53){$B_i^\varepsilon$}
\put(-133,56){\vector(1,1){10}}
\put(-104, 53){$T_i^\varepsilon$}
\put(-105,55){\vector(-1,0){15}}
\put(-330,30){\vector(0,1){20}}
\put(-330,20){\vector(0,-1){20}}
\put(-333, 21){$L$}
\put(-297,45){\vector(1,0){14}}
\put(-304,45){\vector(-1,0){14}}
\put(-303, 42){${\varepsilon}$}
\end{picture}
\caption{\label{figure}The waveguide $\Omega^\varepsilon$}
\end{figure}
Our main result: we will prove that under suitable assumptions on $L$, $\varrho^\varepsilon$ and the sizes of the ``rooms'' and ``passages'' the spectrum of $\mathcal{A}^\varepsilon$ converges
to the spectrum of a certain spectral problem on the initial strip containing the spectral parameter in the boundary conditions. Its spectrum has the form $[0,\infty)\setminus (\alpha,\beta)$, where
$(\alpha,\beta)$ is a non-empty bounded interval. This, in particular, implies the existence of
at least one gap in the spectrum of $\mathcal{A}^\varepsilon$ for small enough ${\varepsilon}$.
\section{Setting of the problem and main result}
In what follows by $x$ and $\mathbf{x}=(x_1,x_2)$ we denote the Cartesian coordinates in $\mathbb{R}$ and $\mathbb{R}^2$, correspondingly.
By ${\varepsilon}$ we denote a small parameter. To simplify the proof of the main theorem we suppose that ${\varepsilon}$ takes values in the discrete set $\mathcal{E}=\left\{{\varepsilon}:\ {\varepsilon}^{-1}\in\mathbb{N}\right\}$.
The general case needs slight modifications.
We consider the unbounded strip $\Omega\subset\mathbb{R}^2$ of the width $L>0$:
\begin{align*}
\Omega=\left\{\mathbf{x}\in\mathbb{R}^2: -L<x_2<0\right\}.
\end{align*}
By $\Gamma$ we denote its upper boundary: $\Gamma=\left\{\mathbf{x}\in\mathbb{R}^2: x_2=0\right\}.$
Let $b^\varepsilon$, $d^\varepsilon$, $h^\varepsilon$ be positive constants,
$B$ be an open bounded domain in $\mathbb{R}^{2}$ having Lipschitz boundary and satisfying
\begin{gather}\label{ass11}
B\subset \left\{\textbf{x}\in\mathbb{R}^2:\ x_1\in\left(-{1/2},{1/2}\right),\ x_2>0\right\},\\\label{ass12}
\exists R\in \left(0,1\right):\ \left\{\textbf{x}\in\mathbb{R}^2:\ x_1\in\left(-{R/2},{R/2}\right),\ x_2=0\right\}\subset \partial B,\\
\label{ass14}
R^{-1}d^\varepsilon\leq b^\varepsilon\leq {\varepsilon},\\
\label{ass15}
h^\varepsilon\to 0\text{ as }{\varepsilon}\to 0.
\end{gather}
For $i\in\mathbb{Z}$ we set:
\begin{itemize}
\item[] $B_i^\varepsilon=\left\{\textbf{x}\in\mathbb{R}^2:\ \displaystyle{1\over b^\varepsilon}\left(\textbf{x}- \widetilde{\textbf{x}}^{i,{\varepsilon}}\right)\in B,\text{ where } \widetilde{\textbf{x}}^{i,{\varepsilon}}=(i{\varepsilon}+{\varepsilon}/2,h^\varepsilon)\right\}$\quad (``room''),
\item[] $T_i^\varepsilon=\left\{\textbf{x}\in\mathbb{R}^2:\ \displaystyle |x_1-i{\varepsilon}-{\varepsilon}/2|<{d^\varepsilon\over 2},\ 0\leq x_2\leq h^\varepsilon\right\}$ \quad (``passage'').
\end{itemize}
Conditions \eqref{ass11}-\eqref{ass14} imply that the ``rooms'' are pairwise disjoint and guarantee correct gluing of the $i$-th ``room'' and the $i$-th ``passage'' (the upper face of $T_i^\varepsilon$ is contained in $\partial B_i^\varepsilon$). Moreover, the distance between the neighbouring ``passages'' is not too small, namely
for $i\not=j$ one has $\mathrm{dist}( {T_i^\varepsilon,T_j^\varepsilon})\geq {\varepsilon}-
d^\varepsilon\geq {\varepsilon}\left (1-R\right)$.
Attaching the "rooms" and "passages" to $\Omega$ we obtain the perturbed
domain
$$\Omega^\varepsilon=\Omega\cup \left({\bigcup\limits_{i\in \mathbb{Z}}\left(T_i^\varepsilon\cup B_i^\varepsilon\right)}\right).$$
Let us define the operator $\mathcal{A}^\varepsilon$ precisely. We denote by ${H}^\varepsilon$ the Hilbert space of functions from $L^2(\Omega^\varepsilon)$ endowed with the scalar product
\begin{gather}\label{He}
(u,v)_{{H}^\varepsilon}=\int_{\Omega^\varepsilon} u(\textbf{x})\overline{v(\textbf{x})} (\rho ^\varepsilon(\textbf{x}))^{-1} \d \textbf{x},
\end{gather}
where the function $\rho^\varepsilon(\textbf{x})$ is defined as follows:
$$\rho^\varepsilon(\textbf{x})=\begin{cases}1,&\textbf{x}\in\Omega\cup\left(\bigcup\limits_{i\in\mathbb{Z}}T_i^\varepsilon\right),\\\varrho^\varepsilon,& \textbf{x}\in \bigcup\limits_{i\in\mathbb{Z}}B_i^\varepsilon,\end{cases}\quad \varrho^\varepsilon>0\text{ is a constant.}
$$
By $\mathfrak{a}^\varepsilon$ we denote the sesquilinear form in ${H}^\varepsilon$ defined by
\begin{gather}\label{ae}
\mathfrak{a}^\varepsilon[u,v]=\int_{\Omega^\varepsilon} \nabla u\cdot\overline{\nabla v} \d \textbf{x},\quad \mathrm{dom}(\mathfrak{a}^\varepsilon)=H^1(\Omega^\varepsilon).
\end{gather}
The form $\mathfrak{a}^\varepsilon$ is densely defined, closed, positive and symmetric. We denote by
$\mathcal{A}^\varepsilon$ the operator associated with this form, i.e.
\begin{gather*}
(\mathcal{A}^\varepsilon u,v)_{{H}^\varepsilon}=
\mathfrak{a}^\varepsilon[u,v],\quad\forall u\in
\mathrm{dom}(\mathcal{A}^\varepsilon),\ \forall v\in
\mathrm{dom}(\mathfrak{a}^\varepsilon).
\end{gather*}
In other words, the operator $\mathcal{A}^\varepsilon$ is defined by the operation $-{\rho^\varepsilon}\Delta$ in $\Omega^\varepsilon$ and
the Neumann boundary conditions on $\partial\Omega^\varepsilon$.
The goal of this work is to describe the behaviour of the spectrum $\sigma(\mathcal{A}^\varepsilon)$ as ${\varepsilon}\to 0$
under the assumption that the following limits exist and are positive:
\begin{gather}\label{qere+}
\a:=\lim\limits_{{\varepsilon}\to 0}{d^\varepsilon \varrho^\varepsilon\over h^\varepsilon (b^\varepsilon)^2 |B| },\quad r:=\lim\limits_{{\varepsilon}\to 0}{(b^\varepsilon)^2 |B|\over {\varepsilon}\varrho^\varepsilon },\quad \a>0,\ r>0.
\end{gather}
Also it is supposed that $d^\varepsilon$ tends to zero not too fast, namely
$\lim\limits_{{\varepsilon}\to 0}{\varepsilon} \ln d^\varepsilon= 0.$ The meaning of this condition and of the constants $\a$ and $r$ is explained in \cite{CarKhrab}.
Now, we introduce the limit operator. By ${H}$ we denote the Hilbert space of functions from $L^2(\Omega)\oplus L^2(\Gamma)$ endowed with the scalar product
\begin{gather}\label{Hlim}
(U,V)_{{H}}=\int_{\Omega}u_1(\textbf{x})\overline{v_1(\textbf{x})} \d \textbf{x}+\int_\Gamma u_2(x)\overline{v_2(x)} r \d x,\ U=\left(u_1,u_2\right),V=(v_1,v_2).
\end{gather}
We introduce the sesquilinear form $\mathfrak{a}$ in ${H}$ by
\begin{align}\label{alim}
\mathfrak{a}[U,V]=\int_\Omega \nabla u_1\cdot \overline{\nabla v_1}\d
\textbf{x}+\int_\Gamma \a r\left(u_1|_\Gamma-u_2\right)\overline{\left(v_1|_\Gamma-v_2\right)}\d x
\end{align}
with $\mathrm{dom}(\mathfrak{a} )=H^1(\Omega)\oplus L^2(\Gamma)$.
Here by $u|_\Gamma$ we denote the trace of $u$ on $\Gamma$.
We denote by $\mathcal{A}$ the self-adjoint operator associated with this form.
Formally, the eigenvalue equation $\mathcal{A} U=\lambda U$ can be written as follows:
\begin{gather*}
\begin{cases}
-\Delta u_1=\lambda u_1&\text{ in }\Omega,\\
\displaystyle{\partial u_1\over\partial n}=\a r(u_2-u_1)&\text{ on }\Gamma,\\
\a (u_2-u_1)=\lambda u_2&\text{ on }\Gamma,\\
\displaystyle{\partial u_1\over\partial n}=0&\text{ on }\partial\Omega\setminus\Gamma.
\end{cases}
\end{gather*}
Here $n$ is the outward-pointing unit normal.
\begin{remark}
Spectral properties of operators $\mathcal{A}$ defined in this way were investigated in \cite{CarKhrab,KhrabPlum}. In \cite{CarKhrab} the case of a bounded domain $\Omega$ was considered, with $\Gamma$ a flat subset of $\partial\Omega$. In this case the discrete spectrum of $\mathcal{A}$ consists of two sequences; one sequence accumulates at $\infty$, while the other one converges to $\a$, which is the only point of the essential spectrum.
In \cite{KhrabPlum} one considered\footnote{In fact, in \cite{KhrabPlum} the Dirichlet conditions on $\partial\Omega$ are prescribed, but similar results can be easily obtained for the Neumann conditions too -- cf. \cite[Remark 3.2]{KhrabPlum}.}, in particular, the case when $\Omega$ is a straight unbounded strip and the line $\Gamma$ is parallel to its axis and divides $\Omega$ into two unbounded strips. In this case the spectrum of $\mathcal{A}$ turns out to be the union of the interval $[0,\a]$ and the ray $[\b,\infty)$, where $\b>\a$ provided $\a<\left(\pi\over L-L_\Gamma\right)^2$. Here $L$ is the strip width and $L_\Gamma\in (0,L)$ is the distance from $\Gamma$ to $\partial\Omega$.
\end{remark}
Using the same arguments as in \cite{KhrabPlum} we arrive at the following formula for the spectrum of \textit{our} operator $\mathcal{A}$:
\begin{gather}\label{aqa}
\sigma(\mathcal{A})=
\begin{cases}
[0,\a]\cup[\b,\infty)&\text{ if }\a<\left({\pi\over 2L}\right)^2,\\
[0,\infty)&\text{otherwise},
\end{cases}
\end{gather}
where the number $\b$ is defined as follows. We denote by $\b(\mu)$ (here $\mu\in\mathbb{R}$) the smallest eigenvalue of the problem
\begin{gather*}
-u''=\lambda u\text{ in }\left(-L,0\right),\quad
u'(-L)=0,\ u'(0)=\mu u(0).
\end{gather*}
It is straightforward to show that
the function $\mu\mapsto \b(\mu)$ is continuous, monotonically decreasing and moreover $\b(\mu)\underset{\mu\to -\infty}\to \left({\pi\over 2L}\right)^2$ and $\b(\mu)\underset{\mu\to +\infty}\to -\infty$. Whence, in particular, one can conclude that there exists one and only one point $\b$ satisfying
\begin{gather*}
\exists \mu<-\a r:\ \b=\b(\mu)={\a\mu \over \a r+\mu},
\end{gather*}
provided $\a<\left({\pi\over 2L}\right)^2$.\medskip
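For the reader's convenience we add a small numerical illustration (ours, not part of the proof). For $\mu<0$ one has $\b(\mu)\in\left(0,\left({\pi\over 2L}\right)^2\right)$, the corresponding eigenfunction is $\cos\big(\sqrt{\lambda}(x+L)\big)$, and $\b(\mu)$ is the smallest root of $\sqrt{\lambda}\tan(\sqrt{\lambda}L)=-\mu$; combining this dispersion relation with $\b=\a\mu/(\a r+\mu)$, the number $\b$ can be computed, e.g., by the following Python sketch (function names are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def beta_of_mu(mu, L):
    # smallest eigenvalue of -u'' = lam*u, u'(-L) = 0, u'(0) = mu*u(0), mu < 0:
    # lam in (0, (pi/(2L))^2) solves sqrt(lam)*tan(sqrt(lam)*L) = -mu
    f = lambda lam: np.sqrt(lam) * np.tan(np.sqrt(lam) * L) + mu
    eps = 1e-12
    return brentq(f, eps, (np.pi / (2.0 * L))**2 - eps)

def gap_edge(alpha, r, L):
    # the unique mu < -alpha*r with beta(mu) = alpha*mu/(alpha*r + mu)
    F = lambda mu: beta_of_mu(mu, L) - alpha * mu / (alpha * r + mu)
    mu_star = brentq(F, -1e6, -alpha * r - 1e-9)
    return beta_of_mu(mu_star, L)

L, alpha, r = 1.0, 1.0, 1.0   # alpha < (pi/(2L))^2, so a gap (alpha, beta) opens
print(gap_edge(alpha, r, L))  # the upper gap edge beta
\end{verbatim}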
Now we are in a position to formulate the main results.
\begin{theorem}\label{th1}
One has:
\begin{itemize}
\item[(i)] Let the family $\left\{\lambda^{{\varepsilon}}\in\sigma(\mathcal{A}^\varepsilon)\right\}_{{\varepsilon}\in\mathcal{E}}$
have a convergent subsequence, i.e. $\lambda^{{\varepsilon}}\to\lambda$ as ${\varepsilon}={\varepsilon}'\to 0$. Then $\lambda\in \sigma(\mathcal{A})$.
\item[(ii)] Let $\lambda\in \sigma(\mathcal{A})$. Then there exists a family $\left\{\lambda^{{\varepsilon}}\in\sigma(\mathcal{A}^\varepsilon)\right\}_{{\varepsilon}\in\mathcal{E}}$ such that $\lim\limits_{{\varepsilon}\to 0}\lambda^\varepsilon=~\lambda$.
\end{itemize}
\end{theorem}
From \eqref{aqa} and Theorem \ref{th1} we immediately obtain the following
\begin{corollary}
Let $\a<\left({\pi\over 2L}\right)^2$. Let $\delta>0$ be an arbitrary number satisfying $2\delta<\b-\a$. Then there exists ${\varepsilon}_\delta>0$ such that
$$\sigma(\mathcal{A}^\varepsilon)\cap (\a+\delta,\b-\delta)=\varnothing,\quad \sigma(\mathcal{A}^\varepsilon)\cap (\a-\delta,\b+\delta)\not=\varnothing\quad\text{provided ${\varepsilon}<{\varepsilon}_\delta$.}$$
\end{corollary}
\section{Proof of Theorem \ref{th1}}
We present only a sketch of the proof, since the main ideas are similar to the case of a bounded domain $\Omega$ treated in \cite{CarKhrab}.\smallskip
Let $\left\{\lambda^\varepsilon\in \sigma(\mathcal{A}^\varepsilon)\right\}_{{\varepsilon}\in\mathcal{E}}$ and $\lambda^{{\varepsilon}}\to\lambda$ as ${\varepsilon}={\varepsilon}'\to 0$. One has to show that $\lambda\in \sigma(\mathcal{A})$. In what follows we will use the index ${\varepsilon}$ keeping in mind ${\varepsilon}'$.
We denote
$${\widetilde{\Omega}}=(0,1)\times (-L,0),\quad
{\widetilde{\Omega}}^\varepsilon=\Omega^\varepsilon\cap \left((0,1)\times \mathbb{R}\right),\quad \widetilde{\Gamma}=(0,1)\times\{0\}.$$
Recall that ${\varepsilon}^{-1}\in\mathbb{N}$, whence $\Omega^\varepsilon+e_1=\Omega^\varepsilon,$ where $e_1=(1,0),$
and thus $\mathcal{A}^\varepsilon$ is a periodic operator with respect to the period cell $\widetilde{\Omega}^\varepsilon$.
Using Floquet-Bloch theory (see, e.g., \cite{Kuchment}) one can represent the spectrum of $\mathcal{A}^\varepsilon$ as a union of spectra of certain operators on $\widetilde{\Omega}^\varepsilon$. We denote by $\widetilde{H}^\varepsilon$ the space of functions from $L^2(\widetilde{\Omega}^\varepsilon)$ with the scalar product defined by \eqref{He} with $\widetilde{\Omega}^\varepsilon$ instead of $\Omega^\varepsilon$.
Let us fix $\phi\in [0,2\pi)$. In $\widetilde{H}^\varepsilon$ we consider the sesquilinear form $\widetilde{\mathfrak{a}}^{\phi,{\varepsilon}}$ defined by \eqref{ae} with $\widetilde{\Omega}^\varepsilon$ instead of $\Omega^\varepsilon$ and the domain
$$\mathrm{dom}(\widetilde{\mathfrak{a}}^{\phi,{\varepsilon}})=\left\{u\in H^1(\widetilde{\Omega}^\varepsilon):\quad u(1,\cdot){=}e^{i\phi}u(0,\cdot)\right\}.$$
By $\widetilde{\mathcal{A}}^{\phi,{\varepsilon}}$ we denote the operator associated with this form.
The spectrum of $\widetilde{\mathcal{A}}^{\phi,{\varepsilon}}$ is purely discrete. We denote by $\{\widetilde\lambda^{\phi,{\varepsilon}}_k\}_{k=1}^\infty$ the sequence of eigenvalues of $\widetilde{\mathcal{A}}^{\phi,{\varepsilon}}$ arranged in ascending order and repeated according to their multiplicities.
Then one has
\begin{gather}\label{floque}
\sigma(\mathcal{A}^\varepsilon)=\bigcup\limits_{k=1}^\infty I_k^\varepsilon,\ \text{ where }I_k^\varepsilon=\bigcup\limits_{\phi\in[0,2\pi)}\left\{\widetilde\lambda_{k}^{\phi,{\varepsilon}}\right\}\text{ are compact intervals}.
\end{gather}
We also introduce the operator $\widetilde{\mathcal{A}}^{\phi}$ as the operator acting in $$\widetilde{H}=
\left\{U\in L^2(\widetilde{\Omega})\oplus L^2(\widetilde{\Gamma}),\text{ the scalar product is defined by \eqref{Hlim}}
\text{ with }\widetilde{\Omega},\widetilde{\Gamma}\text{ instead of }\Omega,\Gamma\right\}$$ and generated by the sesquilinear form $\widetilde{\mathfrak{a}}^\phi$ which is defined by \eqref{alim} (with $\widetilde{\Omega},\widetilde{\Gamma}$ instead of $\Omega,\Gamma$) and the domain $\mathrm{dom}(\widetilde{\mathfrak{a}}^\phi)=\left\{u\in H^1(\widetilde{\Omega}):\ u(1,\cdot){=}e^{i\phi}u(0,\cdot)\right\}\oplus L^2(\widetilde{\Gamma})$.
\begin{lemma}\label{lmW2}
The spectrum of $\widetilde{\mathcal{A}}^{\phi}$ has the form
\begin{gather*}
\label{spectrum} \sigma(\widetilde{\mathcal{A}}^{\phi})=\{\a\}\cup
\{\widetilde\lambda_k^{\phi,-},k=1,2,3...\}\cup\{\widetilde\lambda_k^{\phi,+},k=1,2,3...\}.
\end{gather*}
The points $\widetilde\lambda_k^{\phi,\pm}, k=1,2,3...$ belong to the discrete
spectrum, $\a$ is a point of the essential spectrum and they are distributed as follows:
$$0\leq\widetilde\lambda_1^{\phi,-}\leq \widetilde\lambda_2^{\phi,-}\leq ...\leq\widetilde\lambda_k^{\phi,-}\leq\dots\underset{k\to\infty}\to
\a< \widetilde\lambda_1^{\phi,+}\leq
\widetilde\lambda_2^{\phi,+}\leq ...\leq\widetilde\lambda_k^{\phi,+}\leq\dots\underset{k\to\infty}\to \infty.$$
Moreover if $\a<\left({\pi\over 2L}\right)^2$ then $\b< \widetilde\lambda_1^{\phi,+}.$
\end{lemma}
This lemma was proved in \cite{CarKhrab} for the case of Neumann boundary conditions on the lateral parts of $\partial\widetilde{\Omega}^\varepsilon$. For the case of $\phi$-quasi-periodic conditions the proof is similar.
\smallskip
Now, in view of \eqref{floque} there exists $\phi^\varepsilon\in [0,2\pi)$ such that $\lambda^\varepsilon\in\sigma(\widetilde{\mathcal{A}}^{\phi^\varepsilon\hspace{-1mm},{\varepsilon}})$. We extract a convergent subsequence (for convenience still indexed by ${\varepsilon}$):
\begin{gather}\label{phi}
\phi^\varepsilon\to\phi\in [0,2\pi]\text{ as }{\varepsilon}\to 0.
\end{gather}
Let $u^\varepsilon$ be an eigenfunction of $\widetilde{\mathcal{A}}^{\phi^\varepsilon\hspace{-1mm},{\varepsilon}}$ corresponding to $\lambda^\varepsilon$ with $\|u^\varepsilon\|_{\widetilde{H}^\varepsilon}=1$.
We introduce the operator $\Pi^\varepsilon:L^2(\bigcup\limits_{i=1}^{N({\varepsilon})} B_i^\varepsilon)\to L^2(\Gamma)$ defined as follows:
\begin{gather*}
\Pi^\varepsilon u(x)= \sum\limits_{i=1}^{N({\varepsilon})}\left(|B_i^\varepsilon|^{-1}\int_{B_i^\varepsilon} u(\textbf{x})\d\textbf{x}\right) \chi_{i}^\varepsilon(x),
\end{gather*}
where $\chi_i^\varepsilon$ is the characteristic function of the interval $\left[i{\varepsilon}-{\varepsilon},i{\varepsilon}\right]$.
Using the Cauchy inequality and \eqref{qere+} one can easily obtain the estimate
\begin{gather}
\label{Pi_ineq} \|\Pi^\varepsilon u\|^2_{L^2(\Gamma)}\leq \sum\limits_{i=1}^{N({\varepsilon})}{\varrho^\varepsilon{\varepsilon}|B_i^\varepsilon|^{-1}}\int_{B_i^\varepsilon}|u(\textbf{x})|^2(\varrho^\varepsilon)^{-1}\d \textbf{x}\leq C\|u\|^2_{\widetilde{H}^\varepsilon}.
\end{gather}
From \eqref{Pi_ineq} and $\|\nabla u^\varepsilon\|^2_{L^2(\widetilde{\Omega}^\varepsilon)}=\lambda^\varepsilon\leq C$,
we conclude that $\{u^\varepsilon\}_{{\varepsilon}\in\mathcal{E}}$ and $\{\Pi^\varepsilon u^\varepsilon\}_{{\varepsilon}\in\mathcal{E}}$ are bounded in $H^1(\widetilde\Omega)$ and $L^2(\widetilde\Gamma)$, respectively. Then there is a subsequence (still indexed by ${\varepsilon}$) and $u_1\in H^1(\widetilde{\Omega})$, $u_2\in L^2(\widetilde{\Gamma})$ such that
\begin{gather*}
u^\varepsilon\rightharpoonup u_1\text{ in }H^1(\widetilde{\Omega}),\quad
\Pi^\varepsilon u^\varepsilon\rightharpoonup u_2\text{ in }L^2(\widetilde{\Gamma}).
\end{gather*}
Also, in view of the trace theorem and \eqref{phi}, $u^\varepsilon|_{\partial \widetilde\Omega}\to u_1|_{\partial \widetilde\Omega}$ in $L^2(\partial\widetilde\Omega)$, whence
$u_1(1,\cdot){=}e^{i\phi}u_1(0,\cdot)$, i.e., $U=(u_1,u_2)\in \mathrm{dom}(\widetilde{\mathfrak{a}}^\phi)$.
If $u_1=0$ then $\lambda=\a$; the proof is completely similar to the proof of this fact in \cite[Theorem 2.1]{CarKhrab}. Then, in view of \eqref{aqa}, $\lambda\in\sigma(\mathcal{A})$.\smallskip
Now, let $u_1\not= 0$. For an arbitrary $w\in \mathrm{dom}(\widetilde{\mathfrak{a}}^{\phi^\varepsilon\hspace{-1mm},{\varepsilon}})$ we have
\begin{gather}\label{in_eq_phi}
\int_{\widetilde{\Omega}^\varepsilon}\nabla u^\varepsilon(\textbf{x})\cdot \overline{\nabla w(\textbf{x})} \d \textbf{x}=\lambda^\varepsilon\int_{\widetilde{\Omega}^\varepsilon} (\rho^\varepsilon(\textbf{x}))^{-1} u^\varepsilon(\textbf{x}) \overline{w(\textbf{x})} \d \textbf{x}.
\end{gather}
Let $w_1\in C^{\infty}(\overline{\widetilde{\Omega}}),\ w_2\in C^{\infty}(\overline{\widetilde{\Gamma}})$, moreover $w_1(1,\cdot){=}e^{i\phi}w_1(0,\cdot).$
We set
$$w_1^\varepsilon(\textbf{x})=w_1(\textbf{x})\left((e^{i(\phi^\varepsilon-\phi)}-1)x_1+1\right),\ \textbf{x}=(x_1,x_2).$$
It is easy to see that $w_1^\varepsilon$ satisfies $w_1^\varepsilon(1,\cdot){=}e^{i\phi^\varepsilon}w_1^\varepsilon(0,\cdot)$ and \begin{gather}\label{appr1}
w_1^\varepsilon\to w_1\text{ in }C^1(\overline{\widetilde{\Omega}})\text{ as }{\varepsilon}\to 0.
\end{gather}
Using these functions we construct the test function $w(\textbf{x})$ by the formula
\begin{gather*}
w(\textbf{x})=
\begin{cases}
w_1^\varepsilon(\textbf{x})+\displaystyle\sum\limits_{i\in \mathcal{I}^{\varepsilon}}(w_1^\varepsilon(\textbf{x}^{i,{\varepsilon}})-w_1^\varepsilon(\textbf{x}))\varphi\left({\varepsilon}^{-1}|\textbf{x}-\textbf{x}^{i,{\varepsilon}}|\right),& \textbf{x}\in\widetilde{\Omega},\\
\displaystyle (h^\varepsilon)^{-1}\left( w_2(\textbf{x}^{i,{\varepsilon}})-w_1^\varepsilon(\textbf{x}^{i,{\varepsilon}})\right)x_2+w_1^\varepsilon(\textbf{x}^{i,{\varepsilon}}),& \textbf{x}=(x_1,x_2)\in T_i^\varepsilon,\\
\displaystyle{ w_2(\textbf{x}^{i,{\varepsilon}})},&\textbf{x}\in B_i^\varepsilon.
\end{cases}
\end{gather*}
Here $\textbf{x}^{i,{\varepsilon}}:=(i{\varepsilon},0)$, $\mathcal{I}^{\varepsilon}$ denotes the set of indices of the protuberances attached to $\widetilde{\Omega}^\varepsilon$, $\varphi\in C^\infty(\mathbb{R})$ satisfies $\varphi(t)=1$ for $t\leq {R\over 2}$ and $\varphi(t)=0$ for $t\geq {1\over 2}$, and the constant $R\in (0,1)$ comes from \eqref{ass12}-\eqref{ass14}. It is clear that $w\in \mathrm{dom}(\widetilde{\mathfrak{a}}^{\phi^\varepsilon\hspace{-1mm},{\varepsilon}})$.
We plug $w(\textbf{x})$ into \eqref{in_eq_phi} and pass to the limit ${\varepsilon}\to 0$.
Using the same arguments as in the proof of Theorem 2.1 from \cite{CarKhrab} (but taking
\eqref{appr1} into account) we obtain:
\begin{gather*}
\int_{\widetilde{\Omega}} {\nabla u_1}\cdot \overline{\nabla w_1}\d
\textbf{x}+\int_{\widetilde{\Gamma}} \a r\left(u_1|_{\widetilde{\Gamma}}-u_2\right)\overline{\left(w_1|_{\widetilde{\Gamma}}-w_2\right)}\d x=\lambda \int_{\widetilde{\Omega}}u_1 \overline{w_1} \d \textbf{x}+\lambda r\int_{\widetilde{\Gamma}}u_2 \overline{w_2} \d x.
\end{gather*}
By density arguments this equality holds for arbitrary $(w_1,w_2)\in\mathrm{dom}(\widetilde{\mathfrak{a}}^{\phi})$, which implies
$\widetilde{\mathcal{A}} ^{\phi}U=\lambda U,\quad U=(u_1,u_2).$
Since $u_1\not=0$, we get $\lambda\in\sigma(\widetilde{\mathcal{A}}^{\phi})$.
But in view of \eqref{aqa} and Lemma \ref{lmW2}, for each $\phi$ one has $\sigma(\widetilde{\mathcal{A}} ^{\phi})\subset \sigma(\mathcal{A} )$; therefore $\lambda\in\sigma(\mathcal{A})$.
The property (i) is proved.
\smallskip
The proof of property (ii) repeats word for word the proof for a bounded domain $\Omega$ presented in \cite{CarKhrab}.
\section{References}
\renewcommand{\refname}{}
\vspace*{-26pt}
\frenchspacing
\section{Introduction}
The characterization of density fluctuations in many-body systems is a problem of great fundamental interest in the
physical, mathematical and biological sciences \cite{Wi65,Ka66,Fi67,Wi74,Pe93,Wa96,Te97,No98,Sc99,Chand99,Ga05,Ye03,Mu06,Be07,Ji11d}.
The anomalous suppression of density fluctuations at very long wavelengths is central
to the hyperuniformity concept, whose broad importance for condensed matter physics
and materials science was brought to the fore only about a decade ago
in a study that focused on fundamental theoretical aspects, including how it provides
a unified means to classify and categorize crystals, quasicrystals and special
disordered point configurations \cite{To03a}. Hyperuniform systems are poised at an exotic critical point
in which the direct correlation function, defined via the Ornstein-Zernike relation \cite{Ha86}, is long-ranged \cite{To03a}, in
diametric contrast to standard thermal and magnetic critical points in which the total correlation function is long-ranged \cite{Wi65,Ka66,Fi67,Wi74}.
Roughly speaking, a hyperuniform many-particle system in $d$-dimensional Euclidean space
$\mathbb{R}^d$ is one in which (normalized)
density fluctuations are completely suppressed at very large length scales,
implying that the structure factor $S({\bf k})$ tends to zero as the wavenumber $k\equiv |\bf k|$ tends to zero,
i.e.,
\begin{equation}
\lim_{|{\bf k}| \rightarrow 0} S({\bf k}) = 0.
\label{hyper}
\end{equation}
Equivalently, it is one in which the number variance $\sigma^2_{_N}(R)$ of particles within a
spherical observation window of radius $R$ grows more slowly than the window volume in the large-$R$ limit, i.e.,
slower than $R^d$. Typical disordered systems, such as liquids and structural glasses, have the standard volume scaling, that is, $\sigma^2_{_N}(R) \sim R^d$. By contrast, all perfect crystals and quasicrystals are hyperuniform with the surface-area
scaling $\sigma^2_{_N}(R)\sim R^{d-1}$. Surprisingly,
there is a special class of disordered particle configurations that have the same asymptotic behavior as crystals.
There are hyperuniform scalings other than surface-area growth. When the structure factor
goes to zero in the limit $|{\bf k}| \rightarrow 0$ with the power-law form
\begin{equation}
S({\bf k}) \sim |{\bf k}|^\alpha,
\label{power}
\end{equation}
where $\alpha >0$, the number variance has the following large-$R$ asymptotic scaling \cite{To03a,Za09,Za11b}:
\begin{eqnarray}
\sigma^2_{_N}(R) \sim \left\{
\begin{array}{lr}
R^{d-1}, \quad \alpha >1\\
R^{d-1} \ln R, \quad \alpha = 1 \qquad (R \rightarrow \infty).\\
R^{d-\alpha}, \quad 0 < \alpha < 1.
\end{array}\right.
\label{sigma-N-asy}
\end{eqnarray}
Disordered hyperuniform systems are exotic states
of matter that lie between a crystal and a liquid: they are like perfect crystals in the way they suppress large-scale density fluctuations and yet are like liquids or glasses in that they are statistically isotropic with no Bragg peaks and hence
have no long-range order. In this sense, they can have a {\it hidden order on large length scales} (see Fig. 2 of Ref. \cite{To15} for a vivid example) and, because of their hybrid nature,
are endowed with novel physical properties, as described below. Figure \ref{pattern} shows a typical scattering pattern
for a crystal and another for a ``stealthy'' disordered hyperuniform one, in which there is a circular region around the
origin with no scattering and diffuse scattering outside this ``exclusion'' zone \cite{Uc04b,Ba08}, a highly unusual situation for an amorphous material.
\begin{figure}
\begin{center}
\includegraphics*[ width=2.in,clip=keepaspectratio]{fig1a.eps}\vspace{0.3in}
\includegraphics*[ width=2.in,clip=keepaspectratio]{fig1b.eps}
\caption{Top: Scattering pattern for a crystal. Bottom: Scattering pattern for a disordered ``stealthy'' hyperuniform material \cite{Uc04b,Ba08}.
Notice that apart from forward scattering, there is a circular region around the origin
in which there is no scattering, a highly exotic situation for an amorphous state of matter.}
\label{pattern}
\end{center}
\end{figure}
About a decade ago, we knew of only a few examples of {\it disordered} hyperuniform systems
(also known as ``superhomogeneous'' patterns) \cite{Leb83,To03a,Ga02}.
We now know that these exotic states of matter can exist as {\it equilibrium} and {\it nonequilibrium} phases,
of both the classical and the quantum-mechanical varieties. Examples include
``stealthy" disordered ground states \cite{Uc04b,Ba08,To15,Zh15a,Zh15b},
maximally random jammed particle packings \cite{Do05d,Za11a,Ji11c,Ch14a}, jammed athermal granular media~\cite{Be11}, jammed thermal colloidal packings~\cite{Ku11,Hu12,Dr15}, dynamical processes in ultracold atoms~\cite{Le14},
disordered networks with large photonic band gaps \cite{Fl09b}, driven nonequilibrium systems
\cite{He15,Ja15,We15,Tj15,Sc15,Di15}, avian photoreceptor patterns \cite{Ji14}, geometry of neuronal tracts \cite{Bur15},
immune system receptors \cite{Ma15}, certain quantum ground states (both fermionic and bosonic) \cite{To08c,Fe56}, high-density transparent materials
\cite{Le16}, and wave dynamics in disordered potentials \cite{Yu15}. Hyperuniformity has pointed to new correlation functions
from which one can extract relevant growing length scales as a function of temperature
as a liquid is supercooled below its glass transition temperature \cite{Ma13a,Co16},
a problem of great interest in glass physics \cite{Lu07,Be07,Sc07,Ka09,Chand10,Hock12}.
Remarkably, the one-dimensional point patterns derived from the nontrivial zeros
of the Riemann zeta function \cite{Mon73} and the eigenvalues of random Hermitian matrices \cite{Dy62a}
are disordered and hyperuniform.
A variety of groups
have recently fabricated disordered hyperuniform materials at the
micro- and nano-scales for various photonic applications \cite{Man13a,Man13b,Ha13},
surface-enhanced Raman spectroscopy \cite{De15},
realization of a terahertz quantum cascade laser \cite{Deg15},
and self-assembly of diblock copolymers \cite{Zi15b}. Moreover, it was
shown that the electronic band gap of amorphous silicon
widens as it tends toward a hyperuniform state \cite{He13}. Recent
X-ray scattering measurements indicate that amorphous-silicon samples
can be made to be nearly hyperuniform \cite{Xie13}.
The hyperuniformity concept was generalized to the case of heterogeneous materials \cite{Za09}, i.e.,
materials consisting of two or more phases \cite{Note1}.
Heterogeneous materials abound in Nature and in synthetic
situations. Examples include composite and porous media, biological media (e.g., plant and animal tissue),
foams, polymer blends, suspensions,
granular media, cellular solids, and colloids \cite{To02a}. In the case of two-phase media
(defined more precisely in Sec. \ref{back}), one relevant fluctuating quantity is the local phase volume fraction
within a window. The simplest characterization of such fluctuations is the local volume-fraction
variance $\sigma_{_V}^2(R)$ associated with a $d$-dimensional spherical window of radius $R$ \cite{Lu90b,Qu97b,To02a,Note2}.
It was demonstrated that the hyperuniformity condition in the context of
volume-fraction fluctuations in a two-phase heterogeneous system is
one in which the variance $\sigma_{_V}^2(R)$ for large $R$ goes to zero more
rapidly than the inverse of the window volume \cite{Za09}, i.e., faster than $R^{-d}$, which is equivalent to the following condition
on the relevant spectral density ${\tilde \chi}_{_V}({\bf k})$ (defined in Sec. \ref{back}):
\begin{eqnarray}
\lim_{|\mathbf{k}|\rightarrow 0}\tilde{\chi}_{_V}(\mathbf{k}) = 0.
\label{hyper-2}
\end{eqnarray}
This generalization of the hyperuniformity concept
has been fruitfully applied to characterize a variety of disordered two-phase systems \cite{Za11a,Za11c, Za11d,Dr15,Ch15},
and to the rational design of digitized hyperuniform two-phase media with tunable
disorder \cite{Di16}.
As in the case of hyperuniform point configurations \cite{To03a,Za09,Za11b},
it is easily shown that three different scaling regimes
arise in the case of hyperuniform two-phase systems when the spectral density
goes to zero with the power-law form ${\tilde \chi}_{_V}({\bf k})\sim |{\bf k}|^\alpha$:
\begin{eqnarray}
\sigma^2_{_V}(R) \sim \left\{
\begin{array}{lr}
R^{-(d+1)}, \quad \alpha >1\\
R^{-(d+1)} \ln R, \quad \alpha = 1 \qquad (R \rightarrow \infty).\\
R^{-(d+\alpha)}, \quad 0 < \alpha < 1
\end{array}\right.
\label{sigma-V-asy}
\end{eqnarray}
\begin{figure}[bthp]
\centerline{\includegraphics[ width=2.4in, keepaspectratio,clip=]{fig2.eps}}
\caption{(Color online) A schematic indicating a circular observation window of radius $R$ that is centered at
position $\bf x_0$ in a disordered two-phase medium; one phase is depicted
as a blue (darker) region and the other phase as a white region. The phase volume fractions or interfacial
area within the window will fluctuate as the window position ${\bf x}_0$ is varied. }
\label{patterns}
\end{figure}
Given the fundamental as well as practical importance of disordered hyperuniform systems elucidated thus far,
it is natural to explore further generalizations of the hyperuniformity notion and its consequences.
In this paper, we extend the hyperuniformity concept in a variety of different directions.
Before doing so,
we make some remarks about hyperuniformity in two-phase systems in which
one phase is a sphere packing (Sec. \ref{packing-1}).
We then introduce the notion of hyperuniformity
as it concerns local fluctuations in the interfacial area in disordered two-phase media and apply
the mathematical formulation to sphere packings (Sec. \ref{area}). We demonstrate
that surface-area fluctuations are a considerably more sensitive
microstructural measure than volume-fraction fluctuations, and hence
provide a more powerful approach to understanding
hyperuniformity in two-phase systems.
Subsequently, we extend the hyperuniformity concept to random scalar fields
(Sec. \ref{scalar}). Such phenomena are ubiquitous and include, but are not limited
to, concentration and temperature
fields in heterogeneous media and turbulent flows,
laser speckle patterns, and temperature fluctuations associated
with the cosmic microwave background.
Among other results, we show how a random scalar field can inherit the hyperuniformity
property from an underlying hyperuniform point process.
We also note that the analysis for continuous random fields
is trivially extended to discrete cases derived from
experimental images or computer-simulation studies.
We then generalize the hyperuniformity formalism to treat random vector fields
and find that this extension requires one to broaden the definition
of hyperuniformity to account for the dependence of the relevant spectral
tensor function on the direction in which the origin
is approached (Sec. \ref{vector}). Mathematically, this means that the directional-dependent
spectral tensor associated with a hyperuniform vector field is nonanalytic at the origin.
This is to be contrasted with previous definitions of hyperuniformity, which assumed that the
way in which the origin in Fourier space (scattering pattern)
is approached is independent of direction. Generalizing the definition of hyperuniformity
to account for directionality provides completely new and potentially exciting avenues for theoretical and experimental work,
including the possibility to
design random vector fields with targeted hyperuniform spectra.
Among other results, we reinterpret and analyze well-known turbulent energy spectra in the context
of this generalization of hyperuniformity. Subsequently, the notion of directional hyperuniformity is proposed
in the context of many-particle systems and heterogeneous media
that are statistically anisotropic (Sec. \ref{aniso}). Here we show that
directionality in Fourier space can again play a pivotal role. In particular,
directional hyperuniformity imparts exotic anisotropic physical
properties (e.g., elastic, optical and acoustic characteristics) to these states of matter.
Finally, we offer concluding remarks and a discussion (Sec. \ref{con}).
\section{Definitions and Background}
\label{back}
\subsection{Point Configurations}
\label{points}
Consider $N$ points with configuration
${\bf r}^N \equiv {\bf r}_1,{\bf r}_2,\ldots,{\bf r}_N$ in a large region $\cal V$ of volume $V$
in $d$-dimensional Euclidean space $\mathbb{R}^d$.
Any single point configuration is specified by its {\it microscopic density} $n({\bf r})$ at
position $\bf r$, which is a random variable defined by
\begin{equation}
n({\bf r})=\sum_{j=1}^N \delta({\bf r}-{\bf r}_j),
\label{local-den}
\end{equation}
where $\delta({\bf r})$ is a $d$-dimensional Dirac delta function.
The point process is statistically characterized by the {\it specific}
probability density function $P_N({\bf r}^N)$, where $P_N({\bf r}^N)d{\bf
r}^N$ gives the probability of finding point 1 in volume element
$d{\bf r}_1$ about ${\bf r}_1$, point 2 in volume element
$d{\bf r}_2$ about ${\bf r}_2$, $\ldots$, and point $N$ in volume element
$d{\bf r}_N$ about ${\bf r}_N$. Thus, $P_N({\bf r}^N)$ normalizes
to unity and $d{\bf r}^N \equiv d{\bf r}_1 d{\bf
r}_2\cdots d{\bf r}_N$ represents the $(Nd)$-dimensional volume element.
The ensemble average of any function $f({\bf r}^N)$ that depends
on the point configuration ${\bf r}^N$ is given by
\begin{equation}
\langle f({\bf r}^N) \rangle = \int_{\cal V} \int_{\cal V} \cdots \int_{\cal V} f({\bf r}^N)
P_N({\bf r}^N)d{\bf r}^N.
\label{ensemble}
\end{equation}
The reduced {\it generic}
density function $\rho_n({\bf r}^n)$ ($n <N$) is defined as
\begin{eqnarray}
\rho_n({\bf r}^n) = \frac{N!}{(N-n)!} \int_V \cdots \int_V P_N({\bf r}^N) d{\bf r}^{N-n},
\label{rhon}
\end{eqnarray}
where $d{\bf r}^{N-n}\equiv d{\bf r}_{n+1} d{\bf r}_{n+2} \cdots d{\bf r}_N $.
The quantity $\rho_n({\bf r}^n)d{\bf r}^n$ is proportional to
the probability of finding {\it any} $n$ particles ($n \le N$)
with configuration $\bf r^n$ in volume element $d{\bf r}^n$.
For statistically homogeneous media, $\rho_{n}({\bf r}^n)$
is translationally invariant and hence depends only on the relative
displacements, say with respect to ${\bf r}_1$:
\begin{equation}
\rho_{n}({\bf r}^n)=\rho_{n}({\bf r}_{12},{\bf r}_{13},\ldots,{\bf r}_{1n}),
\end{equation}
where ${\bf r}_{ij}={\bf r}_j-{\bf r}_i$. The one-particle
function $\rho_1$ is just equal to the constant {\it number density}
\index{number density} of particles $\rho$, i.e.,
\begin{equation}
\rho_1({\bf r}_1) = \rho \equiv \lim_{ N,V\rightarrow\infty} \frac{N}{V} .
\label{thermolimit}
\end{equation}
This limit is referred to as the
{\it thermodynamic limit}. It is convenient to define the so-called
{\it $n$-particle correlation function}
\begin{equation}
g_n({\bf r}^n) = \frac{\rho_n({\bf r}^n)}{\rho^n}.
\label{nbody}
\end{equation}
In the absence of long-range order and when the
particles are mutually far from one another (i.e., ${r}_{ij}=|{\bf r}_{ij}|
\rightarrow\infty$,
$1\leq i < j \leq N$), $\rho_n({\bf r}^n) \rightarrow \rho^n$ and
$g_n({\bf r}^n) \rightarrow 1$.
The important two-particle quantity
\begin{equation}
g_2({\bf r}_{12}) = \frac{\rho_2({\bf r}_{12})}{\rho^2}
\label{g2-rho2}
\end{equation}
is usually referred to as the {\it pair correlation function}.
The {\it total correlation function} $h({\bf r}_{12})$ is defined as
\begin{equation}
h({\bf r}_{12})=g_2({\bf r}_{12})-1,
\label{total}
\end{equation}
which is trivially related to the autocovariance function associated
with the random variable (\ref{local-den}), i.e.,
\begin{equation}
\frac{1}{\rho}\Bigg\langle \Big(n({\bf x})- \rho \Big) \, \Big(n({\bf x}+{\bf r}) - \rho\Big) \,\Bigg\rangle = \delta({\bf r})
+ \rho h({\bf r})
\label{H}
\end{equation}
where we have invoked statistical homogeneity.
Spectral representations of direct-space pair statistics of various types
are central to the hyperuniformity concept.
We use the following definition of the
Fourier transform of some function $f({\bf r})$, which can represent a {\it tensor
of arbitrary rank} and depends on the
vector $\bf r$ in $\mathbb{R}^d$:
\begin{eqnarray}
\tilde{f}(\mathbf{k}) = \int_{\mathbb{R}^d} f(\mathbf{r}) \exp\left[-i(\mathbf{k}\cdot \mathbf{r})\right] d\mathbf{r},
\end{eqnarray}
where $\mathbf{k}$ is a wave vector.
When it is well-defined, the corresponding inverse Fourier transform is given by
\begin{eqnarray}
f(\mathbf{r}) = \left(\frac{1}{2\pi}\right)^d \int_{\mathbb{R}^d} \tilde{f}(\mathbf{k}) \exp\left[i(\mathbf{k}\cdot \mathbf{r})\right] d\mathbf{k}.
\end{eqnarray}
If $f$ is a radial function, i.e., depends
on the modulus $r=|\mathbf{r}|$ of the vector $\bf r$,
its Fourier transform is given by
\begin{eqnarray}
{\tilde f}(k) =\left(2\pi\right)^{\frac{d}{2}}\int_{0}^{\infty}r^{d-1}f(r)
\frac{J_{\left(d/2\right)-1}\!\left(kr\right)}{\left(kr\right)^{\left(d/2\right
)-1}} \,d r,
\label{fourier}
\end{eqnarray}
where $k=|{\bf k}|$ is the wavenumber, i.e., the modulus of the wave vector $\bf k$,
and $J_{\nu}(x)$ is the Bessel function of order $\nu$.
The inverse transform of $\tilde{f}(k)$ is given by
\begin{eqnarray}
f(r) =\frac{1}{\left(2\pi\right)^{\frac{d}{2}}}\int_{0}^{\infty}k^{d-1}\tilde{f}(k)
\frac{J_{\left(d/2\right)-1}\!\left(kr\right)}{\left(kr\right)^{\left(d/2\right
)-1}} d k.
\label{inverse}
\end{eqnarray}
We recall the first several terms in the series expansion of $J_{\nu}(x)$
about $x=0$:
\begin{eqnarray}
\hspace{-0.15in}J_{\nu}(x) &=& \frac{(x/2)^{\nu}}{\Gamma(\nu +1)}- \frac{(x/2)^{\nu+2}}{\Gamma(\nu +2)}+ \frac{(x/2)^{\nu+4}}{2\Gamma(\nu +3)} -{\cal O}(x^{\nu +6}),\nonumber\\
\end{eqnarray}
which we will apply later in the article.
The nonnegative structure factor $S(\bf k)$ is the Fourier transform
of the autocovariance function (\ref{H}) and is trivially related to ${\tilde h}({\bf k})$,
which is the Fourier transform of the total correlation function $h(\bf r)$:
\begin{equation}
S({\bf k})=1+\rho {\tilde h}({\bf k}).
\label{factor}
\end{equation}
The structure factor is proportional to the scattering intensity. It is useful to recall
the relationship between the local number variance $\sigma^2_N(R)$ associated
with a spherical window of radius $R$ and the pair statistics of
a point configuration \cite{To03a}:
\begin{eqnarray}
\sigma_N^2(R)&=& \rho v_1(R)\Big[1+\rho \int_{\mathbb{R}^d} h({\bf r})
\alpha(r;R) d{\bf r}\Big] \nonumber \\
&=&
\rho v_1(R)\Big[\frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} S({\bf k})
{\tilde \alpha}(k;R) d{\bf k}\Big],
\label{local}
\end{eqnarray}
where
\begin{equation}
v_1(R) =\frac{\pi^{d/2} R^d}{\Gamma(1+d/2)}
\label{v1}
\end{equation}
is the volume of a $d$-dimensional sphere of radius $R$, and
$\alpha(r;R)$ is the {\it scaled intersection volume}, the ratio of the intersection volume of two spherical windows
of radius $R$ whose centers are separated by a distance $r$ to the volume of
a spherical window, known analytically in any space dimension \cite{To02a,To06b}. Its Fourier transform
is given by
\begin{equation}
{\tilde \alpha}(k;R)= 2^d \pi^{d/2} \Gamma(1+d/2)\frac{[J_{d/2}(kR)]^2}{k^d},
\label{alpha-k}
\end{equation}
which clearly is a nonnegative function. Here $J_{\nu}(x)$ is the Bessel function of order $\nu$.
The hyperuniformity condition (\ref{hyper})
on the structure factor, combined with relation (\ref{local}), implies that
the number variance $\sigma^2_N(R)$ grows more slowly than $R^d$ for large $R$.
Observe that the hyperuniformity requirement (\ref{hyper}) dictates that the
volume integral of $\rho h({\bf r})$ over all space is exactly
equal to $-1$, i.e.,
\begin{equation}
\rho \int_{\mathbb{R}^d} h({\bf r}) d{\bf r}=-1,
\label{sum-1}
\end{equation}
which can be thought of as a sum rule.
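As a numerical illustration of relations (\ref{local}) and (\ref{alpha-k}) (ours, not part of the original text), note that in $d=3$ one has ${\tilde \alpha}(k;R)=6\pi^2[J_{3/2}(kR)]^2/k^3$, so that (\ref{local}) reduces to $\sigma_N^2(R)=3\rho v_1(R)\int_0^\infty S(k)[J_{3/2}(kR)]^2k^{-1}\,dk$; for $S\equiv 1$ the integral equals $1/3$, recovering the Poisson result $\sigma_N^2(R)=\rho v_1(R)$. The following Python sketch evaluates this integral by crude quadrature for the model spectrum $S(k)=1-e^{-k^2}$, for which $\alpha=2$ in (\ref{power}) (we make no claim that this $S(k)$ is realizable by a point process); the computed variance grows like the window surface area $R^{d-1}=R^2$:
\begin{verbatim}
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

rho = 1.0
S = lambda k: 1.0 - np.exp(-k * k)   # model spectrum with S(k) ~ k^2 near k = 0

def var_N(R):
    # sigma_N^2(R) = 3*rho*v1(R) * int_0^inf S(k) J_{3/2}(kR)^2 / k dk  (d = 3)
    v1 = 4.0 * np.pi * R**3 / 3.0
    I, _ = quad(lambda k: S(k) * jv(1.5, k * R)**2 / k, 0.0, np.inf, limit=800)
    return 3.0 * rho * v1 * I

for R in (4.0, 8.0, 16.0):
    print(R, var_N(R))   # grows ~ R^2, i.e., more slowly than the window volume
\end{verbatim}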
{\it Stealthy} configurations are those in which
the structure factor $S({\bf k})$ is exactly zero for a subset of wave vectors, meaning that they completely suppress
single scattering of incident radiation for those wave vectors.
{\it Stealthy hyperuniform} patterns \cite{Uc04b,Ba08,To15} are a subclass of hyperuniform
systems in which $S({\bf k})$ is zero for a range
of wave vectors around the origin, i.e.,
\begin{equation}
S({\bf k})= 0 \qquad \mbox{for}\; 0 \le |{\bf k}| \le K,
\label{stealth}
\end{equation}
where $K$ is some positive number.
An example of a disordered stealthy and hyperuniform scattering pattern
is shown in the bottom panel of Fig. \ref{pattern}.
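Continuing the numerical sketch given above (again ours and purely illustrative), a stealthy model spectrum is obtained by simply setting $S(k)=0$ for $k<K$; since \texttt{var\_N} reads \texttt{S} from the module scope, redefining \texttt{S} suffices, and the variance again exhibits surface-area-like growth:
\begin{verbatim}
K = 1.0
S = lambda k: 0.0 if k < K else 1.0   # stealthy model: no scattering for k < K
for R in (4.0, 8.0, 16.0):
    print(R, var_N(R))                # again grows ~ R^{d-1} = R^2 in d = 3
\end{verbatim}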
\subsection{Two-Phase Media}
\vspace{-0.1in}
A two-phase random medium is a domain of space $\mathcal{V} \subseteq \mathbb{R}^d$ of volume $V$
that is partitioned into two disjoint regions: a
phase 1 region $\mathcal{V}_1$
and a phase 2 region $\mathcal{V}_2$
such that $\mathcal{V}_1 \cup \mathcal{V}_2 =\mathcal{V}$ \cite{To02a}.
Denote by $\partial {\mathcal V}$ the interface between $\mathcal{V}_1$ and $\mathcal{V}_2$.\vspace{-0.25in}
\subsubsection{Phase Statistics}\vspace{-0.1in}
The phase indicator function ${\cal I}^{(i)}({\bf x})$ for a given realization is defined as
\begin{equation}
{\cal I}^{(i)}({\bf x}) = \left\{
{\begin{array}{*{20}c}
{1, \quad\quad {\bf x} \in {\cal V}_i,}\\
{0, \quad\quad {\bf x} \notin {\cal V}_i},
\end{array} }\right.
\label{phase-char}
\end{equation}
\noindent
The one-point correlation function $S_1^{(i)}({\bf x})= \langle {\cal I}^{(i)}({\bf x}) \rangle$
(where angular brackets indicate an ensemble average) is independent of position $\bf x$
for statistically homogeneous media and equals the constant phase volume fraction, i.e.,
\begin{equation}
\phi_i = \langle {\cal I}^{(i)}({\bf x}) \rangle.
\end{equation}
The two-point correlation function is defined as $S^{(i)}_2({\bf x}_1,{\bf x}_2) = \left\langle{{\cal I}^{(i)}({\bf x}_1){\cal I}^{(i)}({\bf x}_2)}\right\rangle$.
This function gives the probability
of finding two points ${\bf x}_1$ and ${\bf x}_2$ in phase $i$, and for homogeneous media
depends only on the relative displacement vector ${\bf r} \equiv {\bf x}_2-{\bf x}_1$
and hence $S_2^{(i)}({\bf x}_1,{\bf x}_2)=S_2^{(i)}({\bf r})$.
The autocovariance function $\chi_{_V}({\bf r})$ associated with the random variable ${\cal I}^{(i)}({\bf x})$
is given by
\begin{equation}
\label{eq108}
\chi_{_V}({\bf r}) \equiv S^{(1)}_2({\bf r}) - \phi^2_1 = S^{(2)}_2({\bf r}) - \phi^2_2.
\end{equation}
The nonnegative spectral density ${\tilde \chi}_{_V}({\bf k})$, which can be obtained from scattering experiments \cite{De49,De57},
is the Fourier transform of $\chi_{_V}({\bf r})$.
Higher-order versions of these correlation functions
\cite{To82b,To83a,To02a} (not considered here) arise in rigorous bounds
and exact expressions for effective transport \cite{To85f,Be85a,Be88b,Se89,Gi95a,To02a,Mi02,Ph03,To04a},
elastic \cite{Be88b,Gi95a,To97b,To02a,Mi02} and electromagnetic \cite{Re08a}
properties of two-phase media.
It is known that the volume-fraction variance $\sigma_{V}^2(R)$
within a $d$-dimensional spherical window of radius $R$ can be expressed in terms of the autocovariance function $\chi_{_V}({\bf r})$ \cite{Lu90b} or of the spectral density ${\tilde \chi}_{_V}(\mathbf{k})$:
\begin{eqnarray}
\sigma_{_V}^2(R) &=& \frac{1}{v_1(R)} \int_{\mathbb{R}^d} \chi_{_V}(\mathbf{r}) \alpha(r; R) d\mathbf{r} \nonumber \\
&=& \frac{1}{v_1(R)(2\pi)^d} \int_{\mathbb{R}^d} {\tilde \chi}_{_V}(\mathbf{k}) {\tilde \alpha}(k; R) d\mathbf{k},
\label{phi-var-2}
\end{eqnarray}
where, as in relation (\ref{local}), $\alpha(r;R)$ is the scaled intersection volume of two
spherical windows, and ${\tilde \alpha}(k; R)$ is its Fourier transform.
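To make the Fourier-space form of (\ref{phi-var-2}) concrete, the following minimal numerical sketch (in Python, not part of the original analysis) reduces the $d$-dimensional integral to a radial quadrature for a radial spectral density; the name \texttt{chi\_spec} is a hypothetical placeholder for any model ${\tilde \chi}_{_V}(k)$, and the grid parameters are illustrative:
\begin{verbatim}
import numpy as np
from scipy.special import jv, gamma

def v1(R, d):
    # volume of a d-dimensional sphere of radius R
    return np.pi**(d/2) * R**d / gamma(1 + d/2)

def alpha_tilde(k, R, d):
    # Fourier transform of the scaled intersection volume alpha(r;R)
    return (2*np.pi*R/k)**d * jv(d/2, k*R)**2 / v1(R, d)

def sigma2_V(R, chi_spec, d=3, kmax=100.0, n=100000):
    # radial reduction of Eq. (phi-var-2); omega is the surface
    # area of the unit d-sphere arising from the angular integral
    k = np.linspace(1e-6, kmax, n)
    omega = d * np.pi**(d/2) / gamma(1 + d/2)
    integrand = chi_spec(k) * alpha_tilde(k, R, d) * k**(d - 1)
    return omega * np.trapz(integrand, k) / (v1(R, d) * (2*np.pi)**d)
\end{verbatim}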
The hyperuniformity requirement (\ref{hyper-2}) dictates that the autocovariance
function $\chi_{_V}({\bf r})$ exhibits both positive and negative correlations such that
its volume integral over all space is exactly zero, i.e.,
\begin{equation}
\int_{\mathbb{R}^d} \chi_{_V}({\bf r}) d{\bf r}=0,
\label{sum-2}
\end{equation}
which can be regarded as a sum rule.
We note in passing that realizability conditions for the existence of hyperuniform
autocovariances and spectral densities of general
two-phase media have recently been explored \cite{To16b}. These conditions
restrict the possible class of functional forms that can be hyperuniform. \vspace{-0.25in}
\subsubsection{Interfacial Statistics}\vspace{-0.1in}
The interface between the phases of a realization of a two-phase medium
is generally known probabilistically and is characterized by
the interface indicator function ${\cal M}({\bf x})$ \cite{To02a} defined as
\begin{equation}
{\cal M}({\bf x})= |\nabla {\cal I}^{(1)}({\bf x})|=|\nabla {\cal
I}^{(2)}({\bf x})|
\label{surf-char}
\end{equation}
and therefore is a {\it generalized} function that is nonzero when
$\bf x$ is on the interface. The specific surface $s$
(interface area per unit volume) is a one-point correlation function
given by the expectation of ${\cal M}({\bf x})$:
\begin{equation}
s= \langle {\cal M}({\bf x}) \rangle,
\label{s(x)}
\end{equation}
where, because of the assumption of statistical homogeneity, $s$ is independent of the position $\bf x$.
One can define a variety of higher-order surface correlation functions \cite{To02a}, but for
our purposes in this paper, we will restrict ourselves to the following two-point correlation
function:
\begin{equation}
F_{ss}({\bf r}) = \left\langle{{\cal M}({\bf x}){\cal M}({\bf x} +{\bf r})}\right\rangle,
\label{surface}
\end{equation}
which is called the {\it surface-surface} correlation function. Note that the definition (\ref{surface}) invokes
the statistical homogeneity of the process. The surface-surface correlation function
arises in rigorous bounds on the effective rate constants for diffusion-controlled
reactions \cite{Do76,Ru88} and fluid permeability \cite{Do76,Ru89a} of fluid-saturated porous media. The autocovariance associated
with the random variable ${\cal M}$ for homogeneous media
is given by
\begin{equation}
\chi_{_S}(\mathbf{r}) = F_{ss}({\bf r}) - s^2,
\label{auto-S}
\end{equation}
which, unlike the dimensionless autocovariance $\chi_{_V}({\bf r})$, has dimensions of inverse length squared,
independent of the dimension $d$.
The nonnegative spectral density ${\tilde \chi}_{_S}({\bf k})$ is the Fourier transform of $\chi_{_S}({\bf r})$,
when it exists. \vspace{-0.1in}
\section{Some Remarks About Two-Point Statistics and Hyperuniform Sphere Packings}
\label{packing-1}\vspace{-0.1in}
Here we collect various known results scattered throughout
the literature concerning the autocovariance function
$\chi_{_V}({\bf r})$ and spectral density ${\tilde \chi}_{_V}({\bf k})$ for two-phase
media in $\mathbb{R}^d$ in which one phase is a sphere packing in order to compare them
to corresponding results for the surface-surface correlation function and the generalization of hyperuniformity
to surface-area fluctuations introduced in the subsequent section.
A particle packing is a configuration of nonoverlapping (i.e., hard)
particles in $\mathbb{R}^d$.
For statistically homogeneous packings of congruent spheres of radius $a$ in $\mathbb{R}^d$ at number density $\rho$,
the autocovariance function ${\chi}_{_V}({\bf r})$ of the particle (sphere) phase is known exactly in terms of the pair correlation function \cite{To85b,To02a}; specifically,
\begin{eqnarray}
{\chi}_{_V}({\bf r}) &=& \rho\, m_v(r;a) \otimes m_v(r;a) +\rho^2 m_v(r;a) \otimes m_v(r;a) \otimes h({\bf r}) \nonumber \\
&=& \rho \,v_2^{int}(r;a) +\rho^2 v_2^{int}(r;a) \otimes h({\bf r}),
\label{S2-spheres}
\end{eqnarray}
where \vspace{-0.35in}
\begin{equation}
m_v(r;a) =\Theta(a-r)=\Bigg\{{1, \quad r \le a,\atop{0, \quad r > a,}}
\label{indicator}
\end{equation}
is a spherical particle indicator function \cite{Note3},
$\Theta(x)$ is the Heaviside step function,
and $v_2^{int}(r;a)=v_1(a)\alpha(r;a)$ is the intersection volume of two spheres
of radius $a$ whose centers are separated by a distance $r$, where $v_1(a)$ and $\alpha(r;a)$
are defined as in (\ref{phi-var-2}), and $\otimes$ denotes the convolution of two
functions $F({\bf r})$ and $G({\bf r})$:\vspace{-0.27in}
\begin{equation}
F({\bf r}) \otimes G({\bf r}) =\int_{\mathbb{R}^d} F({\bf x}) G({\bf r}-{\bf x}) d{\bf x}.
\end{equation}
Fourier transformation of (\ref{S2-spheres}) gives the spectral
density in terms of the structure factor \cite{To85b,To02a,Za09}:\vspace{-0.25in}
\begin{eqnarray}
{\tilde \chi}_{_V}({\bf k})&=& \rho \,{\tilde m}^2(k;a)+ \rho^2 {\tilde m}^2(k;a) {\tilde h}({\bf k}) \nonumber \\
&=& \rho\, {\tilde m}^2(k;a) S({\bf k}) \nonumber \\
&=& \phi {\tilde \alpha}(k;a) S({\bf k})
\label{chi_V-S}
\end{eqnarray}
\vspace{-0.35in}
\noindent{where}\vspace{-0.3in}
\begin{equation}
\hspace{-0.1in}{\tilde \alpha}(k;a)= \frac{1}{v_1(a)} {\tilde m}^2(k;a)= \frac{1}{v_1(a)} \left(\frac{2\pi a}{k}\right)^{d} J_{d/2}^2(ka),
\end{equation}
and
\vspace{-0.35in}
\begin{equation}
\phi =\rho v_1(a),
\end{equation}
is the {\it packing fraction}.
Using relation (\ref{chi_V-S}), it follows that the hyperuniformity of a sphere packing
can only arise if the underlying point configuration (sphere
centers) is itself hyperuniform, i.e., ${\tilde \chi}_{_V}({\bf k})$ inherits the hyperuniformity property (\ref{hyper-2})
only through the structure factor, not ${\tilde \alpha}(k;a)$; see Ref. \cite{To16b} for more details.
The stealthiness property, i.e., no scattering at some finite subset of wave vectors (Sec. \ref{points}),
is a bit more subtle. Relation (\ref{chi_V-S}) dictates
that ${\tilde \chi}_{_V}({\bf k})$ is zero at those wave vectors where $S({\bf k})$ is zero as well as
at the zeros
of the function ${\tilde \alpha}(k;a)$, which is determined by the zeros of the Bessel function
$J_{d/2}(ka)$. The function ${\tilde \chi}_{_V}({\bf k})$ will be zero at all
of the zeros of ${\tilde \alpha}(k;a)$ for any disordered packing free of any Dirac delta functions (Bragg peaks),
hyperuniform or not.
These results for the pair statistics in direct and Fourier spaces
have been generalized to the case of impenetrable spheres
with a size distribution at overall number density $\rho$ \cite{Lu91,To02a}.
The Supplemental Material describes these equations as they concern hyperuniformity \cite{Note4}.
\section{Interfacial Area Fluctuations and Hyperuniformity}
\label{area}\vspace{-0.1in}
Here we introduce the idea of hyperuniformity associated with local fluctuations in the interfacial area of two-phase media in $\mathbb{R}^d$
and derive the relevant formulas. This generalization provides new tools to analyze
a variety of phenomena that occur in physical and biological systems
in which interfaces play a dominant role. For example, the geometry of the interface in a fluid-saturated
porous medium is crucial in determining the fluid permeability \cite{Do76,Ru89a}
and trapping rate \cite{Do76,Ru88} associated with diffusion and reaction in such systems.
Another striking class of examples include surface-energy driven coarsening phenomena,
such as those that occur in spinodal decomposition and morphogenesis \cite{Ca58,Sw77}.\vspace{-0.2in}
\subsection{Local Specific-Surface Fluctuations}\vspace{-0.1in}
While the global specific surface defined by (\ref{s(x)}) is a fixed constant, the specific surface on a local scale
determined by an observation window clearly fluctuates, as in the case of the local phase volume fraction.
Here we derive an explicit expression for the variance associated with the local specific surface and the corresponding hyperuniformity condition.
For simplicity, we consider a $d$-dimensional spherical window of radius $R$ centered
at position ${\bf x}_0$ (see Fig. \ref{patterns}) for statistically homogeneous
two-phase media. The associated \emph{local dimensionless specific surface} $\tau_{_S}(\mathbf{x}_0;R)$ within
a window of radius $R$ centered at position ${\bf x}_0$ is specified
explicitly by
\begin{eqnarray}
\tau_{_S}(\mathbf{x}_0; R) = \frac{1}{s v_1(R)}\int {\cal M}(\mathbf{x}) w(\mathbf{x}-\mathbf{x}_0; R) d\mathbf{x},
\label{tau}
\end{eqnarray}
where $v_1(R)$ is given by (\ref{v1}), ${\cal M}(\mathbf{x})$ is the interface indicator
function defined by (\ref{surf-char}), $s$ is the specific surface given by (\ref{s(x)}), and $w$ is the corresponding window indicator function
defined by
\begin{equation}
w({\bf r};R)=\Bigg\{{1, \quad |{\bf r}| \le R,\atop{0, \quad |{\bf r}| > R.}}
\label{window}
\end{equation}
Notice that in the limit $R \rightarrow \infty$, the dimensionless random variable $\tau_{_S}(\mathbf{x}_0; R)$
tends to unity.
The variance $\sigma_{_S}^2(R)$ associated with fluctuations in the dimensionless specific surface is defined by
\begin{eqnarray}
\sigma^2_{_S}(R) &\equiv& \langle\tau_{_S}^2(\mathbf{x}_0; R) \rangle - \langle\tau_{_S}(\mathbf{x}_0; R) \rangle^2 \nonumber\\
&=& \langle\tau_{_S}^2(\mathbf{x}_0; R) \rangle - 1,
\label{def}
\end{eqnarray}
where we have used the fact that the ensemble average $\langle\tau_{_S}(\mathbf{x}_0; R) \rangle=1$,
which is independent of the window position ${\bf x}_0$ because the system is statistically homogeneous.
Substitution of (\ref{tau}) into (\ref{def}) yields
\begin{eqnarray}
\hspace{-0.3in}\sigma^2_{_S}(R) &=&\frac{1}{s^2 v_1^2(R)} \Big[\int F_{ss}({\bf r}) w(\mathbf{x}_1-\mathbf{x}_0; R) \nonumber \\
&& \qquad \times w(\mathbf{x}_2-\mathbf{x}_0; R) d\mathbf{x}_1d\mathbf{x}_2\Big] -1,
\end{eqnarray}
where ${\bf r}={\bf x}_2 -{\bf x}_1$.
Using the definition of the scaled intersection volume of two windows of radius $R$,
\begin{equation}
\alpha(r;R)= \frac{1}{v_1(R)} \int_{\mathbb{R}^d} w(\mathbf{x}_1-\mathbf{x}_0; R) w(\mathbf{x}_2-\mathbf{x}_0; R) d\mathbf{x}_0,
\end{equation}
and the identity \cite{To03a}
\begin{equation}
\frac{1}{v_1(R)} \int_{\mathbb{R}^d} \alpha(r;R) d{\bf r}=1
\end{equation}
leads to the desired relation for the local specific-surface variance:
\begin{eqnarray}
\sigma_{_S}^2(R) = \frac{1}{s^2 v_1(R)} \int_{\mathbb{R}^d} \chi_{_S}(\mathbf{r}) \alpha(r; R) d\mathbf{r},
\label{s-var-1}
\end{eqnarray}
where $\chi_{_S}(\mathbf{r})$ is the autocovariance function associated with the interface
indicator function [cf. (\ref{auto-S})], $r=|\bf r|$, and we have invoked statistical homogeneity.
The alternative Fourier representation of the surface-area variance
that is dual to the direct-space representation (\ref{s-var-1}) is trivially obtained
by applying Parseval's theorem to (\ref{s-var-1}), provided that the
spectral density ${\tilde \chi}_{_S}({\bf k})$ exists:
\begin{eqnarray}
\sigma_{_S}^2(R) = \frac{1}{s^2 v_1(R)(2\pi)^d} \int_{\mathbb{R}^d} {\tilde \chi}_{_S}(\mathbf{k}) {\tilde \alpha}(k; R) d\mathbf{k}.
\label{s-var-2}
\end{eqnarray}
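In numerical work, the surface-area variance (\ref{s-var-2}) can be evaluated with the same radial quadrature as the volume-fraction variance (\ref{phi-var-2}), since the two formulas differ only by the factor $1/s^2$ and the replacement ${\tilde \chi}_{_V} \to {\tilde \chi}_{_S}$. A one-line wrapper (assuming the illustrative \texttt{sigma2\_V} routine sketched earlier) is:
\begin{verbatim}
def sigma2_S(R, chiS_spec, s, d=3, **kwargs):
    # Eq. (s-var-2): same kernel as Eq. (phi-var-2), scaled by 1/s^2
    return sigma2_V(R, lambda k: chiS_spec(k) / s**2, d=d, **kwargs)
\end{verbatim}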
A two-phase system is hyperuniform with respect to surface-area fluctuations if the spectral density ${\tilde \chi}_{_S}({\bf k})$
obeys the condition
\begin{eqnarray}
\lim_{|\mathbf{k}|\rightarrow 0}\tilde{\chi}_{_S}(\mathbf{k}) = 0,
\label{hyper-3}
\end{eqnarray}
which implies the sum rule
\begin{equation}
\int_{\mathbb{R}^d} \chi_{_S}({\bf r}) d{\bf r}=0.
\label{sum-3}
\end{equation}
This hyperuniformity property is equivalent to requiring that the surface-area variance $\sigma_{_S}^2(R)$ decays to zero for large $R$ more rapidly than $R^{-d}$, which is the same condition as that for the volume-fraction variance discussed in the Introduction.
Using precisely the same analysis as for point configurations \cite{To03a,Za09,Za11b},
it is simple to show that three different hyperuniform scaling regimes
arise from (\ref{s-var-2}) when the surface-area spectral density
goes to zero with the power-law form ${\tilde \chi}_{_S}({\bf k}) \sim |{\bf k}|^\alpha$:
\begin{eqnarray}
\sigma^2_{_S}(R) \sim \left\{
\begin{array}{lr}
R^{-(d+1)}, \quad \alpha >1\\
R^{-(d+1)} \ln R, \quad \alpha = 1 \qquad (R \rightarrow \infty).\\
R^{-(d+\alpha)}, \quad 0 < \alpha < 1
\end{array}\right.
\label{sigma-S-asy}
\end{eqnarray}
Note that these scaling forms are exactly the same as those for volume-fraction fluctuations
[cf. (\ref{sigma-V-asy})].
\subsection{Sphere Packings}\vspace{-0.1in}
\label{packing-2}
Here we make some remarks about hyperuniformity associated with specific-surface
fluctuations in the case of sphere packings. To do so, we first must collect some known
results for their interfacial two-point statistics.
In the special instance of packings of congruent spheres of radius $a$ in $\mathbb{R}^d$ at number density $\rho$,
the autocovariance function $\chi_{_S}({\bf r})$ is known exactly in terms of the pair correlation function \cite{To86i,To02a}:
\begin{equation}
{\chi}_{_S}({\bf r}) = \rho\, m_s(r;a) \otimes m_s(r;a) +\rho^2 m_s(r;a) \otimes m_s(r;a) \otimes h({\bf r}),
\label{F-spheres}
\end{equation}
where
\begin{equation}
m_s(r;a)= \frac{\partial m_v(r;a)}{\partial a}= \delta(r-a) ,
\label{delta}
\end{equation}
is an interface indicator function for a sphere,
$\delta(r)$ is a radial Dirac delta function, and $m_v(r;a)$ is defined
by (\ref{indicator}). Note that the first term on the right side
of relation (\ref{F-spheres}), which has support in the interval $[0,2a]$, generally possesses an integrable singularity at the origin \cite{To86f}.
Fourier transformation of (\ref{F-spheres}) gives the corresponding spectral
density in terms of the structure factor \cite{To86f,To02a}:
\begin{equation}
{\tilde \chi}_{_S}({\bf k})=\rho\, {\tilde m}_s^2(k;a) S({\bf k}),
\label{chi_S-S}
\end{equation}
where ${\tilde m}_s(k;a)$ is the Fourier transform of the radial Dirac delta function (\ref{delta}) given by
\begin{equation}
{\tilde m}_s(k;a)=\frac{\partial {\tilde m}_v(k;a)}{\partial a}=\left(\frac{2\pi a}{k}\right)^{d/2} k \, J_{d/2-1}(ka).
\end{equation}
The global specific surface $s$, defined generally by (\ref{s(x)}), is given by
\begin{equation}
s= \rho {\tilde m}_s(k=0;a) = \rho s_1(a) = \frac{d\phi}{a},
\end{equation}
where
\begin{equation}
s_1(a) \equiv \frac{\partial v_1(a)}{\partial a}= \frac{d \pi^{d/2} a^{d-1}}{\Gamma(1+d/2)},
\end{equation}
is the surface area of a $d$-dimensional sphere of radius $a$. Thus, since ${\tilde m}_s(k;a)$ is a positive well-behaved function in the vicinity of $k=0$, it immediately follows from expression (\ref{chi_S-S}) that if the underlying
point process is hyperuniform and/or stealthy, then
the spectral density ${\tilde \chi}_{_S}({\bf k})$ inherits the same hyperuniformity property (\ref{hyper-3}).
More generally, relation (\ref{chi_S-S}) requires that the spectral density ${\tilde \chi}_{_S}({\bf k})$ is zero at those wave vectors where $S({\bf k})$ is zero (or stealthy) and
at the zeros of the function ${\tilde m}_s(k;a)$.
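As a quick consistency check (an illustrative sketch, not part of the original text, reusing the imports of the earlier sketch), one can verify numerically that ${\tilde m}_s(k;a)$ approaches the sphere surface area $s_1(a)$ as $k \rightarrow 0$, consistent with the relation $s=\rho s_1(a)$ above:
\begin{verbatim}
def m_s_tilde(k, a, d=3):
    # Fourier transform of the radial Dirac delta m_s(r;a)
    return (2*np.pi*a/k)**(d/2) * k * jv(d/2 - 1, k*a)

a, d = 1.0, 3
s1 = d * np.pi**(d/2) * a**(d-1) / gamma(1 + d/2)
print(m_s_tilde(1e-6, a, d), s1)   # both approach 4*pi for d=3, a=1
\end{verbatim}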
To compare volume-fraction and surface-area fluctuations statistics to one another, we consider an example where these quantities can be calculated exactly for a sphere-packing
model as density increases up to a hyperuniform state. Specifically, we consider
$d$-dimensional sphere packings corresponding to a certain $g_2$-invariant process
introduced by Torquato and Stillinger \cite{To03a}. A $g_2$-invariant process
is one in which a chosen nonnegative form for
the pair correlation function $g_2$ remains
invariant over a nonvanishing density range \cite{To02b}. The upper
limiting ``terminal'' density is the point above which
the nonnegativity condition on the structure factor
[cf. (\ref{factor})] would be violated. Thus, whenever the structure
factor attains its minimum value of zero at ${\bf k}=0$ at the terminal
or critical density, the system, if realizable, is hyperuniform.
In Ref. \cite{To03a}, a variety of hyperuniform $g_2$-invariant processes
in which the number variance $\sigma^2_{_N}(R)$ grows like the window surface
area (i.e., $R^{d-1}$) were exactly studied in arbitrary space dimensions.
For our purposes, we use the ``step-function'' $g_2$-invariant process, namely, a
$g_2(r)$ that is defined by the unit step function $\Theta(r-D)$, where
$D=2a$ is the sphere diameter. It is noteworthy that large particle configurations in one, two and three dimensions
that achieve the step-function $g_2(r)$ for densities
up to the terminal density $\rho_c$ have been numerically constructed \cite{Cr03,Uc06a}.
Interestingly, the ``ghost'' random-sequential-addition packing is an exactly solvable model
with an identical terminal density $\rho_c=[2^d v_1(D/2)]^{-1}$ and a pair correlation function
that is very nearly equal to a step function and indeed exactly approaches the step function
in the large-$d$ limit \cite{To06a}. The structure factor for the step-function $g_2$-invariant process
in the density range $0 \le \rho \le \rho_c$ is exactly given by
\begin{equation}
S({\bf k})=1-\Gamma(1+d/2)
\left(\frac{2}{kD}\right)^{d/2}
\left(\frac{\rho}{\rho_c}\right) J_{d/2}(kD),
\label{invariant}
\end{equation}
where $\rho_c=[2^dv_1(D/2)]^{-1}$ is the terminal density
at which the packing is hyperuniform \cite{To03a} with a small-$k$ asymptotic scaling
given by
\begin{equation}
S({\bf k}) = \frac{1}{2(d+2)} (kD)^2 + {\cal O}\big((kD)^4\big).
\end{equation}
For $\rho < \rho_c$, the packing
is not hyperuniform.
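As an illustrative numerical check (the function names below are hypothetical), the structure factor (\ref{invariant}) and its quadratic small-$k$ behavior at the terminal density can be verified directly:
\begin{verbatim}
import numpy as np
from scipy.special import jv, gamma

def S_step(k, rho_over_rhoc, D=1.0, d=3):
    # structure factor of the step-function g2-invariant
    # process, Eq. (invariant)
    return (1.0 - gamma(1 + d/2) * (2.0/(k*D))**(d/2)
            * rho_over_rhoc * jv(d/2, k*D))

k = 1e-3
print(S_step(k, 1.0))        # ~1e-7 in d = 3 at the terminal density
print(k**2 / (2*(3 + 2)))    # small-k prediction (kD)^2/(2(d+2)) = 1e-7
\end{verbatim}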
Substitution of (\ref{invariant}) into relations (\ref{chi_V-S}) and (\ref{chi_S-S}) yields for this model in $d$ dimensions
the associated spectral densities for the phase volumes and interface, respectively,
\begin{eqnarray}
\hspace{-0.1in}{\tilde \chi}_{_V}({\bf k})&=&\rho \left(\frac{\pi D}{k}\right)^{d} J_{d/2}^2(kD/2)\nonumber \\
&&\times \Bigg[ 1-\Gamma(1+d/2)
\left(\frac{2}{kD}\right)^{d/2}
\left(\frac{\rho}{\rho_c}\right) J_{d/2}(kD)\Bigg]\nonumber \\
\label{CHI}
\end{eqnarray}
and
\begin{eqnarray}
\hspace{-0.1in}{\tilde \chi}_{_S}({\bf k})&=&\rho \left(\frac{\pi D}{k}\right)^{d} k^2 J_{d/2-1}^2(kD/2)\nonumber \\
&&\times\Bigg[ 1-\Gamma(1+d/2)
\left(\frac{2}{kD}\right)^{d/2}
\left(\frac{\rho}{\rho_c}\right) J_{d/2}(kD)\Bigg].\nonumber \\
\label{CHI-S}
\end{eqnarray}
(Note that formula (\ref{CHI}) was reported and studied elsewhere \cite{To16b}.)
At the terminal density $\rho_c$, these spectral functions also go to zero quadratically in $k$
in the limit $k \rightarrow 0$ such that
\begin{equation}
{\tilde \chi}_{_V}({\bf k}) = \frac{1}{2(d+2)4^d v_1(1)} (kD)^2 + {\cal O}\big((kD)^4\big)
\end{equation}
and
\begin{equation}
{\tilde \chi}_{_S}({\bf k}) = \frac{d^2}{2(d+2)4^{d-1} v_1(1)} (kD)^2 + {\cal O}\big((kD)^4\big),
\end{equation}
but the coefficient of the latter is larger than that of the former by a factor of $4d^2$,
i.e., it grows quadratically faster with the space dimension.
Figure \ref{specs} shows the two spectral functions, ${\tilde \chi}_{_V}({\bf k})$ and ${\tilde \chi}_{_S}({\bf k})$,
for the step-function $g_2$-invariant packing process in three dimensions at the terminal density $\rho_c=3/(4\pi)$,
as obtained from (\ref{chi_V-S}), (\ref{chi_S-S}) and (\ref{invariant}) with $a=D/2$.
Figure \ref{vars} depicts the associated local variances for the same system, as obtained
from these spectral functions, and relations (\ref{phi-var-2}) and (\ref{s-var-2}).
Notice that the surface-area spectral function exhibits stronger and longer-ranged correlations
compared to the volume-fraction spectral function, indicating that the former
is a more sensitive microstructural descriptor.
Similarly, while both local variances decay like $R^{-4}$ for large $R$,
the surface-area variance does so at a slower rate relative to its volume-fraction
counterpart.
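The trends seen in Figs. \ref{specs} and \ref{vars} can be reproduced from (\ref{CHI}) and (\ref{CHI-S}); the following sketch (which assumes the illustrative \texttt{S\_step} and \texttt{v1} routines defined earlier) evaluates both spectral densities at the terminal density in three dimensions:
\begin{verbatim}
D, d = 1.0, 3
rho_c = 1.0 / (2**d * v1(D/2, d))   # terminal density, 3/(4 pi) for d=3

def chiV_spec(k):
    # Eq. (CHI): volume-fraction spectral density at rho = rho_c
    return (rho_c * (np.pi*D/k)**d * jv(d/2, k*D/2)**2
            * S_step(k, 1.0, D, d))

def chiS_spec(k):
    # Eq. (CHI-S): surface-area spectral density at rho = rho_c
    return (rho_c * (np.pi*D/k)**d * k**2 * jv(d/2 - 1, k*D/2)**2
            * S_step(k, 1.0, D, d))

k = np.linspace(0.05, 30.0, 2000) / D
# chiS_spec(k) oscillates more strongly and decays more slowly
# with increasing k than chiV_spec(k)
\end{verbatim}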
\begin{figure}
\begin{center}
\includegraphics[ width=3.in, keepaspectratio,clip=]{fig3.eps}
\caption{(Color online) Comparison of the two hyperuniform spectral functions ${\tilde \chi}_{_V}(k)$ (lower curve) and ${\tilde \chi}_{_S}(k)$
versus wavenumber $k$ for a sphere packing corresponding to the step-function $g_2$-invariant process in three dimensions at the
hyperuniform terminal density $\rho_c=3/(4\pi)$ \cite{To03a}.
Here $D$ is the diameter of a hard sphere.}
\label{specs}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[ width=3.in, keepaspectratio,clip=]{fig4.eps}
\caption{(Color online) Comparison of the volume-fraction variance $\sigma^2_{_V}(R)$ (lower curve) and
surface-area variance $\sigma^2_{_S}(R)$
versus window sphere radius $R$ for a sphere packing corresponding to the step-function $g_2$-invariant process in three dimensions at the
hyperuniform terminal density $\rho_c=3/(4\pi)$ \cite{To03a}. Here $D$ is the diameter of a hard sphere.}
\label{vars}
\end{center}
\end{figure}
The aforementioned results for the surface-area pair statistics
were generalized to the case of sphere packings
with a continuous or discrete size distribution \cite{Lu91,To02a}.
These results are collected in Appendix A in order to
describe the conditions under which such packings are ``multihyperuniform.''
\section{Random Scalar Fields and Hyperuniformity}\vspace{-0.1in}
\label{scalar}
Here we generalize the hyperuniformity concept to random scalar fields in $\mathbb{R}^d$.
Such fields can arise in a variety of physical contexts, including concentration and temperature
fields in heterogeneous and porous media \cite{To02a,Sa03} as well as in turbulent flows \cite{Ba59,Mo75},
laser speckle patterns \cite{Pi88,Wi08,Dog15,Dib16}, and temperature fluctuations associated
with the cosmic microwave background \cite{Pe93,Kom03}. Other examples include spatial patterns that arise
in biological and chemical systems that have been theoretically described by, for example,
Cahn-Hilliard \cite{Ca58} and Swift-Hohenberg equations \cite{Sw77}.
In what follows, we derive the relevant equations to quantify
hyperuniform scalar fields, present
illustrative calculations, and remark on two-phase media that
result from level cuts.
\subsection{Local Field Fluctuations}\vspace{-0.1in}
Consider a statistically homogeneous random scalar field $F(\bf x)$ in $\mathbb{R}^d$ that is real-valued with an autocovariance function
\begin{equation}
\psi({\bf r})= \Bigg\langle \Big(F({\bf x}_1)- \langle F({\bf x}_1)\rangle\Big) \, \Big(F({\bf x}_2) - \langle F({\bf x}_2)\rangle\Big) \,\Bigg\rangle,
\label{spec-field}
\end{equation}
where ${\bf r}={\bf x}_2 -{\bf x}_1$ is a $d$-dimensional vector and we have invoked the statistical homogeneity of the field. We assume that
the associated spectral density ${\tilde \psi}({\bf k})$ (Fourier
transform of the autocovariance) exists. The hyperuniformity condition
is simply that the nonnegative spectral density obeys the
small-wavenumber condition:
\begin{equation}
\lim_{|{\bf k}| \rightarrow 0} {\tilde \psi}({\bf k})=0,
\label{hyp-field}
\end{equation}
which implies the sum rule
\begin{equation}
\int_{\mathbb{R}^d} \psi({\bf r}) d{\bf r}=0.
\end{equation}
The local variance associated with fluctuations in the field, denoted by $\sigma_{_F}^2(R)$,
is related to the autocovariance function or spectral function in the usual way:
\begin{eqnarray}
\sigma^2_{_F}(R)&=& \frac{1}{v_1(R)} \int_{\mathbb{R}^d} \psi(\mathbf{r}) \alpha(r; R) d\mathbf{r},\nonumber \\
&=&\frac{1}{ v_1(R)(2\pi)^d} \int_{\mathbb{R}^d} {\tilde \psi}({\bf k})
{\tilde \alpha}(k;R) d{\bf k}.
\label{local-scalar}
\end{eqnarray}
While the main focus of this section is continuous random scalar fields,
it should be noted that when simulating random fields on the computer or when
extracting them from experimentally obtained images,
one must inevitably treat discrete or digitized renditions of the fields.
The ``pixels" or "voxels" (smallest components of the digitized systems
in 2D and 3D dimensions, respectively) take on
gray-scale intensities that span the intensity range
associated with the continuous field. Thus, the discrete versions
of relations (\ref{spec-field}) and (\ref{local-scalar}) are to be applied
in such instances; see, for example, Ref. \cite{Bl93}.
\subsection{Random Fields Derived from Point Configurations}\vspace{-0.1in}
Now we prove that a class of fields derived from underlying hyperuniform point configurations
is itself hyperuniform.
Consider a general ensemble of point configurations of $N$ points
in a large region of volume $V$ in $\mathbb{R}^d$.
Let $K({\bf x};{\bf C})$ represent a nonnegative dimensionless scalar kernel function that is
radial in $\bf x$ and sufficiently localized so that its Fourier transform exists. Here $\bf C$ represents a set
of parameters that characterizes the shape of the radial function.
Following Blumenfeld and Torquato \cite{Bl93}, the random scalar field $F(\bf x)$
is defined as a convolution of the microscopic density and the kernel, i.e.,
\begin{eqnarray}
F({\bf x}) &= &\int_{\mathbb{R}^d} n({\bf x}^\prime) K({\bf x}-{\bf x}^\prime) d{\bf x}^\prime \nonumber\\
&=& \sum_{i=1}^N K({\bf x}-{\bf r}_i)
\label{field}
\end{eqnarray}
where we have suppressed the explicit dependence of the kernel on the
parameter set $\bf C$.
It is seen that the effect of the kernel is to smooth out the point ``intensities.''
Ensemble averaging (\ref{field}) and using the definition (\ref{ensemble}) yields
the expectation of the field:
\begin{eqnarray}
\langle F({\bf x}) \rangle &=& \left< \sum_{i=1}^N K({\bf x}-{\bf
r}_i) \right> \nonumber \\
&=& \int_V \int_V \cdots \int_V \sum_{i=1}^N K({\bf x}-{\bf
r}_i) P_N({\bf r}^N) d{\bf r}^N \nonumber \\
&=& \int_V \rho_1({\bf r}_1) K({\bf x}-{\bf r}_1) d{\bf r}_1 \nonumber \\
&=& \rho \int_{\mathbb{R}^d} K({\bf x}) d{\bf x},
\label{N(R)}
\end{eqnarray}
where in the last line we have invoked the statistical homogeneity of the field and hence have taken the thermodynamic
limit.
Similarly, the autocorrelation function associated with the field is given by
\begin{eqnarray}
\hspace{-0.3in}\langle F({\bf x}) F({\bf x}+{\bf r}) \rangle &=& \left < \sum_{i=1}^N K({\bf x}-{\bf r}_i) K({\bf x}+{\bf r}-{\bf r}_i)\right> \nonumber \\
&+& \left< \sum_{i\neq j}^N K({\bf x}-{\bf r}_i) K({\bf x}+{\bf r}-{\bf r}_j) \right> \nonumber \\
&=&\rho K({\bf r}) \otimes K({\bf r}) \nonumber \\ &+& \rho^2 K({\bf r}) \otimes K({\bf r}) \otimes h({\bf r})
+ \langle F \rangle^2,
\end{eqnarray}
where $h({\bf r})$ is the total correlation function
for the point configuration defined by (\ref{total}).
Thus, the autocovariance function $\psi({\bf r})$, defined generally by (\ref{spec-field}), is given by
\begin{eqnarray}
\psi({\bf r}) = \rho K({\bf r}) \otimes K({\bf r}) + \rho^2 K({\bf r}) \otimes K({\bf r}) \otimes h({\bf r}).
\label{chi-F}
\end{eqnarray}
Fourier transforming (\ref{chi-F}) yields the corresponding nonnegative spectral density:
\begin{equation}
{\tilde \psi}({\bf k})=\rho {\tilde K}^2({\bf k}) S({\bf k}),
\label{chi-field}
\end{equation}
where ${\tilde K}({\bf k})$ is the Fourier transform of the kernel $K({\bf x})$
and $S({\bf k})$ is the ensemble-averaged structure factor [cf. (\ref{factor})].
We see from (\ref{chi-field}) that if the underlying point process is hyperuniform,
i.e., $S({\bf k})$ tends to zero in the limit $|{\bf k}|\rightarrow 0$, and ${\tilde K}({\bf k})$
is well-behaved at $\bf k=0$, the spectral
density obeys the hyperuniformity condition (\ref{hyp-field}).
\begin{figure}[bthp]
\begin{center}
\includegraphics[ width=3.in, keepaspectratio,clip=]{fig5.eps}
\caption{ (Color online)
The spectral function ${\tilde \psi}({\bf k})$ versus wavenumber $k$ for the three-dimensional Gaussian field derived from
the step-function $g_2$-invariant packing for a nonhyperuniform case ($\rho=\rho_c/2$) and
the unique hyperuniform instance ($\rho=\rho_c$). Here $\rho_c=3/(4\pi)$ and $a=D$, where $D$ is a hard-sphere diameter.}
\label{gauss}
\end{center}
\end{figure}
As a simple example, consider the Gaussian kernel function:
\begin{equation}
K({\bf r}) =\exp(-(r/a)^2),
\end{equation}
where $a$ is a characteristic length scale that is proportional to the standard
deviation of the Gaussian. The corresponding Fourier transform is given by
\begin{equation}
{\tilde K}({\bf k}) =\pi^{d/2} a^d\exp[-(ka)^2/4].
\end{equation}
Consider the hyperuniform structure factor (\ref{invariant})
for the step-function $g_2$-invariant packing. Substitution of (\ref{invariant}) and the Gaussian kernel into relation (\ref{chi-field}) yields
the associated spectral density ${\tilde \psi}({\bf k})$ for this model in $d$ dimensions:
\begin{eqnarray}
\hspace{-0.7in}{\tilde \psi}({\bf k})&=&\rho\, \pi^{d} a^{2d}\exp[-(ka)^2/2] \nonumber\\
&&\times \Bigg[ 1-\Gamma(1+d/2)
\left(\frac{2}{kD}\right)^{d/2}
\left(\frac{\rho}{\rho_c}\right) J_{d/2}(kD)\Bigg].
\label{PSI}
\end{eqnarray}
Setting $\rho=\rho_c$ in (\ref{PSI}) and expanding the spectral density in powers of $k^2$ about the
origin yields
\begin{equation}
{\tilde \psi}({\bf k})= \frac{\pi^{d} \rho_c a^{2d}}{2(d+2)} (kD)^2 + {\cal O}(k^4).
\end{equation}
Note that this scalar field is hyperuniform such
that ${\tilde \psi}({\bf k})$ goes to zero quadratically in $k$ as the wavenumber tends to zero,
independent of the space dimension $d$.
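A brief numerical illustration (assuming the \texttt{S\_step}, \texttt{rho\_c} and \texttt{D} definitions sketched earlier, with the kernel scale set to $a=D$ as in Fig. \ref{gauss}) evaluates ${\tilde \psi}({\bf k})$ via (\ref{chi-field}) at the hyperuniform and a nonhyperuniform density:
\begin{verbatim}
a = D            # kernel length scale equal to the sphere diameter

def K_tilde(k, d=3):
    # Fourier transform of the Gaussian kernel exp(-(r/a)^2)
    return np.pi**(d/2) * a**d * np.exp(-(k*a)**2 / 4)

def psi_spec(k, rho_over_rhoc, d=3):
    # Eq. (chi-field): psi = rho * Ktilde^2 * S
    rho = rho_over_rhoc * rho_c
    return rho * K_tilde(k, d)**2 * S_step(k, rho_over_rhoc, D, d)

print(psi_spec(1e-3, 1.0))  # vanishes quadratically: hyperuniform
print(psi_spec(1e-3, 0.5))  # tends to a positive constant: not hyperuniform
\end{verbatim}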
Figure \ref{gauss} shows
this spectral function ${\tilde \psi}({\bf k})$
in the special case of three dimensions at the hyperuniform terminal density
as well as for a nonhyperuniform case ($\rho=\rho_c/2$). The corresponding scaled variances,
obtained from relations (\ref{local-scalar}) and (\ref{PSI}), are shown in Fig. \ref{gauss-2}.
Note that since $\sigma^2_{_F}(R)$ for the nonhyperuniform case must
decay like $R^{-3}$ for large $R$, the product $R^3 \sigma^2_{_F}(R)$ asymptotes to a constant value.
By contrast, the product $R^3 \sigma^2_{_F}(R)$ for $\rho=\rho_c$ decays like $R^{-1}$ for large $R$,
as it should for this three-dimensional hyperuniform random scalar field.
\begin{figure}[bthp]
\begin{center}
\includegraphics[ width=3.in, keepaspectratio,clip=]{fig6.eps}
\caption{ (Color online) Comparison of the field variance $\sigma^2_{F}(R)$ [multiplied by $(R/D)^3$]
versus window sphere radius $R/D$ for the three-dimensional Gaussian field derived from
the step-function $g_2$-invariant packing for a nonhyperuniform case ($\rho=\rho_c/2$) and
the hyperuniform case ($\rho=\rho_c$). Here $\rho_c=3/(4\pi)$ and $a=D$, where $D$ is a hard-sphere diameter.}
\label{gauss-2}
\end{center}
\end{figure}
\subsection{Level Cuts of Random Fields}\vspace{-0.1in}
In the random-field approach to modeling the microstructure
of random media, the interface between the phases is defined
by level cuts of random fields \cite{Berk87,Be91,Te91,Cr91,Bl93,Ro95}.
There is great flexibility in the choice of the
random field $F({\bf x})$ and hence in the class of microstructures that can
be produced. This approach is particularly useful in modeling
{\it bicontinuous} media (two-phase media in which each phase percolates),
such as microemulsions \cite{Berk87}, carbonate rocks \cite{Cr91},
Vycor glass \cite{Cr91}, amorphous alloys \cite{Ro95}, and aerogels \cite{Ro97}.
It is noteworthy that the use of level cuts of random fields
to create disordered hyperuniform two-phase or multiphase heterogeneous systems
has heretofore not been carried out, and thus represents a fruitful
area for future research. To derive a hyperuniform two-phase
medium from a thresholded random field $F({\bf r})$, the field must possess the special
correlations required to yield an autocovariance function $\chi_{_V}({\bf r})$ that satisfies the
rule (\ref{sum-2}).
\section{Divergence-Free Random Vector Fields and Hyperuniformity}\vspace{-0.1in}
\label{vector}
It is natural to generalize the hyperuniformity concept for scalar fields to
random vector fields. In order to narrow the enormous possibilities in this
substantially broader context, we will focus primarily on
divergence-free random vector fields, but the basic ideas
apply to more general vector fields. Excellent physical examples within this class of fields
occur in heterogeneous media, including
divergence-free heat, current or mass flux fields, divergence-free electric displacement
fields associated with dielectrics, divergence-free magnetic induction fields, and divergence-free low-Reynolds-number velocity
fields \cite{To02a,Sa03}. Incompressible turbulent flow fields provide yet
another very well-known set of examples \cite{Ba59,Mo75}.
Here, we derive the relevant equations to quantify
hyperuniform vector fields, present
illustrative calculations, and make contact with turbulent-flow spectra.
Consider a statistically homogeneous divergence-free (solenoidal) random vector field ${\bf u}({\bf x})$ in $\mathbb{R}^d$
that is real-valued with zero mean, i.e.,
\begin{equation}
\nabla \cdot {\bf u}({\bf x})=0,
\label{div}
\end{equation}
where
\begin{equation}
\langle {\bf u}({\bf x}) \rangle =0.
\end{equation}
Taking the Fourier transform of (\ref{div}) yields
\begin{equation}
{\bf k} \cdot {\tilde {\bf u}}({\bf k})=0, \qquad \mbox{for all} \; {\bf k},
\end{equation}
where ${\tilde {\bf u}}({\bf k})$ is the Fourier transform of ${\bf u}({\bf x})$.
A key quantity is the autocovariance function $\Psi_{ij}({\bf r})$ ($i,j=1,2,\ldots,d$)
associated with the vector field ${\bf u}({\bf x})$, which is a second-rank
tensor field defined by
\begin{equation}
\Psi_{ij}({\bf r})= \langle u_i({\bf x}) u_j({\bf x}+{\bf r}) \rangle,
\label{auto}
\end{equation}
where we have invoked the statistical homogeneity of the field.
The divergence-free condition (\ref{div}) implies
\begin{equation}
\frac{\partial \Psi_{ij}({\bf r})}{\partial r_i}=0
\label{1}
\end{equation}
and
\begin{equation}
\frac{\partial \Psi_{ij}({\bf r})}{\partial r_j}=0,
\label{2}
\end{equation}
where the second equation follows from the symmetry property $\Psi_{ij}({\bf r})=\Psi_{ji}(-{\bf r})$
and Einstein indicial summation notation is implied. Taking the Fourier transforms of (\ref{1}) and (\ref{2}) yields the
identities
\begin{equation}
k_i {\tilde \Psi}_{ij}({\bf k})= k_j {\tilde \Psi}_{ij}({\bf k})=0, \qquad \mbox{for all} \; {\bf k},
\label{X}
\end{equation}
where ${\tilde \Psi}_{ij}({\bf k})$ is the spectral density tensor, i.e., the Fourier transform of
the autocovariance tensor (\ref{auto}). The real-valued spectral density tensor
is positive semi-definite, i.e., for an arbitrary real vector $\bf a$,
\begin{equation}
a_i {\tilde \Psi}_{ij}({\bf k}) a_j \ge 0, \qquad \mbox{for all} \; {\bf k}.
\end{equation}
From the theory of turbulence of an incompressible fluid \cite{Ba59,Mo75}, it is well known that if an arbitrary
divergence-free vector field ${\bf u}({\bf x})$ is also isotropic, then the spectral
density tensor must take the following general form:
\begin{equation}
{\tilde \Psi}_{ij}({\bf k})=\left(\delta_{ij} -\frac{k_ik_j}{k^2}\right) {\tilde \psi}(k),
\label{spec-tensor}
\end{equation}
where $\delta_{ij}$ is the Kronecker delta or identity tensor,
and ${\tilde \psi}(k)$ is a nonnegative scalar radial function of the wavenumber $k=|\bf k|$.
A random vector field is isotropic if all of its associated $n$-point correlation
functions are independent of translations, rotations and reflections of the
coordinates. Note that the trace of ${\tilde \Psi}_{ij}({\bf k})$ is trivially related
to ${\tilde \psi(k)}$, i.e.,
\begin{equation}
{\tilde \Psi}_{ii}({\bf k})=(d-1) {\tilde \psi}(k),
\end{equation}
and so we see that
\begin{equation}
{\tilde \Psi}_{ii}({\bf k}={\bf 0})=(d-1) {\tilde \psi}(k=0)=\int_{\mathbb{R}^d} \Psi_{ii}({\bf r}) d{\bf r}
\end{equation}
and
\begin{equation}
\Psi_{ii}({\bf r}={\bf 0})=\frac{(d-1)}{(2\pi)^d} \int_{\mathbb{R}^d} {\tilde \psi}(k) d{\bf k}.
\end{equation}
Now if the radial function ${\tilde \psi}(k)$ is continuous but positive at $k=0$ (not hyperuniform), it immediately follows from
the form (\ref{spec-tensor}) that the spectral tensor can only be hyperuniform
in {\it certain directions}. For example, the component ${\tilde \Psi}_{11}({\bf k})$ is
zero for all wave vectors along the $k_1$-axis, and the component ${\tilde \Psi}_{12}({\bf k})$ is
zero whenever $k_1=0$ or $k_2=0$. The fact that the value of ${\tilde \Psi}_{11}({\bf k})$
depends on the direction in which the origin is approached means that it is nonanalytic at $\bf k=0$.
On the other hand, if ${\tilde \psi}(k)$ is hyperuniform and continuous at $k=0$,
then each component of ${\tilde \Psi}_{ij}({\bf k})$ will inherit the radial hyperuniformity
of ${\tilde \psi}(k)$, and hence is independent of the direction in which
the origin is approached. For example, consider the situation in which ${\tilde \psi}(k)$
admits the following small-wavenumber expansion
\begin{equation}
{\tilde \psi}(k) = a_1 |{\bf k}|^\alpha + {o}(|{\bf k}|^\alpha),
\label{exp}
\end{equation}
where $\alpha$ is a positive constant and $o$ signifies higher order terms. Note that whenever $\alpha$ is a noninteger
or odd integer, ${\tilde \psi}(k)$ is a nonanalytic function at the origin due to a derivative discontinuity.
(An analytic radial function would admit an expansion in even powers of the wavenumber only.)
For any $\alpha >0$, substitution of (\ref{exp}) in $(\ref{spec-tensor})$ reveals that the
spectral tensor is radially hyperuniform near ${\bf k=0}$ such that it vanishes as $|{\bf k}|^\alpha$.
We conclude that we need an even more general hyperuniformity concept
in the case of a spectral tensor, namely, one in which hyperuniformity
depends on the direction in which the origin is approached in Fourier
space. Let ${\bf k}_Q$ represent a $d$-dimensional unit vector emanating
from the origin $\bf k=0$.
We say that the field is hyperuniform for a particular component $i=I$ and $j=J$ of the spectral
tensor of a vector field (isotropic or not) in the direction ${\bf k}_Q$ if
\begin{equation}
\lim_{t \rightarrow {0}} {\tilde \Psi}_{IJ}(t{\bf k}_Q)=0,
\label{HYPER}
\end{equation}
where $t$ is a scalar parameter.
Note that many different unit vectors (directions) can satisfy
this condition for a particular spectral tensor; the set of such directions may be countable,
or it may be uncountable if the unit vectors occur in a continuous range of directions. Moreover, if the condition
applies independent of the direction of the unit vector, then
it reduces to the standard spectral definition of hyperuniformity.
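The directional condition (\ref{HYPER}) is easy to probe numerically. As a minimal sketch (with a hypothetical nonhyperuniform radial function ${\tilde \psi}(k)=e^{-k^2}$ in $d=2$), the component ${\tilde \Psi}_{11}$ of (\ref{spec-tensor}) is hyperuniform along the $k_1$-axis but not along the $k_2$-axis:
\begin{verbatim}
import numpy as np

def Psi_11(kx, ky):
    # Eq. (spec-tensor) in d = 2 with psi(k) = exp(-k^2)
    k2 = kx**2 + ky**2
    return (1.0 - kx**2 / k2) * np.exp(-k2)

t = 1e-6
print(Psi_11(t, 0.0))   # 0 for all t: hyperuniform along k1
print(Psi_11(0.0, t))   # ~1 as t -> 0: not hyperuniform along k2
\end{verbatim}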
To illustrate the hyperuniformity concept in the context of a divergence-free isotropic
vector field, let us consider the
following hyperuniform radial function:
\begin{equation}
{\tilde \psi}(k)= c(d) (ka) \exp(-(ka)^2),
\label{radial}
\end{equation}
where
\begin{equation}
c(d)= \frac{\Gamma(d/2)a^d}{2^d \pi^{d/2} \Gamma((d+1)/2)}.
\end{equation}
This is a valid (nonnegative) spectral function in any dimension with
an associated autocovariance function $\psi(r)$ such that $\psi(r=0)=1$.
For visual purposes, we examine the two-dimensional outcome when (\ref{radial}) is substituted into
the spectral tensor (\ref{spec-tensor}). Figure \ref{tensor} shows three components of this
symmetric tensor and the radial function ${\tilde \psi}(k)$. The hyperuniformity
property in a compact region around the origin for all
components is readily visible.
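The patterns in Fig. \ref{tensor} can be regenerated from (\ref{spec-tensor}) and (\ref{radial}); a minimal sketch in $d=2$ (the grid parameters are illustrative) is:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

a, d = 1.0, 2
c = gamma(d/2) * a**d / (2**d * np.pi**(d/2) * gamma((d+1)/2))

kx, ky = np.meshgrid(np.linspace(-4, 4, 401), np.linspace(-4, 4, 401))
k2 = kx**2 + ky**2 + 1e-30     # guard against 0/0 at the origin
psi = c * np.sqrt(k2) * a * np.exp(-k2 * a**2)   # Eq. (radial)

P11 = (1 - kx*kx/k2) * psi     # nonnegative diagonal component
P12 = (-kx*ky/k2) * psi        # signed off-diagonal component
P22 = (1 - ky*ky/k2) * psi
\end{verbatim}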
\begin{figure}[bthp]
\begin{center}
\includegraphics*[ width=2.8in, keepaspectratio,clip=]{fig7a.eps}
\includegraphics*[ width=2.8in, keepaspectratio,clip=]{fig7b.eps}
\includegraphics*[ width=2.8in, keepaspectratio,clip=]{fig7c.eps}\vspace{0.2in}
\includegraphics*[ width=2.5in, keepaspectratio,clip=]{fig7d.eps}
\caption{(Color online) Spectral patterns for the tensor components
of a divergence-free isotropic vector field in $\mathbb{R}^2$ generated
from the radial function (\ref{radial}) with $d=2$, depicted in the bottom panel. Note that unlike the nonnegative 11- and 22-components,
the 12-component can be both positive and negative, and so its color map
indicating zero intensity (darkest shade) is different from those for the diagonal components.}
\label{tensor}
\end{center}
\end{figure}
It is instructive to place some well-known results from the theory
of isotropic turbulence in the context of the generalization of hyperuniformity
to divergence-free random vector fields. For three-dimensional incompressible
turbulent flow with an isotropic velocity field, the radial function ${\tilde \psi}(k)$
appearing in (\ref{spec-tensor})
is simply related to the so-called {\it energy spectrum} of the velocity field, $E(k)$,
via the expression ${\tilde \psi}(k)=E(k)/(4\pi k^2)$.
Thus, we see from the analysis given above that if $E(k)$ goes to zero
faster than $k^2$ in the limit $k \rightarrow 0$, then each component of the
spectral tensor ${\tilde \Psi}_{ij}({\bf k})$ will inherit the radial hyperuniformity
of ${\tilde \psi}(k)$, and hence is independent of the direction in which
the origin is approached. An example of such energy spectra is one due to
Batchelor \cite{Ba59}, where $E(k) \sim k^4$ or ${\tilde \psi}(k) \sim k^2$ in the small wavenumber limit.
Note that the corresponding radial autocovariance function $\psi(r)$ decays
to zero for large $r$ exponentially fast.
On the other hand, if the energy spectrum goes to zero like $k^2$ or slower in the limit $k \rightarrow 0$,
then the value of the spectral tensor will be hyperuniform only in special
directions. An example within the class of energy spectra is one due to
Saffman \cite{Sa67}, where $E(k) \sim k^2$ or ${\tilde \psi}(k) \sim \mbox{constant}$ in the small wavenumber limit.
Here ${\tilde \Psi}_{ij}({\bf k})$ is nonanalytic at $\bf k=0$.
Of course, the significance of energy spectra in turbulence {\it vis-\`a-vis}
hyperuniformity had heretofore not been discussed.
\section{Structural Anisotropy and Hyperuniformity}\vspace{-0.1in}
\label{aniso}
Other classes of disordered systems in which ``directional" hyperuniformity is relevant
include many-particle and heterogeneous systems that are statistically
anisotropic, but otherwise statistically homogeneous; see Figs. \ref{lemniscate} and \ref{nematic}
for two illustrations. In such cases,
the spectral function conditions (\ref{hyper}), (\ref{hyper-2}) and (\ref{hyper-3}) should be replaced
with the following ones, respectively:
\begin{equation}
\lim_{t \rightarrow 0} S(t{\bf k}_Q) = 0,
\label{Hyper}
\end{equation}
\begin{eqnarray}
\lim_{t \rightarrow 0}\tilde{\chi}_{_V}(t\mathbf{k}_Q) = 0,
\label{Hyper-2}
\end{eqnarray}
\begin{eqnarray}
\lim_{t \rightarrow 0}\tilde{\chi}_{_S}(t\mathbf{k}_Q) = 0,
\label{Hyper-3}
\end{eqnarray}
where the vector ${\bf k}_Q$ is defined in Sec. \ref{vector}.
\begin{figure}[bthp]
\begin{center}
{\includegraphics[ width=3.3in, keepaspectratio,clip=]{fig8a-small.eps}\vspace{-0.4in}
\includegraphics[ width=3.3in, keepaspectratio,clip=]{fig8b-small.eps}}\vspace{-0.4in}
\caption{(Color online) Top panel: A targeted scattering pattern showing a lemniscate region around
the origin in which the scattering intensity is exactly zero (darkest shade). This ``stealthy'' pattern clearly shows
that hyperuniformity depends on the direction in which the origin $\bf k=0$
is approached. Bottom panel: A statistically anisotropic ground-state configuration of 10,000 particles
that corresponds
to the unusual scattering pattern shown in the top panel, which is generated using the collective-coordinate
optimization procedure \cite{Uc04b,Zh15a,Zh15b} in a square simulation box under periodic boundary conditions.}
\label{lemniscate}
\end{center}
\end{figure}
Are structurally anisotropic configurations associated with
such exotic spectral functions realizable?
To vividly demonstrate that the answer to this question is in the affirmative,
the collective-coordinate optimization scheme \cite{Uc04b,Ma13,Zh15a,Zh15b} is employed
to produce a many-particle system that is hyperuniform in only certain
directions in Fourier space. This powerful procedure by construction enables
the structure factor to be constrained to take exact targeted values at a subset of wave vectors.
Whenever the structure factor is constrained to be exactly zero
for this subset of wave vectors, the resulting configuration exactly corresponds
to the classical ground state of a long-ranged but bounded pair interaction \cite{Note5}.
For example, one can target stealthy and hyperuniform structure factors that vanish in a
spherical region around the origin (as in Fig. \ref{pattern}) such that the
associated disordered particle configurations are statistically homogeneous and isotropic ground states \cite{Uc04b,Zh15a,Zh15b}.
Targeted anisotropic structure factors have been attained that correspond to statistically anisotropic ground-state
structures with directional pair interactions \cite{Ma13}, but none of the specific targets computed there were hyperuniform. Here we target a lemniscate region
around the origin $\bf k=0$ in Fourier space in which scattering is completely
suppressed, i.e., this entire region is stealthy, but hyperuniform in only certain
directions; see the top panel of Fig. \ref{lemniscate}. The corresponding
disordered ground states are due to directional long-ranged pair interactions that are stronger
in the horizontal direction than in the vertical direction, and hence are
characterized by line-like ``filamentary'' chains of particles that run more or less horizontally.
Such an example is shown in the bottom panel of Fig. \ref{lemniscate}.
These ground states are characterized by direction-dependent physical properties, including optical, acoustic and elastic behaviors.
Interestingly, although such anisotropic ground-state configurations cannot support shear (for similar reasons
as in their isotropic counterparts \cite{Zh15b}), they are generally elastically anisotropic
because the stress tensor is asymmetric, as will be detailed in a future study.
In particular, the asymmetry of the stress tensor is associated with internal force couples that resist out-of-plane torques.
While such behavior is known to occur in liquid crystals and predicted by continuum elastic
theories \cite{De95}, our results are distinguished because the asymmetry of the stress
tensor arises from a microscopic statistical-mechanical model of interacting {\it structureless (point)}
particles. To our knowledge, such a microscopic model has heretofore not been identified.
\begin{figure}
\begin{center}
\includegraphics[ width=2.5in, keepaspectratio,clip=]{fig9.eps}
\caption{(Color online) Schematic illustration of a statistically homogeneous and anisotropic nematic liquid crystal configuration.
An appropriately shaped window that occupies region $\Omega$ is also shown. Here $\bf x_0$ denotes
both the centroidal position and orientation of the window, the latter of which
is chosen generally from a prescribed probability distribution that depends on the specific
structure of interest. }
\label{nematic}
\end{center}
\end{figure}
Many-particle systems that respond to external fields are often characterized
by anisotropic structure factors and hence provide a class of systems
where directional hyperuniformity can potentially arise. Huang, Wang and Holm \cite{Hu05} have carried out molecular dynamics
simulations of colloidal ferrofluids subjected to external fields that capture the salient
structural features observed in corresponding experimental systems as measured
by the structure factor. Structural anisotropy
arises in these systems due to the formation of particle chains that tend to align in the direction
of the applied magnetic field. Figure \ref{ferro} shows an anisotropic structure factor
taken from Ref. \cite{Hu05}. It is apparent that depending on the direction in which the origin
is approached, the structure factor can exhibit effective hyperuniformity.
\begin{figure}[bthp]
\centerline{\includegraphics[ width=2.2in, keepaspectratio,clip=]{fig10.eps}}
\caption{Anisotropic structure factor of a colloidal ferrofluid in the plane in which the particle chains
align, as obtained from Fig. 6 of Ref. \cite{Hu05}. Dark and light regions indicate low and high intensities,
respectively. Note that depending on the direction in which the origin
is approached, the structure factor can exhibit effective hyperuniformity.}
\label{ferro}
\end{figure}
We can generalize the expressions for the number variance of point configurations, as well as the variances for
a structurally anisotropic two-phase medium,
by replacing spherical windows with an appropriately shaped nonspherical
window occupying a region $\Omega$ whose orientation distribution maximizes sensitivity to direction.
This general formulation was given in Ref. \cite{To03a} for point configurations,
but no explicit calculations were presented. Figure \ref{nematic} schematically depicts
a statistically homogeneous, anisotropic nematic liquid crystal configuration
of particles and an appropriate window shape and orientational distribution to distinguish ``directional'' fluctuations associated
with either the centroidal positions, volume fraction, or interfacial
area of the particles. It is clear that window sampling in the direction indicated
in Fig. \ref{nematic} will produce fluctuations that are different from those obtained by sampling in the
orthogonal direction.
Note that the volume-fraction formulas for the autocovariance $\chi_{_V}({\bf r})$
and spectral density ${\tilde \chi}_{_V}({\bf k})$ for sphere packings presented in Sec. \ref{packing-1} apply as well to the more general class of packings of oriented nonspherical particles by a simple replacement of the
spherical particle indicator function (\ref{indicator}) with the following one for a
nonspherical particle that occupies a region $\omega$:
\begin{equation}
m_v({\bf r};{\bf a}) =\Bigg\{{1, {\bf r} \in \omega,\atop{0, {\bf r} \notin \omega,}}
\label{indicator-2}
\end{equation}
where the vector $\bf r$ emanates from the particle centroid and the vector $\bf a$ represents the set of parameters that defines the shape of the particle.
For example, for a $d$-dimensional ellipsoid, this is given explicitly by
\begin{equation}
m_v({\bf r};{\bf a}) =\Bigg\{{1, \;\;\frac{r_1^2}{a_1^2} +\frac{r_2^2}{a_2^2}+\cdots +\frac{r_d^2}{a_d^2} \le 1,\atop{\hspace{-0.75in}0, \;\; \mbox{otherwise},}}
\label{indicator-3}
\end{equation}
where $r_i$ ($i=1,2,\ldots,d$) is the $i$th Cartesian component of $\bf r$ and
$a_1,a_2,\ldots,a_d$ are the semi-axes of the ellipsoid. Of course, the structural
anisotropy for configurations of oriented particles of general shape is reflected in a total correlation
function $h({\bf r})$ or an autocovariance $\chi_{_V}({\bf r})$ that depends not only on the magnitude but also on the direction of $\bf r$.
Observe also that the calculation of $h({\bf r})$ and $\chi_{_V}({\bf r})$ for the special case
of oriented ellipsoids is greatly simplified by exploiting the fact that an ellipsoid
is an affine scale transformation of a sphere \cite{Le83,La90}.
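For completeness, the oriented-ellipsoid indicator (\ref{indicator-3}) is straightforward to implement; the following snippet is purely illustrative:
\begin{verbatim}
import numpy as np

def m_v_ellipsoid(r, semi_axes):
    # Eq. (indicator-3): 1 inside the oriented ellipsoid, 0 outside
    r = np.asarray(r, dtype=float)
    a = np.asarray(semi_axes, dtype=float)
    return 1.0 if np.sum((r / a)**2) <= 1.0 else 0.0

print(m_v_ellipsoid([0.5, 0.0, 0.0], [1.0, 2.0, 3.0]))   # 1.0 (inside)
\end{verbatim}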
Similarly, the surface-area formulas for the autocovariance $\chi_{_S}({\bf r})$ and spectral density ${\tilde \chi}_{_S}({\bf k})$
for sphere packings presented in Sec. \ref{packing-2} still apply
to packings of oriented nonspherical particles when the radial functions $m_s(r;a)$ are replaced with the appropriate
vector-dependent interface indicator function for a particle $m_s({\bf r};{\bf a})$, which is a generalized function that has measure
only on the particle surface. As in the case of anisotropic point configurations, the variances for both volume fraction
and surface area, $\sigma_{_V}^2(R)$ and $\sigma^2_{_S}(R)$, for sphere packings using spherical
windows of radius $R$ can be generalized to allow for anisotropic packings of nonspherical
particles with an appropriately shaped nonspherical window \cite{To03a}.
\section{Conclusions and Discussion}\vspace{-0.1in}
\label{con}
We have generalized the hyperuniformity concept in four different directions:
(1) interfacial area fluctuations in heterogeneous materials; (2) random scalar fields;
(3) divergence-free random vector fields; and (4) statistically anisotropic many-particle
systems and heterogeneous media. These generalizations provide theoreticians and experimentalists with new research avenues to understand a very broad
range of phenomena across a variety of fields through the hyperuniformity ``lens.''
The surface-area variance $\sigma_{_S}^2(R)$ and associated
spectral density function ${\tilde \chi}_{_S}({\bf k})$ could play a new and major role in characterizing the
microstructure of two-phase systems (including fluid-saturated porous media), physical
properties that intimately depend on the interface geometry (such
as reaction rates and fluid permeabilities \cite{To02a}),
and evolving microstructures that depend on interfacial
energies (e.g., spinodal decomposition). It should not go unnoticed that the
volume-fraction and surface-area variances $\sigma_{_V}^2(R)$ and $\sigma_{_S}^2(R)$
quantify fluctuations associated with two of the Minkowski functionals \cite{Sc11}.
In the case of sphere packings, we showed that the surface-area spectral function exhibits stronger and longer-ranged correlations
compared to the volume-fraction spectral function, indicating that the former
is a more sensitive microstructural descriptor.
Little is known about the hyperuniformity of random scalar fields
and its potential significance. Now that we know what to look for
in such contexts, exploration of this uncharted territory
may prove to be profitable. For example, one could imagine designing random scalar fields
to be hyperuniform (e.g., laser speckle patterns) for photonics
applications \cite{Wi08,Dib16}.
Our generalization of the hyperuniformity concept to random vector fields
is the most encompassing to date. This setting generally involves a spectral density tensor,
which of course contains the spectral density of a random scalar field as a special case. Even the restricted class of divergence-free
vector fields that we focused on here revealed the need to extend
the ``isotropic" hyperuniformity notion, since the spectral tensor is nonanalytic
at zero wave vector, i.e., it depends on the direction in which the origin in Fourier space
is approached. Among other results,
we placed well-known energy spectra from the theory
of isotropic turbulence in the context of this generalization of hyperuniformity. More generally, our work provides a motivation to design
random vector fields with targeted directional hyperuniform spectra,
which heretofore has never been considered.
Structurally anisotropic many-particle and heterogeneous systems can also possess
directional hyperuniformity. To illustrate the implications of this generalization,
we presented a disordered directionally hyperuniform many-particle configuration
that remarkably is the ground state associated with a bounded anisotropic pair potential;
see Fig. \ref{lemniscate}. These filamentary-like ground-state configurations
will be characterized by direction-dependent physical properties, including optical and elastic behaviors.
Interestingly, such anisotropic ground-state configurations generally will possess
internal force couples that resist out-of-plane torques, which will be shown in detail elsewhere.
Based on our previous investigations using disordered isotropic ground-state configurations
to produce disordered dielectric network solids with large isotropic band gaps \cite{Fl09b,Man13b},
we expect that one can design directional hyperuniform ground-state configurations to
yield disordered network solids that can be tuned to have photonic and acoustic band gaps with widths
that are relatively uniform for a continuous range of directions and no band gaps
for a different continuous range of directions. Such tunability could have technological
relevance for manipulating light and sound waves in ways heretofore not thought possible.
Moreover, materials made of dense disordered scatterers
that are directionally hyperuniform can be designed to be transparent in selected
directions, as a recent study of traditional hyperuniform systems would suggest \cite{Le16}.
Directional structural hyperuniformity raises the interesting possibility that there
may exist disordered many-particle systems in equilibrium that at positive
temperature $T$ are incompressible in certain directions and compressible in other directions,
which is a highly unusual situation. To understand this proposition, it is
useful to recall the well-known fluctuation-compressibility theorem for a single-component
many-particle system in equilibrium at number density $\rho$ and temperature $T$:
\begin{equation}
\rho k_B T \kappa_T = \lim_{|{\bf k}| \rightarrow 0} S({\bf k}),
\label{comp}
\end{equation}
where $\kappa_T$ is the isothermal compressibility.
We see that in order to have a hyperuniform
system at positive $T$, the isothermal compressibility must be zero; i.e.,
the system must be incompressible \cite{Za11b,To15}. A well-known model
that exhibits such behavior is the one-component plasma \cite{Ja81}.
However, if the system possesses directional structural hyperuniformity,
relation (\ref{comp}) no longer applies. Therefore, one must first generalize
this fluctuation-compressibility theorem to account for directional elastic responses
of the system to different components of stress due to nonanalyticities of
the spectral density at the origin.
While (\ref{comp}) has been extended to treat crystals under certain
restrictions \cite{Still66}, to our knowledge, there is
currently no known generalization of (\ref{comp}) that accounts for the anisotropic
elastic response of a disordered equilibrium system to directional stresses
due to nonanalytic spectral densities. Such a generalization of the fluctuation-compressibility theorem
would enable one to quantify the directions in which the aforementioned hypothesized disordered system is incompressible
or compressible. This represents an intriguing area for future research.
In particular, this possibility challenges experimentalists
to search for such exotic states of matter.
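As a concrete illustration of the diagnostics discussed above, the following minimal Python sketch (not taken from any cited work; all helper names are ours) estimates the structure factor $S({\bf k})=|\sum_j e^{-i{\bf k}\cdot{\bf r}_j}|^2/N$ of a two-dimensional point pattern along two different directions of approach to the origin in Fourier space. A directionally hyperuniform pattern would show $S({\bf k})\to 0$ along some directions only; the Poisson pattern used here as a stand-in tends to unity along every direction.
\begin{verbatim}
# Minimal sketch (illustrative only): probe directional hyperuniformity of a
# 2D point pattern by estimating S(k) = |sum_j exp(-i k.r_j)|^2 / N along
# different directions of k -> 0.  The Poisson pattern below is a stand-in.
import numpy as np

rng = np.random.default_rng(0)
L, N = 100.0, 10000
points = rng.uniform(0.0, L, size=(N, 2))      # Poisson pattern: S(k) -> 1

def structure_factor(points, kvecs):
    # One collective density mode per wave vector; k commensurate with box.
    phases = np.exp(-1j * points @ kvecs.T)     # shape (N, number of k's)
    return np.abs(phases.sum(axis=0))**2 / len(points)

n = np.arange(1, 6)                             # smallest nonzero modes
k_x = (2*np.pi/L) * np.column_stack([n, 0*n])   # approach origin along x
k_d = (2*np.pi/L) * np.column_stack([n, n])     # approach along (1,1)
print("S(k->0) along x:    ", structure_factor(points, k_x))
print("S(k->0) along (1,1):", structure_factor(points, k_d))
\end{verbatim}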
Finally, we note that the hyperuniformity concept has recently been generalized to spin systems,
including a capability to construct disordered stealthy hyperuniform spin configurations as ground states \cite{Ch16a}.
The implications and significance of the existence of such disordered
spin ground states warrant further study, including
whether their bulk physical properties and excited states, like those of their many-particle
system counterparts, are singularly remarkable and can be experimentally realized.
\vspace{-0.2in}
\acknowledgments{
The author is very grateful to Duyu Chen, Jaeuk Kim and Zheng Ma
for their careful reading of the manuscript. He is especially thankful to Duyu Chen and Ge Zhang
for their assistance in creating some of the figures.}
\section{Introduction}
West's stack-sorting map is a function $s$ that sends permutations to permutations; it was defined by West in his dissertation \cite{West} as a deterministic version of a stack-sorting machine introduced by Knuth in \emph{The Art of Computer Programming} \cite{Knuth}. There has been a great deal of interest in $s$ from the point of view of sorting permutations (see \cite{Bona, BonaSurvey, Zeilberger, DefantCounting, DefantTroupes, DefantMonotonicity} and the references therein) since, as is easily verified, $s^{n-1}$ sends every permutation in $S_n$ to the identity permutation $123\cdots n$. The stack-sorting map also has interesting properties that closely link it with other parts of combinatorics such as combinatorial free probability theory (see \cite{DefantCatalan, DefantEngenMiller, DefantTroupes, Hanna} and the references therein).
Many of the classical questions about the stack-sorting map concern the notion of a \emph{$t$-stack-sortable} permutation, which is a permutation $\pi$ such that $s^t(\pi)$ is increasing. Knuth \cite{Knuth} initiated both the study of stack-sorting and the study of permutation patterns when he showed that a permutation is $1$-stack-sortable if and only if it avoids the pattern $231$. He was also the first to use what is now called the ``kernel method'' (see \cite{Banderier} for details) when he proved that the number of $1$-stack-sortable (i.e., $231$-avoiding) permutations in $S_n$ is the $n^\text{th}$ Catalan number $\frac{1}{n+1}\binom{2n}{n}$. West \cite{West} characterized $2$-stack-sortable permutations and formulated the conjecture, which Zeilberger \cite{Zeilberger} later proved, that the number of $2$-stack-sortable permutations in $S_n$ is $\frac{2}{(n+1)(2n+1)}\binom{3n}{n}$. \'Ulfarsson \cite{Ulfarsson} found a complicated characterization of $3$-stack-sortable permutations in terms of what he called ``decorated patterns,'' but it was not until recently that a polynomial-time algorithm for counting $3$-stack-sortable permutations was found in \cite{DefantCounting}. It is likely that there is no simple formula that enumerates $3$-stack-sortable permutations, and $4$-stack-sortable permutations are probably even more unwieldy.
In \cite{Bousquet}, Bousquet-M\'elou defined a permutation to be \emph{sorted} if it is in the image of $s$, and she found a recurrence relation that can be used to count sorted permutations. However, the asymptotics of the sequence enumerating sorted permutations is still not well understood; the current author recently proved that the limit $\displaystyle\lim_{n\to\infty}\left(|s(S_n)|/n!\right)^{1/n}$ exists and lies between $0.68631$ and $0.75260$ (the proof of the upper bound makes use of free probability theory and generalized hypergeometric functions). We say a permutation is \emph{$t$-sorted} if it is in the image of $s^t$. The article \cite{DefantDescents} proves that the maximum number of descents that a $t$-sorted permutation of length $n$ can have is $\left\lfloor\frac{n-t}{2}\right\rfloor$ and also characterizes the permutations that achieve this maximum when $n\equiv t\pmod 2$.
In this paper, we continue the study of $t$-sorted permutations in $S_n$, focusing on their characterization and enumeration when $t$ is close to $n$; in this case, we casually call $t$-sorted permutations of length $n$ \emph{highly sorted}. Our motivation for this line of work comes from the recent article \cite{Asinowski}, which thoroughly explores many aspects of the pop-stack-sorting map $\mathsf{Pop}$ (an interesting variant of West's stack-sorting map). Just as $s^{n-1}(S_n)=\{123\cdots n\}$, it is known (though surprisingly difficult to prove) that $\mathsf{Pop}^{n-1}(S_n)=\{123\cdots n\}$. The paper \cite{Asinowski} gives a very nice characterization of the set $\mathsf{Pop}^{n-2}(S_n)$. For each fixed $m\geq 1$, we will characterize and enumerate the sets $s^{n-m}(S_n)$ for all $n\geq 2m-2$.
The notion of a $t$-sorted permutation is in some sense dual to that of a $t$-stack-sortable permutation. Hence, another motivation for studying highly sorted permutations comes from results in the literature concerning $t$-stack-sortable permutations of length $n$ for $t$ close to $n$. West \cite{West} showed that a permutation in $S_n$ is $(n-2)$-stack-sortable if and only if it does not end in the suffix $n1$. He also characterized and enumerated the $(n-3)$-stack-sortable permutations in $S_n$.
Claesson, Dukes, and Steingr\'imsson \cite{Claessonn-4} continued this line of work by characterizing and enumerating $(n-4)$-stack-sortable permutations in $S_n$. In the same article, the authors conjectured that for every fixed $m\geq 1$, there exist positive integers $a_0,\ldots,a_{m-1}$ such that the number of $(n-m)$-stack-sortable permutations in $S_n$ that are not $(n-m-1)$-stack-sortable is $\frac{(m-1)!(n-m-1)!}{(2m-2)!}\sum_{i=0}^{m-1}a_i{n-2m\choose i}$ for all $n\geq 2m$. One can think of Theorem~\ref{Thm1} below as a sort of dual of this conjecture.
We need just a bit more notation in order to state our main result. Throughout this paper, a \emph{permutation} is an ordering of a finite set of positive integers (for example, we consider $2584$ to be a permutation). We write $S_n$ for the set of permutations of the set $[n]:=\{1,\ldots,n\}$. A \emph{descent} of a permutation $\pi=\pi_1\cdots\pi_n$ is an index $i\in[n-1]$ such that $\pi_i>\pi_{i+1}$. If $i$ is a descent of $\pi$, then we call the entry $\pi_i$ a \emph{descent top} of $\pi$. A \emph{left-to-right maximum} of $\pi$ is an entry $\pi_j$ such that $\pi_j>\pi_\ell$ for all $1\leq \ell<j$; let $\operatorname{LRmax}(\pi)$ denote the set of left-to-right maxima of $\pi$. The \emph{tail length} of a permutation $\pi=\pi_1\cdots\pi_n\in S_n$, denoted $\operatorname{tl}(\pi)$, is the largest integer $\ell\in\{0,\ldots,n\}$ such that $\pi_i=i$ for all $i\in\{n-\ell+1,\ldots,n\}$. For example, $\operatorname{tl}(23145)=2$, $\operatorname{tl}(23154)=0$, and $\operatorname{tl}(12345)=5$. Recall that the $n^\text{th}$ \emph{Bell number} $B_n$ is defined to be the number of set partitions of the set $[n]$. The Bell numbers form the OEIS sequence A000110 \cite{OEIS} and can alternatively be defined via their exponential generating function \[\sum_{n\geq 0}B_n\frac{x^n}{n!}=e^{e^x-1}.\]
\begin{theorem}\label{Thm1}
Let $m$ and $n$ be positive integers such that $n\geq 2m-2$. A permutation $\pi\in S_n$ is in the image of $s^{n-m}$ if and only if $\operatorname{tl}(\pi)\geq n-m$ and every descent top of $\pi$ is a left-to-right maximum of $\pi$. Consequently, \[|s^{n-m}(S_n)|=B_m.\]
\end{theorem}
One might ask if the hypothesis $n\geq 2m-2$ in the previous theorem can be replaced by, say, $n\geq 2m-3$. The next theorem shows that it cannot. For $3\leq \ell\leq 2m-3$, we define $\zeta_{\ell,m}$ to be the permutation $\ell21345\cdots(\ell-1)(\ell+1)(\ell+2)\cdots (2m-3)$ in $S_{2m-3}$. This is the permutation obtained by swapping the entries $1$ and $2$ in the identity permutation $123\cdots (2m-3)$ and then moving the entry $\ell$ to the beginning of the permutation. For example, $\zeta_{3,3}=321$, $\zeta_{3,4}=32145$, $\zeta_{4,4}=42135$, and $\zeta_{5,4}=52134$.
\begin{theorem}\label{Thm2}
Let $m\geq 3$ be an integer. A permutation $\pi\in S_{2m-3}$ is in the image of $s^{m-3}$ if and only if one of the following holds:
\begin{itemize}
\item $\operatorname{tl}(\pi)\geq m-3$ and every descent top of $\pi$ is a left-to-right maximum of $\pi$;
\item $\pi=\zeta_{\ell,m}$ for some $\ell\in\{3,\ldots,m\}$.
\end{itemize}
Consequently, \[|s^{m-3}(S_{2m-3})|=B_m+m-2.\]
\end{theorem}
\section{Preliminaries}
A \emph{permutation} is an ordering of a finite set of positive integers, which we write as a word in one-line notation. Let $S_n$ denote the set of permutations of $[n]$. The \emph{standardization} of a permutation $\pi=\pi_1\cdots\pi_n$ is the permutation obtained by replacing the $i^\text{th}$-smallest entry in $\pi$ with $i$ for all $i$. For example, the standardization of $3869$ is $1324$. We say entries $b,a$ \emph{form an occurrence of the pattern $21$ in $\pi$} if $a<b$ and $b$ appears to the left of $a$ in $\pi$. We say entries $b,c,a$ \emph{form an occurrence of the pattern $231$ in $\pi$} if $a<b<c$ and the entries $b,c,a$ appear in this order in $\pi$. We will also make use of the barred pattern $32\overline{4}1$. We say $\pi$ \emph{contains} $32\overline{4}1$ if there are indices $i_1<i_2<i_3$ such that $\pi_{i_1}>\pi_{i_2}>\pi_{i_3}$ and such that $\pi_j<\pi_{i_1}$ whenever $i_2<j<i_3$. In this case, we say $\pi_{i_1},\pi_{i_2},\pi_{i_3}$ \emph{form an occurrence of the pattern $32\overline{4}1$ in $\pi$}. If $\pi$ does not contain $32\overline{4}1$, we say it \emph{avoids} $32\overline{4}1$. Let $\operatorname{Av}_n(32\overline{4}1)$ be the set of permutations in $S_n$ that avoid $32\overline{4}1$.
The importance of the barred pattern $32\overline{4}1$ for our purposes comes from the following result due to Callan. We include its short proof for the sake of completeness.
\begin{theorem}[\!\!\cite{Callan}]\label{Lem3241}
A permutation $\pi$ avoids $32\overline{4}1$ if and only if every descent top of $\pi$ is a left-to-right maximum of $\pi$. Furthermore, $|\operatorname{Av}_n(32\overline{4}1)|=B_n$.
\end{theorem}
\begin{proof}
Write $\pi=\pi_1\cdots\pi_n$. If $\pi_i$ is a descent top of $\pi$ that is not a left-to-right maximum of $\pi$, then there exists $j<i$ such that $\pi_j>\pi_i$. Then $\pi_j,\pi_i,\pi_{i+1}$ form an occurrence of the pattern $32\overline{4}1$ in $\pi$.
Conversely, suppose entries $c,b,a$ form an occurrence of $32\overline{4}1$ in $\pi$. Let $b'$ be the rightmost entry that appears between $c$ and $a$ in $\pi$ and is greater than $a$. We cannot have $b'>c$ since $c,b,a$ form an occurrence of $32\overline{4}1$. Therefore, $b'$ is a descent top of $\pi$ that is not a left-to-right maximum of $\pi$.
Given $\pi\in\operatorname{Av}_n(32\overline{4}1)$, let $j(1)<\cdots<j(\ell)$ be the indices such that $\pi_{j(1)},\ldots,\pi_{j(\ell)}$ are the left-to-right maxima of $\pi$, and set $j(\ell+1)=n+1$. For $1\leq r\leq\ell$, let $\beta_r(\pi)=\{\pi_i:j(r)\leq i<j(r+1)\}$. Let $\mathcal B(\pi)=\{\beta_1(\pi),\ldots,\beta_{\ell}(\pi)\}$. It is straightforward to check that the map $\mathcal B$ is a bijection from $\operatorname{Av}_n(32\overline{4}1)$ to the set of set partitions of $[n]$. Hence, $|\operatorname{Av}_n(32\overline{4}1)|=B_n$.
\end{proof}
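As an independent sanity check of Theorem~\ref{Lem3241} (not part of the proof; the helper names below are ours), the following Python sketch counts the permutations in $S_n$ whose descent tops are all left-to-right maxima and compares the counts with the Bell numbers.
\begin{verbatim}
# Brute-force check of Theorem [Callan] for small n (illustrative only).
from itertools import permutations

def avoids_32bar41(pi):
    # Equivalent test: every descent top of pi is a left-to-right maximum.
    prefix_max = 0
    for i in range(len(pi) - 1):
        if pi[i] > pi[i + 1] and pi[i] < prefix_max:
            return False
        prefix_max = max(prefix_max, pi[i])
    return True

def bell(n):
    # Bell triangle recurrence; row[0] equals B_n after n iterations.
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

for n in range(1, 8):
    count = sum(avoids_32bar41(p) for p in permutations(range(1, n + 1)))
    assert count == bell(n)     # 1, 2, 5, 15, 52, 203, 877
\end{verbatim}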
We now define the stack-sorting map $s$. Assume we are given an input permutation $\pi=\pi_1\cdots\pi_n$. Throughout this procedure, if the next entry in the input permutation is smaller than the entry at the top of the stack or if the stack is empty, the next entry in the input permutation is placed at the top of the stack. Otherwise, the entry at the top of the stack is annexed to the end of the growing output permutation. This procedure stops when the output permutation has length $n$. We then define $s(\pi)$ to be this output permutation. Figure~\ref{Fig1} illustrates this procedure and shows that $s(4162)=1426$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1\linewidth]{FertilityPIC1.pdf}
\caption{The stack-sorting map $s$ sends $4162$ to $1426$.}
\label{Fig1}
\end{center}
\end{figure}
There is also a simple recursive definition of the map $s$. First, we declare that $s$ sends the empty permutation to itself. Given a nonempty permutation $\pi$, we can write $\pi=LmR$, where $m$ is the largest entry in $\pi$. We then define $s(\pi)=s(L)s(R)m$. For example,
\[s(5273614)=s(52)\,s(3614)\,7=s(2)\,5\,s(3)\,s(14)\,67=253\,s(1)\,467=2531467.\]
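Both descriptions of $s$ are easy to implement; the following Python sketch (with helper names of our choosing) realizes each one and checks them against the examples above.
\begin{verbatim}
# Two equivalent implementations of the stack-sorting map s (illustrative).
def s_iterative(pi):
    # Pass pi through a stack: push while the next entry is smaller than
    # the top of the stack; otherwise pop to the output.
    stack, out = [], []
    for x in pi:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:
        out.append(stack.pop())
    return tuple(out)

def s_recursive(pi):
    # Recursive definition: s(L m R) = s(L) s(R) m, m the largest entry.
    if not pi:
        return ()
    i = pi.index(max(pi))
    return s_recursive(pi[:i]) + s_recursive(pi[i + 1:]) + (pi[i],)

assert s_iterative((4, 1, 6, 2)) == s_recursive((4, 1, 6, 2)) == (1, 4, 2, 6)
assert s_recursive((5, 2, 7, 3, 6, 1, 4)) == (2, 5, 3, 1, 4, 6, 7)
\end{verbatim}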
We now collect some basic facts about the stack-sorting map that will be useful in the next section. The following lemma is an immediate consequence of either definition of $s$.
\begin{lemma}\label{Lem231}
Let $\pi$ be a permutation. Two entries $b,a$ form an occurrence of the pattern $21$ in $s(\pi)$ if and only if there is an entry $c$ such that $b,c,a$ form an occurrence of the pattern $231$ in $\pi$.
\end{lemma}
Given a nonempty permutation $\pi$, we define $\operatorname{del}_1(\pi)$ to be the permutation obtained by deleting the smallest entry from $\pi$. For example, $\operatorname{del}_1(49628)=4968$.
\begin{lemma}\label{Lem:del}
If $\pi$ is a nonempty permutation, then $s^t(\operatorname{del}_1(\pi))=\operatorname{del}_1(s^t(\pi))$ for every $t\geq 0$.
\end{lemma}
\begin{proof}
It suffices to prove the case in which $t=1$; the general case will then follow by induction on $t$. We proceed by induction on the length $n$ of $\pi$, noting first that the proof is trivial if $n=1$. Assume $n\geq 2$. Write $\pi=LmR$, where $m$ is the largest entry in $\pi$. If the smallest entry in $\pi$ is in $L$, then $s(\operatorname{del}_1(\pi))=s(\operatorname{del}_1(L)mR)
=s(\operatorname{del}_1(L))s(R)m=\operatorname{del}_1(s(L))s(R)m=\operatorname{del}_1(s(L)s(R)m)=\operatorname{del}_1(s(\pi))$, where we have used the induction hypothesis to see that $s(\operatorname{del}_1(L))=\operatorname{del}_1(s(L))$. Similarly, if the smallest entry in $\pi$ is in $R$, then $s(\operatorname{del}_1(\pi))=s(Lm\operatorname{del}_1(R))
=s(L)s(\operatorname{del}_1(R))m=s(L)\operatorname{del}_1(s(R))m=\operatorname{del}_1(s(L)s(R)m)=\operatorname{del}_1(s(\pi))$.
\end{proof}
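The commutation in Lemma~\ref{Lem:del} is also easy to test empirically; the short Python sketch below (names ours) verifies the case $t=1$, from which the general case follows by induction, on random permutations.
\begin{verbatim}
# Randomized check of Lemma [del] for t = 1 (illustrative only).
import random

def s(pi):
    if not pi:
        return ()
    i = pi.index(max(pi))
    return s(pi[:i]) + s(pi[i + 1:]) + (pi[i],)

def del1(pi):
    m = min(pi)
    return tuple(x for x in pi if x != m)

for _ in range(1000):
    pi = tuple(random.sample(range(1, 101), 8))   # permutation of an 8-set
    assert s(del1(pi)) == del1(s(pi))
\end{verbatim}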
\begin{lemma}\label{Lem:Increasing}
Let $\sigma\in S_n$ be a permutation whose last entry is $1$, and let $t\geq 0$. The entries to the right of $1$ in $s^t(\sigma)$ appear in increasing order in $s^t(\sigma)$.
\end{lemma}
\begin{proof}
The lemma is vacuously true if $t=0$, so we may assume $t\geq 1$ and proceed by induction on $t$. We can write $s^{t-1}(\sigma)=L1R$, where $R$ is an increasing permutation. Suppose by way of contradiction that there are entries $a,b$ appearing to the right of $1$ in $s^t(\sigma)$ such that $b$ appears to the left of $a$ and $a<b$. Then $b,a$ form an occurrence of the pattern $21$ in $s^t(\sigma)$, so it follows from Lemma~\ref{Lem231} that there is an entry $c$ such that $b,c,a$ form an occurrence of $231$ in the permutation $s^{t-1}(\sigma)=L1R$. Notice that $c$ must be in $L$ because $R$ is an increasing permutation. However, this means that $b,c,1$ form an occurrence of the pattern $231$ in $s^{t-1}(\sigma)$, so $b,1$ form an occurrence of the pattern $21$ in $s^t(\sigma)$. This contradicts the fact that $b$ appears to the right of $1$ in $s^t(\sigma)$.
\end{proof}
The next lemma follows immediately from a simple inductive argument and the definition of $s$; it is the reason why $s^{n-1}(S_n)=\{123\cdots n\}$.
\begin{lemma}\label{Lem:tl}
If $\sigma\in S_n$ and $t\geq 0$, then $\operatorname{tl}(s^t(\sigma))\geq t$.
\end{lemma}
Theorems~\ref{Thm1} and \ref{Thm2}, which we prove in the next section, determine the sizes of $s^{n-m}(S_n)$ when $n\geq 2m-3$. We end this section with a simple proposition that gives some information about the sizes of $s^{n-m}(S_n)$ for all $n\geq m$ when $m$ is fixed.
\begin{proposition}\label{Prop2}
Fix a positive integer $m$. The sequence $(|s^{n-m}(S_n)|)_{n\geq m}$ is nonincreasing.
\end{proposition}
\begin{proof}
Suppose $n\geq m+1$ and $\pi\in s^{n-m}(S_n)$. Let $\sigma$ be such that $s^{n-m}(\sigma)=\pi$. We can write $\pi=\pi^*n$ and $s(\sigma)=\tau n$ for some $\pi^*,\tau\in S_{n-1}$. We have \[\pi^*n=\pi=s^{n-m}(\sigma)=s^{n-m-1}(\tau n)=s^{n-m-1}(\tau)n,\] so $\pi^*=s^{n-m-1}(\tau)\in s^{(n-1)-m}(S_{n-1})$. This shows that the map $\pi\mapsto \pi^*$ is an injection from $s^{n-m}(S_n)$ to $s^{(n-1)-m}(S_{n-1})$.
\end{proof}
Theorem~\ref{Thm2} and Proposition~\ref{Prop2} tell us that $|s^{n-m}(S_n)|\geq B_m+m-2$ whenever $m\leq n\leq 2m-3$.
\section{Proofs of the Main Theorems}
We now establish some results that will lead up to the proofs of Theorems~\ref{Thm1} and \ref{Thm2}. Recall that $\operatorname{LRmax}(\pi)$ denotes the set of left-to-right maxima of a permutation $\pi$. The reader may find it helpful to refer to Example~\ref{Exam1} for an illustration of the next lemma's proof.
\begin{lemma}\label{Lem1}
If $\pi$ is a permutation that avoids $32\overline{4}1$ and ends in its largest entry, then there exists $\sigma\in s^{-1}(\pi)$ such that $\sigma$ avoids $32\overline{4}1$ and $\operatorname{LRmax}(\sigma)=\operatorname{LRmax}(\pi)$.
\end{lemma}
\begin{proof}
Let $n$ be the length of $\pi$, and write $\pi=\pi_1\cdots\pi_n$. The lemma is trivial if $n=1$, so we may assume $n\geq 2$ and proceed by induction on $n$. The lemma is true for $\pi$ if and only if it is true for the standardization of $\pi$, so we may assume without loss of generality that $\pi\in S_n$. Let $r$ be the index such that $\pi_r=1$. Let $\pi'=\operatorname{del}_1(\pi)$. Since $\pi'$ is a permutation of length $n-1$ that avoids $32\overline{4}1$ and ends in its largest entry, it follows by induction that there exists $\sigma'\in s^{-1}(\pi')$ such that $\sigma'$ avoids $32\overline{4}1$ and $\operatorname{LRmax}(\sigma')=\operatorname{LRmax}(\pi')$. Suppose for the moment that $r=1$, and let $\sigma=1\sigma'$. It follows immediately from the definition of $s$ that $s(\sigma)=1s(\sigma')=1\pi'=\pi$. Furthermore, $\sigma$ avoids $32\overline{4}1$ because $\sigma'$ does. Finally, \[\operatorname{LRmax}(\sigma)=\{1\}\cup\operatorname{LRmax}(\sigma')=\{1\}\cup\operatorname{LRmax}(\pi')=\operatorname{LRmax}(\pi).\]
Now assume $r\geq 2$, and let $a=\pi_{r-1}$. Since $\pi$ avoids $32\overline{4}1$ and $a$ is a descent top of $\pi$, we know by Theorem~\ref{Lem3241} that $a\in\operatorname{LRmax}(\pi)=\operatorname{LRmax}(\pi')=\operatorname{LRmax}(\sigma')$. We assumed that $\pi$ ends in its largest entry, so $a$ is not the largest entry of $\pi$. Therefore, $a$ is not the largest entry of $\sigma'$. Because $a\in\operatorname{LRmax}(\sigma')$, there must be an entry to the right of $a$ in $\sigma'$ that is larger than $a$; among all such entries, let $b$ be the one that is farthest to the left in $\sigma'$. Let $\sigma$ be the permutation obtained by inserting the entry $1$ immediately after $b$ in $\sigma'$. Note that $\operatorname{LRmax}(\sigma)=\operatorname{LRmax}(\sigma')=\operatorname{LRmax}(\pi)$. In particular, $a\in\operatorname{LRmax}(\sigma)$. Since $b$ is the leftmost entry in $\sigma$ that is larger than $a$ and to the right of $a$, we must have $b\in\operatorname{LRmax}(\sigma)$. Because $\sigma'$ avoids $32\overline{4}1$, we know by Theorem~\ref{Lem3241} that every descent top of $\sigma'$ is in $\operatorname{LRmax}(\sigma')$. Every descent top of $\sigma$, except possibly $b$, is a descent top of $\sigma'$. It follows that every descent top of $\sigma$ is in $\operatorname{LRmax}(\sigma)$, so $\sigma$ avoids $32\overline{4}1$.
We are left to prove that $s(\sigma)=\pi$. Imagine applying the stack-sorting procedure to $\sigma$. Because $a$ is a left-to-right maximum of $\sigma$, it will never sit on top of any entries in the stack. By the choice of $b$, there must be a point in time during the procedure when $b$ is next in line to enter the stack and $a$ is the only entry in the stack. In the next steps in the procedure, $a$ is popped out, $b$ is pushed in, $1$ is pushed in, and then $1$ is popped out. It follows that $1$ appears immediately to the right of $a$ in $s(\sigma)$. Recall that $1$ also appears immediately to the right of $a$ in $\pi$. We know that $\operatorname{del}_1(\pi)=\pi'=s(\sigma')=s(\operatorname{del}_1(\sigma))=\operatorname{del}_1(s(\sigma))$ by Lemma~\ref{Lem:del}. Therefore, $s(\sigma)=\pi$.
\end{proof}
\begin{example}\label{Exam1}
Let us give a concrete example of the proof of Lemma~\ref{Lem1}. Suppose $\pi=527148369$. We have $r=4$ and $\pi'=52748369$. We can take $\sigma'=57284936$ since this permutation avoids $32\overline 41$, has the same left-to-right maxima as $\pi$, and satisfies $s(\sigma')=\pi'$. Because $r\geq 2$, we put $a=\pi_{r-1}=7$. The entries that are larger than $a$ and to the right of $a$ in $\sigma'$ are $8$ and $9$; we choose $b$ to be the one that is farthest to the left in $\sigma'$, which is $8$. Then $\sigma=572814936$. Observe that $\sigma$ does indeed avoid $32\overline 41$ and satisfy $s(\sigma)=\pi$. \hfill$\lozenge$
\end{example}
\begin{proposition}\label{Prop1}
If $\pi\in \operatorname{Av}_n(32\overline{4}1)$ has tail length $\ell$, then $\pi\in s^\ell(\operatorname{Av}_n(32\overline{4}1))$.
\end{proposition}
\begin{proof}
The statement is trivial if $\ell=0$, and it is immediate from Lemma~\ref{Lem1} if $\ell=1$. Let us now assume $\ell\geq 2$ and proceed by induction on $\ell$. Let $\pi^*$ be the permutation in $\operatorname{Av}_{n-1}(32\overline{4}1)$ obtained by removing the entry $n$ from $\pi$. Since $\pi^*$ has tail length $\ell-1$, it follows by induction that there exists $\tau\in\operatorname{Av}_{n-1}(32\overline{4}1)$ such that $s^{\ell-1}(\tau)=\pi^*$. Now consider the permutation $\tau n$, which avoids $32\overline{4}1$ and ends in its largest entry. By Lemma~\ref{Lem1}, there exists $\sigma\in\operatorname{Av}_n(32\overline{4}1)$ such that $s(\sigma)=\tau n$. We have $s^\ell(\sigma)=s^{\ell-1}(\tau n)=s^{\ell-1}(\tau)n=\pi^* n=\pi$, as desired.
\end{proof}
\begin{lemma}\label{Lem2}
Suppose $\pi\in S_n$ contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$, and suppose $\sigma\in s^{-1}(\pi)$. Then $\sigma$ also contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$. Furthermore, there exist distinct entries $c,d\in\{3,\ldots,n\}$ that appear to the left of $1$ in $\sigma$ and appear to the right of $1$ in $\pi$.
\end{lemma}
\begin{proof}
Write $\pi=\pi_1\cdots\pi_n$. Let $r$ be the index such that $\pi_r=1$. Let $a=\pi_{r-1}$. Since $\pi$ contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$, there must be an entry $b>a$ that appears to the left of $a$ in $\pi$. Since $b,a$ form an occurrence of the pattern $21$ in $\pi$, it follows from Lemma~\ref{Lem231} that there is an entry $c>b$ such that $b,c,a$ form an occurrence of the pattern $231$ in $\sigma$; among all such entries $c$, choose the one that appears farthest to the right in $\sigma$. This choice of $c$ implies that there are no entries between $c$ and $a$ in $\sigma$ that are greater than $c$. It follows that $c$ appears to the right of $a$ in $s(\sigma)=\pi$. Because $\pi_{r-1}=a$ and $\pi_r=1$, the entry $c$ must appear to the right of $1$ in $\pi$. By Lemma~\ref{Lem231}, there cannot be an entry that is greater than $c$ and lies between $c$ and $1$ in $\sigma$. Hence, $c,a,1$ form an occurrence of the pattern $32\overline{4}1$ in $\sigma$.
To prove the second statement of the lemma, let $d$ be the rightmost entry in $\sigma$ that is greater than $a$ and lies between $a$ and $1$ in $\sigma$; note that such an entry necessarily exists because $a,1$ form an occurrence of the pattern $21$ in $\pi$. Since $c$ and $d$ are both greater than $a$, which is itself greater than $1$, we must have $c,d\in\{3,\ldots,n\}$. Furthermore, $c$ and $d$ appear to the left of $1$ in $\sigma$. We saw above that $c$ lies to the right of $1$ in $\pi$. Our choice of $d$ implies that there are no entries greater than $d$ lying between $d$ and $1$ in $\sigma$, so $d$ also lies to the right of $1$ in $\pi$.
\end{proof}
We can now complete the proofs of Theorems~\ref{Thm1} and \ref{Thm2}.
\begin{proof}[Proof of Theorem~\ref{Thm1}]
First, suppose that $\operatorname{tl}(\pi)\geq n-m$ and that every descent top of $\pi$ is a left-to-right maximum of $\pi$. Theorem~\ref{Lem3241} tells us that $\pi\in \operatorname{Av}_n(32\overline{4}1)$, so it follows from Proposition~\ref{Prop1} that $\pi\in s^{\operatorname{tl}(\pi)}(\operatorname{Av}_n(32\overline{4}1))\subseteq s^{\operatorname{tl}(\pi)}(S_n)\subseteq s^{n-m}(S_n)$.
We now prove that if $n\geq 2m-2$ and $\pi\in s^{n-m}(S_n)$, then $\operatorname{tl}(\pi)\geq n-m$ and $\pi\in\operatorname{Av}_n(32\overline{4}1)$ (again, the latter condition is equivalent to the condition that every descent top of $\pi$ is a left-to-right maximum of $\pi$). When $m=1$, the claim is easy since $s^{n-1}(S_n)=\{123\cdots n\}$. We now assume $m\geq 2$ and proceed by induction on $m$. Suppose $\pi=s^{n-m}(\sigma)$ for some $\sigma\in S_n$. We already know from Lemma~\ref{Lem:tl} that $\operatorname{tl}(\pi)\geq n-m$, so we need only show that $\pi$ avoids $32\overline{4}1$. Let $\widehat\pi$ and $\widehat\sigma$ be the standardizations of $\operatorname{del}_1(\pi)$ and $\operatorname{del}_1(\sigma)$, respectively. By Lemma~\ref{Lem:del}, we have \[\operatorname{del}_1(\pi)=\operatorname{del}_1(s^{n-m}(\sigma))=s^{n-m}(\operatorname{del}_1(\sigma)),\] so $\widehat\pi=s^{n-m}(\widehat\sigma)$. This shows that $\widehat\pi\in s^{(n-1)-(m-1)}(S_{n-1})$, so it follows by induction that $\widehat\pi$ avoids $32\overline{4}1$. Consequently, $\operatorname{del}_1(\pi)$ avoids $32\overline{4}1$. To complete the proof, it suffices to show that $\pi$ does not contain an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$. Assume by way of contradiction that
$\pi$ does contain an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$. For $0\leq i\leq n-m$, let $\sigma^{(i)}=s^{n-m-i}(\sigma)$. Thus, $\sigma^{(0)}=\pi$ and $\sigma^{(n-m)}=\sigma$. Since $\sigma^{(i)}\in s^{-1}(\sigma^{(i-1)})$, we can use Lemma~\ref{Lem2} and induction on $i$ to see that each of the permutations $\sigma^{(i)}$ contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$. Let $r_i$ be the index such that the $r_i^\text{th}$ entry in $\sigma^{(i)}$ is $1$. Note that every entry appearing to the right of $1$ in $\sigma^{(i+1)}$ also appears to the right of $1$ in $\sigma^{(i)}$. Furthermore, Lemma~\ref{Lem2} tells us that there are at least $2$ entries that lie to the left of $1$ in $\sigma^{(i+1)}$ and lie to the right of $1$ in $\sigma^{(i)}$. It follows that $r_i\leq r_{i+1}-2$ for all $0\leq i\leq n-m-1$. Consequently, $r_0\leq r_{n-m}-2(n-m)\leq n-2(n-m)=2m-n\leq 2m-(2m-2)=2$. This says that $1$ is either the first or second entry of $\pi$, which contradicts our assumption that $\pi$ contains an occurrence of the pattern $32\overline{4}1$ involving $1$.
We have shown that a permutation $\pi\in S_n$ is in the image of $s^{n-m}$ if and only if it avoids $32\overline{4}1$ and has tail length at least $n-m$. It follows that $|s^{n-m}(S_n)|=|\operatorname{Av}_m(32\overline{4}1)|$, and we know by Theorem~\ref{Lem3241} that $|\operatorname{Av}_m(32\overline{4}1)|$ is the Bell number $B_m$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Thm2}]
Let $\pi\in S_{2m-3}$ for some $m\geq 3$. If $\operatorname{tl}(\pi)\geq m-3$ and every descent top of $\pi$ is a left-to-right maximum of $\pi$, then $\pi\in \operatorname{Av}_{2m-3}(32\overline{4}1)$, so it follows from Proposition~\ref{Prop1} that $\pi\in s^{\operatorname{tl}(\pi)}(\operatorname{Av}_{2m-3}(32\overline{4}1))\subseteq s^{\operatorname{tl}(\pi)}(S_{2m-3})\subseteq s^{m-3}(S_{2m-3})$. Now suppose $\pi=\zeta_{\ell,m}$ for some $\ell\in\{3,\ldots,m\}$. Let \[\xi_{\ell,m}=\ell(m+1)(m+2)\cdots (2m-3)23\cdots(\ell-1)(\ell+1)\cdots m1.\] In other words, $\xi_{\ell,m}$ is obtained from the permutation $(m+1)(m+2)\cdots (2m-3)123\cdots m$ by moving the entry $\ell$ to the beginning and moving the entry $1$ to the end. It is straightforward to check that $s^{m-3}(\xi_{\ell,m})=\zeta_{\ell,m}$, so $\pi=\zeta_{\ell,m}$ is in the image of $s^{m-3}$.
To prove the converse, suppose $\pi=s^{m-3}(\sigma)$ for some $\sigma\in S_{2m-3}$. We know by Lemma~\ref{Lem:tl} that $\operatorname{tl}(\pi)\geq m-3$. Now suppose that not every descent top of $\pi$ is in $\operatorname{LRmax}(\pi)$; we will prove that $\pi=\zeta_{\ell,m}$ for some $\ell\in\{3,\ldots,m\}$. Let $\widehat\pi$ and $\widehat\sigma$ be the standardizations of $\operatorname{del}_1(\pi)$ and $\operatorname{del}_1(\sigma)$, respectively. By Lemma~\ref{Lem:del}, we have \[\operatorname{del}_1(\pi)=\operatorname{del}_1(s^{m-3}(\sigma))=s^{m-3}(\operatorname{del}_1(\sigma)),\] so $\widehat\pi=s^{m-3}(\widehat\sigma)$. This shows that $\widehat\pi\in s^{(2m-4)-(m-1)}(S_{2m-4})$, so Theorem~\ref{Thm1} guarantees that $\widehat\pi$ avoids $32\overline{4}1$. Consequently, $\operatorname{del}_1(\pi)$ avoids $32\overline{4}1$. Since not every descent top of $\pi$ is in $\operatorname{LRmax}(\pi)$, it must be the case that
$\pi$ contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$.
For $0\leq i\leq m-3$, let $\sigma^{(i)}=s^{m-3-i}(\sigma)$. Thus, $\sigma^{(0)}=\pi$ and $\sigma^{(m-3)}=\sigma$. Since $\sigma^{(i)}\in s^{-1}(\sigma^{(i-1)})$, we can use Lemma~\ref{Lem2} and induction on $i$ to see that each of the permutations $\sigma^{(i)}$ contains an occurrence of the pattern $32\overline{4}1$ that involves the entry $1$. Let $R_i$ be the set of entries that appear to the right of $1$ in $\sigma^{(i)}$. Note that $R_0\supseteq R_1\supseteq\cdots\supseteq R_{m-3}$. Furthermore, Lemma~\ref{Lem2} tells us that there are at least $2$ elements of $\{3,\ldots,2m-3\}$ in $R_i\setminus R_{i+1}$ for each $i\in\{0,\ldots,m-4\}$. This proves that there are at least $2m-6$ elements of $\{3,\ldots,2m-3\}$ in $R_0\setminus R_{m-3}$. Since $\pi$ has length $2m-3$ and contains an occurrence of $32\overline{4}1$ that involves the entry $1$, we must have $\pi_1>\pi_2>\pi_3=1$. Furthermore, there must be exactly $2m-6$ entries to the right of $1$ in $\pi$, so \[2m-6\leq |(R_0\setminus R_{m-3})\cap\{3,\ldots,2m-3\}|\leq|R_0\setminus R_{m-3}|\leq|R_0|=2m-6.\] These inequalities must, of course, be equalities, so $R_{m-3}=\emptyset$ and $R_0\subseteq \{3,\ldots,2m-3\}$. The fact that $R_{m-3}$ is empty tells us that $\sigma$ ends in the entry $1$. Since $\pi=s^{m-3}(\sigma)$, it follows from Lemma~\ref{Lem:Increasing} that all of the entries to the right of $1$ in $\pi$ appear in increasing order. The fact that $R_0\subseteq\{3,\ldots,2m-3\}$ tells us that $2$ appears to the left of $1$ in $\pi$. Putting this all together, we find that $\pi=\zeta_{\ell,m}$ for some $\ell\in\{3,\ldots,2m-3\}$. We have seen that the tail length of $\pi$ is at least $m-3$, so we must actually have $\ell\in\{3,\ldots,m\}$, as desired.
\end{proof}
\section{Open Problems}
We saw in Proposition~\ref{Prop2} that for each fixed $m\geq 1$, the sequence $(|s^{n-m}(S_n)|)_{n\geq m}$ is nonincreasing. It is clear that the first term in this sequence is $|s^0(S_m)|=|S_m|=m!$. The second term in the sequence is $|s(S_{m+1})|$, the number of sorted permutations in $S_{m+1}$; as mentioned in the introduction, the asymptotics of $|s(S_{m+1})|$ for large $m$ are not precisely known. In this article, we showed that $|s^{n-m}(S_n)|=B_m$ for all $n\geq 2m-2$ and that $|s^{n-m}(S_n)|=B_m+m-2$ when $n=2m-3$. What can be said about $|s^{m-4}(S_{2m-4})|$? What can be said asymptotically about the approximate sizes of the numbers $|s^{n-m}(S_n)|$ for $m\leq n\leq 2m-2$ when $m$ is large? It would be interesting to have a more precise understanding of how the sequence $(|s^{n-m}(S_n)|)_{n=m}^{2m-2}$ decreases from $m!$ to $B_m$.
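For small $n$, the quantities $|s^{n-m}(S_n)|$ can be computed directly by iterating the image of $s$; the following Python sketch (names ours) does this and reproduces the values predicted by Theorems~\ref{Thm1} and \ref{Thm2}, e.g.\ $|s^2(S_7)|=B_5+3=55$.
\begin{verbatim}
# Brute-force computation of |s^t(S_n)| for small n (illustrative only).
from itertools import permutations

def s(pi):
    if not pi:
        return ()
    i = pi.index(max(pi))
    return s(pi[:i]) + s(pi[i + 1:]) + (pi[i],)

def image_sizes(n):
    # Sizes |s^t(S_n)| for t = 0, 1, ..., n-1.
    current = set(permutations(range(1, n + 1)))
    sizes = [len(current)]
    for _ in range(n - 1):
        current = {s(pi) for pi in current}
        sizes.append(len(current))
    return sizes

for n in range(2, 8):
    print(n, image_sizes(n))
# For n = 7 the entry at t = 2 is 55 = B_5 + 5 - 2, matching Theorem 2,
# and the entry at t = 3 is 15 = B_4, matching Theorem 1.
\end{verbatim}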
\section{Acknowledgments}
The author was supported by a Fannie and John Hertz Foundation Fellowship and an NSF Graduate Research Fellowship (grant no. DGE-1656466).
\section{introduction}
Excesses in searches for diboson resonances using boson-tagged jets were recently reported by the ATLAS collaboration~\cite{Aad:2015owa}. The analysis shows local excesses in the $WZ$, $WW$ and $ZZ$ channels with significances of $3.4\sigma$, $2.6\sigma$ and $2.9\sigma$, respectively. Similarly, the CMS collaboration~\cite{Khachatryan:2014hpa,Khachatryan:2014gha} has reported an excess of $1.9\sigma$ significance in the dijet resonance channel and in the $e\nu \bar b b$ channel, which may arise from $Wh$ with $h$ decaying hadronically. These excesses may be evidence of new symmetries or new particles near $2~{\rm TeV}$.
Since the resonances decay into two gauge bosons, they should be bosonic states. Possible origins of this excess were studied by several groups~\cite{Hisano:2015gna,Fukano:2015hga,Franzosi:2015zra,Cheung:2015nha,Dobrescu:2015qna,Aguilar-Saavedra:2015rna,Gao:2015irw,Thamm:2015csa,Brehmer:2015cia,Cao:2015lia,Cacciapaglia:2015eea,Abe:2015jra,Allanach:2015hba,Abe:2015uaa,Carmona:2015xaa,Dobrescu:2015yba,Chiang:2015lqa,Cacciapaglia:2015nga,Sanz:2015zha,Chen:2015xql}, where the excesses were explained as spin-1 gauge bosons~\cite{Hisano:2015gna,Cheung:2015nha,Dobrescu:2015qna,Gao:2015irw,Thamm:2015csa,Brehmer:2015cia,Cao:2015lia,Cacciapaglia:2015eea,Abe:2015jra,Dobrescu:2015yba} in an extended gauge group, composite spin-1 resonances~\cite{Fukano:2015hga,Franzosi:2015zra,Carmona:2015xaa}, spin-0 or spin-2 composite particles~\cite{Chiang:2015lqa,Cacciapaglia:2015nga,Sanz:2015zha} and extra scalar bosons~\cite{Chen:2015xql}. The key points in explaining the excesses are the interactions of the new resonance with the Standard Model (SM) gauge bosons, quarks and(or) gluons; the former determine the branching ratios of the new resonance, while the latter determine its production at the LHC. On the one hand, one needs the couplings of the new interactions to be large enough to give rise to a sizable production cross section at the LHC; on the other hand, the strengths of these interactions should be consistent with current constraints from colliders and electroweak precision measurements. These two requirements are in mutual tension. A new resonance cannot explain the ATLAS excesses if its interaction strengths fail to satisfy both requirements simultaneously.
In this paper, we explain the ATLAS excesses in the stealth doublet model, where the second Higgs doublet, $H_2$, gets no vacuum expectation value and has mass near 2 TeV, and only the CP-even part of $H_2$ mixes with the SM Higgs boson. We assume $H_2$ has a sizable Yukawa interaction with the first generation quarks, which is consistent with constraints from flavor physics, so that the heavy CP-even Higgs boson can be produced at the LHC via the Yukawa interaction and decay into diboson states through the mixing with the SM Higgs boson. Our numerical simulations show that one obtains $\sigma(pp\to H\to WW/ZZ)\sim 5~{\rm fb}$ by setting $\xi\sim0.15$ and $\alpha\sim0.06$, where $\xi$ is the Yukawa coupling of $H_2$ with the first generation quarks and $\alpha$ is the mixing angle between the two CP-even neutral states. This result is consistent with current constraints from colliders and electroweak precision measurements.
The remainder of the paper is organized as follows: In section II we give a brief introduction to the model. Section III studies constraints on the model. We investigate the ATLAS diboson excesses arising from this stealth doublet model in section IV. The last section contains concluding remarks.
\section{the model}
We work in the modified stealth doublet model~\cite{Enberg:2013jba,Enberg:2013ara}, where the second Higgs doublet gets no vacuum expectation value (VEV) but its CP-even part mixes with the SM Higgs boson. In the following, we describe the modified stealth doublet model first, and then study its implications in the ATLAS diboson excesses. The Higgs potential is the same as that in the general two Higgs doublet model (2HDM), which can be written as
\begin{eqnarray}
V&=& -m_1^2 H_1^\dagger H_1^{} + m_2^2 H_2^\dagger H_2^{} +\left( {m_{12}^2 H_1^\dagger H_2^{} }+ {\rm h.c.} \right)
\nonumber \\&&+ \lambda_1 (H_1^\dagger H_1)^2 + \lambda_2 (H_2^\dagger H_2^{} )^2 + \lambda_3 (H_1^\dagger H_1^{} ) (H_2^\dagger H_2^{}) + \lambda_4 (H_1^\dagger H_2^{} )(H_2^\dagger H_1^{}) \nonumber \\
&&+ \left\{ {1\over 2 }\lambda_5 (H_1^\dagger H_2^{} )^2 + (\lambda_6 H_1^\dagger H_1+ \lambda_7 H_2^\dagger H_2 ) H_1^\dagger H_2+ {\rm h.c.} \right\} \label{potential}
\end{eqnarray}
In this paper, we assume the Higgs potential is CP-conserving, so all couplings in eq.~(\ref{potential}) are real. Only one Higgs doublet gets a nonzero VEV in the stealth doublet model; we take it to be $H_1$. The tadpole conditions for electroweak symmetry breaking become
\begin{eqnarray}
m_1^2 = \lambda_1^{} v_1^2 \; ,\hspace{1cm} m_{12}^2 = -{1\over 2 } \lambda_6^{} v_1^2
\end{eqnarray}
where $v_1=\sqrt{2}\langle H_1^{} \rangle \approx 246~{\rm GeV}$. After spontaneous breaking of the electroweak symmetry, there are two CP-even scalars $h$ and $H$, one CP-odd scalar $A$ and two charged scalars $C^\pm$, the mass eigenvalues of which can be written as~\cite{Enberg:2013jba}
\begin{eqnarray}
m_{A~~}^2 & =& m_2^2 +{1\over 2 } (\lambda_3+\lambda_4-\lambda_5) v_1^2 \\
m_{C~~}^2 &=& m_2^2 + {1\over 2 } \lambda_3 v_1^2 \\
m_{h,H}^2 &=&{1\over 2 }\left\{ m_1^{2} + m_A^2 +\lambda_5 v_1^2 \pm \sqrt{ (m_1^2 -m_A^2 -\lambda_5v_1^2 )^2 -4 \lambda_6^2 v_1^4}\right\}
\end{eqnarray}
The mixing angle $\alpha$ between $h$ and $H$ can be calculated directly; we take it as a new degree of freedom in this paper. $H$ interacts with dibosons through this mixing. We refer the reader to Ref.~\cite{Enberg:2013jba} for the Feynman rules of the Higgs interactions.
The Yukawa interactions of $H_1$ with the SM fermions are exactly the same as those of the Higgs boson in the SM. We assume $H_2$ has a sizable Yukawa coupling with the first generation quarks:
\begin{eqnarray}
{\cal L}_N = \sqrt{2} \xi \overline{Q_1} \tilde H_2 u_R^{} + {\rm h.c.} \; . \label{yukawah}
\end{eqnarray}
where $Q_1 =(u_L, ~d_L)^T$ and $\tilde H_2 =i \sigma_2 H_2^*$.
Since $\langle H_2 \rangle =0$, there is almost no constraint on this Yukawa coupling, and $H$ can be produced at the LHC via this interaction.
\section{Constraints}
Before proceeding to study the ATLAS diboson excesses, let us investigate constraints on the mixing angle $\alpha$. Couplings of the SM-like Higgs to other SM particles were measured by the ATLAS and CMS collaborations. Compared with the SM Higgs couplings, the couplings of $h$ and $H$ to all SM states (except the $u$ quark) are rescaled by $\cos \alpha$ and $\sin \alpha$, respectively:
\begin{eqnarray}
g_{hXX} = \cos\alpha \, g_{hXX}^{\rm SM}\; , \hspace{1cm} g_{HXX}=\sin\alpha \, g_{hXX}^{\rm SM}
\end{eqnarray}
where $X$ represents SM states.
Thus the signal rates of the Higgs measurements relative to the SM expectations are functions of $\cos \alpha$. Performing a global $\chi^2$ fit to the Higgs data given by ATLAS and CMS, one has $\cos\alpha \geq 0.84$~\cite{Profumo:2014opa} at the $95\%$ confidence level.
Another constraint comes from the oblique parameters~\cite{Peskin:1990zt,Peskin:1991sw}, which are defined in terms of contributions to the vacuum polarizations of gauge bosons. The explicit expressions of $\Delta S$ and $\Delta T$, which involve effects of all scalars, can be written as~\cite{Haber:2010bw}
\begin{eqnarray}
\Delta S&=&{1\over \pi m_Z^2 } \left\{ s^2\sum_{i} (-1)^i [ {\cal B }_{2} (m_Z^2; m_Z^2, m_i^2 ) -m_Z^2 {\cal B}_0 (m_Z^2; m_Z^2, m_i^2 ) ] + c^2 {\cal B}_{2} (m_Z^2; m_H^2, m_{A}^{2} )\right. \nonumber \\ \label{scal}
&&\left.+ s^2 {\cal B}_{2} (m_Z^2; m_h^2, m_{A}^{2} )-{\cal B}_{2} (m_Z^2; m_C^2,m_C^2)\right\}\\
\Delta T&=& {1\over 4\pi s_W^2 m_W^2 }\{ s^2 B_{2} (0; m_C^2, m_h^2)+c^2 B_{2}(0;m_C^2,m_H^2) + B_{2} (0;m_C^2,m_{A}^{2}) -s^2 B_{2} (0; m_h^2, m_{A}^2 )\nonumber \\
&&-c^2 B_{2}(0;m_H^2,m_A^2) -s^2 B_{2} (0;m_W^2, m_h^2 ) +s^2 B_{2} (0; m_W^2, m_H^2 )+s^2 B_{2} (0; m_Z^2,m_h^2 )\nonumber \\&& -s^2 B_{2}(0; m_Z^2, m_H^2)+m_W^2 s^2 [B_0(0;m_W^2,m_h^2 )-B_0 (0; m_W^2, m_H^2 )]\nonumber \\&&+m_Z^2 s^2 [- B_0 (0; m_Z^2, m_h^2 ) + B_0(0;m_Z^2,m_H^2)]-{1\over 2 } A_0(m_C^2 ) \} \label{tcal}
\end{eqnarray}
where ${\cal B}_{i} (x; y, z) = B_{i} (x; y,z)-B_{i} (0;y,z)$, $i=(0,~2)$, and the expressions for $B_i(x;y,z)$ and $A_0(x)$ can be found in Ref.~\cite{Haber:2010bw}; $c=\cos\alpha$ and $s=\sin \alpha$, $s_W=\sin \theta_W$ with $\theta_W$ the weak mixing angle, and $m_Z$ and $m_W$ are the masses of the $Z$ and $W$ bosons, respectively.
\begin{figure}[t!]
\centering
\includegraphics[width=0.46\textwidth]{stcontour.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{stu.pdf}
\caption{ Left panel: Predictions of the heavy states in the $S-T$ plane, setting $M_C=M_A$ and $M_H=2~{\rm TeV}$; Right panel: Constraints on the masses of the charged and CP-odd neutral states from the oblique parameters, setting $m_H=2~{\rm TeV}$ and $\sin \alpha \sim 0.1$. }
\label{fig:stu}
\end{figure}
The most recent electroweak fit (by setting $m_{h,ref} =126~{\rm GeV}$ and $m_{t,ref}=173~{\rm GeV}$) to the oblique parameters performed by the Gfitter group~\cite{Baak:2012kk} yields
\begin{eqnarray}
S\equiv \Delta S^0 \pm \sigma_S=0.03\pm0.10\; , \hspace{1cm} T\equiv \Delta T^0 \pm \sigma_T =0.05\pm0.12 \; .
\end{eqnarray}
The $\Delta \chi^2 $ can be written as
\begin{eqnarray}
\Delta \chi^2 =\sum_{i,j=1}^{2} (\Delta {\cal O}_i - \Delta {\cal O }_i^0 )( \sigma^2 )_{ij}^{-1} (\Delta {\cal O}_j - \Delta {\cal O}_j^0)
\end{eqnarray}
where ${\cal O}_1=S$ and ${\cal O}_2=T$; $\sigma^2_{ij} =\sigma_i \rho_{ij} \sigma_j$ with $\rho_{11}=\rho_{22}=1$ and $\rho_{12}=0.891$.
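For concreteness, the following short Python sketch (ours, not from any reference) evaluates this $\Delta\chi^2$ for the central values and correlation quoted above; the $95\%$ C.L. region for two degrees of freedom corresponds to $\Delta\chi^2<5.99$.
\begin{verbatim}
# Evaluate Delta chi^2 in the (S, T) plane (illustrative sketch).
import numpy as np

S0, sigS = 0.03, 0.10
T0, sigT = 0.05, 0.12
rho = 0.891
cov = np.array([[sigS**2,           rho * sigS * sigT],
                [rho * sigS * sigT, sigT**2          ]])
cov_inv = np.linalg.inv(cov)

def delta_chi2(dS, dT):
    # dS, dT are the model's contributions to the oblique parameters.
    d = np.array([dS - S0, dT - T0])
    return d @ cov_inv @ d

print(delta_chi2(0.0, 0.0))   # compare against 5.99 for the 95% C.L.
\end{verbatim}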
As can be seen from eqs.~(\ref{scal}) and (\ref{tcal}), four free parameters contribute to the oblique parameters: $m_A$, $m_C$, $m_H$ and $\alpha$. To perform the electroweak fit, we set $M_C=M_A\equiv M$, which can easily be achieved by setting $\lambda_4=\lambda_5$, and $m_H=2~{\rm TeV}$, so that only two free parameters are left. The blue points in the left panel of FIG.~\ref{fig:stu} show the contributions to $\Delta S$ and $\Delta T$ obtained by letting $M$ and $\sin \alpha$ vary randomly in the ranges $(1.8,~2.3)~{\rm TeV}$ and $(0,~1)$, respectively. The contour in the same plot shows the allowed region in the $S-T$ plane at the $95\%$ C.L. A direct numerical calculation shows that $|\sin \alpha| \leq 0.3$. In the right panel of FIG.~\ref{fig:stu} we show the region allowed by the oblique observables in the $M_C-M_A$ plane, obtained by setting $\sin\alpha =0.1$ and $M_H=2~{\rm TeV}$. To summarize, electroweak precision measurements put a stronger constraint on $\alpha$ than the Higgs data, even for nearly degenerate heavy states.
\begin{figure}[t!]
\centering
\includegraphics[width=0.46\textwidth]{branching.pdf}
\hfill
\includegraphics[width=0.46\textwidth]{css.pdf}
\caption{ Left panel: Branching ratios of $H$ as a function of $m_H$, setting $s\sim 0.05$ and $\xi=0.5$; Right panel: Production cross section of the heavy CP-even Higgs boson at the LHC, setting $\xi=0.5$, with the solid, dotted and dashed lines corresponding to $\sqrt{s}=8,~13,~14~{\rm TeV}$, respectively. }
\label{fig:BR&PRO}
\end{figure}
\section{diboson excesses}
Heavy scalar states in our model can be produced at the LHC through their Yukawa interaction with the first generation quarks, shown in eq.~(\ref{yukawah}), and can decay into diboson final states through the mixing with the SM-like Higgs boson. The main decay channels of $H$ are $\bar u u$, $\bar t t$, $W^+W^-$ and $ZZ$. The decay rates can be written as
\begin{eqnarray}
\Gamma_{u\bar u} ~&=& {n_C \xi^2 m_H \over 8\pi} \label{dr1}\\
\Gamma_{t\bar t}~~ &=& { s^2 n_C m_t^2 (m_H^2 -4 m_t^2 )^{3/2} \over 8\pi m_H^2 v^2 } \\
\Gamma_{VV} &=& { s^2 m_V^4 \sqrt{m_H^2 -4 m_V^2 } \over 4 \pi m_H^2 v^2 } \left( 3 - {m_H^2 \over m_V^2 } + {m_H^4 \over 4 m_V^4}\right)
\end{eqnarray}
where $n_C=3$ is the color factor and $V=W,~Z$, respectively. We show in the left panel of FIG.~\ref{fig:BR&PRO} the branching ratios of $H$ obtained by setting $s=0.05$ and $\xi=0.5$, where the solid, dotted and dashed lines correspond to the branching ratios into $WW/ZZ$, $\bar u u$ and $\bar t t$, respectively. We plot in the right panel of FIG.~\ref{fig:BR&PRO} the production cross section of $H$ at the LHC. The solid, dotted and dashed lines correspond to $\sqrt{s}=8~{\rm TeV},~13~{\rm TeV}$ and $14~{\rm TeV}$, respectively.
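For illustration, the decay rates above can be evaluated numerically as in the following Python sketch (the parameter values and helper names are ours); it reproduces the qualitative pattern of the left panel of FIG.~\ref{fig:BR&PRO}.
\begin{verbatim}
# Evaluate the partial widths and branching ratios of H (illustrative).
import math

v, mt, mW, mZ = 246.0, 173.0, 80.4, 91.2      # GeV
mH, s, xi, nC = 2000.0, 0.05, 0.5, 3

def gamma_uu():
    return nC * xi**2 * mH / (8 * math.pi)

def gamma_tt():
    return (s**2 * nC * mt**2 * (mH**2 - 4 * mt**2)**1.5
            / (8 * math.pi * mH**2 * v**2))

def gamma_VV(mV):
    return (s**2 * mV**4 * math.sqrt(mH**2 - 4 * mV**2)
            / (4 * math.pi * mH**2 * v**2)
            * (3 - mH**2 / mV**2 + mH**4 / (4 * mV**4)))

widths = {"uu": gamma_uu(), "tt": gamma_tt(),
          "WW": gamma_VV(mW), "ZZ": gamma_VV(mZ)}
total = sum(widths.values())
for ch, g in widths.items():
    print(ch, g, g / total)    # partial width in GeV, branching ratio
\end{verbatim}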
\begin{figure}[t!]
\centering
\includegraphics[width=0.46\textwidth]{final.pdf}
\caption{ Contour plot of $\sigma(pp\to H \to WW)$ in the $\sin\alpha-\xi$ plane. The blue dotted, black solid and green dashed lines correspond to $\sigma (pp\to H \to WW) =20, ~10, ~5 ~{\rm fb^{}}$ respectively. The region below the gray solid line satisfies $ \sigma(pp\to A\to hZ) <7~{\rm fb}$. The region below the red dot-dashed line satisfies $ \sigma(pp\to C \to hW ) <7~{\rm fb}$. The region below the cyan solid line has $\sigma (pp\to R\to jj ) <100~{\rm fb}$. }
\label{fig:CROSSS}
\end{figure}
We show in FIG.~\ref{fig:CROSSS} the contours of $\sigma(pp\to H\to WW)$ in the $\sin \alpha -\xi$ plane. The dashed, solid and dotted lines correspond to $\sigma(pp\to H\to WW)=5,~10,~20~{\rm fb}$, respectively. One gets similar numerical results for the $pp\to H\to ZZ$ process. The number of excess events reported by ATLAS is about $8\sim 9$ near the $2~{\rm TeV}$ peak. Given a luminosity of $20.3 ~{\rm fb^{-1}}$, one needs $\sigma(pp\to H\to WW)\approx 5\sim 6 ~{\rm fb}$ for a $13\%$~\cite{Aad:2015owa} selection efficiency of the event topology and boson-tagging requirements. Although a large enough cross section can be produced at the LHC, the model is constrained by other LHC experimental results. We discuss these constraints one by one as follows:
\begin{itemize}
\item The CMS collaboration~\cite{Khachatryan:2015bma} has reported an upper bound on $\sigma(pp\to R\to W^+ h)$, where $R$ is a new resonance: $\sigma (pp\to R \to W^+ h) \leq 7 ~{\rm fb}$. The resonance can be the charged component of the heavy scalar doublet in our model. Its decay rate can be written as
\begin{eqnarray}
\Gamma_{C\to Wh}&=&{g^2 s^2 \over 64 \pi m_W^2 m_C^3 } \lambda^{3/2} (m_C^2, m_h^2, m_W^2 ) \; ,
\end{eqnarray}
where $\lambda(x,~y,~z)=x^2 + y^2 +z^2 -2xy-2xz-2yz$ and $g$ is the $SU(2)$ gauge coupling. FIG.~\ref{fig:CROSSS} shows the numerical results obtained by setting $m_C=2.2~{\rm TeV}$; the region below the red dot-dashed line satisfies this constraint.
\item The CP-odd component of the heavy scalar doublet can be the mediator of the process $pp\to R \to Zh$, which was also measured by the CMS collaboration, giving $\sigma(pp\to A \to Zh)<7~{\rm fb}$. The decay rate of $A\to Z h$ can be written as
\begin{eqnarray}
\Gamma_{A\to Z h} = {g^2 s^2 \over 64 \pi c_W^2 m_Z^2 m_A^3} \lambda^{3/2} (m_A^2, m_Z^2, m_h^2 )
\end{eqnarray}
where $c_W =\cos \theta_W$ with $\theta_W$ the weak mixing angle. $A$ can also decay into dijet final states with the same decay rate as in eq.~(\ref{dr1}). We show in FIG.~\ref{fig:CROSSS} the numerical results, where the region to the top-right of the gray solid line is excluded by this constraint.
\item Both ATLAS and CMS have searched for resonances decaying into dijets. We use $\sigma(pp\to R\to jj)\leq 100~{\rm fb}$ with the acceptance ${\cal A }\sim 0.6$. The CP-even and CP-odd heavy scalars as well as the charged scalar in our model decay mainly into dijets via the Yukawa interaction. We show in FIG.~\ref{fig:CROSSS} the region (to the bottom-right of the cyan solid line) allowed by this constraint.
\end{itemize}
Since the decay rate of $H\to t\bar t$ is tiny, there is almost no constraint on the model from $t\bar t$ resonance searches. As can be seen from FIG.~\ref{fig:CROSSS}, $\sigma(pp\to H \to WW)$ should be less than $6\sim 7$ fb. One has $\sigma(pp\to H \to WW/ZZ) \sim 5~{\rm fb}$ for $\xi\sim 0.15$ and $\alpha \sim 0.06$, which is consistent with the constraints from colliders and electroweak precision measurements. No direct excess in the $WZ$ channel comes out of our model, but the excess observed by ATLAS in the $WZ$ channel can be interpreted as a misidentification of the $W/Z$-tagged jets owing to uncertainties in the tagging selections.
\section{Summary}
We investigated the prospects of the stealth doublet model as a possible explanation of the diboson excesses observed by the ATLAS collaboration. The mass of the heavy Higgs boson was fixed near $2~{\rm TeV}$ in our study. We showed that the excesses in the $WW$ and $ZZ$ channels can be interpreted as the decay of the heavy CP-even Higgs boson $H$, which can be produced at the LHC via its Yukawa interaction with the first generation quarks. One needs the Yukawa coupling $\xi\sim0.15$ and the mixing angle between the two CP-even Higgs bosons $\alpha\sim0.06$, consistent with precision measurements, in order to obtain a $5~{\rm fb}$ production cross section at the LHC. Constraints on the model from the exclusion limits in the $Wh$ and $Zh$ channels given by the CMS collaboration and from dijet searches were also studied; they delimit the parameter space (in FIG.~\ref{fig:CROSSS}) that can accommodate the interpretation of the ATLAS diboson excesses in this model. We expect the 13 TeV run of the LHC to reveal more details about the diboson excesses and to show clearer hints of the new physics behind this phenomenon.
\begin{acknowledgments}
The author thanks Huaike Guo and Peter Winslow for very helpful discussions.
This work was supported in part by DOE Grant DE-SC0011095.
\end{acknowledgments}
\section{Introduction}
Critical behavior in the gravitational collapse of a massless scalar field was
discovered by Choptuik~\cite{Choptuik:1992jv}, who sought to answer the question
``What happens at the threshold of black hole formation?'' Choptuik considered
a massless scalar field undergoing gravitational collapse in a spherically
symmetric spacetime. He found that for some parameter $p$ in the initial data,
for example the amplitude of a Gaussian-distributed scalar field, the final mass
of the black hole is related to $p$ by
\begin{align}
\label{eq:simple mass relation}
M_{\mathrm{BH}}\propto\left|\frac{p}{p_\star}-1\right|^{\gamma_M}.
\end{align}
Here $p_\star$ is the critical value of the parameter $p$ that separates initial
data that form a black hole (supercritical) from initial data that do not form a
black hole (subcritical). Choptuik observed that the critical exponent
$\gamma_M$ is independent of the initial data chosen---the critical behavior is
universal. The currently accepted value of the critical exponent is
$\gamma_{M}=0.374\pm0.001$~\cite{Gundlach:1996eg}. Not much later, Garfinkle and
Duncan~\cite{Garfinkle:1998va} discovered that in subcritical evolutions the
maximum absolute value of the Ricci scalar at the center of the collapse obeys
the scaling relation
\begin{align}
\label{eq:simple ricci relation}
R_{\max} \propto \left|\frac{p}{p_\star}-1\right|^{-2 \gamma_{R_{\max}}}.
\end{align}
Interestingly, $\gamma_{R_{\max}}$ was found to have the same value as
$\gamma_M$.
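In practice, $p_\star$ is located by bisection on the parameter $p$: each trial value is evolved to completion and classified as supercritical or subcritical. The following schematic Python sketch (the expensive evolution is a stand-in function of our own naming) shows the logic.
\begin{verbatim}
# Schematic bisection search for the critical parameter p_star.
# forms_black_hole(p) stands in for a full GR evolution of the initial
# data with parameter p, returning True if a black hole forms.
def find_p_star(forms_black_hole, p_sub, p_super, n_iter=50):
    # Requires p_sub subcritical and p_super supercritical.
    for _ in range(n_iter):
        p_mid = 0.5 * (p_sub + p_super)
        if forms_black_hole(p_mid):
            p_super = p_mid     # collapsed: tighten bracket from above
        else:
            p_sub = p_mid       # dispersed: tighten bracket from below
    return 0.5 * (p_sub + p_super)
\end{verbatim}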
Another key aspect of the critical behavior observed by Choptuik is that of a
discretely self-similar solution, or ``echoing''. In the strong-field regime
near the critical solution, Choptuik noticed that any gauge-invariant quantity
$U$ obeys the relation
\begin{align}
\label{eq:rescaling}
U(\mathcal{T}, x^i) = U(e^\Delta \mathcal{T}, e^\Delta x^i),
\end{align}
where $\Delta$ is a dimensionless constant. Here $\mathcal{T}=\tau-\tau_\star$,
where $\tau$ is the proper time of a central observer and $\tau_\star$ is the
value of $\tau$ when a naked singularity forms in the limit $p\to p_\star$.
$\tau_\star$ is referred to as the accumulation time. As one
moves closer in time to the critical solution by $e^\Delta$, the same field
profile is observed for $U$ but at spatial scales $e^\Delta$ smaller. The
echoing period $\Delta$, like the critical exponent, is universal in the sense
that it does not depend on the initial data, only on the type of matter
undergoing gravitational collapse. The currently accepted value for a massless
scalar field is $\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}.
Since the seminal work by Choptuik, many studies to better understand critical
behavior in gravitational collapse have been performed. Studies of critical
collapse of a massless scalar field in spherical symmetry have found that the
critical exponent and echoing period are both independent of the initial data
profile but depend on the dimensionality of the
spacetime~\cite{Garfinkle:1999zy,Bland:2005vu,Sorkin:2005vz,Taves:2011yt}. Similar
studies observed that the critical exponent, echoing period, and possibly even
the type of phase transition are changed in modified theories of
gravity~\cite{Deppe:2012wk,Golod:2012yt}. Interestingly, the presence of
critical behavior appears to be independent of the matter source, but the value
of the critical exponent, echoing period, and type of phase transition depend on
the type of
matter~\cite{Choptuik:1996yg,Gundlach:1996je,Brady:1997fj,Garfinkle:2003jf,Baumgarte:2015aza,Gundlach:2016jzm,Baumgarte:2016xjw,Gundlach:2017tqq}.
Vacuum critical collapse was first studied
in~\cite{Abrahams:1993wa,Abrahams:1994nw}, which found that critical behavior is
present and that the critical exponent and echoing period have values different
from those found in simulations with matter. Unfortunately, studying vacuum
gravitational collapse has proven to be quite
difficult~\cite{Sorkin:2009wh,Sorkin:2010tm,Hilditch:2013cba,Hilditch:2015aba}.
In critical collapse the phase transition is either Type I or Type II.
In Type II phase transitions the black hole mass continuously goes to zero as
$p_\star$ is approached. This has been the most common case observed so far when
studying critical collapse. In Type I transitions the mass of the black hole
that forms approaches a constant, non-zero value as $p_\star$ is
approached. Type I phase transitions have been clearly identified in critical
collapse of a massive scalar field~\cite{Brady:1997fj}.
The discussion in this paper is only relevant for Type II critical behavior.
In 1997 both Gundlach~\cite{Gundlach:1996eg}, and Hod and
Piran~\cite{Hod:1996az} independently discovered fine structure in addition to
the power-law behavior of the black hole masses: There is a small-amplitude
modulation of~\eqref{eq:simple mass relation}. Specifically, the scaling
relation is altered to
\begin{align}
\ln(M_{\mathrm{BH}})=&\gamma_{M}\ln\left|p/p_\star-1\right|+C\notag\\
&+A\sin(w\ln\left|p/p_\star-1\right|+\delta),
\end{align}
where $C$, $A$, $w$, and $\delta$ are constants. These authors predicted and
verified that $w=\Delta/(2\gamma_{M})$ for massless scalar field collapse in
spherical symmetry. Whether or not this relation holds for different matter
sources and beyond spherical symmetry is an open question.
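Extracting $\gamma_M$ and $w$ from a family of supercritical runs amounts to a small nonlinear least-squares fit; the Python sketch below (synthetic data generated with the accepted values quoted above; all names are ours) illustrates the procedure.
\begin{verbatim}
# Illustrative fit of the modulated scaling relation to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def model(x, gamma, C, A, w, delta):
    # x = ln|p/p_star - 1|;  model = ln(M_BH)
    return gamma * x + C + A * np.sin(w * x + delta)

x = np.linspace(-14.0, -3.0, 60)
true = (0.374, 1.0, 0.02, 3.4453 / (2 * 0.374), 0.3)  # w = Delta/(2 gamma)
y = model(x, *true) + 0.002 * np.random.default_rng(1).normal(size=x.size)

popt, _ = curve_fit(model, x, y, p0=[0.4, 0.0, 0.05, 4.5, 0.0])
print(popt)     # recovers (gamma, C, A, w, delta) to good accuracy
\end{verbatim}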
Unfortunately, answering the question of how symmetry assumptions affect the
critical exponent and echoing period has turned out to be quite challenging. The
reason is that spatiotemporal scales varying over four to six orders of
magnitude must be resolved in order to properly study the fine structure and
echoing, and a large number of high-resolution simulations are necessary. In
addition, the well-posedness and stability of the formulation of the Einstein
equations solved and the choice of gauge has proven to be as problematic here as
in other simulations in numerical relativity. Akbarian and
Choptuik~\cite{Akbarian:2015oaa} have recently studied how formulations of the
Einstein equations commonly used for binary black hole mergers behave when
studying critical collapse. However, that work was restricted to spherical
symmetry.
Critical collapse of a massless scalar field in axial symmetry was studied using
perturbation theory by Martin-Garcia and Gundlach~\cite{MartinGarcia:1998sk},
who found that all non-spherical modes decay. In 2003 Choptuik
et~al.~\cite{Choptuik:2003ac} performed numerical simulations of massless scalar
field collapse in axial symmetry. They found that the critical solution in this
case is the same as the solution found in spherical symmetry. However, in
contrast to~\cite{MartinGarcia:1998sk}, they also found tentative evidence for a
non-decaying $l=2$ mode. More recently, Healy and Laguna~\cite{Healy:2013xia}
studied critical collapse of a massless scalar field that is symmetric about the
$xz$-plane. Healy and Laguna observed results consistent with spherically symmetric
collapse, but were unable to verify the echoing of gauge-independent fields. The
work of Healy and Laguna has been followed by a study of massless scalar field
collapse with a quartic potential by Clough and
Lim~\cite{Clough:2016jmh}. Clough and Lim also studied initial data similar to
that of~\cite{Healy:2013xia} and obtained results similar to those of Healy and
Laguna.
In this paper we present a study of critical collapse of a massless scalar field
with no symmetry assumptions; this is the first study beyond spherical symmetry
that is able to resolve the fine structure in the black hole mass scaling
relation. We are able to resolve small-scale dynamics in both supercritical and
subcritical evolutions, allowing us to directly compare the results. In
$\S$\ref{sec:Equations} we review the equations solved, in
$\S$\ref{sec:InitialData} we discuss the initial data used, in
$\S$\ref{sec:NumericalMethods} we provide details about the numerical method, in
$\S$\ref{sec:Results} we present the results, and we conclude in
$\S$\ref{sec:Conclusions}.
After this work was completed, a paper by
Baumgarte appeared\cite{Baumgarte:2018fev} in which axially symmetric initial
data similar to that of~\cite{Choptuik:2003ac} is studied. We discuss the
relation between this paper and our work at the end of $\S$\ref{sec:Results}.
\section{Equations}\label{sec:Equations}
We study the dynamics near the critical solution in gravitational collapse of
the Einstein-Klein-Gordon system. We solve the Einstein equations,
\begin{align}
\label{eq:EE}
R_{ab}=8\pi\left(T_{ab}-\frac{1}{2}\psi_{ab}T^c{}_c\right)
\end{align}
where $R_{ab}$ is the Ricci tensor, $\psi_{ab}$ the spacetime metric, and
$T_{ab}$ the stress tensor. Here and throughout the rest of the paper we will
use latin indices at the beginning of the alphabet, e.g.~$a,b,c,\ldots$ to refer
to spacetime indices running from 0 to 3, and later indices, $i,j,k,\ldots$ to
refer to spatial indices running from 1 to 3. We use the ADM form of the metric,
\begin{align}
ds^2=-N^2dt^2+g_{ij}\left(N^i dt + dx^i\right) \left(N^j dt + dx^j\right)
\end{align}
where $N(t,x^i)$ is the lapse, $N^j(t,x^i)$ the shift, and $g_{ij}(t, x^k)$ the
spatial metric. We denote the timelike unit normal orthogonal to the spacelike
hypersurfaces by
\begin{align}
t^a = (N^{-1},-N^i/N).
\end{align}
We solve Eq.~(\ref{eq:EE}) using a first-order generalized harmonic (GH)
formulation~\cite{Lindblom:2005qh}.
The matter source is a massless scalar field $\varphi$ with
\begin{align}
\label{eq:StressTensor}
T_{ab}=\partial_a\varphi\partial_b\varphi-
\frac{1}{2}\psi_{ab}\psi^{cd}\partial_c\varphi\partial_d\varphi.
\end{align}
To bring the resulting equations of motion into first-order form, we define the
auxiliary variables $\Phi_i=\partial_i\varphi$ and
$\Phi_{iab}=\partial_i\psi_{ab}$, and the conjugate variables
$\Pi=-N^{-1}\left(\partial_t\varphi-N^i\partial_i\varphi\right)$ and
$\Pi_{ab}=-N^{-1}\left(\partial_t \psi_{ab}-N^i\Phi_{iab}\right)$.
The first-order GH system is~\cite{Lindblom:2005qh}
\begin{align}
\label{eq:metric_evolution}
\partial_t\psi_{ab}-&\left(1+\gamma_1\right)N^k\partial_k\psi_{ab}=-N\Pi_{ab}-\gamma_1N^i\Phi_{iab},\\
\label{eq:metric_conjugate_evolution}
\partial_t\Pi_{ab}-&N^k\partial_k\Pi_{ab}+Ng^{ki}\partial_k\Phi_{iab}-\gamma_1\gamma_2N^k\partial_k\psi_{ab}\notag\\
=&2N\psi^{cd}\left(g^{ij}\Phi_{ica}\Phi_{jdb}-\Pi_{ca}\Pi_{db}-\psi^{ef}\Gamma_{ace}\Gamma_{bdf}\right)\notag\\
&-2N\nabla_{(a}H_{b)}-\frac{1}{2}Nt^c t^d\Pi_{cd}\Pi_{ab}-Nt^c \Pi_{ci}g^{ij}\Phi_{jab}\notag\\
&+N\gamma_0\left(2\delta^c{}_{(a} t_{b)}-\psi_{ab}t^c\right)\left(H_c+\Gamma_c\right)\notag\\
&-\gamma_1\gamma_2N^i\Phi_{iab}\notag\\
&-16\pi N\left(T_{ab}-\frac{1}{2}\psi_{ab}T^c{}_c\right),\\
\label{eq:metric_derivative_evolution}
\partial_t\Phi_{iab}-&N^k\partial_k\Phi_{iab}+N\partial_i\Pi_{ab}-N\gamma_2\partial_i\psi_{ab}\notag\\
=&\frac{1}{2}Nt^c t^d\Phi_{icd}\Pi_{ab}+Ng^{jk}t^c\Phi_{ijc}\Phi_{kab}\notag\\
&-N\gamma_2\Phi_{iab},
\end{align}
where $H_a$ is the so-called gauge source function and must satisfy the
constraint $H_a=-\psi_{ab}\Gamma^b_{cd}\psi^{cd}$. The parameters
$\gamma_0,\gamma_1$ and $\gamma_2$ are described in
$\S$\ref{sec:ConstraintDamping}. The first-order massless Klein-Gordon system is
\begin{align}
\label{eq:sw_psi_evolution}
\partial_t\varphi =& N^i\partial_i\varphi-N\Pi+\gamma^{KG}_1N^i\left(\partial_i\varphi-\Phi_i\right),\\
\label{eq:sw_pi_evolution}
\partial_t\Pi=&N\Pi K+N^i\partial_i\Pi+N\Phi_i g^{jk}\Gamma^i_{jk}\notag\\
&+\gamma^{KG}_1\gamma^{KG}_2N^i\left(\partial_i\varphi-\Phi_i\right)\notag\\
&-g^{ij}\left(N\partial_j\Phi_i+\Phi_j\partial_i N\right),\\
\label{eq:sw_phi_evolution}
\partial_t\Phi_i=&-N\partial_i\Pi-\Pi\partial_i N-\gamma^{KG}_2N\left(\Phi_i-\partial_i\varphi\right)\notag\\
&+N^j \partial_j\Phi_i + \Phi_j\partial_i N^j.
\end{align}
The parameters $\gamma^{KG}_1$ and $\gamma^{KG}_2$ are described in
$\S$\ref{sec:ConstraintDamping}, and $K$ is the trace of the extrinsic
curvature.
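As a minimal illustration of the role of $\gamma^{KG}_2$ (our own toy example,
not the production \texttt{SpEC} algorithm), one can evolve the flat-space
($N=1$, $N^i=0$, so the $\gamma^{KG}_1$ terms drop out) reduction of this
system in one dimension and watch the constraint $\partial_x\varphi-\Phi_x$
decay:
\begin{verbatim}
import numpy as np

# Flat-space 1D reduction: d_t phi = -Pi, d_t Pi = -d_x Phi,
# d_t Phi = -d_x Pi - gamma2*(Phi - d_x phi).
# The gamma2 term drives the constraint C = d_x phi - Phi to zero:
# d_t C = -gamma2 * C exactly, so C decays as exp(-gamma2 * t).
L, n, gamma2 = 2.0 * np.pi, 256, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.25 * dx

def Dx(f):  # centered derivative on a periodic grid
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def rhs(u):
    phi, Pi, Phi = u
    return np.array([-Pi, -Dx(Phi), -Dx(Pi) - gamma2 * (Phi - Dx(phi))])

u = np.array([np.sin(x), np.zeros(n), np.cos(x) + 0.1 * np.cos(3 * x)])
print(np.max(np.abs(Dx(u[0]) - u[2])))  # initial constraint violation
for _ in range(2000):                    # classical RK4 steps
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
print(np.max(np.abs(Dx(u[0]) - u[2])))  # decayed by ~exp(-gamma2*t)
\end{verbatim}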
\section{Initial Data}\label{sec:InitialData}
We generate initial data for the evolutions by solving the extended conformal
thin-sandwich equations~\cite{Pfeiffer:2002iy} using the spectral elliptic
solver~\cite{2003CoPhC.152..253P} in \texttt{SpEC}~\cite{SpECwebsite}. The
contributions to the equations from the scalar field are given by
\begin{align}
\label{eq:rhoID}
\rho =&t^a{}t^b{}T_{ab}=\frac{1}{2}\left(\Pi{}^2+g^{ij}\Phi_i\Phi_j\right),\\
\label{eq:momentumID}
S^i =&-g^{ij}t^a{}T_{aj}=g^{ij}\Pi\Phi_{j},
\end{align}
and
\begin{align}
\label{eq:stressID}
S =& g_{ij}g^{ia}g^{jb}T_{ab}=\frac{1}{2}\left(3\Pi{}^2-g^{ij}\Phi_i\Phi_j\right),
\end{align}
where $g^{ia}$ projects the spacetime index $a$ onto the spatial hypersurface
orthogonal to $t^a$.
Let $r=\sqrt{\delta_{ij}x^ix^j}$ and
\begin{align}
\label{eq:1d gaussian}
f(r) = \varphi_0 \exp\left[-\left(\frac{r-r_0}{\sigma}\right)^2\right].
\end{align}
For concreteness we focus on three types of initial data: spherically symmetric
data given by
\begin{align}
\label{eq:Spherical ID}
\varphi(t,x^i)=\varphi_{\mathrm{sph}}=\frac{f(-r)+f(r)}{r},
\end{align}
data where the second term has no $y$-coordinate dependence
(recall $xz\sim r^2\cos\phi\sin2\theta$) similar to that studied
in~\cite{Healy:2013xia,Clough:2016jmh}
\begin{align}
\label{eq:Reflection ID}
\varphi(t,x^i)=\varphi_{\Re (Y^2_1)}:=\varphi_{\mathrm{sph}}\left(1-\delta\cos\phi\sin2\theta\right),
\end{align}
and finally generic initial data of the form
\begin{align}
\label{eq:Generic ID}
\varphi(t,x^i)=\varphi_{3-d}:=\varphi_{\mathrm{sph}}
&\left\{1-\frac{\delta}{1.56}\left[(\cos\phi+\sin\phi)\sin2\theta\right.\right.\notag\\
&\left.\left.-\left(3\cos^2\theta-1\right)\right]\right\}.
\end{align}
The conjugate momentum to $\varphi$ in the spherically symmetric case is
given by
\begin{align}
\Pi_{\mathrm{sph}}=\frac{\partial_rf(-r)-\partial_rf(r)}{r},
\end{align}
and is multiplied by the same non-spherical terms as $\varphi$. This is ingoing
spherical wave initial data. The numerical factor $1.56$ is chosen so that when
$\delta=1$, the maximum of the second term is approximately unity. We choose
$\sigma=1$ and $r_0=5$ for the results presented here. For the initial
data~\eqref{eq:Reflection ID} we (arbitrarily) choose $\delta=0.9$ and for data
given by~\eqref{eq:Generic ID} we choose $\delta=1$.
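A sketch of these profiles (our own transcription of
Eqs.~(\ref{eq:1d gaussian})--(\ref{eq:Generic ID}); the function names are
ours, not \texttt{SpEC}'s):
\begin{verbatim}
import numpy as np

phi0, sigma, r0 = 0.075, 1.0, 5.0   # phi0 varies from run to run

def f(r):
    return phi0 * np.exp(-((r - r0) / sigma) ** 2)

def phi_sph(r):
    # regular at r = 0 to numerical precision, since f(0) ~ exp(-25)
    return (f(-r) + f(r)) / r

def phi_reY21(r, theta, phi, delta=0.9):
    return phi_sph(r) * (1.0 - delta * np.cos(phi) * np.sin(2.0 * theta))

def phi_3d(r, theta, phi, delta=1.0):
    pert = ((np.cos(phi) + np.sin(phi)) * np.sin(2.0 * theta)
            - (3.0 * np.cos(theta) ** 2 - 1.0))
    return phi_sph(r) * (1.0 - delta / 1.56 * pert)
\end{verbatim}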
\section{Numerical Methods}\label{sec:NumericalMethods}
\subsection{Domain Decomposition}
\texttt{SpEC} decomposes the computational domain into possibly overlapping
subdomains. Within each subdomain a suitable set of basis functions that depends
on the topology of the subdomain is chosen to approximate the solution. The
domain decomposition for finding the initial data is a cube at the center with
an overlapping spherical shell that is surrounded by concentric spherical
shells. For the evolution, a filled sphere surrounded by non-overlapping
spherical shells is used until a black hole forms. At this point a ringdown or
excision grid nearly identical to that used during the ringdown phase of binary
black hole merger evolutions is used~\cite{Scheel:2008rj, Szilagyi:2009qz,
Hemberger:2012jz}. The ringdown grid consists of a set of non-overlapping
spherical shells with the inner shell's inner radius approximately $94\%$ of the
apparent horizon radius.
\subsection{Dual Frames and Mesh Refinement}
To resolve the large range of spatial and temporal scales required,
finite-difference codes typically use adaptive mesh refinement (AMR). However,
for the spatiotemporal scales required here, AMR is computationally
prohibitively expensive in 3+1 dimensions without any symmetries.
\texttt{SpEC} achieves its high accuracy by using spectral methods to solve the
PDEs rather than finite differencing. In addition, two further tools are
employed to achieve high accuracy: dual frames~\cite{Scheel:2006gg,
Szilagyi:2009qz, Hemberger:2012jz} and spectral AMR~\cite{Szilagyi:2014fna}.
In the dual frames approach, the PDEs are solved in what is called the grid
frame. This frame is related to the ``inertial frame'', the frame in which the
PDEs are originally written, by time-dependent spatial coordinate maps. The dual
frames method ``moves'' the grid points inward as the scalar field collapses,
which gives an additional two orders of magnitude of resolution compared to the
initial inertial coordinates without the use of any mesh refinement. We also
employ a coordinate map to slowly drift the outer boundary inward so that any
constraint-violating modes near the outer boundary are propagated out of the
computational domain. While the slow drift of the outer boundary is not
essential for stability, it is helpful in long evolutions.
Denote the coordinate map that moves the grid points inward during collapse by
$\mathcal{M}_{\mathrm{scaling}}$ and the map that drifts the outer boundary
inward by $\mathcal{M}_{\mathrm{drift}}$. Then the coordinate map used during
collapse before a black hole forms is given by
$\mathcal{M}_{\mathrm{collapse}}=\mathcal{M}_{\mathrm{drift}}\circ\mathcal{M}_{\mathrm{scaling}}$.
The mapping $\mathcal{M}_{\mathrm{collapse}}$ relates the initial coordinates,
$\bar{x}^i$ to the grid coordinates $x^i$ by
$\bar{x}^i=\mathcal{M}_{\mathrm{collapse}}x^i$. The specific spatial coordinate
map we use for both $\mathcal{M}_{\mathrm{drift}}$ and
$\mathcal{M}_{\mathrm{scaling}}$ is of the form
\begin{align}
\label{eq:cubicScale}
\bar{r}=a(t)r+\left[1-a(t)\right]\frac{r^3}{r_{\mathrm{outer}}^2},
\end{align}
where $r=\sqrt{\delta_{ij}x^ix^j}$, $\bar{r}=\sqrt{\delta_{ij}\bar{x}^i\bar{x}^j}$, $a(t)$ is
a time-dependent function we call an expansion factor, and $r_{\mathrm{outer}}$
is a parameter of the map. For $\mathcal{M}_{\mathrm{scaling}}$ we choose
\begin{align}
\label{eq:aScaling}
a_{\mathrm{scaling}}(t) = A
\exp\left[-{\left(\frac{t}{\sigma_{\mathrm{scaling}}}\right)}^{2n}\right]
+B
\end{align}
with $A=0.99$, $B=0.01$, $n=2$ and $\sigma_{\mathrm{scaling}}=3.8$. The value of
$r_{\mathrm{outer}}$ for $\mathcal{M}_{\mathrm{scaling}}$ is
$r_{\mathrm{outer}}=100$. For $\mathcal{M}_{\mathrm{drift}}$ we use
$r_{\mathrm{outer}}=180$ and
\begin{align}
\label{eq:aDrift}
a_{\mathrm{drift}}(t)=1+v\frac{t^3}{b+t^2},
\end{align}
with $b=10^{-4}$ and $v=-3.23\times10^{-3}$. We find these choices for the
coordinate maps lead to accurate and stable long-term evolutions with sufficient
resolution to resolve both scaling and echoing.
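The radial maps are simple enough to transcribe directly (a sketch using the
quoted parameter values; the variable names are ours):
\begin{verbatim}
import numpy as np

def rbar(r, a, r_outer):                    # Eq. (cubicScale)
    return a * r + (1.0 - a) * r ** 3 / r_outer ** 2

def a_scaling(t, A=0.99, B=0.01, n=2, sigma=3.8):
    return A * np.exp(-(t / sigma) ** (2 * n)) + B

def a_drift(t, v=-3.23e-3, b=1.0e-4):
    return 1.0 + v * t ** 3 / (b + t ** 2)

# At late times a_scaling -> 0.01, so an interior grid point r = 1
# maps to an inertial radius of about 0.01: two extra orders of
# magnitude of resolution, as described in the text.
print(rbar(1.0, a_scaling(20.0), 100.0))    # ~0.0101
\end{verbatim}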
After an apparent horizon is found we switch over to an excision grid and use
the same coordinate maps used in the ringdown portion of the binary black hole
evolutions~\cite{Scheel:2008rj, Szilagyi:2009qz, Hemberger:2012jz}.
Specifically, we excise the interior of the apparent horizon with the excision
surface's radius being approximately $94\%$ of the apparent horizon's
coordinate radius. Near the apparent horizon, all the characteristics are
directed toward the center of the apparent horizon and so no boundary conditions
need to be imposed there. Thus, as long as the excision surface remains close to
the apparent horizon, the simulation remains stable without the need to impose
additional boundary conditions. One difficulty is that during the very early
phase of ringdown the apparent horizon's coordinate radius increases very
rapidly. To deal with the rapid expansion, a control system is used to track the
apparent horizon and adjust the location of the excision boundary to follow the
apparent horizon~\cite{Scheel:2006gg, Scheel:2008rj, Hemberger:2012jz}.
While the spatial coordinate maps work extremely well for resolving the small
length scales that appear near the critical solution, they do not provide any
guarantees about the truncation error of the simulations. The temporal error is
controlled by using an adaptive, fifth-order Dormand-Prince time stepper. The
spatial error is controlled using the spectral AMR algorithm described
in~\cite{Szilagyi:2014fna}. Using AMR we control the relative error in the
metric, the spatial derivative of the metric and the conjugate momentum of the
metric. For the results presented in this manuscript we set a relative maximum
spatiotemporal error of $10^{-8}$.
\subsection{Gauge Choice}
In binary black hole evolutions with the GH system, large constraint violations
occur unless an appropriate gauge condition is chosen. The key ingredient in a
successful choice~\cite{Lindblom:2009tu} is to control the growth of
$\sqrt{g}/N$, where $g$ is the determinant of the spatial metric. As one might
expect, evolutions of critical behavior at black hole formation require even
more stringent control of the gauge than in binary simulations. We find that
without such control, explosive growth in both $\sqrt{g}/N$ and $1/N$ prevents
the code from finding an apparent horizon before the constraints blow up and the
evolution fails. Accordingly, we adopt a modified version of the damped harmonic
gauge used in Ref.~\cite{Lindblom:2009tu}:
\begin{align}
\label{eq:targetGauge}
H_a=&\left[\mu_{L,1}\log\left(\frac{\sqrt{g}}{N}\right)
+\mu_{L,2}\log\left(\frac{1}{N}\right)\right]t_a\notag\\
&-\mu_S N^{-1}g_{ai}N^i.
\end{align}
The coefficients $\mu_{L,1}$, $\mu_{L,2}$ and $\mu_{S}$ are described below.
Fortunately, the region of the spatial hypersurfaces where $\sqrt{g}/N$ diverges
is different from that where $1/N$ diverges and so having the coefficients
$\mu_{L,1}$ and $\mu_{L,2}$ depend on $\log(\sqrt{g}/N)$ and $\log(1/N)$,
respectively, allows us to control both divergences with a single equation. The
functional forms of the coefficients are
\begin{align}
\label{eq:muL1}
\mu_{L,1}=&R(t)W(x^i)\left[\log\left(\frac{\sqrt{g}}{N}\right)\right]^4,\\
\label{eq:muL2}
\mu_{L,2}=&R(t)W(x^i)\left[\log\left(\frac{1}{N}\right)\right]^4,
\end{align}
and
\begin{align}
\label{eq:muS}
\mu_{S}=&\mu_{L,1}.
\end{align}
The roll-on function $R(t)$ is given by
\begin{align}
\label{eq:rollon}
R(t)=1-\exp\left[-\left(\frac{t-t_0}{\sigma_t}\right)^4\right],
\end{align}
where we choose $t_0=0$ and $\sigma_t=2$, while the spatial weight function,
$W(x^i)$ is given by
\begin{align}
\label{eq:spatialWeight}
W(x^i)=\exp\left[-34.54\left(\frac{r}{r_{\max}}\right)^2\right],
\end{align}
where we set $r_{\max}=30$. The function $R(t)$ is used to transition from the
initial maximal slicing to the damped harmonic gauge needed later in the
evolution, while $W(x^i)$ makes the gauge be pure harmonic near the outer
boundary of the computational domain. The $\log$ factors in Eq.~\eqref{eq:muL1}
and~\eqref{eq:muL2} make the gauge pure harmonic in the region of the spatial
slice where $\sqrt{g}/N$ and $1/N$ are near unity, respectively. We found that
using the fourth power, as opposed to the second power typically used for
controlling the growth of $\sqrt{g}/N$ in binary black hole evolutions, is
required for stable long-term evolutions.
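A direct transcription of the gauge coefficients (a sketch; the pointwise
field values $\sqrt{g}/N$ and $1/N$ would be supplied by the evolution code):
\begin{verbatim}
import numpy as np

def R_rollon(t, t0=0.0, sigma_t=2.0):       # Eq. (rollon)
    return 1.0 - np.exp(-((t - t0) / sigma_t) ** 4)

def W_weight(r, r_max=30.0):                # Eq. (spatialWeight)
    return np.exp(-34.54 * (r / r_max) ** 2)

def mu_coeffs(t, r, sqrtg_over_N, one_over_N):
    pre = R_rollon(t) * W_weight(r)
    mu_L1 = pre * np.log(sqrtg_over_N) ** 4  # Eq. (muL1)
    mu_L2 = pre * np.log(one_over_N) ** 4    # Eq. (muL2)
    return mu_L1, mu_L2, mu_L1               # mu_S = mu_{L,1}

# Where sqrt(g)/N and 1/N are near unity the logarithms vanish and
# the gauge reduces to pure harmonic, as noted above:
print(mu_coeffs(10.0, 5.0, 1.0, 1.0))        # -> (0.0, 0.0, 0.0)
\end{verbatim}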
\subsection{Constraint Damping}\label{sec:ConstraintDamping}
Both the Klein-Gordon and the GH system have constraints that must remain
satisfied during evolutions. For the Klein-Gordon system the constraint is
\begin{align}
\label{eq:KgConstraint}
\mathcal{C}^{KG}_i=\partial_{i}\varphi-\Phi_{i}=0.
\end{align}
The constraints for the GH system are given in Ref.~\cite{Lindblom:2005qh}.
Failure to satisfy the constraints indicates that the numerical simulation is no
longer solving the physical system of interest and should not be trusted. To
control growth of constraint violations from numerical inaccuracies, constraint
damping parameters are added to the evolution equations. For the GH system the
constraint damping parameters are $\gamma_0, \gamma_1$ and $\gamma_2$, and for
the Klein-Gordon system $\gamma_1^{\mathrm{KG}}$ and
$\gamma_2^{\mathrm{KG}}$. See
Eqs.~(\ref{eq:metric_evolution})--(\ref{eq:sw_phi_evolution}) for how the
constraint damping parameters appear in the evolution equations. We find that
choosing $\gamma_1^{\mathrm{KG}}=1$ and $\gamma_2^{\mathrm{KG}}=0$ works well
for the scalar field. For the GH system, finding good constraint damping
parameters is more difficult, especially during ringdown. The dimensions of the
constraint damping parameters are $\mathrm{time}^{-1}$, which suggests that for
smaller black holes where the characteristic time scale is shorter, the
constraint damping parameters must be increased. During ringdown we choose
\begin{align}
\gamma_0 &= A_0\exp\left(-\frac{r^2}{10^2}\right)+10^{-3},\\
\gamma_1 &= A_1\left[\exp\left(-\frac{r^2}{1000^2}\right)-1\right],\\
\gamma_2 &= A_2\exp\left(-\frac{r^2}{10^2}\right)+10^{-3},
\end{align}
with $A_0\in [20, 100]$, $A_1=0.999$, and $A_2\in[20, 80]$. Larger values of
$A_0$ and $A_2$ are used for smaller black holes. During the collapse phase of
the evolutions we find less sensitivity to the choice of the damping
parameters. We use the same functional form as during the ringdown but always
choose $A_0 = A_2 = 20$.
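For concreteness, the ringdown profiles read (a sketch; amplitudes toward the
upper ends of the quoted ranges correspond to smaller black holes):
\begin{verbatim}
import numpy as np

def gh_damping(r, A0=20.0, A1=0.999, A2=20.0):
    gamma0 = A0 * np.exp(-r ** 2 / 10.0 ** 2) + 1.0e-3
    gamma1 = A1 * (np.exp(-r ** 2 / 1000.0 ** 2) - 1.0)
    gamma2 = A2 * np.exp(-r ** 2 / 10.0 ** 2) + 1.0e-3
    return gamma0, gamma1, gamma2
\end{verbatim}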
\section{Results}\label{sec:Results}
All files used to produce figures in this paper, including the data, are
available from the arXiv version of this paper.
\subsection{Scaling}\label{sec:Scaling}
In this section we present two sets of scaling relations. The first involves
the final mass of the black hole $M_{\mathrm{BH}}$ for supercritical
evolutions. For each class of initial data we evolve the data with amplitudes
large enough that a black hole forms and gradually decrease the amplitude. While
decreasing the amplitude we focus on simulations that form a black hole. Rather
than performing a binary search to estimate $p_\star$, we fit the relationship
$\ln(M_{\mathrm{BH}})=\gamma\ln(p/p_\star-1)+C$ to the data for $\gamma$,
$p_\star$, and $C$, where we take $p$ to be the amplitude $\varphi_0$ of the
initial data. We then use the $p_\star$ from the fit to determine an amplitude
that should form a black hole but is closer to the critical solution. This is
repeated until $\log_{10}(p/p_\star-1)\approx-6$, the target value. Choosing
suitable values of $p$ to fit for $\gamma$ and $\Delta$ is tricky. We describe
our procedure in the \hyperref[sec:Appendix]{Appendix}. Note that the
relationship used for determining which amplitude to use next is not used for
analyzing the results.
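Schematically, one step of this procedure looks as follows (a sketch:
\texttt{black\_hole\_mass} is a stand-in for a full evolution and returns
synthetic masses obeying the leading-order scaling law with invented values
$p_\star=0.08$ and $\gamma=0.374$):
\begin{verbatim}
import numpy as np

p_true, gamma_true = 0.08, 0.374      # invented "exact" values

def black_hole_mass(p):               # placeholder for a simulation
    return (p / p_true - 1.0) ** gamma_true

ps = [0.120, 0.110, 0.100, 0.095, 0.090]
masses = [black_hole_mass(p) for p in ps]

def resid(pstar):                     # goodness of the linear fit
    xv = np.log(np.array(ps) / pstar - 1.0)
    yv = np.log(masses)
    coef = np.polyfit(xv, yv, 1)
    return np.sum((np.polyval(coef, xv) - yv) ** 2)

pstar = min(np.linspace(0.5 * min(ps), 0.99 * min(ps), 2000), key=resid)
p_next = pstar * (1.0 + 0.5 * (min(ps) / pstar - 1.0))  # step inward
print(pstar, p_next)   # repeat until log10(p/pstar - 1) ~ -6
\end{verbatim}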
The second scaling relation involves $R_{\max}$, the maximum of the Ricci scalar
at the center, for subcritical evolutions. We run simulations to obtain an
approximately even distribution of masses and maximum Ricci scalars for
$\ln|p/p_\star-1|\in(-14,-5]$. We estimate the errors in the final mass of the
black hole and $R_{\max}$ using convergence tests with values of $p$ nearest
$p_\star$.
Once we have reached the target number of simulations, with the lowest
amplitude that forms a black hole having $\log_{10}(p/p_\star-1)\approx-6$, we
fit the mass of the resulting black hole to
\begin{align}
\label{eq:SineMassFit}
\ln(M_{\mathrm{BH}})=&\gamma^M\ln(p/p_\star-1)+C^M\notag\\
&+A^M\sin\left[w^M\ln(p/p_\star-1)+\delta^M\right],
\end{align}
as suggested in~\cite{Gundlach:1996eg, Hod:1996az}. Note that the superscript
$M$ is not an exponent but denotes that the parameter was obtained by fitting to
the mass of the black hole rather than the maximum Ricci scalar at the
center. We find that the probability of $\chi^2$ and the reduced $\chi^2$ are
better for this function than the one where the sinusoidal term is omitted. We
fit for all parameters in~\eqref{eq:SineMassFit}, including $p_\star$. The
fitting function used for the maximum Ricci scalar at the origin is
\begin{align}
\label{eq:SineRicciFit}
\ln(R_{\max})=&-2\gamma^R\ln(1-p/p_\star)+C^R\notag\\
&+A^R\sin\left[w^R\ln(1-p/p_\star)+\delta^R\right].
\end{align}
However, for consistency we use the value of $p_\star$ obtained from fitting to
the masses when fitting to the maximum Ricci scalar as well.
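A sketch of such a fit on synthetic data (the data generation is ours; in
practice $x=\ln(p/p_\star-1)$ and the masses come from the simulations, and
$p_\star$ is fit simultaneously):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lnM(x, gamma, C, A, w, delta):    # Eq. (SineMassFit) at fixed p*
    return gamma * x + C + A * np.sin(w * x + delta)

rng = np.random.default_rng(0)
x = np.linspace(-14.0, -5.0, 40)
# synthetic data, with w chosen so that 2*gamma*(2*pi/w) ~ 3.45:
y = lnM(x, 0.374, 0.0, 0.02, 1.36, 1.0) + 1.0e-4 * rng.normal(size=40)

popt, pcov = curve_fit(lnM, x, y, p0=(0.37, 0.0, 0.01, 1.3, 0.0))
perr = np.sqrt(np.diag(pcov))         # 1-sigma parameter uncertainties
print(popt, perr)
\end{verbatim}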
In Fig.~\ref{fig:ScalingMasses} we plot $\ln(M_{\mathrm{BH}})$ as a function of
$\ln(p/p_\star-1)$ for the three types of initial data studied. For data
$\varphi_{\Re(Y^2_1)}$ we arbitrarily choose $\delta=0.9$, which is a large
deviation from the spherical solution. For reference, when $\delta=1$ the scalar
field profile is zero at the zeros of $1-\cos\phi\sin(2\theta)$. For
initial data $\varphi_{\text{3-d}}$ we choose $\delta=1$, an even stronger
deviation from spherical symmetry. In Fig.~\ref{fig:ScalingMasses} we offset the
curves vertically by $\beta_{i}=\{0.3,0,-0.3\}${} so that they do not overlap and are easier to
compare. The critical exponents we find are \gammamass{}, where the number in
parentheses is the uncertainty in the last digit. These are all close to the
accepted value for spherically symmetric initial data,
$0.374\pm0.001$~\cite{Gundlach:1996eg}, strongly suggesting that the spherical
mode dominates.
\begin{figure}[]
\centering \includegraphics[width=0.47\textwidth]{ScalingMasses.pdf}
\caption{Plotted is $\ln(M_{\mathrm{BH}})$ as a function of $\ln(p/p_\star-1)$
for the three types of initial data studied. We find critical exponents
\gammamass{}. We shift the curves vertically by $\beta_{i}=\{0.3,0,-0.3\}${} so that data
points from different initial data are easily
distinguished.}\label{fig:ScalingMasses}
\end{figure}
In addition to studying the final mass of the resulting black hole, we
follow~\cite{Garfinkle:1998va} and calculate the maximum Ricci scalar at the
center of the collapse for subcritical evolutions. In
Fig.~\ref{fig:ScalingRicci} we plot $\ln(R_{\max})$ as a function of
$\ln(1-p/p_\star)$ along with a fit using Eq.~(\ref{eq:SineRicciFit}) for the
initial data studied. We again offset the plots vertically by amounts
$\beta_{i}=\{0.4, 0, -0.4\}${} to aid readability. In this case we find critical exponents
\gammaricci{}, which are comparable to the values for mass scaling and to the
accepted value in spherically symmetric critical collapse,
$\gamma=0.374\pm0.001$.
\begin{figure}[]
\centering \includegraphics[width=0.47\textwidth]{ScalingRicci.pdf}
\caption{Plotted is $\ln(R_{\max})$ as a function of $\ln(1-p/p_\star)$ for
the three types of initial data studied. We find critical exponents
\gammaricci{}. We shift the curves vertically by $\beta_{i}=\{0.4, 0, -0.4\}${} so that data
points from different initial data are easily
distinguished.}\label{fig:ScalingRicci}
\end{figure}
\subsection{Echoing}\label{sec:Echoing}
\begin{figure}[]
\centering
\includegraphics[width=0.47\textwidth]{Residuals_DataSphericalMasses.pdf}
\caption{The residuals of fitting
$\ln(M_{\mathrm{BH}})=\gamma^M\ln(p/p_\star-1)+C$ (blue dots)
and Eq.~(\ref{eq:SineMassFit}) (green triangles) to the black hole
masses for the spherically symmetric case, $\varphi_{\rm{sph}}$. The
sinusoidal residual of the straight-line fit closely matches what
is observed
in~\cite{Hod:1996az}.}\label{fig:ResidualsSs}
\end{figure}
Having studied the scaling we now turn to the fine
structure and echoing of the critical behavior. Echoing of any gauge-invariant
quantity was described by Eq.~\eqref{eq:rescaling} above. A small-amplitude
sinusoidal modulation about the straight line expected from critical
behavior was conjectured and observed
in~\cite{Hod:1996az}. Fig.~\ref{fig:ScalingMasses} and
~\ref{fig:ScalingRicci} both show this feature. In
Fig.~\ref{fig:ResidualsSs} we plot the residuals when fitting only the
linear term and when fitting the linear plus sine term for the
spherically symmetric mass scaling case.\footnote{The residuals of the
fits for non-spherical initial data and for Ricci scaling are qualitatively
identical.} The sinusoidal modulation is
much clearer in Fig.~\ref{fig:ResidualsSs} than in
Fig.~\ref{fig:ScalingMasses}.
From the fit, Eq.~(\ref{eq:SineMassFit}), we estimate the period,
$T=2\pi/w$. In~\cite{Hod:1996az} it was found that the
relationship between the echoing period $\Delta$ and the scaling
period $T$ is $T=\Delta/(2\gamma)$. To test this relationship, we calculate
$\Delta$ using $T$ and also by estimating it directly from the
Ricci scalar at the origin as a function of the logarithmic time
$-\ln(1-\tau/\tau_\star)$. Here $\tau$ is the proper time at the origin,
given by
\begin{align}
\label{eq:ProperTimeOrigin}
\tau=\int_0^t N(\tilde{t}, 0)d\tilde{t},
\end{align}
and $\tau_\star$ is the accumulation time of the self-similar solution.
We find that despite being able to resolve the fine
structure and knowing $p_\star$ to six significant figures, the
estimate of $\tau_\star$ from the apparent horizon formation time is
only accurate to about two digits. This is because the
formation time of an apparent horizon is a gauge-dependent quantity. We
estimate
$\tau_\star$ by assuming that the logarithmic time between
successive echoes becomes constant and adjusting $\tau_\star$ until
this is true. The resulting $\tau_\star$ is consistent with what we estimate
from apparent horizon formation times. In Fig.~\ref{fig:Echoing} we
plot $\ln(R(t,r=0))$, a
geometric invariant, which shows the expected echoing that has been
studied in previous work~\cite{Garfinkle:1998va,Sorkin:2005vz}. From
Fig.~\ref{fig:Echoing} we estimate the echoing period to be
$\Delta=3.2\pm0.1$.
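A sketch of this $\tau_\star$ adjustment (the echo times below are synthetic
placeholders; in practice they are read off from the maxima of the Ricci
scalar at the origin, with $\tau$ computed from
Eq.~(\ref{eq:ProperTimeOrigin}) by quadrature of the lapse):
\begin{verbatim}
import numpy as np

tau_star_true = 10.0                  # invented for the demonstration
n = np.arange(6)
# synthetic echo times, equally spaced in -ln(1 - tau/tau_star):
tau_echo = tau_star_true * (1.0 - np.exp(-(0.5 + 1.72 * n)))

def spread(tau_star):                 # scatter of the echo spacings
    T = -np.log(1.0 - tau_echo / tau_star)
    return np.std(np.diff(T))

grid = np.linspace(tau_echo[-1] + 1.0e-6, 1.2 * tau_star_true, 20000)
print(min(grid, key=spread))          # recovers tau_star ~ 10.0
\end{verbatim}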
\begin{figure}[]
\centering
\includegraphics[width=0.47\textwidth]{Echoing.pdf}
\caption{Plotted is $\ln(R(t,r=0))$ as a function of
$\ln(1-\tau/\tau_\star)$ for the three types of initial data
studied. The echoing is clearly visible and very similar between
the different evolutions, which all have
$\ln(1-p/p_\star)\approx-6$. The echoing period is
$\Delta=3.2\pm0.1$ for all
simulations.}\label{fig:Echoing}
\end{figure}
\begin{table}
\centering
\begin{tabularx}{\columnwidth}{@{\extracolsep{\stretch{1}}}*{7}{c}@{}}
\hline
Initial Data & $2\gamma^MT^M$ &
$2\gamma^RT^R$ & $\Delta_{\mathrm{echoing}}$ \\ \hline
$\varphi_{\mathrm{sph}}$ & $3.46\pm0.01$ & $3.557\pm0.001$ &
$3.2\pm0.1$ \\
$\varphi_{\Re (Y^2_1)}$ & $3.46\pm0.02$ & $3.518\pm0.002$ & $3.2\pm0.1$ \\
$\varphi_{\mathrm{3-d}}$ & $3.67\pm0.04$ & $3.512\pm0.003$ & $3.2\pm0.1$ \\ \hline
\end{tabularx}
\caption{Comparison of $2\gamma^MT^M$ and the echoing period $\Delta$.
In~~\cite{Hod:1996az} it was found that $\Delta=2\gamma T$, which
we are unable to verify within our error estimates.
The accepted value of the echoing period in spherical symmetry is
$\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}.}\label{tab:EchoingPeriods}
\end{table}
In Table~\ref{tab:EchoingPeriods} we summarize and compare direct
estimates of $\Delta$ to $2\gamma T$. Specifically, we find that
$2\gamma^MT^M\approx3.46$, near the best known value of
$\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}. For simulations
that do not form a horizon, where we compute $2\gamma^RT^R$ from the
Ricci scalar scaling plot, Fig.~\ref{fig:ScalingRicci}, we find
that
$2\gamma^R_{\mathrm{sph}}T^R_{\mathrm{sph}}=3.557\pm0.001$,
$2\gamma^R_{\Re (Y^2_1)}T^R_{\Re (Y^2_1)}=3.518\pm0.002$, and
$2\gamma^R_{\text{3-d}}T^R_{\text{3-d}}=3.512\pm0.003$. The discrepancy
between $2\gamma T$ from mass scaling and Ricci scalar scaling is
currently not understood. When studying the echoing of $\ln(R(t,r=0))$, we find
$\Delta=3.2\pm0.1$, where the larger error is explained by the difficulty
in estimating $\tau_\star$.
\begin{figure}[]
\centering
\includegraphics[width=0.47\textwidth]{PowerPsiPlanar.pdf}
\caption{The power in $\varphi_\ell$ for $\ell=0, 2$ for the $\Re(Y^2_1)$
initial data with $\varphi_0=0.07586803$.}\label{fig:PowerPsiPlanar}
\end{figure}
A power spectrum analysis shows that the spherical mode dominates the
evolution. We define the power in a given $\ell$-mode as
\begin{align}
P_\ell = \frac{1}{N_r}
\sum_{i=0}^{N_r-1}\sum_{m=-\ell}^{\ell}\left|C_{i,\ell,m}\right|^2
\end{align}
where $N_r$ is the number of radial points, and $C_{i,\ell,m}$ are the
coefficients in the spectral expansion. This definition is consistent with
Parseval's theorem given that
\begin{align}
\int \lvert Y_\ell^m(\theta, \phi)\rvert^2d\Omega=1.
\end{align}
Also note that with this definition at a given radius
\begin{align}
\int \lvert f(\theta, \phi)\rvert^2d\Omega = \sum_{\ell=0}^{\infty}P_\ell.
\end{align}
For the $\Re(Y^2_1)$ data we find that initially
\begin{align}
\frac{P_2}{P_0} = \frac{27}{125}
\Rightarrow \frac{P_2}{\sum_{\ell}P_\ell} = \frac{P_2}{P_0 + P_2} \approx 0.18,
\end{align}
or that approximately 18 percent of the power is in the $\ell=2$ mode. For the
3-d initial data we find that initially
\begin{align}
\frac{P_2}{P_0} \approx 0.548
\Rightarrow \frac{P_2}{\sum_{\ell}P_\ell} = \frac{P_2}{P_0 + P_2} \approx 0.35,
\end{align}
or that approximately 35 percent of the power is in the $\ell=2$ mode.
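Both ratios follow from a direct angular quadrature, since the $\ell=2$
perturbations are orthogonal to the spherical part; a quick numerical check of
the quoted numbers (our own, using the angular profiles of
Eqs.~(\ref{eq:Reflection ID}) and~(\ref{eq:Generic ID})):
\begin{verbatim}
import numpy as np

th = np.linspace(0.0, np.pi, 1001)
ph = np.linspace(0.0, 2.0 * np.pi, 1001)
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH) * (th[1] - th[0]) * (ph[1] - ph[0])  # dOmega weight

def P2_over_P0(pert):
    # power ratio of the perturbation to the unit spherical part
    return np.sum(pert ** 2 * w) / np.sum(w)        # sum(w) ~ 4*pi

reY21 = 0.9 * np.cos(PH) * np.sin(2.0 * TH)
gen3d = ((np.cos(PH) + np.sin(PH)) * np.sin(2.0 * TH)
         - (3.0 * np.cos(TH) ** 2 - 1.0)) / 1.56

print(P2_over_P0(reY21), 27.0 / 125.0)   # both ~0.216
print(P2_over_P0(gen3d))                 # ~0.548
\end{verbatim}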
In Fig.~\ref{fig:PowerPsiPlanar} we plot the power in $\varphi_\ell$ for
$\ell=0, 2$ for the $\Re(Y^2_1)$ initial data. Fig.~\ref{fig:PowerPsiPlanar}
shows that the $\ell=2$ mode decays much more rapidly than the $\ell=0$ mode,
suggesting that the spherically symmetric critical solution is approached.
However, given the different initial data and that we are further from the
critical solution than~\cite{Choptuik:2003ac}, we are unable to corroborate or
dispute their results.
The initial data used in~\cite{Baumgarte:2018fev} is given by
\begin{align}
\label{eq:Baumgarte data}
\varphi_{Y_2^{2}} =& \varphi_0
\exp\left(-\frac{r}{r_0}\right)\left[\sin^2\theta
+\left(1-\delta^2\right)\cos^2\theta\right]\notag \\
=&\varphi_0
\exp\left(-\frac{r}{r_0}\right)\left(1 - \delta^2 +
\delta^2\sin^2\theta\right).
\end{align}
The deformation in this case, having no $\phi$ dependence, is proportional to
the $Y_2^0$ spherical harmonic as opposed to the $Y_2^1$ spherical
harmonic. Ref.~\cite{Baumgarte:2018fev} found that
for $\delta=0.75$ the critical behavior differs significantly from that of the
spherically symmetric evolutions. For example, the critical exponent is observed
to be $\gamma\approx0.306$. The percentage of the power
in the $\ell=2$ mode for $\delta=0.75$ is approximately 47 percent. This
is 12 percent more than our 3-d initial data that has behavior consistent
with the spherically symmetric evolutions. This raises the question as to
whether the reason Ref.~\cite{Baumgarte:2018fev} sees different behavior is because of
the increased power in the $\ell=2$ modes or because the initial data is
proportional to the $Y_2^0$ spherical harmonic instead of the $Y_2^1$
spherical harmonic. Work is underway to attempt to resolve this question.
\section{Conclusions}\label{sec:Conclusions}
We present results of a study of critical behavior in the 3-d gravitational
collapse of a massless scalar field with no symmetry assumptions. We are able to
resolve the dominant critical behavior as well as the fine structure in both
supercritical and subcritical evolutions. We use the Spectral Einstein Code,
\texttt{SpEC}~\cite{SpECwebsite} to perform the evolutions, with several key
changes to the gauge condition and constraint damping. We study how the
critical exponent and echoing period obtained from the data depend on how close
to the critical solution the simulations are, as well as how the simulations are
distributed in parameter space. This is especially important in 3-d where
simulations are costly to perform. We find the critical exponents to be
\gammamass{}, consistent with the accepted result in spherical symmetry of
$0.374\pm0.001$~\cite{Gundlach:1996eg}. The accepted value of the echoing
period $\Delta$ in spherical symmetry is
$\Delta=3.4453\pm0.0005$~\cite{Gundlach:1996eg}, while we find echoing periods
$\Delta=3.2\pm0.1$ for all initial data considered. The discrepancy can be
attributed to the difficulty in directly measuring the echoing period. We also
test the predicted relationship~\cite{Gundlach:1996eg, Hod:1996az} between the
echoing period and the fine structure of the scaling, $2\gamma T=\Delta$. We
find that for mass scaling \Deltamass{}, where $T^M$ is the period of the
sinusoidal fine structure.
The agreement of the critical exponent, echoing
period, and fine structure between the spherically symmetric and highly
non-spherical simulations leads us to conclude that even for initial data far
from spherical symmetry the critical solution is that of spherical symmetry.
However, the reason why our results differ from those of~\cite{Choptuik:2003ac}
and~\cite{Baumgarte:2018fev}, where data far
from spherical symmetry approaches a different critical solution, is not yet
fully understood. One reason for the discrepancy could be that in our data
approximately 18 percent of the total power is in the $\ell=2$ mode for the
$\Re(Y^2_1)$ initial data and 35 percent for the 3-d initial data, while
in~\cite{Baumgarte:2018fev} approximately 47 percent of the power is in the
$\ell=2$ mode. In other words, more power in the $\ell=2$ mode than we used may
be needed to alter the critical solution. Another possible reason is
that~\cite{Baumgarte:2018fev} studied $\ell=2, m=0$ initial data while we study
$\ell=2, m=1$ initial data. Work is
underway to understand whether either of these scenarios is responsible for the
discrepancy and to independently reproduce the simulations
of~\cite{Baumgarte:2018fev}.
\section{Acknowledgements}
We are grateful to Andy Bohn, Fran\c{c}ois H\'{e}bert, and Leo Stein for
insightful discussions and feedback on earlier versions
of this paper. We are also grateful to the anonymous referee for the
feedback. This work was supported in part by a Natural Sciences and Engineering
Research Council of Canada PGS-D grant to ND,
NSF Grant PHY-1606654 at Cornell University, and by a grant from the Sherman
Fairchild Foundation. Computations were performed
on the Zwicky cluster at Caltech, supported by the Sherman
Fairchild Foundation and by NSF award PHY-0960291.
\section{Introduction}
In some sense, the ``prehistory'' of Leavitt path algebras
began with Leavitt algebras (\cite{leav:tmtoar} and \cite{leav:tmtohi}),
Bergman algebras (\cite{bergman:casurc}), and graph C$^{\ast}$-algebras
(\cite{cunt:acgby}), which dealt with rings with the \textit{Invariant
Basis Number} property, universal ring constructions, and the
structure of a separable simple infinite C$^{\ast}$-algebra,
respectively. As to the algebraic structures known as \textit{Leavitt
path algebras} themselves, they were initiated and developed
independently, and using different approaches, in the foundational
papers on the subject \cite{ap:tlpaoag05} and \cite{amp:nktfga}. Then,
during the last decade, these algebras have continuously been of
significant interest to mathematicians from different areas of
mathematics such as ring and group theorists, analysts working in
C$^{\ast}$-algebras, and symbolic dynamicists, for example. For a
detailed history and overview of the Leavitt path algebras we refer
our potential readers to a recent quite remarkable and motivating
survey on the subject~\cite{a:lpatfd}.
In our time, we may clearly observe a steadily growing interest in
developing algebraic and homological theories of semirings and
semimodules, as well as in their numerous connections with, and
applications in, different branches of mathematics, computer science,
cryptography, quantum physics, and many other areas of science (see,
\textit{e.g.}, \cite{gla:agttlosataimais}). As is well known,
structure theories for varieties of algebras constitute an important
``classical'' area of the sustained interest in algebraic research. In
those theories, so-called simple algebras, \textit{i.e.}, algebras
possessing only two trivial congruences -- the identity and universal
ones -- play a very important role of ``building blocks.'' In
addition, simple semirings, constituting another booming area of
semiring research, have quite interesting and promising applications
in various fields, in particular in cryptography (see, \textit{e.g.},
\cite{mmr:pkcbosa}). However, in contrast to the varieties of groups
and rings, research on simple semirings has been started only
recently, and therefore not much on the subject is known. Also,
investigating semirings and their representations, one should
undoubtedly use methods and techniques of both ring and lattice theory
as well as diverse techniques and methods of categorical and universal
algebra, and work in a ``nonabelian environment.'' Perhaps all these
circumstances explain why research on simple semirings is still behind
of that for rings and groups (for some recent activity and results on
this subject one may consult \cite{mf:ccs}, \cite{bshhurtjankepka:scs},
\cite{monico:ofcss}, \cite{bashkepka:css}, \cite{zumbr:cofcsswz},
\cite{jezkepkamaroti:tesoas}, \cite{knt:mosssparp}, \cite{kz:fsais},
\cite{knz:ososacs}).
Motivated by \cite{a:lpatfd}, in this paper we initiate a study of
Leavitt path algebras (it deserves to be mentioned that, in some way,
a generalization of an idea of Leavitt algebras from \cite{leav:tmtoar}
to a semiring setting was earlier considered in \cite{hebwei:otrosos})
in a nonadditive/nonabelian semiring setting --- working with
semirings and semimodules, we live in a ``world without subtraction'' and,
therefore, have no privilege of the classical well developed
techniques of additive/abelian categories of modules over rings. More
precisely, we consider the concepts of Leavitt path algebras with
coefficients in a commutative semiring~$S$, and of ideal- and
congruence-simpleness for those algebras; note that in our semiring
setting, in contrast to the ``additive'' ring case, these two
notions of simpleness are not the same (see, \textit{e.g.}, \cite[%
Examples~3.8]{knz:ososacs}) and should be differed. In light of this,
presenting some new, important and interesting in our view,
considerations, results and techniques regarding characterizations of
ideal- and congruence-simple Leavitt path algebras over a commutative
ground semiring~$S$, extending the ``classical'' ring characterizations
(see, \cite[Theorem~3.11]{ap:tlpaoag05}, \cite[Theorem~3.1]{ap:tlpaoag08},
\cite[Theorem~6.18]{tomf:utaisflpa} and \cite[Theorem~3.11]{g:lpaadl}),
as well as motivating an interest to this direction of research, is a
main goal of our paper.
For the reader's convenience, all subsequently necessary basic
concepts and facts on semirings and Leavitt path algebras with
coefficients in a commutative semiring are collected in
Section~\ref{sec:basic}.
In Section~\ref{sec:ideal}, together with establishing some
important properties of the Leavitt path algebras with coefficients in
a commutative semiring~$S$, we provide a complete characterization of
ideal-simple Leavitt path algebras with coefficients in a semifield~$S$
(Theorem~3.4), constituting one of the central results of the paper
and extending the well-known characterizations (see,
\cite[Theorem~3.11]{ap:tlpaoag05}, \cite[Theorem~3.1]{ap:tlpaoag08},
\cite[Theorem~6.18]{tomf:utaisflpa} and \cite[Theorem~3.11]{g:lpaadl})
when the ground semiring $S$ is a field.
In Section~\ref{sec:congr}, together with establishing some
fundamental facts about the Leavitt path algebras with coefficients in
the Boolean semifield~$\mathbf{B}$ and combining them with
Theorem~3.4, we present a complete characterization of
congruence-simple Leavitt path algebras over row-finite graphs with
coefficients in a commutative semiring~$S$ (Theorem~4.5), constituting
another main result of the paper and extending the well-known
characterizations from [\textbf{op.\ cit.}]. It should be emphasized
that, in contrast to the ``classical'' case of the ground
structure~$S$ to be a commutative ring, in order to establish these
results in our semiring setting, one needs to exploit some innovative
approach and techniques of universal algebra based on dealing with
congruences rather then with ideals. Also, resolving
\cite[Problem~2]{knt:mosssparp} in the class of Leavitt path algebras
with coefficients in a commutative semiring~$S$, we show
(Corollary~4.2) that for algebras of this class the
congruence-simpleness implies their ideal-simpleness.
Finally, for notions and facts from semiring theory, we refer to
\cite{golan:sata}.
\section{Basic concepts}\label{sec:basic}
\subsection{Preliminaries on semirings}
Recall \cite{golan:sata} that a \emph{hemiring\/} is an algebra $(S,+,\cdot
,0)$ such that the following conditions are satisfied:
(1) $(S,+,0)$ is a commutative monoid with identity element $0$;
(2) $(S,\cdot)$ is a semigroup;
(3) Multiplication distributes over addition from either side;
(4) $0s=0=s0$ for all $s\in S$.
A hemiring~$S$ is \emph{commutative} if $(S, \cdot)$ is a commutative
semigroup; and a hemiring $S$ is \emph{additively idempotent} if
$a+a=a$ for all $a\in S$. Moreover, a hemiring $S$ is a \emph{semiring}
if its multiplicative semigroup $(S, \cdot)$ actually is a monoid
$(S, \cdot, 1)$ with identity element~$1$. A commutative semiring~$S$
is a \emph{semifield} if $(S \setminus \{0\}, \cdot, 1)$ is a group. Two
well-known examples of semifields are the additively idempotent two
element semiring $\mathbf{B} = \{0,1\}$, the so-called \emph{Boolean
semifield}, and the tropical semifield $(\mathbb{R}\cup \{-\infty
\},\vee ,+,-\infty ,0)$.
As usual, given two hemirings $S$ and $S'$, a map $\varphi: S
\longrightarrow S'$ is a \emph{homomorphism} iff $\varphi(x+y) =
\varphi(x) + \varphi(y)$ for all $x,y\in S$; and a submonoid~$I$ of
$(S,+,0)$ is an \emph{ideal} of a hemiring~$S$ iff $sa$ and $as\in I$ for
all $a \in I$ and $s \in S$; an equivalence relation $\rho$ on a hemiring~$S$
is a \emph{congruence} iff $(s+a,s+b)\in \rho$, $(sa,sb)\in \rho $ and
$(as, bs) \in \rho$ for all pairs $(a,b) \in \rho $ and $s \in S$. On every
hemiring $S$ there are always the two trivial congruences --- the \emph{%
diagonal congruence}, $\vartriangle_{S} := \{(s,s) \mid s\in S\}$, and the
\emph{universal congruence}, $S^{2} := \{(a,b) \mid a,b\in S\}$. Following
\cite{bshhurtjankepka:scs}, a hemiring~$S$ is \textit{congruence-simple} if
$\vartriangle_{S}$ and $S^{2}$ are the only congruences on~$S$;
and~$S$ is \textit{ideal-simple} if $0$ and~$S$ are the only ideals
of~$S$. It is clear that a hemiring~$S$ is congruence-simple iff
every nonzero hemiring homomorphism $\varphi: S\longrightarrow S'$ is
injective. Obviously, the concepts congruence- and ideal-simpleness
are the same for rings and, therefore, we have just simple rings, but
they are different ones for semirings in general (see, \textit{e.g.},
\cite[Examples 3.8]%
{knz:ososacs}).
An $S$-\emph{semimodule} over a given commutative semiring~$S$ is a
commutative monoid $(M,+,0_{M})$ together with a scalar multiplication
$(s,m) \mapsto sm$ from $S\times M$ to~$M$ which satisfies the identities
$(ss')m = s(s'm)$, $s(m+m') = sm+sm'$, $(s+s')m = sm+s'm$, $1m=m$,
$s0_{M} = 0_{M} = 0m$ for all $s,s'\in S$ and $m,m'\in M$. \emph{Homomorphisms}
between semimodules and \emph{free} semimodules are defined in the standard
manner.
By an $S$-algebra $A$ over a given commutative semiring $S$ we mean an
$S$-semimodule $A$ with an associative bilinear $S$-semimodule multiplication
``\,$\cdot$\,'' on $A$. An $S$-algebra~$A$ is \emph{unital} if $(A,\cdot)$
is actually a monoid with a neutral element $1_{A}\in A$, \textit{i.e.},
$a1_{A}=a=1_{A}a$ for all $a\in A$. For example, every hemiring is an
$\mathbb{N}$-algebra, where $\mathbb{N}$ is the semiring of the natural
numbers with added $0$; and, of course, every additively idempotent hemiring
is a $\mathbf{B}$-algebra.
Let $S$ be a commutative semiring and $\{x_{i} \mid i\in I\}$ a set of
independent, noncommuting indeterminates. Then $S\langle x_{i} \mid i\in
I\rangle $ will denote the free $S$-algebra generated by the indeterminates
$\{x_{i} \mid i\in I\}$, whose elements are polynomials in the noncommuting
variables $\{x_{i} \mid i\in I\}$ with coefficients from $S$ that commute
with each variable $x_{i},i\in I$.
Finally, let~$S$ be a commutative semiring and $(G,\cdot ,1)$ a group. Then
we can form the \emph{group semiring} $S[G]$, whose elements are formal sums
$\sum_{g\in G}a_{g}g$ with the \emph{coefficients} $a_{g}\in $ $S$ and the
finite \emph{support}, \textit{i.e.}, almost all $a_{g}=0$. As usual, the
operations of addition and multiplication on $S[G]$ are defined as follows
\begin{gather*}
\textstyle\sum\limits_{g\in G}a_{g}g + \textstyle\sum\limits_{g\in G}b_{g}g = \textstyle\sum\limits_{g\in G}
(a_{g}+b_{g})g , \\
(\textstyle\sum\limits_{g\in G}a_{g}g) (\textstyle\sum\limits_{h\in G}b_{h}h) = \textstyle\sum\limits_{t\in G}c_{t}t ,
\end{gather*}%
where $c_{t}=\sum a_{g}b_{h}$, with summation over all $(g,h)\in G\times G$
such that $gh=t$. Clearly, the elements of $S:=$ $S\cdot 1$ commute with the
elements of $G:=1\cdot G$ under the multiplication in $S[G]$. In particular,
one may easily see that $S[\mathbb{Z}]\cong S[x,x^{-1}]$, where $S[x,x^{-1}]$
is the algebra of the \textit{Laurent polynomials} over~$S$.
\subsection{Basics on Leavitt path algebras with coefficients in
a commutative semiring}
In this subsection, we introduce Leavitt path algebras having coefficients
in an arbitrary commutative semiring~$S$. The construction of such algebras
is, certainly, a straightforward generalization of the constructions of the
Leavitt path algebras with the semiring~$S$ to be a field and a commutative
ring with unit originated in~\cite{ap:tlpaoag05} and~\cite{tomf:lpawciacr},
respectively. All these constructions are crucially based on some general
notions of graph theory that for the reader's convenience we reproduce here.
A (directed) graph $\Gamma =(V,E,s,r)$ consists of two disjoint sets~$V$
and~$E$ -- \textit{vertices} and \textit{edges}, respectively -- and two
maps $s,r: E\longrightarrow V$. If $e\in E$, then $s(e)$ and $r(e)$ are
called the \textit{source} and \textit{range} of $e$, respectively. The
graph $\Gamma$ is \emph{row-finite} if $|s^{-1}(v)| < \infty$ for every
$v \in V$. A vertex~$v$ for which $s^{-1}(v)$ is empty is called a
\emph{sink}; and a vertex~$v$ is \emph{regular} iff $0 < |s^{-1}(v)| < \infty$.
A \emph{path} $p = e_{1} \dots e_{n}$ in a graph $\Gamma$ is a sequence of
edges $e_{1}, \dots, e_{n}$ such that $r(e_{i}) = s(e_{i+1})$ for
$i = 1, \dots, n-1$. In this case, we say that the path~$p$ starts at
the vertex $s(p) := s(e_{1})$ and ends at the vertex $r(p) := r(e_{n})$,
and has \emph{length} $|p| := n$. We consider the vertices in~$V$ to be
paths of length~$0$. If $s(p) = r(p)$, then~$p$ is a \emph{closed path
based at} $v = s(p) = r(p)$. Denote by $\text{CP}(v)$ the set of all
such paths. A closed path based at~$v$, $p = e_{1} \dots e_{n}$, is a
\emph{closed simple path based at}~$v$ if $s(e_i) \neq v$ for every $i > 1$.
Denote by $\text{CSP}(v)$ the set of all such paths. If $p = e_{1}
\dots e_{n}$ is a closed path and all vertices $s(e_{1}), \dots, s(e_{n})$
are distinct, then the subgraph $(s(e_{1}), \dots, s(e_{n}); e_{1},
\dots, e_{n})$ of the graph $\Gamma$ is called a \emph{cycle}. An
edge~$f$ is an \emph{exit} for a path $p = e_{1} \dots e_{n}$ if
$s(f) = s(e_{i})$ but $f \ne e_{i}$ for some $1 \le i \le n$.
\begin{defn}[{\textit{cf.}~\cite[Definition~1.3]{ap:tlpaoag05} and
\cite[Definition~2.4]{tomf:lpawciacr}}]
Let $\Gamma = (V,E,s,r)$ be a graph and $S$ a commutative semiring.
The \emph{Leavitt path algebra} $L_{S}(\Gamma)$ of the graph~$\Gamma$
\emph{with coefficients in}~$S$ is the $S$-algebra presented by the
set of generators $V\cup E\cup E^{\ast}$ -- where $E\rightarrow E^{\ast}$,
$e\mapsto e^{\ast}$, is a bijection with $V$, $E$, $E^{\ast}$ pairwise
disjoint -- satisfying the following relations:
(1) $v v' = \delta_{v,v'} v$ for all $v, v' \in V$;
(2) $s(e) e = e = e r(e)$, $r(e) e^{\ast} = e^{\ast} = e^{\ast} s(e)$ for
all $e\in E$;
(3) $e^{\ast} f = \delta_{e,f} r(e)$ for all $e,f \in E$;
(4) $v = \sum_{e\in s^{-1}(v)} e e^{\ast}$ whenever $v\in V$ is a regular
vertex.
\end{defn}
It is easy to see that the mappings given by $v \mapsto v$, for $v \in V$,
and $e \longmapsto e^{\ast}$, $e^{\ast} \longmapsto e$ for $e\in E$, produce
an involution on the algebra $L_{S}(\Gamma)$, and for any path $p = e_{1}
\dots e_{n} $ there exists $p^{\ast} := e_{n}^{\ast} \dots e_{1}^{\ast}$.
Observe that the Leavitt path algebra $L_{S}(\Gamma)$ can also be
defined as the quotient of the free $S$-algebra $S\langle v,e,e^{\ast}
\mid v\in V, e\in E, {e^{\ast}\in E^{\ast}} \rangle$ by the congruence
$\sim$ generated by the following ordered pairs:
(1) $(v v', \delta_{v,v'} v)$ for all $v, v' \in V$,
(2) $(s(e) e, e), (e, e r(e))$ and $(r(e) e^{\ast}, e^{\ast}), (e^{\ast},
e^{\ast} s(e))$ for all $e \in E$,
(3) $(e^* f, \delta_{e,f} r(e))$ for all $e, f \in E$,
(4) $(v, \sum_{e\in s^{-1}(v)} e e^{\ast})$ for all regular vertices $v\in V$.
\begin{rem}
As will be shown in Proposition~2.4, for any graph $\Gamma =
(V,E,s,r)$, all generators $\{ v, e, e^{\ast} \mid v\in V, e\in E,
e^{\ast}\in E^{\ast}\}$ of $L_{S}(\Gamma)$ are nonzero.
Furthermore, from the observation above, it readily follows that
$L_{S}(\Gamma)$ is, in fact, the ``largest'' algebra generated by
the elements $\{v, e, e^{\ast} \mid v\in V, e\in E, e^{\ast}\in E^{\ast}\}$
satisfying the relations (1) -- (4) of Definition~2.1, in other words,
$L_{S}(\Gamma)$ has the following \textit{universal} property:
If~$A$ is an $S$-algebra generated by a family of elements
$\{a_{v}, b_{e}, c_{e^{\ast}} \mid v\in V, e\in E, {e^{\ast } \in E^{\ast}}\}$
satisfying the analogous to (1) -- (4) relations in Definition~2.1,
then there always exists an $S$-algebra homomorphism
$\varphi: L_{S}(\Gamma) \rightarrow A$ given by ${\varphi(v) = a_{v}}$,
${\varphi(e) = b_{e}}$ and ${\varphi(e^{\ast}) = c_{e^{\ast}}}$.
\end{rem}
The following examples illustrate that some well-known (classical) algebras
actually can be viewed as the Leavitt path algebras as well.
\begin{exas}[{\textit{cf}.~\cite[Examples~1.4]{ap:tlpaoag05}}]
Let $S$ be a commutative semiring.
(i) Let $\Gamma = (V,E,s,r)$ be a graph with $V = \{v_{1}, \dots,
v_{n}\}$ and $E = \{e_{1}, \dots, e_{n-1}\}$, where $s(e_{i}) =
v_{i}$, $r(e_{i}) = v_{i+1}$ for all $i = 1, \dots, n-1$. Then it is
easy to check that the map $\varphi: L_{S}(\Gamma) \longrightarrow
M_{n}(S)$, given by $\varphi(v_{i}) = E_{i,i}$, $\varphi(e_{i}) = E_{i,i+1}$
and $\varphi(e_{i}^{\ast}) = E_{i+1,i}$, where $\{E_{i,j} \mid 1 \leq i,j
\leq n\}$ are the standard elementary matrices in the $n \times n$ matrix
semiring $M_{n}(S)$, is an $S$-algebra isomorphism.
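For instance, relations (3) and (4) of Definition~2.1 can be checked directly
under this map: since $s^{-1}(v_{i}) = \{e_{i}\}$ for $i < n$,
\[
\varphi(e_{i}^{\ast}e_{i}) = E_{i+1,i}E_{i,i+1} = E_{i+1,i+1} = \varphi(r(e_{i}))
\quad\text{and}\quad
\varphi(e_{i}e_{i}^{\ast}) = E_{i,i+1}E_{i+1,i} = E_{i,i} = \varphi(v_{i}),
\]
the latter being relation (4) at the regular vertex $v_{i}$; the sink $v_{n}$
imposes no such relation.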
(ii) Let $\Gamma = (V,E,s,r)$ be a graph given by $V = \{ v \}$ and
$E = \{ e \}$. Then it is obvious that the Leavitt path algebra
$L_{S}(\Gamma)$ is isomorphic to the Laurent polynomial algebra
$S[x,x^{-1}]$ with $x:=e$ and $x^{-1} := e^{\ast}$.
(iii) In \cite{leav:tmtohi}, investigating rings with the Invariant
Basis Number property there were introduced what we now call the
Leavitt algebras of the form $L_{K}(1,n)$, where $K$ is a field and
$n \geq 2$ is a natural number, of type $(1,n)$. Then, in
\cite{hebwei:otrosos}, the authors, generalizing the Leavitt algebra
construction in a semiring setting, constructed an $S$-algebra
$L_{S}(1,n)$ over a commutative semiring~$S$ which was defined by
the generators $\{x_{i}, y_{i} \mid 1 \leq i \leq n\}$ and relations
$x_{i} y_{j} = \delta_{ij}$ for all $1 \leq i,j \leq n$, and
$\sum_{i=1}^{n} y_{i} x_{i} = 1$. Considering the graph $\Gamma =
(V,E,s,r)$ given by $V = \{v\}$ and $E = \{e_{1}, \dots, e_{n}\}$, one
may easily verify that the Leavitt path algebra $L_{S}(\Gamma)$ is,
in fact, isomorphic to $L_{S}(1,n)$ by letting $y_{i} := e_{i}$ and
$x_{i} := e_{i}^{\ast}$ for all $1 \leq i \leq n$.
\end{exas}
The following proposition is an analog of \cite[Proposition~3.4]%
{tomf:lpawciacr} for a non-abelian semiring setting and presents some
fundamental properties of the Leavitt path algebras.
\begin{prop}[{\textit{cf}.~\cite[Proposition~3.4]{tomf:lpawciacr}}]
Let $\Gamma = (V,E,s,r)$ be a graph and~$S$ a commutative semiring. Then,
the Leavitt path algebra $L_{S}(\Gamma)$ has the following properties:
(1) All elements of the set $\{ v, e, e^{\ast} \mid v\in V,e\in E,
e^{\ast} \in E^{\ast}\}$ are nonzero;
(2) If $a, b$ are distinct elements in~$S$, then $av\neq bv$ for all
$v\in V$;
(3) Every monomial in $L_{S}(\Gamma)$ is of the form $\lambda p q^{\ast}$,
where $\lambda \in S$ and $p, q$ are paths in $\Gamma$ such that
$r(p) = r(q)$.
\end{prop}
\begin{proof}
The proof given for the case of rings in \cite[Proposition~3.4]%
{tomf:lpawciacr}, which, in turn, uses a construction similar to that for
the case of fields from \cite[Lemma~1.5]{g:lpaadl}, is based on
Remark~2.2 --- there should be constructed an $S$-algebra $A$ as in
Remark~2.2 having all generators $\{a_{v}, b_{e}, c_{e^{\ast}} \mid
v\in V, e\in E, e^{\ast}\in E^{\ast}\}$ to be nonzero. The argument
depends very little on the ``abelianness'' of the ring case and,
therefore, works in our semiring setting as well. Just for the
reader's convenience, we have decided to sketch it here.
Thus, let~$I$ be an infinite set of the cardinality at least $|V \cup E|$,
and let $Z:=S^{(I)}$ a free $S$-semimodule with the basis $I$, \textit{i.e.},
$Z$ is a direct sum of~$|I|$ copies of~$S$. For each $e\in E$, let
$A_{e}:=Z$ and, for each $v\in V$, let
\[ A_{v} := \begin{cases}
\bigoplus_{s(e)=v} A_{e} & \text{if } s^{-1}(v) \ne \varnothing , \\
Z & \text{if } v \text{ is a sink.}%
\end{cases} \]
Note that the~$A_{e}$ and~$A_{v}$ are all mutually isomorphic, since each
of them is the direct sum of~$|I|$ many copies of~$S$. Let $A :=
\bigoplus_{v\in V} A_{v}$. For each $v \in V$ define $T_{v}: A_{v}
\longrightarrow A_{v}$ to be the identity map and extend it to a
homomorphism $T_{v}: A \longrightarrow A$ by defining~$T_{v}$ to be zero
on $A \ominus A_{v}$. Also, for each $e \in V$ choose an isomorphism
$T_{e}: A_{r(e)} \longrightarrow A_{e} \subseteq A_{s(e)}$ and extend it to a
homomorphism $T_{e}: A \longrightarrow A$ by mapping to zero on
$A \ominus A_{r(e)}$. Finally, we define $T_{e^{\ast}}: A \longrightarrow A$
by taking the isomorphism $T_{e}^{-1}: A_{e} \subseteq A_{s(e)} \longrightarrow
A_{r(e)}$ and extending it to a homomorphism $T_{e^{\ast}}: A \longrightarrow A$
by letting $T_{e^{\ast}}$ to be zero on $A \ominus A_{e}$.
Now consider the subalgebra of $\text{Hom}_{S}(A,A)$ generated by $\{T_{v},
T_{e}, T_{e^{\ast}} \mid v\in V, e\in E, e^{\ast}\in E^{\ast}\}$. It is
straightforward to check (\textit{cf.}~\cite[Lemma~1.5]{g:lpaadl}) that
$\{T_{v},T_{e},T_{e^{\ast}} \mid {v\in V}, {e\in E}, {e^{\ast}\in E^{\ast}}\}$
is a collection of nonzero elements satisfying the relations described
in Definition~2.1. By the universal property of $L_{S}(\Gamma)$, we
get that the elements of the set $\{v,e,e^{\ast} \mid {v\in V}, {e\in E},
{e^{\ast}\in E^{\ast}}\}$ are nonzero and (1) is established.
Next we note that for each $v\in V$ we have $A_{v}=S\oplus M$ for some $S$%
-semimodule $M$. Let $a,b$ be two distinct elements in $S$. We have
\[ aT_{v}(1,0) = T_{v}(a,0) = (a,0) \ne (b,0) = T_{v}(b,0) = bT_{v}(1,0), \]
so $a T_{v} \ne b T_{v}$. The universal property of $L_{S}(\Gamma)$ then
implies that $av \ne bv$, and (2) is established.
As to (3), it follows immediately from the fact that $e^{\ast}f=\delta
_{e,f}r(e)$ for all $e,f\in E$.
\end{proof}
As usual, for a hemiring~$S$ a \emph{set of local units}~$F$ is a set
$F \subseteq S$ of idempotents in~$S$ such that, for every finite subset
$\{s_{1}, \dots, s_{n}\} \subseteq S$, there exists an element $f \in F$ with
$fs_{i} = s_{i} = s_{i}f$ for all $1 \le i \le n$. Using Proposition~2.4 and
repeating verbatim the proof of \cite[Lemma~1.6]{ap:tlpaoag05}, one obtains
the following useful fact.
\begin{prop}
Let $\Gamma = (V,E,s,r)$ be a graph and $S$ a commutative semiring.
Then $L_{S}(\Gamma)$ is a unital $S$-algebra if~$V$ is finite; and
if~$V$ is infinite, the set of all finite sums of distinct elements
of~$V$ is the set of local units of the $S$-algebra $L_{S}(\Gamma)$.
\end{prop}
Let $\Gamma = (V,E,s,r)$ be a graph. A subset $H \subseteq V$ is called
\emph{hereditary} if $s(e) \in H$ implies $r(e) \in H$ for all $e\in E$;
and $H \subseteq V$ is \emph{saturated} if $v \in H$ for any regular
vertex $v$ with $r(s^{-1}(v)) \subseteq H$. Obviously, the two trivial
subsets of $V$, $\varnothing$ and $V$, are hereditary and saturated.
We note the following useful observation whose proof is completely
analogous to the ones in \cite[Lemma~3.9]{ap:tlpaoag05} and
\cite[Lemma~2.3]{ap:tlpaoag08} and which, for the reader's
convenience, we provide here.
\begin{lem}
Let $\Gamma = (V,E,s,r)$ be a graph, $S$ a commutative semiring, and
$I$ an ideal of $L_{S}(\Gamma)$. Then, $I \cap V$ is a hereditary and
saturated subset of~$V$.
\end{lem}
\begin{proof}
Set $H := I \cap V$. For any $e \in E$ with $s(e) \in H$ we have $r(e) \in H$, since
$e = s(e)e \in I$, and thus $r(e) = e^{\ast} e \in I$. Furthermore, if a
regular vertex $v \in V$ satisfies $r(e) \in H$ for all $e\in E$ with
$s(e) = v$, then $v \in H$, since $e = e r(e) \in I$ for all these edges
$e \in E$, and hence $v = \sum_{e\in s^{-1}(v)} e e^{\ast} \in I$.
\end{proof}
We conclude this section with the following simple but quite useful
technical remark, which follows immediately from the identity
$e^{\ast} f = \delta_{e,f} r(e)$ for all $e, f \in E$.
\begin{rem}
For any two paths $p, q$ in $\Gamma$ we have
\[ p^{\ast} q = \begin{cases}
q' & \text{if } q = p q' , \\
r(p) & \text{if } p = q , \\
p'^{\ast} & \text{if } p = q p' , \\
0 & \text{otherwise} .
\end{cases} \]
\end{rem}
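For instance, if $p = e_{1} e_{2}$ and $q = e_{1} e_{2} e_{3}$ are paths
in~$\Gamma$, then repeated use of that identity gives
\[ p^{\ast} q = e_{2}^{\ast} e_{1}^{\ast} e_{1} e_{2} e_{3}
= e_{2}^{\ast}\, r(e_{1})\, e_{2} e_{3}
= e_{2}^{\ast} e_{2} e_{3} = r(e_{2})\, e_{3} = e_{3} , \]
the first case of the remark with $q' = e_{3}$; whereas for
$q = e_{1} f$ with $f \ne e_{2}$ one gets $p^{\ast} q = e_{2}^{\ast} f = 0$,
the last case.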
\section{Ideal-simpleness of Leavitt path algebras with
coefficients in a semifield}\label{sec:ideal}
The main goal of this section is to present a description of the
ideal-simple Leavitt path algebras $L_{S}(\Gamma)$ of arbitrary graphs
$\Gamma = (V,E,s,r)$ with coefficients in a semifield $S$ that extends
the well-known description when the ground semifield~$S$ is a
field~$K$ (\cite[Theorem~3.11]{ap:tlpaoag05}, \cite[Theorem~3.1]%
{ap:tlpaoag08}, \cite[Theorem~6.18]{tomf:utaisflpa} and \cite%
[Theorem~3.11]{g:lpaadl}). For that we have to establish some
subsequently needed important facts.
\begin{prop}
A graph $\Gamma = (V,E,s,r)$ of an ideal-simple Leavitt path
algebra $L_{S}(\Gamma)$ with coefficients in a commutative
semiring~$S$ satisfies the following two conditions:
(1) The only hereditary and saturated subsets of~$V$ are~$\varnothing$
and~$V$;
(2) Every cycle in~$\Gamma$ has an exit.
\end{prop}
\begin{proof}
(1) Actually the proof of the statement given in \cite[Theorem~3.11]%
{ap:tlpaoag05} does not use the additive ring/module setting and,
therefore, can easily be modified for our (nonadditive) semiring
setting. For the reader's convenience, we briefly sketch the central
ideas of that modification here.
Assume that~$V$ contains a nontrivial hereditary and saturated subset~$H$.
In the same way as was shown in \cite[Theorem 3.11]{ap:tlpaoag05}, one may
easily observe that
\[ \Gamma' = (V',E',s_{\Gamma'},r_{\Gamma'})
:= (V\setminus H,\ r^{-1}(V\setminus H),\ s|_{E'},\ r|_{E'}) \]
is a graph, too. Then, as in \cite[Theorem~3.11]{ap:tlpaoag05}, let
us consider an $S$-algebra homomorphism $\varphi: L_{S}(\Gamma)
\longrightarrow L_{S}(\Gamma')$ given on the generators of the free
$S$-algebra $A := S \langle v, e, e^{\ast} \mid v\in V, e\in E,
e^{\ast}\in E^{\ast} \rangle$ as follows: $\varphi(v) = \chi_{V'}(v)v$,
$\varphi(e) = \chi_{E'}(e)e$ and $\varphi(e^{\ast}) = \chi_{(E')^{\ast}}
(e^{\ast})e^{\ast}$, where $\chi_{X}$ denotes the usual characteristic
function of a set~$X$. To see that the so defined map
$\varphi: L_{S}(\Gamma) \longrightarrow L_{S}(\Gamma')$ indeed provides
us with the desired hemiring homomorphism, we only need to verify that
all of the following pairs
$(v v', \delta_{v, v'} v)$ for all $v, v' \in V$,
$(s(e)e, e), (e, er(e))$ and $(r(e)e^{\ast}, e^{\ast}), (e^{\ast},
e^{\ast}s(e))$ for all $e\in E$,
$(e^* f, \delta_{e,f} r(e))$ for all $e, f\in E$,
$(v, \sum_{e\in s^{-1}(v)} e e^{\ast})$ for a regular vertex $v\in V$,
\noindent are in the kernel congruence
\[ \text{ker}(\varphi) := \{ (x,y)\in A^{2} \mid \varphi(x)=\varphi(y) \} \]
of $\varphi$. But the latter can be established right away by repeating
verbatim the corresponding obvious arguments in the proof of \cite%
[Theorem~3.11]{ap:tlpaoag05}. Note that $|(s_{\Gamma '})^{-1}(v)| <
\infty$ in $\Gamma'$ for any regular vertex~$v$ in $\Gamma$. Since
$\varnothing \neq H \varsubsetneqq V$, Proposition~2.4 implies that $\varphi$
is a nonzero homomorphism with $H\subseteq \varphi^{-1}(0)$; and
therefore, $L_{S}(\Gamma)$ contains a proper ideal and, hence, is not
ideal-simple.
(2) Let~$\Gamma$ contain a cycle~$p$, based at~$v$, without any exit.
Then, by repeating verbatim the corresponding arguments in the proof of
\cite[Theorem~3.11]{ap:tlpaoag05}, one gets that $vL_{S}(\Gamma)v =
S[p,p^{\ast}]$, \textit{i.e.}, each element in $vL_{S}(\Gamma)v$ is written in
the form $\sum_{i=r}^s \lambda_i p^i$, where $r,s \in \mathbb{Z}$ and
$\lambda_i \in S$; and let $p^0 := v$ and $p^{-j} := (p^{\ast})^j$ for all
$j>0$. Since $L_{S}(\Gamma)$ is ideal-simple, \cite[Proposition~5.3]%
{knz:ososacs} implies that $vL_{S}(\Gamma)v$ is an ideal-simple commutative semiring as
well. The latter, by \cite[Theorem 11.2]{bshhurtjankepka:scs}, implies that
$vL_{S}(\Gamma)v = S[p,p^*]$ is a semifield. We claim that $S[p,p^*] \cong
S[x, x^{-1}]$, the Laurent polynomial semiring over~$S$; as this is clearly
not a semifield, this contradiction finishes the proof.
It remains to show that the natural homomorphism $S[x, x^{-1}] \to S[p, p^*]$
given by $f \mapsto f(p)$ is, indeed, injective. Let~$I$, $Z = S^{(I)}$,
$A_e$ for $e \in E$, $A_v$ for $v \in V$, and $A$ be as in the
proof of Proposition~2.4, and consider the endomorphisms $T_v$, $T_e$,
$T_{e^*}$ of~$A$, for $v \in V$, $e \in E$, $e^{\ast} \in E^{\ast}$.
Without loss of generality, we may assume that $I = \mathbb{Z} \times I'$
for some nonempty set~$I'$.
Write the cycle~$p$ based at~$v$ as $p = e_1 \dots e_n$ with $e_i \in E$,
where $v_{i-1} := s(e_i)$ and $v_i := r(e_i)$ for $1 \le i \le n$, so that
$v_0 = v_n = v$. By the construction in the proof of Proposition~2.4 we
have $A_{v_{i-1}} = A_{e_i}$, since~$p$ has no exit, and $T_{e_i}$ restricts to
an isomorphism $A_{v_i} \longrightarrow A_{v_{i-1}}$, for all~$i$.
Consider the endomorphism $T_p := T_{e_1} \circ \dots \circ T_{e_n}$,
which restricts to an isomorphism $T: A_v \longrightarrow A_v$,
where $A_v = Z = S^{(I)} = S^{(\mathbb{Z} \times I')}$. Observe that the
isomorphisms $T_{e_i}$ may be chosen freely, hence we may assume that
$T(\delta_{(k,i)}) = \delta_{(k+1,i)}$ for all $k \in \mathbb{Z}$ and
$i \in I'$, where by $\delta_{(k,i)}$ we denote the standard basis vectors
of the free $S$-semimodule $S^{(\mathbb{Z} \times I')}$.
Now suppose that $f(p) = g(p)$ for some Laurent polynomials $f, g \in
S[x, x^{-1}]$. Since the $T_v$, $v \in V$, $T_e$, $e \in E$, $T_{e^{\ast}}$,
$e^{\ast} \in E^{\ast}$, satisfy the relations described in Definition~2.1,
it follows that $f(T) = g(T)$ holds in $\text{Hom}_S(A, A)$. Writing
$f = \sum_{j=r}^s f_j x^j$ and $g = \sum_{j=r}^s g_j x^j$ for some $r, s \in
\mathbb{Z}$ and $f_j, g_j \in S$, from our choice of~$T$ we see that $f(T)
(\delta_{(0,i)}) = \sum_{j=r}^s f_j T^j (\delta_{(0,i)}) = \sum_{j=r}^s f_j
\delta_{(j,i)}$, and similarly $g(T) (\delta_{(0,i)}) = \sum_{j=r}^s g_j
\delta_{(j,i)}$, for any~$i \in I'$. From this we readily deduce that
$f = g$ and thus the injectivity follows.
\end{proof}
Following~\cite{ap:tlpaoag05}, a monomial in $L_{S}(\Gamma)$ is a \emph{%
real path} if it contains no terms of the form $e^{\ast}\in E^{\ast}$, and
a polynomial $\alpha \in L_{S}(\Gamma)$ is in \emph{only real edges} if it
is a sum of real paths; let $L_{S}(\Gamma)_{real}$ denote the
subhemiring of all polynomials in only real edges in $L_{S}(\Gamma)$. The
following technical observation will prove to be useful.
\begin{lem}[{\textit{cf.}~\cite[Corollary~3.2]{ap:tlpaoag05}}]
Let $\Gamma = (V,E,s,r)$ be a graph with the property that every
cycle has an exit, and let~$S$ be a semifield. If $\alpha \in
L_{S}(\Gamma)_{real} \subseteq L_{S}(\Gamma)$ is a nonzero
polynomial in only real edges, then there exist $a, b \in
L_{S}(\Gamma)$ such that $a \alpha b \in V$.
\end{lem}
\begin{proof}
The proof of~\cite[Corollary~3.2]{ap:tlpaoag05} does not use the
``additiveness'' of the setting and, therefore, repeating verbatim
the latter, one gets the statement in our nonadditive setting as
well. However, we provide a new proof which is much shorter than
Abrams and Aranda Pino's original one.
Namely, we write $\alpha$ in the form $\alpha = \sum_i \lambda_i
q_i$ with $q_i$ distinct real paths and $0 \ne \lambda_i \in S$. Out
of the set $\{ q_i \}$ choose $p$ such that no proper prefix path
of~$p$ is contained therein. Let $v = r(p)$. Then, using Remark~2.7
we get $p^* \alpha v = \lambda v + \sum_i \lambda_i p^* q_i$, where
$\lambda$ is the coefficient of~$p$ in~$\alpha$ and the sum is over
all~$q_i$ that have~$p$ as a proper prefix path and $r(q_i) = v$, so
that $p^* q_i \in \text{CP}(v)$.
Hence, without loss of generality, we may assume that $\alpha = \lambda v
+ \sum^n_{i=1} \lambda_i p_i$, where the $p_i \in \text{CP}(v)$ are of
positive length and $0\ne\lambda \in S$. Fix some $c \in \text{CSP}(v)$.
For any $p_i \in \text{CP}(v)$ we may write $p_i = c^{n_i} p_i'$ with
$n_i \in \mathbb N$ maximal, so that either $p_i' = v$ or $p_i' = d_i p_i''$
with $d_i \in \text{CSP}(v)$, $d_i \ne c$, in which case
$(c^*)^{n_i+1} p_i = c^* p_i' = c^* d_i p_i'' = 0$ by Remark~2.7. With
$N := \max \{n_i \mid i = 1, \dots, n\} + 1$, we then have that
$(c^*)^N p_i c^N = p_i$ if $p_i = c^{n_i}$, and $(c^*)^N p_i c^N = 0$
otherwise. Therefore, we have $\alpha' :=
(c^*)^N \alpha c^N = \lambda v + \sum_j \lambda_j c^{n_j}$ with $n_j
> 0$, \textit{i.e.}, $\alpha' = \lambda v + c P(c)$ for some polynomial~$P$.
Now, we write $c$ in the form $c = e_1 \dots e_m$. By our hypothesis
and \cite[Lemma 2.5]{ap:tlpaoag05}, there exists an exit $f\in E$ for~$c$,
that is, there exists $j\in \{1, \dots, m\}$ such that $s(f) = s(e_j)$
but $f\neq e_j$. Let $z:= e_1 \dots e_{j-1}f$. We get that $s(z) = v$ and
$z^*c = 0$, so that $\lambda^{-1}z^* \alpha' z = z^* z + \lambda^{-1}z^*
c P(c) z = r(z) \in V$, as desired.
\end{proof}
As was shown in \cite[Theorem~6]{col:tsiilpa}, every nonzero ideal of the
Leavitt path algebra of a row-finite graph with coefficients in a field
always contains a nonzero polynomial in only real edges. The following
observation extends this result to the Leavitt path algebras of
arbitrary graphs with coefficients in a commutative semiring.
\begin{prop}
Let $\Gamma = (V,E,s,r)$ be a graph and~$S$ a commutative semiring.
Then any nonzero ideal~$I$ of $L_{S}(\Gamma)$ contains a nonzero
polynomial in only real edges.
\end{prop}
\begin{proof}
Let $I_{real} := I \cap L_{S}(\Gamma)_{real}$ for a nonzero ideal~$I$,
and suppose that $I_{real} = 0$. Choose $0 \ne \alpha = \sum_{i=1}^{d}
\lambda_{i} p_{i} q_{i}^{\ast}$ in $I$, where $d$ is minimal such that
$p_{1}, \dots, p_{d}, q_{1}, \dots, q_{d}$ are paths in $\Gamma$ and
$0 \ne \lambda_{i} \in S$, $i = 1, \dots, d$. By using \cite[Remark~3]%
{col:tsiilpa}, as in the proof of \cite[Lemma~4]{col:tsiilpa},
one can easily get that the element~$\alpha$ can be presented in the form
$\alpha = \mu_{1} + \dots + \mu_{m}$, where all monomials in $\mu_{j}\in I$,
$j = 1, \dots, m$, have the same source and the same range. Moreover,
since $\alpha \ne 0$ and by the minimality of~$d$, we can assume that actually
$\alpha = \sum_{i=1}^{d} \lambda_{i} p_{i} q_{i}^{\ast}$ with $s(p_{i}) = s(p_{j})$
and $s(q_{i}) = s(q_{j}) = w \in V$ for all~$i$ and~$j$. Among all such
$\alpha = \sum_{i=1}^{d} \lambda_{i} p_{i} q_{i}^{\ast} \in I$ with minimal~$d$,
select one for which $(|q_{1}|, \dots, |q_{d}|)$ is the smallest in the
lexicographic order of $\mathbb{N}^{d}$. Obviously, $|q_{i}| > 0$ for
some~$i$ (otherwise, $0 \ne \alpha \in I_{real} = 0$). If $e \in E$, then
\[ \alpha e = \textstyle\sum\limits_{i=1}^{d} \lambda_{i} p_{i} q_{i}^{\ast} e
= \textstyle\sum\limits_{i=1}^{d'} \lambda_{i} p_{i}' (q_{i}')^{\ast}, \]
where we either have $d' < d$, or $d' = d$ and $(|q_{1}'|, \dots,
|q_{d}'|)$ is smaller than $(|q_{1}|, \dots, |q_{d}|)$.
Whence, by the minimality of $(|q_{1}|, \dots, |q_{d}|)$, we get
$\alpha e = 0$ for all $e\in E$. Since $|q_{i}| > 0$ for some~$i$, we have
that~$w$ is not a sink, and if it is a regular vertex, we have
\[ 0 \ne \alpha = \alpha w = \alpha \textstyle\sum\limits_{e \in s^{-1}(w)} e e^{\ast} =
\textstyle\sum\limits_{e \in s^{-1}(w)} (\alpha e) e^{\ast} = 0 . \]
Therefore, we need only to consider two possible cases when the
vertex~$w$ emits infinitely many edges:
\emph{Case~1.} Let $|q_{j}| > 0$ for all~$j$, and $A := \{ e \in
s^{-1}(w) \mid q_{i}^{\ast}e \ne 0$ for some $1 \le i \le d\}$.
Notice that $q_{i}^{\ast} e \ne 0$ if and only if the path~$q_i$
has the form $q_i = f_1 \dots f_k$ with $k \ge 1$ and $f_1 = e$.
In particular, in this case we have $q_{i}^{\ast} e e^{\ast} =
q_{i}^{\ast}$. It is clear that $|A| < \infty$, and hence, $\alpha =
\sum_{e\in A} \alpha e e^{\ast}$. Since $\alpha e = 0$ for all $e\in E$, we
obtain the contradiction $0 \ne \alpha = \sum_{e\in A} \alpha e e^{\ast} = 0$.
\emph{Case~2.} If $|q_{j}| = 0$ for some~$j$, the element~$\alpha$ can be
presented as
\[ \alpha = \lambda_1 p_{1} + \dots + \lambda_m p_{m} + \lambda_{m+1} p_{m+1}
q_{m+1}^{\ast} + \dots + \lambda_d p_{d} q_{d}^{\ast}, \]
where $p_1, \dots, p_m$ are distinct paths in~$\Gamma$ and
$r(p_{i}) = w = s(q_{j})$ for all $i = 1, \dots, d$ and $j = m+1,
\dots, d$. Set $\beta := \lambda_1 p_{1} + \dots + \lambda_mp_{m}$.
By Remark~2.7, we may choose a path~$p$ in $\Gamma$ such that
\[ p^{\ast} \beta = \lambda w + \textstyle\sum\limits_{j=1}^k \nu_j p'_j, \]
where $0 \ne \lambda \in S$, $\nu_j\in S$ and $p'_j \in \text{CP}(w)$ for
all~$j$. Since~$w$ emits infinitely many edges, there is an edge $e \in
s^{-1}(w)$ such that $q_{i}^{\ast} e = 0 = e^{\ast} p'_j$ for all $i = m+1,
\dots, d$ and $j = 1, \dots, k$. Then, $0 = \alpha e = \beta e \in I$
and, hence, $p^{\ast} \beta e = \lambda e + \sum_{j=1}^k \nu_j p'_j e = 0$.
This implies that $e^{\ast} p^{\ast} \beta e = \lambda r(e) = 0$. Using
Proposition~2.4\,(2), we get that $\lambda = 0$, a contradiction.
Hence, the ideal~$I$ contains a nonzero polynomial in only real edges.
\end{proof}
In \cite[Theorem~3.11]{ap:tlpaoag05}, the authors characterized the simple
Leavitt path algebras over countable row-finite graphs with coefficients in
a field. Then, the row-finiteness hypothesis was independently eliminated
in \cite[Theorem~3.1]{ap:tlpaoag08} and in \cite[Theorem~6.18]%
{tomf:utaisflpa}, and finally this characterization has been extended in
\cite[Theorem~3.11]{g:lpaadl} to arbitrary graphs. The next and main result
of this section is an extension of the latter characterization to the
Leavitt path algebras with coefficients in a semifield.
\begin{thm}
A Leavitt path algebra $L_{S}(\Gamma)$ of a graph $\Gamma = (V,E,s,r)$ with
coefficients in a semifield~$S$ is ideal-simple if and only if
the graph~$\Gamma$ satisfies the following two conditions:
(1) The only hereditary and saturated subsets of~$V$ are~$\varnothing$
and~$V$;
(2) Every cycle in~$\Gamma$ has an exit.
\end{thm}
\begin{proof}
$\Longrightarrow$. It follows from Proposition~3.1.
$\Longleftarrow$. Let $I$ be a nonzero ideal of $L_{S}(\Gamma)$. By
Proposition~3.3, $I$ contains a nonzero polynomial~$\alpha$ in only real
edges. By Lemma~3.2, there exist $a, b \in L_{S}(\Gamma)$ such that
$a \alpha b \in V$, \textit{i.e.}, $I \cap V \ne \varnothing$. Now,
applying Lemma~2.6 and Proposition~2.5, we conclude that $I =
L_{S}(\Gamma)$.
\end{proof}
Taking into consideration \cite[Theorem~7.20]{tomf:lpawciacr}, the following
question seems to be reasonable, interesting and promising. \medskip
\noindent \textbf{Problem.} How far can Theorem~3.4 be extended to a
commutative ground semiring~$S$? \medskip
We finish this section by demonstrating the use of Theorem~3.4 in
re-establishing the ideal-simpleness of the Leavitt path algebras of
Examples~2.3.
\begin{exas}[{\textit{cf.}~\cite[Corollary~3.13]{ap:tlpaoag05}}]
Note that all Leavitt path algebras in these examples are algebras
with coefficients in a semifield $S$.
(i) By \cite[Proposition 4.7]{knt:mosssparp}, $M_{n}(S)$ is an ideal-simple
algebra. However, this fact can also be justified by Theorem~3.4, since it
is easy to check that the graph $\Gamma$ of Examples~2.3\,(i) satisfies (1)
and (2) of Theorem~3.4.
(ii) By Examples~2.3\,(ii), the Laurent polynomial algebra $S[x,x^{-1}]
\cong L_{S}(\Gamma)$, where the graph $\Gamma$ contains a cycle without an
exit; therefore, by Theorem~3.4, $S[x,x^{-1}]$ is not ideal-simple.
(iii) By Examples~2.3\,(iii), the Leavitt algebras $L_{S}(1,n)$ for $n\geq 2$
are isomorphic to the Leavitt path algebras $L_{S}(\Gamma)$ such that for
the graphs~$\Gamma$ conditions~(1) and~(2) of Theorem~3.4 are obviously
satisfied, and therefore, the algebras $L_{S}(1,n)$ are ideal-simple.
(Note that we consider here an $S$-algebra analog of a Leavitt algebra over
a field, see \cite[Theorem~2]{leav:tmtohi}).
\end{exas}
\section{Congruence-simpleness of Leavitt path algebras
with coefficients in a commutative semiring}%
\label{sec:congr}
Providing necessary and sufficient conditions for a Leavitt path algebra
over a row-finite graph with coefficients in a commutative semiring to be
congruence-simple is the main goal of this section. We start with necessary
conditions for such algebras to be congruence-simple, namely:
\begin{prop}
For a congruence-simple Leavitt path algebra $L_{S}(\Gamma)$ of a
graph $\Gamma = (V,E,s,r)$ with coefficients in a commutative
semiring $S$ the following statements are true:
(1) $S$ is either a field, or the Boolean semifield~$\mathbf{B}$;
(2) The only hereditary and saturated subsets of~$V$ are~$\varnothing$
and~$V$;
(3) Every cycle in~$\Gamma$ has an exit.
\end{prop}
\begin{proof}
(1) First, let us show that there are only the two trivial congruences
on~$S$. Indeed, if $\sim$ is a proper congruence on~$S$, the natural
surjection $\pi: S \longrightarrow \overline{S} := S / {\sim}$, defined
by $\pi(\lambda) = \overline{\lambda}$, is neither zero nor an injective
homomorphism. As one can easily verify, the homomorphism~$\pi$ induces
a nonzero surjective hemiring homomorphism $\varphi: L_{S}(\Gamma)
\longrightarrow L_{\overline{S}}(\Gamma)$ such that $\varphi (\lambda p
q^{\ast}) = \overline{\lambda} p q^{\ast}$, where $\lambda \in S$ and
$p, q$ are paths in $\Gamma$ with $r(p) = r(q)$. Since $\pi$ is not
injective, there exist two distinct elements $a, b \in S$ such that
$\overline{a} = \overline{b}$ and, by Proposition~2.4\,(2), $a v \ne b v$
in $L_{S}(\Gamma)$ for any $v\in V$. However,
\[ \varphi (a v) = \overline{a} v = \overline{b} v = \varphi (b v) , \]
and hence, $\varphi$ is not injective, and therefore, $L_{S}(\Gamma)$ is
not congruence-simple. Thus, $S$ is congruence-simple, and it follows by
\cite[Theorem~10.1]{bshhurtjankepka:scs} (see also \cite[Theorem~3.2]%
{mf:ccs}) that~$S$ is either a field, or the semifield~$\mathbf{B}$.
(2) This statement follows from the proof of
Proposition~3.1\,(1): Indeed, in the notation of the latter, one readily
concludes that the map $\varphi: L_{S}(\Gamma) \longrightarrow
L_{S}(\Gamma')$ is a nonzero homomorphism and $H\subseteq \varphi
^{-1}(0)$, and hence, $L_{S}(\Gamma)$ is not congruence-simple.
(3) This statement can be proven analogously to
Proposition~3.1\,(2); in the notation of the latter, one readily
concludes that $v L_S(\Gamma) v = S[p,p^*] \cong S[x, x^{-1}]$, the Laurent
polynomial semiring over~$S$. By \cite[Proposition~5.3\,(2)]{knz:ososacs},
$vL_{S}(\Gamma)v$ is a congruence-simple semiring; that means, $S[x, x^{-1}]$
is congruence-simple, too. This would imply, by \cite[Theorem~10.1]%
{bshhurtjankepka:scs} (see also \cite[Theorem~3.2]{mf:ccs}), that
$S[x, x^{-1}]$ is either a field or the Boolean semifield $\mathbf{B}$,
which is obviously not the case.
\end{proof}
Combining Theorem 3.4 and Proposition 4.1, one immediately obtains that the
congruence-simpleness of a Leavitt path algebra over an arbitrary graph with
coefficients in a commutative semiring implies its ideal-simpleness, which,
in turn, resolves \cite[Problem 2]{knt:mosssparp} in the class of
Leavitt path algebras, namely:
\begin{cor}
A congruence-simple Leavitt path algebra $L_{S}(\Gamma)$ over an
arbitrary graph~$\Gamma$ with coefficients in a commutative
semiring~$S$ is ideal-simple as well.
\end{cor}
Next, modifying the ideas and techniques used in the proof of~\cite%
[Theorem~6]{col:tsiilpa}, we obtain a semiring version of this result
for the Leavitt path algebras over the Boolean semifield~$\mathbf{B}$.
\begin{prop}
Let $\Gamma = (V,E,s,r)$ be a row-finite graph, $\rho$ a congruence
on $L_{\mathbf{B}} (\Gamma)$, and $\rho_{real} := \rho \cap
(L_{\mathbf{B}} (\Gamma)_{real})^{2}$. Then~$\rho$ is generated by
$\rho_{real}$.
\end{prop}
\begin{proof}
Let $\tau$ be the congruence on $L_{\mathbf{B}}(\Gamma)$ generated by
$\rho_{real}$; the inclusion $\tau \subseteq \rho$ is obvious.
Suppose that $\tau \ne \rho$, \textit{i.e.}, there exists $(x, y) \in \rho$
with $(x, y) \notin \tau$. By Proposition~2.5 we may choose a
finite subset $F \subseteq V$ such that $x = x \sum_{v \in F} v$ and
$y = y \sum_{v \in F} v$, and therefore
\[ (x, y) = ( x \textstyle\sum\limits_{v \in F} v, y \textstyle\sum\limits_{v \in F} v ) =
\textstyle\sum\limits_{v \in F} ( x v, y v) . \]
Since $(x, y) \notin \tau$, there exists $v \in F$ such that $(x v, y v)
\notin \tau$, and we have $(x v, y v) \in \rho$. Therefore, we may assume
that $x = \sum_{i=1}^{k} p_{i} q_{i}^{\ast}$ and $y = \sum_{j=1}^{l}
\gamma_{j} \delta_{j}^{\ast}$ with $p_{i}, q_{i}, \gamma_{j}, \delta_{j}$
paths in~$\Gamma$ and $r(q_{i}^{\ast}) = r(\delta_{j}^{\ast}) = v$ for
all~$i,j$. Among all such pairs $(\sum_{i=1}^{k} p_{i} q_{i}^{\ast} ,\,
\sum_{i=1}^{l} \gamma_{i} \delta_{i}^{\ast}) \in \rho \setminus \tau$ with
minimal $d:=k+l$, select one for which $(|q_{1}|, \dots, |q_{k}|,
|\delta_{1}|, \dots, |\delta_{l}|)$ is the smallest in the lexicographic
order of~$\mathbb{N}^{d}$. As $(x,y) \notin \tau$, one has $|q_{i}| > 0$
for some~$i$, or $|\delta_{j}|>0$ for some~$j$. For all $e \in E$,
\[ (xe,ye) = (\textstyle\sum\limits_{i=1}^{k} p_{i}q_{i}^{\ast}e,
\textstyle\sum\limits_{i=1}^{l} \gamma_{i}\delta_{i}^{\ast}e)
= (\textstyle\sum\limits_{i=1}^{k'} p_{i}' (q_{i}')^{\ast},
\textstyle\sum\limits_{i=1}^{l'} \gamma_{i}' (\delta_{i}')^{\ast}) , \]
and either $d' := k' + l' < d$, or $d' = d$ and $(|q_{1}'|, \dots, |q_{k}'|,
|\delta_{1}'|, \dots, |\delta_{l}'|)$ is smaller than $(|q_{1}|, \dots,
|q_{k}|, |\delta_{1}|, \dots, |\delta_{l}|)$, whence $(xe,ye) \in \tau$,
by minimality. As some $|q_{i}| > 0$ or some $|\delta_{j}| > 0$,
it follows that~$v$ is not a sink, and hence, $(x,y) = (x v,y v) =
(x \sum_{e\in s^{-1}(v)}ee^{\ast}, y \sum_{e\in s^{-1}(v)}ee^{\ast}) =
\sum_{e\in s^{-1}(v)} ((xe)e^{\ast}, (ye)e^{\ast}) \in \tau$, contradicting
that $(x,y) \notin \tau$. This shows that $\rho = \tau$.
\end{proof}
The following result, being a $\mathbf{B}$-algebra analog of
\cite[Theorem~3.11]{ap:tlpaoag05}, characterizes the congruence-simple
Leavitt path algebras over the Boolean semifield $\mathbf{B}$.
\begin{thm}
A Leavitt path algebra $L_{\mathbf{B}}(\Gamma)$ of a row-finite
graph $\Gamma = (V,E,s,r)$ is congruence-simple if and only if the
graph~$\Gamma$ satisfies the following two conditions:
(1) The only hereditary and saturated subsets of~$V$ are~$\varnothing$
and~$V$;
(2) Every cycle in~$\Gamma$ has an exit.
\end{thm}
\begin{proof}
$\Longrightarrow$. It follows from Proposition~4.1.
$\Longleftarrow$. Let $\rho \neq \Delta_{L_{\mathbf{B}}(\Gamma)}$ be a
congruence on $L_{\mathbf{B}}(\Gamma)$. Then, by Proposition~4.3, $\rho$
is generated by $\rho_{real} := \rho \cap (L_{\mathbf{B}}(\Gamma)_{real})^{2}$
and $\rho_{real} \neq \Delta_{L_{\mathbf{B}}(\Gamma)_{real}}$. Hence,
there exist two elements $a, b \in L_{\mathbf{B}}(\Gamma)_{real}$ such that
$a\neq b$ and $(a,b) \in \rho$. We claim that there exists a nonzero
polynomial $x \in L_{\mathbf{B}}(\Gamma)$ in only real edges such that
$(0,x) \in \rho$.
It is clear that $L_{\mathbf{B}}(\Gamma)$ is an additively idempotent
hemiring, \textit{i.e.}, $L_{\mathbf{B}}(\Gamma)$ is a partially ordered
hemiring with its unique partial order defined as follows: $s \leq s'
\Longleftrightarrow s + s' = s'$. Whence, $(a,a+b) = (a+a,a+b) \in \rho$,
$(b,a+b) = (b+b,a+b) \in \rho$, and since $a \ne b$, either $a < a+b$ or
$b < a+b$. Thus, keeping in mind that $(a+x,b+x) \in \rho$ for all
$x \in L_{\mathbf{B}}(\Gamma)$ and without loss of generality, one may
assume that $a < a + b$ and $a$, $a+b$ are written in the form
\[ a = p_{1} + \dots + p_{n}, \quad
a+b = p_{1} + \dots + p_{n} + p , \]
where $p_{1}, \dots, p_{n}, p$ are distinct paths in $\Gamma$.
Moreover, we may choose~$a$ with the minimal number~$n$ of such
paths $p_{1} , \dots, p_{n}$.
Let $v := s(p)$, $w := r(p) \in V$. Then $(v a w, v (a+b) w) \in
\rho$, where $v a w = v p_1 w + \dots + v p_n w$ and $v b w = v p w
= p$, hence by minimality we may assume that $s(p_i) = v$ and
$r(p_i) = w$ for all $i = 1, \dots, n$.
Suppose that $v \ne w$. Write $p = q p'$, where~$q$ is the shortest
prefix of~$p$ ending at~$w$ and~$p'$ is a closed path based
at~$w$. Taking into account Remark~2.7, for every~$p_{j}$ such that
$q^{\ast} p_{j} \ne 0$ we have $p_{j} = q p_{j}'$ for some closed
path~$p_{j}'$ based at~$w$. Then we have $(q^{\ast} a, q^{\ast} (a+b))
= (q^{\ast} p_{1} +\dots+ q^{\ast}p_{n}, q^{\ast}p_{1} +\dots+ q^{\ast}p_{n}
+ q^{\ast}p) = (\sum_{j} p_{j}', \sum_{j} p_{j}' + p') \in \rho$.
Therefore, without loss of generality, we may assume that $v = w$,
\textit{i.e.}, $p, p_{1} , \dots, p_{n}$ are distinct closed paths
based at~$v$, and consider the following two possible cases.
\emph{Case~1.} There exists exactly one closed simple path based at~$v$,
say $c := e_{1} \dots e_{m}$. It follows that~$c$ is actually a cycle, and
by condition~(2), $c$ has an exit~$f$, \textit{i.e.}, there exists
$j \in \{1, \dots, m\}$ such that $e_{j}\neq f$ and $s(f)=s(e_{j})$. Then,
there are some distinct positive integers $k$, $k_{i}$, $i=1, \dots, n$,
such that $p =c^{k}$ and $p_{i} = c^{k_{i}}$, $i=1, \dots, n$, and let
\begin{gather*}
x := (c^{\ast})^{k}a = (c^{\ast})^{h_{1}} +\dots+ (c^{\ast})^{h_{r}}
+ c^{h_{r+1}} +\dots+ c^{h_{n}} \\
y := (c^{\ast})^{k}(a+b) = (c^{\ast})^{h_{1}} +\dots+ (c^{\ast})^{h_{r}}
+ c^{h_{r+1}} +\dots+ c^{h_{n}} + v.
\end{gather*}%
Obviously, $(x,y)\in \rho$, and therefore, $(0, r(f)) = (z^{\ast} x z,
z^{\ast} y z ) \in \rho$ for $z := e_{1} \dots e_{j-1} f$.
\emph{Case~2.} There exist at least two distinct closed simple paths
based at~$v$, say~$c$ and~$d$, and we have $c^{\ast} d = 0 = d^{\ast} c$
by Remark~2.7. Note that $(p^{\ast} a, p^{\ast} (a+b) ) \in \rho$
and let
\begin{gather*}
x := p^{\ast} a = q_{1}^{\ast} +\dots+ q_{s}^{\ast} + q_{s+1} +\dots+ q_{n} \\
y := p^{\ast}(a+b) = q_{1}^{\ast} +\dots+ q_{s}^{\ast} + q_{s+1} +\dots+ q_{n}
+ v ,
\end{gather*}%
where $q_{1}, \dots, q_{n}$ are closed paths in~$\Gamma$ based at~$v$.
Then, choosing $k\in \mathbb{N}$ such that $|c^{k}| > \max \{|q_{1}| , \dots,
|q_{n}|\}$, we get $x' := (c^{\ast})^{k} x c^{k} = (c^{\ast})^{k}
q_{1}^{\ast}c^{k} +\dots+ (c^{\ast})^{k} q_{s}^{\ast}c^{k} +
(c^{\ast})^{k}q_{s+1}c^{k} +\dots+ (c^{\ast})^{k}q_{n}c^{k}$ and
$y' := (c^{\ast})^{k} y c^{k} = (c^{\ast})^{k}q_{1}^{\ast}c^{k}
+\dots+ (c^{\ast})^{k} q_{s}^{\ast}c^{k} + (c^{\ast})^{k}q_{s+1}c^{k}
+\dots+ (c^{\ast})^{k}q_{n}c^{k} + v$, and $(x',y') \in \rho$.
If $(c^{\ast})^{k} q_{i}^{\ast}c^{k} = 0 = (c^{\ast})^{k} q_{j}c^{k}$
for all $i=1, \dots, s$ and $j=s+1, \dots, n$, then $(0,v) = (x',y')
\in \rho$. Note that if $(c^{\ast})^{k} q_{j}c^{k} \neq 0$, then
$(c^{\ast})^{k} q_{j}\neq 0$, and as $|c^{k}| > |q_{j}|$, $c^{k}
= q_{j} q_{j}'$ for some closed path $q_{j}'$
based at~$v$. Whence, $q_{j} = c^{l}$ for some positive integer
$l\leq k$. Similarly, in the case $(c^{\ast})^{k} q_{i}^{\ast}c^{k}
\neq 0$, we get that $q_{i}^{\ast} = (c^{\ast})^{t}$ for some positive
integer $t\leq k$. Since $c^{\ast} d = 0 = d^{\ast} c$, for every $i,j$, one
gets $d^{\ast} (c^{\ast})^{k} q_{i}^{\ast} c^{k} d = 0 = d^{\ast} (c^{\ast})^{k}
q_{j} c^{k} d$, and hence, $(0,v) = (d^{\ast} x' d,
d^{\ast} y' d) \in \rho$.
Finally, let us consider the ideal of $L_{\mathbf{B}}(\Gamma)$ defined as
follows:
\[ I := \{ x\in L_{\mathbf{B}}(\Gamma) \mid (0,x) \in \rho \} . \]
From the observations above, $I$ contains a nonzero polynomial in
only real edges. By our assumption and Theorem 3.4, $L_{\mathbf{B}}
(\Gamma)$ is an ideal-simple hemiring, and hence, $I = L_{\mathbf{B}}
(\Gamma)$. It immediately follows that $\rho = L_{\mathbf{B}}(\Gamma)^{2}$,
which ends the proof.
\end{proof}
Combining Proposition~4.1, Theorem~4.4 and \cite[Theorem~3.11]%
{ap:tlpaoag05}, we obtain a complete characterization of the
congruence-simple Leavitt path algebras $L_{S}(\Gamma)$ of row-finite
graphs~$\Gamma$ over commutative semirings.
\begin{thm}
A Leavitt path algebra $L_{S}(\Gamma)$ of a row-finite graph $\Gamma
= (V,E,s,r)$ with coefficients in a commutative semiring~$S$ is
congruence-simple if and only if the following three conditions are
satisfied:
(1) $S$ is either a field, or the Boolean semifield~$\mathbf{B}$;
(2) The only hereditary and saturated subsets of~$V$ are~$\varnothing$
and~$V$;
(3) Every cycle in~$\Gamma $ has an exit.
\end{thm}
In light of \cite[Theorem~3.1]{ap:tlpaoag08}, \cite[Theorem~6.18]%
{tomf:utaisflpa} and \cite[Theorem~3.11]{g:lpaadl}, and to stimulate
the interest of potential readers in this, in our
view, quite interesting and promising research direction, we pose the
following\medskip
\noindent \textbf{Conjecture}. Theorem~4.5 is true for the Leavitt path
algebras $L_{S}(\Gamma)$ over an arbitrary graph $\Gamma$. \medskip
As was done in the previous section, we end this section and the paper
by re-establishing the congruence-simpleness of the Leavitt path
algebras of Examples~2.3.
\begin{exas}[{\textit{cf.}~\cite[Corollary~3.13]{ap:tlpaoag05}}]
We can re-establish the congruence-simpleness of the algebras given
in Examples~3.5 above.
(i) By \cite[Corollary~4.8]{knt:mosssparp}, $M_{n}(\mathbf{B})$ is
congruence-simple. However, this fact can be also justified by
Theorem~4.5, since it is easy to check that the graph~$\Gamma$ of
Examples~2.3\,(i) satisfies~(2) and~(3) of Theorem~4.5.
(ii) By Examples~2.3\,(ii), the Laurent polynomial algebra
$\mathbf{B}[x,x^{-1}]\cong L_{\mathbf{B}}(\Gamma)$, where the graph $\Gamma$
contains a cycle without an exit, and therefore, by Theorem~4.5,
$\mathbf{B}[x,x^{-1}]$ is not congruence-simple.
(iii) By Examples~2.3\,(iii), the Leavitt algebras $L_{\mathbf{B}}(1,n)$
for $n \ge 2$ are isomorphic to the Leavitt path algebras
$L_{\mathbf{B}}(\Gamma)$ such that for the graphs~$\Gamma$
conditions~(2) and~(3) of Theorem~4.5 are obviously satisfied, and
therefore, the algebras $L_{\mathbf{B}}(1,n)$ are congruence-simple.
\end{exas}
\section{Introduction}
Reconstructing the state of a system, through a discrete-time sequence of noisy observations, is a problem which dates far back in time. Although many solutions have been proposed, one of the most prominent approaches dates to the sixties, when \cite{KFSeminal} presented the optimal solution for linear dynamical systems with additive white Gaussian noise (AWGN). Such a filter alternates a prediction phase, also known as time update, where a mathematical model is employed to predict the evolution of the system state, with a measurement update, where the information coming from the observations is incorporated to improve the ``a posteriori'' state knowledge. Since then, many advances have been made in the filtering problem, mostly related to cases where linearity and noise Gaussianity are not granted anymore. For instance, when nonlinear non-Gaussian models are considered, the posterior state distribution can become analytically intractable, hence it is necessary to introduce approximations in order to describe the process uncertainty. In the Assumed Density Filtering (ADF) setting, the predicted or posterior state distributions are approximated by a Gaussian density by means, for instance, of several techniques like model linearization EKF~(\cite{gelb1974book}), sigma-point methods UKF~(\cite{julier2004unscented}), CKF~(\cite{arasaratnam2009cubature}), or particle filtering approaches (\cite{Gordon1993NovelAT}); a detailed survey on distribution moments approximation in the filtering context can be found in \cite{Roth2016NonlinearKF}.
In the past years, the usage of sensors like radars, cameras or lidars has raised the issue of dealing with the phenomenon of clutter, that is the presence of unwanted disturbances when a sensor scan is performed. In terms of tracking, this amounts to receiving several observations for a single time step; moreover, it may happen that, even if the target is present in the sensor Field of View (FOV), it goes undetected, hence none of the observations may belong to the object of interest. In order to address such additional challenges, many approaches have been proposed in the literature (\cite{BlackmanPopoliBook,BarShalomBookEstimationandTracking,BarShalom2011Handbook}) where, to achieve the optimal state estimation of one or many unknown targets, it is necessary to consider all the possible associations between the observed data and previous state hypotheses. Such a process of Data Association (DA) may spawn a set of new state hypotheses in every filter update; when this happens, the posterior state distribution is represented by a sum of Gaussian densities (a Gaussian mixture), whose number of components can grow unboundedly over time, making the corresponding uncertainty representation computationally intractable. When a combinatorial explosion of this kind takes place, it is necessary to introduce approximations to the so obtained Gaussian mixture/sum. The problem of reducing the number of hypotheses in a mixture model falls under the name of Mixture Reduction Problem (MRP), for which many solutions have been proposed in the literature (see \cite{SalmondReduction,Blackman2004,Williams06,Runnalls,PGMR,Crouse,CGMR}).
Although the mixture reduction can appear as a marginal block in a target tracking scheme, it represents in fact a fundamental step in the filter recursion, since accurately approximating a given state distribution can have a significant impact on the filter robustness and accuracy.
Nonetheless, the MRP proves to be a very difficult task for many reasons, among which are the lack of closed forms for some computations involved, the computational burden introduced by such routines, and the choice of a suitable number of components to balance model complexity and accuracy (Occam's razor).
In this work, a recently developed mixture reduction algorithm (\cite{adaCTDReduction}) is considered in order to tackle the mentioned problems and to assess its performance when applied to the tracking of a single object in the presence of clutter. The main goal of this work is first to show how important the hypotheses reduction step can be for the filter accuracy and robustness, and then to show how an adaptive model selection can represent a good compromise when solving the MRP in the case of target tracking in the presence of clutter.
The work is organized as follows: in Section \ref{sec:ProbForm}, the problem of target tracking in clutter is formulated together with mixture reduction. In Section \ref{sec:PropSol}, a mixture reduction pipeline is proposed in order to solve efficiently and intuitively the MRP. In Section \ref{sec:NumTests}, several numerical tests are reported to both assess the performances of the proposed solution and to raise several points about the importance of a sound mixture reduction routine. Conclusions follow.
\subsection{Notation}
In this work, all the quantities will be defined when required, but some preliminary notation is nevertheless needed. $\mathbb{R}^n_+$ denotes the set of non-negative vectors in $\mathbb{R}^n$, $I_n$ is the identity matrix in $\mathbb{R}^{n\times n}$ while $ \mathds{1} _n$ denotes a vector of ones in $\mathbb{R}^n$.
$S_{++}^d\subset\mathbb{R}^{d\times d}$ denotes the open cone of symmetric positive definite (SPD) $d\times d$ matrices.
\section{Problem Formulation}\label{sec:ProbForm}
\subsection{Target Tracking in Clutter}\label{subsec:TTIC}
The problem of tracking a single target in the presence of missed detections and false alarms (FAs) (called clutter in the tracking literature) inherently leads to mixtures appearing in the posterior densities. We here consider the case when the target state, denoted $x_k$, is modeled to evolve according to the following linear Gauss-Markov model.
\begin{align}
x_k=Fx_{k-1}+w_{k-1},
\end{align}
where $x_0\sim\nu(x_0\vert\boldsymbol{0},\Sigma_0)$ and $w_k\sim\nu(w_k\vert\boldsymbol{0},Q)$ is the white process noise, with the notation $\nu(\cdot\vert \mu, \Sigma)$ denoting the Gaussian density with mean $\mu$ and covariance $\Sigma$. The target originates, with probability $P_D$, a measurement $z_k$ in the sensor report, which is modeled as
\begin{align}
z_k=Hx_{k}+v_{k},
\end{align}
where $v_k\sim\nu(v_k\vert\boldsymbol{0},R)$ is the white measurement noise. The number of FAs is distributed according to a Poisson distribution with the mean value $\lambda_c$ and the spatial distribution of the FAs is uniform over the surveillance region.
In a Bayesian framework the aim of the tracker at each time step $k$ is to estimate the posterior distribution $p(x_k|Z_{0:k})$ of the state $x_k$ given all measurements $Z_{0:k}=\{Z_0,Z_1,\cdots,Z_k\}$, where $Z_i=\{z_i^j\}_{j=1}^{m_i}$ denotes the set of measurements collected from the sensor at time $i$ and $m_i\ge 0$ is the number of measurements collected from the sensor at time $i$. At the time step $k$ the following $m_k+1$ association hypotheses can be formed about the origin of the measurements in $Z_k$.
\begin{align}
\mathcal{H}_k^0:&\,\, \text{All measurements in $Z_k$ are FA.}\\
\mathcal{H}_k^j:&\,\, \text{The $j$th measurement $z_k^j$ belongs to the target and}\nonumber\\
&\text{the other measurements in $Z_k$ are FA.}
\end{align}
for $j=1,\ldots,m_k$. Given a hypothesis sequence/history $\mathcal{H}_{0:k}=\{\mathcal{H}_0,\ldots,\mathcal{H}_k\}$, the hypothesis-conditioned state posterior distribution $p(x_k|Z_{0:k},\mathcal{H}_{0:k})$ can be calculated using a Kalman filter as a Gaussian distribution, using the measurements associated to the target in the hypothesis sequence $\mathcal{H}_{0:k}$. The overall posterior state distribution can be calculated as a mixture as follows.
\begin{align}\label{eq:posteriorMHT}
p(x_k|Z_{0:k})=\sum_{\mathcal{H}_{0:k}} p(x_k|Z_{0:k},\mathcal{H}_{0:k})P(\mathcal{H}_{0:k}|Z_{0:k})
\end{align}
where $P(\mathcal{H}_{0:k}|Z_{0:k})$ denotes the posterior probability of the hypothesis sequence $\mathcal{H}_{0:k}$. Unfortunately the number of possible hypothesis sequences, which is $\prod_{i=0}^k (m_i+1)$, and hence the number of components in the mixture above, increases exponentially, and one must therefore resort to mixture reduction algorithms to keep the storage and computational requirements at a reasonable level.
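To fix ideas, if the sensor returned, say, $m_i=2$ measurements at every scan, after only $k=10$ time steps the posterior \eqref{eq:posteriorMHT} would already count $\prod_{i=0}^{10}(2+1)=3^{11}=177147$ components.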
The classical target tracking methods differ in how they handle the exponentially growing number of association hypotheses. The single hypothesis tracking approaches reduce the number of association hypotheses to unity at each time step by either pruning all hypotheses except the ``best'' one like the nearest and the strongest neighbor filters~(\cite{LiBarShalom1996,Li1998}), or by merging all association hypotheses into a single composite hypothesis like the probabilistic data association filter (PDAF)~(\cite{BarShalomTse1975,BarShalomDH2009}). Single hypothesis tracking approaches are effective in high SNR environments under low to moderate clutter. For medium SNR or dense clutter environments, Multiple Hypotheses Trackers (MHT)~(\cite{SingerSH1974,Reid1979,Blackman2004}) which can keep and propagate multiple association hypotheses are preferred. In an MHT, an effective mixture reduction scheme is essential to keep the number of association hypotheses under control.
\subsection{Mixtures of Gaussians}
A Gaussian Mixture (GM) is a mixture model defined as:
\begin{align} \label{eq:MoG}
p(x\vert \Theta) = \boldsymbol{w}^T \boldsymbol{\nu}(x\vert \boldsymbol{\theta}) = \sum_{i=1}^n w_i \nu(x\vert \mu_i,\Sigma_i) = \sum_{i=1}^n w_i \nu_i,
\end{align}
where $n$ is the size of the mixture, $\boldsymbol{\nu}$ is a vector of Gaussian pdfs,
i.e.\ $\boldsymbol{\nu}=[\nu_1,...,\nu_n]^T$, $\nu_i=\nu(\cdot\vert \mu_i, \Sigma_i)$ is the $d$-dimensional Gaussian density of parameters $\mu_i\in \mathbb{R}^d$ (mean value) and $\Sigma_i\in S_{++}^d$ (covariance matrix). $\boldsymbol{\theta}=(\boldsymbol{\mu},\boldsymbol{\Sigma})\in(\mathbb{R}^d\times S_{++}^d)^n$ is the collection of all the GM means and covariances, with $\theta_i=(\mu_i,\Sigma_i)\in(\mathbb{R}^d\times S_{++}^d)$. $\boldsymbol{w} = [w_1,...,w_n]^T$ is a vector of \textit{weights} belonging to the standard $(n-1)$-dimensional simplex
$\Delta^{n-1}=\{\boldsymbol{w}\in [0,1]^n:\ \boldsymbol{w}^T \mathds{1} _n=1\} \subset \mathbb{R}_+^n$,
so that the mixture $p(x\vert\Theta)$ defined in \eqref{eq:MoG} is
a \textit{convex sum} of the $n$ components $\nu_i$. Finally, $\Theta =(\boldsymbol{w},\boldsymbol{\theta}) = (\boldsymbol{w}, \boldsymbol{\mu}, \boldsymbol{\Sigma})\in
\mathcal{B}_{n}$ is the collection of all mixture parameters, with $\mathcal{B}_{n}=\Delta^{n-1}\times (\mathbb{R}^d\times S_{++}^d)^n$.
\subsection{The Mixture Reduction Problem}\label{subsec:MRP}
\subsubsection{Kullback-Leibler Divergence.} For the sake of discussion, it is necessary to define a way to quantify how dissimilar two distributions are. In the literature, there exist several dissimilarity measures ($D$-measures, for short) between distributions (\cite{FPI}), but for the goal of this work we will restrict ourselves to the \textit{Forward Kullback-Leibler Divergence} (FKLD) (\cite{KLD}, $D_\textit{KL}$ for short), also known as \textit{differential relative entropy}; the term \textit{forward} comes from the fact that the KLD is not a symmetric measure, and the order in which two distributions are considered can impact significantly on the outcome.
Given two $d$-dimensional Gaussian distributions $\nu_i(x)$ and $\nu_j(x)$, the $D_\textit{KL}$ takes the following form:
\begin{equation}\label{eq:KLD}
\begin{aligned}
&D_\textit{KL}(\nu_i\Vert \nu_j) = \int \nu_i(x) \log \frac{\nu_i(x)}{\nu_j(x)}\mathrm{d} x\\
&=\frac{1}{2}(\mathrm{tr}(\Sigma_j^{-1} \Sigma_i) + (\mu_i-\mu_j)^T\Sigma_j^{-1}(\mu_i-\mu_j) - d + \log\frac{\vert \Sigma_j\vert}{\vert \Sigma_i\vert})
\end{aligned}
\end{equation}
and is such that:
\begin{align}
& D_\textit{KL}(\nu_i\Vert \nu_j)\ge 0,\label{eq:Dnonneg}\\
& D_\textit{KL}(\nu_i\Vert \nu_j) = 0\ \Longleftrightarrow\ \nu_i=\nu_j.\label{eq:Dident}
\end{align}
The $D_\textit{KL}$ is a very sound $D$-measure used in many fields and applications, since it possesses a long list of useful properties (it is strictly linked to concepts like \textit{Maximum Likelihood Estimation} (MLE) and information gain; see \cite{pml1Book}).
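As an aside, the closed form \eqref{eq:KLD} is immediate to implement; the following minimal NumPy sketch (the function name is ours, and it will be reused in the sketches below) evaluates $D_\textit{KL}(\nu_i\Vert \nu_j)$ for two Gaussian densities.
\begin{verbatim}
import numpy as np

def kld_gauss(mu_i, S_i, mu_j, S_j):
    # Closed-form KL divergence between two Gaussian densities.
    d = mu_i.size
    Sj_inv = np.linalg.inv(S_j)
    dmu = mu_i - mu_j
    return 0.5 * (np.trace(Sj_inv @ S_i) + dmu @ Sj_inv @ dmu - d
                  + np.log(np.linalg.det(S_j) / np.linalg.det(S_i)))
\end{verbatim}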
\subsubsection{Approximating a Mixture Model.}
Given a mixture $p^a=(\boldsymbol{w}^a)^T\boldsymbol{\nu}^a$, of size $n^a$, and a dissimilarity measure $D(\cdot\Vert \cdot)$, one wants to find a reduced mixture $p^b$, of size $n^b<n^a$, which is as close as possible to $p^a$. Formally, one wants to solve the problem (\cite{CGMR}):
\begin{align} \label{eq:MRformulaz}
{\Theta^b}^* = \argmin_{\Theta^b\in\mathcal{B}_{n^b}} D\big(p(x\vert \Theta^a)\Vert p(x\vert \Theta^b)\big).
\end{align}
This is in general a complex, non-convex constrained nonlinear optimization problem, which does not admit a closed form solution. Moreover, when dealing with mixtures, only very few $D$-measures admit a closed form; for instance, in the $D_\textit{KL}$ case, it is not possible to evaluate the divergence between mixtures. For this and other reasons, the MRP is usually approached by means of several heuristics, often driven by ease of computation.
A general approach to reduce a mixture model is to perform a \textit{greedy reduction} of the components, where subsets of hypotheses are \textit{pruned} or \textit{merged together} according to some criteria until a desired $n^b$ is reached; such a reduced order model can possibly serve as initialization for a refinement phase, where the reduced mixture parameters are optimized by exploiting the information contained in the original model.
\section{Proposed Approach}\label{sec:PropSol}
As discussed, reducing the complexity of a mixture model can be a very difficult task, given that, often, closed forms to compute key quantities are not available. Nonetheless, in \cite{CGMR}, an MR framework has been presented that only requires two key ingredients: the closed form computation of the dissimilarity of pairs of mixture components, and the easy evaluation of the $D$-barycenter of a set of weighted Gaussian densities. The $D$-barycenter $\hat{\nu}(x\vert \hat{\mu}, \widehat{\Sigma})$ of a set of weighted Gaussian densities $\{w_i\nu_i\}_{i=1}^n$ is obtained by solving the following problem (see \cite{FPI}):
\begin{align}
(\hat{\mu},\widehat{\Sigma}) = \argmin_{(\mu,\Sigma)\in \mathbb{R}^d\times S_{++}^d}\sum_{i=1}^n w_i D(\nu_i\Vert \nu(\cdot\vert \mu, \Sigma)).
\end{align}
\vspace{-0.3cm}
\subsection{Greedy Reduction of Mixture Models}
If the $D_\textit{KL}$ is considered, the pairwise dissimilarities between Gaussian hypotheses can be computed as in \eqref{eq:KLD}; regarding the $D_\textit{KL}$-barycenter, it is also known in the literature as \textit{moment matching} or \textit{moment-preserving merge}\footnote{This name comes from the fact that such a way of merging mixture components preserves the first two mixture moments.}; given a set of weighted Gaussian densities $(\boldsymbol{w},\boldsymbol{\nu})=\{w_i \nu_i\}_{i=1}^n$, the corresponding moment matching approximation (or $D_\textit{KL}$-barycenter) denoted by $\hat{\nu} = \nu(\cdot\vert \hat{\mu}, \widehat{\Sigma})$, can be computed as:
\begin{equation}\label{eq:KLDbaryGauss}
\begin{aligned}
& \hat{\mu} = \frac{1}{\boldsymbol{w}^T \mathds{1} _n}\sum_{i=1}^n w_i \mu_i,\\
& \widehat{\Sigma} = \frac{1}{\boldsymbol{w}^T \mathds{1} _n}\sum_{i=1}^n w_i(\Sigma_i + (\mu_i-\hat{\mu})(\mu_i-\hat{\mu})^T).
\end{aligned}
\end{equation}
Given two Gaussian hypotheses $\nu_i$ and $\nu_j$, let us define the cost of merging those two components together as follows (see \cite{CGMR}):
\smallskip
\begin{equation}\label{eq:BBound}
B(w_i\nu_i,w_j\nu_j) = w_iD_\textit{KL}(\nu_i\Vert \hat{\nu}_{i,j}) + w_jD_\textit{KL}(\nu_j\Vert \hat{\nu}_{i,j})
\end{equation}
where $\hat{\nu}_{i,j}$ is the $D_\textit{KL}$-barycenter computed as in \eqref{eq:KLDbaryGauss} of the components $i$ and $j$ of a mixture. Note that such costs are symmetric, even if the $D_\textit{KL}$ is not; this represents a good advantage in terms of computational operations, since, given a mixture model of size $n$, one has to evaluate only $\frac{n(n-1)}{2}$ merging costs. Given these ingredients, it is possible to formulate a greedy reduction algorithm which, at each step, selects for merging the two components $i$ and $j$ associated to the least value of \eqref{eq:BBound}, and replaces $\nu_i$ and $\nu_j$ with their barycenter $\hat{\nu}_{i,j}$ in the current mixture model; as discussed in \cite{CGMR}, such a choice corresponds to minimizing an upper bound on the true, yet intractable, $D_\textit{KL}$ between the mixture before and the one after the merging: such an upper bound is the \textit{Composite Transportation Dissimilarity}, $C_D$ for short. The procedure goes on until the desired number of components $n^b$ is reached. This algorithm is also known in the literature as Runnalls' algorithm (\cite{Runnalls}), a special case of the framework presented in \cite{CGMR} when the $D_\textit{KL}$ is considered.
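To make the above concrete, the following sketch (names are ours; it relies on the \texttt{kld\_gauss} helper above) computes the moment-preserving merge \eqref{eq:KLDbaryGauss} and the pairwise cost \eqref{eq:BBound}.
\begin{verbatim}
def moment_match(w, mus, Sigmas):
    # D_KL-barycenter of weighted Gaussians; w need not be normalized.
    # w: (n,), mus: (n,d), Sigmas: (n,d,d) NumPy arrays.
    W = w.sum()
    mu_hat = w @ mus / W
    diff = mus - mu_hat
    S_hat = (np.einsum('i,ijk->jk', w, Sigmas)
             + np.einsum('i,ij,ik->jk', w, diff, diff)) / W
    return mu_hat, S_hat

def merge_cost(wi, mi, Si, wj, mj, Sj):
    # Upper-bound cost B of merging components i and j.
    mu_b, S_b = moment_match(np.array([wi, wj]),
                             np.stack([mi, mj]), np.stack([Si, Sj]))
    return wi * kld_gauss(mi, Si, mu_b, S_b) + \
           wj * kld_gauss(mj, Sj, mu_b, S_b)
\end{verbatim}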
\subsection{Finding the Appropriate Number of Mixture Components}
Although the discussed algorithm proves to be very effective, the issue of deciding what a suitable number of components for the approximated model is remains open; such an operation is a \textit{model selection} problem.
In the perspective of the previously mentioned framework, the costs $B(\cdot,\cdot)$ assume an interesting meaning since, as reported in \cite{adaCTDReduction}, they possess useful properties which allow one to embed a model selection routine directly into the greedy reduction phase, thus obtaining an adaptive greedy reduction algorithm. In order to do so, it is necessary to define a few more quantities.
Given a GM $p^a = \sum_{i=1}^{n^a} w_i^a \nu_i^a$, let us denote the corresponding $D_\textit{KL}$-barycenter by $\hat{\nu}^a$, let us denote by $m$ the \textit{current} mixture model order in a greedy reduction algorithm, and by $p^{(m)}$ the corresponding mixture; $m$ will be a decreasing integer starting from $n^a$ and ending at $n^b$. In \cite{adaCTDReduction} it is reported that it is possible to evaluate the cost of merging all the mixture components into a single hypothesis $\hat{\nu}^a$ as:
\begin{equation} \label{eq:mDofp}
c(\hat{\nu}^a\vert p^a) = \frac{1}{(\boldsymbol{w}^a)^T \mathds{1} _n}\sum_{i=1}^{n^a} w_i^a D_\textit{KL}(\nu_i^a\Vert \hat{\nu}^a).
\end{equation}
In addition, the following quantity can be defined:
\smallskip
\begin{equation}\label{eq:CumRTLsBinequality}
\tilde{\mathcal{L}}^{(m)} = \frac{\sum_{l=n^a}^{m}\breve{B}^{(l)}}{c({\hat{\nu}^a}\vert p^a)} \in [0,1]
\end{equation}
that is the sum of all the bounds $\breve{B}^{(l)}$, which are the lowest costs \eqref{eq:BBound} computed for the reduced mixtures $p^{(l)}$, so that $\tilde{\mathcal{L}}^{(m)}$ is the sum of all the costs associated to the optimal merging actions from $p^a$ to $p^{(m)}$; $\tilde{\mathcal{L}}^{(m)}$ can be proven to be an upper bound on the true, yet intractable, $D_\textit{KL}$ (scaled by the factor $c(\hat{\nu}^a\vert p^a)$) between the original mixture $p^a$ and its greedily reduced approximation of order $m\leq n^a$. From another perspective, \eqref{eq:CumRTLsBinequality} represents the relative accuracy loss w.r.t. the original mixture model. By providing a threshold $\alpha_{\tilde{\mathcal{L}}}\in [0,1]$\footnote{For the extreme case of $\alpha_{\tilde{\mathcal{L}}}=1$ the reported algorithm is equivalent to performing single-hypothesis filtering, as in the PDAF.} it is possible to halt the greedy reduction during the descent when the prescribed loss threshold is exceeded.
The adaptive greedy reduction algorithm so obtained is reported in Algorithm \ref{alg:adaCTDgreedy}\footnote{The Runnalls algorithm has the same structure, but the computation of $\tilde{\mathcal{L}}$ and $c(\hat{\nu}^a\vert p^a)$ is not required, since the only halting condition is to reach the desired number $n^b$.}.
\IncMargin{1.5em}
\begin{algorithm2e}[hbtp]
\label{alg:adaCTDgreedy}
\SetAlgoLined
\KwData{Original GM $p^a$, of size $n^a$, \\ relative loss threshold $\alpha_{\tilde{\mathcal{L}}}$.}
\KwResult{Reduced GM $p^b$ of size $n^b\leq n^a$.}
$m:=n^a$, $p^{(m)}:=p^a$, $\tilde{\mathcal{L}}^{(m)}:=0$\;
Compute $c({\hat{\nu}^a}\vert p^a)$\;
\While{$\tilde{\mathcal{L}}^{(m)} \leq\alpha_{\tilde{\mathcal{L}}}$}
{ find $(i,j)\in[1\!:m]$:\ $B(w_i^{(m)}\nu_i^{(m)},w_j^{(m)}\nu_j^{(m)})\le
B(w_r^{(m)}\nu_r^{(m)},w_s^{(m)}\nu_s^{(m)})$, $\forall r>s\in[1:m]$ \label{op:adafindij}\;
$\breve{B}^{(m)} := B(w_i^{(m)}\nu_i^{(m)},w_j^{(m)}\nu_j^{(m)})$\;
$\tilde{\mathcal{L}}^{(m-1)}:=\tilde{\mathcal{L}}^{(m)} + \frac{\breve{B}^{(m)}}{c({\hat{\nu}^a}\vert p^a)}$\;\smallskip
\If{$\tilde{\mathcal{L}}^{(m-1)}\leq\alpha_{\tilde{\mathcal{L}}}$}
{\smallskip $p^{(m-1)}:=p^{(m)}-w_i^{(m)} \nu_i^{(m)} -w_j^{(m)} \nu_j^{(m)}+(w_i^{(m)}+w_j^{(m)})\hat{\nu}_{i,j}^{(m)}$\;}
$m:=m-1$\;
}
$p^b:=p^{(m)}$\;
\caption{Adaptive $C_{D_\textit{KL}}$-based Greedy reduction Algorithm}
\end{algorithm2e}
\DecMargin{1.5em}
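For illustration, a direct (unoptimized: the pairwise costs could be cached across iterations) transcription of Algorithm \ref{alg:adaCTDgreedy}, built on the sketches above and assuming a non-degenerate mixture (so that $c(\hat{\nu}^a\vert p^a)>0$), could read as follows.
\begin{verbatim}
def adaptive_reduce(w, mus, Sigmas, alpha):
    # Greedy D_KL-based reduction with adaptive model selection.
    w, mus, Sigmas = list(w), list(mus), list(Sigmas)
    mu_a, S_a = moment_match(np.array(w), np.stack(mus), np.stack(Sigmas))
    # c(nu_a | p_a): cost of collapsing the mixture to its barycenter.
    c_a = sum(wi * kld_gauss(mi, Si, mu_a, S_a)
              for wi, mi, Si in zip(w, mus, Sigmas)) / sum(w)
    loss = 0.0
    while len(w) > 1:
        pairs = [(i, j) for i in range(len(w))
                        for j in range(i + 1, len(w))]
        costs = [merge_cost(w[i], mus[i], Sigmas[i],
                            w[j], mus[j], Sigmas[j]) for i, j in pairs]
        k = int(np.argmin(costs))
        if loss + costs[k] / c_a > alpha:  # relative loss would exceed
            break
        loss += costs[k] / c_a
        i, j = pairs[k]
        mu_b, S_b = moment_match(np.array([w[i], w[j]]),
                                 np.stack([mus[i], mus[j]]),
                                 np.stack([Sigmas[i], Sigmas[j]]))
        w[i], mus[i], Sigmas[i] = w[i] + w[j], mu_b, S_b
        del w[j], mus[j], Sigmas[j]
    return np.array(w), np.stack(mus), np.stack(Sigmas)
\end{verbatim}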
Note that the algorithm here reported only performs merging of pairwise components and it does not take into account actions like pruning. This is due to the fact that, in the framework presented in \cite{CGMR}, merging is the only optimal action to consider when reducing greedily a mixture model. Nonetheless, compared to pruning, merging can be computationally burdensome.
\subsection{Capping Hypotheses}
In several processing systems, computational resources may be limited, hence the number of hypotheses to be maintained in an MHT has to be capped to a maximum number; in this regard, \textit{capping} is an operation often considered when a mixture has to be approximated. Such a procedure can simply be performed by sorting the hypotheses w.r.t. the corresponding weights, and by preserving only the $n^b$ most significant ones. In Algorithm \ref{alg:adaCTDgreedy}, there is no actual guarantee that the resulting mixture will possess a number of hypotheses below the maximum allowed number $n^b$; to address such an issue, one could either perform capping as described, or modify the halting condition at line $6$ of the reported scheme so that the reduction stops only when both the accuracy threshold has been violated and the maximum allowed number of components has been reached. In other words, even if one is losing more accuracy than the desired one, the merging in the $D_\textit{KL}$ sense continues until $n^b$ is reached. When this happens, the user should reconsider the filter parameters or allocated resources, since it may happen that, for the process of interest, the chosen $n^b$ may be insufficient. The adaptive reduction here proposed can be exploited to figure out a suitable number of components for the problem of interest.
\subsection{Pruning Hypotheses}
Before starting a merging procedure of mixture components in a target tracking filter, one can first consider pruning low weight hypotheses in order to save computational power. Although this is a very efficient way to reduce the number of components, like capping, it is in fact a \textit{destructive} practice, in the sense that hypotheses are just discarded and the corresponding information is lost, as opposed to what happens when merging hypotheses in the $D_\textit{KL}$ sense.
A common way to perform pruning, which we denote here as \textit{Standard Pruning} (SP), is to consider a threshold $\gamma$ on the component weights and to discard all the hypotheses for which $w_i< \gamma$, $i=1,...,n$.
Alternatively, we propose here a slightly more refined pruning method, namely \textit{Normalized-Weight Pruning} (NWP), where the thresholding is operated on the weights $w_i$ scaled by the square root of the determinant of the corresponding covariance matrix, i.e., the $i$-th component is pruned if
\begin{equation}\label{eq:NWP}
\tilde{w}_i < \tilde{\gamma}, \qquad \text{where} \quad
\tilde{w}_i =\frac{w_i}{\sqrt{\vert \Sigma_i\vert}},
\end{equation}
and $\tilde{\gamma}$ is the chosen threshold for the NWP.
Such a modification is suggested by the fact that the $D_\textit{KL}$ is an \textit{inclusive}\footnote{With the term inclusive we refer to $D$-measures which, when employed in the barycenter computation for a set of weighted components, provide a covariance that is larger than those of the merged components (covariance spread), rather than neglecting low density regions. For other details on inclusivity and exclusivity of $D$-measures see \cite{minka2005divergence, Consistency}.} $D$-measure, so that the covariance significantly spreads if two remarkably dissimilar hypotheses are merged together.
Indeed, the merging criterion \eqref{eq:BBound} may happen to select for merging two distant hypotheses with very low weights, generating unlikely hypotheses with rather large covariance; by employing a suitable threshold in the NWP, these unlikely hypotheses are removed.
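For concreteness, a minimal Python sketch of the two pruning operations follows (a sketch of ours; the renormalization of the surviving weights is an assumption, and \texttt{np.linalg.det} acts batch-wise on the stacked covariances):
\begin{verbatim}
import numpy as np

def standard_pruning(w, mu, Sigma, gamma):
    # SP: discard components with w_i < gamma.
    keep = w >= gamma
    w_k = w[keep]
    return w_k / w_k.sum(), mu[keep], Sigma[keep]

def nw_pruning(w, mu, Sigma, gamma_tilde):
    # NWP: discard components whose weight, scaled by
    # sqrt(det(Sigma_i)) as in the NWP criterion of the
    # text, falls below the threshold gamma_tilde.
    w_tilde = w / np.sqrt(np.linalg.det(Sigma))
    keep = w_tilde >= gamma_tilde
    w_k = w[keep]
    return w_k / w_k.sum(), mu[keep], Sigma[keep]
\end{verbatim}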
\section{Numerical Tests and Discussion}\label{sec:NumTests}
Several numerical tests are reported here to assess the performance of the proposed approach and to lay the basis for the related discussion.
In order to study the effects of pruning of the components of the GM that models the posterior state distribution in the tracking filter, we consider a capping-only reduction scheme that preserves
the $n^b$ most significant GM components, with $n^b=30$.
This scheme is denoted {\it Capping-30}. In general, for the sake of notation, the target number of components in a greedy reduction algorithm is appended to the end of its name (e.g., Runnalls-5 denotes the Runnalls procedure where the target number of components is $n^b=5$).
In addition to capping, we consider the following reduction pipelines:
\begin{enumerate}
\item SP $\rightarrow$ Runnalls-5 $ \rightarrow$ NWP%
\footnote{In the considered scenario, suitable values for the NWP threshold $\tilde{\gamma}$ are in the interval $[10^{-12},10^{-6}]$ (the physical unit of $\tilde{\gamma}$ is $s^2/m^4$).}\\
\item SP $\rightarrow$ Runnalls-30 $\rightarrow$ NWP\\
\item SP $\rightarrow$ Adaptive-30 (Algo \ref{alg:adaCTDgreedy}) $ \rightarrow$ NWP.
\end{enumerate}
The experimental scenario considers constant velocity motion and measurement models as in \cite{SalmondReduction}:
\begin{equation} \label{eq:constvelmod}
\begin{cases}
x_k = F x_{k-1} + w_{k-1}, \\
z_k = H x_k + v_k,
\end{cases}
\end{equation}
with $x_k = [p_x,p_y,v_x,v_y]$, $w_k\sim \nu(x\vert \boldsymbol{0}, Q)$, and:
\begin{equation}
F =\! \begin{bmatrix}
1 & 0 & \Delta t & 0\\
0 & 1 & 0 & \Delta t\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix}, \ G =\! \begin{bmatrix}
\Delta t^2/2 & 0\\
0 & \Delta t^2/2\\
\Delta t & 0\\
0 & \Delta t
\end{bmatrix}, \ Q=\sigma_q^2GG^T,
\end{equation}
and $\Delta t = 1 s$. $z_k = [p_x, p_y]$, $v_k \sim \nu(x\vert \boldsymbol{0}, R)$, and
\begin{equation} \label{eq:outmod}
H = \begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{bmatrix}, \quad R = \sigma_r^2 I_2.
\end{equation}
$\sigma_q$ and $\sigma_r$ have units of $m/s^2$ and $m$, respectively. Regarding the MHT, a probability of detection $P_D=0.9$ has been considered, and the gating probability\footnote{Gating is a practice which consists in considering only the measurements falling in a confidence region of a hypothesis.} has been set to $P_G = 0.999$. For the sake of investigation, a ground-truth trajectory of $K=100$ steps has been generated as follows:
\begin{equation}\label{eq:trajGen}
\begin{aligned}
& x_0 = [0,0,10,-10],\\
& x_k=F x_{k-1} + Gu^1,\ u^1=[\ 0.2 \ \ 0.6\,]^T,& \text{for $k\in[1,50]$},\\
& x_k=F x_{k-1} + Gu^2,\ u^2=[\ \ 0 \ \ \ \ -2\,]^T,& \text{for $k\in[51,75]$},\\
& x_k=F x_{k-1} + Gu^3,\ u^3=[-3 \ \ \ \ 1\,]^T,& \text{for $k\in[76,100]$}.
\end{aligned}
\end{equation}
The measurements $z_k$ have been generated out of such ground-truth using the model \eqref{eq:outmod} with $\sigma_r^2 = 60$.
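For reproducibility, a minimal Python sketch generating the ground-truth trajectory \eqref{eq:trajGen} and the corresponding measurements is reported below (the random seed and the variable names are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
G = np.array([[dt**2 / 2, 0], [0, dt**2 / 2],
              [dt, 0], [0, dt]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
R = 60.0 * np.eye(2)             # sigma_r^2 = 60

rng = np.random.default_rng(0)   # arbitrary seed
x = np.array([0.0, 0.0, 10.0, -10.0])
segments = [([0.2, 0.6], 50),    # u^1, k in [1, 50]
            ([0.0, -2.0], 25),   # u^2, k in [51, 75]
            ([-3.0, 1.0], 25)]   # u^3, k in [76, 100]
truth, meas = [], []
for u, steps in segments:
    for _ in range(steps):
        x = F @ x + G @ np.asarray(u)
        truth.append(x)
        v = rng.multivariate_normal(np.zeros(2), R)
        meas.append(H @ x + v)
\end{verbatim}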
In the filter, the values $\tilde{\sigma}_q^2= 9$ and $\tilde{\sigma}_r^2 = 70$ have been used for the computations of gains to simulate a moderate mismatch of the filter w.r.t.\ the underlying system model.
A measurement clutter uniformly distributed over a rectangular FOV of the sensor has been assumed in the performed tests.
The clutter rate (average number of false alarms per scan) in the FOV is denoted $\lambda_c$.
For the sake of comparison, all the results of the numerical tests here reported refer to the same ground-truth \eqref{eq:trajGen} and are carried out by changing the parameters of the various GM reduction schemes
(target mixture size $n^b$, SP threshold $\gamma$, NWP threshold $\tilde{\gamma}$, normalized loss threshold $\alpha_{\tilde{\mathcal{L}}}$),\footnote{Note that the thresholds $\gamma$ and $\alpha_{\tilde{\mathcal{L}}}$ are unitless, while the metric unit of $\tilde{\gamma}$, in the 2D tracking application, is $s^2/m^4$.}
for different values of the clutter rate $\lambda_c$.
For each filter, the state estimate $\hat{x}_{k\vert k}$ at time step $k$ is computed in the \textit{Minimum Mean Squared Error} (MMSE) sense:
denoting with $\{w_{k\vert k,i},\mu_{k\vert k,i},\Sigma_{k\vert k,i}\}_{i=1}^n$
the parameters of the GM of size $n$ that approximates the posterior distribution \eqref{eq:posteriorMHT}, the MMSE state estimate and its covariance are the mean and covariance of the GM, computed as in \eqref{eq:KLDbaryGauss}:
\begin{align}
\hat{x}_{k\vert k} & = \sum_{i=1}^{n} w_{k\vert k,i}\, \mu_{k\vert k,i}\\
\widehat{\Sigma}_{k\vert k} & = \sum_{i=1}^n w_{k\vert k,i}
\big(\Sigma_{k\vert k,i} + (\mu_{k\vert k,i}-\hat{x}_{k\vert k} )(\ast)^T\big).
\end{align}
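A compact Python sketch of this moment-matching computation is the following (function and index names are ours):
\begin{verbatim}
import numpy as np

def gm_mmse(w, mu, Sigma):
    # Mean and covariance of a Gaussian mixture:
    # w: (n,), mu: (n, d), Sigma: (n, d, d).
    x_hat = np.einsum('i,id->d', w, mu)
    dev = mu - x_hat                 # deviations from the mean
    spread = np.einsum('i,id,ie->de', w, dev, dev)
    P_hat = np.einsum('i,ide->de', w, Sigma) + spread
    return x_hat, P_hat
\end{verbatim}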
In Fig.\ \ref{fig:example} the outcome of a single experiment is reported, where
$\lambda_c=150$ (FOV area $\approx 700\times 1300\ m^2$),
$\gamma=5\cdot 10^{-4}$, $\tilde{\gamma}=10^{-10}\, s^2/m^4$, and $\alpha_{\tilde{\mathcal{L}}}=0.05$.
\begin{figure*}[hbtp]
\centering
\includegraphics[width=\textwidth]{Figures/exampleRework.eps}\vspace{-1.4em}
\caption{(a) Position ground-truth (black, generated as in \eqref{eq:trajGen}) vs.\ position estimates. (b) Number of maintained hypotheses over epochs. (c) Average measurement number per gate over epochs.}\vspace{-0.5em}
\label{fig:example}
\end{figure*}
By looking at Fig. \ref{fig:example} it is possible to see how, in the final phase, the Runnalls-based reductions succeed in helping the filter to recover the state track, while the capping-only reduction leads to filter divergence (track loss).
In Fig.\ \ref{fig:example}.(b) the number of maintained hypotheses over epochs is reported; the capping-only reduction saturates to $n^b=30$ instantly, closely followed by the Runnalls-30 alternative. Nonetheless, the latter undergoes some pruning effects near the end. This is due to the fact that, after recovering the state track, many of the existing hypotheses quickly decreased in importance (or had very broad covariance), hence falling below the pruning thresholds $\gamma$ and $\tilde{\gamma}$. The Runnalls-5, even though it possesses a low number of hypotheses, still succeeds in recovering the track. Finally, the adaptive scheme oscillates considerably below the limit $n^b=30$, rarely reaching its maximum allowed number.
In the last plot (Fig.~\ref{fig:example}.(c)) the average number of measurements falling inside the hypotheses gates is reported.
As can be observed, fewer hypotheses obtained by merging may correspond to a higher number of measurements per gate.
This is due to the fact that the moment-matching merging \eqref{eq:KLDbaryGauss},
while preserving the mixture first and second moments, increases the distribution entropy\footnote{Recall that the Gaussian is the maximum entropy pdf when mean and covariance are given, so that, at the same covariance, mixtures have lower entropy than a single Gaussian.}:
a higher entropy amounts to more possibilities which, in terms of tracking, may lead to a virtual increase in the uncertainty about the system state (spread of the covariance).
Compared to the capping case, where the uncertainty related to a single hypothesis is the outcome of the Kalman filter only, in schemes where moment-matching takes place subsequent merging actions increase the overall mixture entropy. Such a phenomenon leads to two main facts:
\begin{itemize}
\item Unlike capping, there is interaction between state tracks, in the sense that merging hypotheses together can be seen as fusing two or more state trajectories into a single one, while increasing the resulting uncertainty.
This can enhance the filter robustness to bad modelling or high clutter levels.
\item In terms of \textit{Root Mean Squared Error} (RMSE), this may lead to a slight decrease in performance when high clutter is present and aggressive moment-matching is performed. This is due to the fact that significantly reducing the number of hypotheses by merging can lead to larger gates, hence to considering a larger number of data associations, mostly due to clutter; consequently, the corresponding state estimate will be perturbed by the presence of unwanted information. As mentioned, though, this can increase the robustness to bad modelling, and thus, in parallel, it could even provide more accurate estimates: the balance between the number of maintained hypotheses (model complexity) and the approximation methods (model accuracy) may hence be impactful in terms of overall performance.
\end{itemize}
To assess the previous arguments, Monte Carlo simulations have been performed for several scenarios, and three metrics have been considered: the position RMSE, the track loss (TL) percentage, and the loop time (LT).
The average number of maintained hypotheses ($\bar{n}^b$) is also reported.
A track is considered to be lost if, at a given time step, the true state is not contained in any of the hypotheses gates.
The RMSE is computed only over track sequences which were not lost.
The loop time considers both the filtering and reduction times
\footnote{All tests have been performed on a laptop with an Intel Core i7-8750H CPU @ 2.20GHz $\times$ 12.
Note that the reported loop times should only have a relative meaning, basically useful to establish an ordering of the algorithms in terms of computational costs.}.
\subsection{Monte Carlo Tests Over Different Clutter Intensities}
The first test considers a low-to-mid average number of FAs, $\lambda_c=150$, over the sensor FOV ($\approx 700\times 1300\ m^2$), for the trajectory reported in \eqref{eq:trajGen}.
With such a rate, the average number of in-gate measurements for each hypothesis turns out to be approximately $\bar{m}=3$.
\begin{small}
\begin{table}[hbtp]
\begin{centering}
\begin{tabular}{
|p{1.6cm}||p{1.7cm}|p{1.2cm}|p{1cm}|p{1cm}|}
\hline
\multicolumn{5}{|c|}{Monte Carlo average results for $N=1000$ runs} \\
\hline
\textbf{Algorithm} & \textbf{RMSE (m)} & \textbf{TL (\%)} & \textbf{LT (s)} & $\boldsymbol{\bar{n}^b}$\\
\hline
Capping-30 & 13.1429 & 45.8\% & 0.0060 & 29.2176 \\
\hline
Runnalls-5 & 11.9744 & 3.2\% & 0.0059 & 4.9596\\
\hline
Runnalls-30 & 11.6724 & 2.0\% & 0.0398& 28.8352\\
\hline
Adaptive-30 & 11.7408 & 1.8\% & 0.0162& 15.6205\\
\hline
\end{tabular}
\smallskip
\caption{Comparison of mixture reduction schemes for $\lambda_c=150$, $n^b=30$, $\gamma=5\cdot10^{-4}$, $\tilde{\gamma}=10^{-10}$, $\alpha_{\tilde{\mathcal{L}}}=0.05$.\label{table:lowClutterMC}\vspace{-1.6em}}
\end{centering}
\end{table}
\end{small}
Table \ref{table:lowClutterMC} shows that, for tracks which were not lost, the capping-only scheme lags slightly behind in terms of RMSE, while the three other cases are comparable.
Nonetheless, in terms of track loss, the capping-only scheme definitely has the worst performance.
For the Runnalls-based alternatives, the filters appear to be significantly more robust. In terms of execution times, the scheme taking up more resources is, as expected, the Runnalls-30, since it involves many cost evaluations and merging actions, while the Runnalls-5 turns out to be equivalent to the Capping-30 scheme; the Adaptive-30 alternative falls in the middle, suggesting that a suitable number of hypotheses for the process of interest could be $n^b=15$, for the given relative loss threshold $\alpha_{\tilde{\mathcal{L}}}$.
In order to reveal the differences in performance of the considered algorithms, another MC test is reported, where the clutter rate has been increased to $\lambda_c=300$ (Table \ref{table:highClutterMC}).
\begin{small}
\begin{table}[hbtp]
\begin{centering}
\begin{tabular}{
|p{1.6cm}||p{1.7cm}|p{1.2cm}|p{1cm}|p{1cm}|}
\hline
\multicolumn{5}{|c|}{Monte Carlo average results for $N=1000$ runs} \\
\hline
\textbf{Algorithm} & \textbf{RMSE (m)} & \textbf{TL (\%)} & \textbf{LT (s)} & $\boldsymbol{\bar{n}^b}$\\
\hline
Capping-30 & 21.0946 & 86.9\% & 0.0137 & 29.3432\\
\hline
Runnalls-5 & 22.5189 & 21.2\% & 0.0266 & 4.9691\\
\hline
Runnalls-30 & 18.4154 & 10.0\% & 0.0869& 29.2683\\
\hline
Adaptive-30 & 18.9112 & 10.8\% & 0.0434& 16.0126\\
\hline
\end{tabular}
\smallskip
\caption{Comparison of mixture reduction schemes for $\lambda_c=300$, $n^b=30$, $\gamma=5\cdot10^{-4}$, $\tilde{\gamma}=10^{-10}$, $\alpha_{\tilde{\mathcal{L}}}=0.05$.\label{table:highClutterMC}\vspace{-1.6em}}
\end{centering}
\end{table}
\end{small}
In terms of RMSE, it is possible to see again how, for tracks which were not lost, a similar ordering as before is maintained; of course, more clutter amounts to a higher RMSE.
In terms of track loss, one can see how Capping-30 loses the track in almost every run,
proving its total unreliability as a reduction scheme.
As for the Runnalls-based alternatives, it is now clearer how the number of hypotheses can affect the performance when complex scenarios are addressed: the Runnalls-5 scheme departs considerably from the other two approaches, both in terms of RMSE and TL, if compared to the previous case (medium clutter): this could be a first symptom of how aggressive merging can incorporate unwanted information.
In terms of execution times, we find a similar ordering as for the previous case of $\lambda_c=150$.
Overall, it seems that a higher number of hypotheses may be necessary if the scenario complexity increases. But to what extent can employing a higher number of hypotheses improve the tracker performance? And to what extent can more merging (in the $D_\textit{KL}$ sense) rather than pruning improve the filter robustness? Let us consider, at first, the same experiment as before ($\lambda_c=300$), with the same pruning thresholds, but with $n^b=50$.
\begin{small}
\begin{table}[hbtp]
\begin{centering}
\begin{tabular}{
|p{1.6cm}||p{1.7cm}|p{1.2cm}|p{1cm}|p{1cm}|}
\hline
\multicolumn{5}{|c|}{Monte Carlo average results for $N=1000$ runs} \\
\hline
\textbf{Algorithm} & \textbf{RMSE (m)} & \textbf{TL (\%)} & \textbf{LT (s)} & $\boldsymbol{\bar{n}^b}$\\
\hline
Capping-50 & 20.6443 & 85.8\% & 0.0252 & 48.6863\\
\hline
Runnalls-5 & 22.5897 & 21.9\% & 0.0275 & 4.9692\\
\hline
Runnalls-50 & 18.6655 & 10.8\% & 0.1577& 48.4460\\
\hline
Adaptive-50 & 19.1231 & 11.6\% & 0.0600& 24.0452\\
\hline
\end{tabular}
\smallskip
\caption{Comparison of mixture reduction schemes for $\lambda_c=300$, $n^b=50$, $\gamma=5\cdot10^{-4}$, $\tilde{\gamma}=10^{-10}$, $\alpha_{\tilde{\mathcal{L}}}=0.05$.\label{table:highClutterMoreHypsMC}\vspace{-1.1em}}
\end{centering}
\end{table}
\end{small}
By looking at Table \ref{table:highClutterMoreHypsMC}, it is possible to see how, w.r.t.\ Table \ref{table:highClutterMC}, not much has changed in terms of RMSE and track loss.
In this regard, the way hypotheses are managed appears to be more important than the number of maintained hypotheses.
To support this statement, the same test has been repeated ($\lambda_c=300$, $n^b=50$),
but with the pruning thresholds lowered to $\gamma=10^{-4}$ and $\tilde{\gamma}=10^{-12}$, respectively. Thus, all algorithms, except for the capping-only scheme,
perform less pruning and more merging in the $D_\textit{KL}$ sense.
\begin{small}
\begin{table}[hbtp]
\begin{centering}
\begin{tabular}{
|p{1.6cm}||p{1.7cm}|p{1.2cm}|p{1cm}|p{1cm}|}
\hline
\multicolumn{5}{|c|}{Monte Carlo average results for $N=1000$ runs} \\
\hline
\textbf{Algorithm} & \textbf{RMSE (m)} & \textbf{TL (\%)} & \textbf{LT (s)} & $\boldsymbol{\bar{n}^b}$\\
\hline
Capping-50 & 21.1096 & 86.1\% & 0.0269 & 48.6773\\
\hline
Runnalls-5 & 25.6687 & 12.3\% & 0.0994 & 4.9689\\
\hline
Runnalls-50 & 20.0868 & 4.0\% & 0.2754& 48.6022\\
\hline
Adaptive-50 & 20.2236 & 5.2\% & 0.1036& 21.7148\\
\hline
\end{tabular}
\smallskip
\caption{Comparison of mixture reduction schemes for $\lambda_c=300$, $n^b=50$, $\gamma=10^{-4}$, $\tilde{\gamma}=10^{-12}$, $\alpha_{\tilde{\mathcal{L}}}=0.05$.\label{table:highClutterMoreHypsLowPruningMC}\vspace{-1.8em}}
\end{centering}
\end{table}
\end{small}
As can be observed in Table \ref{table:highClutterMoreHypsLowPruningMC}, reducing pruning and increasing merging can dramatically improve the filter robustness, but can worsen the RMSE: as discussed, the Runnalls-5 reduction may be too aggressive, hence presenting larger gates and enclosing significant noise generated by clutter; in addition, $n^b=5$ can in general be an insufficient number of hypotheses for the case of interest. Nonetheless, it still presents a significant improvement in robustness if compared to the capping-only scheme. Meanwhile, by employing an adaptive reduction scheme, it is possible to achieve very robust and accurate tracking, close to the Runnalls-50 alternative, with a loop time comparable to that of the Runnalls-5 algorithm.
A general trend observed in all the tests carried out is that Runnalls-based schemes are able to recover the state estimate even when large deviations occur; by contrast, capping-only schemes, or alternatives where aggressive pruning is performed, do not possess such a property.
\subsection{Discussion}
In this section, we want to discuss further points arising from the numerical tests.
First of all, fewer hypotheses to manage do not necessarily imply a lower computational load.
As mentioned, depending on how the mixture is approximated, the component gates may expand, and considerably more measurements may be enclosed, hence requiring more filter updates in the next time step (see Fig. \ref{fig:example}.(c))%
\footnote{This effect can, however, be mitigated by the NWP \eqref{eq:NWP}.}.
When this happens, many new hypotheses are generated and more computational burden is added to the reduction phase.
In this regard, we noted that the performance of the adaptive scheme can be worse if the threshold $\alpha_{\tilde{\mathcal{L}}}$ of the relative accuracy loss is badly tuned.
Indeed, if too high a value is chosen, aggressive over-reductions may take place, especially in scenarios with low SNR.
A possible workaround may be to either lower the threshold (hence making the adaptive reduction more \textit{conservative}), or to employ a suitably tuned NWP threshold in order to get rid of those very low importance hypotheses that generate large gates.
The results of the many tests performed suggest that the number of hypotheses that are really useful and significant in a target tracking context depends on many factors, among which is the process dynamics.
If the target moves essentially in straight lines, it may not be necessary to employ many GM components to achieve robust tracking.
Conversely, if the process dynamics is more erratic, and not very compatible with the adopted motion model, additional hypotheses could be beneficial to obtain more robust and accurate tracking.
However, looking carefully at the results of the MC tests, it can be seen that by increasing the number of hypotheses, the average RMSE slightly increases. Again, this is linked to the fact that, if a large number of components is maintained, more tracks associated with clutter are propagated over time, before being pruned or merged, hence perturbing the resulting state estimate.
In this regard, suitably combining pruning with adaptive reduction can be beneficial.
We also carried out tests, not reported here, where the ground-truth trajectory was more closely aligned with the transition model \eqref{eq:constvelmod} used for the filter design, and tests with lower noise levels together with low rates of FAs.
In such cases, the capping-only scheme has often exhibited a decent overall behavior (low track loss percentage and low RMSE), because under the favourable circumstances just described, additional significant hypotheses, other than the one associated with the true state, are unlikely to appear.
In general, if the knowledge of the process statistics is very accurate (unlikely situation in real world applications), the use of MHTs with many hypotheses may not only be excessive, but in some cases can lead to a deterioration in performance.
In summary, we have seen that, in some circumstances, an MHT that includes an adaptive GM reduction scheme can achieve essentially the same performance as a Runnalls reduction scheme with a significant number of hypotheses, but with a lower computational load.
But how can saving computation time be useful, for the same level of performance?
Of course, a shorter filter/reduction loop time would allow an increase in the sensor scan rate, and in some applications this could be beneficial.
However, in those cases in which there is no need to increase the scan rate, a shorter filter/reduction loop time would allow other tasks to be carried out that are useful for further improving the filter performance, like re-estimating the process and measurement noise covariances, or performing a GM refinement routine, as discussed in \cite{CGMR}.
\section{Conclusions and Future Works}\label{sec:Conclusions}
In this work, the problem of tracking a single target in the presence of clutter has been addressed with a special focus on the GM approximation of the posterior distribution.
After formulating the problem, an approach to adaptively reduce the GM sizes is proposed in order to achieve robust and efficient tracking.
The several tests carried out have shown that, in target tracking scenarios, where one or more objects have to be tracked in the presence of clutter, mixture reduction is a core component of the filter as a whole and, under some circumstances, if rigorously addressed, can effectively improve the tracking performance.
The case of multiple-object tracking will be investigated in future works.
\begin{ack}
This work was supported by the Centre of EXcellence on Connected, Geo-Localized and Cybersecure Vehicles (EX-EMERGE), funded by the Italian Government under CIPE resolution n. 70/2017 (Aug.\ 7, 2017).
\end{ack}
\section{Introduction}
Over the last decades, brane world theories have received a lot of attention for their success in solving the gauge hierarchy and cosmological constant problems \cite{Randall,Arkani-Hamed98}. In the brane world scenario, our universe is a 3-brane embedded in a higher-dimensional bulk. The well known Randall-Sundrum (RS) models \cite{Randall} (including the RS-1 and RS-2 models) involve one extra dimension with a non-trivial warp factor due to the underlying anti-de Sitter (AdS) geometry.
In the RS thin brane models and their generalizations, the branes have no thickness and there are no dynamical mechanisms responsible for their formation. In order to investigate the dynamical generation of branes and their internal structure, domain wall (or thick brane) models were presented; for more details on thick brane models, see Refs. \cite{Dzhunushaliev:2009va,1707.08541}. One of the features of a thick brane is that it is usually generated by one or more background scalar fields coupled with gravity.
In thick brane models, various fundamental matter fields live in the higher-dimensional bulk. Therefore, in order to construct a more realistic brane world, on which the four-dimensional gravity and the matter fields of the standard model should be localized, it is very necessary and significant to provide an effective localization mechanism for the bulk gravity and matter fields. The results of Refs.~
\cite{Gremm:1999pj,yizhong17,Barbosa-Cendejas,Farakos2007,GermanHerrera2012,
Herrera-A2012,Kakushadze:2000zp} show that four-dimensional gravity can be localized on thick branes generated by background scalar field(s) in a five-dimensional asymptotically AdS spacetime. As shown in Refs.~\cite{ Linares,Liu7911,Koroteev08,Flachi09,Bajc2000,
Fuchune}, a free massless scalar field can also be localized on thick branes. A Dirac fermion field, without introducing the scalar-fermion coupling (also called the Yukawa coupling) \cite{ThickBrane2,guo1408,LiuXu2014,ThickBrane1,ThickBrane3,
Liu0803,Neupane} or a fermion-gravity coupling \cite{LiLiu2017a}, has no normalizable zero mode in five-dimensional RS-like brane models. Unfortunately, Ref.~\cite{Bajc:1999mh} gave the essence of the ``no-go theorem'' in the thin brane limit (the RS-2 model with an infinite extra dimension): the localization of a vector field seems to require a richer brane structure, for example, a de Sitter brane \cite{Liu200808,Liu20090902,GuoHerrera2013,Herrera-Aguilar2014}, a brane world with a finite extra dimension \cite{LiuFuGuoLi2012}, or a six-dimensional string-like model \cite{Oda2000}.
Then, a lot of works have been devoted to finding a mechanism for vector field localization, and the literature shows a wide variety of ideas. Kehagias and Tamvakis proposed a dilaton coupling between the vector field and the background scalar field \cite{Kehagias2001}. This mechanism has been widely applied in different thick brane models \cite{CruzTahimAlmeida2010,Alencar2010,
CruzLimaAlmeida2013,FuLiuGuo2011,CruzTahimAlmeida2009,ChristiansenCunhaTahim2010,
CruzMalufAlmeida2013}. After that, Chumbes, Hoff da Silva and Hott proposed a coupling function between the vector field and the background scalar field \cite{ChumbesHoffHott2012}. Vaquera-Araujo and Corradini introduced a Yukawa-like coupling, namely, a Stueckelberg-like action, to realize the localization of the vector field \cite{Vaquera-AraujoCorradini2014}.
Recently, Zhao et al. \cite{zhao1406} presented another localization mechanism for the vector field, i.e., they introduced a mass term $\alpha Rg^{MN}A_{M}A_{N}$ with $R$ the five-dimensional scalar curvature. They found that only for a special coupling parameter $\alpha=-1/16$ can the vector part $\hat{A}_{\mu}$ be localized on the thick brane, and there are no tachyonic modes. Then, Alencar et al. introduced other forms of the mass term: $\beta R^{MN}A_{M}A_{N}$ and $\beta G^{MN}A_{M}A_{N}$, with $R^{MN}$ and $G^{MN}$ the Ricci tensor and the Einstein tensor \cite{Alencar1506}.
However, in all these mechanisms, in order to eliminate tachyonic vector modes, the mass parameters $\alpha$ or $\beta$ must be fixed, since there is no further degree of freedom in the coupling parameter. As a result, the effective potential of the vector Kaluza-Klein (KK) modes is fixed and usually there are no resonant
vector KK modes quasi-localized on the brane.
Inspired by the above works, we generalize the mass term to the following one
\begin{eqnarray}
{ -\frac{1}{2}\left(\alpha Rg^{MN}A_{M}A_{N}+\beta R^{MN}A_{M}A_{N}\right)}, \label{generalizedCoupling}
\end{eqnarray}
since both terms are possible couplings. Then we study the localization and quasi-localization of the vector field on the thick brane. The quasi-localized massive KK modes might be a candidate for dark matter. Note that the consistency conditions for this kind of localization mechanism have just been investigated in Ref.~\cite{Freitas:2020mxr}.
We decompose the vector field $A_{M}$ into three parts: the transverse component $\hat{A}_{\mu}$ (transverse vector part), the longitudinal component $\partial_{\mu}\phi$ (scalar part), and the fifth component $A_{5}$ (scalar part). Here, the Latin indices ($M, N=0,1,2,3,5$) stand for the five-dimensional coordinate indices, while the Greek indices ($\mu, \nu=0,1,2,3$) correspond to the brane coordinate indices. We find that the transverse vector part $\hat{A}_{\mu}$ decouples from the scalar parts $\phi$ and $A_{5}$. Besides, in order to eliminate the tachyonic modes of $\hat{A}_{\mu}$, the two parameters in the coupling term \eqref{generalizedCoupling}, $\alpha$ and $\beta$, should satisfy the relation $\beta=-1-8\alpha\pm\sqrt{1+12\alpha}$. With this constraint, we can define a combination parameter $\gamma=\frac{3}{2}\pm\sqrt{1+12\alpha}$, and the localization condition for the transverse vector part $\hat{A}_\mu$ is $\gamma>1/2$. More importantly, we can find resonant states under this restriction. We investigate the resonant character of $\hat{A}_\mu$ with the general RS-like thick brane warp factor $A(z)=-\ln(1+k^2z^2)/2$. These resonant states can be considered as a candidate for dark matter.
The remaining parts are organized as follows. In section \ref{sec-2}, we introduce the generalized model of {the vector field}. Then, we calculate the localization of {the transverse part of a five-dimensional vector} on the thick brane in section \ref{localization}. After that, we study the resonant character of the {transverse vector part} in section \ref{resonance}. Finally, we conclude our results in the last section.
\section{{The generalized geometrical coupling mechanism of vector field}}\label{sec-2}
{The vector field can be localized on the thick brane by considering the geometrical coupling {term, e.g., the coupling between the vector field and the Ricci scalar (or Ricci tensor), which} can be viewed as a mass term \cite{zhao1406,Alencar1409,Alencar1506}. In this paper, we consider the generalized geometrical coupling (\ref{generalizedCoupling}). Then the full five-dimensional action for the vector field $A_M$ {is given by}
\begin{eqnarray}
S&=&-\frac{1}{4}\int d^5x\sqrt{-g}\Big( {F^{MN}F_{MN}} \nn\\
&&+2(\alpha R g^{MN}+\beta R^{MN})A_M A_N\Big ),\label{vectorAction}
\end{eqnarray}
where $g_{MN}$ is the metric of the five-dimensional bulk spacetime and $F_{MN}=\partial_{M}A_{N}
-\partial_{N}A_{M}$ is the field {strength. Here,} we decompose the vector $A_{M}$ in the following way:
\begin{equation}
A_{M}=(\hat{A}_{\mu}+\partial_{\mu}\phi,\;A_{5}), \label{eqAM}
\end{equation}
where $\hat{A}_{\mu}$ is the transverse component with the transverse condition $\partial_{\mu}\hat{A}^{\mu}=0$, and $\partial_{\mu}\phi$ is the longitudinal component.
We adopt the following metric ansatz to describe the five-dimensional spacetime:
\begin{equation}\label{metric}
ds^2=e^{2A(y)}\eta_{\mu\nu}dx^{\mu}dx^{\nu}+dy^2,
\end{equation}
where the warp factor $A(y)$ is a function of the extra dimensional coordinate $y$.
So, {the Ricci scalar and the nonvanishing components of the Ricci tensor} can be expressed as
\begin{eqnarray}
R~&=&-4(5A'^2+2A''),\\
R^{\mu\nu} &=&-(4{A'}^2+A'')g^{\mu\nu},\\
R^{55} &=& -4({A'}^2+A''),
\end{eqnarray}
where the prime denotes derivative with respect to $y$. {The components of the mass
terms in action (\ref{vectorAction}) can be written as
\begin{eqnarray}
\alpha R g^{\mu\nu}+\beta R^{\mu\nu} &=& \mathcal{W}g^{\mu\nu},\\
\alpha R g^{5 5}+\beta R^{5 5} &=& \mathcal{G}g^{5 5},
\end{eqnarray}
{where}
\begin{eqnarray}
\mathcal{W} &=& -4(5\alpha+\beta){A'}^2-(8\alpha+\beta)A'',\\
\mathcal{G} &=& -4(5\alpha+\beta){A'}^2-4(2\alpha+\beta)A''.
\end{eqnarray}
{By substituting} the decomposition \eqref{eqAM} into the action \eqref{vectorAction}, we can split the action into {two parts}
\begin{equation}
S=S_{V}(\hat{A}_{\mu})+S_{S}(\phi,\,A_5),
\end{equation}
where \begin{eqnarray}
S_{V}&=&-\frac{1}{4}\int d^5x\Big(\hat{F}_{\lambda \mu}\hat{F}_{\nu \rho}{\eta}^{\lambda\nu} {\eta}^{\mu\rho}+2{\partial}_{5}{\hat{A}_{\mu}}{\partial}^{5}{\hat{A}_{\nu}}\eta^{\mu\nu}e^{2A}\nn\\
&+&2\mathcal{W}\hat{A}_{\mu}\hat{A}_{\nu}\eta^{\mu\nu}e^{2A}\Big), \label{actionT} \\
S_{S}&=&-\frac{1}{2}\int d^5x e^{2A}\Big(\eta^{\mu\nu}g^{55}({\partial}_{5}{\partial}_{\mu}{\phi})({\partial}_{5}{\partial}_{\nu}{\phi})\nn\\
&+&\mathcal{W}\eta^{\mu\nu} {\partial}_{\mu}{\phi}{\partial}_{\nu}{\phi}
+\eta^{\mu\nu}g^{55}{\partial}_{\mu}{A}_{5}{\partial}_{\nu}{A}_{5}\nn\\
&+&\mathcal{G}e^{2A}g^{55}{A}_{5}{A}_{5}-2\eta^{\mu\nu}g^{55}{\partial}_{\mu}{A}_{5}
(\partial_{5}{\partial}_{\nu}{\phi})\Big), \label{actionscalar}
\end{eqnarray}
where $\hat{F}_{\mu\nu} = \partial_{\mu} \hat{A}_{\nu}-\partial_{\nu} \hat{A}_{\mu}$. The above result shows that the transverse vector part $\hat{A}_\mu$ decouples from the scalar parts. So, we can consider the localization condition and the resonant character of the transverse vector part $\hat{A}_\mu$ separately.
\section{Localization of the transverse vector part of the vector field on thick brane}
\label{localization}
In this section, we consider the localization of the vector part $\hat{A}_\mu$ independently. We make the following KK decomposition:
\begin{equation}
\hat{A}_{\mu}(x,y)=\sum_{n}a^{(n)}_{\mu}(x^\nu)\tilde{\rho}_{n}(y),\label{KKdeco}
\end{equation}
where $a^{(n)}_{\mu}(x^\nu)$ is the four-dimensional vector KK mode and $\tilde{\rho}_{n}(y)$ is the corresponding extra dimensional profile (also called as the KK wave function in Ref. \cite{Ponton2012}), and the index ``$n$'' represents the $n$-th KK mode. By using the KK decomposition \eqref{KKdeco} and the orthonormality condition
\begin{equation}\label{ortconditions}
\int^{\infty}_{-\infty}\tilde{\rho}_{n}(y)\tilde{\rho}_{m}(y)\,dy=\delta_{mn},
\end{equation}
we can get an effective action including the four-dimensional massless vector field (the zero mode $a^{(0)}_{\mu}$) and a set of massive vector fields $a^{(n)}_{\mu}$ with $n>0$:
\begin{equation}\label{actionT4}
S_{V}=-\frac{1}{4}\sum_{n}\int d^4x\Big(
f^{(n)}_{\mu\lambda}f_{(n)}^{\mu\lambda}+2 m_n^2 a^{(n)}_{\mu}a^{(n)}_{\nu}\eta^{\mu\nu}\Big),
\end{equation}
where $f^{(n)}_{\mu\nu}=\partial_{\mu}a^{(n)}_{\nu}-\partial_{\nu}a^{(n)}_{\mu}$ is the four-dimensional vector field strength tensor. In addition, the extra dimensional part $\tilde{\rho}_{n}(y)$ should satisfy the following equation:
\begin{equation}
-\partial_y\left(e^{2A(y)}\partial_{y}\tilde{\rho}_n\right)+\tilde{\rho}_{n}e^{2A(y)}\mathcal{W}
=m_n^2\tilde{\rho}_n. \label{eq}
\end{equation}
{In order to solve the above equation} \eqref{eq}, {we make} a coordinate transformation $dz=e^{-A(y)}dy$, {for which} the metric can be expressed as
\begin{equation}\label{conformalmetric}
ds^2=e^{2A(z)}(\eta_{\mu\nu}dx^{\mu}dx^{\nu}+dz^2),
\end{equation}
{and Eq. \eqref{eq} is rewritten} as
\begin{equation}
-\partial_z\Big(e^{A(z)}\partial_z\tilde{\rho}_{n}\Big)+\tilde{\rho}_{n}e^{3A(z)}\mathcal{W}
=e^{A(z)}m^2_{n}\tilde{\rho}_{n},\label{eq1}
\end{equation}
with $\mathcal{W}=e^{-2A}(-12\alpha-3\beta)(\partial_{z}A)^2+e^{-2A}(-8\alpha-\beta)\partial^2_{z}A$. {After the field transformation $\tilde{\rho}_{n}=e^{-\frac{1}{2}A(z)}\rho_{z}(z)$}, Eq. \eqref{eq1} can be rewritten as a Schr\"{o}dinger-like equation:
\begin{equation}
\Big(-\partial^{2}_{z}+V_{v}(z)\Big)\rho_{n}=m^2_{n}\rho_{n},\label{equa1}
\end{equation}
where the explicit expression of the effective potential $V_{v}(z)$ is
\begin{equation}\label{effpov}
V_{v}(z)=\left(\frac{1}{4}-12\alpha-3\beta\right)(\partial_{z}A)^2
+\left(\frac{1}{2}-8\alpha-\beta\right)\partial^2_{z}A.
\end{equation}
In order to exclude tachyonic vector modes, the eigenvalues of the Schr\"{o}dinger-like equation \eqref{equa1} should be non-negative, i.e., $m_{n}^2\geq0$. So, Eq. \eqref{equa1} should be written in the form $Q^{\dagger}Q\rho_{n}= m_{n}^{2}\rho_{n}$ with $Q=-\partial_{z}+\gamma\partial_{z}A$. That is to say, the effective potential should be of the form
\begin{eqnarray}
V_{v}(z)=\gamma^2(\partial_{z}A)^2+\gamma\partial^2_{z}A.
\label{effpov2}
\end{eqnarray}
{To this end}, the two parameters $\alpha$ and $\beta$ should satisfy the following {relation:}
\begin{equation}
\beta=-1-8\alpha\pm\sqrt{1+12\alpha}, \label{relation}
\end{equation}
{and so parameter $\gamma$ in \eqref{effpov2} is given by}
\begin{eqnarray}
\gamma=\frac{3}{2}\pm\sqrt{1+12\alpha}.\label{gamma}
\end{eqnarray}
{With the relation \eqref{relation} and the expression \eqref{gamma},} the Schr\"{o}dinger-like equation \eqref{equa1} can be {further} rewritten as
\begin{equation}\label{sequ2}
\Big(-\partial^{2}_{z}+\gamma^2(\partial_{z}A)^2+\gamma\partial^2_{z}A\Big)\rho_{n}=m^2_{n}\rho_{n}.
\end{equation}
{Now, we investigate the localization of the zero mode of $\hat{A}_{\mu}$, for which $m_{0}=0$ and the solution is given by}
\begin{equation}
\rho_{0}(z)=c_{0}e^{\gamma A(z)},
\end{equation}
{where $c_{0}$ is the normalization constant.}
According to the orthonormality condition \eqref{ortconditions}, the integration of ${\rho}^2_{0}$ should be finite, namely,
\begin{eqnarray}
\int_{-\infty}^{+\infty}{\rho}^2_{0}dz&=&{c_0^2}\int_{-\infty}^{+\infty} e^{2\gamma A(z)}dz\nn\\
&=&{c_0^2}\int_{-\infty}^{+\infty} e^{(2\gamma-1)A(y)}dy =1 \label{int31}.
\end{eqnarray}
For the RS-like braneworld scenarios, the warp factor has the following asymptotic behavior:
\begin{equation}
A(y)|_{y\rightarrow\pm\infty}\rightarrow-k|y|,
\end{equation}
where $k$ is the scale parameter of the brane with mass dimension. Plugging it into Eq. \eqref{int31}, we obtain that
\begin{equation}
{e^{(2\gamma-1)A(y)}|_{y\rightarrow\pm\infty} \rightarrow e^{-(2\gamma-1)k|y|}.} \label{integration}
\end{equation}
In order to ensure that the integration \eqref{int31} is convergent, the parameter should satisfy $\gamma>1/2$, i.e., $1\pm\sqrt{1+12\alpha}>0$. So, the ranges of the parameter $\alpha$ for the two concrete expressions of $\beta$ are:
\begin{eqnarray}\label{locondition}
\alpha>-1/12, \beta&=&-1-8\alpha-\sqrt{1+12\alpha},\label{condition1}\\
0>\alpha>-1/12, \beta&=&-1-8\alpha+\sqrt{1+12\alpha}.\label{condition2}
\end{eqnarray}
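As a quick numerical cross-check of the condition $\gamma>1/2$, one can evaluate the norm of the zero mode for the warp factor $A(z)=-\ln(1+k^2z^2)/2$ that will be adopted in the next section, for which ${\rho}^2_{0}\propto e^{2\gamma A(z)}=(1+k^2z^2)^{-\gamma}$ (a sketch of ours in Python):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def zero_mode_norm(gamma, k=1.0):
    # Integral of rho_0^2 = (1 + k^2 z^2)^(-gamma)
    # over the real line.
    val, _ = quad(lambda z: (1 + (k*z)**2)**(-gamma),
                  -np.inf, np.inf)
    return val

print(zero_mode_norm(1.5))  # finite for gamma > 1/2;
# for gamma <= 1/2 the integral diverges and quad
# warns that it fails to converge.
\end{verbatim}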
\section{The resonant character of $\hat{A}_{\mu}$} \label{resonance}
In this section, we would like to investigate the massive KK states of the transverse vector part of the vector field. We will mainly look for resonant KK states, which are quasi-localized on the brane but will eventually propagate into the extra dimension. The resonance spectrum of these KK states is one of the typical characteristics of RS-like brane world models. They can interact with four-dimensional particles, which may lead to the non-conservation of energy and momentum, since the KK resonances can escape from the brane. So, it is possible to probe extra dimensions by detecting resonant states \cite{Aaltonen}. Besides, some physicists regard those massive KK particles as a candidate for dark matter (see Refs.~\cite{KK1,KK2,KK3} for details). The appearance of these resonances is related to the structure of the brane. Thus, it is important and interesting to study the resonant KK modes on thick branes with different structures. References \cite{Almeida1,fuqumi,Fermion1,Cruz1,Landim,duyuzhi} have considered resonances of gravitons and fermions. Besides, Arakawa et al. considered a massive vector field as a candidate of dark matter to explain the strong CP problem \cite{tait19}. So, we will study the resonances of the five-dimensional vector field.
In order to study the resonant states, Almeida et al. proposed taking large peaks of the wave function as resonances in their study of fermion resonances \cite{Almeida1}. Then, Landim et al. studied the resonant states with the transfer matrix method \cite{Landim,duyuzhi}. Here, we will choose the relative probability method proposed by Liu et al. \cite{Fermion1} to calculate the resonant KK modes of the vector part $\hat{A}_\mu$, since the method is effective for both odd and even KK states. The relative probability is defined as \cite{Fermion1}
\begin{equation}\label{relative probability}
P=\frac{\int^{z_b}_{-z_b}|\rho_{n}(z)|^2dz}{\int^{z_{max}}_{-z_{max}}|\rho_{n}(z)|^2dz},
\end{equation}
where $2z_b$ is approximately the width of the thick brane and $z_{max}=10z_b$. Since the potentials considered in this paper are symmetric, the wave functions are either even or odd. Hence, we can use the following boundary conditions to solve the differential equation \eqref{sequ2} numerically:
\begin{eqnarray}
\rho_{n}(0)&=& 0,~~\rho'_{n}(0)=1,~~~\text{for odd KK modes}, \nn \\
\rho_{n}(0)&=& 1,~~\rho'_{n}(0)=0,~~~\text{for even KK modes}.
\end{eqnarray}
We solve the Schr\"{o}dinger-like equation \eqref{sequ2} with the general RS-like warp factor $A(z)=-\ln(1+{k^2z^2})/2$. According to supersymmetric quantum mechanics, the supersymmetric partner potentials will share the same spectrum of massive excited states. So, we can judge whether there are resonances by analyzing the shape of the supersymmetric partner potential (we call it the dual potential). For our case, the dual potential corresponding to \eqref{effpov2} is $V_{v}^{(\text{dual})}(z)=\gamma^2(\partial_{z}A)^2-\gamma\partial^2_{z}A$. If there is no well or quasi-well in the dual potential, then there are no resonances. Thus, resonances might exist only for $\gamma>3$. We solve the KK states numerically. The result shows that both the parameters $k$ and $\gamma$ will affect the properties of the resonant states.
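A minimal numerical sketch of this procedure is the following (ours, in Python; the values of $z_b$, $z_{max}$ and the solver tolerances are choices of ours, and, by parity, integrating over $[0,z_{max}]$ gives the same ratio as over $[-z_{max},z_{max}]$). The resonance spectrum is obtained by scanning $m^2$ and locating the peaks of $P$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k, gamma = 1.0, 10.0
zb = 10.0
zmax = 10.0 * zb                 # z_max = 10 z_b

def V(z):                        # effective potential
    Ap  = -k**2 * z / (1 + (k*z)**2)
    App = -k**2 * (1 - (k*z)**2) / (1 + (k*z)**2)**2
    return gamma**2 * Ap**2 + gamma * App

def rel_prob(m2, parity='even'):
    y0 = (1.0, 0.0) if parity == 'even' else (0.0, 1.0)
    rhs = lambda z, y: [y[1], (V(z) - m2) * y[0]]
    zs = np.linspace(0.0, zmax, 20001)
    sol = solve_ivp(rhs, (0.0, zmax), y0, t_eval=zs,
                    rtol=1e-10, atol=1e-12)
    rho2 = sol.y[0]**2
    # Uniform grid: the spacing cancels in the ratio.
    return rho2[zs <= zb].sum() / rho2.sum()
\end{verbatim}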
Figure \ref{figure1} shows the influence of the combination parameter $\gamma$ on the effective potential $V_{v}(z)$ and on the resonant KK modes of the vector field $\hat{A}_{\mu}$. Figure \ref{potentialla} shows that the height of the potential barrier increases with the combination parameter $\gamma$, which indicates that there are more resonant KK modes for larger $\gamma$; this can be confirmed from Figs. \ref{resonantv1}, \ref{resonantv2}, \ref{resonantv3}. Combining Figs. \ref{resonantv1}, \ref{resonantv2}, \ref{resonantv3}, we can see that the mass of the first resonant KK mode, the number of resonant states, and the mass gap of the resonant KK modes increase with the parameter $\gamma$.
The effect of the scale parameter $k$ is shown in Fig. \ref{figure2}. From Fig. \ref{potentialk}, we can see that the scale parameter $k$ influences not only the width of the potential well, but also its height. With increasing scale parameter $k$, the potential well becomes narrower and higher. From Figs. \ref{resonantk1}, \ref{resonantk2}, \ref{resonantk3}, we can see that the mass of the first resonant KK mode and the mass gap of the resonant KK modes increase with the parameter $k$. However, the number of resonances does not change with $k$ for a fixed $\gamma$.
\begin{figure}[htb]
\subfigure[$V_{v}$]{\label{potentialla}
\includegraphics[width=0.22\textwidth]{A_muv.eps}}
\subfigure[$\gamma=10$]{\label{resonantv1}
\includegraphics[width=0.22\textwidth]{resonancev10.eps}}
\subfigure[$\gamma=15$]{\label{resonantv2}
\includegraphics[width=0.22\textwidth]{resonancev15.eps}}
\subfigure[$\gamma=25$]{\label{resonantv3}
\includegraphics[width=0.22\textwidth]{resonancev25.eps}}
\caption{The influence of the combination parameter $\gamma$ on the effective potential $V_{v}$ and the probabilities $P$ (as a function of $m^2$) {for the} odd-parity (blue dashed lines) and even-parity (red lines) massive KK modes. The scale parameter is set as $k=1$.}
\label{figure1}
\end{figure}
\begin{figure}[htb]
\subfigure[$V_{v}$]{\label{potentialk}
\includegraphics[width=0.22\textwidth]{A_muk.eps}}
\subfigure[$k=0.2$]{\label{resonantk1}
\includegraphics[width=0.22\textwidth]{resonancek02.eps}}
\subfigure[$k=2$]{\label{resonantk2}
\includegraphics[width=0.22\textwidth]{resonancek2.eps}}
\subfigure[$k=5$]{\label{resonantk3}
\includegraphics[width=0.22\textwidth]{resonancek5.eps}}
\caption{The influence of the scale parameter $k$ on the effective potential $V_{v}$ and the probabilities $P$ (as a function of $m^2$) {for both} the odd-parity (blue dashed lines) and even-parity (red lines) massive KK modes. The combination parameter is set as $\gamma=25$.}
\label{figure2}
\end{figure}
Tables \ref{Table2} and \ref{Table1} list the specific values of the mass $m_n$, the relative probability $P$, the width $\Gamma$, and the lifetime $\tau$ for different parameters. Here, we define the width $\Gamma=\Delta m_n$ at half maximum of the peak and $\tau=1/\Gamma$. Table \ref{Table2} shows that with increasing parameter $\gamma$, the relative probability $P$ of the corresponding $n$-th resonant state becomes larger, and the lifetime of the resonant state becomes longer. From Tab.~\ref{Table1}, we can see that when the parameter $\gamma$ is fixed, all of the mass $m_{n}$, the width $\Gamma$, and the lifetime $\tau$ of the corresponding $n$-th resonant state are influenced by the parameter $k$. However, the values of $m_n/k$ are basically the same for different values of $k$, as are the relative probability $P$, the relative width $\Gamma/k$, and the relative lifetime $\tau*k$. So we can make a coordinate transformation $\bar{z}=kz$ to offset the effect of $k$. Combining these two tables, we can see that the lifetime $\tau$ increases with $\gamma$, while it decreases with $k$, which means that if the parameter $\gamma$ is large enough or the parameter $k$ is small enough, the lifetime of the resonant states can be as long as the age of our universe. So, in this case, we can consider the resonant states as one of the candidates for dark matter.
Then, we calculate the lifetime of the first resonant state in order to check whether it can be a candidate for dark matter or not. For convenience, we make a coordinate transformation $\bar{z}=kz$, and define the scaled mass $\bar{m}_1=m_1/k$ and the scaled lifetime $\bar{\tau}=1/\bar{\Gamma}$ for the first resonant state in the Natural System of Units, where $\bar{\Gamma}=\Delta \bar{m}_1$ is the width at half maximum of the peak of the first resonant state. Note that both $\bar{m}_{1}$ and $\bar{\tau}$ are dimensionless.
Figure \ref{canshutu} shows that both the scaled mass $\bar{m}_1$ and the scaled lifetime $\text{log}(\bar{\tau})$ {linearly depend on the} parameter $\gamma$, and the fit functions can be expressed as
\begin{eqnarray}
\bar{m}_1&=&-3.2+2.0\gamma,\label{fitm}\\
\text{log}(\bar{\tau})&=&4.7+0.2\gamma.\label{fitt}
\end{eqnarray}
\begin{figure}[htb]
\subfigure[$\bar{m}_1$]{\label{gammaM}
\includegraphics[width=0.22\textwidth]{canshutum.eps}}
\subfigure[$\text{log}(\bar{\tau})$]{\label{gammaT}
\includegraphics[width=0.22\textwidth]{canshutut.eps}}
\caption{The influence of the combination parameter $\gamma$ on the scaled mass $\bar{m}_1$ and the scaled lifetime $\text{log}(\bar{\tau})$ of the first resonant state. The black dots are {numerical results}, the red solid line is the fit function for $\bar{m}_{1}$ with $a_m=-3.2$ and $b_m=2.0$, and the blue solid line is the fit function for log($\bar{\tau}$) with $a_{\bar{\tau}}=4.7$, $b_{\bar{\tau}}=0.2$.}
\label{canshutu}
\end{figure}
It is known that the age of our universe is about 13.8 billion years, i.e., $4.35\times10^{17}\,\text{s}$. So, if we consider the first resonant state as one of the candidates for dark matter, its lifetime should be larger than the age of the universe, i.e., $\tau \gtrsim 4.35 \times 10^{17}\,\text{s}$, or in the Natural System of Units,
\begin{equation}\label{conditiontau}
\tau=1/(k\bar{\Gamma})=\bar{\tau}/k \gtrsim 6.6\times10^{32}\text{eV}^{-1}.
\end{equation}
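The number on the right-hand side follows from converting the age of the universe using $\hbar\simeq 6.582\times10^{-16}~\text{eV}\cdot\text{s}$; a one-line check (ours, in Python):
\begin{verbatim}
hbar = 6.582e-16       # eV * s
age  = 4.35e17         # s
print(age / hbar)      # ~ 6.6e32 eV^(-1)
\end{verbatim}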
Thus, the restriction on the scale parameter $k$ can be expressed as
\begin{equation}
k \lesssim 1.5 \times{10^{-33}} \bar{\tau}~\text{eV}\simeq 7.5 \times 10^{-29+0.2\gamma}~\text{eV}.\label{conditionkgamma}
\end{equation}
In addition, in { the brane world theory considered in this paper}, the relation between the four-dimensional effective Planck scale $M_{Pl}$ and the five-dimensional fundamental scale $M_{*}$ is given by \cite{Kakushadze:2000zp}:
\begin{equation}
M_{Pl}^2=M_{*}^3\int_{-\infty}^{\infty} dz e^{3A(z)}
=2M_{*}^3/k.
\end{equation}
Theoretically, if the energy scale reaches the five-dimensional fundamental scale $M_{*}$, the quantum effect of gravity cannot be ignored. Experimentally, in recent experiments at the Large Hadron Collider (LHC), the collision energy is 13 TeV and the results show that the quantum effect of gravity can be ignored, which means that the five-dimensional fundamental scale satisfies $M_{*}>13$ TeV.
Thus, the constraint on the parameter $k$ is
\begin{equation}\label{conditionM}
k > 4.4\times{10^{-17}}~\text{eV}.
\end{equation}
By combining the two conditions \eqref{conditionkgamma} and \eqref{conditionM} with the fit function \eqref{fitm}, we can get the restrictions on the mass of the first resonant state $m_{1}$ in terms of the combination parameter $\gamma$:
\begin{eqnarray}
m_1&>&(8.8\gamma-14.1)\times{10^{-17}} \text{eV}\label{conditionm1},\\
m_1&\lesssim&(1.5\gamma-2.4)\times{10^{-28+0.2\gamma}}\text{eV}\label{conditionm2}.
\end{eqnarray}
The shadow regions of Fig.~\ref{canshutu2} show the available ranges of the parameters $k$ and $m_{1}$, respectively. From Fig.~\ref{csxianzhik}, we can see that only if $\gamma>57$ can the two restriction conditions \eqref{conditionkgamma} and \eqref{conditionM} on $k$ be satisfied, which means that the parameter $\gamma$ has a lower limit. Correspondingly, Fig.~\ref{csxianzhim1} shows that there is a lower limit for the mass of the first resonant state $m_{1}$, i.e., $m_{1}\gtrsim{10^{-15}}\,\text{eV}$.
\begin{figure}[htb]
\subfigure[$k$]{\label{csxianzhik}
\includegraphics[width=0.225\textwidth]{csxianzhik.eps}}
\subfigure[$m_{1}$]{\label{csxianzhim1}
\includegraphics[width=0.225\textwidth]{csxianzhim1.eps}}
\caption{The allowed ranges of the scale parameter $k$ and of the mass of the first resonant state $m_{1}$. The blue lines are the restrictions from the requirement that the five-dimensional fundamental scale $M_{*}$ be larger than $13$ TeV, and the black lines are the restrictions from the requirement that the lifetime of the first resonant state be of the order of the age of the universe.}
\label{canshutu2}
\end{figure}
\begin{table*}[htb]
\begin{center}
\begin{tabular}{||c|c|c|c|c|c||}
\hline
$\gamma$ & $n$ & $m_{n}$ & $P$ & $\Gamma$ & $\tau$ \\
\hline \hline
&1& 4.0917 & 0.9474 & $1.0998\times 10^{-7}$ & $9.0924\times 10^{6}$ \\ \cline{2-6}
\raisebox{2.3ex}[0pt]{10}
&2 & 5.1461 & 0.3027 & $2.911\times 10^{-2}$ & $34.3351$ \\ \cline{2-6}
\hline\hline
&1 & 5.1822 & 0.9973 & $1.5778\times 10^{-8}$ & $8.6372\times 10^{7}$ \\ \cline{2-6}
&2 &6.8525 & 0.9032 & $1.4593\times 10^{-6}$& $6.8523\times 10^{5}$ \\ \cline{2-6}
\raisebox{2.3ex}[0pt]{15}
&3& 7.6902 & 0.2839 & $6.5012\times 10^{-3}$& $1.5382\times 10^{2}$ \\ \cline{3-6}
\hline\hline
&1 & 6.8457 & 0.9997 & $1.09557\times 10^{-9}$& $9.1277\times 10^{8}$ \\ \cline{2-6}
&2 & 9.3884 & 0.9468 & $1.598\times 10^{-7}$& $6.2589\times 10^{6}$ \\ \cline{2-6}
&3& 10.9936 & 0.9189 & $9.0939\times 10^{-6}$& $1.0996\times 10^{5} $ \\ \cline{2-6}
\raisebox{2.3ex}[0pt]{25}
&4 & 12.1288 & 0.7774 & $8.2448\times 10^{-4}$&$1.2129\times 10^{3}$ \\ \cline{2-6}
&5& 12.7800 & 0.2456 & $1.9736\times 10^{-3}$& $5.0668\times{10^{2}}$ \\ \cline{2-6}
\hline
\end{tabular}\\
\caption{The influence of combination parameter $\gamma$ on the mass spectrum $m_n$, the relative probability $P$, the width of mass $\Gamma$, and the lifetime $\tau$ of the KK resonances. The scale parameter $ {k}$ is set as $ {k}=1$. }
\label{Table2}
\end{center}
\end{table*}
\begin{table*}[htb]
\begin{tabular}{||c|c|c|c|c|c|c|c|c||}
\hline
$k$ & $n$ & $m_{n}$ & $m_{n}/k$ & $P$ & $\Gamma$&$\Gamma/k$ & $\tau$ & $\tau*k$ \\
\hline \hline
&1 & 1.3681 & 6.8405 & 0.9999 & $2.1928\times 10^{-10}$& $1.0964\times 10^{-9}$ & $4.5602\times 10^{9}$& $9.1204\times 10^{8}$\\ \cline{2-9}
&2 & 1.8852 & 9.4262 & 0.9987 & $2.6522\times 10^{-8}$ &$1.3261\times 10^{-7}$ & $3.7704\times 10^{7}$&$7.5408\times 10^{7}$ \\ \cline{2-9}
&3 & 2.1962 & 10.9813 & 0.9872 & $1.8213\times 10^{-6}$ & $9.1065\times 10^{-6}$ &$5.4904\times 10^{5}$&$1.0981\times 10^{5}$ \\ \cline{2-9}
\raisebox{2.3ex}[0pt]{0.2}
&4 & 2.4306 & 12.1529 & 0.8212 & $1.6546\times 10^{-4}$ &$8.2731\times 10^{-4}$ & $6.0767\times 10^{3}$ &$1.2153\times 10^{3}$ \\ \cline{2-9}
&5 & 2.5470 & 12.7350 & 0.3192 & $3.5343\times 10^{-3}$ & $1.7672\times 10^{-3}$ &$2.8294\times 10^3$&$5.6588\times 10^{2}$ \\ \cline{2-9}
\hline\hline
&1& 13.6982 & 6.8493 & 0.9993 & $2.1902\times 10^{-9}$& $1.0951\times 10^{-9}$ & $4.5657\times 10^{8}$& $9.1314\times 10^{8}$ \\ \cline{2-9}
&2 & 18.7361 & 9.3680 & 0.9923 & $2.6686\times 10^{-7}$& $1.3343\times 10^{-7}$& $3.7472\times 10^{6}$& $7.4944\times 10^{6}$ \\ \cline{2-9}
\raisebox{2.3ex}[0pt]{2}
&3 & 22.0077 & 11.0039 & 0.9361 & $1.8175\times 10^{-5}$& $9.0875\times 10^{-6}$& $5.5021\times 10^{4}$& $1.1004\times 10^{5}$ \\ \cline{3-9}
&4 & 24.2305 & 12.1141 & 0.8164 & $1.6507\times 10^{-3}$& $8.2535\times 10^{-4}$& $6.0578\times 10^{2} $& $1.2116\times 10^{3}$\\ \cline{2-9}
&5 & 25.6334 & 12.8167 & 0.3082 & $3.5109\times 10^{-3}$ & $1.7554\times 10^{-3}$& $2.8482\times 10^{2}$& $5.6964\times 10^{2}$\\ \cline{2-9}
\hline\hline
&1 & 34.1950 & 6.8391 & 0.9991 & $5.4417\times 10^{-9}$& $1.0883\times 10^{-9}$& $1.8377\times 10^{8}$& $9.1883\times 10^{8}$ \\ \cline{2-9}
&2 & 47.2038 & 9.4404 & 0.9986 & $7.4164\times 10^{-7}$& $1.4833\times 10^{-7}$& $1.3484\times 10^{6}$& $6.7419\times 10^{6}$ \\ \cline{2-9}
&3 & 54.8917 & 10.9784 & 0.9822 & $4.3735\times 10^{-5}$& $8.7471\times 10^{-6} $ & $2.8654\times 10^{4}$& $1.4327\times 10^{5}$ \\ \cline{2-9}
\raisebox{2.3ex}[0pt]{5}
&4 & 60.8046 & 12.1611 & 0.8203 & $4.1115\times 10^{-3}$& $8.2234\times 10^{-4}$&$2.43\times 10^{2}$& $1.2166\times 10^{3}$ \\ \cline{2-9}
&5 & 63.6192 & 12.7238 & 0.3290 & $7.8591\times 10^{-3}$& $1.5718\times 10^{-3}$&$1.2724\times 10^{2}$& $6.3227\times 10^{2}$ \\ \cline{2-9}
\hline
\end{tabular}\\
\caption{The influence of the scale parameter $k$ on the mass spectrum $m_n$, the relative value $m_n/k$, the relative probability $P$, the width $\Gamma$, the relative width $\Gamma/k$, the lifetime $\tau$, and the relative lifetime $\tau*k$ of the KK resonances. The combination parameter $\gamma$ is set as $\gamma=25$.}
\label{Table1}
\end{table*}
\section{Conclusion}
We generalized the geometrical coupling mechanism in order to localize a five-dimensional vector field on RS-like thick branes. The key feature of the mechanism is to introduce two mass terms for the vector field, which are proportional to the five-dimensional Ricci scalar and the Ricci tensor, respectively. We decomposed the vector field $A_{M}$ into three parts: the vector part $\hat{A}_{\mu}$ and the scalar parts $\phi$ and $A_{5}$. With the transverse condition $\partial_\mu\hat{A}^\mu=0$, we got a decoupled action of $\hat{A}_{\mu}$. We found that when the two parameters $\alpha$ and $\beta$ in the action (\ref{vectorAction}) satisfy the relation $\beta=-1-8\alpha\pm\sqrt{1+12\alpha}$, the effective potential $V_v(z)$ of the vector KK modes can be expressed as $V_v(z)=\gamma^2(\partial_{z}A)^2+\gamma\partial^2_{z}A$ with $\gamma=\frac{3}{2}\pm\sqrt{1+12\alpha}$, and the tachyonic KK modes of $\hat{A}_{\mu}$ can be excluded. For $\gamma>1/2$, the zero mode of $\hat{A}_\mu$ can be localized on the brane.
Then, we investigated the resonances of the vector field by using the relative probability method and considered the possibility of these resonances being one of the candidates for dark matter. We analyzed the influence of the parameters $k$ and $\gamma$ on the resonant behavior. We found that massive resonant KK modes could exist only for $\gamma>3$. Both parameters affect the height of the potential and hence the vector resonances, while the number of resonant states increases only with the parameter $\gamma$. We also considered the scaled lifetime $\bar{\tau}$ and the scaled mass $\bar{m}_{1}$ of the first resonant state. We found that both the scaled mass $\bar{m}_{1}$ and the scaled lifetime $\text{log}(\bar{\tau})$ can be approximately fitted by linear functions of $\gamma$. In order to view the first resonant vector KK state as dark matter, its lifetime should be at least as long as the age of our universe. This introduces some constraints on the parameters $k$ and $\gamma$ as well as on the mass of the first resonance, i.e., $k\gtrsim{10^{-17}}\,\text{eV}$, $\gamma >57$, and $m_{1}\gtrsim {10^{-15}}\,\text{eV}$.
Note: when we finished this work, we found another work \cite{2001.01267} that also considered the same localization mechanism (\ref{vectorAction}) for the vector field.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China (Grants No.~11875151, No.~11705070, and No.~11522541), and the Fundamental Research Funds for the Central Universities (Grants No.~lzujbky-2018-k11, and No.~lzujbky-2019-it21).
\section{Introduction}
Deep learning has advanced various applications in a wide range of domains \cite{devlin2018bert}\cite{he2016deep}\cite{cully2015robots}.
Deep Neural Networks (DNNs) have significantly grown in size in recent years, in pursuit of better model accuracy.
Training of large models over large datasets
has promoted the rapid development of distributed DNN training frameworks (e.g., TensorFlow~\cite{abadi2016tensorflow}, PyTorch~\cite{adam2019pytorch}).
A number of parallel-training paradigms have been adopted in practice.
Data parallelism~\cite{li2014scaling} partitions the training dataset among workers.
Each worker holds a
copy of the DNN, computes parameter updates using the local dataset, and synchronizes parameter updates with others periodically.
AllReduce operation~\cite{sergeev2018horovod} is a common approach for parameter synchronization among workers.
To handle large models which cannot be fit entirely into a single device's memory, model parallelism~\cite{shoeybi2019megatron} partitions a DNN model and places model partitions on different devices. In each training iteration, a mini-batch is processed by model partitions on the devices one after another, through forward propagation followed by backward propagation.
Such vanilla model parallelism suffers from low device utilization, as only one device is active at any given time while a mini-batch is trained
across the devices hosting different model partitions.
Pipeline parallelism~\cite{harlap2016addressing} has been proposed
to maximize device utilization during training.
Similar to model parallelism, it partitions a DNN into stages and places stages over multiple devices;
it further partitions each mini-batch of training data into equal-sized microbatches, and
allows different devices to process different microbatches at the same time (\textit{i.e.}, microbatch pipelining).
Most works on pipeline parallelism~\cite{harlap2018pipedream}\cite{geng2019elasticpipe}\cite{Park2020hetpipe}\cite{pmlr-v139-narayanan21a} adopt asynchronous pipelining, by injecting microbatches into the training pipeline one by one and updating model parameters with gradients computed with a microbatch, whenever its backward propagation is done.
Asynchronous pipeline parallelism maximizes GPU utilization by fully saturating the pipeline.
However, the processing of different microbatches overlaps, and each microbatch updates the model using gradients computed on outdated parameters learned from different earlier microbatches, which
may slow down or even prevent training convergence, and render a model whose accuracy differs from that trained without pipelining~\cite{ho2013more}.
To ensure model convergence and accuracy, synchronous pipeline parallelism has been advocated by a few recent studies~\cite{huang2019gpipe}\cite{fan2021dapple}. It enforces a synchronization barrier between training iterations, to aggregate gradients computed with all microbatches before applying them for model update. Such a synchronization barrier flushes the pipeline and introduces waiting time (for training completion of all microbatches) into each training iteration, leading to lower device utilization as compared to asynchronous pipeline training. Optimal planning of synchronous pipeline training is needed to
improve device utilization and minimize per-iteration training time, to achieve similar training time as asynchronous pipelining while providing convergence and accuracy guarantees. Pipeline planning includes DNN model partition, replication and device placement, as well as scheduling the order of microbatch processing across the devices within each training iteration.
Non-trivial challenges exist,
as follows:
\textit{First}, in a typical DNN model, layers are not uniform in terms of computation time, parameter size and activation size. Optimal model partition over devices is hence challenging.
\textit{Second}, previous pipeline designs have been restricted to homogeneous inter-GPU connectivity (or homogeneous in each level of a hierarchical topology)~\cite{huang2019gpipe}\cite{harlap2018pipedream}. GPU inter-connectivity is often more complicated in a practical machine learning (ML) cluster,
including PCI-e or NVLink within a physical server~\cite{dgx1}, RDMA or TCP network between servers~\cite{wang2013gpu}.
We will show that heterogeneous GPU inter-connectivity leads to an exponential number of solutions for DNN model partition and device mapping (Sec.~\ref{sec::part}), adding to the difficulty of finding efficient, near-optimal solutions.
\textit{Third}, deciding the execution order of all microbatches over all devices,
respecting inter-stage dependencies and minimizing per-iteration training time,
falls in the category of job shop problems.
The job shop problem is NP-hard~\cite{goldberg2001better} even with only two machines (\textit{i.e.}, GPUs in our case).
Tackling the challenges, we design near-optimal algorithms
that efficiently partition a given DNN model, replicate and distribute the partitions over available GPUs with arbitrary inter-GPU connectivity, and schedule microbatch processing over the stages to minimize per-iteration training time.
Our main techniques and contributions are summarized as follows:
$\triangleright$ Assuming model partition and device mapping are given, we
design an efficient list ordering method to decide the processing order of microbatches on different GPUs, and then a scheduling algorithm that
minimizes idle time of devices based on the order. With thorough theoretical analysis, we identify an upper bound of per-iteration training time, decided by two key factors: the number of stages that the DNN is partitioned into, and the maximum time to process a microbatch on a single stage or an inter-stage communication channel.
$\triangleright$ We are hence inspired to design a pipeline partition and device mapping algorithm to minimize the
maximum per-stage/channel execution time, given the number of stages to partition the model into.
A recursive min-cut method is designed to identify a device order that maximizes inter-GPU bandwidth utilization.
Based on the device order, we use dynamic programming to derive the optimal partition, replication and mapping
solution.
$\triangleright$
Our complete synchronous pipeline planning algorithm, {\em SPP}, iteratively invokes the pipeline partition/mapping algorithm and the execution order scheduler to identify the best number of partitioned stages and the set of near-optimal pipeline execution strategies accordingly.
We rigorously analyze {\em SPP} and prove a worst-case performance guarantee.
$\triangleright$ We conduct extensive testbed experiments and trace-driven simulation, carefully comparing {\em SPP} with state-of-the-art pipeline-training paradigms, including
GPipe~\cite{huang2019gpipe}, PipeDream~\cite{harlap2018pipedream} and HetPipe~\cite{Park2020hetpipe}.
Experimental results show that {\em SPP} accelerates training by up to 147\% over GPipe, 157\% over PipeDream and 80\% over HetPipe in terms of per-iteration training time,
and achieves the target accuracy in the most expedited manner as compared to baselines.
We observe that {\em SPP} can strike a balance between the number of stages and the maximum per-stage execution/communication time in DNN partition, and maximally overlap communication and computation with its pipeline execution scheduling.
\section{System Model}
\label{sec::sys}
\subsection{DNN Model and Device Connectivity}
We consider a DNN model, $\mathcal{D}$, consisting of $L$ layers, \textit{e.g.}, attention, convolutional and fully-connected layers. In each training iteration, a mini-batch is divided into $M$ equal-sized microbatches of size $Z$ each. Every microbatch goes through forward propagation (FP) over all $L$ layers, followed by backward propagation (BP) over the $L$ layers in the reverse order. We divide $\mathcal{D}$ into multiple {\em stages}, place the stages on different GPUs, and allow different GPUs to process different microbatches simultaneously in a pipelined manner. Following the end of BP of all microbatches,
a gradient aggregation operation aggregates gradients computed from all microbatches
and applies
them to update the model parameters.
As the time needed for gradient aggregation and application is much shorter than FP/BP time, we ignore it in our pipeline parallelism design.
$V$ homogeneous GPUs on multiple physical servers are available for training this DNN model.\footnote{Training a DNN using GPUs of the same type is the norm in today's production systems, based on our discussions with leading AI cloud operators.} We
consider a variety of GPU inter-connectivity,
including PCIe or NVLink
{(providing direct GPU-GPU communication channel)} between GPUs
in the same physical server (\textit{e.g.}, in NVIDIA DGX-1~\cite{dgx1}), TCP or RDMA connections across GPUs in different servers~\cite{wang2013gpu},
with various bandwidth levels.
Graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$ represents the multi-GPU system for training $\mathcal{D}$,
where $\mathcal{V}$ includes the $V$ GPUs
and $\mathcal{E}$ contains all inter-connections between the GPUs. \revise{Each edge $(v, v')$ in $\mathcal{E}$ is associated with a weight, $b_{v, v'}$, representing the available bandwidth between GPU $v$ and GPU $v'$.} Let $b_{min}$ and $b_{max}$ be the minimum and maximum bandwidth among all edges in $\mathcal{E}$, respectively.
The forward (backward) computation time of a microbatch over layer $l$ of DNN $\mathcal{D}$ on a given GPU is $p^f_l$ ($p^b_l$). Let $\alpha_l$ be the size of parameters/gradients of layer $l$, which can be profiled through one trial run of the whole model using several training iterations.
$d^f_{l,l+1}$ denotes the size of activations passed from layer $l$ to layer $l+1$ during FP, and $d^b_{l+1,l}$ is the size of gradients transferred from layer $l+1$ to layer $l$ during BP.
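As a concrete illustration of this notation, the per-layer profile can be kept in a small record; the Python sketch below (field names are ours, purely illustrative) mirrors the quantities defined above and is reused in later sketches.
{\small
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class LayerProfile:
    p_f: float    # FP time of one microbatch on layer l (p^f_l)
    p_b: float    # BP time of one microbatch on layer l (p^b_l)
    alpha: float  # parameter/gradient size of layer l (alpha_l)
    d_f: float    # activation size sent to layer l+1 (d^f_{l,l+1})
    d_b: float    # gradient size from layer l+1 (d^b_{l+1,l})

# profile[l] for l = 1..L, filled from a few profiled iterations
profile = {1: LayerProfile(3.1e-3, 6.2e-3, 2.4e6, 8.2e6, 8.2e6)}
\end{verbatim}}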
\begin{comment}
Each GPU has a device memory of size $H$. There are 3 main sources of memory consumption on each GPU during DNN training~\cite{sohoni2019low}:
(i) Memory to store model parameters.
(ii)
Memory accounting for gradient caching and optimizer.
This memory consumption can be $E$ times that of case (i),
where $E$ is a constant related to the optimizer(s) in use (\textit{e.g.}, $E = 2$ for SGD training with momentum~\cite{sohoni2019low}, and $E = 3$ with ADAM optimizer~\cite{rajbhandari2019zero}).
(iii)
Memory to store activations of operations in each layer produced through forward computation, for reuse during backward computation. The activation memory consumption of layer $l$, denoted by $a_l(z)$, depends on input data size $z$, where $z$ equals the size $Z$ of a microbatch divided by the number of replicas of the layer (onto multiple GPUs, if any).
For example, if we place layers $l_1$ to $l_2$ on a GPU with input batch size $z$, the memory consumption $H_{use}$ can be estimated as:
{\small
\begin{equation}
\label{eqn_mem}
H_{use} = \sum\limits_{l_1\leq l\leq l_2}((1 + E)\alpha_l + a_l(z))
\end{equation}}
\end{comment}
\begin{figure*}[!th]
\centering
\begin{subfigure}{0.7\columnwidth}
\centerline{\includegraphics[width=0.9\columnwidth]{fig/pipeline_partition}}
\caption{Stage partition, replication and device mapping}
\end{subfigure}
\begin{subfigure}{1.0\columnwidth}
\centerline{\includegraphics[width=1\columnwidth]{fig/pipeline_schedule}}
\caption{Execution schedule of 3 microbatches}
\end{subfigure}
\caption{A pipeline parallelism design example}
\vspace{-5mm}
\label{fig_pipeline_planner}
\end{figure*}
\setlength{\textfloatsep}{0pt}
\vspace{-2mm}
\subsection{Synchronous Pipeline with Stage Replication}
We target synchronous pipeline design that minimizes per-iteration training time,
including two subproblems:
\subsubsection{\bf Pipeline Partition, Replication and Device Mapping}
\label{sec:partmapproblem}
We decide the stage set $\mathcal{S} = \{s_1, s_2, \ldots, s_{|\mathcal{S}|}\}$ with $|\mathcal{S}| \leq V$ to partition model $\mathcal{D}$
into, and a device mapping function $\mathcal{F}: \mathcal{S} \rightarrow \mathds{P}(\mathcal{V})$, where $\mathds{P}(\mathcal{V})$
includes all subsets of device set $\mathcal{V}$.
We consider classical \textit{interval partition},
that each
stage consists of a number of consecutive layers:
for stage $s \in \mathcal{S}$, if layer $l_{start}$ and layer $l_{end}$ belong to $s$, then all layers $l$, with $l_{start} \leq l \leq l_{end}$, belong to $s$. Without loss of generality, we assume a sequential dependency through $s_1, s_2, \ldots, s_{|\mathcal{S}|}$, \textit{i.e.}, the last layer $l^n_{end}$ in stage $s_{n}$ is the predecessor of the first layer $l^{n+1}_{start}$ in stage $s_{n+1}$ in the DNN model.
$\mathcal{F}$ maps each stage $s \in \mathcal{S}$ to one or multiple GPUs, ensuring that each GPU hosts exactly one stage or one replica of a stage.
In our design,
we allow a stage to be replicated and executed over multiple GPUs in a data-parallel fashion. Suppose stage $s$ is replicated over $k$ GPUs $\{v_1, v_2, \ldots, v_k\}$.
Processing of a microbatch by stage $s$ is distributed over the $k$ GPUs (by evenly dividing input data among these GPUs), and
{we assume that the forward (backward) computation time of each layer $l$ in stage $s$ on
each replica device is now {\small $\frac{p^f_l}{k} (\frac{p^b_l}{k}$)}.\footnote{We note that non-linear change of training time may happen when a layer is replicated, i.e., each layer replica's execution time is not exactly $\frac{1}{k}$ of the layer's processing time without input data partition. Our algorithm can be readily extended to the non-linear case by modeling the computation time of each layer given different input data sizes.} }
Fig.~\ref{fig_pipeline_planner}(a) gives an example, where a 6-layer DNN model is trained using 4 GPUs.
The model is partitioned into three stages with stage 2 replicated over GPU 2 and GPU 3.
Such stage replication may improve GPU utilization and further balance stage processing time, together with stage partition.
In Fig.~\ref{fig_pipeline_planner}, supposing the size of layers in stage 2 is much larger than the other stages, replicating stage 2 on two GPUs allows forward/backward computation time of the stage to be similar to others, as shown in Fig.~\ref{fig_pipeline_planner}(b).
{\small
\begin{table}[!t]
\small
\caption{NOTATION}
\begin{center}
\label{table_notation}
\begin{tabular}{|l|l|}
\hline
$\mathcal{D}$ & the DNN model\\
\hline
$L$ & \# of layers\\
\hline
$M$ & \# of microbatches in one iteration\\
\hline
$V$ & \# of GPUs\\
\hline
$\mathcal{G}(\mathcal{V}, \mathcal{E})$ & GPU inter-connection graph \\
& ($\mathcal{V}$: GPUs; $\mathcal{E}$: inter-GPU connections)\\
\hline
$b_{v, v'}$ & bandwidth between GPU $v$ and GPU $v'$\\
\hline
$b_{min} (b_{max})$ & minimum (maximum) bandwidth in $\mathcal{E}$\\
\hline
$p^f_l (p^b_l)$ & FP (BP) computation time of layer $l$ per \\& microbatch \\
\hline
$\alpha_l$ & size of parameters (gradients) of layer $l$\\
\hline
$d^f_{l,l+1}$ ($d^b_{l+1,l}$) & size of activations (gradients) from \\& layer $l$ to $l+1$ ($l+1$ to $l$) during FP (BP)\\
\hline
$\mathcal{S}$ & set of all stages that $\mathcal{D}$ is partitioned into\\
\hline
$\mathcal{S}_{repl}$ & set of replicated stages\\
\hline
$\mathcal{F}: \mathcal{S} \rightarrow \mathds{P}(\mathcal{V})$ & device mapping function from stages to sets\\& of GPUs\\
\hline
{\small$c^f_{s_n, s_{n+1}}$} & communication time between stages $s_n$ and\\
{\small $(c^b_{s_{n+1}, s_n})$}& $s_{n+1}$ during FP (BP)\\
\hline
$e^f_{m, s_n} (e^b_{m, s_n})$ & start time of stage $s_n$'s processing of \\& microbatch $m$ during FP (BP)\\
\hline
$A_s$ & time taken by AllReduce operation of stage $s$\\
\hline
$e^A_{s}$ & start time of AllReduce operation of stage $s$\\
\hline
\end{tabular}
\end{center}
\vspace{-2mm}
\end{table}
}
After completion of backward computation of microbatches on all $k$ replicas of stage $s$, a ring AllReduce operation~\cite{sergeev2018horovod} synchronizes gradients of stage $s$ across the $k$ GPUs.
{Specifically, the $k$ GPUs are organized into a logical ring topology,
and each GPU exchanges gradients/parameters with its neighbors in the ring through inter-GPU connections.}
The size of communication data (gradients and parameters) involved in the AllReduce operation is $\frac{2(k-1)}{k}\sum_{l\in s}\alpha_l$ per GPU~\cite{allreduce_time}. The time taken by the AllReduce operation, denoted by $A_s$, is further
decided by the minimum connection bandwidth among the $k$ GPUs:
\vspace{-4mm}
{\small
\begin{eqnarray}
\label{eqn_allreduce}
A_s = \frac{2(k-1)\sum\limits_{l\in s}\alpha_l}{k\min\limits_{v,v'\in \{v_1, v_2, \ldots, v_k\}}b_{v,v'}}
\end{eqnarray}}
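Eqn.~(\ref{eqn_allreduce}) transcribes directly into Python; in this sketch, \texttt{profile} is the hypothetical per-layer record introduced above and \texttt{bw} a bandwidth lookup keyed by GPU pairs.
{\small
\begin{verbatim}
def allreduce_time(stage_layers, gpus, bw):
    """Eqn. (1): ring-AllReduce time of a stage replicated on gpus;
    bw[(v, w)] is the available bandwidth between GPUs v and w."""
    k = len(gpus)
    if k == 1:
        return 0.0                   # no replication, no AllReduce
    total_params = sum(profile[l].alpha for l in stage_layers)
    min_bw = min(bw[(v, w)] for v in gpus for w in gpus if v != w)
    return 2 * (k - 1) * total_params / (k * min_bw)
\end{verbatim}}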
\vspace{-3mm}
\subsubsection{\bf Execution Scheduling}
\label{sec:scheduleproblem}
We also decide the execution order of processing each microbatch on each stage, as well as running the AllReduce operations for replicated stages.
Let $e^f_{m, s_n}$ ($e^b_{m, s_n}$) be the start time of forward (backward) computation of microbatch $m$
on stage $s_n$.
Execution schedule should respect forward-backward dependency and stage dependency. Each GPU can only
process one microbatch
at a time. Let $c^f_{s_n, s_{n+1}}$ and $c^b_{s_{n+1}, s_{n}}$ represent the inter-stage communication time between stage $s_n$ and stage $s_{n+1}$ during FP and BP, respectively. We ignore the time for data passing between layers residing in the same GPU. We formulate the dependencies as follows.
\begin{itemize}
\item (\textit{Forward-backward dependency}):
\end{itemize}
\vspace{-5mm}
{\small
\begin{eqnarray*}
e^f_{m, s_{|\mathcal{S}|}} + \frac{\sum\limits_{l \in s_{|\mathcal{S}|}}p^f_l}{|\mathcal{F}(s_{|\mathcal{S}|})|} \leq e^b_{m, s_{|\mathcal{S}|}}, \forall m\in \{1, \ldots, M\}
\end{eqnarray*}
}
\vspace{-5mm}
\begin{itemize}
\item (\textit{Stage dependency}):
\end{itemize}
\vspace{-5mm}
{\small
\begin{eqnarray*}
e^f_{m, s_n} + \frac{\sum\limits_{l \in s_{n}}p^f_l}{|\mathcal{F}(s_{n})|} + c^f_{s_n, s_{n+1}} \leq e^f_{m, s_{n+1}}, \\
\forall m\in \{1, \ldots, M\}, n\in \{1, \ldots, |\mathcal{S}|-1\}\\
e^b_{m, s_n} + \frac{\sum\limits_{l \in s_{n}}p^b_l}{|\mathcal{F}(s_{n})|} + c^b_{s_n, s_{n-1}} \leq e^b_{m, s_{n-1}}, \\
\forall m\in \{1, \ldots, M\}, n\in \{2, \ldots, |\mathcal{S}|\}\\
e^f_{1, s_1}=0
\end{eqnarray*}
}
\vspace{-6mm}
To compute inter-stage communication time, when $s_n$ and/or $s_{n+1}$ are replicated over multiple GPUs, we evenly distribute the data being transmitted across inter-stage links. For example in Fig.~\ref{fig_pipeline_comm}, $s_n$ is replicated onto 2 GPUs and $s_{n+1}$ onto 3 GPUs. \begin{comment}Activations from $s_n$'s replica on GPU1 are divided into 3 equal shares to be sent to each of GPUs 3, 4, 5 concurrently; gradients from $s_{n+1}$'s replica on GPU3 are divided into 2 shares to GPU1 and GPU2, respectively. \end{comment}
During FP, 1/3 of the activations produced by each of the two GPUs hosting stage $n$ is sent to each of the three GPUs hosting stage $n+1$. During BP, each GPU running $s_{n+1}$ splits its gradients (computed with a microbatch) into two sets corresponding to two smaller batches, and sends the two sets to the two GPUs of $s_n$, respectively; each replica of $s_n$ sums up the gradients received from replicas of $s_{n+1}$.
The inter-stage communication time is decided by the minimum link bandwidth between GPUs in $\mathcal{F}(s_n)$ and in $\mathcal{F}(s_{n+1})$:
(note $d^f_{l^n_{end},l^{n+1}_{start}}$ and $d^b_{l^{n+1}_{start}, l^n_{end}}$ are data size produced by an entire microbatch, \textit{i.e.}, sum of data produced by all replicas of a stage)
\vspace{-3mm}
\begin{figure}[!t]
\centering
\includegraphics[width=0.35\textwidth]{fig/pipeline_comm}
\vspace{-2mm}
\caption{Inter-stage communication: an example}
\label{fig_pipeline_comm}
\end{figure}
\setlength{\textfloatsep}{0pt}
{\small
\begin{eqnarray*}
c^f_{s_n, s_{n+1}} = \frac{d^f_{l^n_{end}, l^{n+1}_{start}}}{|\mathcal{F}(s_{n})||\mathcal{F}(s_{n+1})|\min\limits_{v\in\mathcal{F}(s_n), v'\in\mathcal{F}(s_{n+1})}b_{v,v'}}
\end{eqnarray*}
\vspace{-6mm}
\begin{eqnarray*}
c^b_{s_{n+1}, s_{n}} = \frac{d^b_{l^{n+1}_{start}, l^n_{end}}}{|\mathcal{F}(s_{n})||\mathcal{F}(s_{n+1})|\min\limits_{v\in\mathcal{F}(s_n), v'\in\mathcal{F}(s_{n+1})}b_{v',v}}
\end{eqnarray*}
}
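The same transcription applies to the two inter-stage communication times (again a sketch over the hypothetical \texttt{profile} and \texttt{bw} structures):
{\small
\begin{verbatim}
def interstage_time(l_end, gpus_n, gpus_n1, bw):
    """FP/BP communication time between stages s_n and s_{n+1};
    l_end is the last layer of s_n, so the data crossing the cut
    is d^f_{l_end,l_end+1} (FP) and d^b_{l_end+1,l_end} (BP)."""
    min_bw = min(bw[(v, w)] for v in gpus_n for w in gpus_n1)
    denom = len(gpus_n) * len(gpus_n1) * min_bw
    return profile[l_end].d_f / denom, profile[l_end].d_b / denom
\end{verbatim}}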
\vspace{-4mm}
Let $\mathcal{S}_{repl}$ be the set of replicated stages, and $e^A_{s}$ denote the start time of AllReduce operation of stage $s\in \mathcal{S}_{repl}$. We further have:
\begin{itemize}
\item (\textit{AllReduce operation dependency}):
\end{itemize}
\vspace{-2mm}
{\small
\begin{displaymath}
e^b_{m, s} + \frac{\sum\limits_{l \in s}p^b_l}{|\mathcal{F}(s)|} \leq e^A_{s}, \forall m\in \{1,\ldots, M\}, s\in \mathcal{S}_{repl}
\end{displaymath}
}
\vspace{-3mm}
We aim at minimizing the makespan of training all $M$ microbatches, \textit{i.e.}, the per-iteration training time of the DNN:
\vspace{-4mm}
{\small
\begin{equation}
\mbox{minimize } \max\{\max_{m\in\{1, 2, \ldots, M\}} (e^b_{m, s_1} + \frac{\sum\limits_{l \in s_{1}}p^b_l}{|\mathcal{F}({s_1})|}), \max_{s\in\mathcal{S}_{repl}}(e^A_{s} + A_s)\}
\label{eqn_makespan}
\end{equation}}
\vspace{-4mm}
An AllReduce operation and inter-stage communication do not share the same inter-GPU connections: the former uses links between GPUs hosting replicas of the same stage, while the latter is between GPUs hosting different stages (also recall that one GPU can only host (a replica of) one stage). The AllReduce operation of a replicated stage $s_n$ can take place at the same time as inter-stage communication and backward computation of stages $s_{n-1}, s_{n-2}, \ldots$, as well as AllReduce operations of other replicated stages. Therefore, the completion time of a training iteration in (\ref{eqn_makespan}) is decided by the latest among backward computation completion time of all microbatches over stage $1$ and end time of AllReduce operations of all replicated stages.
Note that our schedule may not process microbatches in the same sequence at each stage; instead, we enforce a synchronization barrier in each training iteration, as represented by the inner $\max$ over all microbatches in (\ref{eqn_makespan}).
An example execution schedule is given in Fig.~\ref{fig_pipeline_planner}(b).
As stage 2 is replicated over two GPUs, an AllReduce operation happens when backward computation of all three microbatches over both GPUs has been done, ensuring the model parameters on GPU 2 and GPU 3 are updated the same.
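Read literally, the objective in (\ref{eqn_makespan}) is simple to evaluate for a candidate schedule. In the Python sketch below (names are ours), \texttt{e\_b} holds the stage-$1$ BP start times of all microbatches, and \texttt{e\_A} and \texttt{A} the AllReduce start times and durations of the replicated stages.
{\small
\begin{verbatim}
def iteration_makespan(e_b, bp_time_s1, e_A, A):
    """Eqn. (3): latest of stage-1 BP completion over microbatches
    and AllReduce completion over replicated stages."""
    bp_done = max(e + bp_time_s1 for e in e_b)
    ar_done = max((e_A[s] + A[s] for s in e_A), default=0.0)
    return max(bp_done, ar_done)
\end{verbatim}}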
\subsection{DNN Partition and Device Mapping Algorithm}
\label{sec::part}
\vspace{-1mm}
Lemma~\ref{lemma_ss} shows that the per-iteration training time
is positively related to the number of stages $|\mathcal{S}|$ that DNN $\mathcal{D}$ is partitioned into, the maximum time $\mathcal{C}$ to process a microbatch on a single stage or communication channel, and the maximum AllReduce operation time among replicated stages.
The number of stages, $|\mathcal{S}|$, is at most the number of GPUs, $V$. We next design a model partition and device mapping algorithm aiming at minimizing the maximum time, $\mathcal{W}(|\mathcal{S}|)$, to process all microbatches on a single stage (including AllReduce operations) or communication channel, given the number of stages $|\mathcal{S}|$.
The purpose is to minimize the upper bound of per-iteration training time in Lemma \ref{lemma_ss}, as $\mathcal{W}(|\mathcal{S}|)$ is related to both $\mathcal{C}$ and $\max_{s\in\mathcal{S}_{repl}}\{A_s\}$:
\vspace{-4mm}
{\small
\begin{eqnarray}
\mathcal{W}(|\mathcal{S}|) = \max\{&\max_{s\notin \mathcal{S}_{repl}}\{M\sum\limits_{l\in s}(p^f_l + p^b_l)\},\nonumber\\
&\max_{s\in \mathcal{S}_{repl}}\{M\frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{|\mathcal{F}(s)|} + A_s\},\nonumber\\
&\max_{n\in \{1,\ldots, |\mathcal{S}|-1\}}
\{M(c^f_{s_n, s_{n+1}}+c^b_{s_{n+1}, s_n})\}\}\nonumber
\end{eqnarray}}
\vspace{-6mm}
Our DNN partition and device mapping problem,
without considering stage replication, can be reduced to the NP-complete problem of pipeline partition over a heterogeneous communication platform~\cite{benoit2008mapping}, which partitions a workflow among a cluster of devices with heterogeneous connectivity to maximize the pipeline throughput. We design an efficient \textit{balanced pipeline partition and device mapping algorithm} (BPPM)
to derive a near-optimal solution, which includes two components: 1) a device ordering module that calculates a linear ordering of all GPUs;
and 2) an
algorithm that partitions the DNN model onto GPUs respecting the device order.
\textit{\textbf{1) Recursive device ordering (RDO)}}:
We decide a linear ordering of GPUs in $\mathcal{V}$:
$\{v_1, v_2, \ldots, v_{V}\}$.
We will map stages (stage replicas) to devices according to this
ordering and stage dependencies, \textit{i.e.}, (replicas of) the first stage mapped to the device(s) at the head of the device ordering, and then the next stage to latter device(s), etc.
We target a linear ordering with maximal bandwidth between consecutive GPUs,
such that the bandwidth between stages and between replicas of the same stage is maximized.
A recursive min-cut algorithm is designed to find the ordering
in polynomial time, as given in Alg.~\ref{alg_do}. $rank_l$ ($rank_h$) represents the lowest (highest) rank of devices in the current subgraph in the ordering, initialized to $1$ and $V$, respectively (in the complete Alg.~\ref{alg_spp} that invokes RDO). In each recursive step, we find a min-cut within the current input graph (leveraging an efficient min-cut algorithm in~\cite{stoer1997simple}),
partition the graph into two subgraphs accordingly, and call RDO again to order devices in the two subgraphs, respectively. When the input graph contains only one GPU, we assign $rank_l$ (which equals $rank_h$) to it. We order GPUs according to their computed ranks in ascending order.
{\small
\begin{algorithm}[!t]
\caption{Recursive Device Ordering
- \textbf{\textit{RDO}}}
\label{alg_do}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}[1]
\REQUIRE $G(\mathcal{V}, \mathcal{E}), rank_l, rank_h$
\IF {$|\mathcal{V}| == 1$}
\STATE Set $rank(v) \leftarrow rank_l$
\ELSE
\STATE $G^1(\mathcal{V}^1, \mathcal{E}^1), G^2(\mathcal{V}^2, \mathcal{E}^2) = \text{min-cut}(G)$
\STATE \textbf{\textit{RDO}}$(G^1, rank_l, rank_l + |\mathcal{V}^1| - 1)$
\STATE \textbf{\textit{RDO}}$(G^2, rank_l + |\mathcal{V}^1|, rank_h)$
\ENDIF
\end{algorithmic}
\end{algorithm}
}
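Alg.~\ref{alg_do} admits a compact runnable rendering. The Python sketch below is our own illustration: it leans on the Stoer--Wagner global min-cut implementation shipped with \texttt{networkx}, with bandwidths stored as \texttt{'bandwidth'} edge attributes.
{\small
\begin{verbatim}
import networkx as nx

def rdo(G, rank_lo, rank):
    """Recursive Device Ordering (Alg. 1): rank GPUs so that every
    low-bandwidth min-cut separates consecutive blocks of the order."""
    nodes = list(G.nodes)
    if len(nodes) == 1:
        rank[nodes[0]] = rank_lo
        return
    # global min-cut by total bandwidth (Stoer-Wagner)
    _, (part1, part2) = nx.stoer_wagner(G, weight='bandwidth')
    rdo(G.subgraph(part1), rank_lo, rank)
    rdo(G.subgraph(part2), rank_lo + len(part1), rank)

# usage: rank = {}; rdo(G, 1, rank)
# order = sorted(G.nodes, key=rank.get)   # v_1, ..., v_V
\end{verbatim}}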
\begin{comment}
By dividing the device graphs with min-cuts,
link(s) in each min-cut will be used at most once for either communication between two consecutive stages mapped to GPUs in the two subgraphs respectively, or an AllReduce operation whose stage replicas are mapped into both subgraphs. On the other hand, at least one link in each min-cut needs to be used, as otherwise the training graph will not be connected. Hence, links with small bandwidth are minimally exploited while large-bandwidth links are maximally used.
\end{comment}
The link(s) within each min-cut will be used either for inter-stage communication between the two consecutive stages mapped onto GPUs in the two subgraphs respectively, or for the AllReduce operation of one stage whose replicas are mapped into both subgraphs. Since all GPUs will be used in pipeline training,
at least one link in each min-cut needs to be used for communication
(as otherwise the training topology will not be a connected graph).
By dividing the device graphs in this way, link(s) in each min-cut will be used only between two stages or among replicas of one replicated stage, but not between two pairs of consecutive stages or replicas of two replicated stages.
Hence, links with small bandwidth are minimally exploited while large-bandwidth links are maximally used for inter-stage or AllReduce
communication, minimizing the maximum communication time on a single communication channel.
\begin{comment}
Fig.~\ref{fig_pipeline_rdo} gives an example of a device ordering. Alg.~\ref{alg_do} separates GPU 4 apart with Cut 1 and assigns rank 1 to GPU4, then separates GPU 2 from GPU 1 and 3 with Cut 2 and set rank 2 to GPU 2, and finally assign rank 3 and rank 4 to GPU 1 and GPU 3 with Cut 3. As a result, if we map a stage only to GPU 4, cut 1 ensures that the slow 10Gbps link can only be used once for inter-stage communication between stage on GPU 4 and any stage on GPU 1, 2, or 3; if a stage is replicated over GPU 4 and other GPUs, then the slow 10Gbps link will still be used only once for intra-stage AllReduce operation.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{fig/pipeline_rdo.pdf}
\caption{Recursive device ordering: an example}
\label{fig_pipeline_rdo}
\end{figure}
\end{comment}
\sloppy \textit{\textbf{2) Pipeline partition, replication and mapping (PRM)}}: Following the device ordering $\{v_1, v_2, \ldots, v_{V}\}$, we leverage dynamic programming to partition and map $\mathcal{D}$ onto the GPUs in a way that minimizes the maximum execution time $\mathcal{W}(|\mathcal{S}|)$ on a single stage or communication channel.
Let $W(l, \xi, r, i)$ denote the optimal (\textit{i.e.}, minimum) maximum execution time on a single GPU or communication channel, when we partition the first $l$ consecutive layers in $\mathcal{D}$ into $\xi$ stages with the last stage
replicated over $r$ GPU(s) ($r \ge 1$), and place the stages over GPUs $v_1$ to $v_i$.
We have $\mathcal{W}(|\mathcal{S}|) = \min_{1\le r\le V}W(L, |\mathcal{S}|, r, V)$. $W(l, \xi, r, i)$ can be recursively computed as follows:
\vspace{-5mm}
{\small
\begin{eqnarray*}
\label{eqn_dp}
&W(l, \xi, r, i) = \min\limits_{1\le l'\le l-1, 1\le r'\le i-r}\max\{W(l', \xi-1, r', i-r), \\
&M\frac{d^f_{l', l'+1} + d^b_{l'+1, l'}}{r'rb_{r'r}}, M\frac{\sum\limits_{o = l'+1}^{l}(p^f_o + p^b_o)}{r} + A_{l'+1\rightarrow l}(v_{i-r+1} \rightarrow v_{i})\}
\end{eqnarray*}}
\vspace{-6mm}
\noindent The first term inside $\max$ is the maximal execution time on a single GPU/communication channel by optimal partition of layers $1$ to $l'$ into $\xi-1$ stages (with the last stage replicated on $r'$ GPUs) and mapping them on GPUs $v_1$ to $v_{i-r}$. The second term computes the total communication time on the communication channel between layers $l'$ and $l'+1$, where
$b_{r'r} = \min_{v'\in \{v_{i-r-r'+1}, \ldots, v_{i-r}\}, v\in \{v_{i-r+1}, \ldots, v_{i}\}}b_{v', v}$ is the minimal link bandwidth between $r'$ replicas of layer $l'$ and $r$ replicas of layer $l'+1$.
The third term is the training time on the last stage, including processing time of all microbatches over layers $l'+1$ to $l$ replicated on $r$ GPUs
and time taken by the corresponding AllReduce operation. Here $A_{l'+1\rightarrow l}(v_{i-r+1} \rightarrow v_{i})$ denotes the time for AllReduce operation of layers $l'+1$ to $l$ replicated over GPUs $v_{i-r+1}$ to $v_i$.
To compute $W(l, \xi, r, i)$, we solve the subproblem of optimally partitioning the first $l'$ layers into $\xi-1$ stages on GPUs $v_1$ to $v_{i-r}$, while replicating the stage containing layers $l'+1$ to $l$ over GPUs $v_{i-r+1}$ to $v_i$. We consider all possible choices of layer $l'$ and various replication strategies of the stage containing $l'$, and decide $W(l, \xi, r, i)$ as the minimal time computed among them.
The detailed dynamic programming PRM algorithm is given in Appendix~\ref{appendix_alg_dp}.
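For intuition, the recursion can be transcribed into a memoized Python sketch. All names here are our own shorthand: \texttt{p[l]} stands for $p^f_l+p^b_l$ (with \texttt{p[0]} a dummy), \texttt{d[l]} for $d^f_{l,l+1}+d^b_{l+1,l}$, \texttt{bw\_between} returns the minimum bandwidth between two rank ranges of the RDO order, and \texttt{allreduce} evaluates Eqn.~(\ref{eqn_allreduce}).
{\small
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def W(l, xi, r, i):
    """Best max per-stage/channel time: layers 1..l into xi stages
    on GPUs v_1..v_i, last stage replicated r times."""
    if xi == 1:
        if r != i:                  # a lone stage must occupy all i GPUs
            return float('inf')
        return M * sum(p[1:l + 1]) / r + allreduce(1, l, 1, i)
    best = float('inf')
    for lp in range(1, l):          # last layer of the first xi-1 stages
        for rp in range(1, i - r + 1):  # replicas of stage xi-1
            head = W(lp, xi - 1, rp, i - r)
            comm = M * d[lp] / (rp * r * bw_between(
                i - r - rp + 1, i - r, i - r + 1, i))
            tail = (M * sum(p[lp + 1:l + 1]) / r
                    + allreduce(lp + 1, l, i - r + 1, i))
            best = min(best, max(head, comm, tail))
    return best
\end{verbatim}}
Memoizing the argmin choices alongside the values allows the actual partition, replication counts and mapping to be traced back.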
The following lemma shows that the best stage partition and device mapping identified by our algorithms, as the given number of stages varies, achieves a maximum per-stage/communication-channel execution time close to the optimum.
\vspace{-2mm}
\begin{lemma}
\label{lemma_bmmp}
Let $\mathcal{W}_{PRM}=\min_{|\mathcal{S}| \in \{1,\ldots, V\}}\mathcal{W}(|\mathcal{S}|)$.
$\mathcal{W}_{PRM}$ achieved by RDO and PRM is no larger than $(1 + \Phi)$ times $\mathcal{W}^*$, the optimal (aka minimum) maximum execution time on a single stage or communication channel,
where $\Phi=\frac{\max\{p_{max}b_{max}, d_{max}\}}{\Gamma}(\frac{1}{b_{min}} - \frac{1}{b_{max}})$, {\small $d_{max}=\max_{1\le l\le L-1} (d^f_{l,l+1} + d^b_{l+1,l})$, $p_{max}=\max_{1\le l \le L} (p^f_{l} + p^b_{l})$}, and {\small $\Gamma = {\sum\limits_{1\le l \le L}(p^f_l + p^b_l)}/{V}$}.
\end{lemma}
\begin{proof}
{
Consider a new multi-GPU system graph ${\mathcal{G}_{max}}(\mathcal{V}, {\mathcal{E}_{max}})$, where the vertices in ${\mathcal{G}_{max}}$ are the same as in $\mathcal{G}$ while the bandwidth of every edge in ${\mathcal{E}_{max}}$ equals the maximum bandwidth $b_{max}$.
We consider the pipeline partition and mapping solution of $\mathcal{D}$ with ${\mathcal{G}}_{max}$ instead of the original $\mathcal{G}$, that minimizes the maximum total execution/communication time on a single stage or communication channel (regardless of the number of stages). We denote the solution as ${\mathcal{S}}_{max}$ and ${\mathcal{F}}_{max}$, and the optimal maximum total execution/communication time on a single stage or communication channel as ${\mathcal{W}}^*_{max}$. There are two cases of the solution:
\noindent$\triangleright$ \textbf{Case 1:} There is at least one stage replicated in the solution.
Without loss of generality, consider a stage $s$, comprising layers $l_1$ to $l_2$, that is replicated over $k$ GPUs. Based on Eqn.~(\ref{eqn_allreduce}), we derive the total execution time $\Psi_s$ of $s$ on the cluster of $k$ GPUs:
{\small
\begin{equation}
\label{eqn_proof_bmmp_1}
\Psi_s = M\frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{k} + \frac{2(k-1)\sum\limits_{l\in s}\alpha_l}{k b_{max}}
\end{equation}}
On the other hand, we can construct a partition and mapping solution of the layers in stage $s$ without replication, as follows. We first allocate the leading layers of $s$ onto the first GPU, up to the largest $x$ such that {\small$M\sum\limits_{l=l_1}^{l_{x-1}}(p^f_l + p^b_l) \leq M(\frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{k})$ while $M\sum\limits_{l=l_1}^{l_x}(p^f_l + p^b_l) > M(\frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{k})$}. Then we consider the layer allocation on the next GPU following the same procedure, and perform this allocation iteratively until all layers have been mapped to GPUs. This construction maps all layers $l_1$ to $l_2$ onto the $k$ GPUs: otherwise, if layers were left over, the total execution time of the allocated layers would exceed {\small$M{\sum\limits_{l\in s}(p^f_l + p^b_l)}$}, a contradiction.
In addition, our construction ensures that the maximum execution time on any GPU is no larger than $M(\frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{k} + p'_{max})$, where $p'_{max} = \max\limits_{l_1\leq l\leq l_2}(p^f_l + p^b_l)$.
We denote the maximum total execution/communication time on a single stage or communication channel achieved with the construction as $\hat{\Psi}_s$. We have
{\small
\begin{multline}
\label{eqn_proof_bmmp_2}
\Psi_s \leq \hat{\Psi}_s \\
= M\max\{(\frac{\sum\limits_{l = l_1}^{l_2}(p^f_l + p^b_l)}{k} + p'_{max}), \frac{\max_{l_1\leq l \leq {l_2+1}}\{d^f_{l-1,l} + d^b_{l,l-1}\}}{b_{max}}\}
\end{multline}}
Combining Eqn.~(\ref{eqn_proof_bmmp_1}) and Eqn.~(\ref{eqn_proof_bmmp_2}) and noting that $p'_{max} \leq p_{max}$, we derive that for any replicated stage $s$, the data volume transmitted in its AllReduce operation, $d_s^{AR}$, is:
{\small
\begin{align}
d_s^{AR} &= \frac{2(k-1)\sum\limits_{l\in s}\alpha_l}{k} \nonumber\\
&\leq M\max\{p'_{max}b_{max}, {\max_{l\in s}\{d^f_{l-1,l} + d^b_{l,l-1}\}} - \frac{\sum\limits_{l\in s}(p^f_l + p^b_l)}{k}b_{max}\}\nonumber\\
& \leq M\max\{p_{max}b_{max}, d_{max}\}\label{eqn_proof_bmmp_3}
\end{align}}
Now, let us substitute ${\mathcal{E}}_{max}$ in ${\mathcal{G}}_{max}$ by ${\mathcal{E}}$ while keeping the solution unchanged and respecting the device ordering $\{v_1, v_2, \ldots, v_V\}$. The maximum execution/communication time on a single stage or communication channel in this case is denoted as $\mathcal{{W}}'$. As we only change the bandwidths in $\mathcal{G}_{max}$, we have:
{\small
\begin{align}
\mathcal{{W}}' &\leq {\mathcal{W}}^*_{max} + \max\{Md_{max}, \max_{s\in \mathcal{S}_{repl}}\{d_s^{AR}\}\}(\frac{1}{b_{min}} - \frac{1}{b_{max}}) \nonumber\\
& \leq {\mathcal{W}}^* + M\max\{p_{max}b_{max}, d_{max}\}(\frac{1}{b_{min}} - \frac{1}{b_{max}})\label{eqn_proof_bmmp_4}
\end{align}}
where the first inequality is because we consider the maximal possible increment in execution time due to the decrement in bandwidth, and the second inequality is due to Eqn.~(\ref{eqn_proof_bmmp_3}) and ${\mathcal{W}}^*_{max} \leq \mathcal{W}^*$.
In addition, we have {\small$\mathcal{W}_{PRM} \leq \mathcal{{W}}'$}.
Noting that $M\Gamma$, which corresponds to evenly distributing all workload across the $V$ GPUs, is a lower bound on $\mathcal{W}^*$, we have:
{\small
\begin{align}
\mathcal{W}_{PRM} &\leq \mathcal{{W}}' \leq {\mathcal{W}}^* + M\max\{p_{max}b_{max}, d_{max}\}(\frac{1}{b_{min}} - \frac{1}{b_{max}})\nonumber\\
& = \mathcal{W}^* + M\frac{\max\{p_{max}b_{max}, d_{max}\}}{\mathcal{W}^*}(\frac{1}{b_{min}} - \frac{1}{b_{max}})\mathcal{W}^* \nonumber\\
&\leq (1 + \frac{\max\{p_{max}b_{max}, d_{max}\}}{\Gamma}(\frac{1}{b_{min}} - \frac{1}{b_{max}}))\mathcal{W}^* \label{eqn_proof_bmmp_5}
\end{align}}
\noindent$\triangleright$ \textbf{Case 2:} There is no stage replicated in the solution. Following similar steps as in case 1, we have:
{\small
\begin{align}
\mathcal{W}_{PRM} &\leq \mathcal{{W}}' \leq {\mathcal{W}}^*_{max} + Md_{max}(\frac{1}{b_{min}} - \frac{1}{b_{max}})\nonumber\\
&\leq (1 + \frac{d_{max}}{\Gamma}(\frac{1}{b_{min}} - \frac{1}{b_{max}}))\mathcal{W}^*\label{eqn_proof_bmmp_6}
\end{align}}
where the second inequality is because we only consider inter-stage communication in this case, and the last inequality is due to $M\Gamma \leq \mathcal{W}^*$.
We combine Eqn.~(\ref{eqn_proof_bmmp_5}) and Eqn.~(\ref{eqn_proof_bmmp_6}) to derive:
{\small
\begin{equation}
\mathcal{W}_{PRM} \leq (1 + \frac{\max\{p_{max}b_{max}, d_{max}\}}{\Gamma}(\frac{1}{b_{min}} - \frac{1}{b_{max}}))\mathcal{W}^*
\end{equation}
}
}
\end{proof}
\section{Conclusion}
\label{sec::conclusion}
This paper designs efficient algorithms for expediting synchronous pipeline training of DNNs over arbitrary inter-GPU connectivity. We partition a given DNN, replicate and distribute the partitions over available GPUs,
and design an efficient scheduler to order pipeline execution of microbatches over partitioned stages on different GPUs, minimizing the training time.
Our comparative experiments on two GPU testbeds prove that our design outperforms state-of-the-art approaches up to 157\%. Trace-driven simulations further
show our algorithms' superiority under various settings.
\subsection{Complete Synchronous Pipeline Planning Algorithm}
\label{sec::complete_alg}
\begin{algorithm}[!t]
\caption{Synchronous Pipeline Planning - \textbf{\textit{SPP}}}
\label{alg_spp}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}[1]
\REQUIRE $G(\mathcal{V}, \mathcal{E}), \mathcal{D}$
\ENSURE $\mathcal{\bar{S}}, \mathcal{\bar{F}}, \bar{\textbf{e}}, T_{SPP}$
\STATE \textbf{\textit{RDO}}($\mathcal{G}(\mathcal{V}, \mathcal{E}), 1, V$)
\STATE Obtain device ordering $\{v_1, \ldots, v_V\}$ according to $rank(v), \forall v \in \mathcal{V}$
\STATE $T_{SPP} \leftarrow \textbf{INF} $
\FOR {$\xi \in \{1, 2, \ldots, V\}$}
\FOR {$r \in \{1, 2, \ldots, V\}$}
\STATE $W(L, \xi, r, V), \mathcal{S}_{r}, \mathcal{F}_{r} \leftarrow$ \textbf{\textit{PRM}}($G(\mathcal{V}, \mathcal{E}), \{v_1, \ldots, v_V\}, \mathcal{D}, L, V, \xi, r$)
\ENDFOR
\STATE Set $\mathcal{S}$ and $\mathcal{F}$ to $\mathcal{S}_{r}$ and $\mathcal{F}_{r}$ that achieve the minimum $W(L, \xi, r, V)$
\STATE $T_{PE}, \textbf{e} \leftarrow$\textbf{\textit{PE}}($\mathcal{G}, \mathcal{S}, \mathcal{F}$)
\STATE $T_{SPP} \leftarrow T_{PE}, \mathcal{\bar{S}} \leftarrow \mathcal{S}, \mathcal{\bar{F}} \leftarrow \mathcal{F}, \bar{\textbf{e}} \leftarrow \textbf{e}$ if $T_{PE} < T_{SPP}$
\ENDFOR
\STATE Return $\mathcal{\bar{S}}, \mathcal{\bar{F}}, \bar{\textbf{e}}, T_{SPP}$
\end{algorithmic}
\end{algorithm}
\vspace{-1mm}
Our complete synchronous pipeline planning ({\textit{SPP}}) algorithm is given in Alg.~\ref{alg_spp}, which produces model partition $\mathcal{\bar{S}}$, device mapping $\mathcal{\bar{F}}$ and execution schedule $\bar{\textbf{e}}$. We first leverage RDO in Alg.~\ref{alg_do} to obtain a linear ordering of all GPUs (lines 1-2). We next vary the number of stages
from $1$ to $V$ (line 4):
given a stage number to partition the model into, we vary $r$ and call PRM to compute the best stage partition and device mapping (lines 5-8) that achieve $\mathcal{W}(|\mathcal{S}|)$; we then invoke PE in Alg.~\ref{alg_schedule} to compute execution schedule of microbatches over these partitions on the respective devices (line 9).
We identify the best stage partition number as the one minimizing the makespan of a training iteration (lines 10-12) together with the corresponding $\mathcal{\bar{S}}$, $\mathcal{\bar{F}}$ and $\bar{\textbf{e}}$.
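In code, the outer loop of Alg.~\ref{alg_spp} is a thin driver over the sketches above; \texttt{prm\_solution} (tracing back PRM's partition and mapping) and \texttt{pe} (the PE scheduler) are hypothetical stand-ins that we do not re-implement here.
{\small
\begin{verbatim}
def spp(G, L, V):
    """Alg. 2 (sketch): try every stage count, keep the plan whose
    PE schedule yields the smallest per-iteration makespan."""
    rank = {}
    rdo(G, 1, rank)
    order = sorted(G.nodes, key=rank.get)
    best_makespan, best_plan = float('inf'), None
    for xi in range(1, V + 1):
        # best replication count of the last stage for xi stages
        r = min(range(1, V + 1), key=lambda r: W(L, xi, r, V))
        stages, mapping = prm_solution(L, xi, r, V, order)
        makespan, schedule = pe(G, stages, mapping)
        if makespan < best_makespan:
            best_makespan = makespan
            best_plan = (stages, mapping, schedule)
    return best_plan, best_makespan
\end{verbatim}}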
\vspace{-2mm}
\begin{theorem}
The makespan of a training iteration achieved by {\textit{SPP}}, $T_{SPP}$, is less than $(2 + \frac{4V-4}{M})(1 + \Phi)$ times the optimal makespan, $T^*$.
\label{th_approx_ratio}
\end{theorem}
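For a sense of scale: with $V=32$ GPUs, the factor $2+\frac{4V-4}{M}$ is $2+124/32\approx 5.9$ at $M=32$ microbatches and $2+124/256\approx 2.5$ at $M=256$, approaching $2$ as $M$ grows, while $(1+\Phi)$ depends only on the profiled layer costs and the bandwidth spread of the cluster.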
\begin{comment}
\begin{proof}
Consider the makespan $T_{PE}$ achieved by the partition and mapping solution that achieves the optimal per-stage/communication channel execution time $\mathcal{W}_{PRM}$. We have $T_{SPP} \leq T_{PE}$.
Let $\mathcal{C}_{PRM}$ be the maximum time to process a microbatch on a single stage or communication channel, achieved with the above solution. Optimal makespan $T^*$ is no less than
$\mathcal{W}^*$ (defined in Lemma \ref{lemma_bmmp}). We have
\vspace{-4mm}
{\small
\begin{eqnarray*}
\label{eqn_spp_2}
T_{PE} &\leq (1 + \frac{4|\mathcal{S}|-4}{M})M\mathcal{C}_{PRM} + \max_{s\in\mathcal{S}_{repl}}\{A_s\} \nonumber\\
& <(2 + \frac{4V-4}{M})\mathcal{W}_{PRM}
\leq (2 + \frac{4V-4}{M})(1 + \Phi)T^*
\end{eqnarray*}}
\vspace{-4mm}
\noindent where the first inequality is due to Lemma~\ref{lemma_ss}, the second inequality is due to $|\mathcal{S}| \leq V$, $\mathcal{W}_{PRM} \geq M\mathcal{C}_{PRM}$ and $\mathcal{W}_{PRM} > \max_{s\in\mathcal{S}_{repl}}\{A_s\}$, and the last inequality is due to Lemma~\ref{lemma_bmmp}.
Then, we have $T_{SPP} < (2 + \frac{4V-4}{M})(1 + \Phi)T^*$
\end{proof}
\end{comment}
\vspace{-2mm}
\begin{theorem}
Our complete synchronous pipeline planning Alg.~\ref{alg_spp} runs in polynomial time.
\end{theorem}
\vspace{-4mm}
\begin{comment}
\begin{proof}
The time complexity of min-cut() in Alg.~\ref{alg_do} is $O(V^3)$. As a result, lines 1-2 in Alg.~\ref{alg_spp} can be done in
time complexity $O(V^4)$.
There are $V$ iterations in lines 4-13. The dynamic programming problem can be solved by PRM in polynomial time by storing results of every \textbf{\textit{PRM}}$(G, \{v_1, v_2, \ldots, v_{N}\}, \mathcal{D}, l, i, \xi, r)$ to avoid re-computation~\cite{cormen2009introduction}.
The time complexity of PE is $O(V^2 + MV)$.
As a result,
Alg.~\ref{alg_spp} runs in polynomial time.
\end{proof}
\end{comment}
\section{Performance Evaluation}
\label{sec::eval}
\vspace{-2mm}
We evaluate {\textit{SPP}} with both testbed experiments and simulation studies.
\vspace{-2mm}
\subsection{Testbed experiments}
\noindent\textbf{Implementation.}
We implement {\textit{SPP}} using C++ and Python on TensorFlow 1.14.1~\cite{abadi2016tensorflow}. We use the TensorFlow profiler to collect runtime data of each layer of each DNN model (\textit{e.g.}, forward/backward computation time, parameter size and activation size) over 20 training iterations.
We
assign a priority to each stage or AllReduce operation (implemented using NCCL collective AllReduce~\cite{NCCL}) based on our computed execution order, such that they can be scheduled by TensorFlow execution engine accordingly.
\noindent\textbf{Testbed.}
We evaluate {\textit{SPP}} in two testbed environments: (1) One consists of 4 GPU servers, inter-connected by a Dell Z9100-ON switch, with 50Gbps peak bandwidth between any two servers. Each server has one 8-core Intel E5-1660 CPU, two GTX 1080Ti GPUs and one 50GbE NIC. (2) The other
is a single server equipped with 4 Tesla V100 GPUs, two 10-core Intel Xeon
E5-2630 v4 CPUs and a 100GbE NIC. GPUs in the server are connected with a 128Gbps PCIe bus.
\noindent\textbf{DNN models.} We train 7 representative
DNN models: three image classification models
on the ImageNet dataset~\cite{deng2009imagenet} and four NLP models
on SQuAD2.0 dataset~\cite{rajpurkar2018know} (Table~\ref{table_model}).
The number of microbatches and the microbatch size for training each model are set to the maximum number$\times$size (\textit{i.e.}, mini-batch size) that does not cause OOM (out-of-memory) errors for most baselines.
The large batch sizes we use are consistent with common practice~\cite{fan2021dapple}.
To run {\textit{SPP}}, we modified ResNet152 by ignoring shortcut connections and Inception-V3 by aggregating parallel branches (branches with the same start point and end point) as one layer. We apply \textit{SPP} to the modified models to decide the
strategies,
and then train the original models (without the modifications) using the obtained strategies.
{\small
\begin{table}[!t]
\caption{Benchmark DNN models}
\begin{center}
\fontsize{8}{9}\selectfont
\begin{tabular}{|l|c|c|c|}
\hline
Model & \begin{tabular}[c]{@{}c@{}}\# of \\ parameters\end{tabular} & \begin{tabular}[c]{@{}c@{}}\# of \\ microbatches\end{tabular} & \begin{tabular}[c]{@{}c@{}}microbatch size\\ (\# of samples)\end{tabular} \\ \hline
VGG19~\cite{simonyan2014very} & 144M & 8 & 32 \\ \hline
ResNet152~\cite{he2016deep} & 60M & 4 & 4 \\ \hline
Inception-V3~\cite{szegedy2016rethinking} & 24M & 8 & 32 \\ \hline
Transformer~\cite{vaswani2017attention} & 55M & 8 & 32 \\ \hline
BERT-large~\cite{devlin2018bert} & 340M & 4 & 4 \\ \hline
XLNet-large~\cite{yang2019xlnet} & 550M & 4 & 4 \\ \hline
BERT-48~\cite{devlin2018bert} & 640M & 4 & \begin{tabular}[c]{@{}l@{}}4 - 1080Ti$\times$8\\ 2 - V100$\times$4 \end{tabular} \\ \hline
\end{tabular}
\label{table_model}
\end{center}
\vspace{-2mm}
\end{table}}
\noindent\textbf{Baselines.} {\em SPP} is compared with 4 state-of-the-art schemes:
(i) Data Parallelism (DP), with
each GPU training the complete model
with $\frac{\mbox{mini-batch size}}{\mbox{\# of GPUs}}$ amount of data;
(ii) GPipe~\cite{huang2019gpipe};
(iii) PipeDream~\cite{harlap2018pipedream};
(iv) HetPipe~\cite{Park2020hetpipe}
(see Sec.~\ref{ppparallelism} for details of the latter three).
Unless stated otherwise, we enforce a synchronization barrier at the end of each training iteration in PipeDream and HetPipe,
removing the negative impact of asynchronous training on model convergence.
\begin{figure*}[!t]
\begin{minipage}{0.25\textwidth}
\includegraphics[width=1\hsize]{fig/vgg_converge}
\captionof{figure}{VGG19 training progress: \revise{\textit{SPP}} vs. baselines}
\label{vgg_converge}
\end{minipage}
\hfill
\begin{minipage}{0.73\textwidth}
{\fontsize{8}{9}\selectfont
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Model & Testbed & \textit{SPP} & \begin{tabular}[c]{@{}c@{}}DP\\ (Speed-up) \end{tabular} & \begin{tabular}[c]{@{}c@{}}GPipe\\ (Speed-up) \end{tabular}& \begin{tabular}[c]{@{}c@{}}PipeDream\\ (Speed-up) \end{tabular} & \begin{tabular}[c]{@{}c@{}}HetPipe\\ (Speed-up) \end{tabular} \\ \hline
\multirow{2}{*}{VGG19} & 1080Ti$\times$8 & 1.799 & 2.882 (60.2\%) & 2.120 (17.9\%) & 1.949 (8.3\%) & 2.696 (49.9\%) \\ \cline{2-7}
& V100$\times$4 & 0.983 & 1.245 (26.7\%) & 1.004 (2.1\%) & 1.024 (4.2\%) & - \\ \hline
\multirow{2}{*}{ResNet152} & 1080Ti$\times$8 & 0.732 & 0.896 (22.4\%) & 1.214 (65.8\%) & OOM (-) & 0.843 (15.2\%) \\ \cline{2-7}
& V100$\times$4 & 0.832 & 1.209 (45.3\%) & 1.041 (25.1\%) & 0.873 (4.9\%) & - \\ \hline
\multirow{2}{*}{Inception-V3} & 1080Ti$\times$8 & 0.303 & 0.420 (38.6\%) & 0.551 (81.8\%) & 0.656 (116.5\%) & 0.408 (34.7\%) \\ \cline{2-7}
& V100$\times$4 & 0.357 & 0.663 (85.7\%) & 0.587 (64.4\%) & 0.919 (157.4\%) & - \\ \hline
\multirow{2}{*}{Transformer} & 1080Ti$\times$8 & 0.640 & 0.944 (47.5\%) & 1.234 (92.8\%) & 1.118 (74.7\%) & 0.766 (19.7\%) \\ \cline{2-7}
& V100$\times$4 & 1.065 & 2.533 (137.8\%) & 1.487 (39.6\%) & 1.830 (71.8\%) & - \\ \hline
\multirow{2}{*}{BERT-large} & 1080Ti$\times$8 & 0.409 & 0.524 (28.1\%) & 0.472 (15.4\%) & 0.421 (2.9\%) & 0.525 (28.4\%) \\ \cline{2-7}
& V100$\times$4 & 0.952 & 2.269 (138.3\%) & 1.665 (74.9\%) & 1.084 (13.9\%) & - \\ \hline
\multirow{2}{*}{XLNet-large} & 1080Ti$\times$8 & 1.299 & 1.388 (6.7\%) & 1.696 (30.6\%) & 1.384 (6.5\%) & 1.628 (25.3\%) \\ \cline{2-7}
& V100$\times$4 & 1.437 & 1.842 (28.3\%) & 1.720 (19.7\%) & 1.690 (17.6\%) & - \\ \hline
\multirow{2}{*}{BERT-48} & 1080Ti$\times$8 & 0.762 & OOM (-) & 1.885 (147.4\%) & 1.266 (66.1\%) & 1.377 (80.7\%) \\ \cline{2-7}
& V100$\times$4 & 0.855 & 1.656 (93.7\%) & 1.199 (40.2\%) & 1.160 (35.7\%) & - \\ \hline
\end{tabular}}
\captionof{table}{Per-iteration training time (in seconds) of different DNN models}
\label{table_iteration_time}
\end{minipage}
\vspace{-7mm}
\end{figure*}
\noindent\textbf{{Per-iteration training speed-up.}} We compare {\textit{SPP}} with all baselines in terms of per-iteration training time in the first testbed environment (1080Ti$\times$8).
In the other environment (V100$\times$4), we
omit HetPipe as all four GPUs are on the same server, reducing HetPipe to PipeDream solutions.
In Table~\ref{table_iteration_time}, the speed-up is computed by $\frac{\text{Baseline time} - \text{SPP time}}{\text{SPP time}}$.
{\textit{SPP}} outperforms the baselines in all cases.
While both DP and HetPipe require an AllReduce operation to synchronize gradients,
{\textit{SPP}} incurs less parameter synchronization traffic and maximally overlaps communication with computation within each training iteration.
As a result, {\textit{SPP}} outperforms them
by more than 20\% in most cases.
The large speed-up for {\textit{SPP}} over baselines on Inception-V3 demonstrates that our design handles a model with non-uniform layer computation time well.
As VGG-19 has a small number of layers that can easily be partitioned optimally, we observe only minor gains of {\em SPP} over PipeDream and GPipe. For BERT-large on the 1080Ti$\times$8 testbed, PipeDream partitions the model into uniform stages, achieving similarly good performance as {\em SPP}.
{
\noindent\textbf{End-to-end training performance.} We next compare training convergence among \textit{SPP}, DP, GPipe and PipeDream.
For PipeDream, we use its original asynchronous pipeline design.
Fig.~\ref{vgg_converge} shows the training progress of VGG19 on the V100$\times$4 testbed towards a target 90\% top-5 accuracy~\cite{deng2009imagenet}. {\textit{SPP}} achieves the target accuracy with the least training time. Despite only a marginal speed-up in per-iteration training time compared to PipeDream (using the synchronous pipeline mode that we implemented), {\textit{SPP}} outperforms PipeDream (with its original asynchronous pipeline design) by 9.05\% in terms of end-to-end training time.
This is because
PipeDream's asynchronous pipeline training slows down model convergence, \revise{as it trains microbatches on outdated versions of model parameters~\cite{ho2013more}.}
}
\begin{figure}[!th]
\begin{minipage}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{fig/diff_numbatch}
\caption{Training time: different \# of microbatches}
\label{diff_batch}
\end{minipage}
\begin{comment}
\begin{minipage}[t]{0.24\textwidth}
\includegraphics[width=\textwidth]{fig/diff_numbatch_gpu_utility}
\caption{GPU usage: different \# of microbatches}
\label{diff_batch_gpu_utility}
\end{minipage}
\end{comment}
\begin{minipage}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{fig/diff_bandwidth}
\caption{Training time: different inter-server bandwidth levels}
\label{diff_band}
\end{minipage}
\begin{comment}
\begin{minipage}[t]{0.24\textwidth}
\includegraphics[width=1.1\textwidth]{fig/diff_bandwidth_gpu_utility}
\caption{GPU usage: different inter-server bandwidth levels}
\label{diff_band_gpu_utility}
\end{minipage}
\end{comment}
\begin{minipage}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{fig/diff_gpuperserver}
\caption{Training time: different inter-GPU connectivity}
\label{diff_inter_connectivity}
\end{minipage}
\centering
\begin{minipage}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{fig/diff_largemodel}
\caption{Training time: BERT with different \# of layers}
\label{diff_bert_model}
\end{minipage}
\begin{minipage}[t]{0.45\columnwidth}
\includegraphics[width=\textwidth]{fig/diff_comm_size}
\caption{Training time: different inter-layer data sizes}
\label{diff_comm_size}
\end{minipage}
\begin{minipage}[t]{0.46\columnwidth}
\includegraphics[width=1.1\textwidth]{fig/diff_stage_num.pdf}
\caption{\revise{Training time \& $\mathcal{W}_{PRM}$: different \# of stages}}
\label{diff_stage_num}
\end{minipage}
\end{figure}
\setlength{\textfloatsep}{0pt}
\vspace{-3mm}
\subsection{Trace-driven Simulation}
\vspace{-1mm}
\noindent\textbf{Settings.} By default, we simulate training of BERT-large (27 layers, including 24 transformer layers) with 32 microbatches and a microbatch size of 6 on 8 servers, each equipped with 4 GPUs. We drive our simulation using profiled data
collected by running the DNN on a V100 GPU.
3 servers have intra-server bandwidth between [96, 128] Gbps (representing PCIe links~\cite{tallent2017evaluating}), and the other 5 servers
[160, 200] Gbps (representing NVLink connections~\cite{amaral2017topology}).
By default, inter-server bandwidth is set within [32, 40] Gbps to emulate an RDMA network~\cite{lu2018multi}.
\subsubsection{Different numbers of microbatches}
Fig.~\ref{diff_batch} shows that {\textit{SPP}} achieves significant training speed-up compared with the four baselines at different microbatch numbers ($M$).
We also observed (figure omitted due to space limit) that higher GPU utilization is achieved with {\textit{SPP}} when $M$ is larger, implying the diminishing gap between {\textit{SPP}} and the optimal solution (which maximally utilizes GPUs for the best training speed). This is consistent with Theorem~\ref{th_approx_ratio}: as $M$ increases, the approximation ratio becomes smaller (\textit{i.e.}, better).
\begin{comment}
Fig.~\ref{diff_batch_gpu_utility} further reveals higher GPU utilization achieved by {\textit{SPP}}, computed by averaging $\frac{\text{GPU busy time}}{\text{per-iteration training time}}$ over all GPUs. The increase of GPU usage with the microbatch number implies the diminishing gap between {\textit{SPP}} and the optimal solution (which maximally utilizes GPUs for the best training speed), which is consistent with Theorem~\ref{th_approx_ratio}: as the number of microbatch increases, the approximation ratio becomes smaller (\textit{i.e.}, better).
\end{comment}
\subsubsection{Different inter-server bandwidth levels} We emulate three types of inter-server networks with low, medium and high bandwidth, respectively.
Fig.~\ref{diff_band}
shows that per-iteration training time of {\textit{SPP}}, GPipe and PipeDream is stable
at different bandwidth levels,
while performance of DP and HetPipe drops dramatically at small bandwidth. This is because the former three
overlap most inter-stage communication and AllReduce operations with computation, achieving higher GPU utilization even with small communication bandwidth.
DP and HetPipe require an AllReduce operation over inter-server connections at the end of each training iteration, which incurs large communication time with small bandwidth.
\vspace{-1mm}
\subsubsection{Different inter-GPU connectivity} We
next vary the number of available GPUs on servers and the server number. In Fig.~\ref{diff_inter_connectivity},
$[6\times2, 3\times4,1\times 8]$ represents training the model over 6 servers each with 2 GPUs, three 4-GPU servers and one 8-GPU server.
{\textit{SPP}} achieves the best performance in all inter-GPU connection topologies.
\subsubsection{Different numbers of layers} Fig.~\ref{diff_bert_model} compares the training performance
of BERT-large, BERT-48 (48 Transformer layers) and BERT-72 (72 Transformer layers). As the model size increases, it becomes more difficult to obtain an optimal model partition and device mapping solution. However,
the performance of {\textit{SPP}} and PipeDream remains quite stable,
with {\textit{SPP}} outperforming PipeDream by more than 20\% on the three models.
\subsubsection{Different inter-layer data sizes} We investigate the impact of activation sizes which influence inter-stage communication time,
by scaling the
activation data in BERT-large by different factors.
Fig.~\ref{diff_comm_size} shows that the per-iteration training time with {\textit{SPP}} remains similar with the increase of activation sizes, due to
the excellent communication and computation overlap it achieves. GPipe
tends to partition the model into more stages, resulting in more inter-stage communication time.
\subsubsection{Different numbers of stages} Lemma~\ref{lemma_ss} shows that the performance of our algorithm depends on two factors: (1) the number of stages $|\mathcal{S}|$, and (2) $\mathcal{W}_{PRM}$, the maximum time to process all microbatches on a single stage or communication channel. While PipeDream only aims at minimizing $\mathcal{W}_{PRM}$, {\textit{SPP}} strikes a balance between the two. In Fig.~\ref{diff_stage_num},
$\mathcal{W}_{PRM}$ first decreases as the model is partitioned into more stages, and becomes stable from 4 stages onwards. The main reason is that for training BERT-large (with 24 uniform Transformer layers) on 32 GPUs, the per-stage training time with $4$ stages is already quite close to the optimal per-stage training time.\footnote{With 4 stages, we roughly have 6 layers per stage and each stage replicated over 8 GPUs; the per-stage time is $6p/8$ ($p$ denotes the per-layer computation time) plus AllReduce time. The optimal per-stage training time is lower bounded by $24p/32$.}
The training time first decreases as $\mathcal{W}_{PRM}$ drops, and then increases once $\mathcal{W}_{PRM}$ stabilizes and $|\mathcal{S}|$ becomes the dominant factor, which is consistent with Lemma~\ref{lemma_ss}.
This indicates that minimizing $\mathcal{W}_{PRM}$ alone does not yield the best solution.
{\textit{SPP}} strategically selects the 6-stage partition to minimize per-iteration training time.
\section{Background and Related Work}
\label{sec::related_work}
\noindent{\bf DNN Training.}
A DNN model comprising multiple layers is usually trained over a large dataset iteratively to minimize a loss function~\cite{goodfellow2016deep}. The dataset is typically divided into equal-sized mini-batches. In each training iteration, one mini-batch is processed to update the DNN model as follows: (1) {\em forward propagation (FP)}: the mini-batch is processed by each layer of the DNN sequentially to derive a loss; (2) {\em backward propagation (BP)}: gradients of model parameters are computed based on the loss, from the last layer to the first layer; (3) a {\em gradient update} operation applies the computed gradients to the parameters of each layer with an optimization algorithm, \textit{e.g.},~stochastic gradient descent (SGD) or adaptive moment estimation (Adam)~\cite{goodfellow2016deep}.
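For concreteness, the following minimal PyTorch-style sketch (ours; the model, loss and optimizer are illustrative placeholders rather than the setup evaluated in this paper) shows the three steps of one training iteration:
\begin{verbatim}
import torch

model = torch.nn.Sequential(torch.nn.Linear(128, 64),
                            torch.nn.ReLU(),
                            torch.nn.Linear(64, 10))
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_iteration(mini_batch, labels):
    optimizer.zero_grad()
    logits = model(mini_batch)      # (1) FP through all layers
    loss = loss_fn(logits, labels)  # loss from the last layer
    loss.backward()                 # (2) BP: per-layer gradients
    optimizer.step()                # (3) gradient update with SGD
    return loss.item()
\end{verbatim}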
\vspace{-2mm}
\noindent{\bf DNN Model Partition and Device Mapping.}
A number of studies have focused on partition and device mapping strategies for large DNN models through learning-based methods~\cite{mirhoseini2017device}\cite{addanki2018placeto}\cite{yi2020optimizing}, which require large computing resources and long training times to derive a satisfactory policy for a single training job.
A few efforts~\cite{wu2020stanza}\cite{yi2020fast} exploit efficient heuristics for DNN model partition and device mapping at the operation level, requiring detailed cost modeling of the DNN model and accurate profiling of operation execution times.
Our work focuses on layer-level DNN model partitioning and mapping, and derives a polynomial-time pipeline planning strategy.
\vspace{-2mm}
\noindent{\bf Data Parallelism (DP) and Model Parallelism (MP)}
are commonly adopted to parallelize training across multiple devices. As shown in Fig.~\ref{fig_mp_pp}(a), with DP, three microbatches are each trained on one GPU holding a complete copy of the model parameters; an AllReduce operation synchronizes the computed gradients after all microbatches have been processed.
With MP (Fig.~\ref{fig_mp_pp}(b)), in each training iteration, a mini-batch is fed into the device hosting the first stage(s) of the DNN model for FP, and the computed activations are passed to later stages on other devices; during BP, gradients are computed and passed from one device to another following the reverse order of the stages. In this way, only one device is active at any time, namely the one carrying out FP or BP of the mini-batch, while all other devices are idle, leading to low device utilization.
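The AllReduce step of DP can be illustrated with a small pure-Python simulation (a conceptual sketch of ours, not tied to any particular framework): every worker contributes its local gradients, and all workers end up with the same averaged gradient.
\begin{verbatim}
def all_reduce_mean(per_worker_grads):
    # per_worker_grads: one gradient list per worker
    n = len(per_worker_grads)
    mean = [sum(vals) / n for vals in zip(*per_worker_grads)]
    return [list(mean) for _ in range(n)]  # identical result on every worker

grads = [[1.0, 4.0], [3.0, 0.0], [2.0, 2.0]]  # 3 workers, 2 parameters
assert all_reduce_mean(grads) == [[2.0, 2.0]] * 3
\end{verbatim}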
\begin{figure}[!t]
\centering
\begin{subfigure}{0.89\columnwidth}
\centerline{\includegraphics[width=\columnwidth]{fig/dp}}
\caption{Data Parallelism}
\end{subfigure}
\begin{subfigure}{0.89\columnwidth}
\centerline{\includegraphics[width=\columnwidth]{fig/mp}}
\caption{Model Parallelism}
\end{subfigure}
\begin{subfigure}{0.89\columnwidth}
\centerline{\includegraphics[width=\columnwidth]{fig/pp}}
\caption{Pipeline Parallelism}
\end{subfigure}
\caption{Data parallelism vs. model parallelism vs. pipeline parallelism:
1 mini-batch divided into 3 microbatches.}
\label{fig_mp_pp}
\end{figure}
\setlength{\textfloatsep}{0pt}
\vspace{-2mm}
\noindent\textbf{Pipeline Parallelism.}
\label{ppparallelism}
Building on model parallelism, \textit{pipeline parallelism} further divides a mini-batch into equal-sized microbatches (Fig.~\ref{fig_mp_pp}(c)).
The microbatches are fed consecutively into the device hosting the first stage(s): whenever the forward computation of the previous microbatch is done on this device, the next microbatch enters, forming a training pipeline. Consequently, multiple devices can process different microbatches simultaneously.
{\em Asynchronous Pipeline Training.} PipeDream~\cite{harlap2018pipedream} partitions a DNN model over multiple servers, allowing stage replication among servers, and further divides a stage over the GPUs within each server, aiming at minimizing the maximum time to process a single stage. It assumes homogeneous server configurations and inter-server connectivity. Stage execution is scheduled such that every FP stage is immediately followed by a BP stage.
Geng \textit{et al.}~\cite{geng2019elasticpipe} study pipeline parallelism over heterogeneous GPUs, and propose a dynamic tuning algorithm to identify straggler devices and redistribute the DNN model for better load balancing.
In HetPipe~\cite{Park2020hetpipe}, each node (comprising homogeneous GPUs) trains the DNN model in a pipelined manner similar to PipeDream without stage replication; DP is used for training and parameter synchronization among nodes.
With asynchronous pipelining, microbatches are trained on outdated versions of the model parameters, leading to slower convergence and lower accuracy of the obtained model as compared to synchronous training~\cite{ho2013more}.
Several studies have investigated mitigating this accuracy loss via weight prediction~\cite{chen2018efficient} or randomized smoothing~\cite{colin2019theoretical}, under restrictive assumptions on the training loss function.
{\em Synchronous Pipeline Training.}
GPipe~\cite{huang2019gpipe} is a synchronous pipeline training framework comprising (1) a partition strategy that places approximately the same number of DNN layers on each GPU, and (2) a schedule that executes all FP before starting any BP. It does not allow stage replication and provides no device mapping strategy.
We design efficient algorithms to deal with all aspects of synchronous pipeline planning.
\section{Pipeline Planning Algorithms}
\vspace{-1mm}
We now design algorithms for efficient synchronous pipeline training. We start with execution scheduler design, assuming model partition and device mapping are given; then we devise the partition and device mapping algorithm that minimizes per-iteration training time together with the execution scheduler.
\vspace{-2mm}
\subsection{Execution Scheduler}
\label{sec::schedule}
\vspace{-1mm}
Given the model partition $\mathcal{S}$ and device mapping $\mathcal{F}$, our scheduling problem, as presented in Sec.~\ref{sec:scheduleproblem}, is a special case of the NP-hard job shop problem~\cite{goldberg2001better}: microbatches correspond to jobs of the same type, stages correspond to machines, and the objective is to minimize the total time of executing all jobs. We design an efficient {\em pipeline execution} (PE) scheduling algorithm with a provable performance bound.
The PE algorithm contains two modules: (1) an ordering method that decides the execution order of microbatches over stages on different GPUs, and (2) an algorithm that schedules pipeline execution based on the computed order.
\textit{\textbf{1) Execution ordering}}: We define a \textit{computation block} as the forward or backward computation of a stage.
As the backward computation of the last stage $s_{|\mathcal{S}|}$ immediately follows its forward computation,
we merge stage $s_{|\mathcal{S}|}$'s forward and backward computation blocks into a single computation block.
We define the inter-stage communication from $s_n$ to $s_{n+1}$ or from $s_{n+1}$ to $s_n$ as a \textit{communication block}, comprising all communication over the respective {\em communication channel}, \textit{i.e.}, the set of connections from the GPU(s) hosting the former stage to the GPU(s) hosting the latter stage.
The end-to-end training of every microbatch in a training iteration involves $2|\mathcal{S}|-1$ computation blocks, $2|\mathcal{S}|-2$ communication blocks and $|\mathcal{S}_{repl}|$ AllReduce operations for replicated stages. Let $\mathcal{J}=\{1, 2, \ldots, 4|\mathcal{S}|-3\}$ be the ordered list of all computation and communication blocks, with blocks ordered according to their execution dependencies.
An execution order queue, $U_s$, is maintained for each stage $s \in \mathcal{S}$, containing $\tt{(microbatch\ index, }$ $\tt{block\ number)}$ pairs that indicate the order in which microbatches are processed by the forward and backward computation blocks of stage $s$.
For each block $j \in \mathcal{J}$, we maintain an \textit{available microbatch} queue $Q_j$, containing microbatches which have been processed by block $j-1$ but not yet by block $j$.
Initially $Q_1$ contains all microbatches in order of their indices, and $Q_j = \emptyset, \forall j \in \mathcal{J}\setminus\{1\}$.
We order microbatch processing over the blocks as follows. Going through the blocks in the order of $\mathcal{J}$, we pop one microbatch $m$ from the head of each non-empty queue $Q_{j}$ and push it to the end of queue $Q_{j+1}$ of the next block (if $j$ is not the last block in $\mathcal{J}$); if block $j$ is a computation block of stage $s$, we add $(m, j)$ to the execution order queue $U_s$. Each such pass through the block list identifies at most one microbatch to be processed by each block, corresponding to microbatches that can be processed roughly simultaneously.
We loop through the block list repeatedly until all available microbatch queues are empty ($Q_{j} = \emptyset, \forall j \in \mathcal{J}$), \textit{i.e.}, until the end-to-end training of all microbatches has been ordered.
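The ordering pass can be summarized by the following Python sketch (our reconstruction of the procedure above; pushes are deferred to the end of each sweep so that each block handles at most one microbatch per sweep):
\begin{verbatim}
from collections import deque

def order_microbatches(num_blocks, M, is_comp, stage_of):
    # is_comp[j]: block j is a computation block; stage_of[j]: its stage
    Q = [deque() for _ in range(num_blocks)]   # available-microbatch queues
    Q[0].extend(range(M))                      # Q_1 holds all microbatches
    U = {}                                     # per-stage execution order
    while any(Q):
        moves = []                             # one sweep over the block list
        for j in range(num_blocks):
            if Q[j]:
                m = Q[j].popleft()             # at most one microbatch per block
                moves.append((m, j))
                if is_comp[j]:
                    U.setdefault(stage_of[j], []).append((m, j))
        for m, j in moves:                     # pushes take effect next sweep
            if j + 1 < num_blocks:
                Q[j + 1].append(m)
    return U
\end{verbatim}
Each microbatch thus advances by one block per sweep, producing exactly the wave-like pipeline order that the scheduler below consumes.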
\textit{\textbf{2) Scheduling}}: We next exploit the execution order queues, $U_s$'s, and schedule a microbatch's processing on a block as soon as it is ready. We start by popping the first (microbatch index, block number) pair out of queue $U_{s_1}$ of the first stage $s_1$, and process the corresponding microbatch on the respective block. Once a computation block has executed, its successor communication block is run immediately (as soon as the communication channel becomes idle).
Upon completion of a scheduled computation block of stage $s$, or of a communication block which transmits data to stage $s$, we examine queue $U_s$: if the first (microbatch index, block number) pair in $U_s$ is ready to be executed (\textit{i.e.}, the microbatch has been processed by the preceding block), we pop it out and run it.
This procedure terminates when $U_s = \emptyset, \forall s \in \mathcal{S}$, \textit{i.e.}, all microbatches have been processed by all computation and communication blocks.
For each replicated stage $s \in \mathcal{S}_{repl}$, the corresponding AllReduce operation is executed once all microbatches have been processed by the backward computation block of the stage.
{\small
\begin{algorithm}[!th]
\caption{Pipeline Execution Scheduler - \textbf{\textit{PE}}}
\label{alg_schedule}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{G}(\mathcal{V}, \mathcal{E}), \mathcal{S}, \mathcal{F}: \mathcal{S} \rightarrow \mathbb{P}(\mathcal{V})$
\ENSURE $T_{PE}, \textbf{e}$
\STATE Initialize execution order queues $U_{s} \leftarrow \emptyset, \forall s \in \mathcal{S}$
\STATE Initialize available microbatch queues $Q_{1} \leftarrow \{1, 2, \ldots, M\}$ and $Q_{j} \leftarrow \emptyset, \forall j \in \mathcal{J}\setminus\{1\}$
\WHILE {$\exists j \in \mathcal{J}, Q_{j} \neq \emptyset$}
\FOR {$j \in \mathcal{J}: Q_{j} \neq \emptyset$}
\STATE Pop one microbatch $m$ from the head of $Q_{j}$, and push $m$ to the end of $Q_{j+1}$ if $j < |\mathcal{J}|$
\STATE Add $(m,j)$ to the corresponding $U_{s}$ if block $j$ is a computation block
\ENDFOR
\ENDWHILE
\STATE Pop the first $(1, 1)$ out of $U_{s_1}$, and set $e^f_{1,s_1}=0$
\WHILE {$\exists s \in \mathcal{S}$, a block of stage $s$ completes or a communication block which transmits data to stage $s$ finishes at time $t$}
\IF {$s \in \mathcal{S}_{repl}$ and $U_{s} = \emptyset$}
\STATE Start AllReduce operation, and set $e^A_{s}$ to $t$
\ENDIF
\IF {$U_{s} = \emptyset, \forall s \in \mathcal{S}$}
\STATE \textbf{break}
\ENDIF
\IF {a computation block of $s$ finishes}
\STATE Start successor communication block
\ENDIF
\IF {the first (microbatch index, block number) in $U_{s}$ is ready}
\STATE Pop the (microbatch index, block number) pair out of $U_{s}$
\STATE Start the block and set $e^f_{m,s}$ or $e^b_{m,s}$ to $t$
\ENDIF
\ENDWHILE
\STATE Calculate the makespan: {\footnotesize $T_{PE}=\max\{\max_{m\in\{1, 2, \ldots, M\}} (e^b_{m, s_1} + \frac{\sum\limits_{l \in s_{1}}p^b_l}{|\mathcal{F}({s_1})|}), \max_{s\in\mathcal{S}_{repl}}(e^A_{s} + A_s)\}$}
\STATE Return $T_{PE}$, $\textbf{e}$
\end{algorithmic}
\end{algorithm}
\setlength{\textfloatsep}{0pt}
}
We summarize our pipeline execution scheduling algorithm in Alg.~\ref{alg_schedule}.
The following lemma gives an upper bound on the per-iteration training time achieved by the PE algorithm.
\begin{lemma}
\label{lemma_ss}
Per-iteration training time achieved by Alg.~\ref{alg_schedule}, $T_{PE}$, is no larger than {\small $(1 + \frac{4|\mathcal{S}|-4}{M})M\mathcal{C} + \max_{s\in\mathcal{S}_{repl}}\{A_s\}$}, where {\small $\mathcal{C} = \max\{\max_{n \in \{1, \ldots, |\mathcal{S}|\}} \frac{\sum_{l \in s_n}(p^f_l + p^b_l)}{|\mathcal{F}(s_n)|}, \allowbreak\max_{n\in \{1, \ldots, |\mathcal{S}|-1\}
}\{c^f_{s_{n}, s_{n+1}}+c^b_{s_{n+1}, s_n}\}\}$}, denoting the maximum time to process a microbatch on a single stage (including both forward and backward computation) or an inter-stage communication channel (including data transfer in both forward and backward propagation phases), without considering AllReduce operations.
\end{lemma}
\begin{proof}
{Given the execution order computed in lines 3-8 of Alg.~\ref{alg_schedule}, we consider a new \textit{cycle scheduling algorithm} whose per-iteration training time, denoted by $T_{CS}$, serves as an upper bound of $T_{PE}$.
In every cycle, we schedule every computation/communication block $j \in \mathcal{J}$ to execute at most one available microbatch.
Our cycle scheduler starts by entering the first cycle, in which it only schedules the execution of the first microbatch on the forward computation block of stage 1, as no other block has an available microbatch. After the execution of all available blocks in the current cycle, the scheduler moves to the next cycle and checks the availability of all blocks again. If a block has at least one available microbatch, we schedule the execution of one of them on that block. The scheduler ends when every microbatch has been processed by all the blocks.
In addition, for each replicated stage, we execute the corresponding AllReduce operation as soon as all microbatches have been processed by the backward computation block of that stage.
Consequently, the execution time of every cycle is at most
{\small $\mathcal{C} = \max\{\max_{n \in \{1, \ldots, |\mathcal{S}|\}} \frac{\sum_{l \in s_n}(p^f_l + p^b_l)}{|\mathcal{F}(s_n)|}, \allowbreak\max_{n\in \{1, \ldots, |\mathcal{S}|-1\}}\{c^f_{s_{n}, s_{n+1}}+c^b_{s_{n+1}, s_n}\}\}$}, representing the maximum time to process a microbatch on a single stage (including both forward and backward computation) or an inter-stage communication channel (including data transfer in both forward and backward propagation phases), without considering AllReduce operations.
Fig.~\ref{fig_pipeline_cycle} shows an example of our cycle-based schedule of six microbatches in one training iteration. The DNN model is partitioned into two stages, with the second stage replicated over multiple GPUs. In the first cycle, we only process microbatch 1 on the forward computation block of stage 1. In the second cycle, two blocks have available microbatches: microbatches 2-6 on the forward computation block of stage 1, and microbatch 1 on the forward communication block of communication channel 1. Hence, we schedule one execution on each of the two blocks, \textit{i.e.}, CF1 and F2 in the figure. It takes 10 cycles to execute all microbatches on all the blocks.
Every cycle starts once all the blocks scheduled in the previous cycle have finished, and strictly schedules at most one microbatch on each block.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\textwidth]{fig/pipeline_cycle_illustration}
\caption{Cycle schedule: an example}
\label{fig_pipeline_cycle}
\end{figure*}
Given that the DNN model is partitioned into $|\mathcal{S}|$ stages and trained with $M$ microbatches, there are two cases:

$\triangleright$ \textbf{Case 1:} $M > 4|\mathcal{S}|-4$. We first perform $4|\mathcal{S}|-4$ cycles before entering a cycle in which every block $j \in \mathcal{J}$ has at least one available microbatch. Afterwards, we perform another $M-4|\mathcal{S}|+4$ cycles until all microbatches have been processed by the first block. We then perform a further $4|\mathcal{S}|-4$ cycles to finish all microbatches.

$\triangleright$ \textbf{Case 2:} $M \leq 4|\mathcal{S}|-4$. Similarly to the first case, we perform at most $M + 4|\mathcal{S}| - 4$ cycles to execute all the microbatches.

In conclusion, we perform at most $M + 4|\mathcal{S}| - 4$ cycles in either case, with the time for each cycle no greater than $\mathcal{C}$.
As a result, we have:
{\small
\begin{displaymath}
T_{CS} \leq (M + 4|\mathcal{S}| - 4)\mathcal{C} + \max_{s\in\mathcal{S}_{repl}}\{A_s\}
\end{displaymath}}
The per-iteration training time of the cycle scheduling algorithm, $T_{CS}$, is an upper bound on the per-iteration training time of Alg.~\ref{alg_schedule}, $T_{PE}$: Alg.~\ref{alg_schedule} executes available microbatches on each block without interruption, whereas in the cycle scheduling algorithm the execution of a block waits until all blocks of the previous cycle have finished.
We have thus proven that
{\small
\begin{displaymath}
T_{PE} \leq T_{CS} \leq (1 + \frac{4|\mathcal{S}|-4}{M})M\mathcal{C} + \max_{s\in\mathcal{S}_{repl}}\{A_s\}
\end{displaymath}}
}
\end{proof}
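To read the bound concretely, note that $(1 + \frac{4|\mathcal{S}|-4}{M})M\mathcal{C} = (M + 4|\mathcal{S}| - 4)\mathcal{C}$; the snippet below (ours, with made-up numbers purely for illustration) evaluates it:
\begin{verbatim}
def pe_upper_bound(M, num_stages, C, A_max):
    # (1 + (4|S|-4)/M) * M * C + A_max  ==  (M + 4|S| - 4) * C + A_max
    return (M + 4 * num_stages - 4) * C + A_max

# e.g. 32 microbatches, 4 stages, cycle time 10 ms, max AllReduce 5 ms:
print(pe_upper_bound(32, 4, 10.0, 5.0))  # 445.0 ms
\end{verbatim}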
\section{Pipeline Partition, Replication and Mapping Algorithm}
\label{appendix_alg_dp}
The pipeline partition, replication and mapping algorithm (PRM) is given in Alg.~\ref{alg_dp}.
If $l$ or $n$ is less than $\xi$, PRM terminates immediately as there is no feasible solution (lines 1-3). If $\xi$ equals 1 and $r$ equals $n$, indicating that there is only one stage, PRM groups the first $l$ layers as a single stage and replicates it over $\{v_1, \ldots, v_n\}$ (lines 4-12). Otherwise, we use dynamic programming to compute the optimal partition and mapping
(lines 13-25).
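The recursion implemented by Alg.~\ref{alg_dp} can be condensed into the following memoized Python sketch (our reconstruction; \texttt{stage\_time}, \texttt{comm\_time} and \texttt{allreduce\_time} stand for profiling-based callbacks and are not part of the paper's interface):
\begin{verbatim}
from functools import lru_cache

INF = float("inf")

def make_prm(stage_time, comm_time, allreduce_time):
    # W(l, xi, r, n): min-max term when the first l layers form xi stages
    # on the first n GPUs, the last stage being replicated over r GPUs.
    @lru_cache(maxsize=None)
    def W(l, xi, r, n):
        if l < xi or n < xi:
            return INF                  # no feasible solution (lines 1-3)
        if xi == 1:                     # single stage (lines 4-12)
            if r != n:
                return INF
            return stage_time(1, l, n) + allreduce_time(1, l, n)
        if r == n:
            return INF                  # earlier stages would get no GPU
        best = INF                      # dynamic program (lines 13-25)
        for lp in range(1, l):          # last stage: layers lp+1 .. l
            for rp in range(1, n - r + 1):
                cand = max(W(lp, xi - 1, rp, n - r),
                           comm_time(lp, r, rp),
                           stage_time(lp + 1, l, r)
                           + allreduce_time(lp + 1, l, r))
                best = min(best, cand)
        return best
    return W
\end{verbatim}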
\begin{algorithm}[!t]
\caption{Pipeline Partition, Replication and Mapping - \textbf{\textit{PRM}}}
\label{alg_dp}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}[1]
\REQUIRE $\mathcal{G}(\mathcal{V}, \mathcal{E}), \{v_1, v_2, \ldots, v_{N}\}, \mathcal{D}, l, n, \xi, r$
\ENSURE $W(l, \xi, r, n), \mathcal{S}, \mathcal{F}$
\IF {$l < \xi$ or $n < \xi$}
\STATE Return $\textbf{INF}, \text{None}, \text{None}$
\ENDIF
\IF {$\xi = 1$ and $r = n$}
\STATE $W(l, \xi, r, n) \leftarrow M\sum\limits_{i = 1}^{l}\frac{p^f_i + p^b_i}{n} + A_{1\rightarrow l}(v_{1} \rightarrow v_{n})$
\STATE $s \leftarrow \{1, \ldots, l\}$
\STATE $\mathcal{F}(s) \leftarrow \{v_1, \ldots, v_n\}$
\STATE Return $W(l, \xi, r, n), \{s\}, \mathcal{F}$
\ENDIF
\IF {$\xi = 1$ or $r = n$}
\STATE Return $\textbf{INF}, \text{None}, \text{None}$
\ENDIF
\STATE $min\_max\_time \leftarrow \textbf{INF}$
\FOR {$l' \in \{1, 2, \ldots, l-1\}$}
\FOR {$r' \in \{1, 2, \ldots, n-r\}$}
\STATE $W(l', {\xi-1}, r', {n-r}), \mathcal{S'}, \mathcal{F}\leftarrow \text{\textbf{\textit{PRM}}}(\mathcal{G}, \{v_1, \ldots, v_{n-r}\}, \mathcal{D}, l', n-r, \xi-1, r')$
\STATE Set:
{\small
\begin{multline*}
max\_time \leftarrow \max\{W(l', \xi-1, r', n-r), \\M\frac{d^f_{l', l'+1} + d^b_{l'+1, l'}}{rr'b_{rr'}}, \\M\frac{\sum\limits_{i = l'+1}^{l}(p^f_i + p^b_i)}{r} + A_{l'+1\rightarrow l}(v_{n-r+1} \rightarrow v_{n})\}
\end{multline*}}
\IF {$min\_max\_time > max\_time$}
\STATE $min\_max\_time \leftarrow max\_time$
\STATE $s \leftarrow \{l'+1, l'+2, \ldots, l\}$
\STATE $\mathcal{S} \leftarrow \mathcal{S'}\cup \{s\}$
\STATE Cancel the previous mapping, and set $\mathcal{F}(s) \leftarrow \{v_{n-r+1}, \ldots, v_{n}\}$
\ENDIF
\ENDFOR
\ENDFOR
\STATE Return $min\_max\_time, \mathcal{S}, \mathcal{F}$
\end{algorithmic}
\end{algorithm}
\section{Introduction}
In a previous paper \cite{OV-M-1} we studied the implications of the presence of compactified spatial dimensions (or equivalently, of the non-trivial topology of space-time) for the cosmological solutions of a two-dimensional toy model coupled to Brans-Dicke gravity. There we found that the only observable effect of the closed dimension appears in the thermal history: the universe with compactified dimensions cools more slowly than in the open-space case. The dynamical evolution, on the contrary, remains exactly the same. The reason for this is that the introduction of non-trivial topology in the space does not modify the equation of state associated with the (field-theoretical) matter filling the universe.
Now we would like to study the cosmological solutions of the Brans-Dicke equations when the source of the gravitational field is a gas of strings living in a two-dimensional target space. From the very beginning we face the problem of determining the correct expression for the free energy. If we take as true the conjecture of \cite{TV}, in which the free energy is naively identified with the toroidal compactification on $S^{1}\times S^{1}$, the only remaining question is that of identifying the time coordinate. If we restrict our interest to one loop in the world-sheet, the question is irrelevant, since the space dimension is compact and $R^{(2)}=0$; it only becomes a problem at higher genus ($g\geq 2$). If we took the Liouville field as the time coordinate, its coupling to the world-sheet curvature would imply that only discrete values of the temperature are allowed \cite{TV}. On the contrary, based on aesthetic grounds, it looks more appealing to us to take the Liouville field as a spatial coordinate; then the length of the target coordinate of the $c=1$ string is $\beta$. This leads to a picture in which the string interaction produces a space whose length is quantized.
Following the conjecture of \cite{TV}, one obtains an expression for the Helmholtz free energy which reproduces the partition function of the $c=1$ non-critical string on a two-dimensional target torus \cite{DKL} (the concrete relation is that the sum over world-sheet surfaces equals $-\beta F(\beta)$)
\begin{equation}
F(\beta,L)=\frac{1}{\beta}\ln{\left(\frac{L}{\sqrt{\alpha^{'}}}
\right)} + \frac{2}{\beta}\ln{\left[\eta\left(i \frac{\beta
L}{4\pi^{2}\alpha^{'}}\right)
\eta\left(i\frac{L}{\beta}\right)\right]}\:,\label{free-energy}
\end{equation}
where $L$ is the length of the spatial compactified dimension and
$\eta(\tau)$ is the Dedekind $\eta$-function. The equation of state
is
\begin{equation}
\rho-p=\frac{1}{\beta L}-\frac{1}{12\pi\alpha^{'}}
E_{2}\left(i\frac{\beta L}{4\pi^{2}\alpha^{'}}\right)\;,
\end{equation}
with $E_{2}(\tau)$ the Eisenstein series. In this case we have some
extra symmetries that are absent in the field-theoretical case,
namely, the thermal partition function $Z(\beta,L)$ enjoys both
$\beta$ and space-time duality
\begin{equation}
Z(\beta,L)=Z\left(\frac{4\pi^{2}\alpha^{'}}{\beta},L\right)=
Z\left(\beta,\frac{4\pi^{2}\alpha^{'}}{L}\right)\;.
\end{equation}
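For later use, let us record how the energy density and the pressure follow from a given free energy; all the equations of state quoted in this paper can be checked with the standard identities
\begin{equation}
E=\frac{\partial(\beta F)}{\partial\beta}\;,\qquad
\rho=\frac{E}{L}\;,\qquad
p=-\frac{\partial F}{\partial L}\;,
\end{equation}
together with the derivative formula $\frac{d}{dx}\ln{\eta(ix)}=-\frac{\pi}{12}E_{2}(ix)$.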
Since the Einstein-Hilbert-Brans-Dicke action
\begin{equation}
S=\int d^{2}x \sqrt{-g}\left[\Phi(R-2\Lambda)-\frac{\omega}{\Phi}
\nabla_{\mu}\Phi \nabla^{\mu}\Phi\right]
\label{EHBD}
\end{equation}
with the Friedmann-Robertson-Walker {\it ansatz} for the metric
\begin{equation}
ds^{2}=-dt^{2}+L^{2}(t)d\xi^{2}
\label{FRW}
\end{equation}
and a space-independent Brans-Dicke field $\Phi(t)$ is invariant
under
the duality replacement
\begin{equation}
L \rightarrow \frac{C}{L}\:, \hspace{2cm} \Phi\rightarrow
\frac{L^{2}}{C}\Phi\;,
\end{equation}
with $C$ any constant, we see that the whole action ({\it fields +
matter}) is invariant under the space-time duality transformation
\begin{equation}
L \rightarrow \frac{4\pi^{2}\alpha^{'}}{L}\;, \hspace{3cm} \Phi
\rightarrow \frac{L^{2}}{4\pi^{2}\alpha^{'}}\Phi\;.
\end{equation}
This is not true for $\beta$-duality. The reason is that only the matter action depends on the temperature, and it transforms non-trivially under duality. The changes induced in the matter action by the duality transformation can therefore not be compensated by a variation of the Einstein-Hilbert-Brans-Dicke action, since the latter does not depend on $\beta$. The result is that the equations governing the dynamics of our universe are invariant under space-time duality but not under $\beta$-duality.
It seems then that $\beta$-duality does not survive the coupling to the Einstein-Hilbert-Brans-Dicke action. Furthermore, what is really a problem is that (\ref{free-energy}) is positive in some region of the $\beta$-$L$ plane, and it actually suffers from a fatal disease: the entropy is negative and divergent as $\beta \rightarrow \infty$. The only possible conclusion is that the partition function of the $c=1$ model on a torus, given by the integral over the fundamental region of the modular group of the solitonic contribution, does not coincide with minus the logarithm of the thermal partition function $Z(\beta,L)$. This is not a surprise, because we have found the same situation for massless free fields in $S^{1}\times\mbox{\bf R}$ \cite{OV-M-1}\footnote{$L$ is the length of $S^{1}$.} and, after all, the $c=1$ model can be effectively described by a massless scalar field in two dimensions \cite{P}. It is worth noticing that (\ref{free-energy}) can be written as
\begin{equation}
F(\beta,L)=
\frac{1}{\beta}\ln{\left(\frac{\beta}{\sqrt{\alpha^{'}}}
\right)} + F_{B}(\beta,L) +
F_{B}\left(\beta,\frac{4\pi^{2}\alpha^{'}}{L}\right)
\label{new}
\end{equation}
where $F_{B}(\beta,L)=(2/\beta)\ln{\eta(i\beta/L)}$ is the free energy of a massless boson in $S^{1}\times \mbox{\bf R}$ \cite{OV-M-1}. Our proposal is that, dropping the first term, which depends only on $\beta$, we get the correct result for the one-loop free energy of the $c=1$ model in $S^{1}\times\mbox{\bf R}$. The price we have to pay is to renounce $\beta$-duality. Nevertheless, we cure the {\it maladie}.
The plan of the paper is as follows. In section 2 we will study the
main features of the thermodynamics of the critical two-dimensional
string in $S^{1}\times \mbox{\bf R}$ and its relationship with a
bosonic field living in the same space. In section 3 we will present
the cosmological solutions to the Brans-Dicke equations. Finally in
section 4 we will summarize the conclusions.
\section{The thermodynamics of the string in $S^{1}\times \mbox{\bf
R}$}
The quantization of the bosonic string in arbitrary space-time dimension is a hard problem that remains unsolved in general. The reason is that away from the critical dimension ($d=26$) there is an anomaly associated with the conformal invariance of the string action \cite{Polyakov}. The conformal factor of the metric (the Liouville field) then becomes a dynamical field that also has to be quantized. The quantization of the Liouville theory can be carried out in the particular case of the $c=1$ conformal field theory coupled to two-dimensional gravity \cite{DDK}. In this case, after identifying the Liouville field as a new coordinate and with a suitable dilatonic background, the $c=1$ string can be reinterpreted as a critical two-dimensional string theory \cite{GM}.
We are going to consider that the matter content of the universe is given by a gas of these two-dimensional critical strings. If we compactify the spatial coordinate on a circumference of radius $R=L/2\pi$, and take (\ref{free-energy}), i.e., the result of the toroidal compactification as computed in \cite{DKL} (cf. \cite{OV,TV}), as the free energy, we see that our thermodynamics enjoys both space-time and $\beta$-duality,
\begin{equation}
F(\beta,L)= F\left(\beta, \frac{4\pi^{2}\alpha^{'}}{L}\right)=
\frac{4\pi^{2}\alpha^{'}}{\beta^{2}}
F\left(\frac{4\pi^{2}\alpha^{'}}{\beta},L\right)\;.
\end{equation}
In addition to this, $-\beta F(\beta,L)$ is invariant under the interchange $\beta \leftrightarrow L$, as it should be if the free energy is obtained from the toroidal compactification on $S^{1}\times S^{1}$.
Let us review the main thermodynamical properties that can be extracted from (\ref{free-energy}). The first important thing to mention is that $F(\beta,L)$ is not always negative. One might think that, the free energy being the result of a regularization procedure, an arbitrary term of the form $\mathrm{const}/\beta$ can after all be added. Physically, adding this term is equivalent to tuning the value of the canonical entropy at zero temperature ($\beta=\infty$). One can then try to fix the constant so as to get zero entropy at zero temperature. From the very beginning this is not a legitimate manipulation, because in quantum statistical mechanics the degeneracy of the vacuum is what it is. But we do not even need to dwell on this point, because fixing the entropy at zero temperature to zero by adding such a term is impossible: the entropy behaves as $-\ln{\beta}$ as $\beta \rightarrow \infty$ (see below).
A similar phenomenon is observed in critical strings at finite temperature \cite{AO-PA}, where the dual phase has {\it bounded} negative entropy. Here one is also tempted to add an adequate constant to make the entropy positive, since at least it is only a finite constant. But one must be careful, because standard texts on thermodynamics and statistical mechanics (see for example \cite{Balescu}) teach us that the entropy measures the number of states at a given energy that can be excited. In String Theory, however, it seems that at the self-dual point new degrees of freedom are excited \cite{OV-M2} that kill the low-energy field-theoretical ones. Let us concentrate for a moment on the particular case of the supersymmetric heterotic string at finite temperature \cite{AW,AO-NP}. $\beta$-duality implies that at high temperature there are degrees of freedom which still act as fermions, so as to recover a high-temperature version of supersymmetry. In standard statistical mechanics it is precisely the equivalence between Bose, Fermi and Maxwell-Boltzmann statistics at high temperature and low density which fixes the additive constant in the entropy. Our conclusion is that all these things, namely the statistics of the string itself and the physical degrees of freedom that it represents, are involved in the resolution of this problem, which remains a mystery to us. In the problem we have at hand, by inspecting the expressions for the energy and the pressure, one can see that high temperature and low density can correspond either to a stringy or to a field-theoretical regime.
The canonical entropy obtained from (\ref{free-energy}) is
\begin{eqnarray}
S(\beta,L)&=& -\frac{\beta L}{12 \pi \alpha^{'}}
E_{2}\left(i\frac{\beta L}{4 \pi^2 \alpha^{'}}\right) +\frac{\pi
L}{6 \beta} E_{2}\left(i\frac{L}{\beta}\right) - \beta F(\beta,L)
\nonumber \\ &=& 1-\ln{\frac{\beta}{\sqrt{\alpha^{'}}}}+
S_{B}(\beta,L)+
S_{B}\left(\beta,\frac{4\pi^{2}\alpha^{'}}{L}\right)\;,
\label{entropy}
\end{eqnarray}
where
\begin{equation}
S_{B}(\beta,L)=-2
\ln{\eta\left(i\frac{\beta}{L}\right)}-\frac{\pi\beta}{6L}E_{2}
\left(i\frac{\beta}{L}\right)
\end{equation}
is the entropy of a massless bosonic field. From this expression we see that when $\beta$ goes to infinity the entropy diverges as $-\ln{\beta}$, since $S_{B}$ vanishes in that limit. The problem is not that the entropy is negative, but the fact that it is unbounded.
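These statements are easy to verify numerically from truncated $q$-expansions; the following short Python sketch (ours; the truncation order is arbitrary, and the series converge slowly for small $\beta/L$) evaluates $S_{B}$:
\begin{verbatim}
import math

def ln_eta(x, terms=200):
    # ln eta(ix) = -pi x/12 + sum_n ln(1 - q^n),  q = exp(-2 pi x)
    q = math.exp(-2 * math.pi * x)
    return -math.pi * x / 12 + sum(math.log(1 - q**n)
                                   for n in range(1, terms))

def E2(x, terms=200):
    # E_2(ix) = 1 - 24 sum_n n q^n / (1 - q^n),  q = exp(-2 pi x)
    q = math.exp(-2 * math.pi * x)
    return 1 - 24 * sum(n * q**n / (1 - q**n) for n in range(1, terms))

def S_boson(beta, L):
    # S_B(beta, L) = -2 ln eta(i beta/L) - (pi beta / 6 L) E_2(i beta/L)
    x = beta / L
    return -2 * ln_eta(x) - (math.pi * x / 6) * E2(x)

# S_B stays positive and tends to zero as beta/L grows:
print([round(S_boson(b, 1.0), 6) for b in (0.5, 1.0, 2.0, 5.0)])
\end{verbatim}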
$\beta$-duality also leads to an intriguing relationship between the Helmholtz free energy and the average of the entropy, namely,
\begin{equation}
-\beta F(\beta,L)= \frac{1}{2}\left[S(\beta,L)+
S\left(\frac{4\pi^{2}\alpha^{'}}{\beta},L \right)\right]\;.
\end{equation}
Another consequence of $\beta$-duality is that the energy density $\rho(\beta,L)$ vanishes at the self-dual temperature $\beta=2\pi\sqrt{\alpha^{'}}$ for every value of the volume. Similarly, $L$-duality implies the vanishing of the pressure for $L=2\pi\sqrt{\alpha^{'}}$ and any value of $\beta$. This means that at the self-dual temperature and the self-dual size we have $\rho=p=0$. We will see in section 3 that this particular situation defines a static Euclidean universe.
The main question is that of the existence of a connection between $F(\beta,L)$, given by (\ref{free-energy}), and some quantum field theory. As mentioned in \cite{OV-M-1}, (\ref{free-energy}) does not coincide with the free energy of a massless scalar boson. Finding a relationship with fields is equivalent to distinguishing a stringy regime. The presence of $\alpha^{'}$ induces us to identify the field-theoretical regime as that arising in the limit $\beta L \gg 4\pi^{2}\alpha^{'}$, to get
\begin{equation}
F(\beta,L)\rightarrow -\frac{L}{24 \pi \alpha^{'}}
+\frac{1}{\beta}\ln{\left(\frac{\beta}{\sqrt{\alpha^{'}}}\right)}
+\frac{2}{\beta}\ln{\eta\left(i\frac{\beta}{L}\right)}\;,
\end{equation}
since in this approximation we have that
\begin{equation}
-\frac{1}{\beta}\left[\frac{\beta L}{24 \pi \alpha^{'}}
+\ln{\left(\frac{\beta}{\sqrt{\alpha^{'}}}\right)}\right] \sim
-\frac{L}{24 \pi \alpha^{'}}\;,
\label{lim1}
\end{equation}
we finally obtain the free energy of a massless boson in $S^{1}\times \mbox{\bf R}$ with an additional vacuum energy given by $-L/(24\pi\alpha^{'})$. The corresponding equation of state is $\rho-p=1/(\beta L)-1/(12\pi\alpha^{'})$, which in this limit reduces to $p=\rho+1/(12\pi\alpha^{'})$.
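The asymptotics used in these limits follow from the product representation of the Dedekind function (a standard fact which we recall for convenience),
\begin{equation}
\eta(ix)=e^{-\pi x/12}\prod_{n=1}^{\infty}\left(1-e^{-2\pi n x}\right)\;,
\qquad \ln{\eta(ix)}\simeq -\frac{\pi x}{12}\quad(x\gg 1)\;,
\end{equation}
applied above with $x=\beta L/4\pi^{2}\alpha^{'}$.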
On the other hand we can also identify an {\it ultrastringy} regime
in
which $\beta L \ll 4 \pi^{2}\alpha^{'}$. In this limit the Helmholtz
free energy takes the form
\begin{equation}
F(\beta,L)\rightarrow -\frac{2\pi^{3}\alpha^{'}}{3\beta^{2}L}+
\frac{2}{\beta}\ln{\eta\left(i\frac{L}{\beta}\right)}
\end{equation}
which gives rise to the equation of state $p=\rho-4
\pi^{3}\alpha^{'}/(3\beta^{2}L^{2})$. So, in this regime, the string
cannot be described in terms of a quantum field theory, as it must be
if string theory is something more than ordinary quantum field
theory.
We have seen that the free energy (\ref{free-energy}) suffers from a
severe problem, namely, the canonical entropy diverges when $\beta$
goes to infinity. This is a consequence of the presence of the term
\begin{equation}
\frac{1}{\beta}\ln{\left(\frac{\beta}{\sqrt{\alpha^{'}}}\right)}
\end{equation}
in the free energy which ensures $\beta$-duality and the invariance
of
$\beta F(\beta,L)$ under the replacement $\beta \leftrightarrow L$.
In
order to solve the problems with (\ref{free-energy}) we propose that
the correct result for the free energy of the $c=1$ string at finite
temperature is obtained by dropping the term containing
$\ln{(\beta/\sqrt{\alpha^{'}})}$ in (\ref{free-energy}), namely
\begin{equation}
\hat{F}(\beta,L)=F_{B}(\beta,L)+F_{B}\left(\beta,\frac{4\pi^{2}
\alpha^{'}}{L}\right)=
\frac{2}{\beta}\ln{\eta\left(i\frac{\beta}{L}\right)}+
\frac{2}{\beta}\ln{\eta\left(i\frac{\beta L}{4\pi^{2}\alpha^{'}}
\right)}\;.
\label{our}
\end{equation}
The new equation of state is
\begin{equation}
\rho-p=-\frac{1}{12\pi\alpha^{'}} E_{2}\left(i\frac{\beta
L}{4\pi^{2}\alpha^{'}}\right)\;.
\end{equation}
That $\hat{F}$ gives a positive canonical entropy is a trivial exercise, since it is the sum of the entropies of two bosonic fields. What is more important is to check that with (\ref{our}) we recover the field-theoretical regime when $\beta L\gg 4\pi^{2}\alpha^{'}$. Using the expansion of the Dedekind $\eta$-function we get
\begin{equation}
\hat{F}(\beta,L)\rightarrow -\frac{L}{24\pi\alpha^{'}}+
\frac{2}{\beta}\ln{\eta\left(i\frac{\beta}{L}\right)}\;,
\end{equation}
i.e., we get the expression for a massless field with a vacuum energy, in accordance with the result (\ref{lim1}) obtained from (\ref{free-energy}). In the {\it ultrastringy} regime ($\beta L \ll 4\pi^{2}\alpha^{'}$) the limit we get has the same form as the one obtained from (\ref{free-energy}),
\begin{equation}
\hat{F}(\beta,L)\rightarrow -\frac{2\pi^{3}\alpha^{'}}{3\beta^{2}L}
+\frac{2}{\beta}\ln{\eta\left(i\frac{\beta}{L}\right)}\;.
\end{equation}
Nevertheless, we have to use an approximation similar to that in (\ref{lim1}).
As we have pointed out, $\beta\hat{F}(\beta,L)$ exhibits neither $\beta$-duality nor invariance under the replacement $\beta\leftrightarrow L$. Nevertheless, as mentioned in sec. 1, $\beta$-duality does not survive the coupling to Brans-Dicke gravity, so it does not seem to be a fundamental symmetry to preserve. As for the breaking of the invariance under the interchange of $\beta$ and $L$, this is a consequence of the fact that the proposed free energy is not obtained from the toroidal compactification on $S^{1}\times S^{1}$. We have shown \cite{OV-M-1} that the same breakdown of the equivalence between the Helmholtz free energy and the toroidal compactification happens for massless fields living in compact spaces, in particular in $S^{1}\times \mbox{\bf R}$. What we claim is that this is what happens when considering a two-dimensional string in $S^{1}\times\mbox{\bf R}$: the Helmholtz free energy is not given by the toroidal compactification. We state that the free energy of the two-dimensional string can be interpreted as that of two massless fields in $S^{1}\times\mbox{\bf R}$, one living on a circumference of length $L$ and the other on a circumference of length $4\pi^{2}\alpha^{'}/L$ (cf. \cite{OV}). Notice that (\ref{our}) is invariant under the duality transformation $L\rightarrow 4\pi^{2}\alpha^{'}/L$. Since this is a symmetry of the Brans-Dicke action (with the {\it ansatz} (\ref{FRW}) for the metric), it seems to be the preserved fundamental symmetry.
In the next section we study the solutions to the Einstein-Hilbert-Brans-Dicke equations using either (\ref{free-energy}) or (\ref{our}) for the Helmholtz free energy.
\section{Cosmological solutions}
We are interested in the cosmological solutions of our stringy universe. We therefore consider our string propagating in the presence of a background metric $g_{\mu\nu}$ and a background dilaton field $\Phi$. The condition for this background to define a CFT (to lowest order in $\alpha^{'}$) is the vanishing of the $\beta$-functions associated with the background fields. The equations so obtained for $g_{\mu\nu}$ and $\Phi$ can also be derived from the action principle with the following action functional \cite{TV}
\begin{equation}
S_{eff}= \int d^{2}\xi \sqrt{-g}
\left[\Phi(R+\frac{16}{\alpha^{'}})
+ \frac{1}{\Phi}\nabla_{\mu}\Phi\nabla^{\mu}\Phi\right]
\end{equation}
which corresponds to the Brans-Dicke action (\ref{EHBD}) with $\omega=-1$ and $\Lambda=-8/\alpha^{'}$. The coupling to matter can be made via the perfect-fluid energy-momentum tensor
\begin{equation}
T_{\mu\nu}=(p+\rho)u_{\mu}u_{\nu}-pg_{\mu\nu}\;.
\end{equation}
The equations for the background fields are then
\begin{eqnarray}
-\frac{8}{\alpha^{'}}g_{\mu\nu}&=&\frac{8\pi}{\Phi}T_{\mu\nu}-
\frac{1}{\Phi^{2}}
\left(\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{1}{2}g_{\mu\nu}
\nabla_{\sigma}\Phi\nabla^{\sigma}\Phi\right)\nonumber \\ &+&
\frac{1}{\Phi}\left(\nabla_{\mu}\nabla_{\nu}\Phi
-g_{\mu\nu}\Box\Phi\right)\;,
\label{eqn1}\\
R+\frac{16}{\alpha^{'}}&=&
-\frac{1}{\Phi^{2}}g^{\mu\nu}\nabla_{\mu}\Phi\nabla_{\nu}\Phi+
\frac{2}{\Phi}\Box\Phi\;,
\label{eqn2}
\end{eqnarray}
together with the integrability condition imposed by the local
conservation of the energy-momentum tensor $\nabla_{\mu}
T^{\mu\nu}=0$.
For the background metric we use the Friedmann-Robertson-Walker {\it ansatz} (\ref{FRW}) and a space-independent dilaton field $\Phi(t)$. In this case eqs. (\ref{eqn1}) and (\ref{eqn2}) can be rewritten as
\begin{eqnarray}
2\Phi^{2}\frac{\ddot{L}}{L}+\frac{16}{\alpha^{'}}\Phi^{2}
&=&\dot{\Phi}^{2}-2
\Phi\ddot{\Phi}-2\Phi\dot{\Phi}\frac{\dot{L}}{L}\;,
\label{BD1} \\
\dot{\Phi}^{2}+2\Phi\dot{\Phi}\frac{\dot{L}}{L} &=&
16\pi\Phi\rho(\beta,L)-\frac{16}{\alpha^{'}}\Phi^{2}\;,
\label{BD2} \\
\Phi\ddot{\Phi}-\frac{1}{2}\dot{\Phi}^{2} &=& -8\pi \Phi
p(\beta,L)-\frac{8}{\alpha^{'}}\Phi^{2}\;, \label{1.3}
\end{eqnarray}
and the energy-momentum conservation
\begin{equation}
\dot{\rho}(\beta,L)+\frac{\dot{L}}{L}[\rho(\beta,L)+p(\beta,L)]=0.
\label{tmunu}
\end{equation}
It is easy to check that eq. (\ref{BD1}) is equivalent to the conservation of $T_{\mu\nu}$, so we can drop it and be left with eqs. (\ref{BD2}), (\ref{1.3}) and (\ref{tmunu}), which together with the equation of state determine our system in terms of $\Phi(t)$, $L(t)$ and $\beta(t)$.
Let us now make some comments about $L$-duality. We have seen that
$\hat{F}(\beta,L)$ (and $F(\beta,L)$) are invariant under the
replacement $L\rightarrow 4\pi^{2}\alpha^{'}/L$. This implies that
the
energy density and the pressure transform according to
\begin{equation}
\rho(\beta,L)=\frac{4\pi^{2}\alpha^{'}}{L^{2}} \rho \left(\beta,
\frac{4\pi^{2}\alpha^{'}}{L}\right)\;, \hspace{2cm}
p(\beta,L)=-\frac{4\pi^{2}\alpha^{'}}{L^{2}}p
\left(\beta,\frac{4\pi^{2}\alpha^{'}}{L}\right)\;.
\end{equation}
It can be easily checked that these transformations of the energy density and the pressure, together with the transformation of the dilaton field $\Phi(t)$, make the system of differential equations (\ref{BD2}), (\ref{1.3}) and (\ref{tmunu}) invariant under $L$-duality (in fact, the equations map into combinations of one another). A pending problem is that of clarifying the dynamical meaning of this symmetry. It has been argued in \cite{BV,TV} that the duality property would imply an effective minimum length $L_{min}=2\pi\sqrt{\alpha^{'}}$ for the universe. Nevertheless, in \cite{BV} the arguments leading to this conclusion rely on the prescription that the definition of localized states in terms of the Fourier transform of the momentum modes, which are the light states when $L>L_{min}$, has to be replaced by the definition in terms of the Fourier transform of the winding modes, which are the light states when $L$ decreases below the self-dual length. In fact, the only conclusion one can extract from $L$-duality using equations (\ref{BD2})-(\ref{tmunu}) is that, for any solution $L(t)$, $\Phi(t)$ and $\beta(t)$, the new set of functions
\begin{equation}
L^{'}(t)=\frac{4\pi^{2}\alpha^{'}}{L(t)}, \hspace{1cm}
\Phi^{'}(t)=\frac{L^{2}(t)}{4\pi^{2}\alpha^{'}}\Phi(t)\;,
\hspace{1cm} \beta^{'}(t)=\beta(t)\;,
\label{dual}
\end{equation}
is also a solution to (\ref{BD2})-(\ref{tmunu}). In fact, as we will see in a moment, there are solutions for which $L(t)$ goes through the self-dual length and ends at $L=0$. Solutions of this kind exist both when we take $F(\beta,L)$ as the free energy and when $\hat{F}(\beta,L)$ is chosen.
With respect to $\beta$-duality, the situation is somewhat different.
The transformation property of $F(\beta,L)$ with respect to the
duality replacement $\beta\rightarrow 4\pi^{2}\alpha^{'}/\beta$
implies the following transformation rules for the energy density and
the pressure
\begin{equation}
\rho(\beta,L)=-\frac{4\pi^{2}\alpha^{'}}{\beta^{2}}\rho \left(
\frac{4\pi^{2}\alpha^{'}}{\beta},L\right)\;, \hspace{1cm}
p(\beta,L)=\frac{4\pi^{2}\alpha^{'}}{\beta^{2}} p\left(
\frac{4\pi^{2}\alpha^{'}}{\beta},L\right)\;.
\end{equation}
It is now clear from eqs. (\ref{BD2})-(\ref{tmunu}) that $\beta$-duality is broken by the coupling to two-dimensional dilatonic gravity. This is because, unlike the case of $L$-duality, the geometric quantities, i.e., $L(t)$ and $\Phi(t)$, do not transform under $\beta$-duality, so one cannot reabsorb the change induced in the energy density and the pressure by a change in the dynamical variables $L$ and $\Phi$.
When solving the classical equations of motion for the fields, there is a set of variables in terms of which the expressions simplify notably and which have a clear physical meaning. Let us define two new functions $u(t)$, $v(t)$ in terms of $\beta(t)$ and $L(t)$
\begin{equation}
u(t)=\frac{L(t)}{\beta(t)}\;, \hspace{1cm} v(t)=\frac{\beta(t)
L(t)}{4\pi^{2}\alpha^{'}}\;.
\end{equation}
Here $v$ is a kind of {\it stringiness} parameter; when $v\gg 1$ we are in what we called above the field-theoretical regime. In that case, since the string Helmholtz free energy (either $F$ or $\hat{F}$) reduces to the free energy of a massless boson in $S^{1}\times\mbox{\bf R}$ plus a vacuum energy, we expect the universe to behave as in the cases studied in \cite{OV-M-1} with the appropriate values of $\omega$ and $\Lambda$. On the other hand, the limit $v\ll 1$ corresponds to the {\it ultrastringy} regime in which the matter content of the universe is governed by the equation of state deduced in sec. 2.
When studying the numerical solutions to (\ref{BD2})-(\ref{tmunu}) with the matter described by (\ref{our}), one finds three different kinds of solutions depending on the initial conditions $u_{0}$, $v_{0}$, $\Phi_{0}$ and $\dot{\Phi}_{0}$. The first one corresponds to field-theoretical-like universes in which $v(t)\gg 1$ for every value of $t$. In fig. \ref{FieldL} we have plotted $L(t)$ for this class of solutions. The universe contracts from an infinite size and reaches a minimum length that, however, is much larger than the self-dual size. The dilaton field (fig. \ref{FieldD}) grows and, after reaching a maximum, begins to decrease. In fig. \ref{FieldT} we have plotted the time evolution of the temperature for this universe. We see that, as corresponds to a field-theoretical universe, the temperature drops to zero when the size of the universe goes to infinity (cf. the results obtained in ref. \cite{OV-M-1}).
A second kind of solution arises when we consider less restrictive initial conditions. In this case we get a universe whose scale factor $L(t)$ comes from infinity and, after reaching the self-dual length, keeps on decreasing until it reaches $L=0$ (fig. \ref{twoL}). The dilaton, on the other hand, grows monotonically, as shown in fig. \ref{twoD}. The temperature, nevertheless, has a quite stringy behavior (fig. \ref{twoT}). For small $t$, when the universe is big, the temperature is low, which is consistent with a field-theoretical description. However, when the size of the universe gets below the self-dual size, the temperature, instead of continuing to rise (as one would expect from field theory), begins to decrease, and it vanishes when the universe reaches zero size. This behavior is a consequence of $L$-duality, because the entropy satisfies $S(\beta,L)=S(\beta,4\pi^{2}\alpha^{'}/L)$. Since the integrability condition $\nabla_{\mu}T^{\mu\nu}=0$ implies that the entropy is conserved, we have
\begin{equation}
\frac{d}{dt}S(\beta,L)=0=\frac{\partial S}{\partial
\beta}\dot{\beta} +\frac{\partial S}{\partial L}\dot{L}\;.
\end{equation}
The scale factor $L(t)$ being a monotonically decreasing function, we can parametrize the evolution of $\beta$ by $L$. Then, from the last relation we have
\begin{equation}
\frac{d\beta}{dL}=\frac{\dot{\beta}}{\dot{L}}= -\frac{\partial
S/\partial L}{\partial S/\partial \beta}[\beta(L),L]\;.
\end{equation}
Now, implementing duality it is easy to check that
\begin{equation}
\frac{d\beta}{dL}[\beta(L),L]\,\frac{d\beta}{dL}
\left[\beta\left(\frac{4\pi^{2}\alpha^{'}}{L}\right),
\frac{4\pi^{2}\alpha^{'}}{L}\right] \leq 0\;,
\end{equation}
which implies that $d\beta/dL=0$ at $L=2\pi\sqrt{\alpha^{'}}$.
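This can be seen in one line (our remark): differentiating the duality relation $S(\beta,L)=S(\beta,4\pi^{2}\alpha^{'}/L)$ with respect to $L$ gives
\begin{equation}
\frac{\partial S}{\partial L}(\beta,L)=
-\frac{4\pi^{2}\alpha^{'}}{L^{2}}\,
\frac{\partial S}{\partial L}\left(\beta,\frac{4\pi^{2}\alpha^{'}}{L}\right)\;,
\end{equation}
so that at the self-dual length both sides refer to the same point and $\partial S/\partial L$ must vanish; hence $d\beta/dL=-(\partial S/\partial L)/(\partial S/\partial\beta)=0$ there.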
We have said above that the meaning of $L$-duality is that the solutions can be paired according to (\ref{dual}). When we apply the duality transformation to the second kind of solutions, describing a collapsing universe, we get new solutions which closely resemble the time reversal of the original ones, i.e., we have a scale factor that starts at $L=0$ and grows monotonically, passing through the self-dual value and reaching infinity in a finite time. On the other hand, when we apply (\ref{dual}) to the first kind of solutions (the field-theoretical-like ones), we get a new class featuring a string that lives in the {\it ultrastringy} regime all the time. The scale factor $L(t)$ starts and ends at zero length and reaches a maximum value much smaller than the self-dual one (fig. \ref{usL}). With respect to the dilaton field, we get a monotonically increasing $\Phi(t)$, very different from the field-theoretical type (fig. \ref{usD}). The temperature, on the contrary, is the one corresponding to the field-theoretical-like case (fig. \ref{FieldT}).
One can ask whether there is a solution describing a static universe. Imposing the condition $\dot{L}=\ddot{L}=0$ on eqs. (\ref{BD2})-(\ref{tmunu}), we find that there is one, given by
\begin{equation}
L(t)=2\pi\sqrt{\alpha^{'}}, \hspace{1cm} \beta(t) \sim
2\pi\sqrt{\alpha^{'}} \times 0.523522\;, \hspace{1cm}
\Phi(t)=C\exp{\left(i \frac{4t}{\sqrt{\alpha^{'}}}\right)}\;,
\end{equation}
where the values of $L$ and $\beta$ are fixed by the requirement that the energy density and the pressure vanish. We dislike complex dilaton fields, since the $\sigma$-model dilaton is a real field. This can be fixed by performing a Wick rotation to imaginary time, $t\rightarrow it$. In that case we get a solution to Euclidean dilatonic gravity with constant scale factor. This solution can be interpreted as a kind of gravitational instanton interpolating between two universes, both at the self-dual size, one of which is in a strong-coupling regime ($\Phi\ll 1$) and the other in a weak-coupling one ($\Phi\gg 1$), since we relate the string coupling constant to the vacuum expectation value of the dilaton field. Besides, as is the case with most of the known gravitational instantons, this configuration of the fields renders the geometric part of the action zero. One can see that the static situation results from the action of the negative Casimir energy, which compensates the positive contribution of the thermal motion to the internal energy.
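Indeed, a direct check (ours) shows where these values come from: with $\rho=p=0$ and $\dot{L}=0$, the {\it ansatz} $\Phi(t)=Ce^{i\omega t}$ reduces both (\ref{BD2}) and (\ref{1.3}) to the same algebraic condition
\begin{equation}
\omega^{2}=\frac{16}{\alpha^{'}}\qquad\Longrightarrow\qquad
\omega=\frac{4}{\sqrt{\alpha^{'}}}\;,
\end{equation}
in agreement with the exponent in $\Phi(t)$ above.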
Now we are going to study how all this structure of solutions changes
when we substitute the Helmholtz free energy given by (\ref{our}) by
the one gotten from the toroidal compactification, i.e.
(\ref{free-energy}). The first thing we can see is that, since in
this
case $F(\beta,L)$ recovers the field theory regime when $v\gg 1$, we
are operatoralso going to have field-theoretical-like solutions of
the
type described above. On the other hand the {\it ultrastringy}
solutions gotten from them by performing the duality transformation
(\ref{dual}) are also to be present (let us recall that $F$ does
enjoy
$L$-duality as well as $\beta$-duality). An important point
nevertheless is that now we have a kind of {\it intermediate}
solutions of the type depicted in figs. \ref{intL} and \ref{inT}.
This intermediate (marginal) case, sitting between the first
(field-theoretical) and the second types, can be obtained by
fine-tuning the initial conditions. The corresponding universe has
a scale factor (see fig. \ref{intL}) which comes from infinity and,
after reaching the self-dual length, it stays at that value of $L$
for
a long time and finally drops to $L=0$. Of course, by applying
the $L$-duality transformation (\ref{dual}) we get a similar exploding solution
starting at $L=0$. The temperature now follows a curve similar to
that
of fig. \ref{twoT}, i.e., it is zero in both limits, but with a
pronounced plateau (fig. \ref{inT}). Besides, we also find those
solutions describing a monotonously contracting (resp. expanding)
universe ending (resp. beginning) at $L=0$ of the type described by
figs. \ref{twoL}-\ref{twoT}. The instanton-like solution described
above, is also present but now the existence of $\beta$-duality sets
the temperature to the self-dual value.
In both cases (for $\hat{F}$ and $F$) the field equations are
invariant under time reversal. This means that, together with the
type
of solutions described above, we also have their time-reversed ones,
which differ by the sign of $\dot{\Phi}_{0}$. All the plots have been
made in units in which $\alpha^{'}=1$.
\section{Conclusions}
We have studied the cosmological solutions to dilatonic gravity using
as the source of the gravitational field a gas of two-dimensional
strings. The first thing we have found is that the expression of the
Helmholtz free energy given by (\ref{free-energy}) and deduced in
\cite{DKL} presents an important problem; it gives a canonical
entropy
which is negative and diverges in the low temperature limit. As a
solution we have proposed for the Helmholtz free energy another
expression, (\ref{our}), in which we drop the term which produces the
problem. The principal feature of this new expression is that it
breaks $\beta$-duality and, consequently, invariance under the
interchange $\beta\leftrightarrow L$. The new expression can be
interpreted as the free energy for two massless bosons one of which
lives in a circumference of length $L$ and the other one in a
circumference of length $4\pi^{2}\alpha^{'}/L$ (cf. \cite{OV}). This
breaking of the equivalence between the toroidal compactification and
the Helmholtz free energy can be traced back to the massless bosonic
field in $S^{1}\times \mbox{\bf R}$ \cite{OV-M-1}. Since the only
on-shell state of the $c=1$ string (apart from the topological
discrete states, which do not contribute to the partition function
\cite{KP}) is the center of mass, which can be effectively described by
a massless bosonic field \cite{P}, it seems natural to keep this
equivalence at finite temperature. Indeed, the effective image of the
$c=1$ string (or equivalently, the critical two-dimensional one) is
that of its center of mass. If we look at it as a point on a
cylinder,
its hamiltonian would be given by $\hat{H}_{1}=\hat{m}/L$ where
$\hat{m}$ is the operator whose eigenvalues are the momentum integer
numbers of the string. On the other hand, since the string is an
extended object it can wrap around the spatial compactified dimension
and we have another contribution to its energy given by
$\hat{H}_{2}=\hat{n}L/(4\pi^{2}\alpha^{'})$, $\hat{n}$ being the
winding number operator with integer eigenvalues $n$. Then, from a
target-space point of view the dynamics of the string is governed by
the hamiltonian
\begin{equation}
\hat{H}=\frac{\hat{m}}{L}+\frac{\hat{n}L}{4\pi^{2}\alpha^{'}}\;.
\end{equation}
To compute the thermal partition function of the multi-string system
we need to make a trace over the second quantized states
$|m_{i},n_{i}\rangle$ which are eigenvectors of the operators
$\hat{m}_{i}$ and $\hat{n}_{i}$. Since the complete Hilbert space is
the direct product of the winding and the momentum sectors the
partition function factorizes and this is reflected in the fact that
the Helmholtz free energy is the sum of two terms which correspond to
two massless fields one in a space with length $L$ and the other one
in the dual space. Now it is important to notice that the zero mode
in
both traces has to be suppressed. Incidentally, the partition function
in the zero temperature limit computed in \cite{P} can be interpreted
as the partition function of two massless bosonic fields, one at a
temperature $T=1/L$ and the second one at $T=L/(4\pi^{2}\alpha^{'})$.
After studying the cosmological solutions, using both $F$ and
$\hat{F}$ we arrive at several conclusions. First of all, we find
that
there is no dynamical rebound of the scale factor when reaching the
self-dual size. Secondly we find that our string universe is not free
from singularities. In fact, we have solutions that begin or end at
$L=0$. At first sight this is quite surprising because they are
absent
in a field-theoretical universe (see \cite{OV-M-1}). But one of the
claims in \cite{BV} is that windings prevent expansion and that is
indeed what they do so as to reach $L=0$. What we do not see is the
expected behaviour of the momentum states preventing the contraction.
This effect would be the result of changing from the momentum modes
when $L>2\pi\sqrt{\alpha^{'}}$ to the winding modes when
$L<2\pi\sqrt{\alpha^{'}}$. In some sense we can say that the existence
existence
of this kind of singular solutions is implied by duality since the
equivalence of $L=0$ and $L=\infty$ is a result of this symmetry.
It is really important to notice that when all the spatial dimensions
are compact the thermal free energy is not given by the corresponding
toroidal compactification if massless fields are involved. For
example, this means that the free energy for the heterotic
supersymmetric string with all the spatial dimensions compactified is
not given by the toroidal compactification because the massless
excitations of the string have to be treated with care. At least this
is pretty clear from the analogue model in which we compute the free
energy summing over the field content. Using this approach in a
proper
time representation of the free energy we know that the massless
sector is not going to be correctly described. On the contrary, the
modular invariant result for the associated toroidal compactification
(see second reference in \cite{OV-M2}) does not present any
divergence
for any value of $\beta$ when ${\rm Im}\,\tau$ goes to infinity, where
$\tau$
is the standard one-loop modular parameter. What happens is that the
connection between the modular invariant result and the proper time
implementation of the analogue model is broken for any value of
$\beta$. If we forget about this way of representing the analogue
model we can try to implement it by directly writing the
contribution
of the massless sector as computed in \cite{OV-M-1} and then adding
the contribution of the massive states in the proper time
representation, which is perfectly well defined. It seems that the
question of modular invariance is then no longer well posed.
As a final comment, the qualitative effect of substituting
$\hat{F}(\beta,L)$ by $F(\beta,L)$ is only to add the pseudo-statical
solutions of fig. \ref{intL}; we preserve the other three types of
solutions discussed in sec. 3.
\section*{Acknowledgements}
We thank J.\,L.\,F. Barb\'on for valuable discussions. We acknowledge
financial support by the CICyT Project No. AEN-90-0272. The work of
M.A.V.-M. has also been partially supported by a Comunidad Aut\'onoma
de Madrid Predoctoral Fellowship.
\label{sect:intro}
\small
Development of high-fidelity image simulators has become commonplace in large instrumentation projects in astronomy~\cite{Peterson:2014,Peterson:2015,Leschinski2016,Rowe2015,Sivaramakrishnan2002,Dobke2010,Hilbert2017,Knight2012}. In addition, a comprehensive physics-based method capable of simulating images from the source to the readout has been developed~\cite{Peterson:2014,Peterson:2015}. For example, galaxy morphology is altered by the atmosphere (for ground-based observatories), geometric aberrations in the optical train, diffraction, mirror micro-roughness, surface misalignments/perturbations, figure errors, and a variety of detector effects. Such systematics have important effects on morphological studies of astronomical objects. A good example is weak lensing, since systematically induced galaxy ellipticity can contaminate shear measurements. Future and current dark matter surveys utilizing weak lensing as a probe of cosmological density fluctuations are limited by systematics\cite{Hikage:2018,Zhan2018,DES2018,Okabe:2010}. As telescopes become larger and their instruments become more sensitive, large extragalactic surveys will produce unprecedented levels of image data throughout the 2020s. This sharp rise in the available statistics must be informed by an equally transformative understanding of the systematics in these images.
The existing image simulation paradigm is mainly to use a parameterized point spread function (PSF) and create an image using the instrument's plate scale and background noise statistics. Even the more detailed approach of using optical path difference maps generated from a wavefront error budget is rarely employed. However, tools such as WebbPSF\cite{Perren:2012,Perren:2014} employ this method for simulating images. WebbPSF relies on calculating the PSF from libraries of optical path difference maps. See Table~\ref{table:1} for a brief summary of PhoSim versus WebbPSF capabilities.
In this work, we present PhoSim-NIRCam: a comprehensive, end-to-end image simulator of the James Webb Space Telescope's (JWST) Near-Infrared Camera (NIRCam) using a physics-based photon Monte Carlo code. This code, The Photon Simulator\cite{Peterson:2015,Peterson:2014} (PhoSim), enables detailed study of the optical system and detector effects including the field- and wavelength-dependent PSF. Forward-modeling approaches such as those presented in this work are still rarely employed for astronomical image simulations. Here, we study telescope/instrument systematics in images produced by PhoSim-NIRCam, which simulates one photon at a time. Notably, PhoSim-NIRCam can simulate both the diffraction and geometric aberration components of the PSF across the field of view. Modular PhoSim commands can be used with ease to turn various physics on and off.
Additionally, we report on changes made to the PhoSim source code by the authors of this paper to simulate infrared space-based telescopes. As of version 4.0, the NIRCam instrument's imaging modes are fully implemented in PhoSim. The PhoSim code is publicly available and open-source\footnote{\footnotesize\url{https://bitbucket.org/phosim/phosim_release}}.
\subsection{JWST/NIRCam}
JWST~\cite{Gardner:2006} is NASA's next-generation flagship space-based observatory. JWST will be located at Earth-Sun Lagrange point 2, and is currently slated for launch in 2021. The observatory has a planned minimum mission lifetime of five years with a commissioning phase of six months. Its primary imaging instrument, NIRCam~\cite{Greene:2010} is a dual-channel optical system designed to operate in the wavelength range 0.6 $\mu$m to 5.0 $\mu$m. NIRCam has a broad range of scientific goals including subjects where telescope/instrument systematics are important. See the list of approved guaranteed-time observers (GTO) and director's discretionary early release science (ERS) programs for specific examples.
\subsection{The Photon Simulator}
PhoSim is a set of physics-based, end-to-end, ab initio photon Monte Carlo codes originally developed for use with the Large Synoptic Survey Telescope\cite{LSST2009,Ivezic:2008} (LSST).
While LSST is a very different telescope than JWST, the PhoSim architecture is generalized in a way that makes implementing new telescopes straightforward. We simply add a new set of instrument and site characteristics (ISC) data files that specify the details of JWST/NIRCam and its environment. This allows us to generate high-fidelity NIRCam images quickly while taking advantage of PhoSim's extensive validation and robustness obtained over its more than 10 year development period.
One important benefit of PhoSim is its speed. With multithreading capability, PhoSim-NIRCam can produce images from a moderately sized catalog on a modern laptop or desktop computer in just a few minutes, whereas a PSF from a single faint star can be simulated in milliseconds.
Additionally, PhoSim can be run on a single laptop with a modern graphical user interface (GUI; Fig.~\ref{fig:gui}) or command line, while also being scalable to grid or cluster computing. Large data challenges are already underway for LSST to test its survey image processing pipeline.
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\columnwidth]{gui.jpg}
\caption{\label{fig:gui}
Screenshot of the PhoSim version 4.0 GUI running on MacOS. The telescope/instrument must be specified. All other inputs are optional; PhoSim will choose settings from distributions or calculate them self-consistently.
}
\end{figure}
Next, the physics-based nature of PhoSim-NIRCam calculations means that complicated effects in images naturally emerge from the underlying physics once the proper ISC files are provided. In its more than a decade of development as the official image simulator for the LSST project, PhoSim's physics have been successfully validated against the expected behavior from optical models such as Zemax.
Perhaps the most powerful feature of PhoSim is the option to independently turn various components of its physics on and off. Using simple, one-line commands, one can determine how each component of the physics in PhoSim affects the final images. In this work, for example, images are simulated with and without diffraction to investigate NIRCam PSFs.
PhoSim works by performing a comprehensive photon-by-photon Monte Carlo raytrace through the atmosphere (which we turn off using a physics command), telescope, camera, and detector. The effects of diffraction are included by performing a Fourier transform of the JWST entrance pupil and kicking each photon's angle proportionally. The result is an output of Flexible Image Transport System (FITS) images after the raytrace (electron image) and after the readout procedure (amplifier image). The electron image essentially provides a pre-calibrated image, whereas the amplifier images replicate the noise seen in raw data.
\begin{table}[]
\small
\centering
\begin{tabular}{l|ll}
& WebbPSF & PhoSim-NIRCam \\ [0.5ex] \hline
Optics: & Library of OPD maps & Full 3-D optical model \& perturbation capability \\
Simulates: & PSF only & PSF and full image \\
Modes: & Imaging and coronagraphic modes & Imaging modes only \\
Detector: & No detector model & HgCdTe detector and limited noise model \\
Interface: & GUI or python API & GUI or command-line interface \\ [1ex]
\end{tabular}
\caption{Summary of PhoSim-NIRCam versus WebbPSF capabilities.}
\label{table:1}
\end{table}
\section{Implementation}
In this work, we report on the implementation of several new PhoSim features in version 4.0 to simulate infrared and space-based telescopes: proper treatment of the pupil diffraction, Mercury Cadmium Telluride (MCT) detectors, and a mode for MULTIACCUM readout patterns.
NIRCam is a dual-channel optical system for short wavelength (SW) and long wavelength (LW) infrared (IR) light bifurcated with a dichroic beam splitter. Two ``fully redundant'' and ``functionally identical'' modules, denoted A and B, contain both channels --- meaning four focal plane assemblies in total\cite{Greene:2010}. However, there are several key differences between the modules such as the throughput curves and detector parameters. Thus, we create four separate ISC file sets for each NIRCam channel/module: \verb$nircam_sw_a$, \verb$nircam_sw_b$, \verb$nircam_lw_a$, \verb$nircam_lw_b$.
\section{Methodology}
Setting aside additional effects, the total PSF for space telescopes comprises a geometric component (from aberrations in the optical train) and a diffraction component (from the limiting pupil geometry). In the diffraction-limited regime the characteristic size of the geometric component of the PSF is negligibly small compared to the diffraction limit. However, in many cases the geometric component of the PSF is non-negligible, such as when the image is out of focus or when aberrations are large at certain wavelengths or field positions. When no atmosphere is considered, the condition for this intermediate case occurs when the characteristic sizes of the geometric and diffraction components of the PSF are roughly equal,
\begin{equation}
\sigma_g \sim \frac{\lambda F}{D}
\end{equation}
where $\lambda$ is the photon wavelength, $F$ is the effective focal length, $D$ is the effective pupil diameter, and $\sigma_g$ is the characteristic size of the geometric component of the PSF.
We can rewrite the above expression as,
\begin{equation}
\frac{\lambda f}{\sigma_g} \sim 1
\end{equation}
upon substituting for the focal ratio $f=F/D$. For an Airy-like PSF, this condition is: $\sigma_g \sim 0.42 \lambda f$, or in terms of the diffraction-limited angular size: $\theta \sim 1.22 \lambda / D$.
We are interested in the subtleties of this regime. Additionally, NIRCam is only required to be diffraction-limited for photon wavelengths above 2 $\mu$m. Thus, a comprehensive approach that accounts for both geometric and diffraction components is required to capture the full PSF morphology below 2 $\mu$m. To accomplish this, a complete physical description of JWST/NIRCam is implemented for each component of the PSF. When simulating the geometric component of the PSF, PhoSim uses a Monte Carlo raytrace method through a comprehensive specification of the optical prescription. When simulating the diffraction component of the PSF, we use the standard results in the Fraunhofer regime where the diffraction component of the PSF is given by the squared modulus of the Fourier transform of the limiting pupil geometry.
Using its powerful physics commands, PhoSim has the capability to simulate any and all components of the PSF independently or together in various combinations. The following subsections detail our methodology for reproducing the major components of the PSF.
\subsection{Geometric Optics}
The NIRCam optical design is complicated due to the unique constraints of a space-based observatory. There are many flat fold mirrors that reflect light back and forth through a series of lenses that achieves the long focal length in a compact design. This means that there are a large number of optical surfaces (28 for the SW channel; 26 for the LW channel) with various orientations and positions. The design is also slightly off-axis by about $0.13$ degrees. Since PhoSim is physics-based, correctly implementing the optical design is essential to producing realistic images with the expected PSF and field distortion.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{nircam.jpg}
\\
(a) \hspace{6.5cm} (b)
\caption{\label{fig:nircam}
Visualization of a PhoSim Monte Carlo photon raytrace through NIRCam SW (a) and LW (b) channels. Mirrors are shown in gray, lenses in blue, filters in yellow, and detector chips in black. Only a sample of rays is shown to avoid clutter; as a result, only a few of the plotted photons appear beyond the filters. The raytrace from previous OTE surfaces is also included, but not shown here.}
\end{figure}
The PhoSim optics model is a full description of the optical prescription of JWST/NIRCam converted from two Lockheed Martin flight-ready Zemax lens files: \verb$L050713FLT.zmx$ (LW channel) and \verb$S050713FLT.zmx$ (SW channel). The Zemax lens files contain a complete description of the optical system where the spatial coordinates of each surface are defined sequentially \cite{Zemax:2011}. These models approximate the OTE primary mirror as a single cylindrically-symmetric surface.
The JWST optical train can be divided into two components: the optical telescope element (OTE) and the integrated science instrument module (ISIM). The OTE is a three-mirror anastigmat design composed of the segmented primary mirror plus three additional surfaces (including the planar fine steering mirror). The ISIM houses all four of the observatory's instruments and is cryogenically cooled to 37 K to reduce thermal noise. Together, the entire optical train is referred to as the optical telescope element/integrated science instrument module (OTIS). OTIS testing was completed successfully in 2017\cite{Kimble2018}.
In this work, we implement the entire OTIS optical prescription from the flight-ready Zemax design files which exist for both modules (Fig.~\ref{fig:nircam}). Each optical surface has parameters describing its coordinates and shape. The coordinate parameters are 3 Euler angle orientations and 3 spatial vertex positions. The shape parameters are contained in the typical equation for cylindrically symmetric surface,
\begin{equation}
z(r)=\frac{r^2}{R\left(1+\sqrt[]{1-(1+\kappa)\frac{r^2}{R^2}}\right)}+\sum_{i=3}^{10}\alpha_ir^i
\end{equation}
where the sag of the surface $z$ is expressed in terms of the radial distance from the optical axis $r$, the radius of curvature $R$, the conic constant $\kappa$, and the asphere coefficients $\alpha_i$. The inner and outer radii are specified for each surface. The finite elements of the surface map are far smaller than the wavelength of optical and infrared light. The primary mirror is specified in this manner, where the inner and outer radii have been calibrated to reproduce the expected wavefront error and photometry.
In addition, we include models for the dispersive index of refraction of the intervening media from Zemax. Five materials are modeled in PhoSim for NIRCam: BaF$_2$, LiF, ZnSe, Si, and fused silica. The cryogenic indices of refraction for each material are described as a function of wavelength $\lambda$ by either the Sellmeier equation \cite{Sellmeier:1871}:
\begin{equation}
\begin{aligned}
n(\lambda)=\sqrt{1+\frac{B_1\lambda^2}{\lambda^2-C_1}+\frac{B_2\lambda^2}{\lambda^2-C_2}+\frac{B_3\lambda^2}{\lambda^2-C_3}}
\end{aligned}
\end{equation}
or the Schott equation \cite{Zemax:2011}:
\begin{equation}
\begin{aligned}
n(\lambda)=\sqrt{a_0+a_1\lambda^2+a_2\lambda^{-2}+a_3\lambda^{-4}+a_4\lambda^{-6}+a_5\lambda^{-8}}.
\end{aligned}
\end{equation}
where the various coefficients are specified for a material.
The throughput curves (reflection, transmission, and absorption probability as a function of wavelength and incident angle) of the entire optical system are taken from the JWST user documentation\cite{STScI2017}. We divide out the detector quantum efficiency, which is instead calculated with the electron conversion physics within PhoSim. The desired NIRCam channel, module, and filter can be selected from the GUI or by specifying the appropriate command-line inputs to PhoSim.
\subsection{Diffraction}
The diffraction component of the PSF is given by the squared modulus of the Fourier transform of the electric field amplitude over the pupil plane,
\begin{equation}
\text{PSF}(\hat{n})=\left|\frac{1}{A}\int_Ae^{ik\hat{n}\cdot \vec{r}}\, e^{i\phi(\vec{r})}\,d^2\vec{r}\right|^2
\end{equation}
where the wavenumber is $k=2\pi/\lambda$, $\hat{n}$ is the unit vector in the direction of a field point on the focal plane, $\phi$ is the phase shift induced by the changing index of refraction along the optical train, and $A$ is the pupil screen function transmission probability.
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\columnwidth]{pupil.jpg}
\\
(a) \hspace{5.7cm} (b)
\caption{\label{fig:pupil}
JWST revision V pupil screen before zero-padding (a) and after (b).}
\end{figure}
When simulating the diffraction component of the PSF, the basis of the algorithm is a fast Fourier transform (FFT) of the 2-D pupil geometry. Due to the nature of the discrete FFT algorithm, we must pad the array with zeros to ensure enough frequency bins are created to obtain reasonably accurate results (Fig.~\ref{fig:pupil}). Then, we create a cumulative probability distribution,
\begin{equation}
P(\vec{r}) = \int_A \left|\text{FFT}(A(\vec{r}))\right|^{2} d\vec{r}
\end{equation}
The result is to kick photons' incident angle $\theta$ before the raytrace through the optics by,
\begin{equation}
\delta\theta=\frac{\left|\vec{r}\right|\lambda}{\gamma}
\end{equation}
where $\vec{r}$ is sampled by inverting $P(\vec{r})$ with a uniformly distributed random number, $\lambda$ is the wavelength of the photon, and $\gamma$ is the zero-padding factor. Due to the extra zero-padding, we lose some detail in the pupil image since it must be scaled down to fit inside an array of reasonable size ($1024 \times 1024$ pixels). However, our analysis below demonstrates that our approximation is reasonably good at replicating the expected diffraction pattern and PSF size with a padding factor of $\gamma=8$ (Fig.~\ref{fig:diffEx} and \ref{fig:psfsize3}).
Presently, the OTE is described by a single cylindrically-symmetric surface for the geometric raytrace, and a tricontagon-shaped pupil screen (Fig.~\ref{fig:pupil}) which is FFT'd separately. Photons are then kicked by an angle $\delta\theta$ (from the result of the FFT) and rays are propagated geometrically to the focal plane to simulate diffraction from the tricontagon aperture. Although the higher-order spatial content of the geometric PSF is affected by the segmented primary mirror surface geometry, we show the convergence to the expected diffraction limit in Fig.~\ref{fig:psfsize1} and compare (to first-order) the PSF size to the nominal diffraction limit and WebbPSF (Fig.~\ref{fig:psfsize3}).
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\columnwidth]{diffEx.jpg}
\caption{\label{fig:diffEx}
An oversampled example of a PhoSim-NIRCam PSF with only diffraction physics on.}
\end{figure}
\subsection{Photo-Electric Conversion}
NIRCam's ten $2048 \times 2048$ pixel Teledyne HAWAII-2RG complementary metal-oxide-semiconductor (CMOS) detectors are composed of MCT, Hg$_{1-x}$Cd$_x$Te, with different relative compositions (molar fraction or stoichiometric ratio) of Cd to Hg $x$~\cite{Loose:2007}. This allows for a tunable bandgap, which corresponds to a variable cutoff wavelength $\lambda_{co}$. Considerable effort has been made to understand the optical properties and electron interactions of MCT photodetectors in recent decades~\cite{Rogalski:2005,Itsuno:2012,Rieke:2007}. We have implemented MCT detectors in PhoSim, which calculates the photon mean free path from the absorption coefficient for a given $x$.
PhoSim simulates all relevant physics of CMOS (and CCD) detectors in a multi-step photon-to-electron conversion code. A final image is produced with highly realistic results \cite{Peterson:2015}. However, in previous versions of PhoSim, only Si detectors were implemented. To model the absorption coefficient in MCT as a function of photon wavelength in the absorption layer, we first make use of the Hansen equation\cite{Hansen:1982} describing the material's energy gap:
\begin{equation}
\label{hanson}
\begin{aligned}
E_g(x,T)=-0.302+1.93x+5.35\times10^{-4}\,T(1-2x)-0.810x^2+0.832x^3
\end{aligned}
\end{equation}
where $E_g$ is the bandgap energy in eV. Applying the Planck-Einstein relation, $E_g = hc/\lambda_{\text{cutoff}}$, Eq.~\ref{hanson} can be re-expressed in terms of the cutoff wavelength, given in $\mu$m:
\begin{equation}
\label{hanson_co}
\begin{aligned}
\frac{1.24\text{ eV$\mu$m}}{\lambda_{\text{cutoff}}}\cong-0.302+1.93x+5.35\times10^{-4}\,T(1-2x)-0.810x^2+0.832x^3.
\end{aligned}
\end{equation}
Using the known cutoff wavelengths of both detectors, $\lambda_{\text{cutoff}}=2.5~\mu$m and $\lambda_{\text{cutoff}}=5.3~\mu$m \cite{Garnett:2004} for the SW and LW channels respectively, we solve for the real root of Eq.~\ref{hanson_co} with NIRCam's cryogenic temperature $T = 37$ K. The small effect on the absorption coefficient from variations of $x$ in the absorption layer is not currently considered.
To calculate the absorption coefficient $\alpha$, we implement an empirical piece-wise model for the Kane region ($E_{\gamma} > E_g$) given by Chu et al. \cite{Chu:1994} and the modified Urbach tail ($E_{\gamma} < E_g$), given by Finkman and Schacham \cite{Finkman:1984,Hougen:1989} where $E_{\gamma}$ is the incident photon energy:
\begin{equation} \alpha = \begin{cases}
\alpha_o\exp{\left[\sigma \left( \frac{E_\gamma-E_o}{T+T_o}\right)\right]} & E_\gamma < E_g\\
\beta \sqrt[]{E_\gamma-E_g} & E_\gamma > E_g
\end{cases}
\end{equation}
where the parameters are defined as:
\begin{multicols}{2}
$\alpha_o=\exp{(53.61x - 18.88)}$
\\
$E_o = -0.3424 + 1.838x + 0.148x^2$
\\
$T_o = 81.9$
\\
$\sigma = 3.267\times10^{4}(1 + x)$
\\
$E_T = \left(\frac{T_o + T}{\sigma}\right)\ln(\alpha_T/\alpha_o) + E_o$
where $\alpha_T = 100 + 5000x$
\\
$\beta = \alpha_T(E_T - E_g)^{-1/2}$
\\
and $E_g$ is specified by Eq.~\ref{hanson}.
\end{multicols}
The mean free path of a photon is simply given as the inverse of $\alpha$. The conversion path length is calculated in PhoSim by multiplying the mean free path (the inverse of the absorption coefficient) by an exponentially distributed random number~\cite{Peterson:2015}. Both detectors' absorption regions are approximately 8 $\mu$m thick. Fig.~\ref{fig:mfp} shows the mean free paths (inverse absorption coefficients) for MCT in PhoSim as a function of incident photon wavelength for both channels at 37 K.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{mfp.jpg}
\caption{
\label{fig:mfp}
Plot of the inverse of the absorption coefficient (mean free path) for both SW and LW MCT types as a function of wavelength. Results are consistent with Ref.~\citenum{Beletic:2008}, Fig. 5.
}
\end{figure}
Next, we implement a simple model given by Lui et al.\cite{Lui:1994} for the index of refraction in MCT as a function of $\lambda$, $T$, and $x$:
\begin{equation}
\begin{aligned}
n(\lambda,T,x)=\sqrt[]{A+\frac{B}{1-(C/\lambda)^2}+D\lambda^2}
\end{aligned}
\end{equation}
where the parameters A, B, C, and D are defined as:
\\
$A=13.173 - 9.852x + 2.909x^2 + 0.001(300 - T)$
$B=0.83 - 0.246x - 0.0961x^2 + 8\times 10^{-4}(300 - T)$
$C = 6.706 - 14.437x + 8.531x^2 + 7\times 10^{-4}(300 - T)$
$D = 1.953\times10^{-4} - 0.00128x + 1.853\times10^{-4}x^2$.
\\
In accordance with Ref.~\citenum{Dornhaus:1983}, the relative permittivity (dielectric constant) in MCT $\epsilon_\text{MCT}$ is given in the high frequency approximation by,
\begin{equation}
\begin{aligned}
\epsilon_\text{MCT}(x)=15.2-15.6x+8.2x^2.
\end{aligned}
\end{equation}
The transverse diffusion is calculated with the Gaussian diffusion width, $\sqrt{2Dt_c}$, where $D$ is the diffusion coefficient given by,
\begin{equation}
\begin{aligned}
D=\frac{\mu_q(x,T)kT}{q}
\end{aligned}
\end{equation}
where $\mu_q(x,T)$ is the electron mobility in MCT. Following Ref.~\citenum{Rosbeck:1982}, we implement the model for electron mobility in MCT:
\begin{equation}
\begin{aligned}
\mu_q(x,T) = \frac{9\times10^8s}{100T^{2r}}
\end{aligned}
\end{equation}
where $r=(0.2/x)^{0.6}$ and $s=(0.2/x)^{7.5}$.
The collection time is,
\begin{equation}
\begin{aligned}
t_c=\int_{z_c}^{z}\frac{dz}{|\mu_q(x,T)E_z(z)|}.
\end{aligned}
\end{equation}
Further work will identify what other relevant sensor effects may be important to produce more realistic images.
\subsection{Device Readout}
We mimic the standard NIRCam CMOS readout procedure, dubbed MULTIACCUM. This means that multiple frames can be read out non-destructively during an integration sequence as charge accumulates in the pixels. In practice, due to data transmission constraints, the frames are average-combined into groups (preceded by an initial reference, or zero, frame) before the pixels are reset for the next sequence. Several different MULTIACCUM sequence patterns exist depending on the observation's science goals, target, and time constraints.
Each chip is segmented into four output channels of dimensions $2048\times512$ for the readout. Thus, four average-combined amplifier images are generated for each MULTIACCUM group plus the reference frame. The proper read noise and bias level are added (see Ref.~\citenum{Peterson:2015} for a more complete description of the PhoSim amplifier images).
\subsection{Background}
The background in deep NIRCam images is dominated by Zodiacal light. There is thermal emission from JWST itself, but it is negligible in the NIRCam bands. The Zodiacal light spectral radiance is the sum of scattered and thermal emission produced by the Zodiacal dust. The model given in the sensitivity calculations technical report \cite{Rieke:2013} is,
\begin{equation}
\begin{aligned}
F(\lambda)=\frac{3.95\times10^{-14}\cdot1.19\times10^8\cdot\lambda^{-5}}{e^{14388/(\lambda\cdot5300)}-1}+\frac{2.79\times10^{-8}\cdot1.19\times10^8\cdot\lambda^{-5}}{e^{14388/(\lambda\cdot282)}-1}
\end{aligned}
\end{equation}
where $\lambda$ is given in $\mu$m. The first term is the scattering and the second term is the thermal emission.
The spatial and temporal variation of the Zodiacal background flux is also modeled. To first order, the spatial variation is a function of the ecliptic latitude of the telescope's pointing. The temporal variation is seasonal, with a period of 1 year\cite{Kelsall:1998}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{zodi.jpg}
\\
(a) \hspace{6.7cm} (b)
\caption{\label{fig:zod}
PhoSim Zodiacal background SED model (a) and Zodiacal background time-variation model for a fixed telescope pointing direction (b).
}
\end{figure}
Cosmic rays can be simulated in PhoSim by applying a known rate of cosmic-ray events and randomly selecting from a set of pre-defined postage stamp images of cosmic-ray interactions, which are then given a random orientation. These are then simply added to the electron and amplifier image outputs.
\section{Results and Discussion}
\subsection{Point Spread Function}
Systematic effects in the PSF can be wavelength and field dependent. We determine the centroid RMS size of the PSF at various positions in the NIRCam field of view (field points) using each filter.
The total PSF size $\sigma_T$ can be approximated by the sum in quadrature of the geometric $\sigma_g$ and diffraction $\sigma_d$ components,
\begin{equation}
\label{eq:diff}
\sigma_T\approx\sqrt[]{\sigma_g^2+\sigma_d^2}.
\end{equation}
We adopt an iterative weighting method used in the weak lensing community where the RMS $\sigma$ is,
\begin{equation}
\sigma=\sqrt[]{I_{11}+I_{22}}
\end{equation}
where $I_{ij}$ are normalized moments of the PSF's intensity profile $f(x_1,x_2)$ weighted by an elliptical Gaussian filter $W(x_1,x_2)$,
\begin{equation}
I_{ij}=\frac{\int\int dx_1dx_2W(x_1,x_2)f(x_1,x_2)\,x_ix_j}{\int\int dx_1dx_2W(x_1,x_2)f(x_1,x_2)}.
\end{equation}
This method is iterated until we reach an acceptable level of error, which is given in the image captions wherever measured values appear.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{psfdefocus.jpg}
\caption{\label{fig:psfsize1}
PhoSim PSF size versus defocus of image plane. The blue line is the total geometric plus diffraction PSF simulated by PhoSim propagation of geometric rays to the exit pupil with photon kicks from the FFT of the JWST tricontagon pupil geometry. The black line is the geometric-only component with a cylindrically-symmetric surface only. The yellow line is the sum in quadrature of the geometric component (black line) and the diffraction limit (dashed gray line), which closely matches the blue line. Data were simulated at 3.56 $\mu$m near the center of the field of view (field point 5). The blue line is slightly underestimated due to the image cutout cutting off some of the PSF. Uncertainties are on the order of $10^{-8}$ arcseconds for the PhoSim data and $10^{-5}$ arcseconds for the WebbPSF data.}
\centering
\includegraphics[width=1.0\columnwidth]{psfimage.jpg}
\caption{\label{fig:psfsize2}
PhoSim PSF versus defocus of image plane. Data obtained at 3.56 $\mu$m near the center of the field of view (field point 5) using an oversampled image cutout. The top row shows simulated PSFs with diffraction and geometric physics on (Fig.~\ref{fig:psfsize1}, blue line). The bottom row shows simulated PSFs with the geometric raytrace approximation only (Fig.~\ref{fig:psfsize1}, black line). The ring shape is due to our cylindrically-symmetric approximation of the primary mirror geometry for the raytrace. Complete results for the geometric PSFs may be investigated in the future by coupling the tricontagon mirror geometry to the PhoSim raytrace code.}
\end{figure}
Fig.~\ref{fig:psfsize1} and \ref{fig:psfsize2} demonstrate how the PSF size varies as function of defocus of the image plane in PhoSim. We move the detector surface through the optimal focus and measure the size of the geometric component of the PSF, and the total PSF. The results show that PhoSim is approximating the total PSF in accordance with Eq.~\ref{eq:diff}.
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{psfsize.jpg}
(a) \hspace{5.7cm} (b)
\caption{\label{fig:psfsize3}
PSF radial size measured from the centroid of the PSF versus wavelength at $0$ mm defocus is shown in (a). Uncertainties are on the order of $10^{-8}$ arcseconds for the PhoSim data and $10^{-5}$ arcseconds for the WebbPSF data. Field point 5 is near the center of the field of view. Field points 1-4 are near the centers of chips 1-4. Field point 6 is near the edge of the field of view. We emphasize that these data may not represent the final PSFs, which depend on the final design configuration of the observatory and will need to be calibrated in orbit. (b) shows the locations of the test field points. Also shown are the approximate outlines of the chips (blue squares = SW, enclosing red square = LW) in the PhoSim-NIRCam field of view. Note, this has yet to be matched to the final, exact chip positions.}
\end{figure}
Fig.~\ref{fig:psfsize3} shows the PSF radial size versus wavelength at each filter location for 5 different field positions. Data points for PhoSim results and WebbPSF\cite{Perren:2012,Perren:2014} revision V results (155 nm RMS WFE OPD model; OTE+NIRCam) are shown for comparison. These results are highly dependent on the final details of the optics and defocus settings. Nevertheless, the results appear consistent with WebbPSF. The large size of the PSF below 1.2 $\mu$m is due to the large geometric PSF component at those wavelengths. In our tests, different focus positions of the image plane exist which remove this, at the undesirable expense of slightly larger PSFs at wavelengths near or above 2 $\mu$m.
The results shown here are consistent with WebbPSF results and the requirement that NIRCam images be near diffraction-limited above 2 $\mu$m, although the exact results are somewhat dependent on the choice of SED, especially for the wide-band filters, and on the final optical configuration. Figs.~\ref{fig:psfs1}-\ref{fig:psfs4} show a sample of these PSFs for field points 5 and 6. The figures show that the PSFs are near diffraction-limited and well-behaved around the field center, but are noticeably distorted toward the field edge.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{f5_sw.jpg}
\caption{\label{fig:psfs1}
PhoSim-NIRCam PSFs for each SW filter at $0$ mm defocus at field point 5. Near the center of the field of view, the PSFs are typically near diffraction-limited and well-behaved.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{foff_sw.jpg}
\caption{\label{fig:psfs3}
PhoSim-NIRCam PSFs for each SW filter at $0$ mm defocus at field point 6. Near the edge of the field of view, the PSFs are typically not diffraction-limited and more distorted.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{f5_lw.jpg}
\caption{\label{fig:psfs2}
PhoSim-NIRCam PSFs for each LW filter at $0$ mm defocus at field point 5. Near the center of the field of view, the PSFs are typically near diffraction-limited and well-behaved.}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{foff_lw.jpg}
\caption{\label{fig:psfs4}
PhoSim-NIRCam PSFs for each LW filter at $0$ mm defocus at field point 6. Near the edge of the field of view, the PSFs are typically not diffraction-limited and more distorted.}
\end{figure}
\cleardoublepage
\subsection{Simulation of Extragalactic Sky}
To demonstrate the capabilities of PhoSim-NIRCam, we have simulated an extragalactic blank-sky field using a source catalog created from a real near-infrared image taken with the WFC3-IR camera on the Hubble Space Telescope (HST).\footnote{\footnotesize The simulated images as well as the code and input catalogs used for this simulation are available at \url{https://fenrir.as.arizona.edu/phosim}.}
Fig.~\ref{fig:sim} shows the comparison between the original WFC3-IR F160W image (the top panel; part of the CANDELS GOODS-S data\cite{Grogin2011,Koekemoer2011}) and simulated NIRCam SW F200W image (the bottom left panel) and LW F356W image (the bottom right panel). Although the JWST primary mirror ($D = 6.5$ m) is significantly larger than that of the HST ($D = 2.4$ m), the observed wavelengths of these simulated NIRCam images are also longer (especially with the F356W image), resulting in comparable image sizes.
The figure clearly demonstrates that PhoSim-NIRCam is capable of producing realistic JWST/NIRCam images for both the SW and LW channels, including the diffraction spikes produced by the hexagonal primary mirror of the JWST. Although an integration time of 4 hours was assumed to produce these NIRCam images, the image depth here is essentially limited by the CANDELS HST source catalog used for the simulation. Actual 4-hour NIRCam images would be much deeper, containing many more faint objects. Here, the HST-produced source catalog was used to allow direct comparison between the real and simulated images, but to simulate more realistic deeper NIRCam images with a variety of galaxy SEDs, the use of a properly constructed mock catalog\cite{Williams:2018} would be more appropriate (such a simulation is currently underway).
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\columnwidth]{sim.jpg}
\\
\caption{\label{fig:sim}
Comparison of a real HST/WFC3-IR F160W image (top) and simulated JWST/NIRCam F200W (bottom left) and F356W (bottom right) images created by PhoSim-NIRCam. The filter numbers refer to wavelength in $\mu$m (e.g., the effective wavelength of F160W is 1.6 $\mu$m). The input source catalog for the simulation was produced from the HST image, and includes morphological information based on S\'ersic 2D models. A flat-$f_\nu$ SED was used to extrapolate source brightnesses to 2 and 3.6 $\mu$m, and a sky background of 0.1 MJy/sr was included. The NIRCam SW F200W image consists of four quadrants of H2RG detectors (0.03"/pixel) while the LW F356W image is produced by one H2RG detector (0.06"/pixel). The figure clearly demonstrates that PhoSim-NIRCam is capable of producing realistic JWST/NIRCam images for both the SW and LW channels. Note the diffraction spikes seen for bright sources in the NIRCam images produced by the hexagonal primary mirror of JWST.
}
\end{figure}
\section{Conclusions, Limitations, and Future Work}
We harness the power of PhoSim, demonstrating the capability to simulate high-fidelity NIRCam images from a realistic catalog of stars and galaxies. Our end-to-end physics-based method simulates one photon at a time to replicate the effects on NIRCam PSF morphology and the overall image characteristics relevant for background-limited observations.\footnote{\footnotesize The PhoSim-NIRCam source code and ISC files describing the entire PhoSim-NIRCam model are open-source and available at \url{https://bitbucket.org/phosim/phosim_release}.} Software updates including additional features and bug fixes are published on a regular basis.
PhoSim-NIRCam may be applied to better understand systematics in NIRCam images, especially galaxy morphology, weak lensing, and morphology of other extended sources. The initial avenues for applying PhoSim-NIRCam are the GTO and ERS programs, and in-orbit commissioning.
In this work, we present a method to simulate space-based optical and infrared instruments within the comprehensive PhoSim framework. We show initial PhoSim-NIRCam results that approximate the wavelength- and field- dependent PSF behavior. We demonstrate PhoSim-NIRCam's capability to simulate NIRCam PSFs with various physics independently, although we do not yet claim our results are an accurate model of the instrument's ultimate performance. In the future, a more complete validation and modeling campaign will be performed in conjunction with the JWST in-orbit commissioning.
Planned extensions of this work include implementing the proper spatial pattern and power spectrum of NIRCam readout noise from pyNRC\cite{Leisenring2018} along with a realistic FITS file format for use with the JWST image pipeline (e.g. header keywords and naming scheme). We also plan to include more realistic cosmic-ray signatures and rates for space telescopes. It may also be interesting to investigate a more realistic thermal model of the telescope and detectors, extending recent work coupling photon Monte Carlo methods to opto-mechanical deformation of ground-based optics\cite{Peterson2019}.
Finally, a model to couple the geometric raytrace to the primary mirror tricontagon geometry, surface perturbations, and figure errors that properly affect the ellipticity and higher-order spatial content of the geometric PSF will be investigated in the future. This work will be essential to matching the final performance of NIRCam after launch.
We welcome greater involvement from the JWST community to supplement these efforts. Extensions to other space-based or ground-based telescopes would also be straightforward following the methods developed in this work.
\acknowledgments
JRP \& CJB acknowledge funding from Purdue University and NASA through Subcontract \#366623 from NASA Contract NAS50210 with the University of
Arizona. We thank Dr. Christina Williams for providing the CANDELS GOODS-S source catalog for our image simulation. We acknowledge use of Matplotlib\cite{Hunter:2007}, a community-developed Python library for plotting. This research has made use of NASA's Astrophysics Data System.
\label{sec:intro}
Our understanding of the physical processes involved in the formation of massive stars ($>$ 8 M$_{\odot}$) is still incomplete \citep[e.g.,][]{zinnecker07,Motte+2018,rosen20}. Most recent high-resolution studies of massive young stellar objects (MYSOs) suggest that
massive stars form via infall from a surrounding envelope and disk-mediated accretion similar to their low-mass counterparts \citep[see][]{rosen20}.
In this relation, understanding accretion and jets/outflows in MYSOs is crucial for constraining the physics related to their birth
and early evolution.
The simultaneous detection of a highly collimated jet and a wide-angle outflow in low-mass stars has
long been known \citep[e.g.,][]{arce07,zapata14}, and theoretical models of such events have been developed.
These are considered two flavours of the same mass-ejection process in protostars.
However, the study of the connection between these two forms in MYSOs {is still lacking \citep[e.g.,][]{arce07,frank14} and} {very few observations of the simultaneous presence of a highly collimated jet and a wide-angle outflow in MYSOs exist \citep[e.g.,][]{torrelles11,zinchenko20}.}
No models predict the simultaneous existence of a jet and a wide-angle outflow, nor the rotation of both components together, in MYSOs \citep[e.g.,][]{torrelles11}. Therefore, the observational study of these two flavours of ejection from MYSOs can provide vital inputs for constraining the accretion-based models of massive star formation.
{In this context, the target of this paper is a genuine MYSO embedded in the infrared dark cloud (IRDC) SDC18.888--0.476 (hereafter SDC18). The target MYSO is associated with both the 6.7 GHz methanol maser emission (MME) and Extended Green Object (EGO).}
Situated at a distance of $\sim$5.0 kpc, the IRDC SDC18 has been seen as a bright emission region in the submillimeter (sub-mm) maps \citep[see Figure~2 in][]{dewangan20c}.
A massive dust clump at 870 $\mu$m has been reported toward SDC18 \citep[$L_\mathrm{bol}$ $\sim$53.8 $\times$ 10$^{3}$ $L_\odot$; $M_\mathrm{clump}$ $\sim$4880 $M_\odot$;][]{urquhart18,dewangan20c},
which is located at the edge of the W39 H\,{\sc ii} region hosting several massive OB-stars \citep{westerhout58,kerton13}. Figure~\ref{fig1}a displays the {\it Spitzer} 24 $\mu$m image overlaid with the IRAM 1.2 mm continuum emission contours \citep[beam size $\sim$13$''$;][]{rigby18}, showing the mm emission peaks toward the clump at 870 $\mu$m.
In Figure~\ref{fig1}a, we mark the positions of a water maser \citep[V$_\mathrm{lsr}$ $\sim$65.1 km s$^{-1}$;][]{walsh11} and a 6.7 GHz MME \citep[V$_\mathrm{lsr}$ $\sim$56.4 km s$^{-1}$;][]{breen15,yang19}.
The position of the 6.7 GHz MME coincides with {the peak of 1.2 mm continuum emission} and the EGO G18.89$-$0.47 \citep{cyganowski08,towner19}.
{\citet{dewangan20c} identified a point-like source as an infrared counterpart (IRc) of the 6.7 GHz MME (hereafter G18.88MME). Previously, \citet{kerton13} characterized this source as a protostar (stellar mass $\sim$8 M$_{\odot}$; see source ID \#G18-2 in Table~3 in their paper) using the photometric data at 3.6--24 $\mu$m.}
No radio counterpart of G18.88MME is reported in the literature \citep[e.g.,][]{towner19,dewangan20c}.
All these results allowed \citet{dewangan20c} to propose the IRc G18.88MME as a genuine MYSO candidate in a very early evolutionary phase, just before the ultracompact (UC) H\,{\sc ii} stage (see an arrow in Figure~\ref{fig1}a).
In this letter, we analyzed the Atacama Large Millimeter/submillimeter Array (ALMA) 1.38 {mm} continuum map and data cubes of SiO, HC$_{3}$N, and $^{13}$CO lines (resolution $\sim$0\rlap.{$''$}8 or 4000 AU) of G18.88MME, which have enabled us to discover
a dense, fast, narrow jet-like molecular outflow surrounded by a slower wide-angle wind.
Both components are found to be rotating.
{The ALMA data adopted in this work are described in Section~\ref{sec:obser}.
The results derived using the continuum and line data are presented in Section~\ref{sec:results}.
We discuss the implications of our observed findings in Section~\ref{sec:discussion}.}
\section{Data sets}
\label{sec:obser}
We downloaded the ALMA continuum map at 1.38 mm (resolution $\sim$0\rlap.{$''$}82 $\times$ 0\rlap.{$''$}6; P.A. = 66$\degr$.6)
and the processed cubes of three lines (i.e., SiO(5--4), HC$_{3}$N(24--23), and $^{13}$CO(2--1); beam size $\sim$0\rlap.{$''$}9 $\times$ 0\rlap.{$''$}66) from the ALMA science archive.
These observations were taken with the 12-m ALMA array in Band-6 under the
project 2017.1.00983.S (PI: Brogan, Crystal). All these data sets were produced by the ALMA pipeline.
The SiO(5--4), HC$_{3}$N(24--23), and $^{13}$CO(2--1) lines were observed
in spectral windows with frequency ranges (bandwidths) of 216.58--217.52 GHz (935 MHz), 217.55--218.49 GHz (468 MHz), and 220.24--220.71 GHz (468 MHz), respectively.
The velocity resolutions (rms noise) of the SiO(5--4), HC$_{3}$N(24--23), and $^{13}$CO(2--1) lines
{are 1.35 km s$^{-1}$ (2.1 mJy beam$^{-1}$), 0.33 km s$^{-1}$ (2.1 mJy beam$^{-1}$), and 0.33 km s$^{-1}$ (4.1 mJy beam$^{-1}$), respectively.
Additionally, we used the {\it Spitzer} MIPS Inner Galactic Plane Survey \citep[MIPSGAL;][]{carey05} 24 $\mu$m image (resolution $\sim$6$''$) and the IRAM 1.2 mm continuum emission map \citep[resolution $\sim$13$''$; from][]{rigby18} of SDC18.}
\section{Results}
\label{sec:results}
\subsection{Continuum Emission}
\label{sec:conta}
To further study the inner structures of the clump {detected} in the IRAM 1.2 mm continuum map (see Figure~\ref{fig1}a), we present the ALMA 1.38 mm continuum map and contours in Figure~\ref{fig1}b.
{In the direction of G18.88MME, four continuum sources (i.e., MM1, MM2, MM3, and MM4) are identified (see ellipses), and are located at the center of
the emission structure ({extent $\sim$0.3 pc $\times$ 0.2 pc}; see {the red contour} in Figure~\ref{fig1}b) traced in the ALMA map.}
The continuum sources were selected using the Python-based {\sc astrodendro}-package\footnote{https://dendrograms.readthedocs.io/en/stable/index.html}
\citep[see also][for more details]{baug20},
which uses the {\sc dendrogram} algorithm as described in \citet{rosolowsky08}.
In this context, a minimum flux level of 5$\sigma$ was adopted, where the background flux (i.e., 1$\sigma$) of 0.06 mJy beam$^{-1}$ was estimated from the emission-free areas in the continuum map.
\begin{figure}
\center
\includegraphics[width=7.8cm]{fg1.eps}
\caption{a) Overlay of the IRAM 1.2 mm continuum contours on the {\it Spitzer} 24 $\mu$m image of SDC18.
The 1.2 mm continuum map is exposed to a Gaussian smoothing function with a width of 3 pixels.
{The contour levels of the IRAM continuum emission are 28, 45, 70, 100, 120, 180, 250, 300, 430, and 570 mJy beam$^{-1}$.} The positions of a water maser and a 6.7 GHz MME are marked by an upside down triangle and a star, respectively.
The solid box (in magenta) outlines the area shown in Figure~\ref{fig1}b.
b) The panel shows the ALMA 1.38 mm continuum image around G18.88MME.
The 1.38 mm continuum contours are also shown with the levels of 0.55 and 2 mJy beam$^{-1}$.
The synthesized beam is $\sim$0\rlap.{$''$}82 $\times$ 0\rlap.{$''$}6, P.A. = 66$\degr$.6 (lower left corner).
{{The 1.38 mm continuum contour with a level of 0.55 mJy beam$^{-1}$ (in red) outlines the emission structure.} {Four} continuum sources are indicated by ellipses (in white; see also broken arrows).
The solid box (in yellow) outlines the area shown in Figures~\ref{fig3}a and~\ref{fig3}b.}}
\label{fig1}
\end{figure}
{The flux densities (deconvolved angular sizes) of the continuum sources MM1, MM2, MM3, and MM4 are
about 33.5 mJy (0\rlap.{$''$}92 $\times$ 0\rlap.{$''$}6), 37.3 mJy (0\rlap.{$''$}75 $\times$ 0\rlap.{$''$}58),
24.4 mJy (1\rlap.{$''$}53 $\times$ 0\rlap.{$''$}66), and 6.3 mJy (0\rlap.{$''$}46 $\times$ 0\rlap.{$''$}31), respectively.} \citet{towner19} reported the dust temperature ($T_{\rm d}$) and the kinetic
temperature of the clump hosting G18.88MME to be $\sim$22~K and $\sim$28~K, respectively. The kinetic temperature was derived from the NH$_3$ emission \citep{towner19}.
The observed mm fluxes ($S_\nu$) can be utilized to compute the masses of continuum sources \citep[e.g.,][]{hildebrand83}.
In the calculations, we adopted the dust opacity ($\kappa_\nu$) = 0.9\,cm$^2$\,g$^{-1}$ at 1.3 mm \citep{ossenkopf94}, distance ($d$) = 5.0 kpc, and $T_{\rm d}$ = [22, 28]~K \citep[see equation~1 in][]{dewangan16}.
Using the values of $S_\nu$ and $T_{\rm d}$ = 22(28)~K, {the masses of MM1, MM2, MM3, and MM4 are estimated to be $\sim$18(13), $\sim$20(15), $\sim$13(10), and $\sim$3.4(2.5) M$_{\odot}$, respectively.} The uncertainty in the mass calculation could be typically $\sim$20\% to $\sim$50\%, which includes various uncertainties in the assumed dust temperature, opacity, and measured flux.
Among these sources, MM1 is found to be the main core associated with the dense gas (see Section~\ref{sec:line}).
\subsection{Molecular Line Emission}
\label{sec:line}
Figure~\ref{fig:spectra} presents the profiles of the {$^{13}$CO}, HC$_{3}$N and SiO emission toward MM1, indicating the existence of extended non-Gaussian wings {(indicative of high-velocity outflows) in all of these lines}.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{fg2.eps}
\caption{ALMA $^{13}$CO, SiO and HC$_{3}$N flux spectra integrated over the area covering MM1 and the outflow. The SiO and HC$_{3}$N spectra are shifted along the ordinate axis for clarity.}
\label{fig:spectra}
\end{figure}
{In Figures~\ref{fig3}a and~\ref{fig3}b, we show the integrated intensity emission contours of the red-shifted and blue-shifted components in the SiO, $^{13}$CO and HC$_{3}$N emission superimposed on the ALMA 1.38~mm continuum map. For the blue-shifted $^{13}$CO emission we selected the velocity range {in the far wing of the line ([44, 50]~km\thinspace s$^{-1}$)} free from the additional emission peaks, which are probably produced by some other components in this complex.
All these molecular outflows (extent $\sim$28000~AU {in SiO}) are centered at the continuum source MM1. At the same time the morphology of the emission in different lines is somewhat different. In particular, the $^{13}$CO emission apparently surrounds the HC$_{3}$N emission, especially in the blue lobe.} It is worth mentioning that the outflows seem to be bent.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fg3.eps}
\caption{Overlay of the outflow lobes of the molecular emission on the ALMA 1.38 mm continuum image.
a) The observed outflow in the SiO emission. The contours of the red-shifted component at [74, 94] km s$^{-1}$ (in white) are
at (0.18, 0.24, 0.4, 0.55, 0.75, 0.95) $\times$ 338.8 mJy beam$^{-1}$ km s$^{-1}$.
The contours of the blue-shifted component at [28, 61] km s$^{-1}$ (in blue) are at (0.1, 0.22, 0.4, 0.6, 0.9) $\times$ 906.2 mJy beam$^{-1}$ km s$^{-1}$.
PV diagrams along the dotted-dashed line and the solid line are presented in Figures~\ref{fig2} and~\ref{fig4}, respectively.
b) The observed outflows in the $^{13}$CO (solid curves)
and HC$_{3}$N (filled contours) emission.
The $^{13}$CO emission contours are at (0.3, 0.4, 0.5, 0.65, 0.85, 0.95) $\times$ peak value (i.e.,
438.5 mJy beam$^{-1}$ km s$^{-1}$ for the red-shifted component at [74, 90] km s$^{-1}$ (in white) and 64.7 mJy beam$^{-1}$ km s$^{-1}$ for the blue-shifted component at [44, 50] km s$^{-1}$ (in blue)).
Filled contours of the HC$_{3}$N emission are at (0.3, 0.4, 0.5, 0.65, 0.85, 0.95) $\times$ peak value (i.e., 102.4 mJy beam$^{-1}$ km s$^{-1}$ for the red-shifted component at [73, 85] km s$^{-1}$ (in pink) and 224.5 mJy beam$^{-1}$ km s$^{-1}$ for the blue-shifted component at [40, 63] km s$^{-1}$ (in gray)). PV diagrams along four cuts c1--c4 (see dotted-dashed lines) are presented in Figure~\ref{fig5}. In each panel, ellipses show the synthesized beams (lower left corner). The filled squares in both panels indicate the center locations of the continuum sources shown in Figure~\ref{fig1}b.
}
\label{fig3}
\end{figure}
Figure~\ref{fig2} displays the position-velocity (PV) diagram in the HC$_{3}$N line at a position angle of 160$\degr$ across MM1 (perpendicular to the outflow), allowing us to explore the kinematics of the continuum source.
This PV diagram is consistent with Keplerian-like rotation (although the red-shifted part of the emission is practically missing), which may point to a probable disk in MM1 (unresolved in the present data).
The dynamical central mass of the core is determined to be $M \sim 8/\sin^{2}i$~M$_{\odot}$, where $i$ is the unknown disk inclination (with a high uncertainty due to the insufficient angular resolution).
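The estimate follows from equating the observed line-of-sight velocity at a given offset with a Keplerian rotation curve, $M\sin^{2}i = v^{2}r/G$. A minimal sketch is given below; the PV-diagram reading ($v \simeq 2.7$~km\thinspace s$^{-1}$\ at $\simeq$1000~AU, i.e.\ 0\rlap.{$''$}2 at 5~kpc) is a hypothetical value chosen only to illustrate the arithmetic.
\begin{verbatim}
G = 6.674e-8                               # cm^3 g^-1 s^-2
M_sun, AU, km_s = 1.989e33, 1.496e13, 1.0e5

def dynamical_mass(v_kms, r_AU):
    """M sin^2(i) = v^2 r / G for Keplerian rotation, in M_sun."""
    return (v_kms * km_s)**2 * (r_AU * AU) / G / M_sun

def keplerian_v(r_AU, M_dyn):
    """Inverse relation: model curves as overplotted in Figure 4 (km/s)."""
    return (G * M_dyn * M_sun / (r_AU * AU))**0.5 / km_s

# Hypothetical PV reading: v ~ 2.7 km/s at ~1000 AU (0.2 arcsec at 5 kpc)
print(dynamical_mass(2.7, 1000.0))         # ~8 M_sun (dashed curve)
print(keplerian_v(1000.0, 12.0))           # ~3.3 km/s (solid curve)
\end{verbatim}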
\begin{figure}
\centering
\includegraphics[width=7.5cm]{fg4.eps}
\caption{PV diagram in the HC$_{3}$N line at the PA of 160$\degr$ across MM1 (perpendicular to the outflow; see the dotted-dashed line in Figure~\ref{fig3}a). The curves display the Keplerian rotation around a central mass of $M\sin^{2}i$ = 8 M$_{\odot}$ (dashed) and $M\sin^{2}i$ = 12 M$_{\odot}$ (solid).}
\label{fig2}
\end{figure}
Figure~\ref{fig4}a displays the PV diagram in the SiO line along the outflow (at a position angle of 81$\degr$) across MM1.
The PV diagram is also overlaid with the $^{13}$CO PV contours (in white).
In Figure~\ref{fig4}b, we present the overlay of the HC$_{3}$N PV contours on the PV diagram in the SiO line.
The PV diagrams indicate the existence of two components:
a fast jet-like one (with a large extent in velocity) and a slower, spatially more extended component (see the arrows and the dashed curve in Fig.~\ref{fig4}a).
The $^{13}$CO data may be consistent with a wide-angle wind picture for the slow component, which shows a structure similar to that expected in this case \citep[for comparison see Figure~2 in][]{arce07}.
The fast SiO outflow component coincides with the HC$_{3}$N outflow, and its PV diagram is typical of the jet-driven bow shock model \citep{arce07}.
\begin{figure}
\centering
\includegraphics[width=8.0cm]{fg5.eps}
\caption{a) PV diagram in the SiO line at the PA of 81$\degr$ across MM1 (see the solid line in
Figure~\ref{fig3}a). The contours show the $^{13}$CO emission; the contour levels are at (0.05, 0.15, ..., 0.95) $\times$ 456~mJy\,beam$^{-1}$. The arrows indicate the jet-like outflow and the dashed curve indicates the probable wide-angle wind.
b) Overlay of the HC$_{3}$N PV contours on the PV diagram in the SiO line; the contour levels are at (0.1, 0.2, ..., 0.9) $\times$ 69~mJy\,beam$^{-1}$.}
\label{fig4}
\end{figure}
Figure~\ref{fig5} shows the PV diagrams in the SiO line at four cuts across the outflow lobes (i.e., c1--c4 in Figure~\ref{fig3}b), allowing us to examine the transverse structure of the outflow.
The cuts c1 and c2 are selected toward the red-shifted lobe, while the cuts c3 and c4 are chosen in the direction of the blue-shifted lobe. Each PV diagram is also overlaid with the HC$_{3}$N PV contours.
These PV diagrams show a narrow fast component and a wide slow component in both lobes. Velocity gradients across the outflow lobes are seen in both components. These gradients are especially clear in the red-shifted outflow lobe (see panels ``c1" and ``c2"), where the gradient has the same sign as that in the core. In the blue-shifted fast lobe the gradient is either absent or has a slightly opposite sign (see panels ``c3" and ``c4").
\begin{figure}
\includegraphics[width=\columnwidth]{fg6.eps}
\caption{PV diagrams in the SiO line for four cuts (c1--c4) across the outflow lobes
(see dotted-dashed lines in Figure~\ref{fig3}b). The contours (in white) show the HC$_{3}$N emission. The contour levels are at (0.2, 0.35, 0.5, 0.65, 0.8, 0.95) $\times$ peak value (15, 28, 36, and 29~mJy\,beam$^{-1}$ for the cuts c1, c2, c3 and c4, respectively).
}
\label{fig5}
\end{figure}
\section{Discussion and Conclusion}
\label{sec:discussion}
Outflows/jets are often explained by the X-wind and disk-wind models \citep[e.g.,][]{arce07,frank14}. Previously, using water maser observations, \citet{torrelles11} reported a two-wind outflow (i.e., a fast narrow jet and a slower wide-angle outflow) from the massive protostar Cep A HW2. A similar picture is observed in S255IR-SMA1, which harbors a 20~M$_\odot$ MYSO \citep{zinchenko20}. The observed two flavours of ejection from MYSOs are crucial inputs for unifying the jet-driven and wind-driven scenarios of molecular outflows in MYSOs, which is one of the open research topics in star formation. However, such events are rarely reported in the literature. Hence, the most striking outcome of this work is the simultaneous detection of a wide-angle wind and a narrow jet-like outflow, both driven by the MYSO G18.88MME embedded in the continuum core MM1 (mass $\sim$13--18 M$_{\odot}$).
The mass of the red-shifted outflow lobe estimated from the $^{13}$CO spectrum is $\sim 1\times (T_\mathrm{ex}/100\,\mathrm{K})$~M$_{\odot}$, assuming a normal $^{13}$CO abundance (we have no estimate of the $^{13}$CO excitation temperature $T_\mathrm{ex}$). The blue part of the $^{13}$CO spectrum is contaminated by the emission peaks mentioned above, which prevents such an estimate there. The terminal line-of-sight velocity of the outflow as observed in the SiO line is $\sim 25$~km\thinspace s$^{-1}$\ in the red lobe and $\sim 40$~km\thinspace s$^{-1}$\ in the blue lobe. The SiO relative abundance, estimated from the ratio of the SiO and $^{13}$CO line wing intensities toward the SiO emission peaks, is $\sim 5\times 10^{-9}$ in the red lobe and $\sim 10^{-8}$ in the blue lobe under the assumption of low optical depth and equal LTE excitation of SiO and $^{13}$CO with $T_\mathrm{ex}>E_\mathrm{u}/k$. These values are among the highest SiO abundances observed in outflows in HMSF regions \citep{zinchenko20}.
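The scaling of the lobe mass with $T_\mathrm{ex}$ follows from the standard optically thin LTE column density formula. A minimal sketch is given below; the integrated red-wing flux of 3~Jy\,km\thinspace s$^{-1}$\ and the $^{13}$CO abundance of $1.4\times10^{-6}$ are illustrative assumptions (not measured values), chosen so that the output reproduces the quoted $M \sim 1\times(T_\mathrm{ex}/100\,\mathrm{K})$~M$_{\odot}$.
\begin{verbatim}
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10
M_sun, m_H, kpc, Jy = 1.989e33, 1.673e-24, 3.086e21, 1.0e-23

# 13CO(2-1): rest frequency, Einstein A, g_u = 2J+1, rotational constant
nu, A_ul, g_u, B_rot = 220.399e9, 6.1e-7, 5.0, 55.101e9

def lobe_mass(F_Jy_kms, d_kpc, T_ex, X_13CO=1.4e-6):
    """Optically thin LTE lobe mass (M_sun) from the integrated wing flux."""
    F = F_Jy_kms * Jy * (nu / c) * 1.0e5           # -> erg s^-1 cm^-2
    N_u = 4.0 * np.pi * (d_kpc * kpc)**2 * F / (A_ul * h * nu)
    Q = k_B * T_ex / (h * B_rot)                   # linear-rotor partition fn
    E_u = 6.0 * h * B_rot                          # E_u = h B J(J+1), J = 2
    N_tot = N_u * (Q / g_u) * np.exp(E_u / (k_B * T_ex))
    return N_tot / X_13CO * 2.8 * m_H / M_sun      # H2 mass incl. He

# Hypothetical red-wing flux of ~3 Jy km/s at d = 5 kpc
for T in (50.0, 100.0, 150.0):
    print(T, round(lobe_mass(3.0, 5.0, T), 2))     # roughly linear in T_ex
\end{verbatim}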
In general, the HC$_{3}$N(24--23) line is known as a very good tracer of warm and dense gas in star-forming regions \citep{lis93}. The critical density of this transition is $\sim 3\times 10^6$~cm$^{-3}$, using the collisional rates obtained by \citet{Faure16}. Its detection in outflows is very rare and indicates a very high density in the fast jet-like outflow. For the HC$_{3}$N abundance we obtain values of $\sim 10^{-9}$ in the red lobe and a factor of 2 higher in the blue lobe (under the same assumptions as for SiO).
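In the simplest two-level approximation, the critical density is the ratio of the radiative to the collisional de-excitation rate. The values below are approximate (the Einstein coefficient from standard spectroscopic catalogs, the collisional rate a representative value based on \citealt{Faure16}) and serve only to illustrate the quoted number:
\begin{verbatim}
A_ul = 8.3e-4      # HC3N(24-23) Einstein A coefficient, s^-1 (approximate)
C_ul = 2.7e-10     # downward collisional rate with H2, cm^3 s^-1 (approx.)
print(A_ul / C_ul)                  # n_crit ~ 3e6 cm^-3, as quoted above
\end{verbatim}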
Our results also show velocity gradients across the outflow lobes (see Figure~\ref{fig5}), which can be interpreted as rotation. It is worth noting that such an interpretation is not unique: some asymmetries can produce similar velocity gradients \citep[e.g.,][]{DeColle16}. The picture is complicated by the fact that the outflow is apparently bent. Such bending may hint at disk precession, which can be a consequence of the binary nature of this system \citep{Monin07}. In this case the rotation is combined with a helical motion, which can produce a complicated velocity pattern. Perhaps this can explain the difference in appearance between the red-shifted and blue-shifted outflow lobes, with the absence, or even an opposite sign, of the velocity gradient in the fast component of the latter.
If we interpret the velocity gradient in the red-shifted outflow lobe as rotation, we can try to estimate the launching radius of the outflow following the approach suggested by \citet{Anderson03}. At the cut ``c2" (Fig.~\ref{fig5}), the total velocity span is about 20~km\thinspace s$^{-1}$\ over an offset interval of about 0.4~arcsec, which corresponds to 2000~AU. This implies a specific angular momentum of $\sim 10000$~AU\,km\thinspace s$^{-1}$. This is a very high value, much higher than observed in nearby low-mass objects \citep[e.g.,][]{Zhang18}.
According to this model, the launching radius is about $30\times(v_\mathrm{p}/100$~km\thinspace s$^{-1}$)$^{-4/3}$~AU, where $v_\mathrm{p}$ is the poloidal velocity. For typical jet values of $v_\mathrm{p}$ from $\sim$100~km\thinspace s$^{-1}$\ to $\sim$1000~km\thinspace s$^{-1}$\ \citep[e.g.][]{Anglada18}, the launching radius varies from $\sim$30~AU down to $\sim$1.4~AU.
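A short sketch of these two estimates with the numbers quoted above (the factors of $1/2$ convert the full velocity span and offset interval into a rotation velocity and radius):
\begin{verbatim}
# Specific angular momentum at cut c2: j = (dv/2) * (dx/2)
dv_kms, dx_arcsec, d_pc = 20.0, 0.4, 5000.0
r_AU = 0.5 * dx_arcsec * d_pc              # ~1000 AU
print(0.5 * dv_kms * r_AU)                 # j ~ 1e4 AU km/s

def launch_radius(v_p_kms):
    """Anderson et al. (2003) scaling: ~30 (v_p/100 km/s)^(-4/3) AU."""
    return 30.0 * (v_p_kms / 100.0) ** (-4.0 / 3.0)

print(launch_radius(100.0), launch_radius(1000.0))  # ~30 AU ... ~1.4 AU
\end{verbatim}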
These estimates point to a disk wind as the launching mechanism for the outflow. However, they should be considered very preliminary, since the data do not permit us to resolve the morphology of the outflow lobes and to judge whether the observed high-velocity gas is ejected from the disk or represents entrained material. Such fast rotation on these large scales needs confirmation.
There remains the question of whether the two components have separate origins, or whether we see a transformation of the fast outflow into a slower one. The larger extent of the slow component is an argument against the latter suggestion. In principle there could be episodic ejections, but the data do not provide any support for this suggestion. The jet-driven bow shock model for the fast outflow implies the existence of an underlying ionized jet, which could be revealed by high-resolution radio and/or IR observations.
Finally, we conclude that G18.88MME represents a unique case of a massive YSO driving a very dense, probably rotating, fast jet-like molecular outflow surrounded by a slower wide-angle wind. This object deserves investigation at a much higher angular resolution.
\subsection*{Data availability}
The ALMA continuum data underlying this article are available from the publicly accessible JVO ALMA FITS archive\footnote[2]{http://jvo.nao.ac.jp/portal/alma/archive.do/}.
The {\it Spitzer} 24 $\mu$m continuum map underlying this article is available from the publicly accessible NASA/IPAC infrared science archive\footnote[3]{https://irsa.ipac.caltech.edu/frontpage/}.
The published IRAM 1.2 mm continuum map underlying this article is available from the website of the VizieR Service\footnote[4]{https://vizier.u-strasbg.fr/viz-bin/VizieR}.
\section*{Acknowledgments}
We are very grateful to the anonymous referee for the helpful comments.
I.I.Z. acknowledges the support by the Russian Science Foundation (grant No. 17-12-01256).
The research work at Physical Research Laboratory is funded by the Department of Space, Government of India.
DKO acknowledges the support of the Department of Atomic Energy, Government of India, under project Identification No. RTI 4002.
This work is based in part on observations made with the {\it Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
This paper makes use of the following archival ALMA data: ADS/JAO.ALMA\#2017.1.00983.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\bibliographystyle{mnras}